2025-04-29
09:11 <taavi@cloudcumin1001> START - Cookbook wmcs.vps.migrate_floating_ip for address 185.15.56.55 to server 'maps-proxy-6' [project-proxy]
09:11 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.loadbalancer.admin (exit_code=0) depooling P{lvs7001.magru.wmnet} and A:liberica [production]
09:10 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.admin depooling P{lvs7001.magru.wmnet} and A:liberica [production]
08:52 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host lvs7002.magru.wmnet [production]
08:48 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti2050.codfw.wmnet [production]
08:45 <vgutierrez@cumin1002> START - Cookbook sre.hosts.reboot-single for host lvs7002.magru.wmnet [production]
08:43 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti2050.codfw.wmnet [production]
08:42 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti2049.codfw.wmnet [production]
08:41 <marostegui@cumin1002> dbctl commit (dc=all): 'es2033 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P75601 and previous config saved to /var/cache/conftool/dbconfig/20250429-084116-root.json [production]
08:39 <godog> bounce prometheus-statsd-exporter on stat1011 - T389344 [production]
08:38 <marostegui@cumin1002> dbctl commit (dc=all): 'es1033 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P75600 and previous config saved to /var/cache/conftool/dbconfig/20250429-083855-root.json [production]
08:37 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.loadbalancer.admin (exit_code=0) depooling P{lvs7002.magru.wmnet} and A:liberica [production]
08:36 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.admin depooling P{lvs7002.magru.wmnet} and A:liberica [production]
08:36 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti2049.codfw.wmnet [production]
08:31 <taavi@cloudcumin1001> END (PASS) - Cookbook wmcs.vps.migrate_floating_ip (exit_code=0) for address 185.15.56.55 to server 'maps-proxy-5' [project-proxy]
08:31 <taavi@cloudcumin1001> START - Cookbook wmcs.vps.migrate_floating_ip for address 185.15.56.55 to server 'maps-proxy-5' [project-proxy]
08:30 <marostegui@cumin1002> START - Cookbook sre.mysql.pool db1188 slowly with 10 steps - Pool db1188.eqiad.wmnet in after cloning [production]
08:28 <moritzm> installing wget security updates [production]
08:26 <marostegui@cumin1002> dbctl commit (dc=all): 'es2033 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P75598 and previous config saved to /var/cache/conftool/dbconfig/20250429-082611-root.json [production]
08:25 <fabfur> rolling restart haproxykafka on A:cp to apply new configuration https://gerrit.wikimedia.org/r/c/operations/puppet/+/1136679 (T382571) [production]
08:24 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on 8 hosts with reason: Maintenance [production]
08:23 <marostegui@cumin1002> dbctl commit (dc=all): 'es1033 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P75597 and previous config saved to /var/cache/conftool/dbconfig/20250429-082349-root.json [production]
08:22 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] DONE helmfile.d/admin 'apply'. [production]
08:21 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] START helmfile.d/admin 'apply'. [production]
08:19 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.drain-node (exit_code=99) for draining ganeti node ganeti2034.codfw.wmnet [production]
08:19 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti2034.codfw.wmnet [production]
08:19 <klausman@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ml-lab1001.eqiad.wmnet [production]
08:17 <hashar@deploy1003> rebuilt and synchronized wikiversions files: group0 to 1.44.0-wmf.27 refs T386222 [production]
08:16 <taavi@cloudcumin1001> END (PASS) - Cookbook wmcs.openstack.tofu (exit_code=0) running tofu plan+apply for main branch [admin]
08:15 <taavi@cloudcumin1001> START - Cookbook wmcs.openstack.tofu running tofu plan+apply for main branch [admin]
08:12 <klausman@cumin1002> START - Cookbook sre.hosts.reboot-single for host ml-lab1001.eqiad.wmnet [production]
08:11 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti7004.magru.wmnet [production]
08:11 <marostegui@cumin1002> dbctl commit (dc=all): 'es2033 (re)pooling @ 60%: Repooling', diff saved to https://phabricator.wikimedia.org/P75596 and previous config saved to /var/cache/conftool/dbconfig/20250429-081106-root.json [production]
08:11 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti7004.magru.wmnet [production]
08:08 <marostegui@cumin1002> dbctl commit (dc=all): 'es1033 (re)pooling @ 60%: Repooling', diff saved to https://phabricator.wikimedia.org/P75595 and previous config saved to /var/cache/conftool/dbconfig/20250429-080844-root.json [production]
08:03 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti7004.magru.wmnet [production]
07:57 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti7004.magru.wmnet [production]
07:56 <suzannewoodWMDE2> suzannewood@mwmaint1002:~$ foreachwikiindblist wikidataclient extensions/Wikibase/lib/maintenance/populateSitesTable.php --force-protocol https [production]
07:56 <marostegui@cumin1002> dbctl commit (dc=all): 'es2033 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P75594 and previous config saved to /var/cache/conftool/dbconfig/20250429-075600-root.json [production]
07:53 <marostegui@cumin1002> dbctl commit (dc=all): 'es1033 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P75593 and previous config saved to /var/cache/conftool/dbconfig/20250429-075339-root.json [production]
07:53 <moritzm> copied cadvisor 0.44.0+ds1-1~wmf1 from bookworm-wikimedia to trixie-wikimedia T391083 [production]
07:50 <moritzm> copied wmf-certificates 1~20230906-1 from bookworm-wikimedia to trixie-wikimedia T391083 [production]
07:46 <taavi@cloudcumin1001> END (PASS) - Cookbook wmcs.openstack.tofu (exit_code=0) running tofu plan for https://gitlab.wikimedia.org/repos/cloud/cloud-vps/tofu-infra/-/merge_requests/223 [admin]
07:45 <taavi@cloudcumin1001> START - Cookbook wmcs.openstack.tofu running tofu plan for https://gitlab.wikimedia.org/repos/cloud/cloud-vps/tofu-infra/-/merge_requests/223 [admin]
07:44 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti7001.magru.wmnet [production]
07:44 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti7001.magru.wmnet [production]
07:43 <taavi@cloudcumin1001> END (FAIL) - Cookbook wmcs.openstack.tofu (exit_code=99) running tofu plan+apply for main branch [admin]
07:43 <taavi@cloudcumin1001> START - Cookbook wmcs.openstack.tofu running tofu plan+apply for main branch [admin]
07:42 <taavi@cloudcumin1001> END (FAIL) - Cookbook wmcs.openstack.tofu (exit_code=99) running tofu plan+apply for main branch [admin]
07:41 <taavi@cloudcumin1001> START - Cookbook wmcs.openstack.tofu running tofu plan+apply for main branch [admin]