2025-01-09
ยง
|
10:27 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling pc4', diff saved to https://phabricator.wikimedia.org/P71926 and previous config saved to /var/cache/conftool/dbconfig/20250109-102700-ladsgroup.json [production]
10:25 <marostegui@cumin1002> dbctl commit (dc=all): 'db2131 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P71925 and previous config saved to /var/cache/conftool/dbconfig/20250109-102538-root.json [production]
10:25 <marostegui@cumin1002> dbctl commit (dc=all): 'db2226 (re)pooling @ 10%: Repooling after moving sanitarium', diff saved to https://phabricator.wikimedia.org/P71924 and previous config saved to /var/cache/conftool/dbconfig/20250109-102522-root.json [production]
10:25 <marostegui@cumin1002> dbctl commit (dc=all): 'db2126 (re)pooling @ 10%: Repooling after moving sanitarium', diff saved to https://phabricator.wikimedia.org/P71923 and previous config saved to /var/cache/conftool/dbconfig/20250109-102512-root.json [production]
10:22 <marostegui@cumin1002> dbctl commit (dc=all): 'db2231 (re)pooling @ 3%: Repooling for the first time', diff saved to https://phabricator.wikimedia.org/P71922 and previous config saved to /var/cache/conftool/dbconfig/20250109-102218-root.json [production]
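(Editor's note: the db2231 entries here and further down, 1% → 2% → 3% at roughly 15-minute intervals following the 08:57 sre.mysql.clone from db2131, show the usual gradual repool pattern: pool the instance at a small percentage, observe, and step the weight up towards 100%. The sketch below is a minimal illustration of that loop, assuming dbctl's `instance … pool -p` and `config commit -m` subcommands; the step list and wait interval are assumptions, not the operators' actual values.)

```python
import subprocess
import time

# Illustrative repool steps and interval; the log shows 1%, 2%, 3%, 10%, 25%,
# 50%, 75%, 100% applied over ~15-minute intervals, but these are assumptions.
STEPS = [1, 2, 3, 10, 25, 50, 75, 100]
WAIT_SECONDS = 15 * 60


def repool(instance: str, reason: str) -> None:
    """Gradually repool a database instance via dbctl (sketch, not the real tooling)."""
    for pct in STEPS:
        # Set the pooled percentage for the instance (flag name is an assumption).
        subprocess.run(["dbctl", "instance", instance, "pool", "-p", str(pct)], check=True)
        # Commit the change; dbctl saves the diff and previous config, as the log entries show.
        subprocess.run(
            ["dbctl", "config", "commit", "-m", f"{instance} (re)pooling @ {pct}%: {reason}"],
            check=True,
        )
        if pct < 100:
            time.sleep(WAIT_SECONDS)


if __name__ == "__main__":
    repool("db2231", "Repooling for the first time")
```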
10:21 <marostegui> Move db2187:3312 under db2226 s2 codfw dbmaint T373579 [production]
10:20 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db[2126,2187,2226].codfw.wmnet with reason: maintenance [production]
10:20 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db[2126,2187,2226].codfw.wmnet with reason: maintenance [production]
10:20 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db2126 db2226 T373579', diff saved to https://phabricator.wikimedia.org/P71921 and previous config saved to /var/cache/conftool/dbconfig/20250109-102010-marostegui.json [production]
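(Editor's note: read bottom-up, the four entries above are the standard pre-maintenance sequence for T373579: depool the affected instances with dbctl, downtime the hosts with the sre.hosts.downtime cookbook, then perform the topology change of moving db2187:3312 under db2226. A rough sketch of that ordering follows; the cookbook flag names are assumptions inferred from the "for 2:00:00 … with reason: maintenance" log text.)

```python
import subprocess

TASK = "T373579"
HOSTS = "db[2126,2187,2226].codfw.wmnet"

# 1. Depool the instances that will be touched, committed in one dbctl change as above.
for instance in ("db2126", "db2226"):
    subprocess.run(["dbctl", "instance", instance, "depool"], check=True)
subprocess.run(
    ["dbctl", "config", "commit", "-m", f"Depool db2126 db2226 {TASK}"], check=True
)

# 2. Downtime the hosts for the duration of the maintenance.
#    '--hours' and '--reason' are assumed flag names; the log only records the
#    duration and reason, not the exact invocation.
subprocess.run(
    ["sudo", "cookbook", "sre.hosts.downtime", "--hours", "2", "--reason", "maintenance", HOSTS],
    check=True,
)

# 3. Perform the actual change (re-parenting db2187:3312 under db2226),
#    then repool gradually as in the sketch further up.
```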
10:12 <jelto@cumin1002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) depool for host kubernetes[2053,2056,2058].codfw.wmnet [production]
10:11 <jelto@cumin1002> START - Cookbook sre.k8s.pool-depool-node depool for host kubernetes[2053,2056,2058].codfw.wmnet [production]
10:10 <marostegui@cumin1002> dbctl commit (dc=all): 'db2131 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P71920 and previous config saved to /var/cache/conftool/dbconfig/20250109-101033-root.json [production]
10:09 <klausman@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ml-serve2001.codfw.wmnet [production]
10:07 <marostegui@cumin1002> dbctl commit (dc=all): 'db2231 (re)pooling @ 2%: Repooling for the first time', diff saved to https://phabricator.wikimedia.org/P71919 and previous config saved to /var/cache/conftool/dbconfig/20250109-100712-root.json [production]
10:06 <jynus@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on dbprov2004.codfw.wmnet with reason: os upgrade [production]
10:05 <jynus@cumin2002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on dbprov2004.codfw.wmnet with reason: os upgrade [production]
09:58 <moritzm> installing glibc bugfix updates for Bookworm [production]
09:55 <klausman@cumin2002> START - Cookbook sre.hosts.reboot-single for host ml-serve2001.codfw.wmnet [production]
09:55 <marostegui@cumin1002> dbctl commit (dc=all): 'db2131 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P71918 and previous config saved to /var/cache/conftool/dbconfig/20250109-095527-root.json [production]
09:55 <klausman@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ml-serve2001.codfw.wmnet [production]
09:52 <marostegui@cumin1002> dbctl commit (dc=all): 'db2231 (re)pooling @ 1%: Repooling for the first time', diff saved to https://phabricator.wikimedia.org/P71917 and previous config saved to /var/cache/conftool/dbconfig/20250109-095207-root.json [production]
09:43 <marostegui@cumin1002> dbctl commit (dc=all): 'es1042 (re)pooling @ 100%: Repooling for the first time', diff saved to https://phabricator.wikimedia.org/P71916 and previous config saved to /var/cache/conftool/dbconfig/20250109-094355-root.json [production]
09:40 <marostegui@cumin1002> dbctl commit (dc=all): 'db2131 (re)pooling @ 10%: Repooling', diff saved to https://phabricator.wikimedia.org/P71915 and previous config saved to /var/cache/conftool/dbconfig/20250109-094022-root.json [production]
09:28 <marostegui@cumin1002> dbctl commit (dc=all): 'es1042 (re)pooling @ 75%: Repooling for the first time', diff saved to https://phabricator.wikimedia.org/P71914 and previous config saved to /var/cache/conftool/dbconfig/20250109-092850-root.json [production]
09:23 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.cdn.roll-upgrade-haproxy (exit_code=0) rolling upgrade of HAProxy on P{cp4044.ulsfo.wmnet,cp4052.ulsfo.wmnet} and A:cp [production]
09:21 <klausman@cumin2002> START - Cookbook sre.hosts.reboot-single for host ml-serve2001.codfw.wmnet [production]
09:20 <jelto@cumin1002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) pool for host wikikube-worker2013.codfw.wmnet [production]
09:19 <jelto@cumin1002> START - Cookbook sre.k8s.pool-depool-node pool for host wikikube-worker2013.codfw.wmnet [production]
09:19 <jelto@cumin1002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) pool for host wikikube-worker2012.codfw.wmnet [production]
09:19 <jelto@cumin1002> START - Cookbook sre.k8s.pool-depool-node pool for host wikikube-worker2012.codfw.wmnet [production]
09:19 <jelto@cumin1002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) pool for host wikikube-worker2011.codfw.wmnet [production]
09:19 <jelto@cumin1002> START - Cookbook sre.k8s.pool-depool-node pool for host wikikube-worker2011.codfw.wmnet [production]
09:19 <jelto@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host wikikube-worker2013.codfw.wmnet with OS bookworm [production]
09:18 <vgutierrez@cumin1002> START - Cookbook sre.cdn.roll-upgrade-haproxy rolling upgrade of HAProxy on P{cp4044.ulsfo.wmnet,cp4052.ulsfo.wmnet} and A:cp [production]
09:17 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.cdn.roll-upgrade-haproxy (exit_code=0) rolling upgrade of HAProxy on P{cp4044.ulsfo.wmnet,cp4052.ulsfo.wmnet} and A:cp [production]
09:15 <jelto@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host wikikube-worker2012.codfw.wmnet with OS bookworm [production]
09:13 <marostegui@cumin1002> dbctl commit (dc=all): 'es1042 (re)pooling @ 50%: Repooling for the first time', diff saved to https://phabricator.wikimedia.org/P71913 and previous config saved to /var/cache/conftool/dbconfig/20250109-091345-root.json [production]
09:13 <vgutierrez@cumin1002> START - Cookbook sre.cdn.roll-upgrade-haproxy rolling upgrade of HAProxy on P{cp4044.ulsfo.wmnet,cp4052.ulsfo.wmnet} and A:cp [production]
09:12 <jelto@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host wikikube-worker2011.codfw.wmnet with OS bookworm [production]
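(Editor's note: combined with the "host reimage" downtimes further down (08:49–08:59), which appear to be set by the reimage run itself, the wikikube-worker201[1-3] entries trace the usual node refresh flow: reimage to Debian bookworm, then pool the node back into Kubernetes with sre.k8s.pool-depool-node. The sketch below condenses that per-host sequence; the cookbook argument forms are assumptions inferred from the log text, not verified invocations.)

```python
import subprocess

NODES = [f"wikikube-worker20{n}.codfw.wmnet" for n in (11, 12, 13)]

for node in NODES:
    # Reimage the worker to bookworm; '--os bookworm' mirrors "with OS bookworm"
    # in the log, but the exact flag name is an assumption.
    subprocess.run(
        ["sudo", "cookbook", "sre.hosts.reimage", "--os", "bookworm", node], check=True
    )

    # Once the node is healthy again, pool it back into the Kubernetes cluster.
    # The positional 'pool <host>' form mirrors "pool for host ..." above; also an assumption.
    subprocess.run(
        ["sudo", "cookbook", "sre.k8s.pool-depool-node", "pool", node], check=True
    )
```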
09:02 <vgutierrez> update to haproxy 2.8.13 on component thirdparty/haproxy28 bullseye-wikimedia (apt.wm.o) - T383111 [production]
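(Editor's note: the order here matters: haproxy 2.8.13 is first published to the thirdparty/haproxy28 component on apt.wikimedia.org (T383111), and only afterwards rolled out to the cache hosts with the sre.cdn.roll-upgrade-haproxy runs logged above. The target expression `P{cp4044.ulsfo.wmnet,cp4052.ulsfo.wmnet} and A:cp` is Cumin query syntax: an explicit host list intersected with the `cp` alias. The sketch below only illustrates verifying the result with Cumin; it is not part of the logged procedure.)

```python
import subprocess

# Cumin query copied from the cookbook log lines above: the two ulsfo cache
# hosts, restricted to members of the 'cp' alias.
QUERY = "P{cp4044.ulsfo.wmnet,cp4052.ulsfo.wmnet} and A:cp"

# Ask the targeted hosts which haproxy version they ended up with after the roll.
# 'sudo cumin <query> <command>' is the CLI's basic form; batching and success
# thresholds are omitted in this sketch.
subprocess.run(["sudo", "cumin", QUERY, "haproxy -v"], check=True)
```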
08:59 <jelto@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wikikube-worker2013.codfw.wmnet with reason: host reimage [production]
08:58 <marostegui@cumin1002> dbctl commit (dc=all): 'es1042 (re)pooling @ 25%: Repooling for the first time', diff saved to https://phabricator.wikimedia.org/P71912 and previous config saved to /var/cache/conftool/dbconfig/20250109-085840-root.json [production]
08:57 <root@cumin1002> END (PASS) - Cookbook sre.mysql.clone (exit_code=0) of db2131.codfw.wmnet onto db2231.codfw.wmnet [production]
08:55 <jelto@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wikikube-worker2012.codfw.wmnet with reason: host reimage [production]
08:52 <jelto@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wikikube-worker2011.codfw.wmnet with reason: host reimage [production]
08:49 <jelto@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on wikikube-worker2013.codfw.wmnet with reason: host reimage [production]
08:49 <jelto@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on wikikube-worker2012.codfw.wmnet with reason: host reimage [production]
08:49 <jelto@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on wikikube-worker2011.codfw.wmnet with reason: host reimage [production]
08:43 <marostegui@cumin1002> dbctl commit (dc=all): 'es1042 (re)pooling @ 10%: Repooling for the first time', diff saved to https://phabricator.wikimedia.org/P71911 and previous config saved to /var/cache/conftool/dbconfig/20250109-084335-root.json [production]
08:33 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on es1043.eqiad.wmnet with reason: cloning [production]