2025-03-14 §
21:36 <Reedy> deployed https://gerrit.wikimedia.org/r/1127982 [releng]
21:25 <zabe> zabe@mwmaint2002:~$ mwscript extensions/WikimediaMaintenance/migrateESRefToContentTableStage2.php testwiki --delete /home/zabe/afl_text_table_deletedump/testwiki --sleep 0.3 # T381599 [production]
21:06 <zabe> zabe@mwmaint2002:~$ mwscript extensions/AbuseFilter/maintenance/MigrateESRefToAflTable.php testwiki --dump /home/zabe/afl_text_table_dump/testwiki --deletedump /home/zabe/afl_text_table_deletedump/testwiki --sleep 0.3 # T381599 [production]
16:55 <Lucas_WMDE> manually killed job https://integration.wikimedia.org/ci/job/wmf-quibble-selenium-php81/2928/console which had been stuck since 16:33 UTC, blocking gate-and-submit :( [releng]
16:51 <root@deploy2002> helmfile [aux-k8s-codfw] DONE helmfile.d/admin 'sync'. [production]
16:51 <root@deploy2002> helmfile [aux-k8s-codfw] START helmfile.d/admin 'sync'. [production]
16:15 <slyngshede@cumin1002> DONE (PASS) - Cookbook sre.idm.logout (exit_code=0) Logging Vivian Rook out of all services on: 2288 hosts [production]
16:14 <slyngshede@cumin1002> DONE (FAIL) - Cookbook sre.idm.logout (exit_code=99) Logging Vivian Rook out of all services on: 2288 hosts [production]
16:05 <sukhe@cumin1002> END (FAIL) - Cookbook sre.loadbalancer.admin (exit_code=1) pooling A:liberica-canary [production]
16:05 <sukhe@cumin1002> START - Cookbook sre.loadbalancer.admin pooling A:liberica-canary [production]
16:04 <sukhe@cumin1002> END (PASS) - Cookbook sre.loadbalancer.admin (exit_code=0) depooling A:liberica-canary [production]
16:04 <sukhe@cumin1002> START - Cookbook sre.loadbalancer.admin depooling A:liberica-canary [production]
16:04 <sukhe@cumin1002> END (FAIL) - Cookbook sre.loadbalancer.admin (exit_code=1) pooling A:liberica-canary [production]
16:03 <sukhe@cumin1002> START - Cookbook sre.loadbalancer.admin pooling A:liberica-canary [production]
16:01 <sukhe@cumin1002> END (FAIL) - Cookbook sre.loadbalancer.admin (exit_code=1) pooling A:liberica-canary [production]
16:00 <sukhe@cumin1002> START - Cookbook sre.loadbalancer.admin pooling A:liberica-canary [production]
16:00 <sukhe@cumin1002> END (PASS) - Cookbook sre.loadbalancer.admin (exit_code=0) depooling A:liberica-canary [production]
16:00 <sukhe@cumin1002> START - Cookbook sre.loadbalancer.admin depooling A:liberica-canary [production]
15:50 <slyngshede@cumin1002> DONE (FAIL) - Cookbook sre.idm.logout (exit_code=99) Logging Vivian Rook out of all services on: 2288 hosts [production]
15:41 <jhancock@cumin2002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host ms-be2075.codfw.wmnet with OS bullseye [production]
15:41 <jhancock@cumin2002> START - Cookbook sre.hosts.reimage for host ms-be2075.codfw.wmnet with OS bullseye [production]
15:37 <jhancock@cumin2002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host ms-be2075.codfw.wmnet with OS bullseye [production]
15:20 <root@deploy2002> helmfile [aux-k8s-codfw] DONE helmfile.d/admin 'sync'. [production]
15:20 <root@deploy2002> helmfile [aux-k8s-codfw] START helmfile.d/admin 'sync'. [production]
15:20 <root@deploy2002> helmfile [aux-k8s-codfw] DONE helmfile.d/admin 'sync'. [production]
15:20 <root@deploy2002> helmfile [aux-k8s-codfw] START helmfile.d/admin 'sync'. [production]
15:19 <root@deploy2002> helmfile [aux-k8s-codfw] DONE helmfile.d/admin 'sync'. [production]
15:19 <root@deploy2002> helmfile [aux-k8s-codfw] START helmfile.d/admin 'sync'. [production]
15:19 <root@deploy2002> helmfile [aux-k8s-codfw] DONE helmfile.d/admin 'sync'. [production]
15:18 <root@deploy2002> helmfile [aux-k8s-codfw] START helmfile.d/admin 'sync'. [production]
14:55 <herron> kafka-logging reduce mediawiki.httpd.accesslog topic retention from 172800000ms (2d) to 129600000ms (1.5d) [production]
13:36 <volans> installed cumin v5.1.1 on cloudcumin* hosts [admin]
13:33 <kevinbazira@deploy2002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'article-models' for release 'main' . [production]
13:14 <pt1979@cumin1002> END (FAIL) - Cookbook sre.hosts.dhcp (exit_code=99) for host cloudgw1003.eqiad.wmnet [production]
13:13 <volans> installed cumin v5.1.1 on cloudcumin* and cuminunpriv* hosts [production]
12:03 <hnowlan@cumin2002> END (PASS) - Cookbook sre.switchdc.mediawiki.08-start-maintenance (exit_code=0) for datacenter switchover from eqiad to codfw [production]
12:02 <root@deploy2002> helmfile [codfw] DONE helmfile.d/services/mw-cron: apply [production]
12:02 <root@deploy2002> helmfile [codfw] START helmfile.d/services/mw-cron: apply [production]
12:02 <root@deploy2002> helmfile [eqiad] DONE helmfile.d/services/mw-cron: apply [production]
12:02 <root@deploy2002> helmfile [eqiad] START helmfile.d/services/mw-cron: apply [production]
12:00 <hnowlan@cumin2002> START - Cookbook sre.switchdc.mediawiki.08-start-maintenance for datacenter switchover from eqiad to codfw [production]
11:52 <hnowlan@cumin2002> END (PASS) - Cookbook sre.switchdc.mediawiki.01-stop-maintenance (exit_code=0) for datacenter switchover from eqiad to codfw [production]
11:52 <hnowlan@cumin2002> START - Cookbook sre.switchdc.mediawiki.01-stop-maintenance for datacenter switchover from eqiad to codfw [production]
11:40 <hnowlan@cumin2002> END (FAIL) - Cookbook sre.switchdc.mediawiki.01-stop-maintenance (exit_code=99) for datacenter switchover from eqiad to codfw [production]
11:40 <hnowlan@cumin2002> START - Cookbook sre.switchdc.mediawiki.01-stop-maintenance for datacenter switchover from eqiad to codfw [production]
11:36 <volans> uploaded cumin_5.1.1 to apt.wikimedia.org bullseye-wikimedia [production]
11:18 <wmbot~superpes@tools-bastion-13> restarted StewardBot, which was stuck on IRC [tools.stewardbots]
11:13 <stevemunene@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3 days, 0:00:00 on an-worker1199.eqiad.wmnet with reason: Adding the hosts to the analytics hadoop cluster in batches. this is part of the next batch [production]
11:13 <stevemunene@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3 days, 0:00:00 on 9 hosts with reason: Adding the hosts to the analytics hadoop cluster in batches. this is part of the next batch [production]
10:58 <godog> set 80GB (per 6x partition ~500GB) retention for udp_localhost-err topic in kafka-logging eqiad [production]