2025-05-09
14:30 <fab@deploy1003> Finished deploy [airflow-dags/research@e3ccac9]: (no justification provided) (duration: 00m 38s) [production]
14:29 <fab@deploy1003> Started deploy [airflow-dags/research@e3ccac9]: (no justification provided) [production]
14:25 <fab@deploy1003> Finished deploy [airflow-dags/research@e3ccac9]: (no justification provided) (duration: 00m 31s) [production]
14:24 <fab@deploy1003> Started deploy [airflow-dags/research@e3ccac9]: (no justification provided) [production]
14:21 <lucaswerkmeister-wmde@deploy1003> Finished scap sync-world: Backport for [[gerrit:1143835|Bump wikibase-data-values-value-view to HEAD (T389633 T393641)]] (duration: 14m 12s) [production]
14:15 <lucaswerkmeister-wmde@deploy1003> lucaswerkmeister-wmde: Continuing with sync [production]
14:14 <lucaswerkmeister-wmde@deploy1003> lucaswerkmeister-wmde: Backport for [[gerrit:1143835|Bump wikibase-data-values-value-view to HEAD (T389633 T393641)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
14:07 <lucaswerkmeister-wmde@deploy1003> Started scap sync-world: Backport for [[gerrit:1143835|Bump wikibase-data-values-value-view to HEAD (T389633 T393641)]] [production]
13:36 <fab@deploy1003> Finished deploy [airflow-dags/research@e3ccac9]: (no justification provided) (duration: 04m 10s) [production]
13:32 <fab@deploy1003> Started deploy [airflow-dags/research@e3ccac9]: (no justification provided) [production]
12:51 <godog> upload prometheus-blackbox-exporter 0.26.0-0~bpo12+1 to bookworm-wikimedia - T385022 [production]
12:31 <arturo> hard-reboot tools-bastion-13 (login.toolforge.org) because unresponsive (out of memory) -- previous reboot was for tools-bastion-12 (dev.t.o) by mistake [tools]
12:29 <arturo> hard-reboot tools-bastion-12 (login.toolforge.org) because unresponsive (out of memory) [tools]
11:45 <taavi> update toolforge arc-enabled exim4 packages (component/exim4-arc) to latest in debian 12 T356171 [production]
11:17 <btullis@deploy1003> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/airflow-test-k8s: apply [production]
11:16 <btullis@deploy1003> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/airflow-test-k8s: apply [production]
11:06 <jelto@cumin1002> END (PASS) - Cookbook sre.gitlab.upgrade (exit_code=0) on GitLab host gitlab2002.wikimedia.org with reason: Upgrade Replica to GitLab 17.9 [production]
11:02 <mvernon@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host thanos-fe1005.eqiad.wmnet with OS bullseye [production]
11:02 <mvernon@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - mvernon@cumin1002" [production]
10:58 <mvernon@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - mvernon@cumin1002" [production]
10:40 <mvernon@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on thanos-fe1005.eqiad.wmnet with reason: host reimage [production]
10:37 <mvernon@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on thanos-fe1005.eqiad.wmnet with reason: host reimage [production]
10:20 <mvernon@cumin1002> START - Cookbook sre.hosts.reimage for host thanos-fe1005.eqiad.wmnet with OS bullseye [production]
09:50 <moritzm> imported debmonitor-client 0.4.0-3+deb13u1 for trixie-wikimedia T391083 [production]
09:06 <arturo> bump object storage quota, with: `aborrero@cloudcontrol1006:~$ sudo radosgw-admin quota set --quota-scope=user --uid wikilink\$wikilink --max-size=15GB --max-objects=20000` (T392870) [wikilink]
09:05 <zabe> zabe@deploy1003:~$ mwscript-k8s --comment="T393761" --follow -- extensions/CentralAuth/maintenance/fixStuckGlobalRename.php --wiki=amwiki --logwiki=metawiki 'Jeroen' 'Retireduser-vfs199s31yvbtxsfmygg' [production]
09:03 <zabe> zabe@deploy1003:~$ mwscript-k8s --comment="T393372" --follow -- extensions/CentralAuth/maintenance/fixStuckGlobalRename.php --wiki=enwikibooks --logwiki=metawiki 'Adityaindumdum' 'Renamed user a71c8354dc822ea0d3aab24d1ce886f02c25fe91' [production]
08:47 <taavi> failing over grafana to grafana-2 T393735 [metricsinfra]
08:44 <taavi@cloudcumin1001> END (PASS) - Cookbook wmcs.vps.refresh_puppet_certs (exit_code=0) on metricsinfra-grafana-2.metricsinfra.eqiad1.wikimedia.cloud [metricsinfra]
08:43 <taavi@cloudcumin1001> START - Cookbook wmcs.vps.refresh_puppet_certs on metricsinfra-grafana-2.metricsinfra.eqiad1.wikimedia.cloud [metricsinfra]
08:17 <ryankemper@cumin2002> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) (T388134, bring new main graph hosts into service) xfer wikidata_main from wdqs2010.codfw.wmnet -> wdqs2013.codfw.wmnet w/ force delete existing files, repooling neither afterwards [production]
08:10 <volans@cumin2002> END (PASS) - Cookbook sre.deploy.python-code (exit_code=0) homer to cumin1003.eqiad.wmnet with reason: Release v0.9.0 - volans@cumin2002 [production]
08:09 <volans@cumin2002> START - Cookbook sre.deploy.python-code homer to cumin1003.eqiad.wmnet with reason: Release v0.9.0 - volans@cumin2002 [production]
07:57 <moritzm> imported puppet-agent 7.23.0-1+wmf13u1 to component/puppet7 for trixie-wikimedia T392790 [production]
07:24 <ryankemper@cumin2002> START - Cookbook sre.wdqs.data-transfer (T388134, bring new main graph hosts into service) xfer wikidata_main from wdqs2010.codfw.wmnet -> wdqs2013.codfw.wmnet w/ force delete existing files, repooling neither afterwards [production]
07:23 <ryankemper@cumin2002> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) (T388134, bring new main graph hosts into service) xfer wikidata_main from wdqs2010.codfw.wmnet -> wdqs2012.codfw.wmnet w/ force delete existing files, repooling neither afterwards [production]
07:22 <ryankemper@cumin2002> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) (T388134, bring new main graph hosts into service) xfer wikidata_main from wdqs2007.codfw.wmnet -> wdqs2011.codfw.wmnet w/ force delete existing files, repooling neither afterwards [production]
07:22 <ryankemper@cumin2002> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) (T388134, bring new main graph hosts into service) xfer wikidata_main from wdqs1013.eqiad.wmnet -> wdqs1015.eqiad.wmnet w/ force delete existing files, repooling neither afterwards [production]
07:22 <ryankemper@cumin2002> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) (T388134, bring new main graph hosts into service) xfer wikidata_main from wdqs1012.eqiad.wmnet -> wdqs1014.eqiad.wmnet w/ force delete existing files, repooling neither afterwards [production]
07:15 <jelto@cumin1002> START - Cookbook sre.gitlab.upgrade on GitLab host gitlab2002.wikimedia.org with reason: Upgrade Replica to GitLab 17.9 [production]
07:10 <taavi> kill bunch of unwanted processes off of tools-bastion-13 T393732, please run your things as jobs [tools]
06:27 <ryankemper@cumin2002> START - Cookbook sre.wdqs.data-transfer (T388134, bring new main graph hosts into service) xfer wikidata_main from wdqs1013.eqiad.wmnet -> wdqs1015.eqiad.wmnet w/ force delete existing files, repooling neither afterwards [production]
06:27 <ryankemper@cumin2002> START - Cookbook sre.wdqs.data-transfer (T388134, bring new main graph hosts into service) xfer wikidata_main from wdqs1012.eqiad.wmnet -> wdqs1014.eqiad.wmnet w/ force delete existing files, repooling neither afterwards [production]
06:26 <ryankemper@cumin2002> START - Cookbook sre.wdqs.data-transfer (T388134, bring new main graph hosts into service) xfer wikidata_main from wdqs2010.codfw.wmnet -> wdqs2012.codfw.wmnet w/ force delete existing files, repooling neither afterwards [production]
06:26 <ryankemper@cumin2002> START - Cookbook sre.wdqs.data-transfer (T388134, bring new main graph hosts into service) xfer wikidata_main from wdqs2007.codfw.wmnet -> wdqs2011.codfw.wmnet w/ force delete existing files, repooling neither afterwards [production]
06:26 <ryankemper@cumin2002> END (ERROR) - Cookbook sre.wdqs.data-transfer (exit_code=97) (T388134, bring new main graph hosts into service) xfer wikidata_main from wdqs2007.codfw.wmnet -> wdqs2011.codfw.wmnet w/ force delete existing files, repooling neither afterwards [production]
06:25 <ryankemper@cumin2002> START - Cookbook sre.wdqs.data-transfer (T388134, bring new main graph hosts into service) xfer wikidata_main from wdqs2007.codfw.wmnet -> wdqs2011.codfw.wmnet w/ force delete existing files, repooling neither afterwards [production]
06:10 <ryankemper@cumin2002> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) (T388134, bring new main graph hosts into service) xfer wikidata_main from wdqs2007.codfw.wmnet -> wdqs2010.codfw.wmnet w/ force delete existing files, repooling neither afterwards [production]
06:07 <ryankemper@cumin2002> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) (T388134, bring new main graph hosts into service) xfer wikidata_main from wdqs1012.eqiad.wmnet -> wdqs1013.eqiad.wmnet w/ force delete existing files, repooling neither afterwards [production]
05:30 <jmm@cumin2002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5 days, 0:00:00 on cumin1003.eqiad.wmnet with reason: WIP new Bookworm host [production]