2023-11-09 §
11:47 <btullis> restarting yarn-nodemanager service on an-worker1100.eqiad.wmnet as a canary for T344910 [analytics]
11:14 <btullis> deploying multiple spark shufflers to production for T344910 [analytics]
09:53 <btullis> executed `helmfile -e eqiad --state-values-set roll_restart=1 sync` to roll-restart datahub in eqiad [analytics]
09:43 <btullis> executed `helmfile -e codfw --state-values-set roll_restart=1 sync` to roll-restart datahub in codfw [analytics]
2023-11-08 §
15:52 <stevemunene> Add analytics-wmde service user to the Yarn production queue T340648 [analytics]
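A quick way to confirm a queue ACL change like the one above is to list the queue ACLs as the service user; this is an illustrative sketch, not a command taken from this log, and running it via kerberos-run-command (as in the entries further down) is an assumption:
  sudo kerberos-run-command analytics-wmde mapred queue -showacls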
13:55 <btullis> beginning rolling restart of all hadoop workers in production, to pick up new puppet 7 CA settings. [analytics]
10:33 <btullis> restarting hadoop-hdfs-datanode.service and hadoop-yarn-nodemanager.service on an-worker1111 to pick up puppet7 changes. [analytics]
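A fleet-wide rolling restart like the one above is typically driven host by host; a minimal cumin sketch in the style of the kafka-jumbo entry further down this log (the 'A:hadoop-worker' alias and the batch sizing are assumptions, not what was actually run):
  sudo cumin --batch-size 2 --batch-sleep 120 'A:hadoop-worker' 'systemctl restart hadoop-hdfs-datanode.service hadoop-yarn-nodemanager.service'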
10:27 <brouberol> running scap deploy for airflow-dags/analytics [analytics]
2023-11-07 §
20:48 <xcollazo> Ran 'kerberos-run-command hdfs hdfs dfs -chmod -R g+w /wmf/data/wmf_dumps/wikitext_raw_rc2' to ease experimentation on this release candidate table. [analytics]
15:52 <btullis> restart airflow-scheduler and airflow-webserver services on an-test-client1002 [analytics]
15:50 <btullis> restart mariadb service on an-test-coord1001 [analytics]
15:49 <btullis> restart presto-server service on an-test-coord1001 and an-test-presto1001 to pick up new puppet 7 CA settings [analytics]
15:48 <btullis> restart hive-server2 and hive-metastore services on an-test-coord1001 to pick up new puppet 7 CA settings. [analytics]
15:35 <btullis> roll-restarting hadoop workers in test, to test new puppet 7 CA settings. [analytics]
14:52 <btullis> roll-restarting hadoop masters on the test cluster, after upgrading to puppet 7 [analytics]
12:05 <btullis> deploying datahub to prod for the pki certificates. [analytics]
11:36 <btullis> deploying datahub to staging to start using pki certificates - https://gerrit.wikimedia.org/r/c/operations/deployment-charts/+/969345/ [analytics]
10:40 <btullis> re-running the kafka_jumbo_ingestion in analytics airflow [analytics]
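Re-running a DAG such as kafka_jumbo_ingestion usually means clearing the affected runs so the scheduler picks them up again; a minimal sketch with the stock Airflow CLI (the date range is a placeholder, and the same can be done from the web UI):
  airflow tasks clear kafka_jumbo_ingestion -s 2023-11-06 -e 2023-11-07 --yes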
2023-11-06 §
18:38 <milimetric> deployed refinery-source, starting to deploy analytics airflow dags [analytics]
13:57 <stevemunene> roll-restart druid public workers to pick up a new zookeeper node druid1009. T336042 [analytics]
13:32 <stevemunene> restart zookeeper leader to pick up new host druid1009 T336042 [analytics]
13:25 <stevemunene> stop and disable zookeeper on druid1004 T336042 [analytics]
13:19 <stevemunene> disable puppet on druid1004 and druid10[09-11] to onboard new host druid1009 to the `druid-public-eqiad` ZooKeeper cluster [analytics]
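When reshuffling ZooKeeper members like this, it helps to confirm which node is currently the leader before stopping anything; a hedged sketch using ZooKeeper's four-letter-word interface (the standard client port 2181 and an enabled 'srvr' command are assumptions about this cluster's config):
  echo srvr | nc druid1004.eqiad.wmnet 2181 | grep Mode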
2023-11-01 §
15:58 <stevemunene> powercycle stat1008, host is frozen/stuck in an unresponsive state [analytics]
2023-10-31 §
09:26 <brouberol> I replaced the self-signed skein certificate with one issued by our cfssl PKI on an-test1002 - T329398 [analytics]
2023-10-26 §
16:18 <stevemunene> roll-restart druid public workers to pick up new zookeeper hosts. T336042 [analytics]
15:29 <stevemunene> stop zookeeper on druid1005, the current leader for `druid-public-eqiad`; this will trigger the election of a new leader T336042 [analytics]
10:18 <stevemunene> restart zookeeper leader to pick up new host druid1011 T336042 [analytics]
09:18 <stevemunene> stop zookeeper on druid1006 T336042 [analytics]
08:48 <brouberol> sudo cookbook sre.hosts.reimage --os bullseye -t T348495 kafka-jumbo1009 [analytics]
08:06 <brouberol> sudo cookbook sre.hosts.reimage --os bullseye -t T348495 kafka-jumbo1008 [analytics]
2023-10-24 §
16:46 <xcollazo> Deploying latest DAGs to analytics Airflow instance [analytics]
12:41 <joal> Drop wmf.referrer_daily hive table and data [analytics]
10:07 <btullis> transferring snapshot s2.2023-10-23--01-34-18 from dbprov1004 to dbstore1007:/srv/sqldata.s2 [analytics]
10:02 <btullis> stopping and deleting s2 on dbstore1007. [analytics]
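For context, "stopping and deleting s2" on a multi-instance dbstore host amounts to stopping that section's mariadb instance and removing its datadir; a minimal sketch (the mariadb@s2 unit name is an assumption, only the /srv/sqldata.s2 path comes from the entry above):
  sudo systemctl stop mariadb@s2.service
  sudo rm -rf /srv/sqldata.s2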
2023-10-23 §
10:14 <brouberol> sudo cookbook sre.hosts.decommission -t T336044 kafka-jumbo1001.eqiad.wmnet [analytics]
10:11 <btullis> deploying multiple spark shufflers to the test cluster for T344910 [analytics]
09:58 <brouberol> sudo cookbook sre.hosts.decommission -t T336044 kafka-jumbo1002.eqiad.wmnet [analytics]
09:47 <btullis> restarting krb5-kdc.service and krb5-admin-server.service on krb1001 and re-enabling puppet for T346135 [analytics]
09:10 <btullis> root@krb1001:~# systemctl stop krb5-kdc.service krb5-admin-server.service [analytics]
09:09 <btullis> disabling puppet on krb1001 for T346135 [analytics]
08:53 <brouberol> sudo cookbook sre.hosts.decommission -t T336044 kafka-jumbo1004.eqiad.wmnet [analytics]
08:28 <brouberol> sudo cookbook sre.hosts.decommission -t T336044 kafka-jumbo1005.eqiad.wmnet - T336044 [analytics]
2023-10-19 §
19:58 <xcollazo> ran "sudo -u hdfs hdfs dfs -cp /user/xcollazo/artifacts/spark-3.3.2-assembly.zip /user/spark/share/lib/" and "sudo -u hdfs hdfs dfs -chmod o+r /user/spark/share/lib/spark-3.3.2-assembly.zip" to make the Spark 3.3.2 assembly available to other folks. [analytics]
19:54 <xcollazo> ran "sudo -u hdfs hdfs dfs -rm /user/spark/share/lib/spark-3.1.2-assembly.jar.backup" to remove old spark assembly backup from May 25 2023. [analytics]
19:52 <xcollazo> ran "sudo -u hdfs hdfs dfs -rm /user/spark/share/lib/spark-3.1.2-assembly.jar.bak" to remove old spark assembly backup from Jun 13 2023. [analytics]
15:22 <brouberol> The kafka service has been stopped on kafka-jumbo100[1-6] - T336044 [analytics]
15:04 <brouberol> sudo cumin --batch-size 1 --batch-sleep 60 'kafka-jumbo100[1-6].eqiad.wmnet' 'sudo systemctl stop kafka.service' - T336044 [analytics]
15:02 <brouberol> disabling puppet on kafka-jumbo100[1-6] to make sure kafka isn't restarted - T336044 [analytics]