2022-02-09
16:39 <joal> Release refinery-source v0.1.25 to archiva [analytics]
2022-02-08
07:27 <elukey> restart hadoop-yarn-nodemanager on an-worker1115 (container executor hit an unrecoverable exception; the node no longer talks to the YARN RM) [analytics]
2022-02-07
18:43 <ottomata> manually installing airflow_2.1.4-py3.7-2_amd64.deb on an-test-client1001 [analytics]
14:38 <ottomata> merged "Set spark maxPartitionBytes to hadoop dfs block size" - T300299 [analytics]
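(The setting involved is spark.sql.files.maxPartitionBytes; a sketch of the spark-defaults.conf entry, assuming a 256 MiB HDFS block size since the actual value lives in T300299, not this log:
    spark.sql.files.maxPartitionBytes 268435456   # assumed 256 MiB, matching dfs.blocksize
)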
12:17 <btullis> depooled aqs1009 [analytics]
11:59 <btullis> depooled aqs1008 [analytics]
11:41 <btullis> depooled aqs1007 [analytics]
11:03 <btullis> depooled aqs1006 [analytics]
10:22 <btullis> depooling aqs1005 [analytics]
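(A hedged sketch of the depool command behind the entries above; it mirrors the confctl pool command logged on 2022-01-25, with the hostname varying per node:
    sudo -i confctl select name=aqs1005.eqiad.wmnet set/pooled=no
)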
2022-02-04
16:05 <elukey> unmask prometheus-mysqld-exporter.service and clean up the old @analytics + wmf_auto_restart units (service+timer) not used anymore on an-coord100[12] [analytics]
12:55 <joal> Rerun cassandra-daily-wf-local_group_default_T_pageviews_per_article_flat-2022-2-3 [analytics]
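(Reruns like the one above are typically issued with the Oozie CLI; a sketch with hypothetical coordinator ID and action number, and an assumed Oozie URL not taken from this log:
    oozie job -oozie http://an-coord1001.eqiad.wmnet:11000/oozie -rerun 0001234-220101000000000-oozie-oozi-C -action 34 -refresh
)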
07:12 <elukey> `GRANT PROCESS, REPLICATION CLIENT ON *.* TO `prometheus`@`localhost` IDENTIFIED VIA unix_socket WITH MAX_USER_CONNECTIONS 5` on an-test-coord1001 to allow the prometheus exporter to gather metrics [analytics]
07:09 <elukey> cleanup wmf_auto_restart_prometheus-mysqld-exporter@analytics-meta on an-test-coord1001 and unmasked wmf_auto_restart_prometheus-mysqld-exporter (now used) [analytics]
07:03 <elukey> clean up wmf_auto_restart_prometheus-mysqld-exporter@matomo on matomo1002 (not used anymore, listed as failed) [analytics]
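(The unmask/cleanup steps above map onto standard systemd operations; a sketch using the unit names from these entries, though the actual cleanup may also remove unit files via puppet:
    sudo systemctl unmask prometheus-mysqld-exporter.service
    sudo systemctl reset-failed wmf_auto_restart_prometheus-mysqld-exporter@matomo.service   # clears the "listed as failed" state
)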
2022-02-03
19:35 <joal> Rerun virtualpageview-druid-monthly-wf-2022-1 [analytics]
19:32 <btullis> re-running the failed refine_event job as per email. [analytics]
19:27 <joal> Rerun virtualpageview-druid-daily-wf-2022-1-16 [analytics]
19:12 <joal> Kill stuck Druid indexing task (from 2022-01-17T02:31) [analytics]
19:09 <joal> Kill stuck druid-loading Yarn applications (3 HiveToDruid, 2 Oozie launchers) [analytics]
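(A hedged sketch of the kill commands implied by the two entries above; application and task IDs are hypothetical, and the overlord host/port is assumed:
    yarn application -kill application_1640000000000_123456
    curl -X POST http://an-druid1001.eqiad.wmnet:8090/druid/indexer/v1/task/<task_id>/shutdown
)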
10:04 <btullis> pooling the remaining aqs_next nodes. [analytics]
07:01 <elukey> kill leftover processes of decommed user on an-test-client1001 [analytics]
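(Killing a decommissioned user's leftover processes is typically a one-liner; a sketch, username hypothetical:
    sudo pkill -u <username>
)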
2022-02-01
20:05 <btullis> btullis@an-launcher1002:~$ sudo systemctl restart refinery-sqoop-whole-mediawiki.service [analytics]
19:01 <joal> Deploying refinery with scap [analytics]
18:36 <joal> Rerun virtualpageview-druid-daily-wf-2022-1-16 [analytics]
18:34 <joal> rerun webrequest-druid-hourly-wf-2022-2-1-12 [analytics]
17:43 <btullis> btullis@an-launcher1002:~$ sudo systemctl start refinery-sqoop-whole-mediawiki.service [analytics]
17:29 <btullis> about to deploy analytics/refinery [analytics]
12:28 <elukey> kill processes related to offboarded user on stat1006 to unblock puppet [analytics]
11:09 <btullis> btullis@an-test-coord1001:~$ sudo apt-get -f install [analytics]
2022-01-31
14:51 <btullis> btullis@an-launcher1002:~$ sudo systemctl start mediawiki-history-drop-snapshot.service [analytics]
14:03 <btullis> btullis@an-launcher1002:~$ sudo systemctl start mediawiki-history-drop-snapshot.service [analytics]
2022-01-27
08:15 <joal> Rerun failed cassandra-daily-wf-local_group_default_T_pageviews_per_article_flat-2022-1-26 [analytics]
2022-01-26
15:54 <joal> Add new CH-UA fields to wmf_raw.webrequest and wmf.webrequest [analytics]
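(The field addition above is a Hive DDL change; a sketch with a hypothetical column name, since the real CH-UA columns come from the refinery patch, not this log:
    sudo -u analytics hive -e "ALTER TABLE wmf_raw.webrequest ADD COLUMNS (ch_ua string COMMENT 'Sec-CH-UA header value')"
)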
15:44 <joal> Kill-restart webrequest oozie job after deploy [analytics]
15:40 <joal> Kill-restart edit-hourly oozie job after deploy [analytics]
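(A kill-restart of an Oozie coordinator generally looks like the following; a sketch, with coordinator ID, properties file, and Oozie URL all hypothetical:
    oozie job -oozie http://an-coord1001.eqiad.wmnet:11000/oozie -kill 0004321-220101000000000-oozie-oozi-C
    oozie job -oozie http://an-coord1001.eqiad.wmnet:11000/oozie -config webrequest.properties -submit
)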
15:27 <joal> Deploy refinery to HDFS [analytics]
15:10 <elukey> elukey@cp4036:~$ sudo systemctl restart varnishkafka-eventlogging [analytics]
15:10 <elukey> elukey@cp4036:~$ sudo systemctl restart varnishkafka-statsv [analytics]
15:06 <elukey> elukey@cp4035:~$ sudo systemctl restart varnishkafka-eventlogging.service - metrics showing messages stuck for a poll() [analytics]
14:56 <elukey> elukey@cp4035:~$ sudo systemctl restart varnishkafka-webrequest.service - metrics showing messages stuck for a poll() [analytics]
14:52 <joal> Deploy refinery with scap [analytics]
10:07 <btullis> btullis@cumin1001:~$ sudo cumin 'O:cache::upload or O:cache::text' 'disable-puppet btullis-T296064-T299401' [analytics]
2022-01-25
19:46 <ottomata> removing hdfs druid deep storage from test cluster [analytics]
19:37 <ottomata> resetting test cluster druid via druid reset-cluster https://druid.apache.org/docs/latest/operations/reset-cluster.html - T299930 [analytics]
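(Per the linked documentation, reset-cluster is one of the Druid CLI tools; a sketch of the invocation, with classpath and flags to be checked against the docs for the deployed version:
    java -classpath "lib/*" org.apache.druid.cli.Main tools reset-cluster --all
)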
14:30 <ottomata> stopping services on an-test-coord1001 - T299930 [analytics]
14:29 <ottomata> stopping druid* on an-test-druid1001 - T299930 [analytics]
11:30 <btullis> pooled aqs1011 T298516 [analytics]
11:29 <btullis> btullis@puppetmaster1001:~$ sudo -i confctl select name=aqs1011.eqiad.wmnet set/pooled=yes [analytics]
2022-01-24
21:18 <btullis> btullis@deploy1002:/srv/deployment/analytics/refinery$ scap deploy -e hadoop-test -l an-test-coord1001.eqiad.wmnet [analytics]
20:35 <btullis> rebooting an-test-coord1001 after recreating the /srv filesystem. [analytics]