2018-12-13 §
21:11 <ottomata> superset is back up at version 0.26.3 [analytics]
20:57 <ottomata> stopped superset on analytics-tool1003 for revert to previous version (luca will revert the db backup) [analytics]
15:25 <mforns> restarted turnilo to clean a deleted test datasource [analytics]
2018-12-12 §
22:49 <mforns> restarted turnilo to clear deleted test datasource [analytics]
20:07 <mforns> restarted turnilo to clear deleted test datasource [analytics]
17:10 <mforns> restarted turnilo to clear deleted test datasource [analytics]
2018-12-11 §
16:03 <mforns> restarted Turnilo to clear deleted datasource [analytics]
15:27 <mforns> restarted Turnilo to clear deleted datasource [analytics]
14:58 <joal> Restarted clickstream job after repairing hive mediawiki-tables partitions [analytics]
2018-12-10 §
19:37 <joal> Manually deleting old druid-public snapshots that did not follow the datasource naming convention (- instead of _) [analytics]
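The entry above cleaned up snapshots whose names used hyphens where the datasource convention expects underscores. A minimal sketch of how one might separate compliant from offending names before deleting the latter by hand; the helper name and the sample datasource names are invented for illustration, not taken from the actual cluster:

```python
# Hypothetical helper: split snapshot names into convention-compliant
# (underscore-separated) and non-compliant (hyphen-containing) groups.
def partition_by_naming_convention(snapshots):
    compliant, offenders = [], []
    for name in snapshots:
        # The convention here uses '_' as the word separator; '-' violates it.
        (offenders if "-" in name else compliant).append(name)
    return compliant, offenders

# Illustrative sample names (not real datasources).
keep, delete = partition_by_naming_convention(
    ["unique_devices_daily", "unique-devices-daily", "pageviews_hourly"]
)
```

Reviewing the `delete` list before acting on it mirrors the manual, one-off nature of the cleanup logged here.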
14:51 <milimetric> trying the labsdb/analytics-store combination sqoop, live logs in /home/milimetric/sqoop-[private-]log.log on stat1004 [analytics]
2018-12-07 §
08:10 <joal> manually create /wmf/data/raw/mediawiki/tables/change_tag/snapshot=2018-11/_SUCCESS on hdfs to unlock mw-history-load and therefore mw-history-reduced [analytics]
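The `_SUCCESS` marker created above is just an empty file whose presence tells downstream jobs (here mw-history-load) that the partition is complete; on the cluster this is typically done with something like `hdfs dfs -touchz <dir>/_SUCCESS`. The sketch below illustrates the same idea against the local filesystem as a stand-in, with an invented temporary path:

```python
from pathlib import Path
import tempfile

def mark_partition_complete(partition_dir: str) -> Path:
    """Create an empty _SUCCESS flag in the given partition directory.

    Local-filesystem stand-in for `hdfs dfs -touchz <dir>/_SUCCESS`:
    downstream jobs wait for this marker before reading the partition.
    """
    flag = Path(partition_dir) / "_SUCCESS"
    flag.parent.mkdir(parents=True, exist_ok=True)
    flag.touch()  # empty file; only its existence matters
    return flag

# Illustrative local path mirroring the HDFS snapshot layout.
tmp = tempfile.mkdtemp()
flag = mark_partition_complete(f"{tmp}/change_tag/snapshot=2018-11")
```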
2018-12-06 §
15:08 <elukey> turnilo migrated to nodejs 10 [analytics]
2018-12-05 §
14:53 <elukey> restart hdfs namenodes and yarn rm to update rack awareness config (prep for new nodes) [analytics]
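Rack awareness, updated above in preparation for new nodes, is commonly wired up via Hadoop's script-based mapping (`net.topology.script.file.name`): the NameNode/ResourceManager invoke the configured script with hostnames or IPs as arguments and read one rack path per argument from stdout. A minimal sketch under that assumption; the host-to-rack table is invented, not the real analytics cluster layout:

```python
#!/usr/bin/env python3
"""Minimal sketch of a Hadoop rack-awareness topology script."""
import sys

# Invented example mapping (the real one lives in cluster config).
HOST_TO_RACK = {
    "analytics1039.example": "/eqiad/rack-a",
    "analytics1040.example": "/eqiad/rack-b",
}
DEFAULT_RACK = "/default-rack"  # unknown hosts fall back here

def rack_for(host: str) -> str:
    return HOST_TO_RACK.get(host, DEFAULT_RACK)

if __name__ == "__main__":
    # Hadoop passes one or more hosts and expects one rack path per line.
    for host in sys.argv[1:]:
        print(rack_for(host))
```

As the log entry notes, the HDFS NameNodes and YARN ResourceManager have to be restarted for a changed topology mapping to take effect.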
11:58 <fdans> backfilling in progress, killing uniques coordinators within bundle, will restart bundle on Jan 1st [analytics]
11:34 <fdans> backfill test successful. Starting job to backfill family uniques since March 2017 [analytics]
10:03 <fdans> backfilling test for unique project families - start_time=2016-01-01T00:00Z stop_time=2016-02-01T00:00Z [analytics]
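The test above covers the half-open window [start_time, stop_time), i.e. all of January 2016 with the stop timestamp excluded. Assuming one run per day (matching the daily uniques job), the runs in such a window can be enumerated as sketched below; the function name is illustrative:

```python
from datetime import datetime, timedelta

def daily_runs(start: str, stop: str):
    """List the daily run timestamps in the half-open window [start, stop).

    Timestamps use the same format as the start_time/stop_time values
    passed to the backfill job; stop itself is not rerun.
    """
    fmt = "%Y-%m-%dT%H:%MZ"
    t = datetime.strptime(start, fmt)
    end = datetime.strptime(stop, fmt)
    runs = []
    while t < end:
        runs.append(t)
        t += timedelta(days=1)
    return runs

# The window from the log entry: all 31 days of January 2016.
runs = daily_runs("2016-01-01T00:00Z", "2016-02-01T00:00Z")
```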
09:13 <elukey> matomo read only + upgrade to matomo 3.7.0 on matomo1001 [analytics]
07:43 <elukey> restart middlemanager/broker/historical on druid-public to pick up new log4j settings [analytics]
2018-12-04 §
18:26 <ottomata> reenabled refinement of mediawiki_revision_score [analytics]
17:50 <joal> Deploying aqs using scap for offset and underestimate values in unique-devices endpoints [analytics]
17:12 <elukey> cleanup logs on /var/log/druid on druid100[1-3] after change in log4j settings [analytics]
15:25 <elukey> rolling restart of broker/historical/middlemanager on druid100[1-3] to pick up new logging settings [analytics]
15:01 <joal> Update test values for uniques in cassandra before deploy [analytics]
14:56 <elukey> restart druid broker and historical on druid1001 [analytics]
12:16 <joal> Drop cassandra test keyspace "local_group_default_T_unique_devices_TEST" [analytics]
10:55 <fdans> deploying AQS to expose offset and underestimate numbers on unique devices [analytics]
2018-12-03 §
20:05 <ottomata> dropping and recreating hive event.mediawiki_revision_score table and data - T210465 [analytics]
18:11 <mforns> rerun webrequest upload load job for 2018-12-01T14:00 [analytics]
2018-12-01 §
08:50 <fdans> bundle restarted successfully [analytics]
08:39 <fdans> killing current cassandra bundle [analytics]
2018-11-30 §
12:45 <joal> Update hive wmf_raw mediawiki schemas (namespace bigint -> int) [analytics]
2018-11-29 §
18:33 <mforns> Finished refinery deployment using scap and refinery-deploy-to-hdfs [analytics]
17:41 <mforns> Starting refinery deployment using scap and refinery-deploy-to-hdfs [analytics]
17:37 <mforns> Deployed refinery-source using jenkins [analytics]
2018-11-26 §
15:47 <ottomata> moved old raw revision-score data to hdfs in /user/otto/revision_score_old_schema_raw - T210013 [analytics]
15:41 <ottomata> stopped producing revision-score events with old schema; merged and deployed new schema; petr to deploy change to produce events with new schema soon. https://phabricator.wikimedia.org/T210013 [analytics]
15:27 <fdans> monthly and daily jobs for uniques killed, replaced with backfilling jobs until Dec 1st [analytics]
2018-11-22 §
13:42 <elukey> allow the research user to create/alter/etc. tables on staging@db1108 [analytics]
2018-11-21 §
19:49 <milimetric> deploying AQS [analytics]
13:06 <fdans> launching backfilling jobs for daily and monthly uniques from beginning of time until Nov 20 [analytics]
13:05 <fdans> test backfill on 13 Nov daily uniques successful [analytics]
12:54 <fdans> testing backfill of daily uniques in production for 2018-11-13 [analytics]
2018-11-20 §
14:02 <elukey> restart hive-server2 to pick up new settings - T209536 [analytics]
11:44 <elukey> re-run pageview-hourly-wf-2018-11-20-9 [analytics]
2018-11-19 §
13:59 <joal> aborting deployment on aqs to include a new patch [analytics]
13:41 <joal> Deploying aqs using scap [analytics]
13:27 <fdans> deploying aqs to add new fields to uniques dataset (T167539) [analytics]
2018-11-18 §
08:44 <elukey> re-run webrequest-load-wf-text-2018-11-17-23 via Hue [analytics]
08:37 <elukey> restart yarn on analytics1039 - not clear why the process failed (nothing in the logs, no other disks failed) [analytics]