2022-06-13
13:20 | <btullis> | btullis@datahubsearch1001:~$ sudo systemctl reset-failed ifup@ens13.service T273026 | [analytics]
13:09 | <btullis> | restarting oozie service on an-coord1001 | [analytics]
12:59 | <btullis> | having failed over Hive to an-coord1002 10 minutes ago, I'm restarting Hive services on an-coord1001 | [analytics]
12:26 | <btullis> | restarting hive-server2 and hive-metastore on an-coord1002 | [analytics]
09:54 | <joal> | Rerun failed refine for network_flows_internal | [analytics]
09:54 | <joal> | Rerun failed refine for mediawiki_talk_page_edit events | [analytics]
09:51 | <joal> | Manually rerun webrequest_text load for hour 2022-06-13T03:00 | [analytics]
07:18 | <joal> | Manually rerun webrequest_text load for hour 2022-06-12T08:00 | [analytics]
            
  
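The 13:20 entry above clears a unit's lingering failed state so it stops showing up in monitoring. A minimal dry-run sketch of that procedure (the commands are echoed rather than executed here, so it is safe anywhere; the unit name is the one from the log entry):

```shell
# Dry-run sketch of clearing a failed systemd unit. The admin commands are
# echoed, not executed; on the host itself you would drop the `echo`.
unit="ifup@ens13.service"

# List units currently in the failed state.
echo "systemctl --failed"

# Clear the failed state of the unit named in the log entry.
echo "sudo systemctl reset-failed ${unit}"
```

`reset-failed` only resets systemd's bookkeeping for the unit; it does not start or restart anything.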
2022-06-02
15:50 | <mforns> | deployed wikistats 2.9.5 | [analytics]
14:02 | <joal> | Start browser_general_daily on airflow | [analytics]
13:19 | <joal> | Drop and recreate wmf_raw.mediawiki_page table (field removal) | [analytics]
12:44 | <joal> | Remove wrongly formatted interlanguage data | [analytics]
12:36 | <joal> | Kill interlanguage-daily oozie job after successful move to airflow | [analytics]
12:15 | <joal> | Deploy interlanguage fix to airflow | [analytics]
09:56 | <joal> | Relaunch sqoop after having deployed a corrective patch | [analytics]
09:46 | <joal> | Manually mark interlanguage historical tasks failed in airflow | [analytics]
08:54 | <joal> | Deploy airflow with spark3 jobs | [analytics]
08:47 | <joal> | Merging 2 airflow spark3 jobs now that their refinery counterpart is deployed | [analytics]
08:07 | <joal> | Deploy refinery onto HDFS | [analytics]
07:26 | <joal> | Deploy refinery using scap | [analytics]
            
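The 07:26 and 08:07 entries reflect the usual two-step refinery deploy: scap first pushes the code to the deployment targets, then a second step syncs the artifacts onto HDFS so cluster jobs pick them up. A dry-run sketch of that ordering (commands are echoed, not executed, and the command names here are illustrative placeholders rather than the exact Wikimedia tooling):

```shell
# Dry-run sketch of a two-step deploy: ship code to hosts first, then sync
# it to HDFS. Commands are echoed, not executed; the names are placeholders.
deploy_steps() {
  echo "scap deploy 'refinery update'"   # step 1: deploy to target hosts
  echo "refinery-deploy-to-hdfs"         # step 2: sync artifacts onto HDFS
}
deploy_steps
```

The ordering matters: the HDFS sync reads from the freshly deployed checkout, so running it before scap would publish the old artifacts.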
  
2022-05-31
18:48 | <ottomata> | sudo -u hdfs hdfs dfsadmin -safemode leave on an-master1001 | [analytics]
18:12 | <ottomata> | sudo service hadoop-hdfs-namenode start on an-master1002 | [analytics]
18:10 | <ottomata> | sudo -u hdfs hdfs dfsadmin -safemode enter | [analytics]
17:47 | <btullis> | starting namenode services on an-master1001 | [analytics]
17:44 | <btullis> | restarting the datanodes on all five of the affected hadoop workers. | [analytics]
17:43 | <btullis> | restarting journalnode service on each of the five hadoop workers with journals. | [analytics]
17:41 | <btullis> | resizing each journalnode with resize2fs | [analytics]
17:38 | <btullis> | sudo lvresize -L+20G analytics1069-vg/journalnode | [analytics]
17:38 | <btullis> | increasing each of the hadoop journalnodes by 20 GB | [analytics]
17:33 | <ottomata> | stop journalnodes and datanodes on 5 hadoop journalnode hosts | [analytics]
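Read bottom-up, the 17:33–18:48 entries are a resize of the journalnode volumes followed by namenode recovery under HDFS safe mode. A dry-run sketch of the per-host sequence (commands are echoed rather than executed; the VG/LV name is the one from the 17:38 entry, while the `/dev` path and the service unit names are assumptions):

```shell
# Dry-run sketch of the journalnode resize from the log. Commands are
# echoed, not executed. The VG/LV name comes from the 17:38 entry; the
# /dev path and the hadoop-hdfs-* service names are assumptions.
lv="analytics1069-vg/journalnode"

# On each journalnode host: grow the LV, then grow the ext4 FS to match.
echo "sudo lvresize -L+20G ${lv}"
echo "sudo resize2fs /dev/${lv}"

# Restart the local HDFS daemons so they reopen the resized volume.
echo "sudo systemctl restart hadoop-hdfs-journalnode"
echo "sudo systemctl restart hadoop-hdfs-datanode"

# On the masters: quiesce writes, bring the namenode back, then leave
# safe mode once the cluster reports healthy.
echo "sudo -u hdfs hdfs dfsadmin -safemode enter"
echo "sudo service hadoop-hdfs-namenode start"
echo "sudo -u hdfs hdfs dfsadmin -safemode leave"
```

Safe mode keeps HDFS read-only while the namenodes and journalnodes come back, which is why the 18:48 `-safemode leave` is the final step of the incident.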