2016-02-03

15:12 <hashar> apt-get upgrade deployment-sentry2 [releng]
15:03 <hashar> redeployed rcstream/rcstream on deployment-stream using git-deploy on deployment-bastion [releng]
14:55 <hashar> upgrading deployment-stream [releng]
14:42 <hashar> pooled back integration-slave-trusty-1015; seems OK [releng]
14:35 <hashar> manually triggered a bunch of browser test jobs [releng]
11:40 <hashar> apt-get upgrade deployment-ms-be01 and deployment-ms-be02 [releng]
11:32 <hashar> fixing puppet.conf on deployment-memc04 [releng]
11:08 <hashar> restarting beta cluster puppetmaster just in case [releng]
11:07 <hashar> beta: apt-get upgrade on deployment-cache* hosts and checking puppet [releng]
10:59 <hashar> integration/beta: deleting /etc/apt/apt.conf.d/*proxy files. There is no need for them; in fact the web proxy is not reachable from labs [releng]
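
For reference, the cleanup above amounts to something like the following; the salt target is an assumption, since the log only records that the files were deleted:

    # Drop the stale apt proxy config on all hosts (hypothetical target):
    salt '*' cmd.run 'rm -f /etc/apt/apt.conf.d/*proxy'
    # Confirm apt no longer points at the unreachable proxy:
    apt-config dump | grep -i proxy || echo "no proxy configured"
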
10:53 <hashar> integration: switched puppet repo back to 'production' branch, rebased [releng]
10:49 <hashar> various beta cluster hosts have puppet errors [releng]
10:46 <hashar> integration-slave-trusty-1013 nearly out of disk space on /mnt [releng]
10:42 <hashar> integration-slave-trusty-1016 out of disk space on /mnt [releng]
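
Typical triage for these out-of-disk entries looks something like this (a sketch; the exact commands used are not recorded in the log):

    df -h /mnt                                  # confirm how full the partition is
    du -sh /mnt/* 2>/dev/null | sort -h | tail  # find the biggest offenders
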
03:45 <bd808> Puppet failing on deployment-fluorine with "Error: Could not set uid on user[datasets]: Execution of '/usr/sbin/usermod -u 10003 datasets' returned 4: usermod: UID '10003' already exists" [releng]
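
usermod returns exit code 4 when the requested UID is already in use, so the usual first step is finding the conflicting account (a sketch, not the commands actually run):

    getent passwd 10003   # which account currently owns UID 10003
    id datasets           # what uid/gid the datasets user has now
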
03:44 <bd808> Freed 28G by deleting deployment-fluorine:/srv/mw-log/archive/*2015* [releng]
03:41 <bd808> Ran deployment-bastion.deployment-prep:/home/bd808/cleanup-var-crap.sh and freed 565M [releng]

2016-02-01

23:53 <bd808> Logstash working again; I applied a change to the default mapping template for Elasticsearch that ensures that fields named "timestamp" are indexed as plain strings [releng]
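
The fix described above is a mapping override in the default index template. A minimal sketch of the general shape, assuming the Elasticsearch 1.x-era template API; the template name and match pattern are placeholders, not the actual change that was applied:

    curl -XPUT 'http://localhost:9200/_template/timestamp_as_string' -d '{
      "template": "logstash-*",
      "mappings": {
        "_default_": {
          "properties": {
            "timestamp": { "type": "string", "index": "not_analyzed" }
          }
        }
      }
    }'
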
23:46 <bd808> Elasticsearch index template for beta logstash cluster making crappy guesses about syslog events; dropped the 2016-02-01 index; trying to fix default mappings [releng]
23:08 <bd808> HHVM logs causing rejections during document parse when inserting into Elasticsearch from logstash. They contain a "timestamp" field that looks like "Feb  1 22:56:39", which is making the mapper in Elasticsearch sad. [releng]
23:02 <bd808> Elasticsearch on deployment-logstash2 rejecting all documents with 400 status. Investigating [releng]
22:50 <bd808> Copying deployment-logstash2.deployment-prep:/var/log/logstash/logstash.log to /srv for debugging later [releng]
22:48 <bd808> deployment-logstash2.deployment-prep:/var/log/logstash/logstash.log is 11G of fail! [releng]
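
Copy-then-truncate is the usual way to reclaim this space, since deleting a file logstash still holds open would not free the blocks. A sketch under that assumption (the log only records the copy):

    cp /var/log/logstash/logstash.log /srv/logstash.log.$(date +%F)
    truncate -s 0 /var/log/logstash/logstash.log
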
22:46 <bd808> root partition on deployment-logstash2 full [releng]
22:43 <bd808> No data in logstash since 2016-01-30T06:55:37.838Z; investigating [releng]
15:33 <hashar> Image ci-jessie-wikimedia-1454339883 in wmflabs-eqiad is ready [releng]
15:01 <hashar> Refreshing Nodepool image. Might have npm/grunt properly set up [releng]
03:15 <legoktm> deploying https://gerrit.wikimedia.org/r/267630 [releng]

2016-01-28

23:22 <MaxSem> Updated portals on betalabs to master [releng]
22:23 <hashar> salt '*slave-precise*' cmd.run 'apt-get install php5-ldap' (https://phabricator.wikimedia.org/T124613); will need to be puppetized [releng]
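
A quick follow-up check that the extension actually loaded on every Precise slave (hypothetical, not taken from the log):

    salt '*slave-precise*' cmd.run 'php -m | grep -i ldap'
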
18:17 <thcipriani> cleaning npm cache on slave machines: salt -v '*slave*' cmd.run 'sudo -i -u jenkins-deploy -- npm cache clean' [releng]
18:12 <thcipriani> running npm cache clean on integration-slave-precise-1011: sudo -i -u jenkins-deploy -- npm cache clean [releng]
15:25 <hashar> apt-get upgrade deployment-sca01 and deployment-sca02 [releng]
15:09 <hashar> fixing puppet.conf hostname on deployment-upload, deployment-conftool, deployment-tmh01, deployment-zookeeper01 and deployment-urldownloader [releng]
15:06 <hashar> fixing puppet.conf hostname on deployment-upload.deployment-prep.eqiad.wmflabs and running puppet [releng]
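
These puppet.conf hostname fixes usually boil down to making certname match the instance's FQDN and re-running the agent; a sketch, assuming certname was the stale field:

    sed -i 's/^certname = .*/certname = deployment-upload.deployment-prep.eqiad.wmflabs/' /etc/puppet/puppet.conf
    puppet agent --test
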
15:00 <hashar> Running puppet on deployment-memc02 and deployment-elastic07; it is catching up with a lot of changes [releng]