2014-03-24
    
  | 23:16 | <marktraceur> | Touching all the MMV scripts because they're not getting invalidated or something | [releng] | 
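  "Touching" here presumably means bumping the scripts' mtimes so the cached copies get invalidated; a minimal sketch of that, with the MediaWiki checkout path left as a placeholder since the log does not record it:

      # hypothetical path; bump the mtime of every MultimediaViewer resource file
      find "$MEDIAWIKI_DIR/extensions/MultimediaViewer/resources" -type f -exec touch {} +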
            
  | 23:10 | <hashar> | l10n cache got broken due to a PHP fatal error I introduced. It is back up now. Found out via https://integration.wikimedia.org/dashboard/ | [releng] | 
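  The entry does not say how the cache was rebuilt; done by hand it would look roughly like the standard maintenance script run below (the wiki id is only an example):

      # rebuild the localisation cache for one wiki
      mwscript rebuildLocalisationCache.php --wiki=enwiki --force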
            
  | 23:09 | <hashar> | upgraded all pmtpa varnishes, ran puppet on all of them. all set! | [releng] | 
            
  | 22:57 | <hashar> | restarting deployment-cache-upload04, apparently stalled | [releng] | 
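  Whether the instance was restarted from the wikitech interface or the command line is not recorded; with the OpenStack CLI of the era it would be roughly:

      # soft-reboot the stalled instance (assumes nova credentials are already sourced)
      nova reboot deployment-cache-upload04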
            
  | 22:48 | <hashar> | upgrading varnish on all pmtpa caches. | [releng] | 
            
  | 22:47 | <hashar> | apt-get upgrade varnish on deployment-cache-bits03 | [releng] | 
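  A sketch of the per-instance package upgrade (the log only records "apt-get upgrade varnish"; the exact flags are an assumption):

      sudo apt-get update
      # upgrade only the varnish package rather than the whole system
      sudo apt-get install --only-upgrade varnish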
            
  | 22:45 | <marktraceur> | attempted restart of varnish on betalabs; seems to have failed, trying again | [releng] | 
            
  | 22:42 | <hashar> | made marktraceur a project admin and granted sudo rights | [releng] | 
            
  | 22:39 | <marktraceur> | Restarting betalabs varnish to work around https://bugzilla.wikimedia.org/show_bug.cgi?id=63034 | [releng] | 
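  The restart itself is a plain service restart on the cache instance, assuming the stock init script:

      sudo service varnish restart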
            
  | 17:25 | <bd808> | Converted deployment-db1.eqiad.wmflabs to use local puppet & salt masters | [releng] | 
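  "Local puppet & salt masters" recurs throughout this log: each instance is repointed from the shared labs masters to deployment-salt.eqiad.wmflabs (named in the 2014-03-20 entries below). A sketch of the conversion expressed as shell; the exact config edits are assumptions:

      # point the puppet agent and salt minion at the project-local master
      sudo sed -i 's/^server *=.*/server = deployment-salt.eqiad.wmflabs/' /etc/puppet/puppet.conf
      sudo sed -i 's/^#\? *master:.*/master: deployment-salt.eqiad.wmflabs/' /etc/salt/minion
      sudo service salt-minion restart
      # one agent run to confirm the instance still compiles a catalog
      sudo puppet agent --test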
            
  | 17:06 | <bd808> | Changed rules in sql security group to use CIDR 10.0.0.0/8. | [releng] | 
            
  | 17:05 | <bd808> | Changed rules in search security group to use CIDR 10.0.0.0/8. | [releng] | 
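  The log does not say whether the rules were edited through the wikitech interface or the CLI; with the novaclient of the era, adding a rule scoped to the labs-wide 10.0.0.0/8 range would look roughly like this (the ports are illustrative assumptions):

      # example only: MySQL access for members of the sql group
      nova secgroup-add-rule sql tcp 3306 3306 10.0.0.0/8
      # example only: Elasticsearch access for members of the search group
      nova secgroup-add-rule search tcp 9200 9200 10.0.0.0/8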
            
  | 17:05 | <bd808> | Built deployment-elastic04.eqiad.wmflabs with local salt/puppet master, secondary disk on /var/lib/elasticsearch and role::elasticsearch::server | [releng] | 
            
  | 16:19 | <bd808> | Built deployment-elastic03.eqiad.wmflabs with local salt/puppet master, secondary disk on /var/lib/elasticsearch and role::elasticsearch::server | [releng] | 
            
  | 16:08 | <bd808> | Built deployment-elastic02.eqiad.wmflabs with local salt/puppet master, secondary disk on /var/lib/elasticsearch and role::elasticsearch::server | [releng] | 
            
  | 15:54 | <bd808> | Built deployment-elastic01.eqiad.wmflabs with local salt/puppet master, secondary disk on /var/lib/elasticsearch and role::elasticsearch::server | [releng] | 
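  On each of the four elastic instances the secondary disk ends up on /var/lib/elasticsearch; a sketch of doing that by hand (device name and filesystem are assumptions, and other entries in this log note that /dev/vdb is missing on some eqiad instances):

      DISK=/dev/vdb   # adjust to whatever the secondary disk is called on the instance
      sudo mkfs.ext4 "$DISK"
      sudo mkdir -p /var/lib/elasticsearch
      sudo mount "$DISK" /var/lib/elasticsearch
      echo "$DISK /var/lib/elasticsearch ext4 defaults 0 2" | sudo tee -a /etc/fstab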
            
  | 10:31 | <hashar> | migrated deployment-solr to self puppet/salt masters | [releng] | 
            
  
2014-03-20
    
  | 23:46 | <bd808> | Mounted secondary disk as /var/lib/elasticsearch on deployment-logstash1 | [releng] | 
            
  | 23:46 | <bd808> | Converted deployment-tin to use local puppet & salt masters | [releng] | 
            
  | 22:09 | <hashar> | Migrated videoscaler01 to use self salt/puppet masters. | [releng] | 
            
  | 21:30 | <hashar> | manually installing timidity-daemon on jobrunner01.eqiad so puppet can stop it and stop whining | [releng] | 
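  Installing the package by hand lets the next puppet run find the service and put it in the stopped state it wants, instead of failing every run; a sketch:

      sudo apt-get install timidity-daemon
      # puppet then stops the service on its next run
      sudo puppet agent --test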
            
  | 21:00 | <hashar> | migrate jobrunner01.eqiad.wmflabs to self puppet/salt masters | [releng] | 
            
  | 20:55 | <hashar> | deleting deployment-jobrunner02; let's start with a single instance for now | [releng] | 
            
  | 20:51 | <hashar> | Creating deployment-jobrunner01 and 02 in eqiad. | [releng] | 
            
  | 15:47 | <hashar> | fixed salt-minion service on deployment-cache-upload01 and deployment-cache-mobile03 by deleting /etc/salt/pki/minion/minion_master.pub | [releng] | 
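  The minion caches the old master's public key, so after switching masters it refuses the new one until that cached key is removed; a sketch of the fix (the restart step is an assumption, not in the entry):

      sudo rm /etc/salt/pki/minion/minion_master.pub
      sudo service salt-minion restart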
            
  | 15:30 | <hashar> | migrated deployment-cache-upload01.eqiad.wmflabs and deployment-cache-mobile03.eqiad.wmflabs to use the salt/puppetmaster deployment-salt.eqiad.wmflabs. | [releng] | 
            
  | 15:30 | <hashar> | deployment-cache-upload01.eqiad.wmflabs and deployment-cache-mobile03.eqiad.wmflabs recovered!! /dev/vdb does not exist on eqiad, which caused the instances to stall. | [releng] | 
            
  | 10:48 | <hashar> | Stopped the simplewiki script. Would need to recreate the db from scratch instead. | [releng] | 
            
  | 10:37 | <hashar> | Cleaning up simplewiki by deleting most pages in the main namespace. This should free up some disk space. deleteBatch.php is running in a screen on deployment-bastion.pmtpa.wmflabs | [releng] | 
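  deleteBatch.php takes a file of page titles and deletes them one by one; a sketch of how such a run is started (the title list and reason are placeholders, as the log only says the script is running in a screen):

      # start a named screen session on the bastion, then inside it:
      screen -S simplewiki-cleanup
      mwscript deleteBatch.php --wiki=simplewiki -r "beta cluster cleanup" main-ns-titles.txt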
            
  | 10:08 | <hashar> | applying role::labs::lvm::mnt on deployment-db1 to provide additional disk space on /mnt | [releng] | 
            
  | 09:39 | <hashar> | convert all remaining hosts but db1 to use the local puppet and salt masters | [releng] | 
            
  
2014-03-19
    
  | 21:23 | <bd808> | Converted deployment-cache-text02 to use local puppet & salt masters | [releng] | 
            
  | 20:21 | <hashar> | migrating eqiad varnish caches to use xfs | [releng] | 
            
  | 17:58 | <bd808> | Converted deployment-parsoid04 to use local puppet & salt masters | [releng] | 
            
  | 17:51 | <bd808> | Converted deployment-eventlogging02 to use local puppet & salt masters | [releng] | 
            
  | 17:22 | <bd808> | Converted deployment-cache-bits01 to use local puppet & salt masters; puppet:///volatile/GeoIP not found on deployment-salt puppetmaster | [releng] | 
            
  | 17:00 | <bd808> | Converted deployment-apache02 to use local puppet & salt masters | [releng] | 
            
  | 16:49 | <bd808> | Converted deployment-apache01 to use local puppet & salt masters | [releng] | 
            
  | 16:30 | <hashar> | Varnish caches in eqiad are failing puppet because there is no /dev/vdb. Will figure it out tomorrow :-] | [releng] | 
            
  | 16:15 | <hashar> | Applying role::logging::mediawiki::errors on deployment-fluoride.eqiad.wmflabs . It is not receiving anything yet though. | [releng] |