2016-08-24
15:14 <halfak> deploying ores d00171 [releng]
09:50 <hashar> deployment-redis02 fixed AOF file /srv/redis/deployment-redis02-6379.aof and restarted the redis instance should fix T143655 and might help T142600 [releng]
09:43 <hashar> T143655 stopping redis 6379 on deployment-redis02 : initctl stop redis-instance-tcp_6379 [releng]
09:38 <hashar> deployment-redis02 initctl stop redis-instance-tcp_6379 && initctl start redis-instance-tcp_6379 | That did not fix it magically though T143655 [releng]
2016-08-23
18:21 <legoktm> deploying https://gerrit.wikimedia.org/r/306257 [releng]
16:38 <bd808> Fixed ops/puppet sync by removing stale cherry-pick of https://gerrit.wikimedia.org/r/#/c/305996/ [releng]
08:22 <hashar> running puppet on integration-slave-trusty-1014 [releng]
08:18 <hashar> reboot integration-slave-trusty-1014 [releng]
08:16 <hashar> disabled/enabled Jenkins Gearman client to remove deadlock with Throttle plugin [releng]
2016-08-22
23:40 <legoktm> updating slave_scripts on all slaves [releng]
2016-08-19
00:39 <Krenair> deployment-fluorine is now deployment-fluorine02 running jessie with the old precise packages shoehorned in [releng]
2016-08-18
22:03 <bd808> deployment-fluorine02: Hack 'datasets:x:10003:997::/home/datasets:/bin/bash' into /etc/passwd for T117028 [releng]
20:30 <MaxSem> Restarted hhvm on appservers for wikidiff2 upgrades [releng]
19:03 <MaxSem> Upgrading hhvm-wikidiff2 in beta cluster [releng]
16:53 <legoktm> deploying https://gerrit.wikimedia.org/r/#/c/305532/ [releng]
2016-08-17
22:27 <legoktm> deploying https://gerrit.wikimedia.org/r/305408 [releng]
21:33 <cscott> updated OCG to version e3e0fd015ad8fdbf9da1838c830fe4b075c59a29 [releng]
21:28 <bd808> restarted salt-minion on deployment-pdf02 [releng]
21:26 <bd808> restarted salt-minion on deployment-pdf01 [releng]
21:15 <cscott> starting OCG deploy to beta [releng]
14:10 <gehel> upgrading elasticsearch to 2.3.4 on deployment-logstash2.deployment-prep.eqiad.wmflabs [releng]
13:28 <gehel> upgrading elasticsearch to 2.3.4 on deployment-elastic*.deployment-prep + JVM upgrade [releng]
2016-08-16
23:10 <thcipriani> max_servers at 6, seeing 6 allocated instances, still seeing 403 already used 10 of 10 instances :(( [releng]
22:37 <thcipriani> restarting nodepool, bumping max_servers to match up with what openstack seems willing to allocate (6) [releng]
09:06 <Amir1> removing ores-related-cherry-picked commits from deployment-puppetmaster [releng]
2016-08-15
21:30 <thcipriani> update scap on beta to 3.2.3-1 bugfix release [releng]
02:30 <bd808> Forced a zuul restart -- https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Restart [releng]
02:23 <bd808> Lots and lots of "AttributeError: 'NoneType' object has no attribute 'name'" errors in /var/log/zuul/zuul.log [releng]
02:21 <bd808> nodepool delete 301068 [releng]
02:20 <bd808> nodepool delete 301291 [releng]
02:20 <bd808> nodepool delete 301282 [releng]
02:19 <bd808> nodepool delete 301144 [releng]
02:11 <bd808> nodepool delete 299641 [releng]
02:11 <bd808> nodepool delete 278848 [releng]
02:08 <bd808> Aug 15 02:07:48 labnodepool1001 nodepoold[24796]: Forbidden: Quota exceeded for instances: Requested 1, but already used 10 of 10 instances (HTTP 403) [releng]
2016-08-13
23:16 <Amir1> cherry-picking 304678/1 into the puppetmaster [releng]
00:08 <legoktm> deploying https://gerrit.wikimedia.org/r/304588 [releng]
00:06 <legoktm> deploying https://gerrit.wikimedia.org/r/304068 [releng]
2016-08-12
23:57 <legoktm> deploying https://gerrit.wikimedia.org/r/304587, no-op [releng]
19:20 <Krenair> that fixed it, upload.beta is back up [releng]
19:14 <Krenair> rebooting deployment-cache-upload04, it's stuck in https://phabricator.wikimedia.org/T141673 and varnish is no longer working there afaict, so trying to bring upload.beta.wmflabs.org back up [releng]
18:19 <Amir1> deploying 2ef24f2 to ores-beta in sca03 [releng]
2016-08-10
23:56 <legoktm> deploying https://gerrit.wikimedia.org/r/304149 [releng]
23:47 <thcipriani> stopping nodepool to clean up [releng]
23:40 <legoktm> deploying https://gerrit.wikimedia.org/r/304131 [releng]
21:59 <thcipriani> restarted nodepool, no trusty instances were being used by jobs [releng]
01:58 <legoktm> deploying https://gerrit.wikimedia.org/r/303218 [releng]
2016-08-09
23:21 <Amir1> ladsgroup@deployment-sca03:~$ sudo service celery-ores-worker restart [releng]
15:24 <thcipriani> due to https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Jenkins_execution_lock [releng]