2016-08-17
21:33 <cscott> updated OCG to version e3e0fd015ad8fdbf9da1838c830fe4b075c59a29 [releng]
21:28 <bd808> restarted salt-minion on deployment-pdf02 [releng]
21:26 <bd808> restarted salt-minion on deployment-pdf01 [releng]
21:15 <cscott> starting OCG deploy to beta [releng]
14:10 <gehel> upgrading elasticsearch to 2.3.4 on deployment-logstash2.deployment-prep.eqiad.wmflabs [releng]
13:28 <gehel> upgrading elasticsearch to 2.3.4 on deployment-elastic*.deployment-prep + JVM upgrade [releng]
2016-08-16
23:10 <thcipriani> max_servers at 6, seeing 6 allocated instances, still seeing 403 already used 10 of 10 instances :(( [releng]
22:37 <thcipriani> restarting nodepool, bumping max_servers to match up with what openstack seems willing to allocate (6) [releng]
09:06 <Amir1> removing ores-related-cherry-picked commits from deployment-puppetmaster [releng]
2016-08-15
21:30 <thcipriani> update scap on beta to 3.2.3-1 bugfix release [releng]
02:30 <bd808> Forced a zuul restart -- https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Restart [releng]
02:23 <bd808> Lots and lots of "AttributeError: 'NoneType' object has no attribute 'name'" errors in /var/log/zuul/zuul.log [releng]
02:21 <bd808> nodepool delete 301068 [releng]
02:20 <bd808> nodepool delete 301291 [releng]
02:20 <bd808> nodepool delete 301282 [releng]
02:19 <bd808> nodepool delete 301144 [releng]
02:11 <bd808> nodepool delete 299641 [releng]
02:11 <bd808> nodepool delete 278848 [releng]
02:08 <bd808> Aug 15 02:07:48 labnodepool1001 nodepoold[24796]: Forbidden: Quota exceeded for instances: Requested 1, but already used 10 of 10 instances (HTTP 403) [releng]
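(The entries above amount to the usual cleanup when Nodepool hits the instance quota: list the nodes, then delete the ones stuck in the "used" state by ID, the same procedure hashar logs on 2016-08-07. A minimal sketch of that cleanup, assuming `nodepool list` prints its standard pipe-delimited ASCII table with the node ID in the second column; the grep pattern and column position are assumptions, not taken from the log:

    # Delete Nodepool nodes stuck in the "used" state.
    # Assumption: `nodepool list` output is a |-delimited table whose
    # second column is the node ID; adjust the filter to the real output.
    nodepool list | grep ' used ' | awk -F'|' '{print $2}' | while read -r id; do
        nodepool delete "$id"
    done
)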
2016-08-13
23:16 <Amir1> cherry-picking 304678/1 into the puppetmaster [releng]
00:08 <legoktm> deploying https://gerrit.wikimedia.org/r/304588 [releng]
00:06 <legoktm> deploying https://gerrit.wikimedia.org/r/304068 [releng]
2016-08-12
23:57 <legoktm> deploying https://gerrit.wikimedia.org/r/304587, no-op [releng]
19:20 <Krenair> that fixed it, upload.beta is back up [releng]
19:14 <Krenair> rebooting deployment-cache-upload04, it's stuck in https://phabricator.wikimedia.org/T141673 and varnish is no longer working there afaict, so trying to bring upload.beta.wmflabs.org back up [releng]
18:19 <Amir1> deploying 2ef24f2 to ores-beta in sca03 [releng]
2016-08-10
23:56 <legoktm> deploying https://gerrit.wikimedia.org/r/304149 [releng]
23:47 <thcipriani> stopping nodepool to clean up [releng]
23:40 <legoktm> deploying https://gerrit.wikimedia.org/r/304131 [releng]
21:59 <thcipriani> restarted nodepool, no trusty instances were being used by jobs [releng]
01:58 <legoktm> deploying https://gerrit.wikimedia.org/r/303218 [releng]
2016-08-09
23:21 <Amir1> ladsgroup@deployment-sca03:~$ sudo service celery-ores-worker restart [releng]
15:24 <thcipriani> due to https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Jenkins_execution_lock [releng]
15:20 <thcipriani> beta site updates stuck for 15 hours :( [releng]
02:17 <legoktm> deploying https://gerrit.wikimedia.org/r/303741 [releng]
02:16 <legoktm> manually updated slave-scripts on all slaves via `fab deploy_slave_scripts` [releng]
00:56 <legoktm> deploying https://gerrit.wikimedia.org/r/303726 [releng]
2016-08-08
23:33 <TimStarling> deleted instance deployment-depurate01 [releng]
16:19 <bd808> Manually cleaned up root@logstash02 cronjobs related to logstash03 [releng]
14:39 <Amir1> deploying d00159c for ores in sca03 [releng]
10:14 <Amir1> deploying 616707c into sca03 (for ores) [releng]
2016-08-07
12:01 <hashar> Nodepool: can't spawn instances due to: Forbidden: Quota exceeded for instances: Requested 1, but already used 10 of 10 instances (HTTP 403) [releng]
12:01 <hashar> nodepool: deleted servers stuck in "used" states for roughly 4 hours (using: nodepool list , then nodepool delete <id>) [releng]
11:54 <hashar> Nodepool: can't spawn instances due to: Forbidden: Quota exceeded for instances: Requested 1, but already used 10 of 10 instances (HTTP 403) [releng]
11:54 <hashar> nodepool: deleted servers stuck in "used" states for roughly 4 hours (using: nodepool list , then nodepool delete <id>) [releng]
2016-08-06
12:31 <Amir1> restarting uwsgi-ores and celery-ores-worker in deployment-sca03 [releng]
12:28 <Amir1> cherry-picked 303356/1 into the puppetmaster [releng]
12:00 <Amir1> restarting uwsgi-ores and celery-ores-worker in deployment-sca03 [releng]
2016-08-05
17:54 <bd808> Cherry-picked https://gerrit.wikimedia.org/r/#/c/299825/3 for testing [releng]