2015-07-23
08:18 <hashar> creating deployment-puppetmaster m1.medium :D [releng]
01:57 <jzerebecki> reconnected slave and needed to kill a few pending beta jobs, works again [releng]
01:50 <jzerebecki> trying https://www.mediawiki.org/wiki/Continuous_integration/Jenkins#Hung_beta_code.2Fdb_update [releng]
01:09 <legoktm> beta-mediawiki-config-update-eqiad jobs stuck [releng]
00:15 <jzerebecki> reloading zuul 369e6eb..73dc1f6 for https://gerrit.wikimedia.org/r/#/c/223527/ [releng]
2015-07-22
10:24 <hashar> Upgrading Zuul on Jenkins Precise slaves to zuul_2.0.0-327-g3ebedde-wmf2precise1_amd64.deb [releng]
09:32 <hashar_> Reupgrading Zuul to zuul_2.0.0-327-g3ebedde-wmf2precise1_amd64.deb with an approval fix ( https://gerrit.wikimedia.org/r/#/c/226274/ ) for gate-and-submit no longer matching Code-Review+2 events ( https://phabricator.wikimedia.org/T106436 ) [releng]
2015-07-21
22:54 <greg-g> 22:50 < chasemp> "then git reset --hard 9588d0a6844fc9cc68372f4bf3e1eda3cffc8138 in /etc/zuul/wikimedia" [releng]
22:53 <greg-g> 22:47 < chasemp> service zuul stop && service zuul-merger stop && sudo apt-get install zuul=2.0.0-304-g685ca22-wmf1precise1 [releng]
21:48 <greg-g> Zuul not responding [releng]
20:23 <hasharConfcall> Zuul no longer reports back to Gerrit due to an error with the Gerrit label [releng]
20:10 <hasharConfcall> Zuul restarted with 2.0.0-327-g3ebedde-wmf2precise1 [releng]
19:48 <hasharConfcall> Upgrading Zuul to zuul_2.0.0-327-g3ebedde-wmf2precise1. Previous version failed because python-daemon was too old; it is now shipped in the venv. https://phabricator.wikimedia.org/T106399 [releng]
15:04 <hashar> upgraded Zuul on gallium from zuul_2.0.0-306-g5984adc-wmf1precise1_amd64.deb to zuul_2.0.0-327-g3ebedde-wmf1precise1_amd64.deb. Now uses python-daemon 2.0.5 [releng]
13:37 <hashar> upgraded Zuul on gallium from zuul_2.0.0-304-g685ca22-wmf1precise1 to zuul_2.0.0-306-g5984adc-wmf1precise1. Uses a new version of GitPython [releng]
02:15 <bd808> upgraded to elasticsearch-1.7.0.deb on deployment-logstash2 [releng]
2015-07-20
16:55 <thcipriani> restarted puppetmaster on deployment-salt, was acting whacky [releng]
2015-07-17
21:45 <hashar> upgraded nodepool to 0.0.1-104-gddd6003-wmf4. That fixes graceful stop via SIGUSR1 and lets me complete the systemd integration [releng]
20:03 <hashar> stopping Zuul to get rid of a faulty registered function "build:Global-Dev Dashboard Data". Job is gone already. [releng]
2015-07-16
16:08 <hashar_> kept nodepool stopped on labnodepool1001.eqiad.wmnet because it spams the cron log [releng]
10:27 <hashar> fixing puppet on deployment-bastion. Stalled since July 7th - https://phabricator.wikimedia.org/T106003 [releng]
10:26 <hashar> deployment-bastion: apt-get upgrade [releng]
02:34 <bd808> cherry-picked https://gerrit.wikimedia.org/r/#/c/224313 for scap testing [releng]
2015-07-15
20:53 <bd808> Added JanZerebecki as deployment-prep root [releng]
17:53 <bd808> cherry-picked https://gerrit.wikimedia.org/r/#/c/224829/ [releng]
16:10 <bd808> sudo rm -rf /tmp/scap_l10n_* on deployment-bastion [releng]
15:33 <bd808> root (/) is full on deployment-bastion, trying to figure out why [releng]
14:39 <bd808> mkdir mira.deployment-prep:/home/l10nupdate because puppet's managehome flag doesn't seem to be doing that :( [releng]
05:00 <bd808> created mira.deployment-prep.eqiad.wmflabs to begin testing multi-master scap [releng]
2015-07-14
00:45 <bd808> /srv/deployment/scap/scap on deployment-mediawiki02 had corrupt git cache info; moved to scap-corrupt and forced a re-sync [releng]
00:41 <bd808> trebuchet deploy of scap to mediawiki02 failed. investigating [releng]
00:41 <bd808> Updated scap to d7db8de (Don't assume current l10n cache files are .cdb) [releng]
2015-07-13
20:44 <thcipriani> there might be some failures; puppetmaster refused to stop as usual, had to kill the pid and restart [releng]
20:39 <thcipriani> restarting puppetmaster on deployment-salt, seeing weird errors on instances [releng]
10:24 <hashar> pushed mediawiki/ruby/api tags for versions 0.4.0 and 0.4.1 [releng]
10:12 <hashar> deployment-prep: killing puppetmaster [releng]
10:06 <hashar> integration: kicking puppet master. It is stalled somehow [releng]
2015-07-11
04:35 <bd808> Updated /var/lib/git/labs/private to latest upstream [releng]
04:18 <bd808> Logstash cluster upgrade complete! Kibana working again [releng]
04:17 <bd808> Upgraded Elasticsearch to 1.6.0 on logstash1006; replicas recovering now [releng]
03:54 <bd808> cherry-picked https://gerrit.wikimedia.org/r/#/c/224219/ [releng]
03:54 <bd808> fixed rebase conflict with "Enable firejail containment for zotero" by removing stale cherry-pick [releng]
2015-07-10
16:12 <hashar> nodepool puppetization going on :-D [releng]
03:01 <legoktm> deploying https://gerrit.wikimedia.org/r/223992 [releng]
2015-07-09
22:16 <hashar> integration: pulled labs/private.git : dbef45d..d41010d [releng]
2015-07-08
23:17 <bd808> Kibana functional again. Imported some dashboards from prod instance. [releng]
22:48 <marxarelli> cherry-picked https://gerrit.wikimedia.org/r/#/c/223691/ on integration-puppetmaster [releng]
22:33 <bd808> about half of the indices on deployment-logstash2 were lost. I assume it was caused by shard rebalancing to logstash1 that I didn't notice before I shut it down and deleted it :( [releng]
22:32 <bd808> Upgraded elasticsearch on logstash2 to 1.6.0 [releng]
22:00 <bd808> Kibana messed up. Half of the logstash elasticsearch indices are gone from deployment-logstash2 [releng]