2017-06-21
14:21 <hashar> deployment-prep: force running puppet on all instances [releng]
14:17 <hashar> finally fixed puppet on deployment-prep! [releng]
14:02 <hashar> deployment-puppmaster (cd /etc/puppet && ln -s /var/lib/git/operations/puppet/manifests && ln -s /var/lib/git/operations/puppet/modules) [releng]
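The entry above points the puppet master's /etc/puppet at the operations/puppet git checkout via symlinks. A self-contained sketch of the same layout, with the real paths replaced by a scratch temp directory for illustration:

```shell
# Recreate the symlink layout from the log entry in a scratch directory.
# Real paths (/etc/puppet, /var/lib/git/operations/puppet) are replaced
# with a temp root for illustration.
root=$(mktemp -d)
mkdir -p "$root/var/lib/git/operations/puppet/manifests" \
         "$root/var/lib/git/operations/puppet/modules" \
         "$root/etc/puppet"
cd "$root/etc/puppet"
ln -s "$root/var/lib/git/operations/puppet/manifests"
ln -s "$root/var/lib/git/operations/puppet/modules"
ls -l   # manifests and modules are now symlinks into the git checkout
```

With this layout, the puppet master serves manifests and modules straight from the git working copy, so a `git fetch` plus rebase (as on L54's deployment-puppetmaster02) updates the served catalog without copying files.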
13:26 <hashar> deployment-prep: puppet master got erroneously upgraded to puppet* 4.8. Rolled it back to 3.8, which failed, then back to 3.7! [releng]
12:47 <hashar> broke deployment-prep puppet master while upgrading it :( [releng]
12:28 <hashar> deployment-imagescaler01 removed puppetmaster and puppetmaster-common packages [releng]
12:04 <hashar> apt-get dist-upgrade on deployment-mediawiki hosts [releng]
11:59 <hashar> armed keyholder on deployment-tin and deployment-mira [releng]
11:15 <hashar> deployment-cache-text04 : apt-get dist-upgrade [releng]
11:12 <hashar> varnish fails on deployment-cache-text04 [releng]
11:08 <hashar> deployment-prep : rebooting deployment-tin deployment-mira deployment-cache-text04 deployment-cache-upload04 [releng]
11:00 <hashar> deployment-prep apt-get upgrade and reboot all hosts [releng]
10:21 <hashar> deployment-zotero01 apt-get upgrade and rebooted [releng]
09:59 <hashar> integration: removing swift / python-swift from integration-puppetmaster01 [releng]
09:57 <hashar> Upgrading puppet 3.7.2 .. 3.8.5 on integration-slave-docker-1001 and integration-slave-docker-1002 [releng]
09:39 <hashar> integration: deleting unused instances swift and swift-storage-01 [releng]
09:38 <hashar> Upgrading/rebooting all instances in the integration project to catch up with Linux kernel upgrades [releng]
2017-06-20
19:25 <hashar> Nodepool rate being bumped from 1 query per 6 seconds to 1 query per 5 seconds ( https://gerrit.wikimedia.org/r/#/c/358601/ ) [releng]
01:25 <thcipriani|afk> deployment-tin stuck on post-merge queue for the past 13 hours, unstuck now [releng]
2017-06-19
22:08 <thcipriani|afk> reloading zuul to deploy https://gerrit.wikimedia.org/r/#/c/360091/ [releng]
08:29 <hashar> Gerrit: added Ladsgroup to 'mediawiki' group - T165860 [releng]
2017-06-18
19:26 <Reedy> Re-enabled beta-update-databases-eqiad as wikidatawiki takes < 10 minutes T168036 T167981 [releng]
19:25 <Reedy> A lot of items on beta wikidatawiki deleted T168036 T167981 [releng]
2017-06-16
23:41 <Reedy_> also deleting a lot of Property:P* pages on beta wikidatawiki T168106 [releng]
22:55 <Reedy> deleting Q100000-Q200000 on beta wikidatawiki T168106 [releng]
19:04 <Reedy> disabled beta-update-databases-eqiad because it's not doing much useful atm [releng]
14:56 <zeljkof> Reloading Zuul to deploy 18a50a707eac0bcdd88f48f2321af78ee399a4eb [releng]
14:40 <hashar> integration-slave-jessie-1001 apt-get upgrade to downgrade python-pbr to 0.8.2 as pinned since T153877. /usr/bin/unattended-upgrade magically upgraded it for some reason [releng]
06:49 <Reedy> script up to `Processed up to page 336425 (Q235372)`... hopefully it will be finished by morning [releng]
03:13 <Reedy> running `mwscript extensions/Wikibase/repo/maintenance/rebuildTermSqlIndex.php --wiki=wikidatawiki` in screen as root on deployment-tin for T168036 [releng]
03:10 <Reedy> running `mwscript extensions/Wikibase/repo/maintenance/rebuildEntityPerPage.php --wiki=wikidatawiki` in screen as root on deployment-tin for T168036 [releng]
02:23 <Reedy> cherry-picked https://gerrit.wikimedia.org/r/#/c/354932/ onto beta puppetmaster [releng]
2017-06-15
16:34 <RainbowSprinkles> deployment-prep: Disabled database updates for a while, running it by hand [releng]
10:39 <hashar> apt-get upgrade on deployment-tin [releng]
00:52 <thcipriani> deployment-tin jenkins agent borked for 4 hours, should be fixed now [releng]
2017-06-14
12:24 <hashar> gerrit: marked mediawiki/skins/Donate as read-only ( https://gerrit.wikimedia.org/r/#/admin/projects/mediawiki/skins/Donate ) - T124519 [releng]
2017-06-13
22:05 <hashar> Zuul restarted manually from a terminal on contint1001. It does not have any statsd configuration, so we will miss metrics for a bit until it is restarted properly. [releng]
21:13 <hashar> Gracefully restarting Zuul [releng]
20:37 <hashar> Restarting Nodepool. It apparently got confused in its pool tracking and spawned too many Trusty nodes (7 instead of 4) [releng]
20:31 <hashar> Nodepool: deleted a bunch of Trusty instances. It had scheduled a lot of them, and they were taking up slots in the pool. Better to spawn jessie nodes instead, since there is high demand for them [releng]
20:19 <hashar> deployment-prep: added Polishdeveloper to the "importer" global group. https://deployment.wikimedia.beta.wmflabs.org/wiki/Special:GlobalUserRights/Polishdeveloper - T167823 [releng]
18:47 <andrewbogott> root@deployment-salt02:~# salt "*" cmd.run "apt-get -y install facter" [releng]
18:46 <andrewbogott> using salt to "apt-get -y install facter" on all deployment-prep instances [releng]
18:38 <andrewbogott> restarting apache2 on deployment-puppetmaster02 [releng]
18:37 <andrewbogott> doing a git fetch and rebase for deployment-puppetmaster02 [releng]
17:00 <elukey> hacking apache on mediawiki05 to test rewrite rules [releng]
16:04 <Amir1> cherry-picked 357985/4 on puppetmaster [releng]
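"Cherry-picked 357985/4" above means patchset 4 of Gerrit change 357985 was applied onto the puppetmaster's local branch. The usual Gerrit flow is to fetch the change ref and cherry-pick it; a self-contained local demo of the same mechanics, with a throwaway repo standing in for the puppet checkout (repo contents and commit messages are illustrative):

```shell
# Gerrit cherry-pick pattern, roughly (repo name is an assumption):
#   git fetch https://gerrit.wikimedia.org/r/operations/puppet \
#       refs/changes/85/357985/4 && git cherry-pick FETCH_HEAD
# Below, the same cherry-pick mechanics in a throwaway local repo.
repo=$(mktemp -d) && cd "$repo"
git init -q
git checkout -qb main
git config user.email sal@example.org
git config user.name "SAL demo"
echo base > site.pp; git add site.pp; git commit -qm 'base'
git checkout -qb proposed-change       # stands in for the Gerrit change
echo fix >> site.pp; git commit -qam 'the fix'
fix=$(git rev-parse HEAD)
git checkout -q main
git cherry-pick "$fix"                 # apply the fix commit onto main
grep fix site.pp                       # change is now present on main
```

This is the same pattern as the entry on 2017-06-18 ("cherry-picked https://gerrit.wikimedia.org/r/#/c/354932/ onto beta puppetmaster"): the change lives only as a local commit on top of the branch until it merges upstream.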
15:59 <halfak> deployed ores-prod-deploy:862aea9 [releng]
13:47 <hashar> nodepool force running puppet for: lower min-ready for trusty [puppet] - https://gerrit.wikimedia.org/r/356466 [releng]
10:53 <elukey> rolling restart of all kafka brokers to pick up the new zookeeper change (only deployment-zookeeper02 available) [releng]