2016-07-20
13:55 <hashar> beta: switching job beta-scap-eqiad to use 'scap sync' per https://gerrit.wikimedia.org/r/#/c/287951/ (poke thcipriani ) [releng]
12:47 <hashar> integration: enabled unattended upgrade on all instances by adding contint::packages::apt to https://wikitech.wikimedia.org/wiki/Hiera:Integration [releng]
10:28 <hashar> beta: dropped salt-key on deployment-salt02 for three instances: deployment-upload.deployment-prep.eqiad.wmflabs, deployment-logstash3.deployment-prep.eqiad.wmflabs and deployment-ores-web.deployment-prep.eqiad.wmflabs [releng]
10:26 <hashar> beta: rebased puppetmaster git repo. "Parsoid: Move to service::node" has weird conflict https://gerrit.wikimedia.org/r/#/c/298436/ [releng]
10:15 <hashar> beta: removing puppet cherry-pick of https://gerrit.wikimedia.org/r/#/c/258979/ "mediawiki: add conftool-specific credentials and scripts"; it was abandoned/superseded and caused a conflict [releng]
08:17 <hashar> deployment-fluorine : deleting a puppet lock file /var/lib/puppet/state/agent_catalog_run.lock (created at 2016-07-18 19:58:46 UTC) [releng]
01:53 <legoktm> deploying https://gerrit.wikimedia.org/r/299930 [releng]
2016-07-18
20:56 <thcipriani> Deleted deployment-fluorine:/srv/mw-log/archive/*-201605* freed 30 GB [releng]
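The cleanup above can be sketched locally. This is a hedged illustration only: /tmp/mw-log-demo stands in for deployment-fluorine:/srv/mw-log, and the file names are made up; the real archives are month-stamped MediaWiki log files matching *-201605*.

```shell
# Stand-in for /srv/mw-log/archive on deployment-fluorine (hypothetical path).
ARCHIVE=/tmp/mw-log-demo/archive
rm -rf "$ARCHIVE" && mkdir -p "$ARCHIVE"
# Fake month-stamped archives, 10 KB each (real ones were ~30 GB in total).
dd if=/dev/zero of="$ARCHIVE/exception.log-20160501.gz" bs=1024 count=10 2>/dev/null
dd if=/dev/zero of="$ARCHIVE/error.log-20160502.gz" bs=1024 count=10 2>/dev/null
du -sh "$ARCHIVE"           # space used before the cleanup
rm -f "$ARCHIVE"/*-201605*  # the same glob pattern as in the log entry
ls "$ARCHIVE" | wc -l       # archive is now empty
```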
15:00 <hashar> Upgraded Zuul on the Precise slaves to zuul_2.1.0-151-g30a433b-wmf4precise1 [releng]
12:10 <hashar> (restarted qa-morebots) [releng]
12:10 <hashar> Enabling puppet again on integration-slave-precise-1002, removing Zuul-server config and adding the slave back to the Jenkins pool [releng]
2016-07-16
23:19 <paladox> testing morebots [releng]
2016-07-15
08:34 <hashar> Unpooling integration-slave-precise-1002; will temporarily use it as a zuul-server test instance [releng]
2016-07-14
18:54 <ebernhardson> deployment-prep: manually edited elasticsearch.yml on deployment-elastic05 and restarted it to get it listening on eth0. Still looking into why puppet wrote out the wrong config file [releng]
09:05 <Amir1> rebooting deployment-ores-redis [releng]
08:29 <Amir1> deploying 0e9555f to ores-beta (sca03) [releng]
2016-07-13
16:05 <urandom> Installing Cassandra 2.2.6-wmf1 on deployment-restbase0[1-2].deployment-prep.eqiad.wmflabs : T126629 [releng]
13:58 <hashar> T137525 reverted Zuul back to zuul_2.1.0-95-g66c8e52-wmf1precise1_amd64.deb . It could not connect to Gerrit reliably [releng]
13:46 <hashar> T137525 Stopped zuul that ran in a terminal (with -d). Started it with the init script. [releng]
12:36 <hashar> T137525 Upgrading Zuul from 2.1.0-95-g66c8e52-wmf1precise1 to zuul_2.1.0-151-g30a433b-wmf1precise1_amd64.deb [releng]
11:37 <hashar> apt-get upgrade on deployment-mediawiki02 [releng]
08:33 <hashar> removing deployment-parsoid05 from the Jenkins slaves T140218 [releng]
2016-07-12
20:29 <hashar> integration: force running unattended upgrade on all instances: salt --batch 4 -v '*' cmd.run 'unattended-upgrade' . That upgrades diamond and hhvm among others. imagemagick-common has a prompt though [releng]
20:22 <hashar> CI force running puppet on all instances: salt --batch 5 -v '*' puppet.run [releng]
20:04 <hashar> Maybe fix unattended upgrade on the CI slaves via https://gerrit.wikimedia.org/r/298568 [releng]
16:43 <Amir1> deploying f472f65 to ores-beta [releng]
10:11 <hashar> Created GitHub repos operations-debs-contenttranslation-apertium-mk-en and operations-docker-images-toollabs-images for Gerrit replication [releng]
2016-07-11
14:24 <hashar> Removing ZeroMQ config from the Jenkins jobs. It is now enabled globally. T139923 [releng]
10:15 <hashar> T136188: on Trusty slaves, upgrading Chromium from v49 to v51: salt -v '*slave-trusty-*' cmd.run 'apt-get -y install chromium-browser chromium-chromedriver chromium-codecs-ffmpeg-extra' [releng]
10:13 <hashar> T136188: salt -v '*slave-trusty*' cmd.run 'rm /etc/apt/preferences.d/chromium-*' [releng]
10:09 <hashar> Unpinning Chromium v49 from the Trusty slaves and upgrading to v51 for T136188 [releng]
09:34 <zeljkof> Enabled ZMQ Event Publisher on all Jobs in Jenkins [releng]
2016-07-09
18:57 <legoktm> deploying https://gerrit.wikimedia.org/r/297731 and https://gerrit.wikimedia.org/r/298142 [releng]
14:07 <bd808> Testing logstash change https://gerrit.wikimedia.org/r/#/c/298115/ via cherry-pick [releng]
2016-07-08
16:08 <hashar> scandium: git -C /srv/ssd/zuul/git/mediawiki/services/graphoid remote set-head origin --auto [releng]
16:06 <hashar> scandium: git -C /srv/ssd/zuul/git/mediawiki/services/graphoid init && git -C /srv/ssd/zuul/git/mediawiki/services/graphoid remote add origin ssh://jenkins-bot@ytterbium.wikimedia.org:29418/mediawiki/services/graphoid [releng]
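The two entries above seed a fresh git repository cache for zuul and point "origin" at the canonical remote. A hedged local sketch of the same sequence, with a placeholder path and URL (the real remote was ssh://jenkins-bot@ytterbium.wikimedia.org:29418/...):

```shell
# Placeholder for /srv/ssd/zuul/git/mediawiki/services/graphoid on scandium.
REPO=/tmp/zuul-git-demo/mediawiki/services/graphoid
rm -rf "$REPO" && mkdir -p "$REPO"
git -C "$REPO" init -q                                  # empty repo, as in the log
git -C "$REPO" remote add origin \
    https://example.org/mediawiki/services/graphoid.git  # placeholder URL
git -C "$REPO" remote -v                                # shows the configured origin
```

The follow-up `git remote set-head origin --auto` from the 16:08 entry asks the remote for its default branch, so it only works once the remote is reachable and fetched.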
14:59 <hashar> nodepool: rebuilt Trusty image from scratch: "Image ci-trusty-wikimedia-1467989709 in wmflabs-eqiad is ready" [releng]
12:35 <hashar> beta: find /data/project/upload7/*/*/thumb -type f -atime +30 -delete [releng]
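The entry above prunes thumbnails not accessed for 30+ days. A hedged local sketch of the same `find -atime +30 -delete` pattern, using /tmp/thumbdemo as a stand-in for /data/project/upload7/*/*/thumb:

```shell
# Stand-in directory for the beta cluster thumbnail tree (hypothetical path).
rm -rf /tmp/thumbdemo && mkdir -p /tmp/thumbdemo/thumb
touch /tmp/thumbdemo/thumb/fresh.jpg                      # accessed just now
touch -a -d "40 days ago" /tmp/thumbdemo/thumb/stale.jpg  # old access time (GNU touch)
# -atime +30: last accessed more than 30 days ago; -delete removes matches.
find /tmp/thumbdemo/*/ -type f -atime +30 -delete
ls /tmp/thumbdemo/thumb                                   # only fresh.jpg remains
```

Note that `-atime` only works as intended when the filesystem records access times (i.e. not mounted with `noatime`).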
10:31 <hashar> beta: mass delete http://commons.wikimedia.beta.wmflabs.org/wiki/Category:GWToolset_Batch_Upload files T64835 [releng]
10:26 <hashar> beta: mass delete http://commons.wikimedia.beta.wmflabs.org/wiki/Category:GWToolset_Batch_Upload files [releng]
2016-07-07
21:41 <MaxSem> Chowned php-master/vendor back to jenkins-deploy [releng]
13:10 <hashar> deleting integration-slave-trusty-1024 and integration-slave-trusty-1025 to free up some RAM. We have enough permanent Trusty slaves. T139535 [releng]
02:43 <MaxSem> started redis-server on deployment-stream [releng]
01:14 <bd808> Restarted logstash on deployment-logstash2 [releng]
01:12 <MaxSem> Leaving my hacks in place for the night to collect data; if needed, revert with cd /srv/mediawiki-staging/php-master/vendor && sudo git reset --hard HEAD && sudo chown -hR jenkins-deploy:wikidev . [releng]
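The revert recipe in the entry above relies on `git reset --hard HEAD` discarding all uncommitted edits in the working tree. A hedged local sketch, with /tmp/vendor-demo standing in for /srv/mediawiki-staging/php-master/vendor and a made-up file name:

```shell
D=/tmp/vendor-demo   # stand-in for the real vendor/ checkout
rm -rf "$D" && mkdir -p "$D"
git -C "$D" init -q
echo original > "$D/autoload.php"   # hypothetical tracked file
git -C "$D" add autoload.php
git -C "$D" -c user.email=x@example.org -c user.name=x commit -qm init
echo hacked > "$D/autoload.php"     # the local hack to be reverted
git -C "$D" reset --hard HEAD       # working tree back to last commit
cat "$D/autoload.php"               # prints "original"
```

The `sudo chown -hR jenkins-deploy:wikidev .` step in the log restores ownership afterwards, since the hacks were applied as root.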
00:50 <bd808> Rebooting deployment-logstash3.eqiad.wmflabs; console full of hung process messages from kernel [releng]
00:27 <MaxSem> Initialized ORES on all wikis where it's enabled; it was causing job failures [releng]
00:13 <MaxSem> Debugging a fatal in betalabs, might cause syncs to fail [releng]