2019-10-23 §
09:13 <arturo> 9 tools-sgeexec nodes and 6 other related VMs are down because hypervisor is rebooting [tools]
09:03 <arturo> tools-sgebastion-08 is down because hypervisor is rebooting [tools]
2019-10-22 §
16:56 <bstorm_> drained tools-worker-1025.tools.eqiad.wmflabs which was malfunctioning [tools]
09:25 <arturo> created the `tools.eqiad1.wikimedia.cloud.` DNS zone [tools]
2019-10-21 §
17:32 <phamhi> Rebuilding all jessie and stretch docker images to pick up toollabs-webservice 0.46 [tools]
2019-10-18 §
22:15 <bd808> Rescheduled continuous jobs away from tools-sgeexec-0904 because of high system load [tools]
22:09 <bd808> Cleared error state of webgrid-generic@tools-sgewebgrid-generic-0901, webgrid-lighttpd@tools-sgewebgrid-lighttpd-09{12,15,19,20,26} [tools]
21:29 <bd808> Rescheduled all grid engine webservice jobs (T217815) [tools]
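The error-state clears and job reschedules above were presumably done with gridengine's `qmod`. A minimal sketch, with queue instance names taken from the log entries; exact flags may differ by gridengine version, and these commands only make sense on a grid master host:

```
# Clear the error state of specific queue instances (queue@host)
qmod -c 'webgrid-generic@tools-sgewebgrid-generic-0901'
qmod -c 'webgrid-lighttpd@tools-sgewebgrid-lighttpd-0912'

# Reschedule all jobs currently running on a given host
qmod -r '*@tools-sgeexec-0904'
```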
2019-10-16 §
16:21 <phamhi> Deployed toollabs-webservice 0.46 to buster-tools and stretch-tools (T218461) [tools]
09:29 <arturo> toolforge is recovered from the reboot of cloudvirt1029 [tools]
09:17 <arturo> due to the reboot of cloudvirt1029, several sgeexec nodes (8) are offline, also sgewebgrid-lighttpd (8) and tools-worker (3) and the main toolforge proxy (tools-proxy-03) [tools]
2019-10-15 §
17:10 <phamhi> restart tools-worker-1035 because it is no longer responding [tools]
2019-10-14 §
09:26 <arturo> cleaned-up updatetools from tools-sge-services nodes (T229261) [tools]
2019-10-11 §
19:52 <bstorm_> restarted docker on tools-docker-builder after phamhi noticed the daemon had a routing issue (blank iptables) [tools]
11:55 <arturo> create tools-test-proxy-01 VM for testing T235059 and a puppet prefix for it [tools]
10:53 <arturo> added kubernetes-node_1.4.6-7_amd64.deb to buster-tools and buster-toolsbeta (aptly) for T235059 [tools]
10:51 <arturo> added docker-engine_1.12.6-0~debian-jessie_amd64.deb to buster-tools and buster-toolsbeta (aptly) for T235059 [tools]
10:46 <arturo> added logster_0.0.10-2~jessie1_all.deb to buster-tools and buster-toolsbeta (aptly) for T235059 [tools]
2019-10-10 §
02:33 <bd808> Rebooting tools-sgewebgrid-lighttpd-0903. Instance hung. [tools]
2019-10-09 §
22:52 <jeh> removing test instances tools-sssd-sgeexec-test-[12] from SGE [tools]
15:32 <phamhi> drained tools-worker-1020/23/33/35/36/40 to rebalance the cluster [tools]
14:46 <phamhi> drained and cordoned tools-worker-1029 after status reset on reboot [tools]
12:37 <arturo> drain tools-worker-1038 to rebalance load in the k8s cluster [tools]
12:35 <arturo> uncordon tools-worker-1029 (was disabled for unknown reasons) [tools]
12:33 <arturo> drain tools-worker-1010 to rebalance load [tools]
10:33 <arturo> several sgewebgrid-lighttpd nodes (9) not available because cloudvirt1013 is rebooting [tools]
10:21 <arturo> several worker nodes (7) not available because cloudvirt1012 is rebooting [tools]
10:08 <arturo> several worker nodes (6) not available because cloudvirt1009 is rebooting [tools]
09:59 <arturo> several worker nodes (5) not available because cloudvirt1008 is rebooting [tools]
2019-10-08 §
19:39 <bstorm_> drained tools-worker-1007/8 to rebalance the cluster [tools]
19:34 <bstorm_> drained tools-worker-1009 and then 1014 for rebalancing [tools]
19:27 <bstorm_> drained tools-worker-1005 for rebalancing (and put these back in service as I went) [tools]
19:24 <bstorm_> drained tools-worker-1003 and 1009 for rebalancing [tools]
15:41 <arturo> deleted VM instance tools-sgebastion-0test. No longer in use. [tools]
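The drain/rebalance cycle recorded above maps onto standard Kubernetes node maintenance. A hedged sketch (node names from the log; flags current for the k8s versions of that era, where `--delete-local-data` was later renamed `--delete-emptydir-data`):

```
for node in tools-worker-1003 tools-worker-1005 tools-worker-1009; do
  # Evict pods and mark the node unschedulable
  kubectl drain "$node" --ignore-daemonsets --delete-local-data
  # ...wait for pods to land elsewhere, then return the node to service
  kubectl uncordon "$node"
done
```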
2019-10-07 §
20:17 <bd808> Dropped backlog of messages for delivery to tools.usrd-tools [tools]
20:16 <bd808> Dropped backlog of messages for delivery to tools.mix-n-match [tools]
20:13 <bd808> Dropped backlog of frozen messages for delivery (240 dropped) [tools]
19:25 <bstorm_> deleted tools-puppetmaster-02 [tools]
19:20 <Krenair> reboot tools-k8s-master-01 due to nfs stale issue [tools]
19:18 <Krenair> reboot tools-paws-worker-1006 due to nfs stale issue [tools]
19:16 <phamhi> reboot tools-worker-1040 due to nfs stale issue [tools]
19:16 <phamhi> reboot tools-worker-1039 due to nfs stale issue [tools]
19:16 <phamhi> reboot tools-worker-1038 due to nfs stale issue [tools]
19:16 <phamhi> reboot tools-worker-1037 due to nfs stale issue [tools]
19:16 <phamhi> reboot tools-worker-1036 due to nfs stale issue [tools]
19:16 <phamhi> reboot tools-worker-1035 due to nfs stale issue [tools]
19:15 <phamhi> reboot tools-worker-1034 due to nfs stale issue [tools]
19:15 <phamhi> reboot tools-worker-1033 due to nfs stale issue [tools]
19:15 <phamhi> reboot tools-worker-1032 due to nfs stale issue [tools]
19:15 <phamhi> reboot tools-worker-1031 due to nfs stale issue [tools]
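A stale NFS handle typically makes `stat()` on the mount point hang, which is why the workers above needed reboots. One common way to detect the condition before it cascades is to bound the `stat` call with a timeout; this is a sketch, not the actual check used on these hosts, and the mount path would be the NFS mount (e.g. `/mnt/nfs`) rather than `/tmp`:

```shell
# Probe a mount point; a healthy filesystem answers stat() quickly,
# a stale NFS mount blocks until the timeout fires.
check_mount() {
  if timeout 5 stat -t -- "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "stale: $1"
  fi
}

# Example probe against a local path (stand-in for the NFS mount)
check_mount /tmp
```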