2021-04-23 §
09:32 <dcaro> finished upgrade of ceph cluster on codfw1 using exclusively cookbooks (T280641) [admin]
09:17 <dcaro> testing the upgrade_osds cookbook on codfw1 ceph cluster (T280641) [admin]
08:17 <dcaro> testing the upgrade_mons cookbook on codfw1 ceph cluster (T280641) [admin]
2021-04-21 §
17:59 <dcaro> all monitors upgraded on codfw1 with one cookbook `cookbook --verbose -c ~/.config/spicerack/cookbook.yaml wmcs.ceph.upgrade_mons --monitor-node-fqdn cloudcephmon2002-dev.codfw.wmnet` (T280641) [admin]
17:47 <dcaro> upgrading monitors and mgr nodes on codfw ceph cluster (T280641) [admin]
13:26 <dcaro> testing ceph upgrade cookbook on cloudcephmon2002-dev (T280641) [admin]
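(The upgrade_mons/upgrade_osds cookbooks above drive a rolling Ceph daemon upgrade; the checks they perform are not shown in the log. A minimal sketch of verifying the resulting cluster state by hand with standard Ceph CLI — not necessarily what the cookbooks themselves run:)
```
# Confirm every daemon reports the expected release after the rolling upgrade
ceph versions

# Overall health and OSD layout; expect HEALTH_OK with all OSDs up/in
ceph -s
ceph osd tree

# If noout/norebalance were set for the upgrade, make sure they are cleared
ceph osd dump | grep flags
```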
2021-04-20 §
20:21 <andrewbogott> reboot cloudservices1003 [admin]
20:13 <andrewbogott> reboot cloudservices1004 [admin]
2021-04-19 §
08:40 <dcaro> enabling puppet on labstore1004 after mysql restart (T279657) [admin]
08:09 <dcaro> downtiming labstore1004 and stopping puppet for mysql restart (T279657) [admin]
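(The downtime / puppet-off / restart / puppet-on pattern above is a common maintenance sequence. A minimal sketch of the manual equivalent, with the MariaDB unit name as an assumption — the actual steps on labstore1004 may differ:)
```
# Keep puppet from reverting manual changes or restarting services mid-maintenance
sudo puppet agent --disable "mysql restart, T279657"

# Restart the database (unit name is an assumption; could be mariadb or mysql)
sudo systemctl restart mariadb
sudo journalctl -u mariadb --since "-5 min"   # sanity-check the restart

# Re-enable puppet once the service is healthy again
sudo puppet agent --enable
sudo puppet agent --test
```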
2021-04-14 §
10:48 <dcaro> Upgrade of codfw ceph to octopus 15.2.20 done, will run some performance tests now (T274566) [admin]
10:41 <dcaro> Upgrade of codfw ceph to octopus 15.2.20, mgrs upgraded, osds next (T274566) [admin]
10:37 <dcaro> Upgrade of codfw ceph to octopus 15.2.20, mons upgraded, mgrs next (T274566) [admin]
10:15 <dcaro> starting the upgrade of codfw ceph to octopus 15.2.20 (T274566) [admin]
10:07 <dcaro> Merged the ceph 15 (Octopus) repo deployment to codfw, only the repo, not the packages (T274566) [admin]
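(The entries above follow the usual Ceph upgrade order: mons, then mgrs, then OSDs. A minimal per-node sketch of that sequence with plain packages and systemd, assuming the cluster is not cephadm-managed; the automation actually used may differ:)
```
# On each monitor node, one at a time; wait for it to rejoin quorum before the next
apt-get update && apt-get install -y ceph-mon
systemctl restart ceph-mon.target
ceph versions              # the restarted mon should now report 15.2.x

# Then managers:
apt-get install -y ceph-mgr && systemctl restart ceph-mgr.target

# Then OSD nodes, one at a time, waiting for HEALTH_OK between nodes:
ceph osd set noout
apt-get install -y ceph-osd && systemctl restart ceph-osd.target
ceph osd unset noout

# Once every daemon runs Octopus:
ceph osd require-osd-release octopus
```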
2021-04-13 §
16:42 <dcaro> Ceph balancer got the cluster to eval 0.014916, that is 88-77% usage for compute pool, and 28-19% usage for the cinder one \o/ (T274573) [admin]
15:08 <dcaro> Activating continuous upmap balancer, keeping a close eye (T274573) [admin]
15:03 <dcaro> Executing a second pass, there are still movements left to improve the eval of 0.030075 (T274573) [admin]
15:02 <dcaro> First pass finished, improved eval to 0.030075 (T274573) [admin]
14:49 <dcaro> Running the first_pass balancing plan on ceph eqiad, current eval 0.030622 (T274573) [admin]
14:43 <dcaro> enabling ceph upmap pg balancer on eqiad (T274573) [admin]
14:36 <andrewbogott> upgrading codfw1dev to version Victoria, T261137 [admin]
13:11 <andrewbogott> upgrading eqiad1 designate to version Victoria, T261137 [admin]
10:43 <dcaro> enabled ceph upmap balancer on codfw (T274573) [admin]
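(The balancer entries above — eval scores, the first_pass plan, then continuous mode — map onto the standard Ceph balancer workflow. A minimal sketch, with `first_pass` as the plan name taken from the log:)
```
ceph balancer mode upmap            # use pg-upmap-items instead of reweights
ceph balancer eval                  # current distribution score (lower is better)

# One-shot optimization plan, reviewed and executed manually
ceph balancer optimize first_pass
ceph balancer eval first_pass       # score the plan before applying it
ceph balancer execute first_pass

# Continuous mode, as enabled at 15:08 on 2021-04-13
ceph balancer on
ceph balancer status
```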
2021-04-07 §
21:33 <andrewbogott> upgrading codfw1dev designate to Victoria [admin]
2021-04-04 §
17:36 <andrewbogott> upgrading eqiad1 designate to Ussuri [admin]
2021-04-02 §
14:12 <andrewbogott> upgrading codfw1dev to OpenStack version Ussuri [admin]
2021-04-01 §
12:15 <dcaro> Restoring the 4.9 kernel on cloudcephosd2003-dev and upgrading (T274565) [admin]
10:29 <dcaro> Done restoring the 4.9 kernel on cloudcephosd2001-dev and upgrading; it requires logging into the console to boot from the older kernel before removing the newer one (T274565) [admin]
10:10 <dcaro> Restoring the 4.9 kernel on cloudcephosd2001-dev and upgrading (T274565) [admin]
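(Restoring the older 4.9 kernel as described above generally means reinstalling the old image, rebooting into it from the console, and only then purging the newer kernel. A minimal sketch with a hypothetical 4.9 package revision; the exact package names on the cloudcephosd200x-dev hosts are not in the log:)
```
# See which kernel images are installed and which one is running
uname -r
dpkg -l 'linux-image-*' | grep '^ii'

# Reinstall the 4.9 image (exact revision here is a placeholder)
apt-get install -y linux-image-4.9.0-14-amd64
update-grub

# Reboot and pick the 4.9 entry from the GRUB menu on the console,
# then remove the newer kernel so GRUB defaults back to 4.9:
apt-get purge -y 'linux-image-4.19.*'
update-grub
```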
2021-03-31 §
08:47 <dcaro> upgrading cinder on codfw cloudcontrol2* nodes (T278845) [admin]
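(A cinder upgrade on the control nodes typically comes down to upgrading the packages, syncing the database schema, and restarting the API/scheduler services. A minimal sketch, with service/unit names as assumptions; the actual procedure for T278845 may be puppet-driven:)
```
apt-get update && apt-get install -y cinder-api cinder-scheduler

# Apply any pending schema migrations
cinder-manage db sync

systemctl restart cinder-scheduler
systemctl restart apache2        # cinder-api often runs under WSGI; unit name is an assumption
openstack volume service list    # confirm services report as up afterwards
```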
2021-03-30 §
09:53 <arturo> rebooting cloudnet1003 to clean up the conntrack table, it wouldn't clean up by hand ... [admin]
2021-03-28 §
15:42 <andrewbogott> updated debian-10.0-buster base image [admin]
2021-03-27 §
09:54 <arturo> cleaned up conntrack table in qrouter netns in cloudnet1003 (backup) [admin]
2021-03-25 §
19:03 <andrewbogott> deleting all unused (per wmcs-imageusage) Jessie base images from Glance [admin]
17:15 <andrewbogott> refreshing puppet compiler facts for tools project [admin]
10:31 <dcaro> kernel upgrade on osds on codfw done, running performance tests (T274565) [admin]
10:24 <dcaro> upgrading kernel on cloudcephosd2003-dev and reboot (T274565) [admin]
10:18 <dcaro> upgrading kernel on cloudcephosd2002-dev and reboot (T274565) [admin]
10:08 <dcaro> upgrading kernel on cloudcephmon2003-dev and reboot (T274565) [admin]
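(For the Jessie base-image cleanup noted at 19:03 above, usage was checked with the in-house wmcs-imageusage tool; the deletions themselves go through the regular Glance API. A minimal sketch of the generic side, assuming the candidate image names contain "jessie" — the real selection was based on wmcs-imageusage output:)
```
# List candidate images (ID and name only)
openstack image list --format value -c ID -c Name | grep -i jessie

# Delete a specific image once confirmed unused
openstack image delete <image-id>
```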
2021-03-24 §
09:19 <dcaro> restarted wmcs-backup on cloudvirt1024 as it failed due to an image being removed while running (T276892) [admin]
2021-03-23 §
11:33 <arturo> root@cloudcontrol1005:~# wmcs-novastats-dnsleaks --delete [admin]
2021-03-22 §
10:10 <arturo> cleanup conntrack table in standby node: aborrero@cloudnet1003:~ $ sudo ip netns exec qrouter-d93771ba-2711-4f88-804a-8df6fd03978a conntrack -F [admin]
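(The flush at 10:10 above runs inside the neutron qrouter network namespace on the standby cloudnet host. A minimal sketch of finding the namespace and checking the table size before flushing; the router UUID is host-specific:)
```
# List neutron router namespaces on the host
sudo ip netns list | grep qrouter

# Count conntrack entries inside a given namespace, then flush
sudo ip netns exec qrouter-<router-uuid> conntrack -C
sudo ip netns exec qrouter-<router-uuid> conntrack -F
```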
2021-03-19 §
17:17 <bstorm> running `ALTER TABLE account MODIFY COLUMN type ENUM('user','tool','paws');` against the labsdbaccounts database on m5 T276284 [admin]
14:29 <andrewbogott> switching admin-monitoring project to use an upstream debian image; I want to see how this affects performance [admin]
00:30 <bstorm> downtimed labstore1004 to check some things in debug mode [admin]
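(After the ALTER TABLE at 17:17 above, the new 'paws' value can be confirmed directly from the column definition; a minimal sketch in plain MySQL against the labsdbaccounts database:)
```
-- Show the ENUM definition of account.type
SHOW COLUMNS FROM account LIKE 'type';
```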
2021-03-17 §
17:28 <bstorm> restarted the backup-glance-images job to clear errors in systemd T271782 [admin]
17:16 <andrewbogott> set default cinder quota for projects to 80GB with "update quota_classes set hard_limit=80 where resource='gigabytes';" on database 'cinder' [admin]
16:58 <andrewbogott> disabling all flavors with >20GB root storage with "update flavors set disabled=1 where root_gb>20;" in nova_eqiad1_api [admin]
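(The two direct database edits above, on the cinder quota_classes table and the nova flavors table, can be sanity-checked with matching SELECTs; a minimal sketch using only the tables and columns named in the log entries:)
```
-- On the 'cinder' database: default per-project volume quota in GB
SELECT resource, hard_limit FROM quota_classes WHERE resource = 'gigabytes';

-- On nova_eqiad1_api: flavors that were just disabled
SELECT name, root_gb, disabled FROM flavors WHERE root_gb > 20;
```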
2021-03-10 §
16:51 <arturo> rebooting cloudvirt1030 for T275753 [admin]