2020-11-25

19:35 <bstorm> repairing ceph pg `instructing pg 6.91 on osd.117 to repair` [admin]
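
The quoted message is what Ceph prints back when a manual repair is queued; the command behind it was presumably along these lines:

    ceph pg repair 6.91    # ask the primary (osd.117 here) to repair inconsistent copies of pg 6.91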
            
09:31 <_dcaro> The OSD seems to be up and running actually, though there's that misleading log message; will leave it and see if the cluster becomes fully healthy (T268722) [admin]
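
To confirm the cluster is converging back to HEALTH_OK, the usual checks are:

    ceph -s              # overall status, including recovery/backfill progress
    ceph health detail   # per-issue detail while the cluster is still degraded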
            
08:54 <_dcaro> Unsetting noup/nodown to allow re-shuffling of the pgs that osd.44 had, will try to rebuild it (T268722) [admin]
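
noup/nodown are cluster-wide flags; clearing them lets osd.44 be marked down and its placement groups get remapped to the remaining OSDs. Presumably:

    ceph osd unset noup
    ceph osd unset nodown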
            
08:45 <_dcaro> Tried resetting the class for osd.44 to ssd, no luck, the cluster is in noout/norebalance to avoid data shuffling (opened T268722) [admin]
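
noout and norebalance are cluster-wide flags that stop Ceph from marking OSDs out and from shuffling data while a node is being worked on; they are toggled with:

    ceph osd set noout
    ceph osd set norebalance
    # cleared later with: ceph osd unset noout / ceph osd unset norebalance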
            
08:45 <_dcaro> Tried resetting the class for osd.44 to ssd, no luck, the cluster is in noout/norebalance to avoid data shuffling (opened root@cloudcephosd1005:/var/lib/ceph/osd/ceph-44# ceph osd crush set-device-class ssd osd.44) [admin]
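
The pasted prompt shows the command that was tried. set-device-class refuses to overwrite an existing class binding, so the usual sequence (a sketch, not taken from the log) is:

    ceph osd crush rm-device-class osd.44        # drop the old class binding first
    ceph osd crush set-device-class ssd osd.44   # then bind osd.44 to the ssd class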
            
08:19 <_dcaro> Restarting service osd.44 resulted in osd.44 being unable to start due to some config inconsistency (can not reset class to hdd) [admin]
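
Each OSD normally runs as its own systemd unit (ceph-osd@<id>), so the restart was presumably something like:

    # on cloudcephosd1005
    systemctl restart ceph-osd@44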
            
08:16 <_dcaro> After enabling auto pg scaling on ceph eqiad cluster, osd.44 (cloudcephosd1005) got stuck, trying to restart the osd service [admin]
            
08:16 <_dcaro> After enabling auto pg scaling on ceph eqiad cluster, osd.44 (cloudcephosd1005) got stuck, trying to restart [admin]
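
The auto PG scaling mentioned above is Ceph's pg_autoscaler; the log does not say how it was enabled here, but it typically looks like the following (pool name is illustrative):

    ceph mgr module enable pg_autoscaler             # enable the manager module
    ceph osd pool set <pool> pg_autoscale_mode on    # opt each pool in to automatic pg_num changes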