WS Proxmox node reboot

* If a VM or container is going to incur downtime, you must let the affected parties know in advance. Ideally they should be informed the previous day.


== Pre-flight checks ==
* Check that all Ceph pools are running with at least 3/2 replication (size 3, min_size 2)
* Check that all running VMs on the node you want to reboot are managed by HA (if not, add them or migrate them away manually)
** '''The `compute.*` VMs are not to be migrated! Rebooting a node with such a VM present requires shutting down the VM!'''
* Check that Ceph is healthy: no remapped PGs and no degraded data redundancy
* Check that you have communicated the expected downtime to the users who will be affected (ideally one day in advance)
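The replication check can be scripted against the output of `ceph osd pool ls detail` (a minimal sketch; the `check_replication` helper is hypothetical, and for the HA check you would additionally review `ha-manager status` on the node):

```shell
# check_replication: read "ceph osd pool ls detail" style lines on
# stdin and flag any replicated pool whose size/min_size is below 3/2.
check_replication() {
  awk '
    /replicated/ {
      size = 0; min = 0
      for (i = 1; i < NF; i++) {
        if ($i == "size")     size = $(i + 1)
        if ($i == "min_size") min  = $(i + 1)
      }
      if (size < 3 || min < 2) print "pool below 3/2: " $0
    }'
}

# On a live node:
#   ceph osd pool ls detail | check_replication   # no output = all pools OK
```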


== Reboot process ==
* Complete the pre-flight checks
* If you are rebooting for a kernel update, make sure the kernel is actually updated by following the update process written above
* Start maintenance mode for the Proxmox node and any containers running on the node
* Start maintenance mode for Ceph, specifying that we only want to suppress the trigger for the health state being in warning, by setting the tag `ceph_health` equals `warning`
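On the node itself, the kernel-update part of the sequence might look like this (a sketch; the destructive commands are commented out, and the maintenance-mode steps happen in the monitoring system, so they are not shown):

```shell
# Before rebooting for a kernel update: check the running kernel and
# the installed Proxmox kernel packages (Proxmox is Debian-based).
uname -r
dpkg -l 'proxmox-kernel-*' 'pve-kernel-*' 2>/dev/null | grep '^ii' || true

# Reboot once maintenance mode is active and HA has moved the VMs away:
# reboot

# After the node is back, verify quorum and Ceph health before ending
# maintenance mode:
# pvecm status
# ceph -s
```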