How to Install a New Disk in a Server

To install a new disk in a server used for Ceph and make it part of the Ceph cluster managed by Proxmox, follow these steps:

Pick a slot number where you want to put the disk

  • Use an empty slot, or the slot of a broken disk you are replacing.
  • Check the drive bay number (printed on the front of the server) and make sure it matches the slot number shown in iDRAC. An OS-side cross-check is sketched below.
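
As a cross-check from the operating system side, you can list the disks that are already visible before inserting the new one. This is a minimal sketch; which by-path name maps to which physical bay depends on the controller and backplane, so always verify against iDRAC.

```
# List the disks the OS currently sees, with model and serial number,
# so occupied bays can be cross-checked against the iDRAC inventory.
lsblk -d -o NAME,SIZE,MODEL,SERIAL,TYPE

# The by-path links usually encode the controller/slot; the slot-to-bay
# mapping depends on the backplane, so confirm it in iDRAC.
ls -l /dev/disk/by-path/
```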

Physically install the disk

  • Assemble the caddy and insert the disk.
  • For hot-swappable drives (NVMe or otherwise), there is no need to power down the server; you can insert the disk while it is running.
  • For non-hot-swappable drives, check the server manual to determine if powering off the server is necessary.
  • Follow the server’s manual and insert the disk into the correct drive bay, matching the slot number in iDRAC. A quick check that the OS has detected the disk is sketched below.
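
After inserting the disk, it is worth confirming that the operating system has detected it before creating the OSD. A minimal sketch follows; the device names (sdX, nvmeXnY) will differ per server.

```
# Check the kernel log for the newly detected drive.
dmesg | tail -n 30

# Confirm the new device is listed and note its name for the OSD step.
lsblk -d -o NAME,SIZE,MODEL,SERIAL
```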

Create OSD for the new disk

  • Depending on the server configuration, the new disk may automatically appear in the Disk Overview in Proxmox, labelled Unused
  • Create the OSD with encryption enabled and the correct device class (HDD, SSD, or NVMe)
  • If the OSD is not created automatically, you can create an encrypted OSD manually with the ```pveceph osd create /dev/<dev> --crush-device-class <dev-class> --encrypted 1``` command, or create the OSD in the Proxmox web GUI
  • Verify with ```ceph osd tree``` that the new OSD has been added to your Ceph cluster (a concrete example follows this list)
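
As a concrete example of the manual command above, here is a minimal sketch assuming the new disk enumerated as /dev/sde and is an SSD; both the device name and the device class are placeholders to replace with your own values. The optional wipefs line is an extra precaution for disks with leftover signatures and is not part of the original steps.

```
# Optional: clear any old partition or filesystem signatures.
# Destructive -- only run this against the new, empty disk.
wipefs --all /dev/sde

# Create an encrypted OSD on the new disk with the matching device class.
pveceph osd create /dev/sde --crush-device-class ssd --encrypted 1

# Confirm the new OSD appears in the CRUSH tree under this host.
ceph osd tree
```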

Verify the OSD is in the up/in state in Ceph

  • Check the health of the Ceph cluster by running ```ceph -s```
  • Verify the new OSD is up and running with ```ceph osd status```, and check for any errors with ```ceph health detail``` (the commands are collected in the sketch below)
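
The verification commands above, collected in one place as a minimal sketch:

```
# Overall cluster state: health, number of OSDs up/in, PG activity.
ceph -s

# Per-OSD view; the new OSD should be listed and not marked down or out.
ceph osd status

# If anything looks off, get the detailed health report.
ceph health detail

# Optionally follow the cluster while data rebalances onto the new OSD.
watch -n 5 ceph -s
```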

Make sure Ceph is healthy and is using the new OSD

  • Ceph should report a healthy (green) state, without any remapped PGs or degraded data redundancy; a scriptable check is sketched below
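
For a scriptable check, the overall health status can be read from the JSON output; a minimal sketch assuming jq is available (it is not part of a default Proxmox installation).

```
# Print just the overall health string (HEALTH_OK / HEALTH_WARN / HEALTH_ERR).
ceph -s --format json | jq -r '.health.status'

# The same information without jq.
ceph health
```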

Resource: https://pve.proxmox.com/pve-docs/pveceph.1.html