Add/remove CEPH OSD – Object Storage Device

The blog post Install CEPH cluster – OS Fedora 23 describes how to set up a CEPH storage cluster on Fedora 23. In that configuration I used only one OSD per CEPH node; in real life you will want more OSDs per CEPH node.

OSD stands for Object Storage Device and is one of the main components of a CEPH storage cluster. Recommended reading: CEPH OSD.

Adding a new OSD is not a difficult task, and it can be done via ceph-deploy or by running ceph-disk commands.
The ceph cluster I created in the previous post runs on KVM, so as the OSD device here I am going to use a virtual disk attached to the KVM machine.

Let’s create the disk we are going to use for the OSD:

 
# qemu-img create -f qcow2 cephf23-node1disk1.qcow2 15G 
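
The freshly created image can be inspected before attaching it; qemu-img info only reads the image header, so it is safe to run at any time:

# qemu-img info cephf23-node1disk1.qcow2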

Attach the disk to the KVM domain: either edit the machine’s .xml domain file directly and add a definition for the new disk there, or run

# virsh edit kvm_domain

and make the same change in the editor.
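
Alternatively, the disk can be attached with a single command instead of editing the XML by hand; this is a sketch assuming the qcow2 image sits in /var/lib/libvirt/images and the guest domain is named cephf23-node1:

# virsh attach-disk cephf23-node1 /var/lib/libvirt/images/cephf23-node1disk1.qcow2 vdc --subdriver qcow2 --persistent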

Restart the KVM guest

# virsh destroy kvm_domain
# virsh start kvm_domain

Once the machine boots up, the newly added disk will be visible; in this configuration it shows up as /dev/vdc. The name can be different in another configuration.
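
To double-check which device name the guest assigned to the new disk, lsblk can be run inside the VM after boot; the 15G virtio disk should show up without any partitions yet:

# lsblk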

ADD OSD

First we are going to add the OSD using ceph-deploy:

ceph-deploy disk zap cephf23-node1:vdc
ceph-deploy osd prepare cephf23-node1:vdc
ceph-deploy osd activate cephf23-node1:/dev/vdc1:/dev/vdc2

The above process adds the new OSD disk using ceph-deploy, which by default creates an XFS filesystem on top of the OSD and uses it. If you do not want XFS, the approach below lets us specify a different file system. At the moment xfs and ext4 are supported, while btrfs is experimental and not yet widely used in production.

ceph-disk prepare --cluster {cluster-name} --cluster-uuid {cluster-uuid} --fs-type xfs|ext4|btrfs {device}

In this specific case the commands will be

# parted -s /dev/vdc mklabel gpt 
# ceph-disk prepare --cluster ceph --cluster-uuid b71a3eb1-e253-410a-bf11-84ae01bad654 --fs-type xfs /dev/vdc 
# ceph-disk activate /dev/vdc1
  • cluster-uuid – the uuid of the cluster (b71a3eb1-e253-410a-bf11-84ae01bad654 in this case); it can be looked up as shown below
  • cluster name – the default name is ceph unless a different name was specified when ceph-deploy was run, e.g. ceph-deploy --cluster=cluster_name;
    check the ceph-deploy docs
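
If the cluster uuid is not at hand, it can be read back from any node that already has a working /etc/ceph/ceph.conf and admin keyring; ceph fsid prints exactly the uuid that ceph-disk prepare expects:

# ceph fsid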

After this the new OSD will be added to the ceph cluster. The cluster may briefly fall out of HEALTH_OK, since it can take some time before PGs are rebalanced across the new OSD. This is a normal process and should only be investigated if the cluster remains in an unhealthy state.
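
A quick way to follow the rebalance, assuming the commands are run from a node with admin access to the cluster, is to check the health and then watch cluster events:

# ceph health
# ceph -w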

REMOVE OSD

In order to remove an OSD, we first need to identify which OSD we want to remove; ceph osd tree can help. In the output below we see all OSDs, and osd.3 will be removed in the steps that follow.

 
# ceph osd tree
ID WEIGHT  TYPE NAME              UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.04997 root default                                             
-2 0.00999     host cephf23-node3                                   
 0 0.00999         osd.0               up  1.00000          1.00000 
-3 0.01999     host cephf23-node2                                   
 1 0.00999         osd.1               up  1.00000          1.00000 
 4 0.00999         osd.4               up  1.00000          1.00000 
-4 0.01999     host cephf23-node1                                   
 2 0.00999         osd.2               up  1.00000          1.00000 
 3 0.00999         osd.3               up  1.00000          1.00000 

If osd.3 is picked for removal, the steps below will take care of it:

 
# /etc/init.d/ceph stop osd.3
=== osd.3 === 
Stopping Ceph osd.3 on cephf23-node1...done
# ceph osd out osd.3
marked out osd.3. 
# ceph osd down osd.3
marked down osd.3. 
# ceph osd rm osd.3
removed osd.3
# ceph osd crush remove osd.3
removed item id 3 name 'osd.3' from crush map
# ceph auth del osd.3
updated

Now, ceph osd tree will not show osd.3 anymore:

# ceph osd tree
ID WEIGHT  TYPE NAME              UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.03998 root default                                             
-2 0.00999     host cephf23-node3                                   
 0 0.00999         osd.0               up  1.00000          1.00000 
-3 0.01999     host cephf23-node2                                   
 1 0.00999         osd.1               up  1.00000          1.00000 
 4 0.00999         osd.4               up  1.00000          1.00000 
-4 0.00999     host cephf23-node1                                   
 2 0.00999         osd.2               up  1.00000          1.00000 
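
If the data directory of the removed OSD is still mounted on cephf23-node1, it can be unmounted so the underlying disk can be reused; this sketch assumes the default mount point layout for a cluster named ceph:

# umount /var/lib/ceph/osd/ceph-3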

#ceph, #ceph-cluster, #ceph-osd