glustersummit2017

Gluster Summit 2017 in Prague, Czechia, finished today, and I can say it was one of the best conferences I have ever attended. This is (probably) a subjective feeling, but everything was top level, from the organization to the ideas presented during the talks.

I had the opportunity, together with my colleague Shekhar Berry, to present our work on the topic Scalability and Performance with Brick Multiplexing. The whole list of talks presented can be found in the Gluster Summit schedule.

Slides of our presentation can be found at this link.

The group photo is at this link.


#cns, #gluster, #glustersummit2017, #kubernetes, #openshift, #redhat

Remove CNS configuration from OCP cluster

WARNING: The steps below are destructive; follow them at your own risk.

CNS – Container Native Storage
OCP – OpenShift Container Platform

A CNS cluster can be part of an OCP cluster, which means running the CNS cluster inside the OCP cluster and having it all managed by OCP. Read more about OCP here and about CNS here.

This post is not going to be about how to set up CNS / OCP; if you want to learn how to set up CNS, follow the documentation links. It is about how to remove the CNS configuration from an OCP cluster. Why would anybody want to do this? The two reasons below seem the most obvious:

  • stop using CNS as a storage backend and free up resources for other projects
  • test various configurations and setups before settling on the final one; during such testing it is necessary to clean up the configuration and start over

Deleting the CNS pods and storage configuration will result in data loss, but assuming you know what you are doing and why, it is safe to play with.
So, how do you delete / recreate the CNS configuration from an OCP cluster? The steps are below!

CNS itself provides the cns-deploy tool:

# cns-deploy --abort 

If it is run in the namespace where the CNS pods were created, it will report the output below:

# cns-deploy --abort
Multiple CLI options detected. Please select a deployment option.
[O]penShift, [K]ubernetes? [O/o/K/k]: O
Using OpenShift CLI.
NAME      STATUS    AGE
zelko     Active    12h
Using namespace "zelko".
Do you wish to abort the deployment?
[Y]es, [N]o? [Default: N]: Y
No resources found
deploymentconfig "heketi" deleted
service "heketi" deleted
route "heketi" deleted
service "heketi-storage-endpoints" deleted
serviceaccount "heketi-service-account" deleted
template "deploy-heketi" deleted
template "heketi" deleted

From this it is visible that the deploymentconfig, services, serviceaccounts, and templates were deleted, but not the labels on the nodes; I opened a BZ for that.

This is the first step; however, the CNS pods are still present. This is by CNS design: CNS pods should not be deleted so easily, as deleting them destroys data. But in this specific case, and for the reasons listed above, we want to delete them.

# oc get pods -n zelko 
NAME              READY     STATUS    RESTARTS   AGE
glusterfs-72bps   1/1       Running   0          12h
glusterfs-fg3k5   1/1       Running   0          12h
glusterfs-gb9h4   1/1       Running   0          12h
glusterfs-hn0gk   1/1       Running   0          12h
glusterfs-jsrn8   1/1       Running   0          12h

Deleting the CNS project will delete the CNS pods:

# oc delete project zelko

cns-deploy --abort followed by oc delete project zelko can be reduced to deleting only the project, which gives the same result.

This will not clean up everything on the CNS nodes: the PVs/VGs/LVs created by the CNS configuration are still there, and if you log in to an OCP node which previously hosted CNS pods, you will see them:

# vgs
  VG                                  #PV #LV #SN Attr   VSize   VFree
  vg_18c5f1249a65b678dd5a904fd70a9cd8   1   0   0 wz--n- 199.87g 199.87g
  vg_7137ba2d9b189997110f53a6e7b1a5e4   1   2   0 wz--n- 199.87g 197.85g
# pvs
 PV         VG                                  Fmt  Attr PSize   PFree  
  /dev/xvdc  vg_7137ba2d9b189997110f53a6e7b1a5e4 lvm2 a--  199.87g 197.85g
  /dev/xvdd  vg_18c5f1249a65b678dd5a904fd70a9cd8 lvm2 a--  199.87g 199.87g
# lvs
  LV                                     VG                                  Attr        LSize Pool                                 Data%  Meta%
  brick_29ec6b1183398e9513676e586db1adb5 vg_7137ba2d9b189997110f53a6e7b1a5e4 Vwi-a-tz--  2.00g tp_29ec6b1183398e9513676e586db1adb5  0.66
  tp_29ec6b1183398e9513676e586db1adb5    vg_7137ba2d9b189997110f53a6e7b1a5e4 twi-aotz--  2.00g                                      0.66   0.33

Recreating CNS will not work as long as these leftovers from the previous configuration are present. For removing the logical volumes, the fastest approach I found was vgchange -an volumegroup; vgremove volumegroup --force.
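
As a rough sketch, assuming the volume groups named vg_<32 hex chars> are exactly the ones created by CNS (as in the output above), the LVM cleanup can be scripted like below; list the volume groups first and double check before removing anything:

# remove all volume groups created by CNS
# (assumption: they are the only ones matching the vg_<32 hex chars> naming seen above)
for vg in $(vgs --noheadings -o vg_name | grep -E 'vg_[0-9a-f]{32}'); do
    vgchange -an "$vg"
    vgremove "$vg" --force
done
# optionally clear the PV labels as well; device names are from this example setup
pvremove /dev/xvdc /dev/xvdd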

It is also necessary to clean up /var/lib/glusterd/, /etc/glusterfs/ and /var/lib/heketi/; below is the list of files which need to be removed before running cns-deploy again (a short cleanup sketch follows the listings).

# ls -l /var/lib/glusterd/
total 8
drwxr-xr-x. 3 root root 17 Feb 27 14:14 bitd
drwxr-xr-x. 2 root root 34 Feb 27 14:12 geo-replication
-rw-------. 1 root root 66 Feb 27 14:14 glusterd.info
drwxr-xr-x. 3 root root 19 Feb 27 14:12 glusterfind
drwxr-xr-x. 3 root root 46 Feb 27 14:14 glustershd
drwxr-xr-x. 2 root root 40 Feb 27 14:12 groups
drwxr-xr-x. 3 root root 15 Feb 27 14:12 hooks
drwxr-xr-x. 3 root root 39 Feb 27 14:14 nfs
-rw-------. 1 root root 47 Feb 27 14:14 options
drwxr-xr-x. 2 root root 94 Feb 27 14:14 peers
drwxr-xr-x. 3 root root 17 Feb 27 14:14 quotad
drwxr-xr-x. 3 root root 17 Feb 27 14:14 scrub
drwxr-xr-x. 2 root root 31 Feb 27 14:14 snaps
drwxr-xr-x. 2 root root  6 Feb 27 14:12 ss_brick
drwxr-xr-x. 3 root root 29 Feb 27 14:14 vols


# ls -l /etc/glusterfs/
total 32
-rw-r--r--. 1 root root  400 Feb 27 14:12 glusterd.vol
-rw-r--r--. 1 root root 1001 Feb 27 14:12 glusterfs-georep-logrotate
-rw-r--r--. 1 root root  626 Feb 27 14:12 glusterfs-logrotate
-rw-r--r--. 1 root root 1822 Feb 27 14:12 gluster-rsyslog-5.8.conf
-rw-r--r--. 1 root root 2564 Feb 27 14:12 gluster-rsyslog-7.2.conf
-rw-r--r--. 1 root root  197 Feb 27 14:12 group-metadata-cache
-rw-r--r--. 1 root root  276 Feb 27 14:12 group-virt.example
-rw-r--r--. 1 root root  338 Feb 27 14:12 logger.conf.example

# ls -l /var/lib/heketi/
total 4
-rw-r--r--. 1 root root 219 Feb 27 14:14 fstab
drwxr-xr-x. 3 root root  49 Feb 27 14:14 mounts
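
A minimal cleanup sketch for these directories, assuming you really want to wipe all leftover gluster and heketi state on the node, could look like this:

# wipe leftover gluster / heketi state; destructive, the files removed
# are the ones shown in the listings above
rm -rf /var/lib/glusterd/*
rm -rf /etc/glusterfs/*
rm -rf /var/lib/heketi/*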

Do this on every machine which was previously part of the CNS cluster, and pay attention when deleting VGs/PVs/LVs to point to the correct names/devices!

Remove the storagenode=glusterfs labels from the nodes which were previously used in the CNS deployment; check the BZ for why this is necessary. To remove the node labels, either use the approach described here or run oc edit node and then remove the storagenode=glusterfs label.
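
As a sketch, the label can also be removed directly with oc label (appending a minus to the label key removes it); the node name below is just a placeholder:

# remove the storagenode label from one node (replace the node name with yours)
oc label node node1.example.com storagenode-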

After all this is done, running cns-deploy should work as expected.
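
For reference, a re-run in this example setup would look roughly like the command below; topology.json is an assumed name for the topology file describing the nodes and devices:

# redeploy GlusterFS pods and heketi in the "zelko" namespace
# (topology.json is an assumed file name for the cluster topology)
cns-deploy -n zelko -g topology.json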

#cns, #container-native-storage, #gluster, #ipv6, #lvm, #openshift-container-platform