Upgrade CNS using the rolling update technique

CNS (Container Native Storage) uses a DaemonSet to start pods on the desired nodes. After installation we can see the DaemonSet

# oc get ds 
NAME                DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
glusterfs-storage         3         3         3         3            3       glusterfs=storage-host   4m

and the pods

 
# oc get pods
NAME                                          READY     STATUS    RESTARTS   AGE
glusterblock-storage-provisioner-dc-1-lkx7t   1/1       Running   0          1m
glusterfs-storage-jxb69                       1/1       Running   0          4m
glusterfs-storage-qt7td                       1/1       Running   0          4m
glusterfs-storage-vzzvr                       1/1       Running   0          4m
heketi-storage-1-sbchm                        1/1       Running   0          2m

and the nodes

 
# oc get nodes --show-labels | grep storage-host 
node1    Ready     compute   13d       v1.9.1+a0ce1bc657   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,glusterfs=storage-host,kubernetes.io/hostname=ip-node1,node-role.kubernetes.io/compute=true,region=primary,zone=default
node2    Ready     compute   13d       v1.9.1+a0ce1bc657   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,glusterfs=storage-host,kubernetes.io/hostname=node2,node-role.kubernetes.io/compute=true,region=primary,zone=default
node3   Ready     compute   13d       v1.9.1+a0ce1bc657   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,glusterfs=storage-host,kubernetes.io/hostname=node3,node-role.kubernetes.io/compute=true,region=primary,zone=default
 
# oc get nodes --selector glusterfs=storage-host
NAME                                          STATUS    ROLES     AGE       VERSION
node1     Ready     compute   13d       v1.9.1+a0ce1bc657
node2     Ready     compute   13d       v1.9.1+a0ce1bc657
node3     Ready     compute   13d       v1.9.1+a0ce1bc657

We can see that the DaemonSet uses the node label glusterfs=storage-host to decide where to start the glusterfs pods.

The full DaemonSet definition for this case is here.

In a DaemonSet, updateStrategy defines how and when the DaemonSet pods are updated. By default for CNS it is OnDelete

 updateStrategy:
    type: OnDelete

This means the new configuration is only applied once the old pods are deleted, either manually or by deleting and recreating the whole DaemonSet. This is not optimal: if we delete the DaemonSet it will also delete the pods it is responsible for (unless we use --cascade=false). If we want to upgrade the DaemonSet configuration, a better approach is to perform a rolling update.
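
The currently configured strategy can be checked directly on the DaemonSet, for example:

# oc get ds glusterfs-storage -o yaml | grep -A 1 updateStrategy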

To do a rolling update of the CNS pods, do the following:

  • Pull the planned images onto the nodes in advance – yes, pull them before starting the rolling update

This removes one possible source of errors and makes the whole process faster; a pre-pull sketch is shown below.
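
A minimal pre-pull sketch, assuming the nodes are node1–node3 and that the image and tag below are the ones you plan to roll out (adjust both to your environment):

# assumption: replace with the image/tag you actually plan to deploy
NEW_IMAGE="registry.access.redhat.com/rhgs3/rhgs-server-rhel7:latest"
for node in node1 node2 node3; do
    ssh "$node" docker pull "$NEW_IMAGE"
done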

  • Edit the DaemonSet and change updateStrategy to RollingUpdate, like below
 updateStrategy:
    type: RollingUpdate
  • in the DaemonSet, also change the image field and adapt it to the desired image version (see the sketch below)
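
Both changes can be made with oc edit ds glusterfs-storage, or non-interactively as in this sketch (the container name glusterfs and the image are assumptions here – check your own DaemonSet for the exact values):

# switch the update strategy to RollingUpdate
oc patch ds glusterfs-storage -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'

# point the pod template at the new image
oc set image ds/glusterfs-storage glusterfs=registry.access.redhat.com/rhgs3/rhgs-server-rhel7:latest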

After making these changes, the rolling update process starts. If all goes well, after some time all pods will be running with the new / updated images.
The rolling update approach allows updating the DaemonSet without causing downtime, as the rolling update process ensures the application stays up and running during the whole update.
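
The progress can be followed while the update runs, for example:

# UP-TO-DATE should eventually reach the DESIRED count
oc get ds glusterfs-storage
# watch the pods being replaced one by one
oc get pods -w | grep glusterfs-storage
# confirm which image the DaemonSet now references
oc describe ds glusterfs-storage | grep -i image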

For more about rolling updates, read here and here.


#cns, #gluster, #kubernetes, #openshift, #rolling-updates, #storage

Dynamic storage provisioning for OCP persistent templates

OCP (OpenShift Container Platform) ships many templates – ephemeral and/or persistent ones. Switch to the openshift project

# oc project openshift 
# oc get templates

to see the default templates, which you can use directly without any further changes.

If you take a look at some of the persistent templates, you will notice they have a PVC definition as shown below

 {
            "apiVersion": "v1",
            "kind": "PersistentVolumeClaim",
            "metadata": {
                "name": "${SERVICE_NAME}"
            },
            "spec": {
                "accessModes": [
                    "ReadWriteOnce"
                ],
                "resources": {
                    "requests": {
                        "storage": "${VOLUME_CAPACITY}"
                    }
                }
            }
        },
 

For a persistent template to work, one needs to provide a PVC with a specific name in advance, prior to executing oc new-app template_name. This works, but I find it problematic to have to create the PVC in advance.
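
For illustration, this is roughly the manual step such a template expects, assuming the claim name resolves to postgresql and a 1Gi capacity (both values are assumptions for this sketch):

# create the PVC the template expects, before running "oc new-app"
cat <<EOF | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
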
This is easy to overcome by changing the existing template / creating a new one which supports dynamic storage provisioning via storage classes.

First, we need to locate the template we want to change in order to use dynamic storage provisioning.
Note: I assume there is already a storage class in place.

  1. Export the desired persistent template, e.g. let's take postgresql-persistent, and edit its PersistentVolumeClaim section:
# oc get template -n openshift postgresql-persistent -o json > postgresql-persistent_storageclass.json  

Edit postgresql-persistent_storageclass.json by changing the sections shown below

 
 "kind": "Template",
    "labels": {
        "template": "glusterfs-postgresql-persistent-template_storageclass"
    },

... rest of template ..... 

"name": "glusterfs-postgresql-persistent_storageclass", 

....... rest of template .... 

"selfLink": "/oapi/v1/namespaces/openshift/templates/glusterfs-postgresql-persistent_storageclass" 

.... .... rest of template .... 

Then adapt the PersistentVolumeClaim section to support dynamic storage provisioning

 
{
            "apiVersion": "v1",
            "kind": "PersistentVolumeClaim",
            "metadata": {
                "name": "${DATABASE_SERVICE_NAME}",
                "annotations": {
                    "volume.beta.kubernetes.io/storage-class": "${STORAGE_CLASS}"
                        }
            },
            "spec": {
                "accessModes": [
                    "ReadWriteOnce"
                ],
                "resources": {
                    "requests": {
                        "storage": "${VOLUME_CAPACITY}"
                    }
                }
            }
        },

This adds a requirement for a new parameter, STORAGE_CLASS.

At the end of the template, in the parameters section, add this new parameter

{
            "name" : "STORAGE_CLASS",
            "description": "Storagecclass to use - here we expect storageclass name",
            "required": true,
            "value": "storageclassname"
        }

Save the file and create the new template

# oc create -f postgresql-persistent_storageclass.json

After this point we can use the new template to start a postgresql service, and it will automatically allocate storage from the specified storage class.

It is assumed that a storage class is already configured and in place. You can use any storage backend which supports storage classes; in case you want to try CNS, you can follow the instructions on how to set up CNS storage.
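
As a minimal sketch, a glusterfs-backed storage class named cnsclass could look like this (the heketi resturl is an assumption – point it at your own heketi route/service, and add authentication parameters if heketi auth is enabled):

# create a storage class backed by the glusterfs dynamic provisioner
cat <<EOF | oc create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cnsclass
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage.example.com:8080"
  restauthenabled: "false"
EOF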

 
# oc get storageclass
NAME           TYPE
cnsclass       kubernetes.io/glusterfs    

# oc new-project postgresql-storageclass
# oc new-app postgresql-persistent_storageclass -p STORAGE_CLASS=cnsclass

After the application is started

# oc get pod
NAME                 READY     STATUS    RESTARTS   AGE
postgresql-1-zdvq2   1/1       Running   0          52m
[root@gprfc031 templates]# oc exec postgresql-1-zdvq2 -- mount | grep pgsql
10.16.153.123:vol_72cd8ef33eee365d4c7f75cffaa1681b on /var/lib/pgsql/data type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@gprfc031 templates]# oc get pvc
NAME         STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
postgresql   Bound     pvc-514d56e8-e7bf-11e7-8827-d4bed9b390df   1Gi        RWO           cnsclass       52m

it mounts the volume from the storage backend defined in the storage class and starts using it.

#cns, #gluster, #k8s, #kubernetes, #openshift, #persistant-storage, #pvc

glustersummit2017

Gluster Summit 2017 in Prague, Czechia, finished today, and I can say it was one of the best conferences I have ever attended. This is my (probably) subjective feeling, but everything was top level, from the organization to the ideas presented during the talks.

I had the opportunity, together with my colleague Shekhar Berry, to present our work on the topic Scalability and Performance with Brick Multiplexing. The whole list of presented talks can be found in the Gluster Summit schedule.

The slides of our presentation can be found at this link.

The group photo is at this link.

#cns, #gluster, #glustersummit2017, #kubernetes, #openshift, #redhat

Remove CNS configuration from OCP cluster

WARNING: The steps below are destructive; follow them at your own risk.

CNS – Container Native Storage
OCP – Openshift Container Platform

A CNS cluster can be part of an OCP cluster, which means running the CNS cluster inside the OCP cluster and having everything managed by OCP. Read more about OCP here and about CNS here.

This post is not about how to set up CNS / OCP; if you want to learn how to set up CNS, follow the documentation links. It is about how to remove the CNS configuration from an OCP cluster. Why would anybody want to do this? I see the two reasons below as the most obvious:

  • stop using CNS as storage backend and free up resources for other projects
  • test various configurations and setups before settling on a final one; during such testing it is necessary to clean up the configuration and start over.

Deleting the CNS pods and storage configuration will result in data loss, but assuming you know what you are doing and why, it is safe to play with this.
So, how do we delete / recreate the CNS configuration in an OCP cluster? The steps are below.

CNS itself provides cns-deploy:

# cns-deploy --abort 

If run in the namespace where the CNS pods were created, it reports the output below:

# cns-deploy --abort
Multiple CLI options detected. Please select a deployment option.
[O]penShift, [K]ubernetes? [O/o/K/k]: O
Using OpenShift CLI.
NAME      STATUS    AGE
zelko     Active    12h
Using namespace "zelko".
Do you wish to abort the deployment?
[Y]es, [N]o? [Default: N]: Y
No resources found
deploymentconfig "heketi" deleted
service "heketi" deleted
route "heketi" deleted
service "heketi-storage-endpoints" deleted
serviceaccount "heketi-service-account" deleted
template "deploy-heketi" deleted
template "heketi" deleted

From this it is visible that the deploymentconfig, services, serviceaccounts, and templates were deleted, but not the labels on the nodes; I opened a BZ for that.
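
The leftover labels can be checked, for example (using the storagenode=glusterfs label that the deployment puts on the nodes):

# oc get nodes --show-labels | grep storagenode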

This is the first step; however, the CNS pods are still present. This is by CNS design: CNS pods should not be deleted easily, as deleting them will destroy data, but in this specific case and for the reasons listed above we want to delete them.

# oc get pods -n zelko 
NAME              READY     STATUS    RESTARTS   AGE
glusterfs-72bps   1/1       Running   0          12h
glusterfs-fg3k5   1/1       Running   0          12h
glusterfs-gb9h4   1/1       Running   0          12h
glusterfs-hn0gk   1/1       Running   0          12h
glusterfs-jsrn8   1/1       Running   0          12h

Deleting the CNS project will delete the CNS pods

# oc delete project zelko

Running cns-deploy --abort and then oc delete project zelko can be reduced to just deleting the project, which gives the same result.

This will not clean up everything on the CNS nodes; there will still be PVs/VGs/LVs created by the CNS configuration. If we log in to an OCP node which earlier hosted CNS pods, they are visible:

# vgs
  
  vg_18c5f1249a65b678dd5a904fd70a9cd8   1   0   0 wz--n- 199.87g 199.87g
  vg_7137ba2d9b189997110f53a6e7b1a5e4   1   2   0 wz--n- 199.87g 197.85g
# pvs
 PV         VG                                  Fmt  Attr PSize   PFree  
  /dev/xvdc  vg_7137ba2d9b189997110f53a6e7b1a5e4 lvm2 a--  199.87g 197.85g
  /dev/xvdd  vg_18c5f1249a65b678dd5a904fd70a9cd8 lvm2 a--  199.87g 199.87g
# lvs                    
  brick_29ec6b1183398e9513676e586db1adb5 vg_7137ba2d9b189997110f53a6e7b1a5e4 Vwi-a-tz--  2.00g tp_29ec6b1183398e9513676e586db1adb5        0.66                                   
  tp_29ec6b1183398e9513676e586db1adb5    vg_7137ba2d9b189997110f53a6e7b1a5e4 twi-aotz--  2.00g                                            0.66   0.33                  

Recreating CNS will not work as long as these remain from the previous configuration. For removing the logical volumes, the fastest approach I found was vgchange -an volumegroup; vgremove volumegroup --force.
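
A sketch that applies this to all heketi-created volume groups, assuming every VG prefixed with vg_ on the node was created by CNS (verify with vgs before removing anything):

for vg in $(vgs --noheadings -o vg_name | awk '{print $1}' | grep '^vg_'); do
    vgchange -an "$vg"          # deactivate all LVs in the VG
    vgremove --force "$vg"      # remove the VG together with its LVs
done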

It is also necessary to clean up the contents of /var/lib/glusterd/, /etc/glusterfs/ and /var/lib/heketi/; below is the list of files which need to be removed before running cns-deploy again.

# ls -l /var/lib/glusterd/
total 8
drwxr-xr-x. 3 root root 17 Feb 27 14:14 bitd
drwxr-xr-x. 2 root root 34 Feb 27 14:12 geo-replication
-rw-------. 1 root root 66 Feb 27 14:14 glusterd.info
drwxr-xr-x. 3 root root 19 Feb 27 14:12 glusterfind
drwxr-xr-x. 3 root root 46 Feb 27 14:14 glustershd
drwxr-xr-x. 2 root root 40 Feb 27 14:12 groups
drwxr-xr-x. 3 root root 15 Feb 27 14:12 hooks
drwxr-xr-x. 3 root root 39 Feb 27 14:14 nfs
-rw-------. 1 root root 47 Feb 27 14:14 options
drwxr-xr-x. 2 root root 94 Feb 27 14:14 peers
drwxr-xr-x. 3 root root 17 Feb 27 14:14 quotad
drwxr-xr-x. 3 root root 17 Feb 27 14:14 scrub
drwxr-xr-x. 2 root root 31 Feb 27 14:14 snaps
drwxr-xr-x. 2 root root  6 Feb 27 14:12 ss_brick
drwxr-xr-x. 3 root root 29 Feb 27 14:14 vols


# ls -l /etc/glusterfs/
total 32
-rw-r--r--. 1 root root  400 Feb 27 14:12 glusterd.vol
-rw-r--r--. 1 root root 1001 Feb 27 14:12 glusterfs-georep-logrotate
-rw-r--r--. 1 root root  626 Feb 27 14:12 glusterfs-logrotate
-rw-r--r--. 1 root root 1822 Feb 27 14:12 gluster-rsyslog-5.8.conf
-rw-r--r--. 1 root root 2564 Feb 27 14:12 gluster-rsyslog-7.2.conf
-rw-r--r--. 1 root root  197 Feb 27 14:12 group-metadata-cache
-rw-r--r--. 1 root root  276 Feb 27 14:12 group-virt.example
-rw-r--r--. 1 root root  338 Feb 27 14:12 logger.conf.example

# ls -l /var/lib/heketi/
total 4
-rw-r--r--. 1 root root 219 Feb 27 14:14 fstab
drwxr-xr-x. 3 root root  49 Feb 27 14:14 mounts

Do this on every machine which was previously part of the CNS cluster, and pay attention when deleting VGs/PVs/LVs to point to the correct names/devices! A cleanup sketch follows below.
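
A minimal cleanup sketch for the directories listed above, assuming nothing in these paths is still needed on the node:

# wipe the leftover gluster and heketi state on a former CNS node
rm -rf /var/lib/glusterd/*
rm -rf /etc/glusterfs/*
rm -rf /var/lib/heketi/*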

Remove the storagenode=glusterfs labels from the nodes which were previously used in the CNS deployment; check the BZ for why this is necessary. To remove the node labels, either use the approach described here or run oc edit node and then delete the storagenode=glusterfs label. A sketch using oc label is shown below.
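
Labels can also be removed non-interactively with oc label, where appending "-" to the label key removes it (node names taken from the example above):

for node in node1 node2 node3; do
    oc label node "$node" storagenode-    # the trailing "-" removes the label
done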

After all this is done, running cns-deploy should work as expected.

#cns, #container-native-storage, #gluster, #ipv6, #lvm, #openshift-container-platform