Dynamic storage provisioning for OCP persistent templates

OCP (OpenShift Container Platform) ships with many templates, both ephemeral and persistent ones. Change to the openshift project

# oc project openshift 
# oc get templates

to see the default templates, which you can use directly without any further changes.
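For example, to narrow the list down to the persistent variants, simply filtering the output should be enough (just a convenience, any other way of listing works too):

# oc get templates -n openshift | grep -i persistent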

If you take a look at some of the persistent templates, you will notice that they have a PVC definition as shown below:

{
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "${SERVICE_NAME}"
    },
    "spec": {
        "accessModes": [
            "ReadWriteOnce"
        ],
        "resources": {
            "requests": {
                "storage": "${VOLUME_CAPACITY}"
            }
        }
    }
},
 

For a persistent template to work, you need to provide a PVC with a specific name in advance, prior to executing oc new-app template_name. This works fine, but I find it problematic to create the PVC in advance.
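For reference, pre-creating such a claim by hand would look roughly like the sketch below; the claim name postgresql and the 1Gi capacity are only assumptions matching the postgresql-persistent defaults, so adjust them to whatever the template you use expects:

# cat <<EOF | oc create -f -
{
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "postgresql"
    },
    "spec": {
        "accessModes": [
            "ReadWriteOnce"
        ],
        "resources": {
            "requests": {
                "storage": "1Gi"
            }
        }
    }
}
EOF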
This is easy to overcome by changing the existing template (or creating a new one) so that it supports dynamic storage provisioning via storage classes.

First, we need to locate the template we want to change in order to use dynamic storage provisioning.
Note: I assume there is already a storage class in place.

  1. Export the desired persistent template, e.g. let's take postgresql-persistent, so we can edit its PersistentVolumeClaim section:
# oc get template -n openshift postgresql-persistent -o json > postgresql-persistent_storageclass.json  

Edit postgresql-persistent_storageclass.json by changing the sections shown below.

 
 "kind": "Template",
    "labels": {
        "template": "glusterfs-postgresql-persistent-template_storageclass"
    },

... rest of template ..... 

"name": "glusterfs-postgresql-persistent_storageclass", 

....... rest of template .... 

"selfLink": "/oapi/v1/namespaces/openshift/templates/glusterfs-postgresql-persistent_storageclass" 

.... .... rest of template .... 
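If you prefer scripting the rename instead of hand-editing, a jq one-liner along these lines should do the same job (it simply drops selfLink, which the server sets itself anyway; the PersistentVolumeClaim change described below still has to be made separately):

# oc get template -n openshift postgresql-persistent -o json \
    | jq '.metadata.name = "glusterfs-postgresql-persistent_storageclass"
        | .labels.template = "glusterfs-postgresql-persistent-template_storageclass"
        | del(.metadata.selfLink)' \
    > postgresql-persistent_storageclass.json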

Adapt the PersistentVolumeClaim section to support dynamic storage provisioning:

 
{
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "${DATABASE_SERVICE_NAME}",
        "annotations": {
            "volume.beta.kubernetes.io/storage-class": "${STORAGE_CLASS}"
        }
    },
    "spec": {
        "accessModes": [
            "ReadWriteOnce"
        ],
        "resources": {
            "requests": {
                "storage": "${VOLUME_CAPACITY}"
            }
        }
    }
},

This adds a requirement for a new parameter, STORAGE_CLASS.
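Note: volume.beta.kubernetes.io/storage-class is the older, beta-style annotation. On newer OpenShift/Kubernetes versions the same request can be expressed with the spec.storageClassName field instead, roughly like this:

"spec": {
    "storageClassName": "${STORAGE_CLASS}",
    "accessModes": [
        "ReadWriteOnce"
    ],
    "resources": {
        "requests": {
            "storage": "${VOLUME_CAPACITY}"
        }
    }
}

The annotation is used here because it also works on older releases.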

At the end of the template, in the parameters section, add this new parameter:

{
    "name": "STORAGE_CLASS",
    "description": "Storage class to use - here we expect the storage class name",
    "required": true,
    "value": "storageclassname"
}

Save the file and create the new template:

# oc create -f postgresql-persistent_storageclass.json

From this point on we can use this new template to start the postgresql service, and it will automatically allocate storage from the specified storage class.

It is assumed that a storage class is already configured and in place. You can use any storage backend that supports storage classes; in case you want to try CNS (Container Native Storage), you can follow the instructions on how to set up CNS storage.
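For illustration only, a glusterfs-backed StorageClass (the kind a CNS setup ends up with) looks roughly like the sketch below; the resturl, restuser and secret values are placeholders for whatever your heketi/CNS installation uses:

{
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {
        "name": "cnsclass"
    },
    "provisioner": "kubernetes.io/glusterfs",
    "parameters": {
        "resturl": "http://heketi-storage.example.net:8080",
        "restuser": "admin",
        "secretName": "heketi-secret",
        "secretNamespace": "default"
    }
}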

 
# oc get storageclass
NAME           TYPE
cnsclass       kubernetes.io/glusterfs    

# oc new-project postgresql-storageclass
# oc new-app postgresql-persistent_storageclass -p STORAGE_CLASS=cnsclass

After the application is started:

# oc get pod
NAME                 READY     STATUS    RESTARTS   AGE
postgresql-1-zdvq2   1/1       Running   0          52m
[root@gprfc031 templates]# oc exec postgresql-1-zdvq2 -- mount | grep pgsql
10.16.153.123:vol_72cd8ef33eee365d4c7f75cffaa1681b on /var/lib/pgsql/data type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@gprfc031 templates]# oc get pvc
NAME         STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
postgresql   Bound     pvc-514d56e8-e7bf-11e7-8827-d4bed9b390df   1Gi        RWO           cnsclass       52m

The pod mounts a volume from the storage backend defined in the storage class and starts using it.
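If you want to double-check where the dynamically provisioned volume came from, describing the claim (or inspecting the bound PV from the output above) shows the storage class and provisioner details:

# oc describe pvc postgresql
# oc get pv pvc-514d56e8-e7bf-11e7-8827-d4bed9b390df -o yaml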


#cns, #gluster, #k8s, #kubernetes, #openshift, #persistant-storage, #pvc

etcd error message “etcd failed to send out heartbeat on time”

etcd is a distributed key-value store that provides a reliable way to store data across a cluster of machines (per the etcd documentation). etcd is very sensitive to network delays, and not only network delays: any kind of sluggishness on the etcd cluster nodes can lead to problems with overall Kubernetes cluster functionality.

By the time an OpenShift/Kubernetes cluster starts reporting error messages like the ones shown below, the cluster will already be misbehaving: pod scheduling and deletion will not work as expected, and the problems will be more than visible.

Sep 27 00:04:01 dhcp7-237 etcd: failed to send out heartbeat on time (deadline exceeded for 1.766957688s)
Sep 27 00:04:01 dhcp7-237 etcd: server is likely overloaded
Sep 27 00:04:01 dhcp7-237 etcd: failed to send out heartbeat on time (deadline exceeded for 1.766976918s)
Sep 27 00:04:01 dhcp7-237 etcd: server is likely overloaded

systemctl status etcd output

 systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2016-10-01 09:18:37 EDT; 5h 20min ago
 Main PID: 11970 (etcd)
   Memory: 1.0G
   CGroup: /system.slice/etcd.service
           └─11970 /usr/bin/etcd --name=dhcp6-138.example.net --data-dir=/var/lib/etcd/ --listen-client-urls=https://172.16.6.138:2379

Oct 01 14:38:55 dhcp6-138.example.net etcd[11970]: server is likely overloaded
Oct 01 14:38:56 dhcp6-138.example.net etcd[11970]: failed to send out heartbeat on time (deadline exceeded for 377.70994ms)
Oct 01 14:38:56 dhcp6-138.example.net etcd[11970]: server is likely overloaded
Oct 01 14:38:56 dhcp6-138.example.net etcd[11970]: failed to send out heartbeat on time (deadline exceeded for 377.933298ms)
Oct 01 14:38:56 dhcp6-138.example.net etcd[11970]: server is likely overloaded
Oct 01 14:38:58 dhcp6-138.example.net etcd[11970]: failed to send out heartbeat on time (deadline exceeded for 1.226630142s)
Oct 01 14:38:58 dhcp6-138.example.net etcd[11970]: server is likely overloaded
Oct 01 14:38:58 dhcp6-138.example.net etcd[11970]: failed to send out heartbeat on time (deadline exceeded for 1.226803192s)
Oct 01 14:38:58 dhcp6-138.example.net etcd[11970]: server is likely overloaded
Oct 01 14:39:07 dhcp6-138.example.net etcd[11970]: the clock difference against peer f801f8148b694198 is too high [1.078081179s > 1s]

systemctl status etcd -l will also show similar messages; check these too.
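It is also worth checking the overall cluster health from one of the etcd members; with the etcd2-era tooling used here, something along these lines should work (certificate paths and the endpoint are taken from the configuration below and may differ in your environment):

# etcdctl --ca-file=/etc/etcd/ca.crt --cert-file=/etc/etcd/peer.crt \
    --key-file=/etc/etcd/peer.key \
    --endpoints=https://172.16.7.237:2379 cluster-health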

The etcd configuration file is located at /etc/etcd/etcd.conf and has content similar to the one below. This one is from RHEL; other OSes may have a slightly different layout.

ETCD_NAME=dhcp7-237.example.net
ETCD_LISTEN_PEER_URLS=https://172.16.7.237:2380
ETCD_DATA_DIR=/var/lib/etcd/
ETCD_HEARTBEAT_INTERVAL=6000
ETCD_ELECTION_TIMEOUT=30000
ETCD_LISTEN_CLIENT_URLS=https://172.16.7.237:2379

ETCD_INITIAL_ADVERTISE_PEER_URLS=https://172.16.7.237:2380
ETCD_INITIAL_CLUSTER=dhcp7-241.example.net=https://172.16.7.241:2380,dhcp7-237.example.net=https://172.16.7.237:2380,dhcp7-239.example.net=https://172.16.7.239:2380
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-1
ETCD_ADVERTISE_CLIENT_URLS=https://172.16.7.237:2379


ETCD_CA_FILE=/etc/etcd/ca.crt
ETCD_CERT_FILE=/etc/etcd/server.crt
ETCD_KEY_FILE=/etc/etcd/server.key
ETCD_PEER_CA_FILE=/etc/etcd/ca.crt
ETCD_PEER_CERT_FILE=/etc/etcd/peer.crt
ETCD_PEER_KEY_FILE=/etc/etcd/peer.key

The parameters in the above configuration file we want to change are ETCD_HEARTBEAT_INTERVAL and ETCD_ELECTION_TIMEOUT. There is no single value that fits every cluster; it is necessary to experiment with different values and find out what works best. For most cases the defaults (500/2500) will be fine.

After changing /etc/etcd/etcd.conf, do not forget to restart the etcd service:

# systemctl restart etcd
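Then keep an eye on the etcd journal for a while; if the new values help, the heartbeat warnings should stop appearing (the time window below is arbitrary):

# journalctl -u etcd --since "30 minutes ago" | grep -iE "heartbeat|overloaded"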

The issues below, affecting etcd nodes, can lead to the problem described in this post:

  • network latency
  • storage latency
  • combination of network latency and storage latency

If network latency is low, then check the storage used by the Kubernetes/OpenShift etcd servers. Changing the timeouts as described in this post is a workaround, to be used once the root cause is known and no other option is available; the first and better solution would be to solve the issue at its root by fixing the problematic subsystem(s).
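A quick way to get a rough feel for both kinds of latency before touching any timeouts (the peer address is just an example from the configuration above, and the dd test only gives a crude idea of sync latency on the etcd data directory):

# ping -c 20 172.16.7.241
# dd if=/dev/zero of=/var/lib/etcd/ddtest bs=512 count=1000 oflag=dsync
# rm -f /var/lib/etcd/ddtest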

In my particular case the storage subsystem was slow, and it was not possible to change that without spending a bunch of $$$.

References: etcd documentation

#etcd, #k8s, #kubernetes, #linux, #openshift, #redhat, #storage