Upgrade CNS using the rolling update technique

CNS (Container Native Storage) uses a DaemonSet to start pods on the desired nodes. After installation we can check
the DaemonSet

# oc get ds 
NAME                      DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
glusterfs-storage         3         3         3         3            3       glusterfs=storage-host   4m

and for pods

 
# oc get pods
NAME                                          READY     STATUS    RESTARTS   AGE
glusterblock-storage-provisioner-dc-1-lkx7t   1/1       Running   0          1m
glusterfs-storage-jxb69                       1/1       Running   0          4m
glusterfs-storage-qt7td                       1/1       Running   0          4m
glusterfs-storage-vzzvr                       1/1       Running   0          4m
heketi-storage-1-sbchm                        1/1       Running   0          2m

and nodes

 
# oc get nodes --show-labels | grep storage-host 
node1    Ready     compute   13d       v1.9.1+a0ce1bc657   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,glusterfs=storage-host,kubernetes.io/hostname=ip-node1,node-role.kubernetes.io/compute=true,region=primary,zone=default
node2    Ready     compute   13d       v1.9.1+a0ce1bc657   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,glusterfs=storage-host,kubernetes.io/hostname=node2,node-role.kubernetes.io/compute=true,region=primary,zone=default
node3   Ready     compute   13d       v1.9.1+a0ce1bc657   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,glusterfs=storage-host,kubernetes.io/hostname=node3,node-role.kubernetes.io/compute=true,region=primary,zone=default
 
# oc get nodes --selector glusterfs=storage-host
NAME                                          STATUS    ROLES     AGE       VERSION
node1     Ready     compute   13d       v1.9.1+a0ce1bc657
node2     Ready     compute   13d       v1.9.1+a0ce1bc657
node3     Ready     compute   13d       v1.9.1+a0ce1bc657

We can see that the DaemonSet uses the node selector glusterfs=storage-host to decide where to start the glusterfs pods.

The full DaemonSet for this case is here

In a DaemonSet, updateStrategy defines how the DaemonSet's pods get updated. By default for CNS it is OnDelete

 updateStrategy:
    type: OnDelete

This means the pods only pick up a new configuration after the old pods (or the DaemonSet itself) are deleted manually. Deleting the whole DaemonSet is not optimal: it will also delete the pods it is responsible for (unless it is deleted with --cascade=false). If we want to upgrade the DaemonSet configuration, a better approach is to perform a rolling update.
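For completeness, deleting the DaemonSet while keeping its pods running would look roughly like this (using the --cascade flag of the oc/kubectl versions of that time):

# oc delete ds glusterfs-storage --cascade=false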

To do a rolling update of the CNS pods, follow the steps below

  • Pull the planned images onto the nodes in advance – yes, pull them before starting the rolling update

This removes one possible source of errors and makes the whole process faster.

  • Edit the DaemonSet and change updateStrategy to RollingUpdate, like below
 updateStrategy:
    type: RollingUpdate
  • In the DaemonSet also change the image field and adapt it to the desired image version (see the sketch below)
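Both changes can be done with oc edit ds glusterfs-storage, or non-interactively as in the sketch below – the container name (glusterfs) and the image are assumptions based on a typical CNS deployment, so check your DaemonSet for the actual values

# oc patch ds glusterfs-storage -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
# oc set image ds/glusterfs-storage glusterfs=rhgs3/rhgs-server-rhel7:<new-tag>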

After these changes the rolling update process will start. If all is fine, after some time all pods will be running with the new / updated image.
The rolling update approach allows updating the DaemonSet without causing downtime, as the rolling update process ensures that the application stays up and running during the whole update.
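To follow the progress, watch the DaemonSet's UP-TO-DATE and AVAILABLE columns, or (with the RollingUpdate strategy) check the rollout status

# oc get ds glusterfs-storage -w
# oc rollout status ds/glusterfs-storage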

For more about rolling updates, read here and here


#cns, #gluster, #kubernetes, #openshift, #rolling-updates, #storage

Benchmark PostgreSQL in OpenShift Container Platform using pgbench

OpenShift Container Platform (OCP) allows you to run various applications out of the box, based on the images and templates delivered with the OCP installation. After the OCP installation, executing the commands below

# oc get -n openshift templates
# oc get -n openshift images

will show all templates and images installed by default, which can be used directly and without any modification to start the desired application.

Templates with persistent in their name use storage for data. Checking a particular template, we can see that the PVC definition inside the template looks as shown below

{
            "apiVersion": "v1",
            "kind": "PersistentVolumeClaim",
            "metadata": {
                "name": "${DATABASE_SERVICE_NAME}"
            },
            "spec": {
                "accessModes": [
                    "ReadWriteOnce"
                ],
                "resources": {
                    "requests": {
                        "storage": "${VOLUME_CAPACITY}"
                    }
                }
            }
        },

This requires that a PVC with the name ${DATABASE_SERVICE_NAME} is created before trying to create the pod which will use it. From this we can see that there is no direct support for dynamic storage in the OCP templates – an RFE bug is open and this will probably be fixed in future releases; it is tracked in a BZ

The idea with templates supporting storage classes is to use, for example, the command below to start the application pod and, at creation time, consume storage from a predefined storage class

# oc new-app --template=<template_name> -p STORAGE_CLASS_NAME=<storage_class_name>

In order to make storage classes part of the templates, it is necessary to edit them, per BZ 1559728

In this blog post I will show how to start a PostgreSQL pod with storage from a predefined storage class, and how to benchmark the PostgreSQL database running inside the OCP pod. In all my tests I use dynamic storage provisioning and storage classes as the only way to provide storage for OCP pods. Manual storage preparation works too, but that is the 2016-ish way to create storage for pods, and I find it easier to use storage classes.

In order to use dynamic storage provisioning with OCP templates (remember the BZ), we need to edit the template to support dynamic storage provisioning for the PVC.

# oc get template -n openshift postgresql-persistent -o json > storageclass_postgresql-persistent.json

Edit storageclass_postgresql-persistent.json so that the PVC section looks as below, and in the parameters section add a new parameter STORAGE_CLASS_NAME

 
{
            "apiVersion": "v1",
            "kind": "PersistentVolumeClaim",
            "metadata": {
                "annotations":{
                        "volume.beta.kubernetes.io/storage-class": "${STORAGE_CLASS_NAME}"
                },
                "name": "${DATABASE_SERVICE_NAME}"
            },
            "spec": {
                "accessModes": [
                    "ReadWriteOnce"
                ],
                "resources": {
                    "requests": {
                        "storage": "${VOLUME_CAPACITY}"
                    }
                }
            }
        },

In the parameters section add

{
    "description" : "Storage class name to use",
    "displayName" : "Storage classs name",
    "name": "STORAGE_CLASS_NAME",
    "required": true

} 

After editing the template per the above instructions, it will be possible to run

# oc new-project testpostgresql 
# oc new-app --template=postgresql-persistent -p STORAGE_CLASS_NAME=storage_class_name

to create an application which dynamically allocates storage per the storage class definition.
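To verify that the storage was really provisioned dynamically, check that the PVC got bound without any manually created PV

# oc get pvc -n testpostgresql
# oc get pv | grep testpostgresql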

If you do not want to edit storageclass_postgresql-persistent.json yourself, you can use this template, which is already edited to accommodate dynamic storage provisioning.

Once you have the template, load it and everything will be prepared

# oc create -f template.json
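By default oc create loads the template into the current project only. If it should be available in every project, it can (given sufficient permissions) be loaded into the openshift namespace instead

# oc create -f template.json -n openshift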

In order for this to work it is necessary to have the storage prepared in advance; this is usually the case, but it is out of scope for this blog post.
However, in this blog post you can read how to set up CNS (Container Native Storage) as a storage solution to be used by application pods in the OCP PaaS. Using CNS is one option; if the cluster runs on EC2, then it is also easy to set up a storage class which consumes storage provided by the EC2 cloud.

In this test, CNS will be used as the storage backend providing storage for the application pods.

Assuming there is a functional storage class glusterfs-storage-block, we can run the commands below to start the pod

# oc new-project testpostgresql 
# oc new-app --template=postgresql-persistent -p STORAGE_CLASS_NAME=glusterfs-storage-block 

Once the PostgreSQL pod is started, the following will be visible

# oc get svc 
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
postgresql   ClusterIP   172.27.192.193   <none>        5432/TCP   8m

# oc get pods
NAME                 READY     STATUS    RESTARTS   AGE
postgresql-1-qvxrr   1/1       Running   0          6m

and also

# oc exec postgresql-1-qvxrr -- mount  | grep data
/dev/mapper/mpathh on /var/lib/pgsql/data type xfs (rw,relatime,seclabel,attr2,inode64,noquota)

so the block storage originating from CNS is mounted at /var/lib/pgsql/data

The PostgreSQL pod is running, and now it is possible to execute pgbench in two different ways.

The first is to run pgbench in so-called client-server mode

# oc exec postgresql-1-qvxrr -- env | egrep "_USER|_PASS"
POSTGRESQL_USER=userGQF
POSTGRESQL_PASSWORD=a75hXQYCsQfnS1LT

# oc get svc 
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
postgresql   ClusterIP   172.27.192.193   <none>        5432/TCP   30m

The following is necessary for pgbench to work without asking for a password

 
# vim /root/.pgpass 
# chmod 600 /root/.pgpass 

and add the following line there
172.27.192.193:5432:*:userGQF:a75hXQYCsQfnS1LT

This is a PostgreSQL feature; the format is serviceIP:port:*:user:password
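Instead of copying the values by hand, the .pgpass line can also be assembled with a few oc commands – a minimal sketch, assuming the service and pod names shown above

# PGIP=$(oc get svc postgresql -o jsonpath='{.spec.clusterIP}')
# PGUSER=$(oc exec postgresql-1-qvxrr -- bash -c 'echo $POSTGRESQL_USER')
# PGPASS=$(oc exec postgresql-1-qvxrr -- bash -c 'echo $POSTGRESQL_PASSWORD')
# echo "${PGIP}:5432:*:${PGUSER}:${PGPASS}" >> /root/.pgpass
# chmod 600 /root/.pgpass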
Now we can run pgbench

# pgbench -h 172.27.192.193  -p 5432 -i -s 1 sampledb -U userS8L
creating tables...
10000 tuples done.
20000 tuples done.
30000 tuples done.
40000 tuples done.
50000 tuples done.
60000 tuples done.
70000 tuples done.
80000 tuples done.
90000 tuples done.
100000 tuples done.
set primary key...
vacuum...done.

# pgbench -h 172.27.192.193  -p 5432 -c 2 -j 2 -t 1  sampledb -U userS8L
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 2
number of threads: 2
number of transactions per client: 1
number of transactions actually processed: 2/2
tps = 142.755175 (including connections establishing)
tps = 167.147215 (excluding connections establishing)

Another way, suggested by Graham Dumpleton from our OCP team, is to run pgbench inside the PostgreSQL pod directly

 
# oc exec -i postgresql-1-qvxrr -- bash -c "pgbench -i -s 1 sampledb"
creating tables...
100000 of 1000000 tuples (10%) done (elapsed 0.04 s, remaining 0.39 s)
200000 of 1000000 tuples (20%) done (elapsed 0.10 s, remaining 0.40 s)
300000 of 1000000 tuples (30%) done (elapsed 0.15 s, remaining 0.36 s)
400000 of 1000000 tuples (40%) done (elapsed 0.20 s, remaining 0.30 s)
500000 of 1000000 tuples (50%) done (elapsed 0.25 s, remaining 0.25 s)
600000 of 1000000 tuples (60%) done (elapsed 0.31 s, remaining 0.20 s)
700000 of 1000000 tuples (70%) done (elapsed 0.36 s, remaining 0.15 s)
800000 of 1000000 tuples (80%) done (elapsed 0.41 s, remaining 0.10 s)
900000 of 1000000 tuples (90%) done (elapsed 0.47 s, remaining 0.05 s)
1000000 of 1000000 tuples (100%) done (elapsed 0.52 s, remaining 0.00 s)

# oc exec -i postgresql-1-qvxrr -- bash -c "pgbench -c 10 -j 2 -t 10 sampledb"
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 2
number of transactions per client: 10
number of transactions actually processed: 100/100
latency average: 14.361 ms
tps = 696.335188 (including connections establishing)
tps = 703.148347 (excluding connections establishing)

Both of these methods work fine and give equivalent results, though I think the latter approach is a bit better – it does not require setting up /root/.pgpass.
I wrote a small script which implements the second method from above. Check the script's readme for insights into how it can be used for pgbench testing.

Collect system statistics during benchmark runs

Knowing only how the PostgreSQL pod performs (from a tps point of view) gives just a partial picture of the benchmark. Besides this information, it is good to know

  • what the network traffic between application pods is
  • what IOPS load is directed at the storage network
  • how much memory is consumed on the OCP node hosting the PostgreSQL pod
  • the CPU usage during the test
  • and many other things that can be interesting to check during the test itself

Some of this information is saved to various places in /var/log/* (think /var/log/sar*), but some of it lives only as short-lived records in /proc, and it would be good to have a way to gather that too during test execution. Luckily there is the pbench tool, developed by Red Hat's perf/scale team, which can help collect system performance data during test execution. Pbench has many features, so take time to read the documentation in advance.

For me it is useful to see how the OCP node where the PostgreSQL pod is scheduled behaves during the test, and also to see the load on the storage subsystem and how it performs during the PostgreSQL load test.

Part of pbench is the script pbench-user-benchmark, which can accept the above-mentioned script as input and will collect system stats from the desired machines.

This requires pbench to be set up and working in advance; check the pbench repo for instructions on where to find the binaries and how to set up pbench.
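As a rough sketch – the exact commands and options depend on the pbench-agent version, so check the pbench documentation

# pbench-register-tool-set
# pbench-user-benchmark --config=pgbench-run1 -- ./pgbench_test.sh <options>
# pbench-move-results

pbench-register-tool-set registers the default collection tools (sar, iostat, ...) on the host, and pbench-move-results ships the collected data to the configured pbench server after the run.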

Create Jenkins task to execute benchmark test

The above works fine, but it is even better if the process is “Jenkins-ized”, that is, if there is a Jenkins job which runs the benchmark test based on input parameters. In that case it is necessary to take into account that pgbench_test.sh uses the oc tool to create and start the PVC/pod. At the time of this writing, kubectl is not covered.

If pbench-user-benchmark is used, ensure pbench is installed and working fine. Check the pbench GitHub for details.

If oc is run outside of the OCP cluster (e.g., by installing oc on a third machine and copying /root/.kube there), then it is important to make sure that oc can see the OCP cluster configuration and is able to create pods.
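A rough sketch of that setup – the hostname and paths are placeholders, adjust them to your environment

# scp master1:/root/.kube/config /root/.kube/config
# oc whoami
# oc get nodes

If oc whoami and oc get nodes work, oc can see the cluster; whether it may create pods still depends on the permissions of the copied credentials.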

Another option is to make an OCP master a Jenkins slave and bind the PostgreSQL Jenkins job to always execute on the OCP master acting, in this case, as the Jenkins slave.

Below are the steps to create a Jenkins job that fulfills this requirement. I assume there is a functional and working Jenkins server (instructions on how to install a Jenkins server on CentOS can be found here).

Jenkins job creation

In the Jenkins web UI: New Item -> enter a descriptive name -> Freestyle project -> OK, then fill in the necessary information.

Make sure that the job will run on a machine where the oc tool is present and available.

In the build section add

pbench-user-benchmark --config=${pbenchconfig} -- ${WORKSPACE}/postgresql/pgbench_test.sh -n ${namespace} -t ${transactions} -e ${template} -v ${vgsize} -m ${memsize} -i ${iterations} --mode ${mode} --clients ${clients} --threads ${threads} --storageclass ${storageclass}

After this, everything is prepared for starting the Jenkins job.
From the main Jenkins job console -> Build with Parameters -> Build.
If all is fine, the Jenkins console log will, after some time, report Finished: SUCCESS.

Happy Benchmark-Ing!

#cns, #jenkins, #kubernetes, #openshift, #performance, #pgbench, #postgresql

Dynamic storage provisioning for OCP persistent templates

OCP (OpenShift Container Platform) ships many templates – ephemeral and/or persistent ones. Change to the openshift project

# oc project openshift 
# oc get templates

to see the templates delivered by default, which you can use directly and without any further changes.

If you take a look at some of the persistent templates, you will notice that they have a PVC definition as shown below

 {
            "apiVersion": "v1",
            "kind": "PersistentVolumeClaim",
            "metadata": {
                "name": "${SERVICE_NAME}"
            },
            "spec": {
                "accessModes": [
                    "ReadWriteOnce"
                ],
                "resources": {
                    "requests": {
                        "storage": "${VOLUME_CAPACITY}"
                    }
                }
            }
        },
 

For a persistent template to work, one needs to provide a PVC with the specific name in advance, prior to executing oc new-app template_name. This works fine, but I find it problematic to create the PVC in advance.
This is easy to overcome by changing the existing template / creating a new one which supports dynamic storage provisioning via storage classes.

First, we need to locate the template we want to change in order to use dynamic storage provisioning.
Note: I assume there is already a storageclass in place.

  1. Edit the desired persistent template – e.g. let's take postgresql-persistent – and edit the PersistentVolumeClaim section as follows
# oc get template -n openshift postgresql-persistent -o json > postgresql-persistent_storageclass.json  

Edit postgresql-persistent_storageclass.json by changing the sections below

 
 "kind": "Template",
    "labels": {
        "template": "glusterfs-postgresql-persistent-template_storageclass"
    },

... rest of template ..... 

"name": "glusterfs-postgresql-persistent_storageclass", 

....... rest of template .... 

"selfLink": "/oapi/v1/namespaces/openshift/templates/glusterfs-postgresql-persistent_storageclass" 

.... .... rest of template .... 

Adapt the PersistentVolumeClaim section to support dynamic storage provisioning

 
{
            "apiVersion": "v1",
            "kind": "PersistentVolumeClaim",
            "metadata": {
                "name": "${DATABASE_SERVICE_NAME}",
                "annotations": {
                    "volume.beta.kubernetes.io/storage-class": "${STORAGE_CLASS}"
                        }
            },
            "spec": {
                "accessModes": [
                    "ReadWriteOnce"
                ],
                "resources": {
                    "requests": {
                        "storage": "${VOLUME_CAPACITY}"
                    }
                }
            }
        },

This adds a requirement for a new parameter, STORAGE_CLASS.

At the end of the template, in the parameters section, add this new parameter

{
            "name" : "STORAGE_CLASS",
            "description": "Storagecclass to use - here we expect storageclass name",
            "required": true,
            "value": "storageclassname"
        }

Save the file and create the new template

# oc create -f postgresql-persistent_storageclass.json

From this point we can use this new template to start the PostgreSQL service, and it will automatically allocate storage space from the specified storage class.

It is assumed that the storageclass is already configured and in place; you can use any storage backend which supports storage classes. In case you want to try CNS, you can follow the instructions on how to set up CNS storage.
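If no storage class exists yet, creating one for CNS looks roughly like the sketch below. The resturl, user and secret values are placeholders, so take the exact parameters from the CNS/heketi documentation for your environment

# cat <<EOF | oc create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cnsclass
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage.example.com:8080"
  restuser: "admin"
  secretName: "heketi-secret"
  secretNamespace: "default"
EOF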

 
# oc get storageclass
NAME           TYPE
cnsclass       kubernetes.io/glusterfs    

# oc new-project postgresql-storageclass
# oc new-app postgresql-persistent_storageclass -p STORAGE_CLASS=cnsclass

After the application is started

# oc get pod
NAME                 READY     STATUS    RESTARTS   AGE
postgresql-1-zdvq2   1/1       Running   0          52m
[root@gprfc031 templates]# oc exec postgresql-1-zdvq2 -- mount | grep pgsql
10.16.153.123:vol_72cd8ef33eee365d4c7f75cffaa1681b on /var/lib/pgsql/data type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@gprfc031 templates]# oc get pvc
NAME         STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
postgresql   Bound     pvc-514d56e8-e7bf-11e7-8827-d4bed9b390df   1Gi        RWO           cnsclass       52m

it mounts the volume from the storage backend defined in the storage class and starts using it.

#cns, #gluster, #k8s, #kubernetes, #openshift, #persistant-storage, #pvc

glustersummit2017

Glustersummit 2017 in Prague, Czechia finished today, and I can say it was one of the best conferences I have ever attended. This is my (probably) subjective feeling, but everything was top level, from the organization to the ideas presented during the talks.

I had the opportunity, together with my colleague Shekhar Berry, to present our work on the topic Scalability and Performance with Brick Multiplexing; the whole list of talks presented can be found in the Gluster Summit schedule.

The slides of our presentation can be found at this link.

The group photo is at this link.

#cns, #gluster, #glustersummit2017, #kubernetes, #openshift, #redhat

OCP metrics error message “Error from server: No API token found for service account metrics-deployer”

I wanted to recreate the OCP (OpenShift Container Platform) metrics and followed the same upstream process as many times before, but it kept failing with

Error from server: No API token found for service account "metrics-deployer", retry after the token is automatically created and added to the service account 

Huh, new trouble. Luckily, restarting the master services helped in this case

# systemctl restart atomic-openshift-master-controllers; systemctl restart atomic-openshift-master-api

This was a multi-master configuration, so it was necessary to restart the master services on all masters.
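A minimal sketch of doing that from one host, assuming passwordless ssh and placeholder master hostnames

# for m in master1 master2 master3; do ssh $m "systemctl restart atomic-openshift-master-controllers; systemctl restart atomic-openshift-master-api"; done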

I am just writing this down in the hope that Google will pick up the tags and it will help someone with the same issue.

Happy hacking!

#atomic-openshift-master-api, #atomic-openshift-master-controlers, #kubernetes, #metrics, #ocp, #openshift-container-platform, #openshift-metrics

OpenShift : Error from server: User “$user” cannot list all nodes/pods in the cluster

The OpenShift error messages below can be quite annoying; they appear if the current login is not system:admin.
Example error messages

# oc get pods 
No resources found.
Error from server: User "system:anonymous" cannot list pods in project "default"
root@dhcp8-176: ~ # oc get nodes 
No resources found.
Error from server: User "system:anonymous" cannot list all nodes in the cluster

Trying to log in as user admin will not help

# oc login -u admin 
Authentication required for https://dhcp8-144.example.net:8443 (openshift)
Username: admin
Password: 
Login successful.
You don't have any projects. You can try to create a new project, by running
    oc new-project 

root@dhcp8-176: ~ # oc get pods
No resources found.
Error from server: User "admin" cannot list pods in project "default"
root@dhcp8-176: ~ # oc get nodes 
No resources found.
Error from server: User "admin" cannot list all nodes in the cluster

To get rid of it, log in as system:admin

# oc login -u system:admin

What it does and which certificates it reads in order to succeed can be seen if the last command is run with --loglevel=10

# oc login -u system:admin --loglevel=10
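If oc is being run directly on a master, another option is to point oc at the cluster admin kubeconfig; the path below is the usual OCP 3.x default and may differ in your installation

# oc --config=/etc/origin/master/admin.kubeconfig get nodes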

#kubernetes, #openshift

etcd error message “etcd failed to send out heartbeat on time”

etcd is a distributed key-value store that provides a reliable way to store data across a cluster of machines (per 1 and 2). etcd is very sensitive to network delays – and not only network delays: any kind of overall sluggishness of the etcd cluster nodes can lead to functionality problems for the whole Kubernetes cluster.

By the time an OpenShift/Kubernetes cluster starts reporting error messages like the ones shown below, the cluster is already misbehaving: pod scheduling/deletion will not work as expected and the problems will be more than visible.

Sep 27 00:04:01 dhcp7-237 etcd: failed to send out heartbeat on time (deadline exceeded for 1.766957688s)
Sep 27 00:04:01 dhcp7-237 etcd: server is likely overloaded
Sep 27 00:04:01 dhcp7-237 etcd: failed to send out heartbeat on time (deadline exceeded for 1.766976918s)
Sep 27 00:04:01 dhcp7-237 etcd: server is likely overloaded

systemctl status etcd output

 systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2016-10-01 09:18:37 EDT; 5h 20min ago
 Main PID: 11970 (etcd)
   Memory: 1.0G
   CGroup: /system.slice/etcd.service
           └─11970 /usr/bin/etcd --name=dhcp6-138.example.net --data-dir=/var/lib/etcd/ --listen-client-urls=https://172.16.6.138:2379

Oct 01 14:38:55 dhcp6-138.example.net etcd[11970]: server is likely overloaded
Oct 01 14:38:56 dhcp6-138.example.net etcd[11970]: failed to send out heartbeat on time (deadline exceeded for 377.70994ms)
Oct 01 14:38:56 dhcp6-138.example.net etcd[11970]: server is likely overloaded
Oct 01 14:38:56 dhcp6-138.example.net etcd[11970]: failed to send out heartbeat on time (deadline exceeded for 377.933298ms)
Oct 01 14:38:56 dhcp6-138.example.net etcd[11970]: server is likely overloaded
Oct 01 14:38:58 dhcp6-138.example.net etcd[11970]: failed to send out heartbeat on time (deadline exceeded for 1.226630142s)
Oct 01 14:38:58 dhcp6-138.example.net etcd[11970]: server is likely overloaded
Oct 01 14:38:58 dhcp6-138.example.net etcd[11970]: failed to send out heartbeat on time (deadline exceeded for 1.226803192s)
Oct 01 14:38:58 dhcp6-138.example.net etcd[11970]: server is likely overloaded
Oct 01 14:39:07 dhcp6-138.example.net etcd[11970]: the clock difference against peer f801f8148b694198 is too high [1.078081179s > 1s]

# systemctl status etcd -l will also show similar messages; check these too.

The etcd configuration file is located at /etc/etcd/etcd.conf and has content similar to the one below. This one is from RHEL; other OSes may have it slightly different.

ETCD_NAME=dhcp7-237.example.net
ETCD_LISTEN_PEER_URLS=https://172.16.7.237:2380
ETCD_DATA_DIR=/var/lib/etcd/
ETCD_HEARTBEAT_INTERVAL=6000
ETCD_ELECTION_TIMEOUT=30000
ETCD_LISTEN_CLIENT_URLS=https://172.16.7.237:2379

ETCD_INITIAL_ADVERTISE_PEER_URLS=https://172.16.7.237:2380
ETCD_INITIAL_CLUSTER=dhcp7-241.example.net=https://172.16.7.241:2380,dhcp7-237.example.net=https://172.16.7.237:2380,dhcp7-239.example.net=https://172.16.7.239:2380
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-1
ETCD_ADVERTISE_CLIENT_URLS=https://172.16.7.237:2379


ETCD_CA_FILE=/etc/etcd/ca.crt
ETCD_CERT_FILE=/etc/etcd/server.crt
ETCD_KEY_FILE=/etc/etcd/server.key
ETCD_PEER_CA_FILE=/etc/etcd/ca.crt
ETCD_PEER_CERT_FILE=/etc/etcd/peer.crt
ETCD_PEER_KEY_FILE=/etc/etcd/peer.key

The parameters in the above configuration file that we want to change are ETCD_HEARTBEAT_INTERVAL and ETCD_ELECTION_TIMEOUT. There is no value that fits everyone; it is necessary to experiment with different values and find out what works best. For most cases the defaults (500/2500) will be fine.

After changing /etc/etcd/etcd.conf, do not forget to restart the etcd service

# systemctl restart etcd

The issues below, affecting etcd nodes, can lead to the problem described in this post

  • network latency
  • storage latency
  • combination of network latency and storage latency

If network latency is low, then check the storage used by the Kubernetes/OpenShift etcd servers. Raising the timeouts is a workaround for the case when the root cause has been discovered and the changes described in this post are performed to mitigate the issue because no other option is possible. The first and better solution is to solve the issue at its root by fixing the problematic subsystem(s).
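A quick, rough way to get a feel for both – a sketch assuming fio is installed on the etcd node; the peer hostname is a placeholder and the fio job mimics etcd's small fdatasync-heavy writes

# ping -c 20 <other-etcd-node>
# fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/etcd --size=22m --bs=2300 --name=etcd-disk-check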

In my particular case the storage subsystem was slow, and it was not possible to change that without a bunch of $$$.

References: etcd documentation

#etcd, #k8s, #kubernetes, #linux, #openshift, #redhat, #storage