CNS (Container Native Storage) is a way to containerize storage so that it runs as pods inside OCP (OpenShift Container Platform). The documentation is very detailed, so start there if you want to learn more.
In short, it requires a minimum number of nodes to serve as the base for the CNS pods: three nodes are the minimum, and six, nine, and so on will work fine too.
If one decides to run multiple CNS clusters, e.g. two three-node clusters instead of one six-node cluster, that will work too, and the text below describes how to achieve it.
Having two separate three-node clusters is, in my opinion, a better approach than having one big (let's say six-node) cluster. Multiple clusters help to separate and/or organize the block devices that belong to a particular cluster, and users can have different storage classes bound to different CNS storage backends.
Let's say we have the following organization of devices:
cns cluster1 [ node1, node2, node3 ]
node1: /dev/sdb
node2: /dev/sdb
node3: /dev/sdb
cns cluster2 [ node4, node5, node6 ]
node4: /dev/nvme0n1
node5: /dev/nvme0n1
node6: /dev/nvme0n1
CNS uses daemon sets and node labels to decide where to start the CNS pods, so pods will be started on nodes carrying a specific label, which is provided during CNS cluster setup.
The default label is storagenode=glusterfs, and it is applied to nodes, not namespaces. This means pods from the other cluster will end up on any node carrying that label, even if they are not supposed to be there.
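To double check which nodes a CNS daemonset will target, you can list the nodes carrying the label and inspect the daemonset's nodeSelector. The daemonset name glusterfs below is an assumption based on a default cns-deploy run and may differ in your deployment:

# oc get nodes -l storagenode=glusterfs
# oc get daemonset glusterfs -n cnscluster1 -o jsonpath='{.spec.template.spec.nodeSelector}'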
Steps to build cns cluster1
- Get the CNS packages; if you are a Red Hat customer, check the documentation for the proper RHN channels
- craft the topology file describing the nodes and devices of the cluster (a minimal sketch follows below)
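For reference, a minimal topologyfile1.json sketch for cns cluster1 could look like the following. The storage IP addresses and zone numbers are illustrative assumptions and must be adjusted to your environment:

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": { "manage": [ "node1" ], "storage": [ "192.168.10.1" ] },
            "zone": 1
          },
          "devices": [ "/dev/sdb" ]
        },
        {
          "node": {
            "hostnames": { "manage": [ "node2" ], "storage": [ "192.168.10.2" ] },
            "zone": 2
          },
          "devices": [ "/dev/sdb" ]
        },
        {
          "node": {
            "hostnames": { "manage": [ "node3" ], "storage": [ "192.168.10.3" ] },
            "zone": 3
          },
          "devices": [ "/dev/sdb" ]
        }
      ]
    }
  ]
}

topologyfile2.json for cns cluster2 follows the same pattern, just with node4, node5, node6 and /dev/nvme0n1 as the device.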
# oc new-project cnscluster1
# cns-deploy -n cnscluster1 --block-host 500 -g topologyfile1.json -y
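Once cns-deploy finishes, it is worth verifying that the glusterfs and heketi pods are running in the right namespace:

# oc get pods -n cnscluster1 -o wide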
Steps to build cns cluster2
# oc new-project cnscluster2
# cns-deploy -n cnscluster2 --daemonset-label cnsclusterb --block-host 500 -g topologyfile2.json -y
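The value passed to --daemonset-label becomes the value of the storagenode label on the nodes of the second cluster, so the two clusters can be told apart by label:

# oc get nodes -l storagenode=cnsclusterb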
I mentioned above that the default label for CNS nodes is storagenode=glusterfs. In order to have two CNS clusters, it is necessary to use a different --daemonset-label for the second cluster. This ensures that the nodes of the second cluster get a different label, and based on that label the daemonset decides where to start the CNS pods of the second cluster.
With both clusters up and running, the only remaining thing is to craft proper storage classes (sketched below), and we are done: two different storage classes for two different storage backends.
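A minimal sketch of two such storage classes, assuming the kubernetes.io/glusterfs provisioner. The resturl values must point at the heketi service (or route) of the respective cluster; the URLs, secret names, and class names below are illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cns-cluster1
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-cnscluster1.example.com"   # heketi endpoint of cluster1, illustrative
  restuser: "admin"
  secretNamespace: "cnscluster1"
  secretName: "heketi-cnscluster1-admin-secret"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cns-cluster2
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-cnscluster2.example.com"   # heketi endpoint of cluster2, illustrative
  restuser: "admin"
  secretNamespace: "cnscluster2"
  secretName: "heketi-cnscluster2-admin-secret"

Users then simply reference cns-cluster1 or cns-cluster2 as the storageClassName in their persistent volume claims.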