ceph rbd block device as persistent storage for openshift

After the Fedora 23 – ceph installation and Fedora 23 – Openshift installation setups, it is now time to hook the openshift environment to the CEPH storage backend

Openshift pods will use CEPH rados block devices as persistent storage; one option to achieve this is to follow the steps below:

  • create a ceph pool and the desired number of images on top of it; this can be done manually or with the ceph-pool-setup.sh script. If ceph-pool-setup.sh is used, read the README before running it.
  • create a ceph-secret file. As an example it is possible to use ceph-secret
  • define a persistent volume and a persistent volume claim. Yaml example files: ceph-pv and ceph-pv-claim
  • create a pod file, adapting it to use the ceph pv and pv claim: pod-pv-pvc-ceph
  • in the above examples it is necessary to change variables to suit different environments ( ceph pool name, ceph monitor(s) ip addresses … ) – a minimal sketch of these files follows right after this list
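For orientation, here is a minimal, untested sketch of what these files can look like, written as shell heredocs; every value ( monitor address, pool and image names, the key ) is a placeholder to adapt. The secret key is the base64-encoded output of ceph auth get-key client.admin.

    # cat > ceph-secrets.yaml <<'EOF'
    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-secret
    data:
      key: <base64 ceph key here>
    EOF

    # cat > ceph-pv.yaml <<'EOF'
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: ceph-pv
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteOnce
      rbd:
        monitors:
          - 192.168.122.10:6789
        pool: mypool
        image: image-0
        user: admin
        secretRef:
          name: ceph-secret
        fsType: ext4
      persistentVolumeReclaimPolicy: Retain
    EOF

    # cat > ceph-pv-claim.yaml <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ceph-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    EOF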

    Once all is in place, run the below first on the Ceph cluster and then on the Openshift master; the master will create a pod which will in turn start using the rbd as persistent storage

    Ceph side

    # ./ceph-pool-setup.sh -a c -p mypool -i 1 -r 3 -s 1
    

    This will create a three-way replicated ceph pool named mypool, with one image on top of it with a size of 1 GB

    Openshift side

    # oc create -f ceph-secrets.yaml
    # oc create -f ceph-pv.yaml 
    # oc create -f ceph-pv-claim.yaml
    # oc create -f pod-pv-pvc-ceph.json
    
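    Whether the claim got bound to the volume can be checked before creating the pod:

    # oc get pv
    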

    If all is fine, the pod should start and mount the rbd inside the pod, with the ext4 file system preformatted

    # oc rsh pod 
    # mount | grep rbd 
    /dev/rbd0 on /mnt/ceph type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
    

    This setup enables openshift pods to use a ceph rbd device as persistent storage; if a pod is removed and started on some other openshift node, it will see the same data, as long as it has access to the rbd device which was used before the pod was deleted. As the name says, this is a persistent volume and it should persist across pod re-creation.


    #ceph, #ceph-rbd, #ceph-storage, #kubernetes, #openshift

    install openshift origin / OS Fedora 23

    Installing Openshift origin on Fedora 23 is shown below; overall it is not a difficult task to get a test environment for openshift origin which can then be used for testing
    ( really only for testing – as this is going to be master / node on one machine, in a kvm guest )

    Following the steps below will lead to a test openshift origin environment.

    Openshift origin publishes its bits at github openshift releases; below is what I did to get it working in under 10 minutes.

    # dnf install -y docker; systemctl enable docker; systemctl start docker 
    # mkdir /root/openshift
    # cd /root/openshift
    # wget https://github.com/openshift/origin/releases/download/v1.1.1/openshift-origin-server-v1.1.1-e1d9873-linux-64bit.tar.gz
    # tar -xaf openshift-origin-server-v1.1.1-e1d9873-linux-64bit.tar.gz
    # cd openshift-origin-server-v1.1.1-e1d9873-linux-64bit
    # ./openshift start &
    

    After this, beside the files delivered by unpacking the source archive, the openshift configuration files will be created in the openshift directory

    # ls -l 
    drwxr-xr-x. 4 root root        46 Jan 19 18:44 openshift.local.config
    drwx------. 3 root root        20 Jan 19 20:03 openshift.local.etcd
    drwxr-x---. 4 root root        33 Jan 19 18:44 openshift.local.volumes
    

    From here, it is necessary to export the paths to keys and certificates

    # export KUBECONFIG="$(pwd)"/openshift.local.config/master/admin.kubeconfig
    # export CURL_CA_BUNDLE="$(pwd)"/openshift.local.config/master/ca.crt
    # chmod +r "$(pwd)"/openshift.local.config/master/admin.kubeconfig
    
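    The tarball also ships the oc client, so a quick sanity check that it can talk to the master ( run from the unpacked directory ):

    # ./oc get nodes
    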

    That is it! Follow rc-local Fedora 23 to make it start on boot, or write systemd unit files using openshift-master-service and openshift-node-service as starting points – which should work with small tweaks, for example like the sketch below
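
    As an illustration, here is a minimal, untested all-in-one unit sketch; the paths assume the layout from the steps above and the unit name is made up:

    # cat /etc/systemd/system/openshift.service
    [Unit]
    Description=OpenShift Origin all-in-one
    After=docker.service network.target
    Requires=docker.service

    [Service]
    WorkingDirectory=/root/openshift/openshift-origin-server-v1.1.1-e1d9873-linux-64bit
    ExecStart=/root/openshift/openshift-origin-server-v1.1.1-e1d9873-linux-64bit/openshift start
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

    # systemctl daemon-reload; systemctl enable openshift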

    #openshift, #openshift-origin

    prevent NetworkManager from updating /etc/resolv.conf

    NetworkManager is going to update /etc/resolv.conf; if you do not want it to do that, then set /etc/resolv.conf to the desired value, edit /etc/NetworkManager/NetworkManager.conf and add dns=none in the main section, like below

    [main]
    plugins=ifcfg-rh,ibft
    dns=none
    

    This will prevent NetworkManager from updating /etc/resolv.conf.
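
    The change takes effect after restarting the service:

    # systemctl restart NetworkManager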

    #dns-configuration-fedora, #fedora-2, #networkmanager

    copy/edit partition table with sfdisk

    sfdisk is a nice tool for playing with disk partitions. It has many features and is very useful when it is necessary to make changes to disk partitions. Before doing anything with sfdisk I recommend reading the sfdisk man page to get a basic picture of what sfdisk is and what it can be used for. If not used carefully it can be a dangerous command, especially if pointed at the wrong device, so … think before running it.
    I needed it in a case where it was necessary to clone the partition table of one sdcard to another ( fdisk can do this too ); a direct clone is shown right below.
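
    For two cards of the same size the dump can be piped straight from one device to the other ( /dev/sdX and /dev/sdY are placeholders – double check them first ):

    # sfdisk --dump /dev/sdX | sfdisk /dev/sdY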

    To save the partition table, I did

     
    # sfdisk --dump /dev/sdb > 16gcard
    

    Now the dump was written into the 16gcard file

    # cat 16gcard
    label: dos
    label-id: 0x00000000
    device: /dev/sdb
    unit: sectors
    
    /dev/sdb1 : start=        8192, size=    31108096, type=c
    

    This is what I need; however, the new card is double the size, i.e. 32 GB, and writing the above to the new card would occupy just the first 16 GB. Luckily, sfdisk is a very versatile tool and it allows editing the partition dump and then writing it back to disk. Open 16gcard in a text editor ( eg. Vim ) and edit the dump file. The original size is 31108096 sectors of 512 B; the new card has 61407232 sectors in total ( see the fdisk output below ), so keeping the start at 8192 the partition can grow to 61407232 – 8192 = 61399040 sectors, giving the new dump file

    # cat 16gcard 
    label: dos
    label-id: 0x00000000
    device: /dev/sdb
    unit: sectors
    
    /dev/sdb1 : start=        8192, size=    61399040, type=c
    

    Now I can write it to the new card

     
    # sfdisk /dev/sdb < 16gcard
    

    and fdisk -l shows

    #  fdisk -l /dev/sdb
    Disk /dev/sdb: 29.3 GiB, 31440502784 bytes, 61407232 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x00000000
    
    Device     Boot Start      End  Sectors  Size Id Type
    /dev/sdb1  *     2048 61407231 61405184 29.3G  c W95 FAT32 (LBA)
    

    Which is the very same partition table as the one I had on the old card, except the last sector, which is adapted to suit the size of the new card.

    #linux, #sfdisk, #storage

    for all deleted … kill all processes

    My / file system got full; after checking everything, as a last resort I turned to lsof to check whether there were some deleted files still held open by some process, and yes, there were many of them!
    Sometimes a simple for loop can save a lot of time…
    # for m in $(lsof | grep delete | awk '{print $2}'); do kill -9 $m; done
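
    Before killing anything it can be worth listing the candidates first; lsof can show deleted-but-open files directly via the link count filter:

    # lsof +L1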

    #lsof

    docker complains about : cannot remove …. device or resource busy

    Sometimes docker causes trouble even when it is stopped and all docker related packages are deleted from the system. One example of such an issue is

    # rpm -qa | grep docker
    #
    # rm -rf docker/
    rm: cannot remove ‘docker/devicemapper/mnt/c1b69563d2b817b729e875f50f9f5d29206d15f65d823c864c8444aa3c6030dd’: Device or resource busy
    rm: cannot remove ‘docker/containers/c1b69563d2b817b729e875f50f9f5d29206d15f65d823c864c8444aa3c6030dd/secrets’: Device or resource busy
    rm: cannot remove ‘docker/volumes/efd99751dce0cf97dd2a2f48ecc6ffa05d41b30938e08c9592b436bc3f858315/_data/secrets’: Device or resource busy
    rm: cannot remove ‘docker/volumes/efd99751dce0cf97dd2a2f48ecc6ffa05d41b30938e08c9592b436bc3f858315/_data/screen’: Device or resource busy

    Why docker does not clean this stuff up once its packages are deleted is another topic! In this case, something is holding these files and lsof will not help you :). The long string c1b69563d2b817b729e875f50f9f5d29206d15f65d823c864c8444aa3c6030dd looks like a docker container ID. Every process which entered the container's mount namespace ( eg. via nsenter, or some command sent into the container ) will show that in its /proc/$PID/mountinfo.
    In this case I did

    # grep -l c1b69563 /proc/*/mountinfo
    /proc/8441/mountinfo
    /proc/8442/mountinfo
    # grep -l efd997 /proc/*/mountinfo
    /proc/8441/mountinfo
    /proc/8442/mountinfo

    and from the above we see that PIDs 8441 and 8442 are still alive and holding references to files in /var/lib/docker.

    # ps -f 8441
    UID        PID  PPID  C STIME TTY      STAT   TIME CMD
    root      8441 18680  0 04:16 pts/1    S      0:00 nsenter -m -u -n -i -p -t 19012

    and killing the process

    # kill -9 8441

    released these and rm worked.
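
    The same dance as a one-shot loop ( the grep pattern is the container ID taken from the error messages; review what ps prints before letting the kill run ):

    # for p in $(grep -l c1b69563 /proc/*/mountinfo | cut -d/ -f3); do ps -f $p; kill -9 $p; done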

    If you now wonder why I had to remove /var/lib/docker, read changing docker storage backend – you have to do that if you want to switch between storage backends ( loop lvm -> direct lvm -> overlay -> btrfs and vice versa ).

    #docker, #docker-storage, #docker-storage-setup, #linux, #nsenter

    Jenkins CI server installation on CentOS 7

    Jenkins is a nice CI tool and it is easy to install and set up. Below are the steps for CentOS 7. Assuming you already have CentOS 7 installed and are able to connect to the internet, do the below in order to get the Jenkins server installed and running

    Get the Jenkins repository information and import the key

    wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
    rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key

    Install the packages, start the service and open the Jenkins port

    yum install -y java-1.7.0-openjdk jenkins
    systemctl start jenkins
    systemctl enable jenkins
    firewall-cmd --zone=public --add-port=8080/tcp --permanent
    firewall-cmd --zone=public --add-service=http --permanent
    firewall-cmd --reload

    After this you can access the jenkins web interface via http://server_hostname:8080, where server_hostname is the output of $(hostname -s).
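    A quick check from the server itself that the service answers:

    curl -sI http://localhost:8080 | head -1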
    Further tweaking can be done via Manage Jenkins -> then picking what you want to change. Installing some plugins – like the git related ones – can be helpful. rpm -ql jenkins will give the list of files which are part of the jenkins package installed in the step above, from where you can investigate it further – and eventually change some default parameters ( check /etc/sysconfig/jenkins for details )

    #centos, #ci, #continuous-integration, #jenkins, #linux