Kubernetes Cluster setup on CentOS


Kubernetes Cluster setup on CentOS isn’t as difficult as it used to be, back when we had to perform major OS changes to build a proper environment. In this short tutorial we will guide you step by step through setting up a simple Kubernetes Cluster based on CentOS 7.5 and Kubernetes 1.12.1. Like any other cluster, a Kubernetes Cluster should have an odd number of nodes (3, 5, 7 and so on), but in this particular tutorial we will stick to three: one master node and two worker nodes. You can use either Virtual Machines or Bare Metal servers for this tutorial; the result will be the same as long as all three machines are up and running with CentOS installed and patched.

Table of Contents

Kubernetes Cluster Overview
CentOS 7 preparation for Kubernetes Cluster environment
OS configuration for Kubernetes Cluster
Docker and Kubernetes repos setup
Docker and Kubernetes installation on CentOS
Kubernetes Nodes Configuration
Troubleshooting Kubernetes Cluster

Kubernetes Cluster Overview

In our scenario we will use three nodes, as mentioned in the introduction. These nodes may be VMs or Bare Metal machines; it really doesn’t matter, it’s up to you. We will use the references below throughout the tutorial, so let’s have a quick overview of the node names, IP addresses and, most importantly, their roles within the cluster:

Hostname      IP Address   Role
KUBERNETES01               Master Node
KUBERNETES02               Worker Node
KUBERNETES03               Worker Node

CentOS 7 preparation for Kubernetes Cluster environment

Assuming that all minimum operating system requirements are met, we can start by renaming our nodes accordingly. We will do that with CentOS’s own hostnamectl utility; thanks to CentOS 7 this is now quick and easy, one single command and we are done. In this tutorial we will use our own naming convention, but please feel free to use any names you want for your nodes; this won’t have any impact on the final result of our Kubernetes tutorial.

On our first node, let’s run the hostnamectl command as shown below to change its name:

$ hostnamectl set-hostname KUBERNETES01

Nice. Let’s now move to the second node and do pretty much the same, using a different name this time:

$ hostnamectl set-hostname KUBERNETES02

And finally, let’s rename our third node as in the example below:

$ hostnamectl set-hostname KUBERNETES03

If you still see the old hostname, don’t worry, that is expected since you haven’t logged out of the server yet. Log out and back in and you will see the change applied; it will persist even if you decide to reboot the server for whatever reason.
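As a side note, the reason the change survives a reboot is that hostnamectl writes the static name to /etc/hostname, which is read again at boot time. You can confirm this immediately without logging out:

```shell
# hostnamectl stores the static hostname in /etc/hostname (read at boot),
# which is why the rename persists. Check it directly:
cat /etc/hostname
```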

Carrying on with our CentOS 7 preparation for the Kubernetes Cluster environment, we need to make sure that the OS is up to date and that a few utilities needed for our Kubernetes stack are installed.

We’ll be using the yum utility, the default package manager for the CentOS family, to make sure our operating system is properly updated and patched, so please run the next yum command on all three nodes:

$ yum update

Now we have to ensure that the following tools are available on our fresh CentOS operating system, once again on all three nodes:

$ yum install wget yum-utils device-mapper-persistent-data lvm2

We are done with the OS preparation for our Docker and Kubernetes cluster environment and can move to the next step, where we have to configure the OS.

OS configuration for Kubernetes Cluster

In this step we have to make sure that the OS layer won’t stop or interfere with our Docker and Kubernetes stack, so we must start by disabling the firewalld service on all three nodes. For simplicity, we won’t use any software firewall on these nodes in this tutorial. You can place a hardware or software firewall in front of this stack, but as we said, for simplicity we’ll run without one for now; we will cover Kubernetes and Docker security in a different article.

$ systemctl disable firewalld && systemctl stop firewalld

The next step is to disable SELinux; we can do this by running the next command in our terminal window:

$ setenforce 0

To make this change permanent we can edit the SELinux config file directly; this prevents SELinux (Security-Enhanced Linux) from using the access control mechanism built into the kernel. So let’s edit the config file as shown in the example below:

$ vi /etc/selinux/config

Find and replace the line SELINUX=enforcing with the one listed below:

SELINUX=disabled

That is all we have to change in order to stop SELinux from enforcing access control.
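If you prefer not to edit the file by hand, the same change can be made with a one-line sed command. The sketch below runs against a scratch copy so it is safe to try anywhere; on a real node you would point it at /etc/selinux/config instead:

```shell
# Non-interactive equivalent of the vi edit (demonstrated on a scratch copy;
# on a real node, replace "$cfg" with /etc/selinux/config).
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"    # prints: SELINUX=disabled
rm -f "$cfg"
```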

Kubernetes doesn’t allow any swap usage, a fair choice we could say: who likes swap? We must disable it as well, which we can do on a live system with the next command:

$ swapoff -a

All done, but exactly as with SELinux, swap partitions can show up again if for whatever reason we need to reboot one of the nodes. So let’s make sure that swap will never be used again by editing the fstab file as shown below:

$ vi /etc/fstab

Find and comment out any swap references from this file, below you can see an example:

# /dev/mapper/centos-swap swap                    swap    defaults        0 0
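This edit can also be scripted. The sketch below comments out any fstab line that mentions a swap mount; it is demonstrated on a scratch copy so it is safe to run anywhere, and on a real node you would target /etc/fstab instead:

```shell
# Comment out swap entries without opening an editor (scratch copy shown;
# use /etc/fstab on a real node).
fstab=$(mktemp)
printf '%s\n' '/dev/mapper/centos-root /    xfs  defaults 0 0' \
              '/dev/mapper/centos-swap swap swap defaults 0 0' > "$fstab"
sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' "$fstab"
cat "$fstab"    # the swap line is now commented out, the root line untouched
rm -f "$fstab"
```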

Well done, we have now completed the OS preparation and configuration for our Docker and Kubernetes environment.

Docker and Kubernetes repos setup

In this step we have to configure the repositories for Docker CE (Community Edition) and Kubernetes, and soon after we’ll proceed with the installation. Let’s start by adding the docker-ce repo first, as shown below:

$ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

For Kubernetes we will have to create the repo file manually, so let’s open vi and create the kubernetes.repo file with the command below in your terminal window:

$ vi /etc/yum.repos.d/kubernetes.repo

With the kubernetes.repo file open, please add the next lines:

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

Save and close the file; we are done with the repos and will move to the next step, where we install Docker CE and Kubernetes.

Docker and Kubernetes installation on CentOS

With all base packages and repos in place, we can proceed with the Docker and Kubernetes installation on CentOS 7, once again using the yum utility as shown in the example below, where we install four packages with one single command on all three nodes:

$ yum install -y docker-ce kubelet kubeadm kubectl

Once all necessary packages for our Docker and Kubernetes environment are installed, we can move to the next step, where we have to make sure that Docker uses the cgroup (control group) driver expected by the kubelet, and that the kernel networking parameters required by Kubernetes are set. Let’s perform the changes as follows, once again on each individual node of our cluster:

$ mkdir /etc/docker
$ cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

$ cat > /etc/sysctl.d/kubernetes.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ modprobe br_netfilter
$ sysctl --system

We have everything in place now, so we can simply start Docker first, and we will also make sure that it starts automatically if one of our nodes gets rebooted:

$ systemctl enable docker && systemctl start docker

Same as above but this time for our kubelet service:

$ systemctl enable kubelet && systemctl start kubelet

Kubernetes Nodes Configuration

Now that we have managed to install, enable and start the major services for our Kubernetes Cluster, let’s start playing with Kubernetes. First we initialise our master node, KUBERNETES01, as in the example below, where we specify the network subnet that will be used for pods:

$ kubeadm init --apiserver-advertise-address= --pod-network-cidr= --control-plane-endpoint=kubernetes01.mydomain.com

A successful run produces output similar to the listing below, where we can clearly see that the master node has been successfully initialised:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 1rm8m1.q1tpxazoaurv9c4y --discovery-token-ca-cert-hash sha256:d60fd884735ed86a8755b28f6a79abf35b860b9a127dc88512fbc935b147a39e

Let’s try to get the list of nodes in our cluster:

$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

The output obviously doesn’t look right. In order to get the node list we have to make some changes that were also highlighted on the master node when we first initialised it, so let’s run the following:

$ mkdir -p $HOME/.kube
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config
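As a side note, when working as root you can also point kubectl at the admin config directly instead of copying it. This only lasts for the current shell session, so the copy shown above remains the persistent approach:

```shell
# Session-only alternative to copying admin.conf (root user assumed):
export KUBECONFIG=/etc/kubernetes/admin.conf
```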

By executing kubectl get nodes one more time we should get back an output like this:

$ kubectl get nodes

NAME          STATUS     ROLES    AGE   VERSION
KUBERNETES01  NotReady   master   28m   v1.12.1
KUBERNETES02  NotReady   <none>   23m   v1.12.1
KUBERNETES03  NotReady   <none>   23m   v1.12.1

If we want to keep an eye on this from a different terminal window, we can simply invoke the watch command like this:

$ watch -n1 -d "kubectl get nodes"

We can now see what's going on within our cluster every second.

On each worker node, KUBERNETES02 and KUBERNETES03, we have to run the join command, instructing these nodes to follow the master node:

$ kubeadm join --token 1rm8m1.q1tpxazoaurv9c4y --discovery-token-ca-cert-hash sha256:d60fd884735ed86a8755b28f6a79abf35b860b9a127dc88512fbc935b147a39e

Once the join command has been executed, we should get a response similar to this in our terminal window:

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
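If you ever lose the --discovery-token-ca-cert-hash value printed above, it can be recomputed from the cluster CA certificate, which lives at /etc/kubernetes/pki/ca.crt on the master. The sketch below uses a throwaway self-signed certificate so the openssl pipeline can be seen in action; on the master you would feed it ca.crt instead:

```shell
# Recompute the discovery token CA cert hash: the sha256 digest of the CA's
# public key in DER form. A throwaway self-signed cert stands in for
# /etc/kubernetes/pki/ca.crt here.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null
openssl x509 -pubkey -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex
rm -rf "$dir"
```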

But that's not all: we still have to deploy the network layer, Weave Net, and this should be done from our master node, KUBERNETES01:

$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Give it a few seconds and once deployed we should be able to get the correct status across our cluster:

$ kubectl get nodes

NAME          STATUS   ROLES    AGE   VERSION
KUBERNETES01  Ready    master   42m   v1.12.1
KUBERNETES02  Ready    <none>   38m   v1.12.1
KUBERNETES03  Ready    <none>   37m   v1.12.1

Troubleshooting Kubernetes Cluster

If for some reason you get the error listed below, no worries; it can be fixed by copying the kube config back into place:

$ kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

The quick fix, as we said before, is simply to copy the config again, as shown here:

$ mkdir -p $HOME/.kube
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite ‘/root/.kube/config’? y
$ chown $(id -u):$(id -g) $HOME/.kube/config

Now our cluster should be visible again:

$ kubectl get nodes
NAME          STATUS     ROLES    AGE     VERSION
KUBERNETES01  Ready      master   4m31s   v1.12.1
KUBERNETES02  Ready      <none>   111s    v1.12.1
KUBERNETES03  Ready      <none>   111s    v1.12.1

