Change Kubernetes Pods CIDR

Changing the Kubernetes Pods CIDR (Classless Inter-Domain Routing) is our next tutorial, where we will learn in just a few easy steps how to change the network mask / range of our Kubernetes environment. Kubernetes is considered one of the most mature and stable platforms for Docker orchestration – at the time of writing this tutorial – and is used in production environments by many big names in the tech industry. Mastering Kubernetes is not an easy task: it takes time, requires a lot of practice, and is well known for having a steep learning curve. Here, in this tutorial, we will explain in a few easy-to-follow steps how to fix one of the most common issues encountered when working with Kubernetes: extending the Pods CIDR range.

Table of contents

Context
Check cluster CIDR configuration
Change Kubernetes Pods CIDR
Restart Kubelet Service
Verify Pods CIDR Subnet

Context

Assume that we already have a three-node Kubernetes cluster up and running. At some point we get a strange error complaining about CIDRNotAvailable, more precisely saying that there are no remaining CIDRs left to allocate in the accepted range.

A complete error line could look similar to this:

I1219 15:01:59.149836 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kubernetes-node01", UID:"a039216a-1a2f-4e1f-b435-17ebc7741c89", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'CIDRNotAvailable' Node kubernetes-node01 status is now: CIDRNotAvailable

Usually followed by something like this:

E1219 15:02:00.784735 1 controller_utils.go:254] Error while processing Node Add/Delete: failed to allocate cidr from cluster cidr at idx:0: CIDR allocation failed; there are no remaining CIDRs left to allocate in the accepted range

This is quite a common issue when, for example, Kubernetes was not properly configured during the initial install; more precisely, the Kubernetes Pod network address space was under-provisioned. Fortunately, this is quite easy to fix: it requires only some minor changes on the Master node, which we will walk through in the next steps.
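
Before making any changes, we can confirm that this is what is happening by filtering the cluster events by reason (assuming the relevant events have not yet expired; by default they are only retained for about an hour):

kubectl get events --field-selector reason=CIDRNotAvailable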

Check cluster CIDR configuration

Checking the cluster CIDR configuration is our first step, in order to determine the current CIDR configuration of our Kubernetes cluster. Once we know these details, we can proceed and update the configuration accordingly. There are two ways of getting the cluster CIDR details, and we will use both of them, so let's start with the first one. Assuming that we are already logged in to our Kubernetes Master Node, let's run the next command in our terminal window:

ps -ef | grep "cluster-cidr"

A successful output will look similar to the one listed below:

root     20618 20587  1 Dec19 ?        00:19:21 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.0.0.0/24 --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --node-cidr-mask-size=24 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-credentials=true
root     29017 28856  0 18:33 pts/0    00:00:00 grep --color=auto cluster-cidr

In this output we have to check the value of --cluster-cidr=, which in this particular case is:

--cluster-cidr=10.0.0.0/24

So now we know for sure that our Kubernetes cluster's initial CIDR configuration was set up with a /24 subnet.
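
As an alternative to ps, on a kubeadm-based installation (which the manifest path used later in this tutorial implies) the same flag can be read straight from the controller manager's static pod manifest:

grep cluster-cidr /etc/kubernetes/manifests/kube-controller-manager.yaml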

Let's confirm this once again by using kubectl's output:

kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'

The above kubectl command should return something like this:

10.0.0.0/24

Both checks show the same result: our cluster runs on a 10.0.0.0/24 subnet. We also know for a fact, according to the error logs, that this /24 range is not enough. Note the --node-cidr-mask-size=24 flag in the ps output above: each node is handed its own /24 slice of the cluster CIDR, so a /24 cluster CIDR can serve exactly one node, and the remaining two nodes cannot be allocated a Pod range at all. This is exactly what the CIDRNotAvailable events are telling us.
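
If we want to see which node owns which range, and which nodes are still waiting for one, a per-node view helps; this uses kubectl's standard custom-columns output:

kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR

Nodes that could not be allocated a range will show <none> in the PODCIDR column.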

Change Kubernetes Pods CIDR

Changing the Kubernetes Pods CIDR is the next step, where we update the initial configuration of our cluster. This particular setting is stored in the kube-controller-manager.yaml file, the static pod manifest responsible for the controller manager's configuration. So let's open and edit this file on our Kubernetes Master Node:

vi /etc/kubernetes/manifests/kube-controller-manager.yaml

The spec section contains the cluster-cidr setting, which has to be changed in order to extend our network address space, so locate the --cluster-cidr line as shown in the example below:

spec:
  containers:
  - command:
    - kube-controller-manager
    ...
    - --cluster-cidr=10.0.0.0/24
    ...

Since we have a three-node Kubernetes cluster, we can change the value to something like this:

--cluster-cidr=10.0.0.0/22

By replacing cluster-cidr=10.0.0.0/24 with cluster-cidr=10.0.0.0/22 we expand our CIDR range to cover four /24 subnets: 10.0.0.0/24, 10.0.1.0/24, 10.0.2.0/24 and 10.0.3.0/24. Since --node-cidr-mask-size=24 hands each node its own /24, every one of our three nodes now gets a subnet of 254 usable addresses, with a fourth /24 to spare: roughly 1016 usable Pod addresses across the whole /22, as opposed to the 254 a single /24 gives the entire cluster. Of course we can go even further and increase the network address space by lowering the mask bits, but generally it is a good idea to keep things simple; we can extend the address space again whenever needed.
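
If we want to sanity-check this math before committing, a subnet calculator such as ipcalc (if it is installed; the output format varies between distributions) will list the exact range covered by the new prefix:

ipcalc 10.0.0.0/22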

Once the value of cluster-cidr has been adjusted, save and close the kube-controller-manager.yaml file; no other changes are needed.
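
For those who prefer a one-liner over an interactive editor, the same change can be scripted; this sed command is only a sketch and assumes the flag has exactly the value we found earlier:

sudo sed -i 's|--cluster-cidr=10.0.0.0/24|--cluster-cidr=10.0.0.0/22|' /etc/kubernetes/manifests/kube-controller-manager.yaml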

Restart Kubelet Service

We have managed to identify the current configuration and change it according to our use case; all that is left to do is restart the kubelet service so that the new cluster-cidr value is picked up. Let's run the next command in our Master Node terminal window:

systemctl restart kubelet
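
Since kube-controller-manager runs as a static pod managed by the kubelet, we can also check that it came back up cleanly before verifying the new range:

kubectl get pods -n kube-system | grep kube-controller-manager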

Verify Pods CIDR Subnet

Verifying the Pods CIDR configuration is the last step of our short tutorial; here we check whether the new CIDR configuration has been read and applied by our Kubernetes cluster. So, once again, let's get back to our terminal window and re-run the two commands that gave us the details we were looking for:

ps -ef | grep "cluster-cidr"

If everything went fine, we should get an output similar to this:

root     20618 20587  1 Dec19 ?        00:22:20 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.0.0.0/22 --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --node-cidr-mask-size=24 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-credentials=true
root     29017 28856  0 19:36 pts/0    00:00:00 grep --color=auto cluster-cidr

We can now see that our /22 address space has been successfully applied:

... --cluster-cidr=10.0.0.0/22 ...

Let’s try now the second and last command:

kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'

This time all three nodes should have a Pod CIDR allocated:

10.0.0.0/24 10.0.1.0/24 10.0.2.0/24
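
To double-check a single node, kubectl describe also reports the allocated range; kubernetes-node01 is the node name taken from the error log at the beginning of this tutorial:

kubectl describe node kubernetes-node01 | grep PodCIDR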

That is all: we have extended the Pods CIDR range by simply changing a single value within kube-controller-manager.yaml.

