
introduction
In this article I will cover the quick setup of a self-hosted Kubernetes 1.9 cluster using kubeadm, with the ability to recover after a power cycle (e.g., a reboot).
I started playing around with kubeadm again for a new project and was especially interested in the self-hosting feature, which is still in alpha. In short, a self-hosted cluster hosts its own control plane (API server, controller manager, scheduler) as a regular workload. The analogy from the compiler world is a compiler that can correctly compile its own source code. In terms of Kubernetes, self-hosting simplifies upgrading clusters to a new version and allows more in-depth monitoring of the control plane.
quick setup
Assume we have three nodes where Kubernetes should be installed. Each node needs internet connectivity and must meet the requirements mentioned in the install guide.
Let’s say the nodes are node1, node2, and node3. For a quick setup, SSH into each node as root and install Docker and the kubeadm packages.
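A minimal sketch, assuming Ubuntu/Debian nodes and the package repository from the official install guide of the 1.9 era:

```bash
# add the kubernetes apt repository (as documented in the 1.9-era install guide)
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list

# install docker, kubelet, kubeadm, and kubectl
apt-get update
apt-get install -y docker.io kubelet kubeadm kubectl
```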
Then, on node1, initialize the cluster and install the pod network.
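A sketch of the init sequence: enable the alpha SelfHosting feature gate, pass a pod-network CIDR for Calico, set up kubectl, and apply the Calico manifest (the manifest URL is the one from the Calico docs of that era and may have moved since):

```bash
# bootstrap a self-hosted control plane; SelfHosting is an alpha feature gate in 1.9
kubeadm init --feature-gates=SelfHosting=true --pod-network-cidr=192.168.0.0/16

# make kubectl usable for root
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# install calico networking (manifest path from the 1.9-era calico docs)
kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
```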
On node2 and node3, join the cluster.
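A sketch with placeholders; kubeadm init prints the exact command at the end of its output:

```bash
# join the cluster; <token>, <node1-ip>, and <hash> come from the kubeadm init output
kubeadm join --token <token> <node1-ip>:6443 --discovery-token-ca-cert-hash sha256:<hash>
```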
Fill in the placeholders with the values from the output of the kubeadm init command. That’s it: a quick setup of a self-hosted Kubernetes cluster with Calico networking.
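To check that the control plane is really self-hosted, list the kube-system workloads; with the SelfHosting gate enabled, kubeadm runs the control plane components as DaemonSets instead of static pods:

```bash
# the control plane components should appear as self-hosted DaemonSets
kubectl -n kube-system get daemonsets
kubectl -n kube-system get pods -o wide
```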
make it recoverable
Currently, a reboot would cause a total failure of the cluster: the control plane runs as ordinary pods, so after a restart no new control plane is scheduled, and the kubelet cannot reach the API server that would schedule it. To fix this, install a small recovery helper.
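A sketch of the installation; the repository URL is an assumption, so point it at wherever k8s-self-hosted-recover is actually hosted:

```bash
# fetch and run the installer (URL is an assumption; adjust to the actual repository)
curl -sL https://raw.githubusercontent.com/xetys/k8s-self-hosted-recover/master/install.sh -o /tmp/install.sh
chmod +x /tmp/install.sh
/tmp/install.sh
```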
This installs a systemd service that is ordered after kubelet.service, along with a script named k8s-self-hosted-recover, which recovers the self-hosted control plane after a reboot by doing the following (a sketch of the script follows the list):
- perform the `controlplane` phase of kubeadm to set up a temporary control plane using static pods
- wait until the API server is reachable
- delete the `DaemonSet`s of the current self-hosted control plane
- run the `selfhosted` phase of kubeadm
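In kubeadm 1.9 terms, the recovery boils down to something like the sketch below; the phase subcommands lived under `kubeadm alpha phase` in 1.9, and the DaemonSet names are the ones kubeadm’s self-hosting mode creates, but treat the exact flags and names as assumptions:

```bash
#!/bin/bash
# sketch of a recovery script for a self-hosted control plane (kubeadm 1.9)

# 1. bring the control plane back up as static pods
kubeadm alpha phase controlplane all

# 2. wait until the API server responds
until kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes &>/dev/null; do
  sleep 2
done

# 3. delete the stale self-hosted control plane DaemonSets
kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system delete daemonsets \
  self-hosted-kube-apiserver self-hosted-kube-controller-manager self-hosted-kube-scheduler

# 4. convert the static pods back into a self-hosted control plane
kubeadm alpha phase selfhosting convert-from-staticpods
```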
Congratulations, you now have a three-node, self-hosted Kubernetes 1.9 cluster!
Enjoy!