I built a bare-metal KUBERNETES CLUSTER on Raspberry Pi.
After building complex Kubernetes cluster architectures for several customers, I had long planned to build a cluster of my own as a hobby project, for fun and to host my personal projects. And now it’s done 😀
Hobbies should be fun … but they do require some concessions from our wives … 😀
In this post I’ll be walking you step by step on how I built a bare-metal, 4-node Kubernetes cluster running on Raspberry Pis.
You can add more nodes if you like. I chose 4 nodes simply because I had these Raspberry Pi units in stock. This is just my hobby project.
Without further ado, let’s start with the article you came to read.
Hardware configuration
This is the part that may give the impression I’m trying to convince you to build one as well. Raspberry Pis have become a rare commodity lately, but I already had these units in stock before the shortage …
The configuration I used is the following:
COMPUTE
There are several versions of the Raspberry Pi; I had four Raspberry Pi 4B boards in stock, configured as follows:
- 1 x Raspberry Pi 4B (4 CPU, 4 GB RAM) for the master node
- 3 x Raspberry Pi 4B (4 CPU, 8 GB RAM each) for the worker nodes
That’s 12 CPUs and 24 GB of memory across the worker nodes alone - plenty of room for running multiple projects.
STORAGE
Raspberry Pis don’t come with on-board storage, but they do have a microSD slot, so a microSD card can hold the OS and data for each Pi. I opted for SanDisk Extreme PRO 256 GB SDXC cards. To experiment with distributed storage (OpenEBS or similar) I added 4 SSDs connected over USB - not very powerful 😀, but this is for testing, and the Raspberry Pi does not yet expose a PCIe bus in an easily usable way. To connect them I used 4 x USB 3.0 adapters for SSDs and 2.5” SATA I/II/III hard drives (EC-SSHD).
POWER
For the power supply I chose to power each Raspberry Pi through its own USB power input. I used a 6-port desktop USB charger with PowerIQ technology, the Anker PowerPort 60 W 6-Port, plus 4 x USB-to-USB-C cables. To reduce cabling we could have gone with a Power over Ethernet (PoE) solution, but that would have required investing in a more expensive switch.
COOLING
For cooling I opted for quiet fans with adjustable speed: 2 x GeeekPi 2PCS Raspberry Pi fan kits.
NETWORK
For the network I used an 8-port TP-Link TL-SG1008D switch and a set of 10 Ethernet cables.
OS
For the OS I used the openSUSE MicroOS Kubic distribution, which is Certified Kubernetes, bundles Kubernetes 1.23.4 and uses CRI-O as the container runtime. One small drawback: it is an immutable, transactional system, so if you want to add packages you have to run transactional-update pkg install -y <package> and then reboot.
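For example (htop is just an arbitrary package here):

```sh
# MicroOS is transactional: the package lands in a new snapshot
transactional-update pkg install -y htop
# the new snapshot only becomes active after a reboot
reboot
```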
BOX
I bought a special carrying case from C4Labs for my Pi cluster.
Cost and Overall Specs
For your reference, here is the list of parts needed and their approximate price. It is true that this is an expensive hobby…. The reference prices below do not include shipping costs and import taxes, which depend on your location.
Item | Price in € |
---|---|
3 x Raspberry Pi 4B 8 GB | 255 € |
1 x Raspberry Pi 4B 4 GB | 64 € |
4 x SanDisk micro SD card 256 GB | 128 € |
4 x SSD disk 256 GB | 136 € |
4 x Sabrent USB 3.0 adapter for SSDs and 2.5” SATA I/II/III hard drives | 30 € |
1 x TP-Link TL-SG1008D Gigabit Ethernet switch, 8 ports | 19 € |
4 x RJ45 CAT6 network cable, 0.25 m | 20 € |
1 x Anker PowerPort 6-port 60 W USB charger | 26 € |
4 x MYLB USB-A to USB-C cable, 50 cm | 25 € |
2 x GeeekPi 2PCS Raspberry Pi adjustable speed fan | 24 € |
1 x C4Labs carrying case | 75 € |
Total | 802 € |
You can reduce the cost by using only Raspberry Pi 4B 4 GB boards, building 3 nodes instead of 4, skipping the external SSD storage, and choosing a cheaper case.
Network topology
The whole cluster sits on the same private home network.
Install the OS
Install openSUSE MicroOS Kubic: download the image, write it onto an SD card and boot.
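A minimal sketch from a Linux workstation, assuming the image name matches the download and that the SD card appears as /dev/sdX (replace it with your actual device before running anything):

```sh
# decompress the MicroOS Kubic image and write it to the SD card (destructive!)
xz -dc openSUSE-MicroOS-*.raw.xz | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync
```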
Before booting from the SD card for the first time, you must prepare the first-boot configuration so that the system is reachable afterwards. The disk images are pre-built, so you provide a configuration file at first boot; at minimum a password for root is required, and you can set various other configuration parameters.
You will need:
- A USB flash drive formatted with any file system supported by MicroOS (e.g. FAT, EXT4, …)
- The volume label needs to be ignition
- On this drive, create a directory named ignition containing a file named config.ign
- The resulting directory structure needs to look like this:
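```
<USB drive labelled "ignition">
└── ignition
    └── config.ign
```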
Example config.ign:
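A minimal sketch, assuming an Ignition 3.x spec version supported by your image; replace the placeholder hash with the one you generate below:

```json
{
  "ignition": { "version": "3.1.0" },
  "passwd": {
    "users": [
      {
        "name": "root",
        "passwordHash": "$6$REPLACE_WITH_YOUR_HASH"
      }
    ]
  }
}
```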
Create the value for the passwordHash key from the command line.
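One way to generate a suitable SHA-512 crypt hash (mkpasswd --method=sha-512 works as well):

```sh
# prompts for the password and prints the hash to use as passwordHash
openssl passwd -6
```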
See the following link for more details on the procedure: MicroOS/Ignition.
You can customize the configuration further: set the hostname, IP address, start a particular service, and so on. I manually configured the hostname, IP address and default gateway after each installation. For more information on the available options see this link: https://coreos.github.io/ignition/
Now you can plug in the USB flash drive and boot the first node from the SD card. Repeat this procedure on the 3 other nodes.
After a few minutes the 4 nodes are up 😀. On each node I create a .ssh/authorized_keys file for the root user and insert my workstation’s public key.
SETUP
On each node I set up SSH access (a rough sketch follows the list):
- Create an SSH RSA key: ssh-keygen -t rsa
- Create an authorized_keys file
- Add the public keys of each host and of your workstation to the authorized_keys file
- Distribute the authorized_keys file to each node
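A rough sketch of those four steps, run from the workstation; the host names are placeholders for your own nodes:

```sh
# steps 1-2: on each node, generate a key pair
for host in master worker1 worker2 worker3; do
  ssh root@$host 'ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa'
done

# step 3: collect every public key (nodes + workstation) into one authorized_keys file
for host in master worker1 worker2 worker3; do
  ssh root@$host 'cat ~/.ssh/id_rsa.pub'
done > authorized_keys
cat ~/.ssh/id_rsa.pub >> authorized_keys

# step 4: distribute the file to every node
for host in master worker1 worker2 worker3; do
  scp authorized_keys root@$host:/root/.ssh/authorized_keys
done
```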
You will need to install Terraform on your workstation.
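For example, on a Linux workstation (the version number is only an example; pick the current release):

```sh
TF_VERSION=1.1.7
curl -fsSLO "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_linux_amd64.zip"
unzip "terraform_${TF_VERSION}_linux_amd64.zip"
sudo mv terraform /usr/local/bin/
terraform version
```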
I will use a Terraform workflow to configure my Kubernetes cluster.
Clone the repository and install the dependencies:
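Something along these lines; the repository URL is a placeholder for wherever you keep the Terraform configuration:

```sh
# clone the Terraform configuration for the cluster (URL is a placeholder)
git clone <repository-url> rpi-k8s-cluster
cd rpi-k8s-cluster
```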
Set up the variables in the variables.tf file:
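The variable names below are purely hypothetical; adapt them to whatever the repository’s variables.tf actually declares. The idea is simply to hold the node addresses and SSH user the workflow needs:

```hcl
# hypothetical variables - adjust names and values to match your variables.tf
variable "master_ip" {
  type    = string
  default = "192.168.1.100"
}

variable "worker_ips" {
  type    = list(string)
  default = ["192.168.1.101", "192.168.1.102", "192.168.1.103"]
}

variable "ssh_user" {
  type    = string
  default = "root"
}
```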
Run the terraform init command in a terminal to initialize the working directory and download the required providers.
Run the terraform apply command to create the Kubernetes cluster with one master and three worker nodes:
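```sh
terraform init    # downloads the required providers and initializes the directory
terraform apply   # shows the plan; type "yes" to build the cluster
```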
After a few minutes your kubernetes cluster is up 😀
You can tear down the whole Terraform plan with:
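```sh
terraform destroy
```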
Resources can be destroyed using the terraform destroy command, which is similar to terraform apply but it behaves as if all of the resources have been removed from the configuration.
Remote control
You can also install the kubectl command on your workstation; there are different binaries depending on your operating system.
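For example, on Linux x86_64 (this mirrors the upstream installation docs; adjust the OS and architecture to your workstation):

```sh
curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
```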
Create a $HOME/.kube directory on your workstation and copy the kubeconfig file from the master node (it lives under /root/.kube on the master) into it.
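A quick sketch, assuming the master is reachable as master and keeps its kubeconfig in /root/.kube/config:

```sh
mkdir -p $HOME/.kube
scp root@master:/root/.kube/config $HOME/.kube/config
```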
Check if your cluster works:
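```sh
# the CONTAINER-RUNTIME column should show cri-o on every node
kubectl get nodes -o wide
```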
We can see that we are using CRI-O instead of Docker as the container runtime.
You can also use a tool that I like very much Lens from the company Mirantis. Lens, which bills itself as the Kubernetes IDE, is a useful, attractive, open source user interface (UI) for working with Kubernetes clusters. Out of the box, Lens can connect to Kubernetes clusters using your kubeconfig file and will display information about the cluster and the objects it contains. Lens can also connect to or install a Prometheus stack and use it to provide metrics about the cluster, including node information and health.
To access the Kubernetes dashboard:
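If the dashboard is not already deployed, the upstream recommended manifest installs it; the version tag below is only an example, pick the release matching your cluster:

```sh
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
```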
We will now connect to the Kubernetes dashboard using SSH port forwarding.
In a future post we will expose the Kubernetes dashboard through the Traefik ingress controller, to avoid using kubectl proxy or an SSH port forward, and to allow access to the dashboard over HTTPS with a proper TLS certificate.
To access the dashboard you’ll need to find its cluster IP:
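With the recommended manifest, the service lives in the kubernetes-dashboard namespace:

```sh
# note the CLUSTER-IP of the kubernetes-dashboard service
kubectl get svc -n kubernetes-dashboard
```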
Get a token for the connection to the dashboard:
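Assuming you created a ServiceAccount named admin-user with a cluster-admin binding (as in the dashboard documentation), its token can be read from the associated secret:

```sh
kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") \
  -o go-template="{{.data.token | base64decode}}"
```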
Copy and paste the token value into the dashboard login window (in the next step).
Open an SSH tunnel:
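For example, with the dashboard’s cluster IP and the master’s address as placeholders:

```sh
# forward local port 8443 to the dashboard service through the master node
ssh -L 8443:<dashboard-cluster-ip>:443 root@<master-ip>
# then browse to https://localhost:8443 and paste the token
```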
Next
We now have a working 4-node Kubernetes cluster - enjoy!! 😀 You can deploy and test your applications on it.
In future articles I will deal with the following topics:
- Implementation of a Container Attached Storage (CAS) solution such as OpenEBS or Longhorn.
- Installation of Traefik for publishing your services.
- Installation of Rancher.
- Adding a new master node.
Conclusion
This post described how to build a working four-node Kubernetes cluster running on real hardware for under 900 €.
The goal was a hands-on Kubernetes installation running on real, local hardware.
- Much of the work involved assembling the physical stack of Raspberry Pi boards, then installing openSUSE MicroOS Kubic on each one: a Certified Kubernetes distribution that bundles Kubernetes 1.23.4 and uses CRI-O.
- We then set up Kubernetes through a Terraform workflow applied to each node.
- We set up remote access to the cluster through the following tools:
- kubectl command
- Lens, the Kubernetes IDE from Mirantis
- Kubernetes Dashboard
Containers have great potential for IoT applications. Raspberry Pi-based clusters can be very useful for testing and learning concepts such as microservices, application clustering, container networking and container orchestration. In this article, I’ve tried to cover the hardware and software configuration details to help you build your own Raspberry Pi cluster.