The purpose of this tutorial is to create three identical openSUSE Tumbleweed virtual machines in AWS, deployed entirely with Ansible, with the goal of using this infrastructure to deploy a Kubernetes cluster.
What is AWS EC2?
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
What is Ansible?
Ansible is a software tool that provides simple but powerful automation for cross-platform computer support. It is primarily intended for IT professionals, who use it for application deployment, updates on workstations and servers, cloud provisioning, configuration management, intra-service orchestration, and nearly anything a systems administrator does on a weekly or daily basis. Ansible doesn’t depend on agent software and has no additional security infrastructure, so it’s easy to deploy.
Architecture
We will create 3 VMs:
master-node-0
worker-node-0
worker-node-1
Prerequisites
Before you get started, you’ll need to have these things:
Ansible > 4.5.x
amazon.aws Ansible collection installed (to install it, run: ansible-galaxy collection install amazon.aws)
Create a workspace, let's say AWS-K8s. Go inside this workspace and create two folders called roles and ssh-keys.
Copy your AWS key-pair file into the ssh-keys directory (in this example my file is named admin.pem and my key admin).
Now go inside the workspace and run the commands below.
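The commands themselves are not reproduced in this copy; a minimal sketch that prepares the standard role layout used in the rest of the tutorial would be:

```shell
# create the standard Ansible role layout for the ec2 role
mkdir -p roles/ec2/tasks roles/ec2/vars
```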
Setting up Ansible Configuration File
In Ansible we have two types of configuration files: global and local. We will create a local configuration file in the AWS-K8s folder, and all the Ansible commands we run from now on will be run from this folder.
Create your ansible.cfg file with the following content:
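The file contents are not shown in this copy; a minimal ansible.cfg consistent with the rest of the tutorial might look like this (the inventory file name `host` is an assumption, and the key path matches the ssh-keys/admin.pem file copied earlier):

```ini
[defaults]
# inventory file created later in the tutorial (assumed name)
inventory = host
remote_user = ec2-user
private_key_file = ssh-keys/admin.pem
host_key_checking = False
```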
I will not detail the entries in this file because I assume you already know Ansible; just one detail: the default remote user of the EC2 instances is ec2-user.
Creating Ansible Vault to store the AWS Credentials
Run this command:
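The command itself is missing here; since the vault file is later referred to as awscred.yml, it was presumably:

```shell
ansible-vault create awscred.yml
```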
This command will ask you to provide a vault password and then open the vi editor (on Linux). Create two variables in this file and put your AWS access key and secret key as their values.
The content of the file will have the following entries:
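The exact variable names are not shown in this copy; a sketch (access_key and secret_key are assumed names, the values are placeholders):

```yaml
# awscred.yml -- stored encrypted by ansible-vault
access_key: YOUR_AWS_ACCESS_KEY_ID
secret_key: YOUR_AWS_SECRET_ACCESS_KEY
```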
EC2 role configuration
Setup a variables configuration file for EC2 role
Edit the file roles/ec2/vars/main.yml and insert the following lines (replace the variable values with your own):
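The original file is not reproduced here; a sketch with placeholder values (every value below, including the AMI ID, region and security group, is an assumption you must replace):

```yaml
# roles/ec2/vars/main.yml -- placeholder values
instance_type: t2.medium
image_id: ami-0xxxxxxxxxxxxxxxx   # openSUSE Tumbleweed AMI for your region
region: eu-west-1
key_name: admin
security_group: k8s-sg
cluster_tag: k8s-cluster
master_tag: master
workers_tag: worker
```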
I have defined three tag variables (cluster_tag, master_tag, workers_tag) which will be useful for filtering when listing instances.
Setup EC2 role task file
Edit the file roles/ec2/tasks/main.yml and insert the following lines:
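The task file is not reproduced in this copy; a sketch using the amazon.aws.ec2_instance module (the loop structure and the credential variable names are assumptions):

```yaml
# roles/ec2/tasks/main.yml -- sketch of the launch task
- name: Launch the Kubernetes cluster instances
  amazon.aws.ec2_instance:
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ secret_key }}"
    region: "{{ region }}"
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    image_id: "{{ image_id }}"
    security_group: "{{ security_group }}"
    state: running
    tags:
      Name: "{{ item.name }}"
      Cluster: "{{ cluster_tag }}"
      Role: "{{ item.role }}"
  loop:
    - { name: master-node-0, role: "{{ master_tag }}" }
    - { name: worker-node-0, role: "{{ workers_tag }}" }
    - { name: worker-node-1, role: "{{ workers_tag }}" }
```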
Nothing particular in this file: I use the names of the variables defined previously. I use the ec2 module to launch the instances on AWS, with 3 tags for each instance (Name, Cluster and Role), which are defined in our roles/ec2/vars/main.yml file.
Create a file setup_ec2.yml in our local directory; it will execute the ec2 role. Insert the following lines:
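The playbook body is missing here; a sketch consistent with the role and vault file created above:

```yaml
# setup_ec2.yml -- runs the ec2 role with the vaulted credentials
- hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - awscred.yml
  roles:
    - ec2
```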
Execute the role EC2
Run this command to start the playbook execution; you will be asked for the password of the Ansible Vault (the awscred.yml file):
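The command is not reproduced here; assuming the vault password is supplied interactively:

```shell
ansible-playbook setup_ec2.yml --ask-vault-pass
```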
This playbook will launch 3 EC2 instances for the Kubernetes cluster: one master node and two worker nodes.
After a few minutes our instances are up and running.
Verify that your instances have been properly created in the AWS Management Console and test the SSH connection with your AWS key pair:
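The command is missing in this copy; the placeholder address below must be replaced:

```shell
# replace <public-ip> with the address shown in the EC2 console
ssh -i ssh-keys/admin.pem ec2-user@<public-ip>
```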
Deployment of our Kubernetes cluster
Initial setup
We need to get the public and private IP addresses and the private hostname of the nodes we just deployed.
You can use this script, Getinventory.sh, which reads the variable file of the EC2 role and displays the public and private IP addresses of each node:
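The script itself is not included in this copy; a sketch that produces the same information with the AWS CLI, filtering on the Cluster tag (the tag value k8s-cluster is an assumption taken from the role variables):

```shell
#!/bin/sh
# list name, public IP, private IP and private DNS name of the running cluster nodes
aws ec2 describe-instances \
  --filters "Name=tag:Cluster,Values=k8s-cluster" \
            "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].[Tags[?Key=='Name']|[0].Value,PublicIpAddress,PrivateIpAddress,PrivateDnsName]" \
  --output table
```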
Or retrieve this information from the EC2 console or through Ansible inventory scripts.
Host file configuration
We will create a host file in our workspace with the following entries (replace the different values with your own).
The value of the ansible_host variable is the node's public IP address.
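The host file entries are missing in this copy; an inventory matching the three nodes might look like this (the group names are assumptions, the addresses are placeholders):

```ini
[master]
master-node-0 ansible_host=<master-public-ip>

[workers]
worker-node-0 ansible_host=<worker0-public-ip>
worker-node-1 ansible_host=<worker1-public-ip>
```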
Creation of playbooks
First step, create a playbooks directory:
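The command is missing in this copy; it is simply:

```shell
mkdir -p playbooks
```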
We will create 3 playbooks:
set_k8s_node.yml: installs and configures the prerequisites on the master and worker nodes
run_master_install.yml: installs and configures the master node
run_worker_install.yml: installs and configures the worker nodes
We will create a file env_variables that contains the variables we need for the deployment.
Edit the file playbooks/env_variables and insert the following lines (replace the different values with your own):
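The file contents are not shown in this copy; a sketch limited to the entries the text mentions (the variable names are assumptions; nodename and iplocalnode are the placeholders to replace):

```yaml
# playbooks/env_variables -- sketch; variable names are assumptions
master_nodename: nodename
master_iplocalnode: iplocalnode
worker0_nodename: nodename
worker0_iplocalnode: iplocalnode
worker1_nodename: nodename
worker1_iplocalnode: iplocalnode
```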
Replace the nodename entries with the private hostname of each node and the iplocalnode entries with the private IP address of each node.
Create or edit the file playbooks/set_k8s_node.yml and insert the following lines:
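The playbook is not reproduced in this copy; a sketch of the usual kubeadm prerequisites on openSUSE Tumbleweed (the package and service names are assumptions and vary across snapshots):

```yaml
# playbooks/set_k8s_node.yml -- sketch of the node prerequisites
- hosts: all
  become: true
  vars_files:
    - env_variables
  tasks:
    - name: Disable swap (required by the kubelet)
      command: swapoff -a

    - name: Install container runtime and Kubernetes packages
      # the zypper module ships in the community.general collection
      community.general.zypper:
        name:
          - cri-o
          - kubernetes-kubeadm
          - kubernetes-kubelet
          - kubernetes-client
        state: present

    - name: Enable and start crio and kubelet
      service:
        name: "{{ item }}"
        state: started
        enabled: true
      loop:
        - crio
        - kubelet
```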
This playbook will be used in the run_master_install and run_worker_install playbooks to install the Kubernetes prerequisites.
Create or edit the file playbooks/run_master_install.yml and insert the following lines:
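The playbook is not reproduced in this copy; a sketch of the master bootstrap (the env_variables names and the pod network CIDR are assumptions, and the dest path is a placeholder you must change):

```yaml
# playbooks/run_master_install.yml -- sketch
- import_playbook: set_k8s_node.yml

- hosts: master
  become: true
  vars_files:
    - env_variables
  tasks:
    - name: Initialize the cluster
      command: kubeadm init --apiserver-advertise-address={{ master_iplocalnode }} --pod-network-cidr=10.244.0.0/16

    - name: Get Join Cluster command
      command: kubeadm token create --print-join-command
      register: join_command

    - name: Save the join command locally
      become: false
      local_action: copy content="{{ join_command.stdout }}" dest=/path/to/playbooks/join_command.sh
```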
Modify the dest entry of the Get Join Cluster command task with the path where your playbooks are stored.
Create or edit the file playbooks/run_worker_install.yml and insert the following lines:
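The playbook is not reproduced in this copy; a sketch that replays the saved join command on the workers (the file name join_command.sh is an assumption):

```yaml
# playbooks/run_worker_install.yml -- sketch
- import_playbook: set_k8s_node.yml

- hosts: workers
  become: true
  tasks:
    - name: Copy the join command to the worker
      copy:
        src: join_command.sh
        dest: /tmp/join_command.sh
        mode: "0755"

    - name: Join the cluster
      command: sh /tmp/join_command.sh
```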
Now we will create two files for the execution of the playbooks:
Create or edit file setup_master.yml and insert the following lines :
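The contents are missing in this copy; presumably a simple wrapper such as:

```yaml
# setup_master.yml
- import_playbook: playbooks/run_master_install.yml
```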
Create or edit file setup_worker.yml and insert the following lines :
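The contents are missing in this copy; presumably the worker counterpart:

```yaml
# setup_worker.yml
- import_playbook: playbooks/run_worker_install.yml
```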
Run master playbook install
Run workers playbook install
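The two commands are not reproduced in this copy; assuming the inventory file is named host, they would be:

```shell
ansible-playbook -i host setup_master.yml
ansible-playbook -i host setup_worker.yml
```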
After a few minutes your Kubernetes cluster is up and running 😀 👏
Remote control
Check if your cluster works:
Your Kubernetes cluster binds on local IP addresses. You can create an AWS Site-to-Site VPN connection to be able to reach the cluster from your workstation, or more simply connect with SSH to the master node. We will use the SSH method (replace the IP address with the public IP of your master node):
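The commands are not shown in this copy; a sketch (the IP is a placeholder, and running kubectl as ec2-user assumes the kubeconfig was set up for that user, otherwise point kubectl at /etc/kubernetes/admin.conf with sudo):

```shell
# connect to the master node
ssh -i ssh-keys/admin.pem ec2-user@<master-public-ip>
# then, on the master
kubectl get nodes -o wide
```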
To access the dashboard you’ll need to find its cluster IP:
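Assuming the dashboard was deployed in the usual kubernetes-dashboard namespace:

```shell
kubectl get svc -n kubernetes-dashboard
```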
Get a token for the connection to the dashboard:
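On Kubernetes 1.24+ a short-lived token can be created directly (admin-user is an assumed service account name; on older clusters you would read the token from the service account's secret instead):

```shell
kubectl -n kubernetes-dashboard create token admin-user
```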
Copy and paste the token value into the dashboard connection window (in the next step).
Open an SSH tunnel:
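A sketch, forwarding local port 8888 to the dashboard's cluster IP (both addresses are placeholders; if the dashboard service serves HTTPS you may need https://localhost:8888 instead):

```shell
ssh -i ssh-keys/admin.pem -L 8888:<dashboard-cluster-ip>:443 ec2-user@<master-public-ip>
```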
Now you can access the dashboard on your computer at http://localhost:8888.
Paste the token value :
Conclusion
With kubeadm and Ansible, booting a Kubernetes cluster can be done with a few commands, and it only takes a few minutes to get a fully functional configuration.
You now have a basic understanding of how to use Ansible. You’ll notice that the Ansible code remains easy to read and simplifies the exact description of the infrastructure you want to create.