As cloud-native applications continue to dominate the tech landscape, deploying databases within Kubernetes has become increasingly prevalent. MySQL, particularly with its InnoDB storage engine, stands out as a reliable choice for managing data in Kubernetes environments.
The benefits of using MySQL in Kubernetes are numerous:
Scalability: MySQL in Kubernetes allows you to easily scale your database instances up or down to meet changing application demands, ensuring optimal resource utilization.
High Availability: With features like replication and clustering, MySQL can achieve high availability, minimizing downtime and ensuring that applications remain responsive even in the face of failures.
Automated Management: Kubernetes simplifies the management of MySQL deployments with features such as automated failover, self-healing, and seamless upgrades, which greatly reduce operational overhead.
Consistent Development Environments: By deploying MySQL on Kubernetes, development and production environments can remain consistent, easing the process of debugging and testing applications.
Microservices Compatibility: MySQL fits well within microservices architectures, enabling applications to have their own dedicated databases while still facilitating efficient inter-service communication.
In this article, we will walk through the steps required to deploy a MySQL InnoDB cluster in Kubernetes using the Kubernetes manifest method, leveraging the advantages of MySQL to create a robust, scalable database solution that meets the demands of modern applications. Future articles will explore additional deployment methods, including the MySQL operator and tools like ArgoCD.
This includes the deployment of MySQL Router as well as an application that uses the MySQL InnoDB cluster.
Prerequisites
Before you get started, you’ll need to have these things:
An operational Kubernetes cluster.
The kubectl command-line tool installed.
An Oracle Container Registry account.
bash version 4+
Steps
✅ Step 1: Create a namespace
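A minimal Namespace manifest. The namespace name mysqldb matches the DNS names used later in this article; the filename is illustrative:

```yaml
# mysql-ns.yaml — namespace for the MySQL InnoDB Cluster
apiVersion: v1
kind: Namespace
metadata:
  name: mysqldb
```

Apply it with `kubectl apply -f mysql-ns.yaml`.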
✅ Step 2: Create the MySQL secret
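A sketch of the Secret; the Secret name is an assumption, and `stringData` lets us provide the value in plain text (Kubernetes encodes it for us):

```yaml
# mysql-secret.yaml — root password consumed by the MySQL Pods
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
  namespace: mysqldb
type: Opaque
stringData:
  MYSQL_PASSWORD: "YourRootPassword"   # replace with your root user's password
```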
❗️ Replace the value of MYSQL_PASSWORD with your root user’s password
The password is deployed with each Pod, and is used by management scripts and commands for MySQL InnoDB Cluster and ClusterSet deployment in this tutorial.
✅ Step 3: Create a named secret containing Oracle Container Registry credentials
When you install the application using a Kubernetes manifest or Helm, use a Kubernetes Secret to provide the authentication details needed to pull an image from the remote repository.
The Kubernetes Secret contains all the login details you would provide if you were manually logging in to the remote Docker registry using the docker login command, including your credentials.
Create the secret by providing the credentials on the command line:
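One way is kubectl's built-in docker-registry Secret type; the secret name here follows the ora-cont-secret.yaml filename used below, and USERNAME/PASSWORD/EMAIL are placeholders for your own credentials:

```shell
kubectl create secret docker-registry ora-cont-secret \
  --docker-server=container-registry.oracle.com \
  --docker-username=USERNAME \
  --docker-password=PASSWORD \
  --docker-email=EMAIL \
  -n mysqldb
```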
ora-cont-secret.yaml:
dockerconfigjson corresponds to these JSON entries encoded in base64:
container-registry.oracle.com: the hostname of the Oracle Container Registry.
USERNAME: the user name used to access the registry. The format varies based on your Kubernetes platform.
PASSWORD: the password used to access the registry.
EMAIL: the email address for your registry account.
We will encode the JSON string with the following command:
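A sketch of the encoding step, with placeholder credentials. The JSON follows the standard .dockerconfigjson layout, where the inner auth field is itself the base64 encoding of USERNAME:PASSWORD:

```shell
# Build the standard .dockerconfigjson document and base64-encode it.
# USERNAME, PASSWORD, and EMAIL are placeholders — substitute your own values.
AUTH=$(printf '%s' 'USERNAME:PASSWORD' | base64 | tr -d '\n')
CONFIG="{\"auths\":{\"container-registry.oracle.com\":{\"username\":\"USERNAME\",\"password\":\"PASSWORD\",\"email\":\"EMAIL\",\"auth\":\"$AUTH\"}}}"
printf '%s' "$CONFIG" | base64 | tr -d '\n'
```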
Then insert the base64-encoded output into your Secret definition. The YAML file should look like this:
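```yaml
# ora-cont-secret.yaml — image-pull Secret for the Oracle Container Registry
apiVersion: v1
kind: Secret
metadata:
  name: ora-cont-secret
  namespace: mysqldb
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded JSON from the previous command>
```

Apply it with `kubectl apply -f ora-cont-secret.yaml`.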
✅ Step 4: Deploy the MySQL instances
We will now deploy our three MySQL instances using a Kubernetes StatefulSet.
Why a StatefulSet? Because a StatefulSet is well-suited for a MySQL cluster due to its ability to manage state, ensure stable identities, and orchestrate the deployment of pods in a controlled and persistent manner.
We will apply this manifest (mysql-instance.yaml):
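A sketch of the manifest: a headless Service (which gives each Pod a stable DNS name) plus the StatefulSet. The Service name mysql and the StatefulSet name dbc1 match the DNS names used later in this article; the image tag, storage size, and Group Replication port are assumptions to adapt, and it assumes the Secrets from Steps 2 and 3 are named mysql-secret and ora-cont-secret:

```yaml
# mysql-instance.yaml — headless Service + StatefulSet (a sketch)
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: mysqldb
spec:
  clusterIP: None            # headless: Pods get stable per-Pod DNS names
  selector:
    app: mysql
  ports:
    - name: mysql
      port: 3306
    - name: group-replication
      port: 33061            # default Group Replication port
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dbc1
  namespace: mysqldb
spec:
  serviceName: mysql
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      imagePullSecrets:
        - name: ora-cont-secret
      containers:
        - name: mysql
          image: container-registry.oracle.com/mysql/community-server:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: MYSQL_PASSWORD
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```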
Applying this manifest with kubectl apply -f mysql-instance.yaml deploys a StatefulSet consisting of three replicas.
After a few minutes, our instances are up.
To inspect the placement of the Pods on our cluster:
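```shell
kubectl get pods -n mysqldb -o wide   # -o wide shows the node each Pod runs on
```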
We can see that our three instances are properly distributed across the nodes.
❗️ We will not expose the MySQL instances externally via a LoadBalancer Service; the MySQL Router service will handle external exposure.
✅ Step 5: Prepare the primary MySQL InnoDB Cluster
To configure a MySQL InnoDB Cluster, follow these steps:
We will connect directly to the Pod of the first instance (dbc1-0) and execute the various commands there. Connect to the Pod:
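A sketch, assuming MySQL Shell (mysqlsh) is present in the server image:

```shell
kubectl -n mysqldb exec -it dbc1-0 -- mysqlsh -u root -p
```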
Accessing the MySQL Instances
Once you have deployed your StatefulSet and the headless Service, you can access individual MySQL instances using the DNS naming convention <pod-name>.<service-name>.<namespace>.svc.cluster.local.
The DNS names of our MySQL instances will be:
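```
dbc1-0.mysql.mysqldb.svc.cluster.local
dbc1-1.mysql.mysqldb.svc.cluster.local
dbc1-2.mysql.mysqldb.svc.cluster.local
```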
Create an admin cluster user:
❗️ Replace password with the password you choose for this user.
We are going to create a user named clusteradmin
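One simple way, run from the root SQL session on the instance (the broad grant lets the user administer the cluster; replace password with your own):

```sql
-- Run on dbc1-0 as root
CREATE USER 'clusteradmin'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'clusteradmin'@'%' WITH GRANT OPTION;
```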
We will execute this command for the other two instances as well.
Before creating a production deployment from server instances, you need to check that MySQL on each instance is correctly configured.
To check the configuration of a MySQL server instance:
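For example, for the first instance (run in MySQL Shell's JavaScript mode):

```js
// Checks whether the instance satisfies InnoDB Cluster requirements
dba.checkInstanceConfiguration('clusteradmin@dbc1-0.mysql.mysqldb.svc.cluster.local')
```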
Repeat this command for each server instance (dbc1-1.mysql.mysqldb.svc.cluster.local, dbc1-2.mysql.mysqldb.svc.cluster.local) that you plan to use as part of your cluster.
❗ If the conditions are not met, you will be asked to modify certain parameters; the simplest way is to execute the dba.configureInstance('root@instance_name') command.
✅ Step 6: Create the primary MySQL InnoDB Cluster
We are now going to create the MySQL InnoDB Cluster using the MySQL Shell AdminAPI dba.createCluster() command.
Start with the dbc1-0 instance, which will be the primary instance for the cluster, then add two additional replicas to the cluster. Log in with the clusteradmin user (created in the previous step).
Create the MySQL InnoDB Cluster and add the primary (seed) instance to the cluster.
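In MySQL Shell's JavaScript mode; the cluster name 'mycluster' is illustrative:

```js
// Connect as clusteradmin to the seed instance, then create the cluster
\connect clusteradmin@dbc1-0.mysql.mysqldb.svc.cluster.local
var cluster = dba.createCluster('mycluster')
```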
Our cluster has been successfully created 😀, and we will now add the other two instances.
Add the second instance to the cluster.
Add the remaining instance (dbc1-2.mysql.mysqldb.svc.cluster.local) to the cluster:
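A sketch; recoveryMethod: 'clone' provisions each new member with a full clone of the seed instead of prompting interactively:

```js
cluster.addInstance('clusteradmin@dbc1-1.mysql.mysqldb.svc.cluster.local', {recoveryMethod: 'clone'})
cluster.addInstance('clusteradmin@dbc1-2.mysql.mysqldb.svc.cluster.local', {recoveryMethod: 'clone'})
```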
Verify the cluster’s status.
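In the same MySQL Shell session:

```js
cluster.status()
```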
This command shows the status of the cluster. The topology consists of three hosts, one primary and two secondary instances. Optionally, you can call cluster.status({extended:1}).
✅ Step 7: Deploy the MySQL Router service
The next step is to deploy the Router service to make our instances accessible from an application.
You can deploy a MySQL Router to direct client application traffic to the proper clusters. Routing is based on the connection port of the application issuing a database operation:
Writes are routed to the primary Cluster instance in the primary ClusterSet.
Reads can be routed to any instance in the primary Cluster.
When you start a MySQL Router, it is bootstrapped against the MySQL InnoDB ClusterSet deployment. The MySQL Router instances connected with the MySQL InnoDB ClusterSet are aware of any controlled switchovers or emergency failovers and direct traffic to the new primary cluster.
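A sketch of a possible Router Deployment and Service. The bootstrap environment variables are modeled on the official MySQL Router container image and should be verified against your image's documentation; the image tag and replica count are assumptions, and 6446 (read/write) and 6447 (read-only) are MySQL Router's default ports:

```yaml
# mysql-router.yaml — a sketch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-router
  namespace: mysqldb
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mysql-router
  template:
    metadata:
      labels:
        app: mysql-router
    spec:
      imagePullSecrets:
        - name: ora-cont-secret
      containers:
        - name: mysql-router
          image: container-registry.oracle.com/mysql/community-router:8.0
          env:
            - name: MYSQL_HOST   # seed instance used for bootstrapping
              value: dbc1-0.mysql.mysqldb.svc.cluster.local
            - name: MYSQL_PORT
              value: "3306"
            - name: MYSQL_USER
              value: clusteradmin
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: MYSQL_PASSWORD
          ports:
            - containerPort: 6446   # read-write
            - containerPort: 6447   # read-only
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-router
  namespace: mysqldb
spec:
  type: LoadBalancer
  selector:
    app: mysql-router
  ports:
    - name: rw
      port: 6446
    - name: ro
      port: 6447
```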
Connect to one of the secondary servers to check if the mydb database has been properly replicated.
We will connect using the external address of the Router on the read-only port (6447)
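For example (ROUTER_IP stands for the Service's external address; the client account is whatever user you created for the application):

```shell
mysql -h ROUTER_IP -P 6447 -u clusteradmin -p -e 'SHOW DATABASES;'
```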
Our application database is deployed and accessible through the router service for both reading and writing. We can now deploy our application.
✅ Step 8: Deploy the application
Namespace creation:
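The namespace name ecommerce is an assumption for this example:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ecommerce
```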
Our application will require the following variables to function:
MYSQL_HOST=Host_instance
MYSQL_PORT=db_port
MYSQL_DATABASE=mydb
MYSQL_USER=YourMySQLUserName
MYSQL_PASSWORD=YourMySQLUserPassword
We will create a Secret for the password (MYSQL_PASSWORD).
Secret creation:
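A sketch; the Secret name and namespace are assumptions:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ecommerce-secret
  namespace: ecommerce
type: Opaque
stringData:
  MYSQL_PASSWORD: "YourMySQLUserPassword"   # replace with your user's password
```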
We will create a ConfigMap for the other variables (MYSQL_HOST, MYSQL_PORT, MYSQL_DATABASE, MYSQL_USER).
ConfigMap creation:
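A sketch; the ConfigMap name is an assumption, and MYSQL_HOST points at the Router Service so that writes go through the read-write port:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ecommerce-config
  namespace: ecommerce
data:
  MYSQL_HOST: mysql-router.mysqldb.svc.cluster.local   # Router Service address
  MYSQL_PORT: "6446"                                   # Router read-write port
  MYSQL_DATABASE: mydb
  MYSQL_USER: YourMySQLUserName
```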
Now that the configuration is complete, we will deploy our e-commerce application using this Kubernetes manifest:
ecommerce.yaml:
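A sketch; the container image is a placeholder for your own application image, and the envFrom references assume the ConfigMap and Secret are named ecommerce-config and ecommerce-secret — adjust to the names you used:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ecommerce
  namespace: ecommerce
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ecommerce
  template:
    metadata:
      labels:
        app: ecommerce
    spec:
      containers:
        - name: ecommerce
          image: YOUR_REGISTRY/ecommerce:latest   # placeholder image
          ports:
            - containerPort: 5001
          envFrom:                                # injects the MYSQL_* variables
            - configMapRef:
                name: ecommerce-config
            - secretRef:
                name: ecommerce-secret
---
apiVersion: v1
kind: Service
metadata:
  name: ecommerce
  namespace: ecommerce
spec:
  type: LoadBalancer
  selector:
    app: ecommerce
  ports:
    - port: 5001
      targetPort: 5001
```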
To deploy the application, run this command:
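```shell
kubectl apply -f ecommerce.yaml
```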
Check that the application has been successfully deployed:
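```shell
kubectl get pods,svc -n ecommerce
```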
The application is up 😀, and we can connect to the external address of the service (10.89.0.103) on port 5001:
http://10.89.0.103:5001
The next step involves testing our application's behavior during a failover of our MySQL InnoDB Cluster, a topic we will address in a future article.
You will find all scripts and manifests in the scripts directory of the repository.
Conclusion
In conclusion, we have outlined the essential steps for deploying a MySQL InnoDB cluster in Kubernetes using the Kubernetes manifest method, and we have also demonstrated how to deploy an application that utilizes MySQL. This approach allows you to harness the power and flexibility of MySQL, ensuring a resilient and scalable database solution for your applications. As we continue to explore various deployment strategies in future articles, including the MySQL operator and tools such as ArgoCD, we aim to provide a comprehensive understanding of how to optimize MySQL for diverse environments. Thank you for following along, and we look forward to diving deeper into these topics in upcoming discussions.