Using Terraform to Deploy KinD Kubernetes Clusters
Ruan Bekker (@ruanbekker)
In this post we will use Terraform to deploy a 2 node KinD Kubernetes cluster.
If you are looking for related content, you can look under my kind tags for related tutorials.
Assumptions
I will assume that you already have the following tools installed:
- Docker
- Terraform
- kubectl

KinD runs Kubernetes nodes as Docker containers, which is why Docker needs to be installed if you are following this tutorial.
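If you want to quickly verify that these tools are available on your machine, the following commands should work (your version numbers will differ):

docker --version
terraform version
kubectl version --client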
Write Terraform Code
We will create the directory where we will be defining our terraform code:
mkdir workspace
Then change to our directory:
cd workspace
Next we need a KinD Terraform provider. There are a couple available online, but I went with tehcyx/kind.
Then we will define our providers.tf:
terraform {
  required_providers {
    kind = {
      source  = "tehcyx/kind"
      version = "0.4.0"
    }
  }
}

provider "kind" {}
Then we can define our kind_cluster resource in our main.tf, where we want to define the following:
- Write the kubeconfig file to /tmp/config
- Define the cluster name
- Define the Kubernetes version (versions: kind/releases)
- Define the node roles
- Define the port mappings from the host to the container
For more configuration see the provider documentation.
resource "kind_cluster" "default" {
name = "test-cluster"
node_image = "kindest/node:v1.27.1"
kubeconfig_path = pathexpand("/tmp/config")
wait_for_ready = true
kind_config {
kind = "Cluster"
api_version = "kind.x-k8s.io/v1alpha4"
node {
role = "control-plane"
extra_port_mappings {
container_port = 80
host_port = 80
}
}
node {
role = "worker"
}
}
}
If you are using Terraform to consume values from the cluster, such as the cluster CA certificate and client certificate, in another provider (for example the Helm provider), you can reference those attributes directly; a sketch of that follows the outputs below. For demonstration purposes, I will simply expose them as outputs in outputs.tf:
output "kubeconfig" {
value = kind_cluster.default.kubeconfig
}
output "endpoint" {
value = kind_cluster.default.endpoint
}
output "client_certificate" {
value = kind_cluster.default.client_certificate
}
output "client_key" {
value = kind_cluster.default.client_key
}
output "cluster_ca_certificate" {
value = kind_cluster.default.cluster_ca_certificate
}
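As a rough sketch of the Helm provider scenario mentioned above, the cluster attributes could be wired in along these lines. This is not part of this tutorial's code and assumes the hashicorp/helm provider with its v2 kubernetes block syntax:

provider "helm" {
  kubernetes {
    host                   = kind_cluster.default.endpoint
    client_certificate     = kind_cluster.default.client_certificate
    client_key             = kind_cluster.default.client_key
    cluster_ca_certificate = kind_cluster.default.cluster_ca_certificate
  }
}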
Initialize Terraform
Now that we have defined our Terraform configuration, we can run terraform init, which will download the providers:
terraform init
We can then run terraform plan, which will show us what Terraform wants to deploy:
terraform plan
Which in my case shows the following:
Terraform will perform the following actions:

  # kind_cluster.default will be created
  + resource "kind_cluster" "default" {
      + client_certificate     = (known after apply)
      + client_key             = (known after apply)
      + cluster_ca_certificate = (known after apply)
      + completed              = (known after apply)
      + endpoint               = (known after apply)
      + id                     = (known after apply)
      + kubeconfig             = (known after apply)
      + kubeconfig_path        = "/tmp/config"
      + name                   = "test-cluster"
      + node_image             = "kindest/node:v1.27.1"
      + wait_for_ready         = true

      + kind_config {
          + api_version = "kind.x-k8s.io/v1alpha4"
          + kind        = "Cluster"

          + node {
              + role = "control-plane"

              + extra_port_mappings {
                  + container_port = 80
                  + host_port      = 80
                }
            }

          + node {
              + role = "worker"
            }
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.
If you are happy with the proposed changes, you can run a terraform apply:
terraform apply -auto-approve
It took about 2 minutes for me to have a cluster deployed. This will depend on your internet speed and on whether the kind node image has already been downloaded.
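Once the apply has completed, you can also inspect the outputs that we defined earlier with terraform output, for example:

terraform output endpoint
terraform output cluster_ca_certificate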
Interact with your Kubernetes Cluster
First we need to set our KUBECONFIG environment variable to point to the config file that Terraform created:
export KUBECONFIG=/tmp/config
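To confirm that kubectl is now pointing at the new cluster, a quick sanity check is kubectl cluster-info, which should report the control plane endpoint of test-cluster:

kubectl cluster-info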
Now we can use kubectl to view our nodes:
kubectl get nodes -o wide
This should output something like the following:
NAME                         STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION       CONTAINER-RUNTIME
test-cluster-control-plane   Ready    control-plane   3m46s   v1.27.1   172.24.0.3    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-101-generic   containerd://1.6.21
test-cluster-worker          Ready    <none>          3m22s   v1.27.1   172.24.0.2    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-101-generic   containerd://1.6.21
We can test our cluster by creating an nginx deployment:
kubectl create deployment demo --image=nginx --port=80 --namespace=default
We can then view our pods:
kubectl get pods -n default -o wide
Which will show us this output:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
demo-59989484f-rtbdk 1/1 Running 0 67s 10.244.1.2 test-cluster-worker <none> <none>
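To quickly check that nginx is actually serving traffic, one option is a simple port-forward to the deployment and a curl against it (the local port 8080 used here is just an arbitrary choice):

kubectl port-forward deployment/demo 8080:80 -n default
# in a second terminal
curl -I http://localhost:8080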
Cleanup
We can tear down the cluster by running terraform destroy:
terraform destroy -auto-approve
Thank You
The code for this demonstration can be found in my quick-starts repository.
Thanks for reading, if you like my content, feel free to check out my website, and subscribe to my newsletter or follow me at @ruanbekker on Twitter.
- Linktree: https://go.ruan.dev/links
- Patreon: https://go.ruan.dev/patreon