How to deploy Kubernetes on Brightbox
This guide takes you through deploying a Kubernetes cluster on Brightbox using Terraform.
The deployed cluster will be pre-configured with the Brightbox Kubernetes controller manager, allowing Kubernetes to manage its own resources using the Brightbox API.
Requirements
You need a Brightbox account with an SSH key set up, and you’ll need an SSH agent running locally with that key added.
Locally, you’ll also need git and Terraform installed.
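As a quick sanity check, you can confirm each of those prerequisites from your terminal before starting (the exact versions and keys in your output will differ):
$ ssh-add -l          # should list at least one key loaded in your agent
$ git --version
$ terraform version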
Clone the Brightbox kubernetes-cluster terraform configuration repository
We’ve written a set of Terraform configs to build a Kubernetes cluster for you, so get those from GitHub:
$ git clone https://github.com/brightbox/kubernetes-cluster.git
Cloning into 'kubernetes-cluster'...
remote: Counting objects: 170, done.
remote: Compressing objects: 100% (94/94), done.
remote: Total 170 (delta 110), reused 127 (delta 74), pack-reused 0
Receiving objects: 100% (170/170), 39.88 KiB | 5.70 MiB/s, done.
Resolving deltas: 100% (110/110), done.
$ cd kubernetes-cluster/
Initialize Terraform
Then get Terraform to initialize all the relevant plugins:
$ terraform init
Initializing modules...
- k8s_cluster in cluster
- k8s_master in master
- k8s_storage in worker
- k8s_worker in worker
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "template" (hashicorp/template) 2.1.2...
- Downloading plugin for provider "digitalocean" (terraform-providers/digitalocean) 1.20.0...
- Downloading plugin for provider "brightbox" (terraform-providers/brightbox) 1.3.0...
- Downloading plugin for provider "null" (hashicorp/null) 2.1.2...
- Downloading plugin for provider "random" (hashicorp/random) 2.3.0...
- Downloading plugin for provider "tls" (hashicorp/tls) 2.0.1...
Terraform has been successfully initialized!
Configure Terraform
We need to tell Terraform what to call the cluster, which Brightbox account to build the cluster on, the username and password to authenticate with, and how many worker servers to build.
Create a file called local.auto.tfvars with the following keys and appropriate values:
account = "acc-xxxxx"
username = "you@brightbox.com"
cluster_name = "example"
worker_count = 2
You’ll notice this doesn’t include your password. We recommend against storing user credentials on disk in plain text, even locally.
Luckily, Terraform allows us to provide variables from environment variables. To avoid your password being echoed to the screen or ending up in your bash_history file, use the read command to prompt for a password and then export it into the environment:
$ read -p "Password:" -s TF_VAR_password
$ export TF_VAR_password
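If you want to confirm the variable is set without printing the password itself, a shell parameter expansion like this reports only whether it is non-empty:
$ echo "${TF_VAR_password:+password is set}"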
Build the cluster!
Amazingly, that’s all the hard work done. Now just apply the configuration and Terraform will spit out a huge plan of action and ask you to confirm:
$ terraform apply
module.k8s_worker.data.brightbox_image.k8s_worker: Refreshing state...
module.k8s_storage.data.brightbox_image.k8s_worker: Refreshing state...
module.k8s_master.data.brightbox_image.k8s_master: Refreshing state...
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
<= read (data resources)
Plan: 25 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
Type yes and hit enter, and Terraform will build and configure your new Kubernetes cluster.
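Incidentally, if you ever want to see the plan of action without being prompted to apply it, terraform plan prints the same plan but makes no changes:
$ terraform plan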
After a few minutes the cluster will be built and Terraform will spit out some useful information: …
Apply complete! Resources: 25 added, 0 changed, 0 destroyed.
Outputs:
bastion = cip-98y5p.gb1.brightbox.com
group_fqdn = grp-pjl1f.gb1.brightbox.com
master = 109.107.38.173
storage_ids = []
worker_ids = [
"srv-hrgv1",
"srv-rmfdx",
]
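You can re-display these outputs at any time by running terraform output from the configuration directory:
$ terraform output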
Connect to your Kubernetes cluster
The master output is the public IP address of the Kubernetes master server. You can SSH into this server using your SSH key:
$ ssh ubuntu@109.107.38.173
Last login: Thu Jul 16 09:37:10 2020 from x.x.x.x
ubuntu@srv-4dbz0:~$
Or you can use a neat trick to get Terraform to fill in the address for you, like this:
$ ssh ubuntu@$(terraform output -raw master)
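The cluster also includes a bastion host (the bastion output above), which you can use to reach nodes that don’t have their own public address. Here’s a sketch using OpenSSH’s -J (ProxyJump) option, assuming the workers accept the same ubuntu user and SSH key as the master:
$ ssh -J ubuntu@$(terraform output -raw bastion) ubuntu@srv-hrgv1   # worker name from the worker_ids output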
So now you can use kubectl on the master to inspect the cluster:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
srv-4dbz0 Ready master 10m v1.18.5
srv-hrgv1 Ready worker 7m56s v1.18.5
srv-rmfdx Ready worker 7m40s v1.18.5
Here you can see we have one master server and two worker nodes.
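You can also check that the Brightbox cloud controller manager mentioned at the start is up and running. It runs in the kube-system namespace; assuming its pod name includes "brightbox", a quick filter will find it:
$ kubectl -n kube-system get pods | grep brightbox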
Grow the cluster
Let’s get Terraform to build an additional node for our cluster.
Edit the local.auto.tfvars file and increase the worker_count variable from 2 to 3:
worker_count = 3
Then run terraform apply again. Terraform knows that it has already built the rest of the cluster and just builds one new node and configures it:
$ terraform apply
Plan: 3 to add, 1 to change, 1 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
...
Apply complete! Resources: 3 added, 1 changed, 1 destroyed.
Outputs:
bastion = cip-98y5p.gb1.brightbox.com
group_fqdn = grp-pjl1f.gb1.brightbox.com
master = 109.107.38.173
storage_ids = []
worker_ids = [
"srv-hrgv1",
"srv-rmfdx",
"srv-5z9pu",
]
Then on the master you can confirm that the new server was added to the cluster:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
srv-4dbz0 Ready master 16m v1.18.5
srv-5z9pu Ready worker 2m20s v1.18.5
srv-hrgv1 Ready worker 14m v1.18.5
srv-rmfdx Ready worker 13m v1.18.5
So now you have a four node Kubernetes cluster, ready to receive your container deployments!
Now you might want to follow our guide to deploying an app with a load balancer on the cluster.
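As a taste of what the controller manager integration means in practice, creating a Service of type LoadBalancer should have Kubernetes provision a Brightbox load balancer and cloud IP for you via the API. A minimal sketch, using an example nginx deployment (the names here are just illustrations):
$ kubectl create deployment hello --image=nginx
$ kubectl expose deployment hello --type=LoadBalancer --port=80
$ kubectl get service hello   # wait for an external IP to appear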