# A Terraform provider/provisioner for deploying Kubernetes with kubeadm
A Terraform resource definition and provisioner that lets you install Kubernetes on a cluster.

The underlying resources on which the provisioner runs can be AWS instances, libvirt machines, LXD containers or any other resource that supports SSH-like connections. The `kubeadm` provisioner will run, over this SSH connection, all the commands necessary for installing Kubernetes on those machines, according to the configuration specified in the `resource "kubeadm"` block.
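Since everything runs over SSH, each machine only needs to be reachable through Terraform's standard `connection` settings. A minimal sketch of what that could look like (the `aws_instance` and all of its arguments are illustrative assumptions, not taken from this project's examples):

```hcl
# hypothetical machine: an AWS instance stands in for "any resource
# that supports SSH-like connections"
resource "aws_instance" "node" {
  ami           = "ami-12345678" # assumption: any SSH-capable Linux image
  instance_type = "t2.medium"

  # standard Terraform connection block: the "kubeadm" provisioner
  # runs its commands over this SSH session
  connection {
    type        = "ssh"
    host        = "${self.public_ip}"
    user        = "ubuntu"
    private_key = "${file("~/.ssh/id_rsa")}"
  }

  provisioner "kubeadm" {
    config = "${kubeadm.main.config}"
  }
}
```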
Here is an example that will set up Kubernetes on a cluster created with the Terraform libvirt provider:
resource "kubeadm" "main" {
api {
external = "loadbalancer.external.com" # external address for accessing the API server
}
cni {
plugin = "flannel" # could be 'weave' as well...
}
network {
dns_domain = "my_cluster.local"
services = "10.25.0.0/16"
}
# install some extras: helm, the dashboard...
helm { install = "true" }
dashboard { install = "true" }
}
# from the libvirt provider
resource "libvirt_domain" "master" {
name = "master"
memory = 1024
# this provisioner will start a Kubernetes master in this machine,
# with the help of "kubeadm"
provisioner "kubeadm" {
# there is no "join", so this will be the first node in the cluster: the seeder
config = "${kubeadm.main.config}"
# when creating multiple masters, the first one (the _seeder_) must join="",
# and the rest will join it afterwards...
join = "${count.index == 0 ? "" : libvirt_domain.master.network_interface.0.addresses.0}"
role = "master"
install {
# this will try to install "kubeadm" automatically in this machine
auto = true
}
}
# provisioner for removing the node from the cluster
provisioner "kubeadm" {
when = "destroy"
config = "${kubeadm.main.config}"
drain = true
}
}
# from the libvirt provider
resource "libvirt_domain" "minion" {
count = 3
name = "minion${count.index}"
# this provisioner will start a Kubernetes worker in this machine,
# with the help of "kubeadm"
provisioner "kubeadm" {
config = "${kubeadm.main.config}"
# this will make this minion "join" the cluster started by the "master"
join = "${libvirt_domain.master.network_interface.0.addresses.0}"
install {
# this will try to install "kubeadm" automatically in this machine
auto = true
}
}
# provisioner for removing the node from the cluster
provisioner "kubeadm" {
when = "destroy"
config = "${kubeadm.main.config}"
drain = true
}
}
Note well that:

* provisioners must specify the `config = ${kubeadm.XXX.config}` attribute.
* nodes that join the seeder must specify a `join` attribute pointing to the `<IP/name>` they must join. You can use the optional `role` parameter for specifying whether the node is joining as a `master` or as a `worker`.
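For instance, a worker join spelled out with the optional `role` could look like this (a sketch based on the minion example above, which omits `role` and presumably defaults to a worker join):

```hcl
resource "libvirt_domain" "minion" {
  # ... machine definition as in the example above ...

  # join the cluster explicitly as a worker
  provisioner "kubeadm" {
    config = "${kubeadm.main.config}"
    join   = "${libvirt_domain.master.network_interface.0.addresses.0}"
    role   = "worker" # optional: "master" or "worker"
  }
}
```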
Now you can see the plan, apply it, and then destroy the infrastructure:
```console
$ terraform plan
$ terraform apply
$ terraform destroy
```
You can find examples of the provider/provisioner in other environments (OpenStack, LXD, etc.) in the examples directory.
provisioner "kubeadm"
in the machines
you want to be part of the cluster.
count
of your masters or workers.kubeadm
attributes
in other parts of your Terraform script. This makes it easy to do things like:
kubeadm
in the code you have for creating your Load Balancer.cloud-init
code) that can
be used for creating machines dynamically, without Terraform being involved
(like autoscaling groups in AWS).(check the TODO for an updated list of features).
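As a hedged illustration of the attribute-reuse point above: `config` is the only attribute this README confirms, and it can be fed into any other resource, for example the `local_file` resource from the official `local` provider (the filename below is made up for the example):

```hcl
# sketch: reuse the kubeadm resource's "config" output elsewhere
resource "local_file" "kubeadm_config" {
  content  = "${kubeadm.main.config}"
  filename = "${path.module}/kubeadm.conf" # hypothetical destination
}
```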
## Status

This provider/provisioner is being actively developed, but I would still consider it ALPHA: there can be many rough edges, and some things may change without prior notice. To see what is left or planned, check the issues list and the roadmap.
## Installing

Build the provider and the provisioner, and install them in your local Terraform plugins directory:

```console
$ mkdir -p $HOME/.terraform.d/plugins
$ # with go>=1.12
$ go build -v -o $HOME/.terraform.d/plugins/terraform-provider-kubeadm \
    github.com/inercia/terraform-provider-kubeadm/cmd/terraform-provider-kubeadm
$ go build -v -o $HOME/.terraform.d/plugins/terraform-provisioner-kubeadm \
    github.com/inercia/terraform-provider-kubeadm/cmd/terraform-provisioner-kubeadm
```
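Terraform discovers third-party plugins placed under `$HOME/.terraform.d/plugins` when a configuration is initialized, so after building them re-run `terraform init` in your configuration directory (the directory name below is just an example):

```console
$ cd my-cluster/   # your Terraform configuration directory
$ terraform init
```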
For using `kubeadm` in your Terraform scripts you must define:

* a `resource "kubeadm"` configuration block.
* a `provisioner "kubeadm"` block in every machine that will be part of the cluster.

## Running the tests

You can run the unit tests with:
```console
$ make test
```

There are end-to-end tests as well, which can be launched with:

```console
$ make tests-e2e
```