Kubestack is a framework for Kubernetes platform engineering teams to define the entire cloud native stack in one Terraform code base and continuously evolve the platform safely through GitOps.
Full Changelog: https://github.com/kbst/terraform-kubestack/compare/v0.19.0-beta.0...v0.19.1-beta.0
Update the framework versions for all modules and the Dockerfile, then run terraform init --upgrade to update the locked provider versions.
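For illustration only, assuming a root module that pins an EKS cluster module by Git ref, the version bump would look roughly like this; the module name and source path are placeholders, adapt them to your configuration:

# illustrative sketch: bump the ref on every cluster module to the new release tag
module "eks_zero" {
  source = "github.com/kbst/terraform-kubestack//aws/cluster?ref=v0.19.1-beta.0"

  # [...]
}

After bumping the versions, terraform init --upgrade refreshes the provider selections recorded in .terraform.lock.hcl.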
Full Changelog: https://github.com/kbst/terraform-kubestack/compare/v0.18.2-beta.0...v0.19.0-beta.0
Update the framework versions for all modules and the Dockerfile, then run terraform init --upgrade to update the locked provider versions.
terraform fmt -recursive by @anhdle14 in https://github.com/kbst/terraform-kubestack/pull/304
Full Changelog: https://github.com/kbst/terraform-kubestack/compare/v0.18.1-beta.0...v0.18.2-beta.0
Update the framework versions for all modules and the Dockerfile, then run terraform init --upgrade to update the locked provider versions.
Full Changelog: https://github.com/kbst/terraform-kubestack/compare/v0.18.0-beta.0...v0.18.1-beta.0
Update the framework versions for all modules and the Dockerfile, then run terraform init --upgrade to update the locked provider versions.
Full Changelog: https://github.com/kbst/terraform-kubestack/compare/v0.17.1-beta.0...v0.18.0-beta.0
Upgrade the versions for each module and the source image in the Dockerfile. Also run terraform init --upgrade to update the provider versions in the lock file.
There are no specific steps required for EKS. There are two optional steps, one for AKS and one for GKE. See below for details.
The azurerm provider has a number of breaking changes in the latest version. Most have been handled inside the module and do not require special steps during upgrade. There is one exception: the reserved ingress IP.
Upstream refactored how zones are handled across resources. Kubestack let Azure handle the zones for the ingress IP, but previous provider versions stored them in state. The upgraded provider wants to destroy and recreate the IP because the zones in state don't match what's specified in code (null).
Users who do not want the IP to be recreated have to set the zones explicitly to match what's in state.
configuration = {
  # Settings for Apps-cluster
  apps = {
    # [...]
    default_ingress_ip_zones = "1,2,3"
    # [...]
  }
  # [...]
}
Starting with Kubernetes v1.24, GKE will default to COS_containerd instead of COS for the node image type. Kubestack follows upstream and defaults to the Containerd version for new node pools starting with this release. For existing node pools, you can set cluster_image_type for the default node pool configured as part of the cluster module, or image_type for additional node pools, to either COS_containerd or COS to explicitly set this for a respective node pool.
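As a sketch that mirrors the AKS configuration example above, pinning the apps cluster's default node pool to the previous image type could look like this; set image_type the same way in the configuration of additional node pool modules:

configuration = {
  # Settings for Apps-cluster
  apps = {
    # [...]
    # keep the previous node image type for the default node pool
    cluster_image_type = "COS"
    # [...]
  }
  # [...]
}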
Full Changelog: https://github.com/kbst/terraform-kubestack/compare/v0.17.0-beta.0...v0.17.1-beta.0
Update the framework versions for all modules and the Dockerfile, then run terraform init --upgrade to update the locked provider versions.
Full Changelog: https://github.com/kbst/terraform-kubestack/compare/v0.16.3-beta.0...v0.17.0-beta.0
Both EKS and GKE until now had a kubernetes provider configured inside the cluster module. Moving away from exec based kubeconfigs made it possible to remove the providers configured inside the modules and correct this anti-pattern.
This change requires that upgrading users configure the kubernetes provider in the root module and then pass the configured provider into the cluster module.
EKS Example
module "eks_zero" {
providers = {
aws = aws.eks_zero
# pass kubernetes provider to the EKS cluster module
kubernetes = kubernetes.eks_zero
}
# [...]
}
# configure aliased kubernetes provider EKS
provider "kubernetes" {
alias = "eks_zero"
host = local.eks_zero_kubeconfig["clusters"][0]["cluster"]["server"]
cluster_ca_certificate = base64decode(local.eks_zero_kubeconfig["clusters"][0]["cluster"]["certificate-authority-data"])
token = local.eks_zero_kubeconfig["users"][0]["user"]["token"]
}
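The local.eks_zero_kubeconfig referenced above is not defined in this snippet; one possible definition, assuming the cluster module exposes its kubeconfig as a YAML output, is:

locals {
  # decode the kubeconfig YAML returned by the cluster module
  # (assumes the module exposes a kubeconfig output)
  eks_zero_kubeconfig = yamldecode(module.eks_zero.kubeconfig)
}

The gke_zero_kubeconfig local in the GKE example below can be defined the same way.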
GKE Example
module "gke_zero" {
# add providers block and pass kubernetes provider for GKE cluster module
providers = {
kubernetes = kubernetes.gke_zero
}
# [...]
}
# configure aliased kubernetes provider for GKE
provider "kubernetes" {
alias = "gke_zero"
host = local.gke_zero_kubeconfig["clusters"][0]["cluster"]["server"]
cluster_ca_certificate = base64decode(local.gke_zero_kubeconfig["clusters"][0]["cluster"]["certificate-authority-data"])
token = local.gke_zero_kubeconfig["users"][0]["user"]["token"]
}
No special steps required. Update the module versions and the image tag in the Dockerfile. Then run the pipeline to apply.
No special steps required. Update the module versions and the image tag in the Dockerfile. Then run the pipeline to apply.