Terraform Kubestack Versions

Kubestack is a framework for Kubernetes platform engineering teams to define the entire cloud native stack in one Terraform code base and continuously evolve the platform safely through GitOps.

v0.19.1-beta.0

8 months ago

Full Changelog: https://github.com/kbst/terraform-kubestack/compare/v0.19.0-beta.0...v0.19.1-beta.0

Upgrade Notes

Update the framework versions for all modules and the image tag in the Dockerfile, then run terraform init --upgrade to update the locked provider versions.
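
For example, bumping the version means changing the ref on each cluster module's source. A minimal sketch for an EKS cluster module, assuming the quickstart repository layout (the aws/cluster module path is an assumption, check the source addresses in your own configuration):

module "eks_zero" {
  # point the ref at the new release tag
  source = "github.com/kbst/terraform-kubestack//aws/cluster?ref=v0.19.1-beta.0"

  # [...]
}

The base image tag in the Dockerfile is bumped to the same release tag, then terraform init --upgrade refreshes the provider lock file.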

v0.19.0-beta.0

9 months ago

Full Changelog: https://github.com/kbst/terraform-kubestack/compare/v0.18.2-beta.0...v0.19.0-beta.0

Upgrade Notes

Update the framework versions for all modules and the image tag in the Dockerfile, then run terraform init --upgrade to update the locked provider versions.

v0.18.2-beta.0

1 year ago

Full Changelog: https://github.com/kbst/terraform-kubestack/compare/v0.18.1-beta.0...v0.18.2-beta.0

Upgrade Notes

Update the framework versions for all modules and the image tag in the Dockerfile, then run terraform init --upgrade to update the locked provider versions.

v0.18.1-beta.0

1 year ago

Full Changelog: https://github.com/kbst/terraform-kubestack/compare/v0.18.0-beta.0...v0.18.1-beta.0

Upgrade Notes

Update the framework versions for all modules and the image tag in the Dockerfile, then run terraform init --upgrade to update the locked provider versions.

v0.18.0-beta.0

1 year ago

Full Changelog: https://github.com/kbst/terraform-kubestack/compare/v0.17.1-beta.0...v0.18.0-beta.0

Upgrade Notes

Upgrade the versions for each module and the source image in the Dockerfile. Also run terraform init --upgrade to update the provider versions in the lock file.

There are no specific steps required for EKS. There is one optional step each for AKS and GKE, detailed below.

AKS

The azurerm provider has a number of breaking changes in the latest version. Most have been handled inside the module and do not require special steps during the upgrade. There is one exception: the reserved ingress IP.

Upstream refactored how zones are handled across resources. Kubestack previously let Azure handle the zones for the ingress IP, but earlier provider versions stored them in state. The upgraded provider wants to destroy and recreate the IP, because the zones in state do not match what is specified in code (null).

Users who do not want the IP to be recreated have to set the zones explicitly to match what is in state:

configuration = {
  # Settings for Apps-cluster
  apps = {
    # [...]

    # pin the zones to the values recorded in state
    default_ingress_ip_zones = "1,2,3"

    # [...]
  }

  # [...]
}

GKE

Starting with Kubernetes v1.24, GKE defaults to COS_containerd instead of COS for the node image type. Kubestack follows upstream and defaults to the containerd variant for new node pools starting with this release. For existing node pools, you can set this explicitly per node pool: cluster_image_type for the default node pool configured as part of the cluster module, or image_type for additional node pools, with a value of either COS_containerd or COS. See the sketch below.
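
For example, to keep an existing default node pool on the previous image type, pin it explicitly in the cluster configuration. A minimal sketch following the configuration map used in the AKS example above (only the image type attribute is specific to this note):

configuration = {
  # Settings for Apps-cluster
  apps = {
    # [...]

    # pin the default node pool to the previous image type,
    # new node pools default to COS_containerd from this release on
    cluster_image_type = "COS"

    # [...]
  }

  # [...]
}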

v0.17.1-beta.0

1 year ago

Full Changelog: https://github.com/kbst/terraform-kubestack/compare/v0.17.0-beta.0...v0.17.1-beta.0

Upgrade Notes

Update the framework versions for all modules and the image tag in the Dockerfile, then run terraform init --upgrade to update the locked provider versions.

v0.17.0-beta.0

1 year ago

Full Changelog: https://github.com/kbst/terraform-kubestack/compare/v0.16.3-beta.0...v0.17.0-beta.0

Upgrade Notes

EKS and GKE

Until now, both EKS and GKE had a kubernetes provider configured inside the cluster module. The change away from exec-based kubeconfigs allowed removing the providers configured inside the modules and correcting this anti-pattern.

This change requires that upgrading users configure the kubernetes provider in the root module and then pass the configured provider into the cluster module.

EKS Example

module "eks_zero" {
  providers = {
    aws = aws.eks_zero
    # pass kubernetes provider to the EKS cluster module
    kubernetes = kubernetes.eks_zero
  }

  # [...]
}

# configure aliased kubernetes provider for EKS
provider "kubernetes" {
  alias = "eks_zero"

  host                   = local.eks_zero_kubeconfig["clusters"][0]["cluster"]["server"]
  cluster_ca_certificate = base64decode(local.eks_zero_kubeconfig["clusters"][0]["cluster"]["certificate-authority-data"])
  token                  = local.eks_zero_kubeconfig["users"][0]["user"]["token"]
}

GKE Example

module "gke_zero" {
  # add providers block and pass kubernetes provider for GKE cluster module
  providers = {
    kubernetes = kubernetes.gke_zero
  }

  # [...]
}

# configure aliased kubernetes provider for GKE
provider "kubernetes" {
  alias = "gke_zero"

  host                   = local.gke_zero_kubeconfig["clusters"][0]["cluster"]["server"]
  cluster_ca_certificate = base64decode(local.gke_zero_kubeconfig["clusters"][0]["cluster"]["certificate-authority-data"])
  token                  = local.gke_zero_kubeconfig["users"][0]["user"]["token"]
}
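
Both examples reference a local holding the parsed kubeconfig. A minimal sketch of how such locals can be defined, assuming the respective cluster module exposes its kubeconfig as a YAML string output named kubeconfig:

locals {
  # parse the kubeconfig module outputs, so the provider
  # configurations above can reference server, CA and token
  eks_zero_kubeconfig = yamldecode(module.eks_zero.kubeconfig)
  gke_zero_kubeconfig = yamldecode(module.gke_zero.kubeconfig)
}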

v0.16.3-beta.0

2 years ago
  • GKE: Allow setting cloud router BGP config #274 - thanks @rzk
  • Update CLI and Terraform versions and lock azurerm provider version #275

Upgrade Notes

No special steps required. Update the module versions and the image tag in the Dockerfile. Then run the pipeline to apply.

v0.16.2-beta.0

2 years ago
  • EKS: Stop using EKS-D Kind images #264
  • EKS: Allow configuring VPC DNS options #265 - thanks @mark5cinco
  • EKS: Allow disabling all logging types by setting empty string #269 - thanks @krpatel19
  • GKE: Allow setting KMS key for secret encryption at rest #272 - thanks @mark5cinco
  • EKS: Allow setting KMS key for secret encryption at rest #270 - thanks @mark5cinco

Upgrade Notes

No special steps required. Update the module versions and the image tag in the Dockerfile. Then run the pipeline to apply.

v0.16.1-beta.0

2 years ago
  • GKE: Provide configuration variable to enable Cloud TPU feature #249
  • EKS: Provision per AZ NAT gateways when opting for private IP nodes #250
  • GKE: Allow configuring K8s pod and service CIDRs #251
  • EKS: Add ability to specify additional tags for nodes #253 - thanks @mark5cinco
  • AKS: Allow setting availability zones for the default node pool's nodes #254 - thanks @ajrpayne
  • Sign images #255 and quickstart artifacts #257 with sigstore/cosign - thanks @cpanato
  • EKS: Prevent provider level tags to constantly show up as changes for node pools #262

Upgrade Notes

No special steps required. Update the module versions and the image tag in the Dockerfile. Then run the pipeline to apply.