Escalator is a batch- or job-optimized horizontal autoscaler for Kubernetes.
Full Changelog: https://github.com/atlassian/escalator/compare/v1.14.0...v1.14.1
This release bumps a number of packages and includes some documentation changes.
ghcr.io/atlassian/escalator:v1.14.0
This release bumps the toolchain to Go 1.20, updates other dependencies, and includes some minor changes.
#214 - Bump github.com/aws/aws-sdk-go from 1.33.0 to 1.34.0
#215 - Bump github.com/prometheus/client_golang from 1.5.1 to 1.11.1
#218 - Bump golang.org/x/net from 0.0.0-20211209124913-491a49abca63 to 0.7.0
#222 - Update docker-publish workflow, disable provenance when building images
#223 - Ensure all node removal and taint log messages contain nodegroup fields
#225 - Ensure that scale-ups always occur when there are starved pods
#227 - Upgrade go version to 1.20
ghcr.io/atlassian/escalator:v1.13.2
This release bumps the toolchain from Go 1.17 to Go 1.19, pins the container base image to Alpine 3.16, and fixes documentation.
#210 - docs: remove scale to zero gotcha - Thanks @omjadas
#211 - Update golangci-lint.yml
#212 - Dockefile: Bump golang and Alpine
ghcr.io/atlassian/escalator:v1.13.1
This release fixes a bug in comparing empty pod affinity expressions, bumps the toolchain from Go 1.14 to Go 1.17 and Kubernetes packages to 1.22, and introduces arm64 multi-arch container image builds.
#209 - Default Pod Selector ignores Pods with affinity set to empty struct - Thanks @decayofmind
#205 - Build arm64 multi-arch Docker images
#206 - Bump versions of Go and dependent libraries
ghcr.io/atlassian/escalator:v1.13.0
This release ensures Escalator considers initContainers and pod overhead when calculating pod requests, making it more accurate in those edge cases. We migrated the CI/CD system to GitHub Actions and GHCR for images. Additionally, thanks to @haugenj, Escalator now terminates orphaned instances for AWS Fleet.
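For context, the Kubernetes scheduler treats a pod's effective request as the larger of the regular containers' summed requests and the largest single initContainer request (init containers run sequentially), plus any pod overhead. A minimal sketch of that calculation, for a single resource such as CPU millicores (the function name and inputs are illustrative, not Escalator's API):

```python
def effective_pod_request(containers, init_containers, overhead=0):
    """Effective request for one resource (e.g. CPU millicores).

    Regular containers run concurrently, so their requests sum; init
    containers run one at a time, so only the largest one matters.
    Pod overhead is added on top of whichever is larger.
    """
    regular = sum(containers)
    init = max(init_containers, default=0)
    return max(regular, init) + overhead

# Two app containers (200m + 300m), one init container requesting
# 600m, and 50m pod overhead: max(500, 600) + 50 = 650m.
print(effective_pod_request([200, 300], [600], overhead=50))  # 650
```

Ignoring the init container or overhead here would undercount by 150m, which is exactly the kind of edge case this release addresses.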
Note: From this version forward, Escalator images are published to ghcr.io instead of Docker Hub.
ghcr.io/atlassian/escalator:v1.12.0
This release adds the ability for Escalator to tag Auto Scaling Groups and Fleet requests. See the nodegroup config to enable it, and make sure to update your IAM policy to allow tagging.
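The exact policy depends on your setup, but a fragment like the following sketch (not Escalator's documented policy) grants the tagging permissions typically needed for Auto Scaling Groups and EC2 resources:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:CreateOrUpdateTags",
        "ec2:CreateTags"
      ],
      "Resource": "*"
    }
  ]
}
```

Scope `Resource` down to your nodegroups' ARNs where practical rather than using `*`.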
atlassian/escalator:v1.11.0
This release updates dependencies, updates Kubernetes client dependencies from v1.13 to v1.18, builds on Go 1.14, moves to Leases instead of ConfigMaps for leader election, introduces support for AWS Spot instance types, and adds a feature to keep specific nodes safe from deletion with an annotation.
As part of the upgrade to Kubernetes client libraries v1.18.1, Escalator now uses Lease objects to perform leader election instead of ConfigMaps. If you use RBAC and leader election, make sure to give Escalator the permissions to access these new object types; you can find an updated RBAC example here. No other changes are needed for Escalator to start using Leases.
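Escalator's actual RBAC manifest may differ, but a Role granting access to Leases generally looks like the following sketch (the name and namespace are assumptions):

```yaml
# Illustrative Role for Lease-based leader election; the metadata
# values are assumptions, not Escalator's shipped manifest.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: escalator-leader-election
  namespace: kube-system
rules:
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "create", "update"]
```

Bind it to Escalator's ServiceAccount with a matching RoleBinding in the same namespace.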
atlassian/escalator:v1.10.0
This release fixes two issues with Escalator.
An edge case where increasing the minimum cloud provider nodegroup size above the current number of nodes caused Escalator to skip untainting nodes to reach the desired capacity and request new nodes instead. Escalator will now first try to untaint the required nodes in that rare case.
A bug where using NodeAffinity to match pods to nodes ignored the operator instead of matching only against the In operator as desired. Note: if you currently use NodeAffinity match selector operators other than In, you should update your pod configuration to match the documentation before updating to this version of Escalator.
Only match against In operator for NodeAffinity match selectors, #160
Untaint nodes when below minimum before requesting new nodes #168
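For reference, a pod spec fragment using the In operator in nodeAffinity (the label key and values below are illustrative, not Escalator defaults) looks like:

```yaml
# Illustrative nodeAffinity using the supported In operator.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: nodegroup
              operator: In
              values: ["batch-workers"]
```

Selectors using other operators, such as NotIn or Exists, should be revised per the note above before upgrading.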
atlassian/escalator:v1.8.0