Complete container management platform
It is important to review the Install/Upgrade Notes below before upgrading to any Rancher version.
The v2.7.12 build is only available for Rancher Prime customers, through the Rancher Prime registry. To learn more about Rancher Prime, see our page on the Rancher Prime Platform.
See the full list of issues addressed.
- If you're installing Rancher for the first time, your environment must fulfill the installation requirements.
- Pass `--no-hooks` to the `helm template` command to skip rendering files for Helm's hooks. See #3226.
- Make sure `NO_PROXY` is configured correctly. See the documentation and related issue #2725.
- Supply a `registries.yaml` file to the `docker run` command, as shown in the K3s documentation. If the registry has certificates, then you'll also need to supply those. See #28969.

The following information only applies if you are upgrading from Rancher v2.7.5. It does not apply if you are upgrading directly to the latest Rancher version from v2.7.4 or earlier, or if you are upgrading to the latest Rancher version from v2.7.6.
Rancher v2.7.6 and later contain a reverse migration utility that runs at startup. Data migration is only triggered if you have been on Rancher v2.7.5.
Users affected by the v2.7.5 data migration, which updated principalIDs to use GUIDs instead of Distinguished Names (DNs), will be corrected. Rancher v2.7.6 and later fix bugs that prevented some users from logging in. See #41985 and #42120.
Important: If you disabled AD authentication while on v2.7.5, don't re-enable it after upgrading until the utility has run. Doing so will cause the reverse migration to fail to clean up the remaining bad data.
We strongly recommend that you directly upgrade to the latest version of Rancher v2.7.x, especially if you're on a broken or partially downgraded Rancher setup after upgrading to v2.7.5. Allow the startup utility to revert the Active Directory changes to restore functionality to your setup.
Even if you're currently on Rancher v2.7.5 and your setup wasn't broken by the Active Directory changes, you should still upgrade to v2.7.6 or later and allow the startup utility to revert the migration.
The reverse migration startup utility applies the relevant changes to Rancher when it finds GUID-based users in Active Directory: the users' data (including the user object, all bindings, and tokens) is updated to use a Distinguished Name as the principalID again. If the LDAP connection fails during execution of the utility, Rancher automatically retries it several times with exponential backoff. Users that still can't be migrated are left behind and reported to the local admin for manual review.
If you need to clean up any missing users following an upgrade to the latest Rancher version, contact support.
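The retry-with-exponential-backoff behavior described for the startup utility can be sketched generically. This is an illustrative pattern, not Rancher's actual implementation; `try_ldap_connect` is a stand-in for the real LDAP check:

```shell
# Retry a flaky operation with exponentially growing delays (1s, 2s, 4s, ...).
# try_ldap_connect is a placeholder that succeeds on the third attempt.
attempts=0
try_ldap_connect() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

max_retries=5
delay=1
ok=false
for i in $(seq 1 "$max_retries"); do
  if try_ldap_connect; then
    ok=true
    break
  fi
  sleep "$delay"          # back off before the next attempt
  delay=$((delay * 2))    # double the delay each round
done
```

If all retries are exhausted, the real utility leaves the unmigrated users behind and reports them for manual review, as described above.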
The `cluster-api` core provider controllers are now run in a pod in the `cattle-provisioning-capi-system` namespace, within the local cluster. These controllers are installed with a Helm chart. Previously, Rancher ran the `cluster-api` controllers in an embedded fashion. This change makes it easier to maintain `cluster-api` versioning. See #41094.
- Restoring a backup may fail with a `capi-webhook` error. Make sure that the chart version used for backups is v102.0.2+up3.1.2, which has `cluster.x-k8s.io/v1alpha4` resources removed from the resourceSet. If you can't use v102.0.2+up3.1.2 for backups, delete all `cluster.x-k8s.io/v1alpha4` resources from the backup tar before using it. See #382.
- Rancher now manages the `rancher-webhook` chart not only in the local cluster but also in all downstream clusters. Note that restoring Rancher from v2.7.5 to an earlier version will result in downstream clusters' webhooks being at the version set by Rancher v2.7.5, which might cause incompatibility issues. Local and downstream webhook versions ideally need to be in sync. See #41730 and #41917.
- Update the `psp.enabled` value in the chart install yaml when you install or upgrade v102.x.y charts on hardened RKE2 clusters. Instructions for updating the value are available. See #41018.
- The Helm Controller now respects the `managedBy` annotation. Project Monitoring V2 required a workaround in its initial release to set `helmProjectOperator.helmController.enabled: false`, since the Helm Controller operated on a cluster-wide level and ignored the `managedBy` annotation. See #39724.
- To trigger cleanup after disabling an auth provider, add the `management.cattle.io/auth-provider-cleanup` annotation with the `unlocked` value to its auth config. See #40378.
- There are changes to the `/v1/counts` endpoint that the UI uses to display resource counts; the UI subscribes to count changes for all resources through a websocket to receive the new counts.
- Cluster names containing invalid characters, such as `.`, were permitted and would result in clusters being provisioned without the necessary Fleet components. See #39248.
- Azure AD integration requires `Directory.Read.All` permissions of type Application. If you configure a different set of permissions, Rancher may not have sufficient privileges to perform some necessary actions within Azure AD. This will cause errors.

Please refer to the README for the latest and stable versions.
Please review our version documentation for more details on versioning and tagging conventions.
Starting in 2.6.0, many of the Rancher Helm charts available in the Apps & Marketplace start with a major version of 100. This was done to prevent simultaneous upstream and Rancher changes from causing conflicting version increments. It also brings us into compliance with SemVer, which is a requirement for newer versions of Helm. You can now see the upstream version of a chart in the build metadata, for example: `100.0.0+up2.1.0`. See #32294.
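Because the upstream version is carried in SemVer build metadata after `+up`, it can be recovered from a chart version string with plain shell parameter expansion:

```shell
# Split a Rancher chart version into its Rancher major-line part and
# the upstream chart version carried in the build metadata.
chart_version="100.0.0+up2.1.0"
rancher_part="${chart_version%%+*}"   # strip the build metadata -> 100.0.0
upstream_part="${chart_version#*+up}" # strip up to "+up"        -> 2.1.0
echo "$rancher_part $upstream_part"
```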
The following legacy features have been removed as of Rancher v2.7.0. The deprecation and removal of these features were announced in previous releases. See #6864.
- Some clusters log the error `implausible joined server for entry`. This requires manually marking the nodes in the cluster with a joined server. See #42856.
- Authentication may intermittently fail with a `404` error on high-availability RKE installations. Single node Docker installations aren't affected. If you refresh the browser window and select Resend, the authentication request will succeed, and you will be able to log in. See #31163.
- Downstream clusters may take some time to return to an `Active` status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #34518 and #42834.
- Some requests may fail with the error `Result Code 32 "No Such Object"`. See #35259.
- Target customizations can cause `Request entity too large` errors when attempting to add a GitHub repo. Only target customizations that modify the Helm chart URL or version are affected. As a workaround, use multiple paths or GitHub repos instead of target customization. See #1650.
- Some CIS scan test cases fail when the `rke-profile-hardened-1.23` or the `rke2-profile-hardened-1.23` profile is used. These failures are expected, as the test cases rely on PSPs, which were removed in Kubernetes v1.25. See #39851.
- The `rootSize` field for AWS EC2 provisioners does not currently take an integer when it should, and an error is thrown. To work around this issue, wrap the EC2 `rootSize` in quotes. See #40128.
- A cluster may remain in an `Updating` state even when it contains nodes in an `Error` state. See #39164.
- The `disableSameOriginCheck` setting controls when credentials are attached to requests. See the documentation and #34584 for more information.
- Fleet is delivered through the `rancher-charts` catalog resource as the `fleet-crd` and `fleet-chart` charts. When the `fleet` chart is upgraded, Fleet will automatically update the agents.
- Multiple `fleet-agent` pods may be created and deleted during initial downstream agent deployment, rather than just one. This resolves itself quickly, but is unintentional behavior. See #33293.
- Because Kubernetes v1.22 drops the apiVersion `apiextensions.k8s.io/v1beta1`, trying to restore an existing backup file into a v1.22+ cluster will fail because the backup file contains CRDs with the apiVersion v1beta1. There are two options to work around this issue: update the default `resourceSet` to collect the CRDs with the apiVersion v1, or update the default `resourceSet` and the client to use the new APIs internally. See the documentation and #34154.
- On SELinux-enabled nodes, run `mkdir -p /var/run/istio-cni && semanage fcontext -a -t container_file_t /var/run/istio-cni && restorecon -v /var/run/istio-cni` before installing or upgrading Istio. See #33291.
- Logs may repeatedly show messages such as "`ingress-nginx/nginx-ingress-controller`" and "Updating service `frontend` with public endpoints". Ingresses and clusters are functional and active, and the log messages resolve eventually. See #35798.
- In some cases, `spec.rkeConfig.machineGlobalConfig.profile` is set to `null`, which is an invalid configuration. See #8480.
- You may see the error `Cluster health check failed`. During an upgrade, this is a benign error and will self-resolve. It's caused by the Kubernetes API server becoming temporarily unavailable as it is being upgraded within your cluster. See #41012.
- To reset a setting, run `kubectl edit setting <setting-name>`, then set the value and source fields to `""`, and re-deploy Rancher. See #37998.
- When you add a port to an existing Deployment created in the legacy UI, the new port may not be created on your first attempt to save it, and the Service Type field will display `Do not create a service`. Change this to ClusterIP and upon saving, the new port will be created successfully during this subsequent attempt. See #4280.

Important: Review the Install/Upgrade notes before upgrading to any Rancher version.
Settings that Rancher no longer recognizes are given the `cattle.io/unknown` label. You can list these settings with the command `kubectl get settings -l 'cattle.io/unknown==true'`. In Rancher v2.9 and later, these settings will be removed instead. See #43992.

The embedded Cluster API webhook is removed from the Rancher webhook and can no longer be installed from the webhook chart. It has not been used as of Rancher v2.7.7, when it was migrated to a separate Pod. See #44619.
- If you're installing Rancher for the first time, your environment must fulfill the installation requirements.
- Make sure `NO_PROXY` is configured correctly. See the documentation and issue #2725.
- Supply a `registries.yaml` file to the `docker run` command, as shown in the K3s documentation. If the registry has certificates, then you'll also need to supply those. See #28969.
- Docker installs require the `privileged` flag. See the documentation.

Please refer to the README for the latest and stable Rancher versions.
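For the private-registry note above, here's a minimal sketch of preparing a `registries.yaml` and supplying it to a Docker-based Rancher install. The registry hostname, certificate path, and Rancher image tag are placeholder assumptions:

```shell
# Minimal registries.yaml for a private registry
# (registry.example.com and the CA path are placeholders).
cat > /tmp/registries.yaml <<'EOF'
mirrors:
  registry.example.com:
    endpoint:
      - "https://registry.example.com"
configs:
  registry.example.com:
    tls:
      ca_file: /etc/rancher/ssl/ca.pem
EOF

# The file (plus any registry certificates) is then supplied to the
# container via a volume mount; printed here rather than executed:
echo docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 --privileged \
  -v /tmp/registries.yaml:/etc/rancher/k3s/registries.yaml \
  rancher/rancher:v2.8.3
```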
Please review our version documentation for more details on versioning and tagging conventions.
In Rancher v2.6.0 and later, in the Apps & Marketplace UI, many Rancher Helm charts are named with a major version that starts with 100. This avoids simultaneous upstream changes and Rancher changes from causing conflicting version increments. This also complies with semantic versioning (SemVer), which is a requirement for Helm. You can see the upstream version number of a chart in the build metadata, for example: `100.0.0+up2.1.0`. See #32294.
Dual-stack and IPv6-only support for RKE1 clusters using the Flannel CNI has been experimental since v1.23.x. See the upstream Kubernetes docs. Dual-stack is not currently supported on Windows. See #165.
In June 2023, Microsoft deprecated the Azure AD Graph API that Rancher had been using for authentication via Azure AD. When updating Rancher, update the configuration to make sure that users can still use Rancher with Azure AD. See the documentation and issue #29306 for details.
Apps functionality in the cluster manager has been deprecated as of the Rancher v2.7 line. This functionality has been replaced by the Apps & Marketplace section of the Rancher UI.
Also, `rancher-external-dns` and `rancher-global-dns` have been deprecated as of the Rancher v2.7 line.
The following legacy features have been removed as of Rancher v2.7.0. The deprecation and removal of these features were announced in previous releases. See #6864.
- Cluster names containing invalid characters, such as `.`, were permitted and would result in clusters being provisioned without the necessary Fleet components. See #39248.
- Some clusters log the error `implausible joined server for entry`. This requires manually marking the nodes in the cluster with a joined server. See #42856.
- The `cluster-api` core provider controllers run in a pod in the `cattle-provisioning-capi-system` namespace, within the local cluster. These controllers are installed with a Helm chart. Previously, Rancher ran the `cluster-api` controllers in an embedded fashion. This change makes it easier to maintain `cluster-api` versioning. See #41094.
- The `restricted-admin` role is being deprecated in favor of a more flexible global role configuration, which is now available for use cases beyond the `restricted-admin` role alone. If you want to replicate the permissions given through this role, use the new `inheritedClusterRoles` feature to create a custom global role. A custom global role, like the `restricted-admin` role, grants permissions on all downstream clusters. See #42462. Given its deprecation, the `restricted-admin` role will continue to be included in future builds of Rancher through the v2.8.x and v2.9.x release lines. However, in accordance with the CVSS standard, only security issues scored as critical will be backported and fixed in the `restricted-admin` role until it is completely removed from Rancher.
- The `rancher/rdns-server` repository is now archived. Reverse DNS is already disabled by default.
- The Rancher CLI configuration file `~/.rancher/cli2.json` previously had permissions set to `0644`. Although `0644` would usually indicate that all users have read access to the file, the parent directory would block users' access. New Rancher CLI configuration files will only be readable by the owner (`0600`). Invoking the CLI will trigger a warning if old configuration files are world-readable or group-readable. See #42838.
- Update the `psp.enabled` value in the chart install yaml when you install or upgrade v102.x.y charts on hardened RKE2 clusters. Instructions for updating the value are available. See #41018.
- The `kubeconfig-token-ttl-minutes` setting has been replaced by the setting `kubeconfig-default-token-ttl-minutes`, and is no longer available in the UI. See #38535.
- To trigger cleanup after disabling an auth provider, add the `management.cattle.io/auth-provider-cleanup` annotation with the `unlocked` value to its auth config. See #40378.
- GlobalRoles now support the `bind` and `escalate` verbs. Users who have `*` set on GlobalRoles will now have both of these verbs, and could potentially use them to escalate privileges in Rancher v2.8.0 and later. You should review current custom GlobalRoles, especially cases where `bind`, `escalate`, or `*` are granted, before you upgrade.
- Rancher now manages the `rancher-webhook` chart not only in the local cluster but also in all downstream clusters. Restoring Rancher from v2.7.5 to an earlier version will result in downstream clusters' webhooks being at the version set by Rancher v2.7.5, which might cause incompatibility issues. Local and downstream webhook versions need to be in sync. See #41730 and #41917.
- Legacy code for the following v1 charts is no longer available in the `rancher/system-charts` repository: `rancher-cis-benchmark`, `rancher-gatekeeper-operator`, `rancher-istio`, `rancher-logging`, and `rancher-monitoring`. The code for these charts will remain available for previous versions of Rancher.
- Helm v2 support is deprecated as of the Rancher v2.7 line and will be removed in Rancher v2.9.
- Azure AD integration requires `Directory.Read.All` permissions of type `Application`. If you configure a different set of permissions, Rancher may not have sufficient privileges to perform some necessary actions within Azure AD, causing errors.
- A `priorityClass` is available in the Rancher pod and its feature charts. Previously, pods critical to running Rancher didn't use a priority class. This could cause a cluster with limited resources to evict Rancher pods before other noncritical pods. See #37927.
- Restoring a backup may fail with a `capi-webhook` error. Make sure that the chart version used for backups is v102.0.2+up3.1.2, which has `cluster.x-k8s.io/v1alpha4` resources removed from the resourceSet. If you can't use v102.0.2+up3.1.2 for backups, delete all `cluster.x-k8s.io/v1alpha4` resources from the backup tar before using it. See #382.
- There are changes to the `/v1/counts` endpoint that the UI uses to display resource counts; the UI subscribes to count changes for all resources through a websocket to receive the new counts.
- The Helm Controller now respects the `managedBy` annotation. In its initial release, Project Monitoring V2 required a workaround to set `helmProjectOperator.helmController.enabled: false`, since the Helm Controller operated on a cluster-wide level and ignored the `managedBy` annotation. See #39724.
- Not all cluster tools can be installed on a hardened cluster.
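For reference, the initial-release Project Monitoring V2 workaround mentioned above corresponds to a chart values fragment like this (key path taken from the note; all surrounding values omitted):

```yaml
# Project Monitoring V2 initial-release workaround: disable the chart's
# embedded Helm Controller so it doesn't conflict with the cluster-wide
# controller that ignored the managedBy annotation.
helmProjectOperator:
  helmController:
    enabled: false
```

On versions where the Helm Controller respects `managedBy`, this override is no longer needed.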
Rancher v2.7.2:

- You may see the error `Cluster health check failed`. This is a benign error that occurs as part of the upgrade process and will self-resolve. It's caused by the Kubernetes API server becoming temporarily unavailable as it is being upgraded within your cluster. See #41012.
- To reset a setting, run `kubectl edit setting <setting-name>`, then set the value and source fields to `""`, and re-deploy Rancher. See #37998.

Rancher v2.6.1:

- When adding a port of type `ClusterIP` to an existing Deployment created using the legacy UI, the new port won't be created upon your first attempt to save the new port. You must repeat the procedure to add the port again. The Service Type field will display `Do not create a service` during the second procedure. Change this to `ClusterIP` and save to create the new port. See #4280.
- `kubectl -n kube-system get configmap cattle-controller`
- Some CIS scan test cases fail when the `rke-profile-hardened-1.23` or the `rke2-profile-hardened-1.23` profile is used. These failures are expected, as the test cases rely on PSPs, which were removed in Kubernetes v1.25. See #39851.
- Downstream clusters may take some time to return to an `Active` status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #34518 and #42834.
- Clusters may become stuck in an `Updating` state, which causes cluster creation to hang. See #41606.
- In some cases, `spec.rkeConfig.machineGlobalConfig.profile` is set to `null`, which is an invalid configuration. See #8480.
- The `rootSize` field for AWS EC2 provisioners doesn't take an integer when it should, and an error is thrown. As a workaround, wrap the EC2 `rootSize` in quotes. See #40128.
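The `rootSize` quoting workaround looks like this in a machine config fragment (the `region` and `instanceType` values, and the surrounding key names other than `rootSize`, are illustrative):

```yaml
# Workaround for #40128: pass the EC2 rootSize as a quoted string,
# not a bare integer, so it is accepted by the provisioner.
machineConfig:
  region: us-east-1        # illustrative value
  instanceType: t3a.medium # illustrative value
  rootSize: "16"           # quoted; an unquoted 16 triggers the error
```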
- Downstream clusters may take some time to return to an `Active` status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #34518 and #42834.
- Clusters may remain in an `Updating` state even when they contain nodes in an `Error` state. See #39164.
- … `foo` key. See #8563.
- Logs may repeatedly show messages such as "`ingress-nginx/nginx-ingress-controller`" and "Updating service `frontend` with public endpoints". Ingresses and clusters are functional and active, and the log messages resolve eventually. See #35798.
- If you have Windows clusters with `win_prefix_path` set, you must deploy Rancher Wins Upgrader to restart wins on the hosts. This will allow Rancher to start collecting metrics in Prometheus. See #32535.
- The default PSA template, `rancher-restricted`, doesn't include `cattle-provisioning-capi-system` and `cattle-fleet-local-system` under the `exemptions.namespaces` list. As a workaround, manually update `rancher-restricted` to add `cattle-provisioning-capi-system` and `cattle-fleet-local-system` under the `exemptions.namespaces` list. See #43150.
- The `securityContext` section is missing when a new workload is created. This prevents pods from starting when Pod Security Policy (PSP) support is enabled. See #4815.
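As a workaround for the missing `securityContext` above, the section can be added to the workload manifest by hand. A minimal illustrative sketch (names, image, and values are placeholders):

```yaml
# Pod-level securityContext added manually so the workload can pass
# PSP admission; adjust the values to match your policy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-workload
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-workload
  template:
    metadata:
      labels:
        app: example-workload
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
      containers:
        - name: app
          image: nginx:1.25
```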
- Authentication may intermittently fail with a `404` error on high-availability RKE installations. Single node Docker installations aren't affected. If you refresh the browser window and select Resend, the authentication request will succeed, and you will be able to log in. See #31163.
- Some requests may fail with the error `Result Code 32 "No Such Object"`. See #35259.
- Target customizations can cause `Request entity too large` errors when attempting to add a GitHub repo. Only target customizations that modify the Helm chart URL or version are affected. As a workaround, use multiple paths or GitHub repos instead of target customization. See #1650.
- Multiple `fleet-agent` pods may be created and deleted during initial downstream agent deployment, rather than just one. This resolves itself quickly, but is unintentional behavior. See #33293.
- Some CIS scan test cases fail when the `rke-profile-hardened-1.23` or the `rke2-profile-hardened-1.23` profile is used. These failures are expected, as the test cases rely on PSPs, which were removed in Kubernetes v1.25. See #39851.

When migrating to a cluster with the Rancher Backup feature, the server-url cannot be changed to a different location. It must continue to use the same URL.
Rancher v2.7.7:

- Downstream clusters may take some time to return to an `Active` status after a migration. If you see that a downstream cluster is still updating or in an error state immediately after a migration, please let it attempt to resolve itself. This might take up to an hour to complete. See #34518 and #42834.

Rancher v2.6.3:

- Because Kubernetes v1.22 drops the apiVersion `apiextensions.k8s.io/v1beta1`, trying to restore an existing backup file into a v1.22+ cluster will fail. The backup file contains CRDs with the apiVersion `v1beta1`. There are two workarounds for this issue: update the default `resourceSet` to collect the CRDs with the apiVersion v1, or update the default `resourceSet` and the client to use the new APIs internally. See the documentation and #34154.
- Istio v1.12 and below do not work on Kubernetes v1.23 clusters. To use the Istio charts, please do not update to Kubernetes v1.23 until the next charts' release.

Rancher v2.6.4:

- On SELinux-enabled nodes, run `mkdir -p /var/run/istio-cni && semanage fcontext -a -t container_file_t /var/run/istio-cni && restorecon -v /var/run/istio-cni` before installing or upgrading Istio. See #33291.

Rancher v2.6.1:

- Read-only project permissions and the View Monitoring role aren't sufficient to view links on the Monitoring index page. Users won't be able to see monitoring links. As a workaround, you can perform the following steps:
  - Add the `cattle-monitoring-system` namespace into the project.
  - Assign the user the View Monitoring (`monitoring-ui-view`) role, and `read-only` or higher permissions on at least one project in the cluster.

  See #4466.
- If you have Windows clusters with `win_prefix_path` set, you must deploy Rancher Wins Upgrader to restart wins on the hosts. This will allow Rancher to start collecting metrics in Prometheus. See #32535.

Release candidate images and component versions:

- rancher/rancher v2.8.3-rc8, rancher/rancher-agent v2.8.3-rc8
  - Kubernetes versions: v1.25.16-rancher2-3, v1.26.14-rancher1-1, v1.27.11-rancher1-1, v1.28.7-rancher1-1
- rancher/rancher v2.8.3-rc7, rancher/rancher-agent v2.8.3-rc7
  - Kubernetes versions: v1.25.16-rancher2-3, v1.26.14-rancher1-1, v1.27.11-rancher1-1, v1.28.7-rancher1-1
  - Kubernetes versions: v1.25.16-rancher2-3, v1.26.14-rancher1-1, v1.27.11-rancher1-1
- rancher/rancher v2.8.3-rc6, rancher/rancher-agent v2.8.3-rc6, rancher/shell v0.1.23-rc3, rancher/system-agent v0.3.6-rc2-suc
  - DASHBOARD_UI_VERSION v2.8.3-rc1, SYSTEM_AGENT_VERSION v0.3.6-rc2, UI_VERSION 2.8.3-rc1, DYNAMICLISTENER v0.4.0-rc2, RKE v1.5.7-rc5
  - Kubernetes versions: v1.25.16-rancher2-3, v1.26.14-rancher1-1, v1.27.11-rancher1-1, v1.28.7-rancher1-1
  - Kubernetes versions: v1.25.16-rancher2-3, v1.26.14-rancher1-1, v1.27.11-rancher1-1
  - Kubernetes versions: v1.23.16-rancher2-3, v1.24.17-rancher1-1, v1.25.16-rancher2-3, v1.26.14-rancher1-1, v1.27.11-rancher1-1
  - Kubernetes versions: v1.23.16-rancher2-3, v1.24.17-rancher1-1, v1.25.16-rancher2-3, v1.26.14-rancher1-1, v1.27.11-rancher1-1