
📖 Update v1.11 support matrix #12131

Merged

67 changes: 35 additions & 32 deletions docs/book/src/reference/versions.md
@@ -42,7 +42,7 @@ For the sake of this document, the most important artifacts included in a Cluste
- The Kubeadm Control Plane provider image
- The clusterctl binary

The Cluster API team will release a new Cluster API version approximately every four months (3 releases each year).
The Cluster API team will release a new Cluster API version approximately every four months (three releases each year).
See [release cycle](https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-cycle.md) and [release calendars](https://github.com/kubernetes-sigs/cluster-api/tree/main/docs/release/releases) for more details about Cluster API releases management.

The Cluster API team actively supports the latest two minor releases (N, N-1); support in this context means that we:
@@ -67,10 +67,11 @@ The table below documents support matrix for Cluster API versions (versions olde

| Minor Release | Status | Supported Until (including maintenance mode) |
|---------------|-------------------------|---------------------------------------------------------------------------------------------|
| v1.11.x | Under development | |
| v1.12.x | Under development | |
**Member:** Is this correct? We just released v1.10.x and are now working on v1.11

**Member:** I'm not sure if we have to update this table at all

**Member Author:** @sbueringer my idea was to get this page ready for when we release, so we do not forget (see #12124). But happy to roll back if you prefer.

**Member:** I see your point. It's just that when folks look at this doc on main, it's very confusing. Do we know how we did this in the past? (I would expect we have some tracking in our release tasks.)

**Member Author:** I would say: just as folks looking at code on main get the in-progress implementation for the next release, they also get in-progress docs for the next release.

To your point, as far as I remember, it is our common practice to work on the book on main, documenting what will become true when we release.

We do this for every new feature, for every breaking change, etc., and this has already started happening in this release: we have a page where we are documenting changes for providers, we have already changed the contract documentation to align to v1beta2, etc. (e.g. this page already states v1beta2 as the current API and v1beta1 as deprecated).

**Member Author:** @sbueringer, as discussed I have dropped the line for the v1.12 release under development.

**Member:** Thx!

| v1.11.x | Standard support period | in maintenance mode when v1.13.0 will be released, EOL when v1.14.0 will be released |
| v1.10.x | Standard support period | in maintenance mode when v1.12.0 will be released, EOL when v1.13.0 will be released |
| v1.9.x | Standard support period | in maintenance mode when v1.11.0 will be released, EOL when v1.12.0 will be released |
| v1.8.x | Maintenance mode | Maintenance mode since 2025-04-22 - v1.10.0 release date, EOL when v1.11.0 will be released |
| v1.9.x | Maintenance mode | Maintenance mode since 2025-08-12 - v1.11.0 release date, EOL when v1.12.0 will be released |
| v1.8.x | EOL | EOL since 2025-08-12 - v1.11.0 release date |
| v1.7.x | EOL | EOL since 2025-04-22 - v1.10.0 release date |
| v1.6.x | EOL | EOL since 2024-12-10 - v1.9.0 release date |
| v1.5.x | EOL | EOL since 2024-08-12 - v1.8.0 release date |
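
As a quick illustration of the N/N-1 policy described above, here is a minimal Go sketch (a hypothetical helper, not code from the project) that reproduces the status column of this table:

```go
package main

import "fmt"

// statusFor classifies a Cluster API minor release relative to the newest
// released minor N, following the policy above: N and N-1 get standard
// support, N-2 is in maintenance mode, anything older is EOL.
func statusFor(minor, newest int) string {
	switch {
	case minor > newest:
		return "Under development"
	case minor >= newest-1:
		return "Standard support period"
	case minor == newest-2:
		return "Maintenance mode"
	default:
		return "EOL"
	}
}

func main() {
	const newest = 11 // v1.11 is the newest minor in the table above
	for minor := 8; minor <= 12; minor++ {
		fmt.Printf("v1.%d.x: %s\n", minor, statusFor(minor, newest))
	}
}
```

With `newest = 11` this prints v1.11.x and v1.10.x as standard support, v1.9.x as maintenance mode, and v1.8.x as EOL, matching the updated rows.
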
@@ -150,14 +151,14 @@ stopping and restarting the kube-controller-manager.

#### Cluster API release vs contract versions

Each Cluster API contract version defines a set of rules a provider is expected to comply with in order to interact with a specific Cluster API release.
Each Cluster API contract version defines a set of rules a provider is expected to comply with to interact with a specific Cluster API release.
Those rules can be in the form of CustomResourceDefinition (CRD) fields and/or expected behaviors to be implemented.
See [provider contracts](../developer/providers/contracts/overview.md)

Each Cluster API release supports one contract version, and by convention the supported contract version matches
the newest API version in the same Cluster API release.

A contract version might be temporarily compatible with older contract versions to ease transition of providers to
A contract version might be temporarily compatible with older contract versions to ease the transition of providers to
a new supported version; compatibility for older contract versions will be dropped when the older contract version is EOL.
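
Providers advertise which contract their CRDs comply with via `cluster.x-k8s.io/<contract version>` labels on the CRDs (see the provider contract docs linked above). Below is a simplified, hypothetical Go sketch of resolving which provider API version to use for a given contract, assuming the label value lists compliant versions oldest to newest, underscore-separated; this is not clusterctl's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// pickAPIVersion resolves which API version of a provider to use for a given
// contract, based on the cluster.x-k8s.io/<contract> labels set on its CRDs.
func pickAPIVersion(crdLabels map[string]string, contract string) (string, bool) {
	value, ok := crdLabels["cluster.x-k8s.io/"+contract]
	if !ok {
		return "", false
	}
	// Assumption: the label value lists compliant versions oldest to newest,
	// underscore-separated (e.g. "v1beta1_v1beta2"); pick the newest.
	versions := strings.Split(value, "_")
	return versions[len(versions)-1], true
}

func main() {
	labels := map[string]string{
		"cluster.x-k8s.io/v1beta2": "v1beta1_v1beta2", // hypothetical provider CRD
	}
	if v, ok := pickAPIVersion(labels, "v1beta2"); ok {
		fmt.Println("use API version:", v) // prints "v1beta2"
	}
}
```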

<aside class="note">
@@ -181,12 +182,12 @@ providers of an older contract version)

</aside>

| Contract Version | Compatible with contract versions | Status | Supported Until |
|------------------|-----------------------------------|-------------|---------------------------------------------------------------------------------------------|
| v1beta2 | v1beta1 (temporarily) | Supported | After a newer API contract will be released |
| v1beta1 | | Deprecated | Deprecated since CAPI v1.11; in v1.14, Aug 26 v1beta1 will no more be considered compatible |
| v1alpha4 | | EOL | EOL since 2023-12-05 - v1.6.0 release date |
| v1alpha3 | | EOL | EOL since 2023-07-25 - v1.5.0 release date |
| Contract Version | Compatible with contract versions | Status | Supported Until |
|------------------|-----------------------------------|-------------|-------------------------------------------------------------------------------------------------------------------------------|
| v1beta2 | v1beta1 (temporarily) | Supported | After a newer API contract will be released |
| v1beta1 | | Deprecated | Deprecated since CAPI v1.11; in v1.14, Aug 26 v1beta2 will drop compatibility with v1beta1 and v1beta1 will be considered EOL |
| v1alpha4 | | EOL | EOL since 2023-12-05 - v1.6.0 release date |
| v1alpha3 | | EOL | EOL since 2023-07-25 - v1.5.0 release date |

See [11920](https://github.com/kubernetes-sigs/cluster-api/issues/11920) for details about the v1beta1 removal plan.

@@ -213,13 +214,13 @@ When a new Cluster API release is cut, we will document the Kubernetes version c
has been tested with in the [table](#supported-versions-matrix-by-provider-or-component) below.

Each Cluster API minor release supports (when it's initially created):
* 4 Kubernetes minor releases for the management cluster (N - N-3)
* 6 Kubernetes minor releases for the workload cluster (N - N-5)
* Four Kubernetes minor releases for the management cluster (N - N-3)
* Six Kubernetes minor releases for the workload cluster (N - N-5)

When a new Kubernetes minor release is available, the Cluster API team will try to support it in an upcoming Cluster API
patch release, thus extending the support matrix for the latest supported Cluster API minor release to:
* 5 Kubernetes minor releases for the management cluster (N - N-4)
* 7 Kubernetes minor releases for the workload cluster (N - N-6)
* Five Kubernetes minor releases for the management cluster (N - N-4)
* Seven Kubernetes minor releases for the workload cluster (N - N-6)

For example, Cluster API v1.7.0 would support the following Kubernetes versions:
* v1.26.x to v1.29.x for the management cluster
@@ -233,7 +234,7 @@ For example, Cluster API v1.7.0 would support the following Kubernetes versions:
Cluster API support for older Kubernetes version is not a replacement/alternative for upstream Kubernetes support policies!

Support for versions of Kubernetes which itself are out of support is limited to "Cluster API can start a Cluster with this Kubernetes version"
and "Cluster API can upgrade to the next Kubernetes version"; it does not include any extended support to Kubernetes itself.
and "Cluster API can upgrade to the next Kubernetes version"; it does not include any extended support to Kubernetes itself.

</aside>
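
The bullet lists above boil down to simple arithmetic. A minimal Go sketch (a hypothetical helper, not project code), using the v1.7.0 example; the printed workload lower bound follows from the N-5 rule:

```go
package main

import "fmt"

// supportedRanges returns the Kubernetes minor versions a Cluster API minor
// release supports, given the newest Kubernetes minor it has been tested
// with. Initially the ranges are N-3..N (management) and N-5..N (workload);
// once a patch release extends support they become N-4..N and N-6..N.
func supportedRanges(newestMinor int, extended bool) (mgmt, workload string) {
	mgmtBack, wlBack := 3, 5
	if extended {
		mgmtBack, wlBack = 4, 6
	}
	mgmt = fmt.Sprintf("v1.%d.x - v1.%d.x", newestMinor-mgmtBack, newestMinor)
	workload = fmt.Sprintf("v1.%d.x - v1.%d.x", newestMinor-wlBack, newestMinor)
	return mgmt, workload
}

func main() {
	// Cluster API v1.7.0 initially supported up to Kubernetes v1.29:
	mgmt, wl := supportedRanges(29, false)
	fmt.Println("management:", mgmt) // v1.26.x - v1.29.x
	fmt.Println("workload:  ", wl)   // v1.24.x - v1.29.x
}
```
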

@@ -265,18 +266,18 @@ In some cases, also Cluster API and/or Cluster API providers are defining additi
The following table defines the support matrix for the Cluster API core provider.
See [Cluster API release support](#cluster-api-release-support) and [Kubernetes versions support](#kubernetes-versions-support).

| | v1.8, _Maintenance Mode_ | v1.9 | v1.10 |
| | v1.9, _Maintenance Mode_ | v1.10 | v1.11 |
|------------------|--------------------------|-------------------|-------------------|
| Kubernetes v1.24 | | | |
| Kubernetes v1.25 | ✓ (only workload) | | |
| Kubernetes v1.26 | ✓ (only workload) | ✓ (only workload) | |
| Kubernetes v1.27 | ✓ | ✓ (only workload) | ✓ (only workload) |
| Kubernetes v1.28 | ✓ | ✓ | ✓ (only workload) |
| Kubernetes v1.29 | ✓ | ✓ | ✓ |
| Kubernetes v1.25 | | | |
| Kubernetes v1.26 | ✓ (only workload) | | |
| Kubernetes v1.27 | ✓ (only workload) | ✓ (only workload) | |
| Kubernetes v1.28 | ✓ | ✓ (only workload) | ✓ (only workload) |
| Kubernetes v1.29 | ✓ | ✓ | ✓ (only workload) |
| Kubernetes v1.30 | ✓ | ✓ | ✓ |
| Kubernetes v1.31 | ✓ >= v1.8.1 | ✓ | ✓ |
| Kubernetes v1.32 | | ✓ >= v1.9.1 | ✓ |
| Kubernetes v1.33 | | | ✓ >= v1.10.1 |
| Kubernetes v1.31 | ✓ | ✓ | ✓ |
| Kubernetes v1.32 | ✓ >= v1.9.1 | ✓ | ✓ |
| Kubernetes v1.33 | | ✓ >= v1.10.1 | ✓ |
**Contributor:** This is not for this PR, as 1.33 support was already added for v1.10.1... but I'm curious why we aren't pinning to the same Go version for 1.33 if we're introducing support for it?

Ref:

Compare to:

Updating to Go 1.24 would obviously include updating a bunch of core libraries (apimachinery, etc.), but I was under the impression that "CAPI support" for a particular version would include that affinity.

**Contributor:**

> why we aren't pinning to the same go version for 1.33...?

It's a good point; CAPI usually keeps those in sync.

Switching to Go 1.24 is in progress: see #12128, which may be blocked by #12088.

**Member Author (@fabriziopandini), May 5, 2025:**

> This is not for this PR, as 1.33 support was already added for v1.10.1

That's correct. This PR is about the versions that users can use for management or workload clusters; it isn't about the version we use in our codebase.

> I'm curious why we aren't pinning to the same go version for 1.33 if we're introducing support for it?

With respect to the version we use in our codebase, the process we usually follow is described in https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/release/release-cycle.md

TL;DR: we usually use the same Go version as the K8s version we are importing, which we keep in sync with the K8s version imported by the version of controller-runtime we are using (CAPI 1.10 imports CR 0.20, CR 0.20 imports K8s v1.32, which uses Go 1.23, so CAPI does too).

Additional note: the CR minor bump usually happens on main only, not on release branches, so CAPI 1.10 will most probably continue to use Go 1.23.
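
As an illustration of that alignment, a hypothetical provider following the same rule would pin roughly like this in its `go.mod` (minor versions taken from the comment above; module name and patch versions are illustrative, not an exact copy of Cluster API's own `go.mod`):

```
module example.com/my-provider

go 1.23

require (
	k8s.io/api v0.32.3
	k8s.io/apimachinery v0.32.3
	sigs.k8s.io/cluster-api v1.10.0
	sigs.k8s.io/controller-runtime v0.20.4
)
```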

**Member (@sbueringer), May 5, 2025:**

A few small additions:

- CR v0.21, which would use k8s.io/* v0.33 dependencies, is not released yet even as of today. Even if it were released slightly before the CAPI release, I don't want to pick up a new CR (and also k8s.io/* & Go minor version) with basically zero soak time in CI. There is also no technical reason why we have to use the same k8s.io/* versions as the highest workload cluster version we manage or the highest management cluster version we run on.
- If we were to bump the CR / k8s.io/* / Go minor version after a .0 release, this would be a breaking change for everyone importing our CAPI Go module, so we don't do that.
  - Note: CAPI .0 releases are usually cut before k8s.io .0 releases, i.e. k8s.io/* v0.33.0 was not available at the time of CAPI v1.10.0.


See also [Kubernetes version specific notes](#kubernetes-version-specific-notes).

@@ -324,7 +325,7 @@ The Kubeadm Control Plane requires the Kubeadm Bootstrap provider of the same ve

#### Etcd API Support

The Kubeadm Control Plane provider communicates with the API server and etcd members of every Workload Cluster whose control plane it owns.
All the Cluster API Kubeadm Control Plane providers currently supported are using [etcd v3 API](https://etcd.io/docs/v3.2/rfc/v3api/) when communicating with etcd.
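
For illustration, here is a minimal Go sketch of talking to an etcd member over the etcd v3 API using `go.etcd.io/etcd/client/v3`. The endpoint is hypothetical, and this is not the Kubeadm Control Plane provider's actual code (which also tunnels through the workload cluster and configures TLS):

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Connect to a (hypothetical) etcd member endpoint.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Status is part of the etcd v3 Maintenance API; it reports the member's
	// version, DB size, and current leader.
	resp, err := cli.Status(ctx, "https://127.0.0.1:2379")
	if err != nil {
		panic(err)
	}
	fmt.Println("etcd version:", resp.Version)
}
```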

#### CoreDNS Support
@@ -353,32 +354,34 @@ See [corefile-migration](https://github.com/coredns/corefile-migration)
Cluster API has a vibrant ecosystem of awesome providers maintained by independent teams and hosted outside of
the Cluster API [GitHub repository](https://github.com/kubernetes-sigs/cluster-api/).

To understand the list of supported version of a specific provider, its own Kubernetes support matrix, supported API versions,
To understand the list of supported versions for a specific provider, its own Kubernetes support matrix, supported API versions,
supported contract version and specific skip upgrade rules, please see its documentation. Please refer to [providers list](providers.md)

In general, if a provider version M says it is compatible with Cluster API version N, then it MUST be compatible
with a subset of the Kubernetes versions supported by Cluster API version N.

### clusterctl

It is strongly recommended to always use the latest patch version of [clusterctl](../clusterctl/overview.md), in order to get all the fixes/latest changes.
It is strongly recommended to always use the latest patch version of [clusterctl](../clusterctl/overview.md) to get all the fixes/latest changes.

In case of upgrades, clusterctl should be upgraded first and then used to upgrade all the other components.

## Annexes

### Kubernetes version Support and Cluster API deployment model

The most common deployment model for Cluster API assumes Core provider, Kubeadm Bootstrap provider, and Kubeadm Control Plane provider
and at least one infrastructure provider running on the Management Cluster, all managing the lifecycle
The most common deployment model for Cluster API assumes all providers are running on the Management Cluster and managing the lifecycle
of a set of _separate_ Workload clusters.

"All providers" includes: the core provider, a Bootstrap provider, a Control Plane provider (optional),
and at least one infrastructure provider.

![Management/Workload Separate Clusters](../images/management-workload-separate-clusters.png)

In this scenario, the Kubernetes version of the Management and Workload Clusters are allowed to be different.
Additionally, Management Clusters and Workload Clusters can be upgraded independently and in any order.

In another deployment model for Cluster API, the Cluster API providers are used not only to managing the
In another deployment model for Cluster API, the Cluster API providers are used not only to manage the
lifecycle of _separate_ Workload clusters, but also to manage the lifecycle of the Management cluster itself.
This cluster is also referred to as a "self-hosted" cluster.
