Release 1.2.0 #102


Merged: 3 commits into kubernetes-csi:master on Oct 18, 2019

Conversation

pohly (Contributor) commented Oct 15, 2019

What type of PR is this?
/kind feature

What this PR does / why we need it:

This is in preparation for the v1.2.0 release.

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

updated sidecars to latest stable releases

@k8s-ci-robot added the release-note, kind/feature, and cncf-cla: yes labels on Oct 15, 2019
@k8s-ci-robot added the size/M label (30-99 lines, ignoring generated files) on Oct 15, 2019
@pohly force-pushed the release-1.2.0 branch 2 times, most recently from 789e535 to 199dabd on October 15, 2019 15:16
@@ -40,7 +40,7 @@ spec:
       serviceAccountName: csi-provisioner
       containers:
         - name: csi-provisioner
-          image: quay.io/k8scsi/csi-provisioner:v1.3.0
+          image: quay.io/k8scsi/csi-provisioner:v1.4.0
pohly (Contributor Author):

I am unsure which deployments should be using 1.4.0. It's marked as compatible with Kubernetes >= 1.13 (https://github.com/kubernetes-csi/external-provisioner/releases/tag/v1.4.0). Do we still need any of the older releases? What for?

msau42 (Collaborator):

It could technically use 1.4.0. The main argument I can see for keeping older versions of sidecars is so that they still get CI testing.

pohly (Contributor Author):

That depends on whether anyone really needs to use those older versions. If that is not the case, then we don't need to support and test anything besides 1.4.0.

See also my questions in kubernetes-csi/docs#212.

pohly (Contributor Author):

BTW, the inverse is also true: because we claim that external-provisioner is compatible with Kubernetes >= 1.13, we need to test that, in particular because it is a combination that hasn't been tested before.

IMHO the "kubernetes-1.14" deployment should cover the recommended set of sidecars for that Kubernetes release. I'm just not sure at the moment what the recommendation is 🤷‍♂️

If we want to support also older releases of the sidecars, then we need additional deployments and Prow jobs.

msau42 (Collaborator):

The recommended versions should be documented in the sidecar pages. Let's stick with that for now.

Agree that we have a testing gap with what we claim to support. We can brainstorm ideas on how to improve the coverage in a separate issue for now.

pohly (Contributor Author):

> The recommended versions should be documented in the sidecar pages.

You mean the "recommended K8s version" column? That's not quite the same.

For example, external-provisioner 1.4.0 may have some features that depend on 1.16 (hypothetically speaking, I have no idea whether that is the case). According to https://kubernetes-csi.github.io/docs/kubernetes-compatibility.html that means its "recommended K8s version" will list 1.16, because the "recommended Kubernetes version specifies the lowest Kubernetes version needed where all features of a sidecar will function correctly". That column basically answers the question "I want external-provisioner 1.4.0, which Kubernetes version will work best?"

The other question is "I have Kubernetes 1.15, which sidecar versions should I use for that?"

For example, suppose 1.4.0 can fully replace 1.3.1 on Kubernetes 1.15. Does that make it the recommended release for 1.15? This is not documented. The closest we get to documenting that are the reference deployments in this repo.

msau42 (Collaborator):

I think we should make the answers to both questions the same, for simplicity.

pohly (Contributor Author):

As discussed in the CI standup meeting, the deployments should use the sidecar versions on those (and only those) Kubernetes releases which are "recommended", i.e. we don't attempt to replace older sidecar versions if there have been API changes, even if that would technically work - we simply don't test it and thus don't support it. Instead, older sidecar releases get maintained for older Kubernetes releases.

https://kubernetes-csi.github.io/docs/livenessprobe.html doesn't list "recommended Kubernetes". I made the assumption that there have been no API changes and thus all Kubernetes releases should be using the latest (1.1.0).

Same with https://kubernetes-csi.github.io/docs/node-driver-registrar.html (latest v1.2.0).

https://kubernetes-csi.github.io/docs/external-snapshotter.html lists 1.14 for 1.2.2, so it gets used on 1.14, 1.15 and 1.16.
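The per-release reasoning above can be summarized as a sketch. The matrix below is purely illustrative, assembled only from the statements in this thread, not from any authoritative compatibility document:

```yaml
# Illustrative sidecar matrix for the reference deployments (assumption:
# derived from this discussion, not an official compatibility table).
deployments:
  kubernetes-1.14:
    livenessprobe: v1.1.0          # no API changes assumed => latest everywhere
    node-driver-registrar: v1.2.0  # same assumption
    external-snapshotter: v1.2.2   # listed as recommended for 1.14, so used on 1.14-1.16
  kubernetes-latest:
    external-provisioner: v1.4.0   # compatible with Kubernetes >= 1.13 per its release notes
```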

CHANGELOG-2.0.md Outdated

## Other Notable Changes

- The deployment uses `hostpath.csi.k8s.io` as driver name ([#64](https://github.com/kubernetes-csi/csi-driver-host-path/pull/64), [@pohly](https://github.com/pohly)).
msau42 (Collaborator):

Is this technically a breaking change? Existing PVs using the old driver name won't work anymore?

pohly (Contributor Author):

True. I can move that into a new "breaking changes" section.

@@ -37,7 +37,7 @@ spec:
       hostNetwork: true
       containers:
         - name: node-driver-registrar
-          image: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
+          image: quay.io/k8scsi/csi-node-driver-registrar:v1.2.0
msau42 (Collaborator):

Shall we remove the entire 1.13 deployment? 1.13 is technically out of the K8s support policy and we don't run tests on 1.13 anymore.

pohly (Contributor Author):

Fine with me.

msau42 (Collaborator):

do you plan to remove 1.13 specs in this pr?


CHANGELOG-2.0.md Outdated

## Breaking Changes

- The deployment uses `hostpath.csi.k8s.io` as driver name ([#64](https://github.com/kubernetes-csi/csi-driver-host-path/pull/64), [@pohly](https://github.com/pohly)).
msau42 (Collaborator):

Can you add a note about what might break and what they can do to mitigate/resolve it?

pohly (Contributor Author):

Done.


pohly added 2 commits October 16, 2019 21:32:

- Deployments for older Kubernetes releases get updated if the release notes of a sidecar declare the sidecar release as "recommended" for that release of Kubernetes. The driver itself still needs to be tagged once this change is merged.
- That release is no longer supported.
@k8s-ci-robot added the size/L label (100-499 lines, ignoring generated files) and removed size/M on Oct 16, 2019
pohly (Contributor Author) commented Oct 16, 2019

> do you plan to remove 1.13 specs in this pr?

Done, also mentioned in changelog.

CHANGELOG-1.2.md Outdated
## Breaking Changes

- The deployment uses `hostpath.csi.k8s.io` as driver name ([#64](https://github.com/kubernetes-csi/csi-driver-host-path/pull/64), [@pohly](https://github.com/pohly)).
Make sure that there are no active ephemeral inline volumes using the old `csi-hostpath` name before updating because otherwise
msau42 (Collaborator):

Also existing PVs created using the old driver name won't work either.

pohly (Contributor Author):

Aren't they tied to the storage class, and if the storage class gets updated the driver is still found under the new name? That at least was my hope; I haven't tried any of this.

msau42 (Collaborator):

Once provisioned, the PV object contains the name of the driver in it. So if you change the driver name, the driver will no longer be able to process existing PVs.

pohly (Contributor Author):

Updated.

CHANGELOG-1.2.md Outdated

- The deployment uses `hostpath.csi.k8s.io` as driver name ([#64](https://github.com/kubernetes-csi/csi-driver-host-path/pull/64), [@pohly](https://github.com/pohly)).
Make sure that there are no persistent or ephemeral volumes using the old `csi-hostpath` name before updating because otherwise
those volumes cannot be removed. Pods with ephemeral volumes will be stuck in "terminating" state.
msau42 (Collaborator):

And any new pods cannot start

pohly (Contributor Author):

"Any new pod" or "pods using the old name"? I suppose the latter.

pohly (Contributor Author):

Updated.

CHANGELOG-1.2.md Outdated
- The deployment uses `hostpath.csi.k8s.io` as driver name ([#64](https://github.com/kubernetes-csi/csi-driver-host-path/pull/64), [@pohly](https://github.com/pohly)).
Make sure that there are no persistent or ephemeral volumes using the old `csi-hostpath` name before updating because otherwise
those volumes cannot be removed. Pods with such ephemeral volumes will be stuck in "terminating" state. New pods using
the old name in an inline volume will not be able to start.
msau42 (Collaborator):

This is true for PVs too.

pohly (Contributor Author):

Updated.

I hereby promise to never, ever change the name of a CSI driver again. 😬
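As an operational footnote to the rename discussed in this thread: a quick pre-upgrade check can surface affected volumes. This is a sketch only; the jsonpath filter assumes CSI PVs expose the driver name at `.spec.csi.driver`, and the pod grep is a rough heuristic:

```shell
# List PVs still referencing the old driver name (sketch; adjust for your cluster).
kubectl get pv -o jsonpath='{range .items[?(@.spec.csi.driver=="csi-hostpath")]}{.metadata.name}{"\n"}{end}'

# Ephemeral inline volumes live in pod specs, so check those as well.
kubectl get pods --all-namespaces -o json | grep '"driver": "csi-hostpath"'
```

If either command returns matches, delete or migrate those volumes before rolling out the renamed driver.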

msau42 (Collaborator) commented Oct 18, 2019

/lgtm

@k8s-ci-robot added the lgtm ("Looks good to me") label on Oct 18, 2019
msau42 (Collaborator) commented Oct 18, 2019

/approve

k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: msau42, pohly

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label on Oct 18, 2019
@k8s-ci-robot k8s-ci-robot merged commit 8359099 into kubernetes-csi:master Oct 18, 2019
@@ -40,7 +40,7 @@ spec:
       serviceAccountName: csi-attacher
       containers:
        - name: csi-attacher
-          image: quay.io/k8scsi/csi-attacher:v1.2.0
+          image: quay.io/k8scsi/csi-attacher:v2.0.0
msau42 (Collaborator):

Hm, one reason though to update the 1.14 deployment to use attacher 2.0.0 is that currently our canary 1.14 tests are failing because 2.0.0 added new RBAC rules. We would need to update the 1.14 deployment here to pick up the new RBAC rules.

pohly (Contributor Author):

> currently our canary 1.14 tests are failing because 2.0.0 added new RBAC rules

But isn't that as it should be? attacher 2.0.0 is not meant to run on Kubernetes 1.14, so we shouldn't be testing that combination (anymore).

msau42 (Collaborator):

This is where it gets complicated. I think it's still valuable to test 1.14 canary for all the other sidecars, but not the attacher. Or we make an exception in this case.
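For readers following the RBAC point: each sidecar release ships its own `rbac.yaml`, and bumping the image without regenerating the RBAC objects is what makes the canary fail. The fragment below only illustrates the shape of such a rule; it is a hypothetical example, not the actual rules that attacher 2.0.0 added:

```yaml
# Hypothetical RBAC fragment; the authoritative rules come from the
# sidecar's own deploy/rbac.yaml for the exact release in use.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-attacher-runner   # illustrative name
rules:
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]        # example of a resource a newer sidecar may need
    verbs: ["get", "list", "watch"]
```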

pohly (Contributor Author) commented Oct 21, 2019 via email

msau42 (Collaborator) commented Oct 21, 2019
