
Propagate skipPhases from Kubeadm config to Cluster API objects #5357

Closed
mkjpryor opened this issue Sep 29, 2021 · 14 comments · Fixed by #5993

@mkjpryor

User Story

I want to be able to specify skipPhases in my Kubeadm configuration in order to prevent kube-proxy from being deployed because the CNI I am using (Cilium) includes a more performant alternative.

Detailed Description

Upstream Kubeadm configuration resources now support skipPhases for init and join configurations. We need to add this to the Cluster API resources and propagate it.
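
For illustration, a minimal sketch of what the field addition could look like, mirroring the doc comment of the upstream kubeadm v1beta3 API (the exact placement and wording here are assumptions, not a merged change):

```go
// Sketch: SkipPhases added to the v1beta1 bootstrap types in
// bootstrap/kubeadm/api/v1beta1/kubeadm_types.go; the same field would be
// added to JoinConfiguration as well.
package v1beta1

type InitConfiguration struct {
	// ... existing fields elided ...

	// SkipPhases is a list of phases to skip during command execution.
	// The list of phases can be obtained with the "kubeadm init --help" command.
	// +optional
	SkipPhases []string `json:"skipPhases,omitempty"`
}
```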

/kind feature

@k8s-ci-robot added the kind/feature label Sep 29, 2021
@sbueringer
Member

/cc @chrischdi

@neolit123
Member

@mkjpryor this was briefly discussed in the CAPI meeting today, since it has been a recurring ask (at least on Slack).
The response from the maintainers is that if someone wishes to work on this, they can go ahead.

/help

@k8s-ci-robot
Contributor

@neolit123:
This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

@mkjpryor this was briefly discussed in the CAPI meeting today, since it has been a recurring ask (at least on Slack).
The response from the maintainers is that if someone wishes to work on this, they can go ahead.

/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the help wanted label Sep 29, 2021
@dharmjit

dharmjit commented Sep 29, 2021

Hi @sbueringer, @neolit123, I could look into this. Can I assign this issue to myself?

@sbueringer
Member

@dharmjit Yes

@dharmjit

/assign

@vincepri
Member

/milestone v1.0

@k8s-ci-robot added this to the v1.0 milestone Sep 30, 2021
@dharmjit

dharmjit commented Oct 1, 2021

Hi @fabriziopandini @sbueringer, I have the following in mind to implement this. Please let me know if I missed something.

  • Add a field SkipPhases []string to the {Init|Join}Configuration structs in bootstrap/kubeadm/api/v1beta1/kubeadm_types.go. This will only be introduced in v1beta1, not in previous versions.
  • bootstrap/kubeadm/types/upstreamv1beta3 is already in sync with upstream. Nothing to do here.
  • Generate CRDs via make generate-manifests.
  • Generate deepcopy and conversion code via make generate-go and make generate-go-conversions.
  • Manually create the {Init|Join}Configuration conversion functions that are not auto-generated, due to this error: the following fields need manual conversion: SkipPhases (a sketch follows this list).
  • Their autoConvert_* counterparts carry this warning: // WARNING: in.SkipPhases requires manual conversion: does not exist in peer-type. Is it okay to keep it?
  • I am not sure whether we need to add conversion or other tests.
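
For illustration, a sketch of what one such manual wrapper could look like, assuming the usual conversion-gen layout in an older upstream package such as bootstrap/kubeadm/types/upstreamv1beta2 (function and import names follow conversion-gen's conventions but are assumptions, not the merged code):

```go
package upstreamv1beta2

import (
	apiconversion "k8s.io/apimachinery/pkg/conversion"

	bootstrapv1 "sigs.k8s.io/cluster-api/bootstrap/kubeadm/api/v1beta1"
)

// Convert_v1beta1_InitConfiguration_To_upstreamv1beta2_InitConfiguration wraps
// the generated autoConvert_* function. SkipPhases has no counterpart in this
// older upstream version, so it is intentionally dropped here; this is exactly
// the field the autoConvert_* warning points at.
func Convert_v1beta1_InitConfiguration_To_upstreamv1beta2_InitConfiguration(in *bootstrapv1.InitConfiguration, out *InitConfiguration, s apiconversion.Scope) error {
	return autoConvert_v1beta1_InitConfiguration_To_upstreamv1beta2_InitConfiguration(in, out, s)
}
```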

Also, I am getting this go:build !ignore_autogenerated_core header in generated files. I am using Go 1.17; do I need to use a specific Go version?

@fabriziopandini
Member

This really depends on what the expected behaviour is.

skipPhases has been added in the latest version of the kubeadm config, but KubeadmConfig allows creating clusters with older versions of kubeadm as well.

If I'm right, the proposed approach silently ignores the flag for older versions, which could be confusing for users.

So the question is: do we consider this acceptable? If yes, how do we document this properly? If not, how do we translate skipPhases for older versions, e.g. will it be possible to translate skipPhases into kubeadm command-line flags?

I don't have strong opinions about this, but given that this is user facing, it should probably be discussed in the channel or at the next office hours (I raised the point at the latest office hours, but as far as I understood we didn't reach an agreement on it).

Also, I am getting this go:build !ignore_autogenerated_core header in generated files. I am using Go 1.17; do I need to use a specific Go version?

Yes, you should use Go v1.16 (Cluster API is in sync with the version of Kubernetes we are importing).
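
For context, the header difference looks roughly like this (illustration only; the exact tag may differ from the one in the repo):

```go
//go:build !ignore_autogenerated
// +build !ignore_autogenerated

// gofmt under Go 1.17 adds the //go:build line above to any file carrying the
// old // +build form, while code generated under Go 1.16 contains only the
// // +build line. Regenerating with a different toolchain therefore rewrites
// the headers of generated files.
package v1beta1
```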

@sbueringer
Member

Do we already have some fields which are only supported in newer kubeadm versions? If yes, how do we handle them currently?

@mkjpryor
Author

mkjpryor commented Oct 1, 2021

If you are worried about backwards compatibility, why not always translate it into command-line arguments until every supported kubeadm version has the config file option? Even when the config file option is available, the flag is still supported...

You can transition from one to the other in the future without changing the Cluster API interface that is used.
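
A minimal sketch of that translation (the helper name is hypothetical; kubeadm init and kubeadm join accept a comma-separated --skip-phases flag):

```go
package main

import (
	"fmt"
	"strings"
)

// skipPhasesFlag renders the SkipPhases list as a kubeadm CLI flag, for
// kubeadm versions whose config file API predates skipPhases support.
func skipPhasesFlag(phases []string) string {
	if len(phases) == 0 {
		return ""
	}
	return "--skip-phases=" + strings.Join(phases, ",")
}

func main() {
	// Prints: kubeadm init --skip-phases=addon/kube-proxy
	fmt.Println("kubeadm init " + skipPhasesFlag([]string{"addon/kube-proxy"}))
}
```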

@jayunit100
Contributor

After about 2 hours of grepping and googling, I just realized this :) What is the current workaround?

@jayunit100
Contributor

Yes, silently ignoring the flag is fine, imo, as a user :) I have a selfish motivation for this.

@vincepri
Member

Do we already have some fields which are only supported in newer kubeadm versions? If yes, how do we handle them currently?

@fabriziopandini might know more, but AFAIK today all configuration options have been compatible with each Kubeadm/Kubernetes version.

If you are worried about backwards compatibility, why not always translate it into command-line arguments until every supported kubeadm version has the config file option? Even when the config file option is available, the flag is still supported...

That could be a potential solution, although translating only some options to CLI flags still requires Cluster API to be aware of which Kubernetes version a flag is supported in.

If I'm right, the proposed approach silently ignores the flag for older versions, which could be confusing for users.

Silently ignoring flags isn't ideal; we should try to warn users as much as possible and potentially throw an error.


With this issue, the compatibility matrix for kubeadm expands to cover not only the kubeadm API version, but also the Kubernetes version that supports each field. Thankfully, we're carrying our own kubeadm shim-like types, so adding a new field and cross-checking the kubeadm version shouldn't be an issue if we go down that path, although it does require us to introduce a new layer of compatibility, which might be ok in this case.
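
As a rough illustration of that cross-check, a sketch under the assumption that the config-file skipPhases landed with kubeadm's v1beta3 API in Kubernetes v1.22, using github.com/blang/semver (already a Cluster API dependency) for the comparison:

```go
package main

import (
	"fmt"

	"github.com/blang/semver"
)

// Assumed cutoff: the kubeadm v1beta3 config API (the first with skipPhases)
// shipped with Kubernetes v1.22.
var minSkipPhasesConfigVersion = semver.MustParse("1.22.0")

// supportsSkipPhasesInConfig reports whether the target Kubernetes version can
// take skipPhases via the config file; older versions would need the CLI flag.
func supportsSkipPhasesInConfig(kubernetesVersion semver.Version) bool {
	return kubernetesVersion.GTE(minSkipPhasesConfigVersion)
}

func main() {
	fmt.Println(supportsSkipPhasesInConfig(semver.MustParse("1.21.3"))) // false
	fmt.Println(supportsSkipPhasesInConfig(semver.MustParse("1.23.0"))) // true
}
```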
