
Nodegroup role name exceeds maximum length #2704


Closed
richardcase opened this issue Aug 24, 2021 · 15 comments
Labels
  • area/provider/eks: Issues or PRs related to Amazon EKS provider
  • good first issue: Denotes an issue ready for a new contributor, according to the "help wanted" guidelines.
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@richardcase
Member

/kind bug
/area provider/eks
/priority important-soon
/milestone v0.7.x

What steps did you take and what happened:

Created a stack to allow IAM roles per cluster:

apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1alpha1
kind: AWSIAMConfiguration
spec:
  bootstrapUser:
    enable: true
  eks:
    enable: true
    iamRoleCreation: true # Set to true if you plan to use the EKSEnableIAM feature flag to enable automatic creation of IAM roles

Enabled IAM roles per cluster using the following before clusterctl init:

export EXP_EKS_IAM=true

Created a machine pool with the following specs:

apiVersion: exp.cluster.x-k8s.io/v1alpha3
kind: MachinePool
metadata:
  name: "capi-managed-test-pool-0"
spec:
  clusterName: "capi-managed-test"
  template:
    spec:
      clusterName: "capi-managed-test"
      bootstrap:
        dataSecretName: ""
      infrastructureRef:
        name: "capi-managed-test-pool-0"
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: AWSManagedMachinePool
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSManagedMachinePool
metadata:
  name: "capi-managed-test-pool-0"
spec:
  instanceType: t3.large
  amiType: AL2_x86_64

And then we get the following error when reconciliation occurs:

[manager] E0824 12:46:07.186195       8 controller.go:257] controller-runtime/controller "msg"="Reconciler error" "error"="failed to reconcile machine pool for AWSManagedMachinePool default/capi-managed-test-pool-0: ValidationError: 1 validation error detected: Value 'default_capi-managed-test-control-plane-default_capi-managed-test-pool-0-nodegroup-iam-service-role' at 'roleName' failed to satisfy constraint: Member must have length less than or equal to 64\n\tstatus code: 400, request id: d1f43a56-6b8f-4ebe-984c-6c24647e43e9" "controller"="awsmanagedmachinepool" "name"="capi-managed-test-pool-0" "namespace"="default"

What did you expect to happen:
I would expect the controller to detect that the auto-generated role name is too long and truncate/hash it so it fits within the limit.
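
For illustration, a minimal sketch of that kind of truncation in Go (the helper name and hashing scheme here are hypothetical, not the provider's actual code):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// truncateWithHash is a hypothetical helper: names within maxLen are returned
// unchanged; longer names are shortened and a short hash of the full name is
// appended so the result stays unique and within the limit.
func truncateWithHash(name string, maxLen int) string {
	if len(name) <= maxLen {
		return name
	}
	sum := sha256.Sum256([]byte(name))
	suffix := hex.EncodeToString(sum[:])[:8]
	return name[:maxLen-len(suffix)-1] + "-" + suffix
}

func main() {
	// The role name from the error above, well over the 64-character limit.
	role := "default_capi-managed-test-control-plane-default_capi-managed-test-pool-0-nodegroup-iam-service-role"
	fmt.Println(truncateWithHash(role, 64)) // prints a name of at most 64 characters
}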

Anything else you would like to add:
This fix will need to be applied to 0.7.x and then backported to 0.6.x.

Environment:

  • Cluster-api-provider-aws version: 0.6.8
@k8s-ci-robot added this to the v0.7.x milestone Aug 24, 2021
@k8s-ci-robot added the kind/bug, area/provider/eks, and priority/important-soon labels Aug 24, 2021
@randomvariable modified the milestones: v0.7.x, v1.x Nov 8, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Feb 6, 2022
@richardcase
Member Author

/lifecycle frozen
/good-first-issue

@k8s-ci-robot
Contributor

@richardcase:
This request has been marked as suitable for new contributors.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.

In response to this:

/lifecycle frozen
/good-first-issue

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot added the lifecycle/frozen, good first issue, and help wanted labels and removed the lifecycle/stale label Feb 7, 2022
@richardcase
Member Author

/lifecycle frozen

@Callisto13
Contributor

@richardcase can I take this one?

@richardcase
Member Author

/assign Callisto13

@Callisto13
Contributor

Callisto13 commented Jun 16, 2022

I think this is already in place, just perhaps not ported back to 0.6?

func GenerateEKSName(resourceName, namespace string, maxLength int) (string, error) {

see 50ed343
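
Assuming that helper behaves as its signature suggests, a hypothetical test sketch of the contract it should satisfy (the import path below is a guess and may differ between releases; only the signature above is taken from the repo, and 64 is the IAM role-name limit from the ValidationError):

package eks_test

import (
	"testing"

	"sigs.k8s.io/cluster-api-provider-aws/pkg/eks"
)

func TestGeneratedRoleNameStaysWithinIAMLimit(t *testing.T) {
	// resourceName and namespace match the manifests in the issue description.
	name, err := eks.GenerateEKSName("capi-managed-test-pool-0-nodegroup-iam-service-role", "default", 64)
	if err != nil {
		t.Fatal(err)
	}
	if len(name) > 64 {
		t.Fatalf("generated role name %q is %d characters, want <= 64", name, len(name))
	}
}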

@richardcase
Member Author

/remove-lifecycle frozen

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Oct 10, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Nov 9, 2022
@richardcase removed this from the v1.x milestone Nov 10, 2022
@Sajiyah-Salat

Does this issue still need to be solved?

@Callisto13
Contributor

@richardcase the description originally said that a fix should be backported to 0.6.0, but that is a very old version now. Do we want to bother doing that or close this as fixed from 0.7.0?

@richardcase
Member Author

As this is fixed in the currently supported versions (https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/main/pkg/cloud/services/eks/roles.go#L165:L168, thanks @Callisto13 for checking this), we can close this issue; there is no need to backport to 0.6.0 as it is now unsupported.

@richardcase
Member Author

/close

@k8s-ci-robot
Contributor

@richardcase: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
