ACK IAM :: Blue Green deployment #1403

Open
transadm312 opened this issue Jul 26, 2022 · 5 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@transadm312

Describe the bug
I have a requirement to deploy ACK IAM resources into a secondary EKS cluster as part of a blue/green deployment.

Steps to reproduce
1. Install the ACK IAM resources in the primary cluster.
2. Create a secondary EKS cluster.
3. Install the same ACK IAM resources in the secondary cluster (see the manifest sketch below).
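
For reference, a minimal sketch of the kind of manifest involved; the role name, namespace, and policy are illustrative assumptions, not taken from this issue. Applying the same manifest from the second cluster's controller is what produces the error described below.

```yaml
# Hypothetical ACK IAM Role manifest applied from both clusters.
# Names and the attached policy ARN are placeholders.
apiVersion: iam.services.k8s.aws/v1alpha1
kind: Role
metadata:
  name: app-role
  namespace: default
spec:
  name: app-role
  assumeRolePolicyDocument: |
    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole"
      }]
    }
  policies:
    - arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```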

Expected outcome
The Role should be created, or updated in place if it already exists, but the controller in the secondary cluster instead fails with a "Resource already exists" exception.

Environment

  • Kubernetes version: 1.21
  • Using EKS: yes
  • AWS service targeted: IAM
@transadm312 transadm312 added the kind/bug Categorizes issue or PR as related to a bug. label Jul 26, 2022
@a-hilaly a-hilaly added IAM and removed IAM labels Jul 26, 2022
@a-hilaly
Member

Hey @transadm312, having two clusters managing the same resource is no different from having two controllers in the same cluster. Ideally you should use a different AWS account for the secondary cluster (the second controller), or configure the controller in the secondary cluster to use a different AWS region.
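
For the region approach, a minimal sketch of Helm values for the secondary cluster's controller installation, assuming the chart exposes the usual `aws.region` value; the region names themselves are placeholders, not taken from this issue.

```yaml
# Hypothetical Helm values for the IAM controller in the secondary cluster.
# The controller in the primary cluster would be configured with a different region.
aws:
  region: us-west-2   # placeholder; primary cluster's controller might use us-east-1
```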

@vijtrip2
Contributor

Hi @transadm312, this looks similar to #1381. It is on our radar.
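
In the meantime, one possible workaround, sketched here as an assumption rather than anything recommended in this thread, is to have the secondary cluster's controller adopt the role that already exists in AWS via ACK's AdoptedResource mechanism; the names below are illustrative only.

```yaml
# Hypothetical AdoptedResource pointing the secondary cluster's IAM controller
# at the IAM role that already exists in the AWS account.
apiVersion: services.k8s.aws/v1alpha1
kind: AdoptedResource
metadata:
  name: adopt-app-role
  namespace: default
spec:
  aws:
    nameOrID: app-role            # existing IAM role name in AWS (placeholder)
  kubernetes:
    group: iam.services.k8s.aws
    kind: Role
    metadata:
      name: app-role              # Role resource to create in this cluster
      namespace: default
```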

@eks-bot

eks-bot commented Oct 24, 2022

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale

@ack-bot ack-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 24, 2022
@eks-bot

eks-bot commented Nov 23, 2022

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle rotten

@ack-bot ack-bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 23, 2022
@a-hilaly
Member

/lifecycle frozen

@ack-bot ack-bot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Nov 27, 2022