Optional ability to restrict to a single namespace #581

Closed
ncdc opened this issue Feb 15, 2019 · 20 comments · Fixed by #607 or #615
Labels
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/active: Indicates that an issue or PR is actively being worked on by a contributor.
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

ncdc (Contributor) commented Feb 15, 2019

/kind feature

Describe the solution you'd like
I'm interested in being able to manage clusters in multiple AWS accounts. I believe a natural and easy way to achieve this could be to partition by namespace, where all the clusters in a single namespace belong to the same AWS account.

I'd like to add an optional ability to restrict CAPA so it watches for changes only in a single namespace, instead of across all namespaces. Operators using this option would need to deploy 1 CAPA controller pod per namespace/AWS account if they want to support multiple accounts.

Anything else you would like to add:
The controller-runtime manager.Options struct has a Namespace field that restricts the listing/watching to the specified namespace. I'm not sure if there have been any previous discussions about how to configure the controller, but we could add an optional flag to the command line for this.
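For illustration, a minimal sketch of what that wiring could look like (not the actual CAPA main.go; the flag name and the exact controller-runtime API shapes here are assumptions and vary between controller-runtime versions):

```go
package main

import (
	"flag"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
	// An empty value (the default) preserves today's behavior: watch all namespaces.
	watchNamespace := flag.String("namespace", "",
		"Namespace the controller watches; empty means all namespaces")
	flag.Parse()

	// manager.Options.Namespace scopes the manager's cache (list/watch) to a
	// single namespace, which every controller built on this manager then uses.
	mgr, err := manager.New(ctrl.GetConfigOrDie(), manager.Options{
		Namespace: *watchNamespace,
	})
	if err != nil {
		panic(err)
	}

	// ... register the cluster and machine controllers/actuators as today ...

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```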

Environment:

  • Cluster-api-provider-aws version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):
k8s-ci-robot added the kind/feature label Feb 15, 2019
@vincepri (Member) commented:

Doesn't the manager.Options.Namespace flag only restrict the internal cache to a specific namespace?

/help
/milestone v1alpha1
/priority important-soon

@k8s-ci-robot (Contributor) commented:

@vincepri:
This request has been marked as needing help from a contributor.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

Doesn't the manager.Options.Namespace flag only restrict the internal cache to a specific namespace?

/help
/milestone v1alpha1
/priority important-soon

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot added this to the v1alpha1 milestone Feb 15, 2019
k8s-ci-robot added the help wanted and priority/important-soon labels Feb 15, 2019
detiber (Member) commented Feb 15, 2019

I've added this to the agenda for the next office hours

detiber (Member) commented Feb 15, 2019

Doesn't the manager.Options.Namespace flag only restrict the internal cache to a specific namespace?

Further restriction could be done via RBAC policy as well. Currently we set up RBAC via a ClusterRole, but we could limit that further than we do today.
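As a purely illustrative sketch of what a tighter, namespace-scoped grant could look like, expressed with the rbac/v1 Go types rather than the repo's actual manifests (the role name, API group, resources, and verbs below are assumptions, not copied from the repo):

```go
package sketch

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// namespacedRole sketches a Role scoped to the namespace a controller
// instance watches, as an alternative to today's cluster-wide ClusterRole.
func namespacedRole(watchNamespace string) *rbacv1.Role {
	return &rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "cluster-api-provider-aws", // hypothetical name
			Namespace: watchNamespace,
		},
		Rules: []rbacv1.PolicyRule{{
			// API group and resources are assumptions for the v1alpha1
			// cluster-api types; not copied from the repo's manifests.
			APIGroups: []string{"cluster.k8s.io"},
			Resources: []string{"clusters", "machines"},
			Verbs:     []string{"get", "list", "watch", "create", "update", "patch", "delete"},
		}},
	}
}
```

The Role would then be bound with a RoleBinding in that same namespace instead of a cluster-wide binding.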

ncdc (Contributor, Author) commented Feb 15, 2019

@vincepri yes, that's the goal

@vincepri (Member) commented:

The tricky part would be limiting RBAC via a dynamic flag; we could leave the defaults as they are and have users patch their RBAC to be more strict.

ncdc (Contributor, Author) commented Feb 15, 2019

RBAC restriction is not strictly necessary, but it does add another layer of assurance.

@vincepri (Member) commented:

I wonder if calls to Watch (for example, c.Watch(&source.Kind{Type: &clusterv1.Machine{}}, &handler.EnqueueRequestForObject{})) should also be namespaced upstream if the Manager's namespace is set.

ncdc (Contributor, Author) commented Feb 15, 2019

They use the informer, which has its list+watch restricted to the namespace
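Roughly, as a sketch (import paths and API shapes follow the controller-runtime/cluster-api versions vendored around this time; the controller name and reconciler are placeholders): source.Kind is backed by the Manager's cache, so when the Manager is created with a Namespace set, the underlying informer only lists and watches that namespace and the Watch call itself needs no change.

```go
package sketch

import (
	clusterv1 "sigs.k8s.io/cluster-api/pkg/apis/cluster/v1alpha1"
	"sigs.k8s.io/controller-runtime/pkg/controller"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/manager"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
	"sigs.k8s.io/controller-runtime/pkg/source"
)

// addMachineWatch mirrors the Watch call quoted above. The namespace
// restriction comes from the cache/informer the manager injects into
// source.Kind, not from this call itself.
func addMachineWatch(mgr manager.Manager, r reconcile.Reconciler) error {
	c, err := controller.New("machine-controller", mgr, controller.Options{Reconciler: r})
	if err != nil {
		return err
	}
	return c.Watch(&source.Kind{Type: &clusterv1.Machine{}}, &handler.EnqueueRequestForObject{})
}
```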

ncdc (Contributor, Author) commented Feb 15, 2019

Or what do you mean by "namespaced upstream"?

detiber (Member) commented Feb 15, 2019

The tricky part would be limiting RBAC via a dynamic flag; we could leave the defaults as they are and have users patch their RBAC to be more strict.

It can be limited at deploy time, which is when the dynamic flag for the controller binary typically would be set anyway.

@vincepri (Member) commented:

I meant that we should make sure calls that use the client in cluster-api are also properly namespaced.

@vincepri (Member) commented:

/assign

ncdc (Contributor, Author) commented Feb 15, 2019

cluster-api can stay across all namespaces - its controllers can operate generically across all providers & accounts

@vincepri (Member) commented:

@ncdc Given that we vendor cluster-api code and use the upstream controllers to run our actuators, I think the restriction should go there as well. 🤔

ncdc (Contributor, Author) commented Feb 15, 2019

There's no harm in having the upstream controllers optionally limited to a single namespace, but I don't think it buys us anything.

If I want to deploy the CAPI controllers, where do you envision that I get them from? A CAPI release and/or image, or from a CAPA release/image?

detiber (Member) commented Feb 15, 2019

If I want to deploy the CAPI controllers, where do you envision that I get them from? A CAPI release and/or image, or from a CAPA release/image?

@ncdc That is something that still needs to be determined as part of the Release and Versioning discussions: kubernetes-sigs/cluster-api#730

Currently CAPI only publishes a latest tag for its resources, so you really need to deploy the manifests from one of the downstream providers. Today there are no assurances that the published image(s) will work across providers, or that the downstream manifests being deployed are compatible in any way :(

@ashish-amarnath (Contributor) commented:

/assign
/lifecycle active
I'll digest this and come up with a proposal. I had some thoughts around this when I did the namespace work for clusters and machines: kubernetes-sigs/cluster-api#252

Also, is this v1alpha1?

k8s-ci-robot added the lifecycle/active label Feb 15, 2019
detiber (Member) commented Feb 15, 2019

/priority important-longterm

@ashish-amarnath I would consider this a stretch goal for v1alpha1 and not necessarily a hard requirement.

k8s-ci-robot added the priority/important-longterm label Feb 15, 2019
detiber removed the priority/important-soon label Feb 15, 2019
ashish-amarnath (Contributor) commented Feb 25, 2019

By setting Namespace in manager.Options (PR: #607), I was able to restrict the CAPA controllers to reconciling CAPI objects only in the supplied namespace, and it worked as expected.
By running multiple instances of the CAPA controllers, each with its own set of AWS credentials, it is possible to reconcile clusters in different AWS accounts.

Tested scenarios:

  • In bootstrap cluster, should reconcile cluster objects created in the watched namespace.
  • In bootstrap cluster, should not reconcile cluster objects created in the unwatched namespace.
  • In target cluster, should reconcile machine objects created in the watched namespace.
  • In target cluster, should not reconcile machine objects created in the unwatched namespace.

TODO:

  • Document how to configure CAPA to reconcile CAPI objects in a specific namespace: document the 'namespace' CLI flag to the CAPA controller. #615
  • clusterawsadm only supports using AWS credentials with the default profile, but it should be able to render credentials for a custom profile
    (say, a profile per account). This would allow existing tooling to supply each CAPA controller instance with its own set of credentials.
    Expert use case, no tooling support.
  • Other scenario-specific release artifacts? Expert use case, no specific release artifacts.
  • Advertise in upstream to drive consistent behavior across providers? The ability to run multiple CAPI implementations side-by-side may be useful. However, upstream CAPI, IMO, should not hold strong opinions about these implementation specifics.

References for release notes:
By restricting the CAPI object reconciliation to a specific namespace, cluster administrators will be able to:

  • Roll CAPA releases out in phases, across environments, with a controlled blast radius, and confidently consume CAPA releases at a regular cadence.
  • Run CAPI for multiple providers alongside the AWS provider and reconcile clusters across cloud providers.
  • Run multiple instances of CAPA controllers, each with its own credentials, to reconcile clusters across different AWS accounts.
