Optional ability to restrict to a single namespace #581
Comments
Doesn't the /help
@vincepri: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I've added this to the agenda for the next office hours.
Further restriction could be done via RBAC policy as well. Currently we are setting up RBAC via a ClusterRole, but we could limit that further than we do today.
@vincepri yes, that's the goal.
The tricky part would be limiting RBAC via a dynamic flag; we could leave the defaults as they are and have users patch their RBAC to be more strict.
RBAC restriction is not strictly necessary, but it does add another layer of assurance. |
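For concreteness, here is a rough sketch, expressed with the Go RBAC types, of what a namespace-scoped Role could look like if users wanted to patch their RBAC this way. The resource names, API group, and rule list are illustrative assumptions, not the actual CAPA manifests, and a RoleBinding in the same namespace would replace the ClusterRoleBinding:

```go
// Illustrative sketch only: a namespaced Role covering roughly what the
// ClusterRole grants today, applied once per watched namespace.
// Names, API group, and verbs here are assumptions, not the real manifests.
package main

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// namespacedManagerRole scopes the controller's permissions to a single
// namespace; a RoleBinding in the same namespace would bind it to the
// controller's ServiceAccount.
func namespacedManagerRole(ns string) *rbacv1.Role {
	return &rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{Name: "capa-manager-role", Namespace: ns},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{"cluster.k8s.io"},
			Resources: []string{"clusters", "machines"},
			Verbs:     []string{"get", "list", "watch", "create", "update", "patch", "delete"},
		}},
	}
}

func main() {
	_ = namespacedManagerRole("team-a")
}
```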
I wonder if calls to Watch (example) are namespaced upstream?
They use the informer, which has its list+watch restricted to the namespace.
Or what do you mean by "namespaced upstream"? |
It can be limited at deploy time, which is when the dynamic flag for the controller binary typically would be set anyway. |
I meant that we should make sure calls that use the client in cluster-api are also properly namespaced. |
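As a rough illustration (not CAPA's or cluster-api's actual code), direct reads through the controller-runtime client can be kept namespace-scoped by always passing client.InNamespace. The function and variable names below are assumptions, the cluster-api import path is approximate for v1alpha1, and the variadic ListOption signature matches newer controller-runtime releases:

```go
// Sketch of keeping direct client reads namespace-scoped rather than
// listing cluster-wide once the manager itself is restricted.
package sketch

import (
	"context"

	clusterv1 "sigs.k8s.io/cluster-api/pkg/apis/cluster/v1alpha1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// listClusters lists Cluster objects only in the namespace the controller
// was configured to watch.
func listClusters(ctx context.Context, c client.Client, watchNamespace string) (*clusterv1.ClusterList, error) {
	clusters := &clusterv1.ClusterList{}
	if err := c.List(ctx, clusters, client.InNamespace(watchNamespace)); err != nil {
		return nil, err
	}
	return clusters, nil
}
```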
/assign |
cluster-api can stay across all namespaces - its controllers can operate generically across all providers & accounts |
@ncdc Given that we vendor cluster-api code and we use upstream controllers to run our actuators I think the restriction should go there as well. 🤔 |
There's no harm in having the upstream controllers optionally limited to a single namespace, but I don't think it buys us anything. If I want to deploy the CAPI controllers, where do you envision that I get them from? A CAPI release and/or image, or from a CAPA release/image? |
@ncdc That is something that still needs to be determined out of the Release and Versioning discussions: kubernetes-sigs/cluster-api#730 Currently capi only publishes a latest tag for their resources, but you really need to deploy the manifest from one of the downstream providers. Today there are no assurances that the published image(s) will work across providers or that the downstream manifests deployed are compatible in any way :( |
/assign
Also, is this v1alpha1?
/priority important-longterm
@ashish-amarnath I would consider this a stretch goal for v1alpha1 and not necessarily a hard requirement.
Setting the
Tested scenarios:
TODO:
References for release notes:
/kind feature
Describe the solution you'd like
I'm interested in being able to manage clusters in multiple AWS accounts. I believe a natural and easy way to achieve this could be to partition by namespace, where all the clusters in a single namespace belong to the same AWS account.
I'd like to add an optional ability to restrict CAPA so it watches for changes only in a single namespace, instead of across all namespaces. Operators using this option would need to deploy 1 CAPA controller pod per namespace/AWS account if they want to support multiple accounts.
Anything else you would like to add:
The controller-runtime manager.Options struct has a Namespace field that restricts the listing/watching to the specified namespace. I'm not sure if there have been any previous discussions about how to configure the controller, but we could add an optional flag to the command line for this.
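For illustration, a minimal sketch of how that could be wired up, assuming a new --namespace flag (the flag name and wiring are placeholders, not an agreed design):

```go
// Minimal sketch, not an actual implementation: an optional --namespace flag
// that feeds manager.Options.Namespace. Leaving it empty keeps today's
// behaviour of watching all namespaces.
package main

import (
	"flag"

	"sigs.k8s.io/controller-runtime/pkg/client/config"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
	watchNamespace := flag.String("namespace", "",
		"Namespace the controller watches for objects; empty means all namespaces.")
	flag.Parse()

	cfg, err := config.GetConfig()
	if err != nil {
		panic(err)
	}

	// Namespace restricts the manager's cache, and therefore the list+watch
	// of every informer its controllers use, to the given namespace.
	mgr, err := manager.New(cfg, manager.Options{Namespace: *watchNamespace})
	if err != nil {
		panic(err)
	}

	// Controllers/actuators would be registered with mgr before starting it.
	_ = mgr
}
```

Operators wanting multiple AWS accounts would then run one controller deployment per namespace, each with its own --namespace value and credentials.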
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):