Define approach for multi tenancy #1631
Comments
The option for complete decoupling would be advantageous. Say I provide two services to my users already: 1. a multitenant-capable Kubernetes; 2. onsite private cloud / public cloud access. I can see some cluster-admins wanting to manage all the plumbing, though, and just let the users manage the clusters themselves without managing the controllers. Perhaps providers need to be available in both ClusterProvider and Provider ways?
@ncdc, as per yesterday's discussion, below are the types of resources that, as of today, you can find in CAPA, split into global resources and namespaced resources. Assuming I have a cluster with a CAPA instance in ns1 and another in ns2:
An upgrade of the ns1 instance from CAPA v0.4.1 to CAPA v0.X.Y might create a problem for the CAPA instance in ns2, because it is going to override the shared global resources (items 2, 3, and 4); the namespaced resources, instead, I don't think would be a problem.
One thing that IMO can be done to reduce the need for multitenancy is to improve how infrastructure provider credentials are managed. E.g., instead of reading credentials from the namespace where the provider is running, the infrastructure provider can:
This allows the usage of many credentials with a single instance of the infrastructure provider, only by duplicating a Secret, not the whole provider.
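As a rough illustration of this suggestion (the Secret name, keys, and namespaces below are invented for the example and are not an existing CAPA contract), adding a tenant would then mean adding one Secret rather than one more provider deployment:

```yaml
# Hypothetical layout: a single CAPA instance, one credentials Secret per
# tenant namespace. Secret name and keys are illustrative only.
apiVersion: v1
kind: Secret
metadata:
  name: capa-credentials      # hypothetical name the provider would look up
  namespace: team-a           # tenant namespace, not the provider's namespace
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: AKIA...TEAMA
  AWS_SECRET_ACCESS_KEY: <redacted>
---
apiVersion: v1
kind: Secret
metadata:
  name: capa-credentials
  namespace: team-b           # a second tenant only needs a second Secret
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: AKIA...TEAMB
  AWS_SECRET_ACCESS_KEY: <redacted>
```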
@fabriziopandini currently I believe all providers are using the default lookup behaviors of the cloud SDKs they use for handling credentials; changing that behavior could have varied implications. In some cases this could reduce the security of credentials handling, and it could also have a potential negative impact on API quotas (such as in the case of AWS, where we use cached sessions to avoid the API throttling that happens when initiating a session).
There may be a middle ground where, instead of the operator deploying controllers per namespace, the main controller is responsible for forking when it detects multi-tenancy and collapsing when the tenants are deleted. This way it eases the cognitive burden on operators to maintain and deploy, while preserving the existing code.
@fabriziopandini it would also be good if, besides
the additional goal would be to define best practices for if, when, and how many different providers should be supported in a single management cluster. BTW, we are working around the current limitation by having a management cluster which is then used to create multiple management clusters, each for one instance of a provider. So, say we want to create workload clusters on 2 vSphere platforms, 2 bare metal pools, and AWS: we would have 5 management clusters, one for each. Cost can be minimized by creating those management clusters as single nodes. The solution is not elegant, though, and we would much prefer to use a single management cluster which hosts multiple providers that are multitenant (i.e., accepting multiple credentials).
I think what we can do here is:
And figuring out how to have a single controller operate with multiple distinct sets of credentials is optional and most likely provider-specific.
@ncdc if this means that we would be able to namespace the controllers in a way that gives us, e.g., ns-CAPx-1 and ns-CAPx-2 in the management cluster, targeting different infrastructure providers of the same kind, each with its own CAPx custom resources, it would cover our use case kubernetes-sigs/cluster-api-provider-vsphere#528 (comment) very well. It would be great to see something in that direction in v1a3. Did I get the idea right?
@vukg I'm not sure I entirely follow what you wrote - could you give a more detailed example please? From your linked issue:
My suggestions above don't address this. My "common flag/config field for having a controller watch a single namespace" is essentially the
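For reference, the "controller watches a single namespace" idea would roughly look like the following when deployed, assuming the provider's manager binary exposes a flag to restrict its watch (the flag name, image, and namespace below are placeholders; the actual option varies by provider and version):

```yaml
# Sketch only: one provider instance per namespace, each restricted to watching
# its own namespace. The --namespace flag and image reference are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capx-controller-manager
  namespace: ns-capx-1
spec:
  replicas: 1
  selector:
    matchLabels:
      control-plane: capx-controller-manager
  template:
    metadata:
      labels:
        control-plane: capx-controller-manager
    spec:
      containers:
      - name: manager
        image: example.com/capx-manager:v0.0.0   # placeholder image
        args:
        - --namespace=ns-capx-1                  # assumed flag: watch only this namespace
```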
@ncdc I got it wrong then. The example is actually in my linked issue: CAPV in v1a2 stores credentials for vSphere in a secret managed by CAPI. This binds one management cluster to one vSphere instance for the creation of workload clusters. What we need is for one CAPI management cluster to be able to talk to multiple different vSpheres at the same time. This can be achieved by making CAPV accept a list of credentials (e.g. vSphere 1, vSphere 2 ...) or by having a mechanism to deploy multiple CAPV infrastructure providers in a single management cluster, isolated in their own namespaces with their own secrets containing vSphere credentials. Hope this gives a bit of clarity.
The second multi-tenancy example is actually using a single CAPI management cluster with multiple CAPI infrastructure providers, including multiple instances of the same provider. In our concrete case we would need to have CAPA, CAPO, and several different (namespaced) instances of CAPV and CAPBM in one CAPI management cluster. We would like to have that in order to have a more elegant, efficient, and simplified management plane setup for our Kubernetes engine.
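To make the per-namespace variant of this setup concrete, here is a minimal sketch (namespace names, Secret names, and credential keys are invented for the example, not an existing CAPV contract): one namespace per vSphere endpoint, each holding its own credentials and, as in the Deployment sketch above, its own CAPV instance.

```yaml
# Illustrative only: two namespaces, each intended to host one CAPV instance
# plus the credentials for one vSphere endpoint.
apiVersion: v1
kind: Namespace
metadata:
  name: capv-vsphere-1
---
apiVersion: v1
kind: Secret
metadata:
  name: capv-credentials
  namespace: capv-vsphere-1
type: Opaque
stringData:
  server: vcenter-1.example.com
  username: capv@vsphere-1.local
  password: <redacted>
---
apiVersion: v1
kind: Namespace
metadata:
  name: capv-vsphere-2
---
apiVersion: v1
kind: Secret
metadata:
  name: capv-credentials
  namespace: capv-vsphere-2
type: Opaque
stringData:
  server: vcenter-2.example.com
  username: capv@vsphere-2.local
  password: <redacted>
```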
I'm ok with documenting the requirements if we can get #1986 to merge, because this PR de facto makes sure that the different instances of the providers are sharing only CRDs, and CRD changes are already strictly regulated.
Ok!
Ok, if #1986 is merged, this is no longer a problem from a clusterctl perspective either.
Reopening to finish up the documentation part
@vincepri this is now documented here https://master.cluster-api.sigs.k8s.io/clusterctl/commands/init.html#multi-tenancy. Is it ok to close this issue?
+1 /close
@vincepri: Closing this issue. In response to this:
User Story
As an operator/provider implementer, I would like to have clear rules around multitenancy for cluster API.
Detailed Description
AFAIK there isn't a clear definition of multitenancy for cluster API.
A common use case for multitenancy is to use many different credentials with an infrastructure provider; another use case is to isolate the management plane for each guest cluster/set of guest clusters.
Possible solutions to address such use cases are:
1. Use a separate management cluster for each tenant.
2. Deploy several instances of the same provider in a single management cluster, each in its own namespace.
While the major drawback of 1 is the additional cost, 2 also comes with some complications, because each provider instance is a mix of global resources and namespaced resources and, in this scenario, the global resources (e.g. CRDs) will be shared among all the instances of the same provider.
Having a set of resources shared among several provider instances implies a set of constraints/limitations on the lifecycle of the management cluster:
Goals
Non-Goals
Anything else you would like to add:
/kind feature
/cc @timothysc @frapposelli