
Define approach for multi tenancy #1631


Closed
fabriziopandini opened this issue Oct 23, 2019 · 18 comments · Fixed by #1986
Labels
area/clusterctl Issues or PRs related to clusterctl · kind/documentation Categorizes issue or PR as related to documentation. · kind/feature Categorizes issue or PR as related to a new feature. · priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@fabriziopandini
Member

User Story

As an operator/provider implementer, I would like to have clear rules around multitenancy for Cluster API.

Detailed Description

AFAIK there isn't a clear definition of multitenancy for cluster API.

A common use case for multitenancy is to use many different credentials with an infrastructure provider; another use case is to isolate the management plane for each guest cluster/set of guest clusters.

Possible solutions to address such use cases are:

  1. create different management clusters
  2. deploy many instances of one (or more) providers in the same management cluster.

While the major drawback of option 1 is additional cost, option 2 also comes with complications: each provider instance is a mix of global resources and namespaced resources, and in this scenario the global resources will be shared among all the instances of the same provider.

Having a set of resources shared among several provider instances implies a set of constraints/limitations on the lifecycle of the management cluster:

  • All the provider instances should be of the same version (or a compatible version, but currently there is no such definition that covers all the shared resources)
  • Upgrading a single provider instance might impact all the other instances of the same provider
  • Deleting a single provider instance might impact all the other instances of the same provider

Goals

  1. To clearly define multitenancy for Cluster API
  2. To clearly define best practices for multitenancy, and more specifically if, when and how many instances of one provider should be supported in a single management cluster
  3. To embed best practices for multitenancy in clusterctl

Non-Goals

Anything else you would like to add:

/kind feature

/cc @timothysc @frapposelli

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Oct 23, 2019
@ncdc ncdc added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. kind/documentation Categorizes issue or PR as related to documentation. labels Oct 23, 2019
@ncdc ncdc added this to the v0.3.0 milestone Oct 23, 2019
@ncdc ncdc added the area/clusterctl Issues or PRs related to clusterctl label Oct 23, 2019
@kfox1111

The option for complete decoupling would be advantageous. Say I already provide two services to my users: 1. a multitenant-capable Kubernetes, and 2. onsite private cloud / public cloud access.
The users should be able to use the existing multitenant Kubernetes as the management cluster to provision clusters in service number 2.
One tenant in that multitenant cluster should not be able to affect another at all, though. The only thing that should be globally defined, I think, is the CRDs.

I can see some cluster-admins wanting to manage all the plumbing though and just let the users manage the clusters themselves without managing the controllers.

Perhaps providers need to be available in both ClusterProvider and Provider forms?

@fabriziopandini
Member Author

fabriziopandini commented Oct 24, 2019

@ncdc, as per yesterday's discussion, below are the types of resources that you can find in CAPA as of today:

Global Resources
1 Namespace
2 CustomResourceDefinition
3 ClusterRole
4 ClusterRoleBinding

Namespaced Resources
5 Role
6 RoleBinding
7 Secret
8 Service
9 Deployment

Assuming I have a cluster with:

  • ns1, CAPA v0.4.1
  • ns2, CAPA v0.4.1

An upgrade of the CAPA instance in ns1 from v0.4.1 to v0.X.Y might create a problem for the CAPA instance in ns2, because it is going to override 2, 3 and 4 (I don't think the Namespace, instead, would be a problem).

@fabriziopandini
Member Author

fabriziopandini commented Oct 24, 2019

One thing that IMO can be done to reduce the need for multitenancy is to improve how infrastructure provider credentials are managed, e.g.:

Instead of reading credentials from the namespace where the provider is running, the infrastructure provider can:

  1. first read credentials from the namespace where the cluster (the object it is reconciling) is located
  2. if credentials do not exist there, fall back to the default credentials in the namespace where the provider is running

This allows the use of many credentials with a single instance of the infrastructure provider, by duplicating only a Secret instead of the whole provider. A sketch of this lookup is below.
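
For illustration only, a minimal sketch of the fallback lookup described above, assuming a controller-runtime client and a hypothetical Secret name `infra-credentials` (the real name and shape would be provider specific):

```go
package credentials

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// secretName is a hypothetical, provider-specific Secret name.
const secretName = "infra-credentials"

// LookupCredentials first looks for a Secret in the namespace of the Cluster
// being reconciled and, if none is found there, falls back to the default
// Secret in the namespace where the provider controller is running.
func LookupCredentials(ctx context.Context, c client.Client, clusterNamespace, providerNamespace string) (*corev1.Secret, error) {
	secret := &corev1.Secret{}

	// 1. Try the namespace of the Cluster object.
	err := c.Get(ctx, client.ObjectKey{Namespace: clusterNamespace, Name: secretName}, secret)
	if err == nil {
		return secret, nil
	}
	if !apierrors.IsNotFound(err) {
		return nil, err
	}

	// 2. Fall back to the provider's own namespace.
	if err := c.Get(ctx, client.ObjectKey{Namespace: providerNamespace, Name: secretName}, secret); err != nil {
		return nil, err
	}
	return secret, nil
}
```

The fallback keeps today's behaviour as the default, so a tenant only needs to create a Secret in the cluster's namespace to opt into its own credentials.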

@detiber
Member

detiber commented Oct 24, 2019

@fabriziopandini currently I believe all providers are using the default lookup behaviors of the cloud SDKs they use for handling credentials, changing that behavior could have varied implications.

In some cases this could reduce the security of credentials handling, and it could also have a potential negative impact on API quotas (such as in the case of AWS, where we use cached sessions to avoid the API throttling that happens when initiating a session).

@timothysc
Member

There may be a middle ground where, instead of the operator deploying controllers per namespace, the main controller is responsible for forking when it detects multi-tenancy and collapsing when tenants are deleted. This eases the cognitive burden on operators to maintain and deploy, while preserving the existing code.

@joonas

joonas commented Dec 6, 2019

related to kubernetes-sigs/cluster-api-provider-vsphere#528

@vukg

vukg commented Dec 10, 2019

@fabriziopandini it would also be good if, besides

ii. To clearly define best practices for multitenancy, and most specifically if, when and how many instances of one provider should be supported in a single management cluster

the additional goal would be to define best practices for if, when and how many different providers should be supported in a single management cluster.

Btw, we are working around the current limitation by setting up a management cluster which is then used to create multiple management clusters, each for one instance of a provider. So, say we want to create workload clusters on 2 vSphere platforms, 2 bare metal pools and AWS: we would have 5 management clusters, one for each. Cost can be minimized by creating those management clusters as single nodes. The solution is not elegant though, and we would much rather use a single management cluster which hosts multiple providers that are multi-tenant (accepting multiple credentials).

@ncdc
Contributor

ncdc commented Dec 20, 2019

I think what we can do here is:

  1. Document that we can't make any breaking changes within a released API version (additions are generally OK)
  2. Document when it's ok to make things more restrictive vs when it's not (e.g. validation)
  3. Define a common flag/config field for having a controller watch a single namespace

@ncdc
Contributor

ncdc commented Dec 20, 2019

And figuring out how to have a single controller operate with multiple distinct sets of credentials is optional and most likely provider-specific.

@vukg

vukg commented Dec 20, 2019

  3. Define a common flag/config field for having a controller watch a single namespace

@ncdc if this means that we would be able to namespace the controllers so as to have e.g. ns-CAPx-1 and ns-CAPx-2 in the management cluster, targeting different infrastructure providers of the same kind, each with its own CAPx custom resources, it would cover our use case kubernetes-sigs/cluster-api-provider-vsphere#528 (comment) very well. It would be great to see something in that direction in v1a3. Did I get the idea right?

@ncdc
Contributor

ncdc commented Dec 20, 2019

@vukg I'm not sure I entirely follow what you wrote - could you give a more detailed example, please?

From your linked issue:

We would like to avoid having to cascade control clusters with CAPV in them just to get possibility to target multiple vCenters.

My suggestions above don't address this. My "common flag/config field for having a controller watch a single namespace" is essentially the --watch-namespace flag that we currently have in CAPI, CAPA, CAPV, etc.
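
For context, this is roughly how such a flag is wired into a controller-runtime manager. This is an illustrative sketch, not the exact code of any of those providers, and it assumes a controller-runtime version from this era where manager.Options still exposes a Namespace field:

```go
package main

import (
	"flag"
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	var watchNamespace string
	flag.StringVar(&watchNamespace, "watch-namespace", "",
		"Namespace that the controller watches to reconcile objects. If empty, all namespaces are watched.")
	flag.Parse()

	// Restricting the manager's cache to a single namespace confines this
	// provider instance to the objects living in that namespace.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Namespace: watchNamespace,
	})
	if err != nil {
		os.Exit(1)
	}

	// Reconcilers would be registered with mgr here.

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}
```

Two instances of the same provider, each started with a different --watch-namespace value, would then only reconcile objects in their own namespace.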

@vukg

vukg commented Dec 20, 2019

@ncdc I got it wrong then. The example is actually in my linked issue: CAPV in v1a2 stores credentials for vSphere in a secret managed by CAPI. This binds one management cluster to one vSphere instance for the creation of workload clusters. What we need is for one CAPI management cluster to be able to talk to multiple different vSpheres at the same time. This can be achieved by making CAPV accept a list of credentials (e.g. vSphere 1, vSphere 2, ...) or by having a mechanism to deploy multiple CAPV infrastructure providers in a single management cluster, isolated in their own namespaces with their own secrets containing vSphere credentials.

Hope this gives a bit of clarity.

@vukg

vukg commented Dec 21, 2019

The second multi-tenancy example is actually using a single CAPI management cluster with multiple CAPI infrastructure providers, including multiple instances of the same provider. In our concrete case we would need to have CAPA, CAPO and several different (namespaced) instances of CAPV and CAPBM in one CAPI management cluster. We would like that in order to have a more elegant, efficient and simplified management plane setup for our Kubernetes engine.

@fabriziopandini
Member Author

fabriziopandini commented Jan 2, 2020

@ncdc

Document ...

I'm OK with documenting requirements if we can get #1986 to merge, because this PR de facto makes sure that the different instances of the providers share only CRDs, and CRD changes are already strictly regulated.

Define a common flag/config field for having a controller watch a single namespace

Ok!

figuring out how to have a single controller operate with multiple distinct sets of credentials is optional and most likely provider specific

OK; if #1986 is merged, this becomes a non-issue from a clusterctl perspective as well.

@vincepri
Member

vincepri commented Jan 3, 2020

Reopening to finish up the documentation part

@vincepri vincepri reopened this Jan 3, 2020
@vincepri vincepri self-assigned this Jan 3, 2020
@fabriziopandini
Member Author

@vincepri this is now documented here https://master.cluster-api.sigs.k8s.io/clusterctl/commands/init.html#multi-tenancy. Is it ok to close this issue?

@vincepri
Member

+1

/close

@k8s-ci-robot
Contributor

@vincepri: Closing this issue.

In response to this:

+1

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
