Umbrella: Breaking apart clusterctl #1065
Agree on the clusters and machines; CRDs are a little different, though. There are a few high-level features that I think clusterctl could serve:
e.g. Install could probably be solved by hosting a concatenated YAML if it were the only use case
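To make the "install via concatenated YAML" idea concrete, a hedged sketch; the URL below is purely illustrative, no real release artifact is named in this thread:

```shell
# Hypothetical: if "install" were the only use case, hosting one
# concatenated components manifest per release would reduce clusterctl
# to a single command. The URL is illustrative, not a real artifact.
kubectl apply -f https://example.com/cluster-api/releases/v0.x/cluster-api-components.yaml
```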
1 & 3 could be solved with a kubectl plugin, or possibly an AddOn operator pattern. 2 is a non-starter IMO; if you want that, you should set up your own logging service.
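The kubectl-plugin route mentioned above is lightweight: a plugin is just an executable named `kubectl-<name>` on the `PATH`. A minimal sketch; the plugin name `kubectl-capi` and its subcommands are hypothetical, not anything shipped by the project:

```shell
# A kubectl plugin is any executable on PATH named kubectl-<name>;
# kubectl then dispatches `kubectl capi ...` to it. /tmp is used here
# only for illustration; a real install would go to ~/bin or similar.
cat > /tmp/kubectl-capi <<'EOF'
#!/bin/sh
# Hypothetical plugin entry point: `kubectl capi <subcommand>`.
case "$1" in
  version) echo "kubectl-capi v0.0.1 (sketch)" ;;
  *)       echo "usage: kubectl capi version" >&2; exit 1 ;;
esac
EOF
chmod +x /tmp/kubectl-capi
/tmp/kubectl-capi version   # -> kubectl-capi v0.0.1 (sketch)
```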
Pivot is complicated for a few reasons; I'm not sure it could be simplified outside of an external tool.
The big reason for this was to simplify the user experience and avoid needing to run manual pre-steps.
+1 to this approach going forward; it didn't exist for the first iteration.
Overall, I think kubectl plugins are a great way for us to move going forward, when ...
Perhaps we need to reframe the question: a) Should CAPI and CAPI providers depend on/require ...
I think the UX with kind is good enough, if not great |
I agree here 100%; there should definitely be workflows that exist without ...
I think there may be multiple levels here. I think the project benefits from having a bootstrapping tool to go from 0 -> cluster-api, but how much of a friendly UX we can accomplish while keeping the relatively un-opinionated nature of cluster-api proper is debatable. For a proper user-friendly installer UX, I definitely think that is a separate project.
I'm thinking a Cluster API Operator to manage the lifecycle across all providers. This component can install the CRDs and controllers and turn down components if needed. It could be the one to do the final killing, but regardless we could. The operator also solves part of our distribution problem.
I definitely like the idea of an operator, but we also need to remember that deploying the operator presents a chicken-and-egg situation where an existing cluster needs to be present.
I'm totally cool with having a bootstrap cluster be a precondition.
Same - ...
The problem with kind as the bootstrapping cluster is that you still need something to handle pivoting. Or would we expect the operator to be able to solve the pivoting problem for us?
My thought is that the operator would do the pivot.
Here's my POV:
@detiber I don't want to hijack the discussion here, but since you mention the concurrency problems of moving from bootstrapper to pivot: as long as one can reach the other (say, the bootstrapper can reach the pivoted cluster), the controller(s) could be smart and rely on the same locks to guarantee there are no concurrency issues. Something along the following lines:
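The comment's original snippet was not preserved in this thread. As a stand-in, here is a local analogy of the shared-lock idea using a mkdir-based mutex; it is purely illustrative, and real controllers would more likely coordinate through a Lease object or an annotation on the Cluster resource:

```shell
# Illustrative only: two "controllers" coordinating through a single lock,
# analogous to bootstrap and pivoted controllers sharing one lock so only
# one of them acts on a given Cluster at a time. mkdir is atomic, so it
# serves as a crude mutex.
lock=/tmp/cluster-example.lock
if mkdir "$lock" 2>/dev/null; then
  echo "bootstrap controller: acquired lock, reconciling"
  # ... reconcile the Cluster here ...
  rmdir "$lock"
else
  echo "pivoted controller: lock held, skipping this sync"
fi
```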
Does it make sense? If yes, let's take it out of this issue, maybe?
One thing to keep in mind in this discussion is that currently ...
I like the idea of a ...
I don't see a need for clusterctl. I've been using CAPA entirely without clusterctl. I create a management cluster with kind (one command) and then use kubectl to deploy the CAPI/CAPA bits, and the workload cluster(s). I do use a CAPA-maintained helper shell script to generate the manifests, though. With the infra/bootstrap provider split, I can see each infra and bootstrap provider shipping its own tool to help generate manifests. There might be room for a CAPI-maintained tool that helps generate the now provider-agnostic Cluster and Machine manifests. Haven't thought this through, but I suspect kubectl plugins using kustomize would do the job nicely.
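The clusterctl-free workflow described above can be sketched as a few commands; the cluster name and manifest file names below are illustrative assumptions, not files named in this thread:

```shell
# Sketch of the kind + kubectl workflow, assuming the component manifests
# have already been generated/downloaded; all file names are illustrative.
kind create cluster --name mgmt                  # 1. management cluster
kubectl apply -f cluster-api-components.yaml     # 2. CAPI CRDs + controllers
kubectl apply -f capa-components.yaml            #    CAPA (AWS) provider
kubectl apply -f my-workload-cluster.yaml        # 3. workload Cluster/Machines
```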
@timothysc @ncdc |
/close
@liztio: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Describe the solution you'd like
As an old-timer in kubernetes, I find clusterctl to be weird... It's performing operations which "could be preconditions", and also performing operations which IMO should be part of the .spec of the objects. This issue tries to break down some of the details; feedback is solicited.
- Building a bootstrap cluster ... -> `kubectl apply`
- Create / Delete (CRDs, Cluster, Machines, ...) -> `kubectl apply` OR `kubectl delete`, possibly with a kubectl plugin to make it feel more 1st classed.
- Pivot -> `cluster.spec` that is part of a state machine of the cluster object.
- Kubeconfig -> ...
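The Create / Delete point above maps directly onto plain kubectl. A hedged sketch with a minimal, deliberately incomplete Cluster object under the v1alpha2 API group; the object name and omitted spec fields are illustrative:

```shell
# Illustrative, not deployable: a skeletal v1alpha2 Cluster manifest and
# the plain-kubectl create/delete flow described in the breakdown above.
cat <<'EOF' > /tmp/cluster.yaml
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: example
# spec (networking, infrastructureRef, ...) omitted for brevity
EOF
grep '^kind:' /tmp/cluster.yaml
# kubectl apply -f /tmp/cluster.yaml    # create
# kubectl delete -f /tmp/cluster.yaml   # delete
```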
What I'm really struggling with is... do we really need this tool? There are portions of clusterctl that could move into a client library, as aggregate utility functions for common operations that providers could leverage. I also think a kubectl plugin might be generally useful to treat Cluster API objects as 1st-classed resources, but other than that...? In a v1alpha2 world, what workflows are missing that clusterctl provides?
/kind feature
/cc @ncdc @vincepri @detiber