keps/sig-cloud-provider/0013-build-deploy-ccm.md
11 additions & 11 deletions
@@ -50,7 +50,7 @@ status: provisional
 -[Migrating to a CCM Build](#migrating-to-a-ccm-build)
 -[Flags, Service Accounts, etc](#flags,-service-accounts,-etc)
 -[Deployment Scripts](#deployment-scripts)
--[KubeAdm](#kubeadm)
+-[kubeadm](#kubeadm)
 -[CI, TestGrid and Other Testing Issues](#ci,-testgrid-and-other-testing-issues)
 -[Alternatives](#alternatives)
 -[Staging Alternatives](#staging-alternatives)
@@ -102,7 +102,7 @@ changes in to an official build. The relevant dependencies require changes in th

 -[Kube Controller Manager](https://kubernetes.io/docs/reference/generated/kube-controller-manager/) - Track usages of [CMServer.CloudProvider](https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-controller-manager/app/options/options.go)
 -[API Server](https://kubernetes.io/docs/reference/generated/kube-apiserver/) - Track usages of [ServerRunOptions.CloudProvider](https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-apiserver/app/options/options.go)
--[Kubelet](https://kubernetes.io/docs/reference/generated/kubelet/) - Track usages of [KubeletFlags.CloudProvider](https://github.com/kubernetes/kubernetes/blob/master/cmd/kubelet/app/options/options.go)
+-[kubelet](https://kubernetes.io/docs/reference/generated/kubelet/) - Track usages of [KubeletFlags.CloudProvider](https://github.com/kubernetes/kubernetes/blob/master/cmd/kubelet/app/options/options.go)
 -[How Cloud Provider Functionality is deployed to and enabled in the cluster](https://kubernetes.io/docs/setup/pick-right-solution/#hosted-solutions) - Track usage from [PROVIDER_UTILS](https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-util.sh)

 For the cloud providers who are in repo, moving out would allow them to more quickly iterate on their solution and
@@ -162,7 +162,7 @@ not have to attempt to support all the different cloud provider configurations.
 local deployment option, which K8s/K8s should continue to support.

 Lastly there are specific flags which need to be set on various binaries for this to work. Kubernetes API-Server,
-Kubernetes Controller-Manager and Kubelet should all have the --cloud-provider flag set to external. For the Cloud
+Kubernetes Controller-Manager and kubelet should all have the --cloud-provider flag set to external. For the Cloud
 Controller-Manager the --cloud-provider flag should be set appropriately for that cloud provider. In addition we need
 to set the set of controllers running in the Kubernetes Controller-Manager. More on that later.

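As a sketch of the flag settings this hunk describes (the invocation lines are illustrative; only the `--cloud-provider` values come from the KEP text):

```
# Core components defer cloud-specific logic to an external controller:
kube-apiserver --cloud-provider=external
kube-controller-manager --cloud-provider=external
kubelet --cloud-provider=external

# The cloud-controller-manager names the concrete provider;
# "gce" here is purely an example value:
cloud-controller-manager --cloud-provider=gce
```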
@@ -185,7 +185,7 @@ The code which needs to be shared can be broken into several types.
 There is code which properly belongs in the various cloud-provider repos. The classic example of this would be the
 implementations of the cloud provider interface. (Eg. [gce](https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider/providers/gce)
 or [aws](https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider/providers/aws)) These sections of code
-need to be shared until we can remove all cloud provider dependencies from K8s/K8s. (i.e. When KAS, KCM and Kubelet no
+need to be shared until we can remove all cloud provider dependencies from K8s/K8s. (i.e. When KAS, KCM and kubelet no
 longer contain a cloud-provider flag and no longer depend on either the cloud provider interface or any cloud provider
 implementations) At that point they should be permanently moved to the individual provider repos. I would suggest that
 as long as the code is shared it be in vendor for K8s/cloud-provider. We would want to create a separate Staging repo
@@ -324,7 +324,7 @@ import (
 We then get to the issue of creating a deployable artifact for each cloud provider. There are several artifacts beyond
 the CCM which are cloud provider specific. These artifacts include things like the deployment scripts themselves, the
 contents of the add-on manager and the sidecar needed to get CSI/cloud-specific persistent storage to work. Ideally
-these would then be packaged with a version of the Kubernetes core components (KAS, KCM, Kubelet, …) which have not
+these would then be packaged with a version of the Kubernetes core components (KAS, KCM, kubelet, …) which have not
 been statically linked against the cloud provider libraries. However in the short term the K8s/K8s deployable builds
 will still need to link these binaries against all of the in-tree plugins. We need a way for the K8s/cloud-provider
 repo to consume artifacts generated by the K8s/K8s repo. For official releases these artifacts should be published to a
@@ -359,9 +359,9 @@ upgrade steps are cloud provider specific, we do provide guidance that the contr
 first (master version >= kubelet version) and that the system should be able to handle up to a 2 revision difference
 between the control plane and the kubelets. In addition with disruptions budgets etc, there will not be a consistent
 version of the kubelets, until the upgrade completes. So we need to ensure that our cloud provider/CCM builds work with
-existing clusters. This means that we need to account for things like older Kubelets having the cloud provider enabled
+existing clusters. This means that we need to account for things like older kubelets having the cloud provider enabled
 and using it for things like direct volume mount/unmount, IP discovery, … We can even expect that scaling events such
-as increases in the size of a replica set may cause us to deploy old Kubelet images which directly use the cloud
+as increases in the size of a replica set may cause us to deploy old kubelet images which directly use the cloud
 provider implementation in clusters which are controlled by a CCM build. We need to make sure we test these sort of
 scenarios and ensure they work (get their IP, can mount cloud specific volume types, …)

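The skew guidance in this hunk (control plane upgrades first, at most a 2 minor-revision difference) can be sketched as a small predicate. This helper is illustrative only and appears nowhere in the KEP:

```go
package main

import "fmt"

// skewOK sketches the upgrade guidance quoted above: the control plane
// upgrades first (master minor >= kubelet minor), and the system should
// tolerate at most a 2 minor-revision difference between them.
// Illustrative only; not code from the KEP.
func skewOK(masterMinor, kubeletMinor int) bool {
	return kubeletMinor <= masterMinor && masterMinor-kubeletMinor <= 2
}

func main() {
	fmt.Println(skewOK(10, 8)) // within 2 revisions: true
	fmt.Println(skewOK(10, 7)) // kubelet 3 revisions old: false
	fmt.Println(skewOK(9, 10)) // kubelet newer than master: false
}
```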
@@ -423,11 +423,11 @@ and flexibility in K8s/K8s is not needed in K8s/Cloud-provider. It is also worth
 [CCM Repo Requirements](#repository-requirements) for some suggestions on common K8s/Cloud-provider
 layout suggestions. These include an installer directory for custom installer code.

-#### KubeAdm[WIP]
+#### kubeadm[WIP]

-KubeAdm is a tool for creating clusters. For reference see [creating cluster with KubeAdm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/).
-Need to determine how KubeAdm and K8s/Cloud-providers should interact. More planning clearly needs to be done on
-cloud-provider and KubeAdm planning.
+kubeadm is a tool for creating clusters. For reference see [creating cluster with kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/).
+Need to determine how kubeadm and K8s/Cloud-providers should interact. More planning clearly needs to be done on
+cloud-provider and kubeadm planning.
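The kubeadm interaction is explicitly left undetermined in this hunk. For orientation only, one common way kubeadm clusters later came to wire up an external provider is via component `extraArgs`; the `kubeadm.k8s.io/v1beta3` API group/version shown here postdates this KEP and is an assumption, not something the KEP prescribes:

```
# Assumption: kubeadm's v1beta3 config API; the KEP predates it and
# does not specify this shape.
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: external
controllerManager:
  extraArgs:
    cloud-provider: external
```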