`docs/proposals/20191017-kubeadm-based-control-plane.md` (+9 −11)
```diff
@@ -1,5 +1,5 @@
 ---
-title: Control Plane Management
+title: Kubeadm Based Control Plane Management
 authors:
   - "@detiber"
   - "@chuckha"
```
```diff
@@ -17,11 +17,11 @@ reviewers:
   - "@hardikdr"
   - "@sbueringer"
 creation-date: 2019-10-17
-last-updated: 2019-10-30
+last-updated: 2019-11-08
 status: implementable
 ---

-# Control Plane Management
+# Kubeadm Based Control Plane Management

 ## Table of Contents
```
```diff
@@ -88,12 +88,9 @@ and centralize the logic in Cluster API.
 ### Goals

 - To establish new resource types for control plane management
-- To support single node and multiple control plane instances, with the requirement that the infrastructure provider supports some type of a stable endpoint for the API Server (Load Balancer, VIP, etc).
+- To support single node and multiple node control plane instances, with the requirement that the infrastructure provider supports some type of a stable endpoint for the API Server (Load Balancer, VIP, etc).
 - To enable declarative orchestrated control plane upgrades
 - To provide a default machine-based implementation using kubeadm
-
-#### Additional goals of the default kubeadm machine-based Implementation
-
 - To provide a kubeadm-based implementation that is infrastructure provider agnostic
 - To enable declarative orchestrated replacement of control plane machines, such as to rollout an OS-level CVE fix.
 - To manage a kubeadm-based, "stacked etcd" control plane
```
```diff
@@ -107,9 +104,10 @@ and centralize the logic in Cluster API.
 Non-Goals listed in this document are intended to scope bound the current v1alpha3 implementation and are subject to change based on user feedback over time.

 - To manage non-machine based topologies, e.g.
-  - Pod based control planes; these can be managed via standard kubernetes objects.
-  - Non-node control planes (i.e. EKS, GKE, AKS); these can be managed via the respective APIs.
-- To manage control plane deployments across failure domains, followup work for this will be tracked on [this issue](https://github.com/kubernetes-sigs/cluster-api/issues/1647).
+  - Pod based control planes.
+  - Non-node control planes (i.e. EKS, GKE, AKS).
+- To manage control plane deployments across failure domains, follow up work for this will be tracked on [this issue](https://github.com/kubernetes-sigs/cluster-api/issues/1647).
+- To define a mechanism for providing a stable API endpoint for providers that do not currently have one, follow up work for this will be tracked on [this issue](https://github.com/kubernetes-sigs/cluster-api/issues/1687)
 - To manage CA certificates outside of what is provided by Kubeadm bootstrapping
 - To manage etcd clusters in any topology other than stacked etcd (externally managed etcd clusters can still be leveraged).
 - To address disaster recovery constraints, e.g. restoring a control plane from 0 replicas using a filesystem or volume snapshot copy of data persisted in etcd.
```
```diff
@@ -142,7 +140,7 @@ Non-Goals listed in this document are intended to scope bound the current v1alph

 1. Based on the function of kubeadm, the control plane provider must be able to scale the number of replicas of a control plane from 1 to X, meeting user stories 1 through 4.
 2. To address user story 5, the control plane provider must provide validation of the number of replicas in a control plane. Where the stacked etcd topology is used (i.e., in the default implementation), the number of replicas must be an odd number, as per [etcd best practice](https://etcd.io/docs/v3.3.12/faq/#why-an-odd-number-of-cluster-members). When external etcd is used, any number is valid.
-3. In service of user story 5, the control plane provider must also manage etcd membership via kubeadm as part of scaling down (`kubeadm` takes care of adding the new etcd member when joining).
+3. In service of user story 5, the kubeadm control plane provider must also manage etcd membership via kubeadm as part of scaling down (`kubeadm` takes care of adding the new etcd member when joining).
 4. The control plane provider should provide indicators of health to meet user story 6 and 10. This should include at least the state of etcd and information about which replicas are currently healthy or not. For the default implementation, health attributes based on artifacts kubeadm installs on the cluster may also be of interest to cluster operators.
 5. The control plane provider must be able to upgrade a control plane’s version of Kubernetes as well as updating the underlying machine image where applicable (e.g. virtual machine based infrastructure).
```
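The replica validation described in requirement 2 can be sketched in Go. This is an illustrative sketch only, not the actual Cluster API webhook code; the `validateReplicas` function name and its signature are assumptions made for this example. It encodes the rule from the diff: with stacked etcd the replica count must be a positive odd number, while external etcd accepts any positive count.

```go
package main

import (
	"errors"
	"fmt"
)

// validateReplicas is a hypothetical sketch of the control plane replica
// validation described in requirement 2: stacked etcd requires a positive
// odd replica count; external etcd accepts any positive count.
func validateReplicas(replicas int, externalEtcd bool) error {
	if replicas < 1 {
		return errors.New("replicas must be at least 1")
	}
	// Odd-number rule applies only to the stacked etcd topology,
	// per the etcd best-practice link cited above.
	if !externalEtcd && replicas%2 == 0 {
		return fmt.Errorf("stacked etcd requires an odd number of replicas, got %d", replicas)
	}
	return nil
}

func main() {
	fmt.Println(validateReplicas(3, false)) // valid: odd count with stacked etcd
	fmt.Println(validateReplicas(2, false)) // invalid: even count with stacked etcd
	fmt.Println(validateReplicas(2, true))  // valid: external etcd allows any positive count
}
```

In the real implementation this check would live in an admission webhook for the new control plane resource type, so invalid replica counts are rejected before any machines are created.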