add Nutanix support #120

Open · wants to merge 1 commit into base: main
34 changes: 34 additions & 0 deletions README.md
@@ -256,6 +256,40 @@ Then, deploy the cluster with:
microk8s kubectl apply -f cluster-gcp.yaml
```

#### Nutanix

> *NOTE*: Ensure that you have properly deployed the Nutanix infrastructure provider before running the commands below. See [Initialization for common providers](https://cluster-api.sigs.k8s.io/user/quick-start.html#initialization-for-common-providers).

Before generating a cluster template, you need to add a VM image for the cluster to use. The MicroK8s provider works with any stock Ubuntu image; use an Ubuntu 22.04 LTS cloud image.

From Prism Central, create a new image with:
```bash
nuclei image.create name=ubuntu-22.04 image_type=DISK_IMAGE source_uri=https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
```

Make a note of the image name, `ubuntu-22.04`; it is fed into the cluster template below.
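The image upload runs asynchronously, so it is worth confirming the image is available before generating the template. Assuming your `nuclei` build follows its usual `entity.verb` pattern, something like the following should show the image once it is ready:

```shell
# Check that the uploaded image appears before referencing it in the template
nuclei image.list | grep ubuntu-22.04
```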

Generate a cluster template with:

```bash
# review list of variables needed for the cluster template
clusterctl generate cluster microk8s-nutanix --from ./templates/cluster-template-nutanix.yaml --list-variables

# set environment variables (edit the file as needed before sourcing it)
source ./templates/cluster-template-nutanix.rc

# generate the cluster
clusterctl generate cluster microk8s-nutanix --from ./templates/cluster-template-nutanix.yaml > cluster-nutanix.yaml
```
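The templates lean on shell-style defaulting (e.g. `${NUTANIX_PORT=9440}`, `${WORKER_MACHINE_COUNT:=1}`): clusterctl substitutes the exported value when one is set and otherwise falls back to the default after the `=`. A minimal bash sketch of the same expansion rule, using hypothetical `DEMO_*` variables:

```shell
#!/usr/bin/env bash
# Illustrates the ${VAR:=default} rule the templates rely on:
# unset variable -> default applied; exported value -> kept.
unset DEMO_PORT
echo "port=${DEMO_PORT:=9440}"        # unset, so the default 9440 is used

export DEMO_REPLICAS=3
echo "replicas=${DEMO_REPLICAS:=1}"   # already set, so 3 wins over the default 1
```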

Then, deploy the cluster with:

```bash
microk8s kubectl apply -f cluster-nutanix.yaml
```
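Provisioning takes a few minutes. You can follow progress with the standard Cluster API tooling; the cluster name below matches the one generated above:

```shell
# Watch the cluster and its machines come up
microk8s kubectl get clusters,machines
clusterctl describe cluster microk8s-nutanix

# Once the control plane is ready, fetch the workload cluster's kubeconfig
clusterctl get kubeconfig microk8s-nutanix > kubeconfig-nutanix
```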

You can also use the `cluster-template-nutanix-user.yaml` template to inject a user and an SSH key into the cluster nodes, allowing you to connect to them directly.
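With the user template applied, the nodes can be reached over SSH using the configured user and key. `capi` is the default user from the rc file, and `<node-ip>` is a placeholder for an address taken from the machine listing:

```shell
# Node addresses appear in: microk8s kubectl get machines -o wide
ssh capi@<node-ip>
```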

## Development

The two MicroK8s CAPI providers, the bootstrap and control plane, serve distinct purposes:
282 changes: 282 additions & 0 deletions templates/cluster-template-nutanix-user.yaml
@@ -0,0 +1,282 @@
# Based on: https://github.com/nutanix-cloud-native/cluster-api-provider-nutanix/releases/download/v1.5.3/cluster-template.yaml
---
apiVersion: v1
binaryData:
ca.crt: ${NUTANIX_ADDITIONAL_TRUST_BUNDLE=""}
kind: ConfigMap
metadata:
name: ${CLUSTER_NAME}-pc-trusted-ca-bundle
namespace: ${NAMESPACE}
---
apiVersion: v1
kind: Secret
metadata:
name: ${CLUSTER_NAME}
namespace: ${NAMESPACE}
stringData:
credentials: |
[
{
"type": "basic_auth",
"data": {
"prismCentral":{
"username": "${NUTANIX_USER}",
"password": "${NUTANIX_PASSWORD}"
}
}
}
]
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
labels:
cluster.x-k8s.io/cluster-name: ${CLUSTER_NAME}
name: ${CLUSTER_NAME}
namespace: ${NAMESPACE}
spec:
clusterNetwork:
pods:
cidrBlocks:
- 172.20.0.0/16
serviceDomain: cluster.local
services:
cidrBlocks:
- 172.19.0.0/16
controlPlaneRef:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: MicroK8sControlPlane
name: ${CLUSTER_NAME}-kcp
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixCluster
name: ${CLUSTER_NAME}
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
labels:
cluster.x-k8s.io/cluster-name: ${CLUSTER_NAME}
name: ${CLUSTER_NAME}-md-0
namespace: ${NAMESPACE}
spec:
clusterName: ${CLUSTER_NAME}
replicas: ${WORKER_MACHINE_COUNT:=1}
selector:
matchLabels: {}
template:
metadata:
labels:
cluster.x-k8s.io/cluster-name: ${CLUSTER_NAME}
spec:
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: MicroK8sConfigTemplate
name: ${CLUSTER_NAME}-kcfg-0
clusterName: ${CLUSTER_NAME}
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixMachineTemplate
name: ${CLUSTER_NAME}-mt-0
version: ${KUBERNETES_VERSION}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixCluster
metadata:
name: ${CLUSTER_NAME}
namespace: ${NAMESPACE}
spec:
controlPlaneEndpoint:
host: ${CONTROL_PLANE_ENDPOINT_IP}
port: ${CONTROL_PLANE_ENDPOINT_PORT=6443}
prismCentral:
additionalTrustBundle:
kind: ConfigMap
name: ${CLUSTER_NAME}-pc-trusted-ca-bundle
address: ${NUTANIX_ENDPOINT}
credentialRef:
kind: Secret
name: ${CLUSTER_NAME}
insecure: ${NUTANIX_INSECURE=false}
port: ${NUTANIX_PORT=9440}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixMachineTemplate
metadata:
name: ${CLUSTER_NAME}-mt-0
namespace: ${NAMESPACE}
spec:
template:
spec:
bootType: ${NUTANIX_MACHINE_BOOT_TYPE=legacy}
cluster:
name: ${NUTANIX_PRISM_ELEMENT_CLUSTER_NAME}
type: name
image:
name: ${NUTANIX_MACHINE_TEMPLATE_IMAGE_NAME}
type: name
memorySize: ${NUTANIX_MACHINE_MEMORY_SIZE=4Gi}
providerID: nutanix://${CLUSTER_NAME}-m1
subnet:
- name: ${NUTANIX_SUBNET_NAME}
type: name
systemDiskSize: ${NUTANIX_SYSTEMDISK_SIZE=40Gi}
vcpuSockets: ${NUTANIX_MACHINE_VCPU_SOCKET=2}
vcpusPerSocket: ${NUTANIX_MACHINE_VCPU_PER_SOCKET=1}
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: MicroK8sControlPlane
metadata:
name: ${CLUSTER_NAME}-kcp
spec:
controlPlaneConfig:
initConfiguration:
joinTokenTTLInSecs: 900000
addons:
- dns
- ingress
httpProxy: "${CONTAINERD_HTTP_PROXY:=}"
httpsProxy: "${CONTAINERD_HTTPS_PROXY:=}"
noProxy: "${CONTAINERD_NO_PROXY:=}"
riskLevel: "${SNAP_RISKLEVEL:=stable}"
confinement: "${SNAP_CONFINEMENT:=classic}"
preRunCommands:
- hostnamectl set-hostname "{{ ds.meta_data.hostname }}"
- "useradd -m ${NUTANIX_USER:=capi} -s /bin/bash"
- "mkdir -p /home/${NUTANIX_USER:=capi}/.ssh"
- "echo '${NUTANIX_SSH_AUTHORIZED_KEY}' > /home/${NUTANIX_USER:=capi}/.ssh/authorized_keys"
- "chown -R ${NUTANIX_USER:=capi}:${NUTANIX_USER:=capi} /home/${NUTANIX_USER:=capi}/.ssh"
- "chmod 700 /home/${NUTANIX_USER:=capi}/.ssh"
- "chmod 600 /home/${NUTANIX_USER:=capi}/.ssh/authorized_keys"
- "echo '${NUTANIX_USER:=capi} ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/${NUTANIX_USER:=capi}"
postRunCommands:
- mkdir -p /var/snap/microk8s/current/var/staticpod
- cp /tmp/kube-vip.yaml /var/snap/microk8s/current/var/staticpod/
extraKubeletArgs:
- "--pod-manifest-path=/var/snap/microk8s/current/var/staticpod"
extraWriteFiles:
- content: |
apiVersion: v1
kind: Pod
metadata:
name: kube-vip
namespace: kube-system
spec:
containers:
- name: kube-vip
image: ghcr.io/kube-vip/kube-vip:v0.8.9
imagePullPolicy: IfNotPresent
args:
- manager
env:
- name: vip_arp
value: "true"
- name: address
value: "${CONTROL_PLANE_ENDPOINT_IP}"
- name: port
value: "${CONTROL_PLANE_ENDPOINT_PORT=6443}"
- name: vip_cidr
value: "32"
- name: vip_nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: vip_ddns
value: "false"
- name: vip_leaderelection
value: "true"
- name: vip_leaseduration
value: "15"
- name: vip_renewdeadline
value: "10"
- name: vip_retryperiod
value: "2"
- name: svc_enable
value: "${KUBEVIP_SVC_ENABLE=false}"
- name: lb_enable
value: "${KUBEVIP_LB_ENABLE=false}"
- name: enableServicesElection
value: "${KUBEVIP_SVC_ELECTION=false}"
securityContext:
capabilities:
add:
- NET_ADMIN
- SYS_TIME
- NET_RAW
volumeMounts:
- mountPath: /etc/kubernetes/admin.conf
name: kubeconfig
resources: {}
hostAliases:
- hostnames:
- kubernetes
ip: 127.0.0.1
hostNetwork: true
volumes:
- name: kubeconfig
hostPath:
type: File
path: /var/snap/microk8s/current/credentials/client.config
status: {}
owner: root:root
path: /tmp/kube-vip.yaml
permissions: "0600"
machineTemplate:
infrastructureTemplate:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixMachineTemplate
name: "${CLUSTER_NAME}-mt-0"
replicas: ${CONTROL_PLANE_MACHINE_COUNT:=1}
version: "${KUBERNETES_VERSION}"
upgradeStrategy: "${UPGRADE_STRATEGY:=SmartUpgrade}"
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: MicroK8sConfigTemplate
metadata:
name: "${CLUSTER_NAME}-kcfg-0"
spec:
template:
spec:
initConfiguration:
httpProxy: "${CONTAINERD_HTTP_PROXY:=}"
httpsProxy: "${CONTAINERD_HTTPS_PROXY:=}"
noProxy: "${CONTAINERD_NO_PROXY:=}"
riskLevel: "${SNAP_RISKLEVEL:=stable}"
confinement: "${SNAP_CONFINEMENT:=classic}"
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck
metadata:
name: ${CLUSTER_NAME}-mhc
namespace: ${NAMESPACE}
spec:
clusterName: ${CLUSTER_NAME}
maxUnhealthy: 40%
nodeStartupTimeout: 10m0s
selector:
matchLabels:
cluster.x-k8s.io/cluster-name: ${CLUSTER_NAME}
unhealthyConditions:
- status: "False"
timeout: 5m0s
type: Ready
- status: Unknown
timeout: 5m0s
type: Ready
- status: "True"
timeout: 5m0s
type: MemoryPressure
- status: "True"
timeout: 5m0s
type: DiskPressure
- status: "True"
timeout: 5m0s
type: PIDPressure
- status: "True"
timeout: 5m0s
type: NetworkUnavailable
40 changes: 40 additions & 0 deletions templates/cluster-template-nutanix.rc
@@ -0,0 +1,40 @@
# Kubernetes cluster configuration
export KUBERNETES_VERSION=v1.31.6
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=1

# Nutanix endpoint configuration.
export NUTANIX_ENDPOINT=
export NUTANIX_INSECURE=false
export NUTANIX_PRISM_ELEMENT_CLUSTER_NAME=
export NUTANIX_USER=admin
export NUTANIX_PASSWORD=Nutanix/4u

# Nutanix machine configuration
export NUTANIX_MACHINE_TEMPLATE_IMAGE_NAME=ubuntu-22.04
export NUTANIX_MACHINE_MEMORY_SIZE=4Gi
export NUTANIX_MACHINE_VCPU_SOCKET=2
export NUTANIX_MACHINE_VCPU_PER_SOCKET=1
export NUTANIX_SYSTEMDISK_SIZE=40Gi
export NUTANIX_SUBNET_NAME=""

# Node user configuration. NOTE: this overrides the NUTANIX_USER exported
# above; the same variable is used both as the Prism Central username in the
# credentials Secret and as the node user in the user template, so keep the
# two uses consistent or adjust the template accordingly.
export NUTANIX_USER="capi"
export NUTANIX_SSH_AUTHORIZED_KEY=""

# kube-vip configuration
export KUBEVIP_LB_ENABLE=false
export KUBEVIP_SVC_ENABLE=false
export KUBEVIP_SVC_ELECTION=false

# (optional) Containerd HTTP proxy configuration. Leave empty if not required.
export CONTAINERD_HTTP_PROXY=""
export CONTAINERD_HTTPS_PROXY=""
export CONTAINERD_NO_PROXY=""

# (optional) Snap risk level and confinement
export SNAP_RISKLEVEL="stable"
export SNAP_CONFINEMENT="classic"

# Upgrade configuration
export UPGRADE_STRATEGY=SmartUpgrade