You can use machine management to flexibly work with underlying infrastructure such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), {rh-openstack-first}, and VMware vSphere to manage the {product-title} cluster. You can control the cluster and perform autoscaling, such as scaling the cluster up and down based on specific workload policies.
It is important to have a cluster that adapts to changing workloads. The {product-title} cluster can horizontally scale up and down when the load increases or decreases.
Machine management is implemented as a custom resource definition (CRD). A CRD object defines a new, unique object Kind in the cluster and enables the Kubernetes API server to handle the object's entire lifecycle.
The Machine API Operator provisions the following resources:
- MachineSet
- Machine
- ClusterAutoscaler
- MachineAutoscaler
- MachineHealthCheck
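For reference, a compute machine set is an ordinary Kubernetes custom resource. The following is a minimal, illustrative sketch of a MachineSet object, assuming the machine.openshift.io/v1beta1 API group used by the Machine API Operator. The name and the <cluster_id> placeholder are hypothetical, and the provider-specific details under providerSpec are omitted because they differ for each cloud provider:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <cluster_id>-worker-a          # hypothetical name
  namespace: openshift-machine-api
spec:
  replicas: 2                          # adjust this value to manually scale the compute machine set
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <cluster_id>
      machine.openshift.io/cluster-api-machineset: <cluster_id>-worker-a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <cluster_id>
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: <cluster_id>-worker-a
    spec:
      providerSpec:
        value: {}                      # provider-specific configuration (AWS, Azure, GCP, and so on) goes here
```

Each machine that the set creates is also a custom resource, so you can inspect and manage machines with the same tooling that you use for other Kubernetes objects.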
As a cluster administrator, you can perform the following actions:
- Create a compute machine set for a cloud provider, such as AWS, Azure, GCP, {rh-openstack-first}, or VMware vSphere.
- Create a compute machine set for a bare metal deployment.
- Manually scale a compute machine set by adding or removing a machine from the compute machine set.
- Update compute machine configurations through the MachineSet YAML configuration file.
- Delete a machine.
- Configure and deploy a machine health check to automatically fix damaged machines in a machine pool.
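As an illustration, a machine health check is itself a MachineHealthCheck resource: you describe which machines to watch with a selector, which node conditions count as unhealthy, and how long to wait before remediating. The name, timeouts, and maxUnhealthy value in the following sketch are placeholders, not recommended values:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example-worker-healthcheck     # hypothetical name
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: worker   # watch only worker machines
  unhealthyConditions:                 # node conditions that mark a machine as unhealthy
  - type: Ready
    status: "False"
    timeout: 300s
  - type: Ready
    status: Unknown
    timeout: 300s
  maxUnhealthy: 40%                    # pause remediation if too many machines are unhealthy at once
```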
As a cluster administrator, you can perform the following actions:
- Update your control plane configuration with a control plane machine set on supported cloud providers.
- Configure and deploy a machine health check to automatically recover unhealthy control plane machines.
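The control plane configuration is managed through a ControlPlaneMachineSet resource. The following sketch assumes the resource shape used in recent {product-title} releases; the <cluster_id> placeholder is hypothetical and the provider-specific fields are omitted:

```yaml
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster                        # the control plane machine set is a singleton named "cluster"
  namespace: openshift-machine-api
spec:
  replicas: 3
  state: Active                        # Active lets the Operator manage the control plane machines
  strategy:
    type: RollingUpdate                # replace control plane machines one at a time
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: master
      machine.openshift.io/cluster-api-machine-type: master
      machine.openshift.io/cluster-api-cluster: <cluster_id>
  template:
    machineType: machines_v1beta1_machine_openshift_io
    machines_v1beta1_machine_openshift_io:
      metadata:
        labels:
          machine.openshift.io/cluster-api-machine-role: master
          machine.openshift.io/cluster-api-machine-type: master
          machine.openshift.io/cluster-api-cluster: <cluster_id>
      spec:
        providerSpec:
          value: {}                    # provider-specific control plane machine configuration goes here
```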
You can automatically scale your {product-title} cluster to ensure flexibility for changing workloads. To autoscale your cluster, you must first deploy a cluster autoscaler, and then deploy a machine autoscaler for each compute machine set.
- The cluster autoscaler increases and decreases the size of the cluster based on deployment needs.
- The machine autoscaler adjusts the number of machines in the compute machine sets that you deploy in your {product-title} cluster.
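As an illustration, the two autoscalers are configured with separate resources: a single ClusterAutoscaler resource named default that sets cluster-wide limits, plus one MachineAutoscaler per compute machine set that defines the replica range for that set. The names and limits in the following sketch are placeholders, not recommended values:

```yaml
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default                        # cluster-wide autoscaler; a single instance named "default"
spec:
  resourceLimits:
    maxNodesTotal: 10                  # upper bound on the total node count
  scaleDown:
    enabled: true                      # allow the autoscaler to remove underused nodes
---
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-a                       # hypothetical name; typically matches the target machine set
  namespace: openshift-machine-api
spec:
  minReplicas: 1                       # lower bound for this compute machine set
  maxReplicas: 6                       # upper bound for this compute machine set
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: <cluster_id>-worker-a        # the compute machine set that this autoscaler manages
```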
User-provisioned infrastructure is an environment where you deploy the infrastructure, such as compute, network, and storage resources, that hosts the {product-title} cluster. You can add compute machines to a cluster on user-provisioned infrastructure during or after the installation process.
As a cluster administrator, you can perform the following actions:
- Add Red Hat Enterprise Linux (RHEL) compute machines, also known as worker machines, to a user-provisioned infrastructure cluster or an installer-provisioned infrastructure cluster.
- Add more Red Hat Enterprise Linux (RHEL) compute machines to an existing cluster.