diff --git a/docs/book/src/SUMMARY.md b/docs/book/src/SUMMARY.md
index 1ef7d40b6f22..05d91e762415 100644
--- a/docs/book/src/SUMMARY.md
+++ b/docs/book/src/SUMMARY.md
@@ -8,6 +8,17 @@
- [Certificate Management](./tasks/certs/index.md)
- [Using Custom Certificates](./tasks/certs/using-custom-certificates.md)
- [Generating a Kubeconfig](./tasks/certs/generate-kubeconfig.md)
+- [clusterctl CLI](./clusterctl/overview.md)
+  - [clusterctl Commands](./clusterctl/commands/commands.md)
+    - [init](./clusterctl/commands/init.md)
+    - [config cluster](./clusterctl/commands/config-cluster.md)
+    - [move](./clusterctl/commands/move.md)
+    - [adopt](./clusterctl/commands/adopt.md)
+    - [upgrade](./clusterctl/commands/upgrade.md)
+    - [delete](./clusterctl/commands/delete.md)
+  - [clusterctl Configuration](./clusterctl/configuration.md)
+  - [clusterctl Provider Contract](./clusterctl/provider-contract.md)
+  - [clusterctl for Developers](./clusterctl/developers.md)
- [Developer Guide](./architecture/developer-guide.md)
- [Repository Layout](./architecture/repository-layout.md)
- [Rapid iterative development with Tilt](./developer/tilt.md)
@@ -33,6 +44,5 @@
- [Reference](./reference/reference.md)
- [Glossary](./reference/glossary.md)
- [Provider List](./reference/providers.md)
- - [clusterctl CLI](./tooling/clusterctl.md)
- [Code of Conduct](./code-of-conduct.md)
- [Contributing](./CONTRIBUTING.md)
diff --git a/docs/book/src/clusterctl/commands/adopt.md b/docs/book/src/clusterctl/commands/adopt.md
new file mode 100644
index 000000000000..02c1529c5d76
--- /dev/null
+++ b/docs/book/src/clusterctl/commands/adopt.md
@@ -0,0 +1 @@
+# clusterctl adopt
diff --git a/docs/book/src/clusterctl/commands/commands.md b/docs/book/src/clusterctl/commands/commands.md
new file mode 100644
index 000000000000..a5d0f5204998
--- /dev/null
+++ b/docs/book/src/clusterctl/commands/commands.md
@@ -0,0 +1,11 @@
+# clusterctl Commands
+
+* [`clusterctl init`](init.md)
+* [`clusterctl config cluster`](config-cluster.md)
+* [`clusterctl move`](move.md)
+* [`clusterctl adopt`](adopt.md)
+* [`clusterctl upgrade`](upgrade.md)
+* [`clusterctl delete`](delete.md)
diff --git a/docs/book/src/clusterctl/commands/config-cluster.md b/docs/book/src/clusterctl/commands/config-cluster.md
new file mode 100644
index 000000000000..f15445d0d43c
--- /dev/null
+++ b/docs/book/src/clusterctl/commands/config-cluster.md
@@ -0,0 +1 @@
+# clusterctl config cluster
diff --git a/docs/book/src/clusterctl/commands/delete.md b/docs/book/src/clusterctl/commands/delete.md
new file mode 100644
index 000000000000..aa41aef0c987
--- /dev/null
+++ b/docs/book/src/clusterctl/commands/delete.md
@@ -0,0 +1 @@
+# clusterctl delete
diff --git a/docs/book/src/clusterctl/commands/init.md b/docs/book/src/clusterctl/commands/init.md
new file mode 100644
index 000000000000..986261bd649e
--- /dev/null
+++ b/docs/book/src/clusterctl/commands/init.md
@@ -0,0 +1,233 @@
+# clusterctl init
+
+The `clusterctl init` command installs the Cluster API components and transforms the Kubernetes cluster
+into a management cluster.
+
+This document provides more detail on how `clusterctl init` works and on the supported options for customizing your
+management cluster.
+
+## Defining the management cluster
+
+The `clusterctl init` command accepts as input a list of providers to install.
+
+
+
+#### Automatically installed providers
+
+The `clusterctl init` command automatically adds the Cluster API core provider and
+the kubeadm bootstrap provider to the list of providers to install. This allows users to use a concise command
+syntax when initializing a management cluster, e.g. the command:
+
+`clusterctl init --infrastructure aws`
+
+installs the `aws` infrastructure provider, the Cluster API core provider, and the kubeadm bootstrap provider.
+
+
+
+
+#### Provider version
+
+The `clusterctl init` command by default installs the latest version available for each selected provider.
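+
+For example, a specific version can be pinned by appending it to the provider name with the `provider:version`
+syntax (the same syntax printed by the local-overrides hack described in [clusterctl for Developers](../developers.md)):
+
+`clusterctl init --infrastructure aws:v0.4.1`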
+
+
+
+#### Target namespace
+
+The `clusterctl init` command by default installs each provider in the default target namespace defined by each provider, e.g. `capi-system` for the Cluster API core provider.
+
+See the provider documentation for more details.
+
+
+
+
+
+#### Watching namespace
+
+The `clusterctl init` command by default configures each provider to watch objects in all namespaces.
+
+
+
+
+
+#### Multi-tenancy
+
+*Multi-tenancy* for Cluster API means a management cluster where multiple instances of the same provider are installed.
+
+Users can achieve multi-tenancy configurations with `clusterctl` through a combination of:
+
+- Multiple calls to `clusterctl init`;
+- Usage of the `--target-namespace` flag;
+- Usage of the `--watching-namespace` flag.
+
+The `clusterctl` command officially supports the following multi-tenancy configurations:
+
+{{#tabs name:"tab-multi-tenancy" tabs:"n-Infra, n-Core"}}
+{{#tab n-Infra}}
+A management cluster with n (n>1) instances of an infrastructure provider, and only one instance
+of Cluster API core provider, bootstrap provider and control plane provider (optional).
+
+For example:
+
+* Cluster API core provider installed in the `capi-system` namespace, watching objects in all namespaces;
+* The kubeadm bootstrap provider in `cabpk-system`, watching all namespaces;
+* The kubeadm control plane provider in `cacpk-system`, watching all namespaces;
+* The `aws` infrastructure provider in `aws-system1`, watching objects in `aws-system1` only;
+* The `aws` infrastructure provider in `aws-system2`, watching objects in `aws-system2` only;
+* etc. (more instances of the `aws` provider)
+
+{{#/tab }}
+{{#tab n-Core}}
+A management cluster with n (n>1) instances of the Cluster API core provider, each one with a dedicated
+instance of infrastructure provider, bootstrap provider, and control plane provider (optional).
+
+For example:
+
+* A Cluster API core provider installed in the `capi-system1` namespace, watching objects in `capi-system1` only, and with:
+ * The kubeadm bootstrap provider in `capi-system1`, watching `capi-system1`;
+ * The kubeadm control plane provider in `capi-system1`, watching `capi-system1`;
+  * The `aws` infrastructure provider in `capi-system1`, watching objects in `capi-system1`;
+* A Cluster API core provider installed in the `capi-system2` namespace, watching objects in `capi-system2` only, and with:
+ * The kubeadm bootstrap provider in `capi-system2`, watching `capi-system2`;
+ * The kubeadm control plane provider in `capi-system2`, watching `capi-system2`;
+  * The `aws` infrastructure provider in `capi-system2`, watching objects in `capi-system2`;
+* etc. (more instances of the Cluster API core provider and the dedicated providers)
+
+
+{{#/tab }}
+{{#/tabs }}
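+
+For example, the n-Infra configuration above could be obtained with a sequence of calls like the following
+(a sketch; the namespaces are illustrative, and the flags are the ones listed at the beginning of this section;
+the first call also installs the automatically added core and bootstrap providers):
+
+```bash
+# each call adds an instance of the aws provider, isolated in its own namespace
+clusterctl init --infrastructure aws --target-namespace aws-system1 --watching-namespace aws-system1
+clusterctl init --infrastructure aws --target-namespace aws-system2 --watching-namespace aws-system2
+```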
+
+
+
+
+
+## Provider repositories
+
+To access provider-specific information, such as the components YAML to be used for installing a provider,
+`clusterctl init` accesses the **provider repositories**, which are well-known places where the release assets for
+a provider are published.
+
+See [clusterctl configuration](../configuration.md) for more info about provider repository configurations.
+
+
+
+## Variable substitution
+Providers can use variables in the components YAML published in the provider's repository.
+
+During `clusterctl init`, those variables are replaced with environment variables or with variables read from the
+[clusterctl configuration](../configuration.md).
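+
+The mechanism can be sketched with plain POSIX tools (an illustration only; `MY_PROVIDER_TOKEN` is a
+hypothetical variable, and the actual `clusterctl` implementation differs):
+
+```bash
+export MY_PROVIDER_TOKEN="abc123"
+template='token: ${ MY_PROVIDER_TOKEN }'
+# replace the ${ VAR } placeholder with the value taken from the environment
+rendered=$(printf '%s\n' "$template" | sed -E "s/\\\$\{[[:space:]]*MY_PROVIDER_TOKEN[[:space:]]*\}/${MY_PROVIDER_TOKEN}/")
+echo "$rendered"   # prints: token: abc123
+```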
+
+
+
+
+
+## Additional information
+
+When installing a provider, the `clusterctl init` command executes a set of steps to simplify
+the lifecycle management of the provider's components.
+
+* All the provider's components are labeled, so they can be easily identified at
+later stages of the provider's lifecycle, e.g. upgrades.
+
+  ```yaml
+  labels:
+    clusterctl.cluster.x-k8s.io: ""
+    clusterctl.cluster.x-k8s.io/provider: ""
+  ```
+
+* An additional `Provider` object is created in the target namespace where the provider is installed.
+This object keeps track of the provider version, the watching namespace, and other useful information
+for the inventory of the providers currently installed in the management cluster.
+
+
diff --git a/docs/book/src/clusterctl/commands/move.md b/docs/book/src/clusterctl/commands/move.md
new file mode 100644
index 000000000000..8932ad55ca72
--- /dev/null
+++ b/docs/book/src/clusterctl/commands/move.md
@@ -0,0 +1 @@
+# clusterctl move
diff --git a/docs/book/src/clusterctl/commands/upgrade.md b/docs/book/src/clusterctl/commands/upgrade.md
new file mode 100644
index 000000000000..ca0fe9494696
--- /dev/null
+++ b/docs/book/src/clusterctl/commands/upgrade.md
@@ -0,0 +1 @@
+# clusterctl upgrade
diff --git a/docs/book/src/clusterctl/configuration.md b/docs/book/src/clusterctl/configuration.md
new file mode 100644
index 000000000000..79bbd63638d4
--- /dev/null
+++ b/docs/book/src/clusterctl/configuration.md
@@ -0,0 +1,46 @@
+# clusterctl Configuration File
+
+The `clusterctl` config file is located at `$HOME/.cluster-api/clusterctl.yaml` and it can be used to:
+
+- Customize the list of providers and provider repositories.
+- Provide configuration values to be used for variable substitution when installing providers or creating clusters.
+
+## Provider repositories
+
+The `clusterctl` CLI is designed to work with providers implementing the [clusterctl Provider Contract](provider-contract.md).
+
+Each provider is expected to define a provider repository, a well-known place where release assets are published.
+
+By default, `clusterctl` ships with providers sponsored by SIG Cluster Lifecycle.
+
+Users can customize the list of available providers using the `clusterctl` configuration file, as shown in the following example:
+
+```yaml
+providers:
+ # add a custom provider
+ - name: "my-infra-provider"
+ url: "https://github.com/myorg/myrepo/releases/latest/infrastructure_components.yaml"
+ type: "InfrastructureProvider"
+ # override a pre-defined provider
+ - name: "cluster-api"
+ url: "https://github.com/myorg/myforkofclusterapi/releases/latest/core_components.yaml"
+ type: "CoreProvider"
+```
+
+## Variables
+
+When installing a provider, `clusterctl` reads a YAML file that is published in the provider repository; while executing
+this operation, `clusterctl` can substitute certain variables with the ones provided by the user.
+
+The same mechanism also applies when `clusterctl` reads the cluster templates YAML published in the repository, e.g.
+when injecting the Kubernetes version to use, or the number of worker machines to create.
+
+The user can provide values using OS environment variables, but it is also possible to add
+variables in the `clusterctl` config file:
+
+```yaml
+# Values for environment variable substitution
+AWS_B64ENCODED_CREDENTIALS: XXXXXXXX
+```
+
+If a variable is defined both in the config file and as an OS environment variable, the environment variable takes precedence.
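+
+The precedence rule can be sketched in shell (an illustration, not the actual `clusterctl` code):
+
+```bash
+config_value="from-config-file"        # value read from clusterctl.yaml (simulated)
+export MY_VARIABLE="from-environment"  # the same variable set in the environment
+effective="${MY_VARIABLE:-$config_value}"
+echo "$effective"   # prints: from-environment
+```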
diff --git a/docs/book/src/clusterctl/developers.md b/docs/book/src/clusterctl/developers.md
new file mode 100644
index 000000000000..c50da572b68c
--- /dev/null
+++ b/docs/book/src/clusterctl/developers.md
@@ -0,0 +1,80 @@
+# clusterctl for Developers
+
+This document describes how to use `clusterctl` during the development workflow.
+
+## Prerequisites
+
+* A Cluster API development setup (go, git, etc.)
+* A local clone of the Cluster API GitHub repository
+* A local clone of the GitHub repositories for the providers you want to install
+
+## Getting started
+
+### Build clusterctl
+
+From the root of the local copy of Cluster API, you can build the `clusterctl` binary by running:
+
+```shell
+make clusterctl
+```
+
+The output of the build is saved in the `bin/` folder; in order to use it you have to specify
+the full path, create an alias, or copy it into a folder under your `$PATH`.
+
+### Create a clusterctl-settings.json file
+
+Next, create a `clusterctl-settings.json` file and place it in your local copy of Cluster API. Here is an example:
+
+```json
+{
+ "providers": [ "cluster-api", "kubeadm-bootstrap", "aws"],
+ "provider_repos": ["../cluster-api-provider-aws"]
+}
+```
+
+**providers** (Array[]String, default=[]): A list of the providers to enable. See [available providers](#available-providers) for more details.
+
+**provider_repos** (Array[]String, default=[]): A list of paths to all the providers you want to use. Each provider must have
+a `clusterctl-settings.json` file describing how to build the provider assets.
+
+## Run the local-overrides hack!
+
+You can now run the local-overrides hack from the root of the local copy of Cluster API:
+
+```shell
+cmd/clusterctl/hack/local-overrides.py
+```
+
+The script reads from the local repositories of the providers you want to install, builds the providers' assets,
+and places them in a local override folder located under `$HOME/.cluster-api/overrides/`.
+Additionally, the command output prints the `clusterctl init` command with all the necessary flags.
+
+```shell
+clusterctl local overrides generated from local repositories for the cluster-api, kubeadm-bootstrap, aws providers.
+in order to use them, please run:
+
+clusterctl init --core cluster-api:v0.3.0 --bootstrap kubeadm-bootstrap:v0.3.0 --infrastructure aws:v0.5.0
+```
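+
+Assuming the overrides are laid out per provider and version (an assumption for illustration; check the
+script output on your machine for the actual layout), the folder might look like:
+
+```
+$HOME/.cluster-api/overrides/
+├── cluster-api/
+│   └── v0.3.0/
+│       └── core-components.yaml
+└── aws/
+    └── v0.5.0/
+        └── infrastructure-components.yaml
+```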
+
+## Available providers
+
+The following providers are currently defined in the script:
+
+* `cluster-api`
+* `kubeadm-bootstrap`
+* `kubeadm-controlplane`
+* `docker`
+
+More providers can be added by editing the `clusterctl-settings.json` in your local copy of Cluster API;
+please note that each `provider_repo` should have its own `clusterctl-settings.json` describing how to build the provider assets, e.g.
+
+```json
+{
+ "name": "aws",
+ "config": {
+ "componentsFile": "infrastructure-components.yaml",
+ "nextVersion": "v0.5.0",
+ "type": "InfrastructureProvider"
+ }
+}
+```
\ No newline at end of file
diff --git a/docs/book/src/clusterctl/overview.md b/docs/book/src/clusterctl/overview.md
new file mode 100644
index 000000000000..5bdba8cb4d12
--- /dev/null
+++ b/docs/book/src/clusterctl/overview.md
@@ -0,0 +1,278 @@
+# Overview of clusterctl
+
+The `clusterctl` CLI tool handles the lifecycle of a Cluster API management cluster.
+
+## Day 1
+
+The `clusterctl` user interface is specifically designed for providing a simple "day 1 experience" and a
+quick start with Cluster API.
+
+### Prerequisites
+
+* Cluster API requires an existing Kubernetes cluster accessible via kubectl;
+
+{{#tabs name:"tab-create-cluster" tabs:"Development,Production"}}
+{{#tab Development}}
+
+{{#tabs name:"tab-create-development-cluster" tabs:"kind,kind for docker provider,Minikube"}}
+{{#tab kind}}
+
+ ```bash
+ kind create cluster --name=clusterapi
+ kubectl cluster-info --context kind-clusterapi
+ ```
+
+See the [kind documentation](https://kind.sigs.k8s.io/) for more details.
+{{#/tab }}
+{{#tab kind for docker provider}}
+
+If you are planning to use the Docker infrastructure provider, a custom kind cluster configuration is required
+because the provider needs to access Docker on the host:
+
+ ```bash
+ cat > kind-cluster-with-extramounts.yaml <<EOF
+ kind: Cluster
+ apiVersion: kind.x-k8s.io/v1alpha4
+ nodes:
+ - role: control-plane
+   extraMounts:
+     - hostPath: /var/run/docker.sock
+       containerPath: /var/run/docker.sock
+ EOF
+ kind create cluster --config ./kind-cluster-with-extramounts.yaml --name clusterapi
+ kubectl cluster-info --context kind-clusterapi
+ ```
+{{#/tab }}
+{{#tab Minikube}}
+
+ ```bash
+ minikube start
+ kubectl cluster-info
+ ```
+
+See the [Minikube documentation](https://minikube.sigs.k8s.io/) for more details.
+{{#/tab }}
+{{#/tabs }}
+
+{{#/tab }}
+{{#tab Production}}
+
+- Create a bootstrap management cluster with kind/Minikube
+- Use `clusterctl init` and `clusterctl config cluster` to create a production cluster (see below)
+- "Pivot" the bootstrap management cluster into the production management cluster
+
+{{#/tab }}
+{{#/tabs }}
+
+* If the provider of your choice expects some preliminary steps to be executed, users should take care of those in advance;
+* If the provider of your choice expects some environment variables, e.g. `AWS_CREDENTIALS` for the `aws`
+infrastructure provider, users should ensure those variables are set in advance.
+
+{{#tabs name:"tab-installation-infrastructure" tabs:"AWS,Azure,Docker,GCP,vSphere,OpenStack"}}
+{{#tab AWS}}
+
+Download the latest binary of `clusterawsadm` from the [AWS provider releases] and make sure to place it in your path.
+
+```bash
+# Create the base64 encoded credentials using clusterawsadm.
+# This command uses your environment variables and encodes
+# them in a value to be stored in a Kubernetes Secret.
+export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm alpha bootstrap encode-aws-credentials)
+```
+
+See the [AWS Provider Prerequisites](https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/master/docs/prerequisites.md) document for more details.
+
+{{#/tab }}
+{{#tab Azure}}
+
+```bash
+# Create the base64 encoded credentials
+export AZURE_SUBSCRIPTION_ID_B64="$(echo -n "$AZURE_SUBSCRIPTION_ID" | base64 | tr -d '\n')"
+export AZURE_TENANT_ID_B64="$(echo -n "$AZURE_TENANT_ID" | base64 | tr -d '\n')"
+export AZURE_CLIENT_ID_B64="$(echo -n "$AZURE_CLIENT_ID" | base64 | tr -d '\n')"
+export AZURE_CLIENT_SECRET_B64="$(echo -n "$AZURE_CLIENT_SECRET" | base64 | tr -d '\n')"
+```
+
+For more information about authorization, AAD, or requirements for Azure, visit the [Azure Provider Prerequisites](https://github.com/kubernetes-sigs/cluster-api-provider-azure/blob/master/docs/getting-started.md#prerequisites) document.
+
+{{#/tab }}
+{{#tab Docker}}
+
+No additional prerequisites.
+
+{{#/tab }}
+{{#tab GCP}}
+
+```bash
+# Create the base64 encoded credentials by encoding your GCP credentials json file.
+# The resulting value will be stored in a Kubernetes Secret.
+export GCP_B64ENCODED_CREDENTIALS=$( cat /path/to/gcp-credentials.json | base64 | tr -d '\n' )
+```
+
+{{#/tab }}
+{{#tab vSphere}}
+
+It is required to use an official CAPV machine image for your vSphere VM templates. See [Uploading CAPV Machine Images](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/master/docs/getting_started.md#uploading-the-capv-machine-image) for instructions on how to do this.
+
+Then, it is required to upload the vCenter credentials as a Kubernetes Secret:
+
+```bash
+$ cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Secret
+metadata:
+  name: capv-manager-bootstrap-credentials
+  namespace: capv-system
+type: Opaque
+stringData:
+  username: "<vCenter username>"
+  password: "<vCenter password>"
+EOF
+```
+
+For more information about prerequisites, credentials management, or permissions for vSphere, visit the [getting started guide](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/master/docs/getting_started.md).
+
+{{#/tab }}
+{{#tab OpenStack}}
+
+Please visit the [getting started guide](https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/master/docs/getting-started.md).
+
+{{#/tab }}
+{{#/tabs }}
+
+### 1. Initialize the management cluster
+
+The `clusterctl init` command installs the Cluster API components and transforms the Kubernetes cluster
+into a management cluster.
+
+The command accepts as input a list of providers to install; when executed for the first time, `clusterctl init`
+automatically adds the Cluster API core provider to the list, and if a bootstrap provider is not specified, it
+also adds the kubeadm bootstrap provider.
+
+
+
+```shell
+clusterctl init --infrastructure aws
+```
+
+The output of `clusterctl init` is similar to this:
+
+```shell
+performing init...
+ - cluster-api CoreProvider installed (v0.2.8)
+ - aws InfrastructureProvider installed (v0.4.1)
+
+Your Cluster API management cluster has been initialized successfully!
+
+You can now create your first workload cluster by running the following:
+
+ clusterctl config cluster [name] --kubernetes-version [version] | kubectl apply -f -
+```
+
+See [`clusterctl init`](commands/init.md) for more details.
+
+### 2. Create the first workload cluster
+
+Once the management cluster is ready, you can create the first workload cluster.
+
+The `clusterctl config cluster` command returns a YAML template for creating a workload cluster.
+Store it locally, customize it if needed, and then apply it to start provisioning the workload cluster.
+
+
+
+
+
+
+
+
+
+For example:
+
+```
+clusterctl config cluster my-cluster --kubernetes-version v1.16.3 > my-cluster.yaml
+```
+
+This creates a YAML file named `my-cluster.yaml` with a predefined list of Cluster API objects: Cluster, Machines,
+Machine Deployments, etc.
+
+The file can be modified using your editor of choice; when ready, run the following command
+to apply the cluster manifest.
+
+```
+kubectl apply -f my-cluster.yaml
+```
+
+The output is similar to this:
+
+```
+kubeadmconfig.bootstrap.cluster.x-k8s.io/my-cluster-controlplane-0 created
+kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/my-cluster-worker created
+cluster.cluster.x-k8s.io/my-cluster created
+machine.cluster.x-k8s.io/my-cluster-controlplane-0 created
+machinedeployment.cluster.x-k8s.io/my-cluster-worker created
+awscluster.infrastructure.cluster.x-k8s.io/my-cluster created
+awsmachine.infrastructure.cluster.x-k8s.io/my-cluster-controlplane-0 created
+awsmachinetemplate.infrastructure.cluster.x-k8s.io/my-cluster-worker created
+```
+
+See [`clusterctl config cluster`](commands/config-cluster.md) for more details.
+
+## Day 2 operations
+
+The `clusterctl` command also supports day 2 operations:
+
+* use [`clusterctl init`](commands/init.md) to install additional Cluster API providers
+* use [`clusterctl upgrade`](commands/upgrade.md) to upgrade Cluster API providers
+* use [`clusterctl delete`](commands/delete.md) to delete Cluster API providers
+
+* use [`clusterctl config cluster`](commands/config-cluster.md) to spec out additional workload clusters
+* use [`clusterctl move`](commands/move.md) to migrate the objects defining a workload cluster (e.g. Cluster, Machines) from one management cluster to another
diff --git a/docs/book/src/clusterctl/provider-contract.md b/docs/book/src/clusterctl/provider-contract.md
new file mode 100644
index 000000000000..86c0c4470d1d
--- /dev/null
+++ b/docs/book/src/clusterctl/provider-contract.md
@@ -0,0 +1,208 @@
+# clusterctl Provider Contract
+
+The `clusterctl` command is designed to work with all providers compliant with the following rules.
+
+## Provider Repositories
+
+Each provider MUST define a **provider repository**, which is a well-known place where the release assets for
+a provider are published.
+
+The provider repository MUST contain the following files:
+
+* The metadata YAML
+* The components YAML
+
+
+Additionally, the provider repository SHOULD contain the following files:
+
+* Workload cluster templates
+
+
+
+
+
+### Metadata YAML
+
+The provider is required to generate a **metadata YAML** file and publish it to the provider's repository.
+
+The metadata YAML file documents the release series of each provider and maps each release series to a Cluster API version.
+
+For example, for Cluster API:
+
+```yaml
+apiVersion: clusterctl.cluster.x-k8s.io/v1alpha3
+kind: Metadata
+releaseSeries:
+- major: 0
+ minor: 3
+ clusterAPIVersion: v1alpha3
+- major: 0
+ minor: 2
+ clusterAPIVersion: v1alpha2
+```
+
+
+
+### Components YAML
+
+The provider is required to generate a **components YAML** file and publish it to the provider's repository.
+This file is a single YAML with _all_ the components required for installing the provider itself (CRDs, Controller, RBAC etc.).
+
+The following rules apply:
+
+#### Naming conventions
+
+It is strongly recommended that:
+* The core provider releases a file called `core-components.yaml`
+* Infrastructure providers release a file called `infrastructure-components.yaml`
+* Bootstrap providers release a file called `bootstrap-components.yaml`
+* Control plane providers release a file called `control-plane-components.yaml`
+
+#### Target namespace
+
+The components YAML should contain one Namespace object, which will be used as the default target namespace
+when creating the provider components.
+
+All the objects in the components YAML MUST belong to the target namespace, with the exception of objects that
+are not namespaced, like ClusterRoles/ClusterRoleBinding and CRD objects.
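+
+A minimal skeleton complying with the rules above might look like the following (a sketch with hypothetical
+names, not a working provider):
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: myprovider-system          # the default target namespace
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: myprovider-controller-manager
+  namespace: myprovider-system     # namespaced objects belong to the target namespace
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole                  # cluster-wide objects are not namespaced
+metadata:
+  name: myprovider-manager-role
+```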
+
+
+
+#### Controllers & Watching namespace
+
+Each provider is expected to deploy controllers using a Deployment.
+
+While defining the Deployment Spec, the container that executes the controller binary MUST be called `manager`.
+
+The manager MUST support a `--namespace` flag for specifying the namespace where the controller
+will look for objects to reconcile.
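+
+For example, the relevant part of the controller Deployment might look like this (a sketch; the image name is
+hypothetical, and an empty `--namespace` value means watching all namespaces):
+
+```yaml
+spec:
+  template:
+    spec:
+      containers:
+      - name: manager              # the container MUST be called manager
+        image: example.com/myprovider-controller:v0.1.0
+        args:
+        - "--namespace="           # set to a specific namespace to restrict watching
+```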
+
+#### Variables
+
+The components YAML can contain variables matching the regexp `\${\s*([A-Z0-9_]+)\s*}`; it is highly
+recommended to prefix the variable name with the provider name, e.g. `${ AWS_CREDENTIALS }`.
+
+Additionally, each provider should create user facing documentation with the list of required variables and with all the additional
+notes that are required to assist the user in defining the value for each variable.
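+
+For example, a components YAML could consume a variable in a Secret (an illustrative fragment reusing the
+`${ AWS_CREDENTIALS }` variable mentioned above; the object names are hypothetical):
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: manager-credentials
+  namespace: myprovider-system
+type: Opaque
+data:
+  credentials: ${ AWS_CREDENTIALS }
+```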
+
+### Workload cluster templates
+
+An infrastructure provider can publish a **cluster templates** file to be used by `clusterctl config cluster`.
+This is a single YAML file with _all_ the objects required to create a new workload cluster.
+
+The following rules apply:
+
+#### Naming conventions
+
+Cluster templates MUST be stored in the same folder as the component YAML and follow this naming convention:
+1. The default cluster template should be named `config-{bootstrap}.yaml`, e.g. `config-kubeadm.yaml`
+2. Additional cluster templates should be named `config-{flavor}-{bootstrap}.yaml`, e.g. `config-production-kubeadm.yaml`
+
+`{bootstrap}` is the name of the bootstrap provider used in the template; `{flavor}` is the name the user can pass to the
+`clusterctl config cluster --flavor` flag to identify the specific template to use.
+
+Each provider SHOULD create user facing documentation with the list of available cluster templates.
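+
+Putting the naming conventions together, the release assets of an infrastructure provider using the kubeadm
+bootstrap provider might be laid out as follows (an illustrative listing):
+
+```
+metadata.yaml
+infrastructure-components.yaml
+config-kubeadm.yaml
+config-production-kubeadm.yaml
+```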
+
+#### Target namespace
+
+The cluster template YAML MUST assume the target namespace already exists.
+
+All the objects in the cluster template YAML MUST be deployed in the same namespace.
+
+#### Variables
+
+The cluster templates YAML can also contain environment variables (as can the components YAML).
+
+Additionally, each provider should create user facing documentation with the list of required variables and with all the additional
+notes that are required to assist the user in defining the value for each variable.
+
+##### Common variables
+
+The `clusterctl config cluster` command allows users to set a small set of common variables via CLI flags or command arguments.
+
+Template writers should use the common variables to ensure consistency across providers and a simpler user experience
+(compared to the usage of OS environment variables or the `clusterctl` config file).
+
+| CLI flag | Variable name | Note |
+| ---------------------- | ----------------- | ------------------------------------------- |
+|`--target-namespace`| `${ NAMESPACE }` | The namespace where the workload cluster should be deployed |
+|`--kubernetes-version`| `${ KUBERNETES_VERSION }` | The Kubernetes version to use for the workload cluster |
+|`--controlplane-machine-count`| `${ CONTROLPLANE_MACHINE_COUNT }` | The number of control plane machines to be added to the workload cluster |
+|`--worker-machine-count`| `${ WORKER_MACHINE_COUNT }` | The number of worker machines to be added to the workload cluster |
+
+Additionally, the value of the command argument to `clusterctl config cluster <name>` (`<name>` in this case) will
+be applied to every occurrence of the `${ CLUSTER_NAME }` variable.
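+
+For example, a cluster template fragment consuming the common variables might look like this (an abridged
+sketch, not a complete template):
+
+```yaml
+apiVersion: cluster.x-k8s.io/v1alpha3
+kind: Cluster
+metadata:
+  name: ${ CLUSTER_NAME }
+  namespace: ${ NAMESPACE }
+---
+apiVersion: cluster.x-k8s.io/v1alpha3
+kind: MachineDeployment
+metadata:
+  name: ${ CLUSTER_NAME }-md-0
+  namespace: ${ NAMESPACE }
+spec:
+  replicas: ${ WORKER_MACHINE_COUNT }
+  template:
+    spec:
+      version: ${ KUBERNETES_VERSION }
+```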
+
+## Additional notes
+
+### Components YAML transformations
+
+Provider authors should be aware of the following transformations that `clusterctl` applies during component installation:
+
+* Variable substitution;
+* Enforcement of target namespace:
+ * The name of the namespace object is set;
+ * The namespace field of all the objects is set (with exception of cluster wide objects like e.g. ClusterRoles);
+  * ClusterRole and ClusterRoleBinding are renamed by adding a `${namespace}-` prefix to the name; this change reduces the risks
+  of conflicts between several instances of the same provider in case of multi-tenancy;
+* Enforcement of watching namespace;
+* All components are labeled;
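+
+The renaming step above can be sketched as follows (the names are illustrative):
+
+```bash
+namespace="aws-system1"
+clusterrole="myprovider-manager-role"
+renamed="${namespace}-${clusterrole}"
+echo "$renamed"   # prints: aws-system1-myprovider-manager-role
+```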
+
+### Cluster template transformations
+
+Provider authors should be aware of the following transformations that `clusterctl` applies when processing cluster templates:
+
+* Variable substitution;
+* Enforcement of target namespace:
+ * The namespace field of all the objects is set;
+
+### Links to external objects
+
+The `clusterctl` command requires that both the components YAML and the cluster templates contain _all_ the required
+objects.
+
+If, for any reason, the provider authors/YAML designers decide not to comply with this rule and, e.g.,
+
+* implement links to external objects from a components YAML (e.g. Secrets or aggregated ClusterRoles NOT included in the components YAML)
+* implement links to external objects from a cluster template (e.g. Secrets or ConfigMaps NOT included in the cluster template)
+
+then it is their responsibility to ensure the proper functioning of all the `clusterctl` features, both in single-tenancy
+and multi-tenancy scenarios, and/or to document known limitations.
+
+### Move constraints
+
+WIP
+
+### Adopt
+
+WIP
diff --git a/docs/book/src/providers/clusterctl.md b/docs/book/src/providers/clusterctl.md
deleted file mode 100644
index 76fe8e51d8cb..000000000000
--- a/docs/book/src/providers/clusterctl.md
+++ /dev/null
@@ -1,57 +0,0 @@
-# `clusterctl`
-
-## clusterctl v1alpha3 (clusterctl redesign)
-
-`clusterctl` is a CLI tool for handling the lifecycle of a cluster-API management cluster.
-
-The v1alpha3 release is designed for providing a simple day 1 experience; `clusterctl` is bundled with Cluster API and can be reused across providers
-that are compliant with the following rules.
-
-### Components YAML
-
-The provider is required to generate a single YAML file with all the components required for installing the provider
-itself (CRD, Controller, RBAC etc.).
-
-Infrastructure providers MUST release a file called `infrastructure-components.yaml`, while bootstrap provider MUST
-release a file called ` bootstrap-components.yaml` (exception for CABPK, which is included in CAPI by default).
-
-The components YAML should contain one Namespace object, which will be used as the default target namespace
-when creating the provider components.
-
-> If the generated component YAML does't contain a Namespace object, user will need to provide one to `clusterctl init` using
-> the `--target-namespace` flag.
-
-> In case there is more than one Namespace object in the components YAML, `clusterctl` will generate an error and abort
-> the provider installation.
-
-The components YAML can contain environment variables matching the regexp `\${\s*([A-Z0-9_]+)\s*}`; it is highly
-recommended to prefix the variable name with the provider name e.g. `{ $AWS_CREDENTIALS }`
-
-> Users are required to ensure that environment variables are set in advance before running `clusterctl init`; if a variable
-> is missing, `clusterctl` will generate an error and abort the provider installation.
-
-### Workload cluster templates
-
-Infrastructure provider could publish cluster templates to be used by `clusterctl config cluster`.
-
-Cluster templates MUST be stored in the same folder of the component YAML and adhere to the the following naming convention:
-1. The default cluster template should be named `config-{bootstrap}.yaml`. e.g `config-kubeadm.yaml`
-2. Additional cluster template should be named `config-{flavor}-{bootstrap}.yaml`. e.g `config-production-kubeadm.yaml`
-
-`{bootstrap}` is the name of the bootstrap provider used in the template; `{flavor}` is the name the user can pass to the
-`clusterctl config cluster --flavor` flag to identify the specific template to use.
-
-## Previous versions (unsupported)
-
-### v1alpha1
-
-`clusterctl` was a command line tool packaged with v1alpha1 providers. The goal of this tool was to go from nothing to a
-running management cluster in whatever environment the provider was built for. For example, Cluster-API-Provider-AWS
-packaged a `clusterctl` that created a Kubernetes cluster in EC2 and installed the necessary controllers to respond to
-Cluster API's APIs.
-
-### v1alpha2
-
-`clusterctl` was likely becoming provider-agnostic meaning one clusterctl was bundled with Cluster API and can be reused
-across providers. Work here is still being figured out but providers will not be packaging their own `clusterctl`
-anymore.