Commit a292b34

Update readme
1 parent 12a973d commit a292b34

File tree: 1 file changed (+19, -197 lines changed)


Diff for: README.md

@@ -1,212 +1,34 @@
-# OpenShift cluster-api-provider-aws
+# Machine API Provider AWS
 
-This repository hosts an implementation of a provider for AWS for the
-OpenShift [machine-api](https://github.com/openshift/cluster-api).
+This repository contains an implementation of the AWS provider for the [Machine API](https://github.com/openshift/machine-api-operator).
 
-This provider runs as a machine-controller deployed by the
-[machine-api-operator](https://github.com/openshift/machine-api-operator)
+## What is the Machine API
 
-### How to build the images in the RH infrastructure
-The Dockerfiles use `as builder` in the `FROM` instruction, which is not currently supported
-by RH's docker fork (see [https://github.com/kubernetes-sigs/kubebuilder/issues/268](https://github.com/kubernetes-sigs/kubebuilder/issues/268)).
-One needs to run the `imagebuilder` command instead of `docker build`.
+A declarative API for creating and managing machines in an OpenShift cluster. The project is based on the v1alpha2 version of [Cluster API](https://github.com/kubernetes-sigs/cluster-api).
 
-Note: this info is RH only; it needs to be backported every time the `README.md` is synced with the upstream one.
+## Documentation
 
-## Deploy machine API plane with minikube
+- [Overview](https://github.com/openshift/machine-api-operator/blob/master/docs/user/machine-api-operator-overview.md)
+- [Hacking Guide](https://github.com/openshift/machine-api-operator/blob/master/docs/dev/hacking-guide.md)
 
-1. **Install kvm**
+## Architecture
 
-Depending on your virtualization manager you can choose a different [driver](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md).
-In order to install kvm, you can run (as described in the [drivers](https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm2-driver) documentation):
+The provider imports the [Machine controller](https://github.com/openshift/machine-api-operator/tree/master/pkg/controller/machine) from `machine-api-operator` and provides an implementation of the Actuator interface. The Actuator implementation is responsible for CRUD operations on the AWS API.
 
-```sh
-$ sudo yum install libvirt-daemon-kvm qemu-kvm libvirt-daemon-config-network
-$ systemctl start libvirtd
-$ sudo usermod -a -G libvirt $(whoami)
-$ newgrp libvirt
-```
+## Building and running controller locally
 
-To install the kvm2 driver:
+```sh
+NO_DOCKER=1 make build && ./bin/machine-controller-manager
+```
 
-```sh
-curl -Lo docker-machine-driver-kvm2 https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2 \
-&& chmod +x docker-machine-driver-kvm2 \
-&& sudo cp docker-machine-driver-kvm2 /usr/local/bin/ \
-&& rm docker-machine-driver-kvm2
-```
+By default, we run make tasks in a container. To run the controller locally, set `NO_DOCKER=1`.
 
-2. **Deploying the cluster**
+## Running tests
 
-To install minikube `v1.1.0`, you can run:
+### Unit
 
-```sh
-$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.1.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
-```
+In order to run unit tests, use `make test`.
 
-To deploy the cluster:
+### E2E Tests
 
-```
-$ minikube start --vm-driver kvm2 --kubernetes-version v1.13.1 --v 5
-$ eval $(minikube docker-env)
-```
-
-3. **Deploying machine API controllers**
-
-For development purposes the aws machine controller itself will run outside of the machine API stack.
-Otherwise, docker images need to be built, pushed into a docker registry and deployed within the stack.
-
-To deploy the stack:
-```
-kustomize build config | kubectl apply -f -
-```
-
-4. **Deploy secret with AWS credentials**
-
-The AWS actuator assumes the existence of a secret (referenced in the machine object) with base64-encoded credentials:
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: aws-credentials-secret
-  namespace: default
-type: Opaque
-data:
-  aws_access_key_id: FILLIN
-  aws_secret_access_key: FILLIN
-```
-
-You can use the `examples/render-aws-secrets.sh` script to generate the secret:
-```sh
-./examples/render-aws-secrets.sh examples/addons.yaml | kubectl apply -f -
-```
-
-5. **Provision AWS resources**
-
-The actuator expects the existence of certain resources in AWS, such as:
-- vpc
-- subnets
-- security groups
-- etc.
-
-To create them, you can run:
-
-```sh
-$ ENVIRONMENT_ID=aws-actuator-k8s ./hack/aws-provision.sh install
-```
-
-To delete the resources, you can run:
-
-```sh
-$ ENVIRONMENT_ID=aws-actuator-k8s ./hack/aws-provision.sh destroy
-```
-
-All machine manifests expect `ENVIRONMENT_ID` to be set to `aws-actuator-k8s`.
-
-## Test locally built aws actuator
-
-1. **Tear down machine-controller**
-
-The deployed machine API plane (`machine-api-controllers` deployment) runs (among other
-controllers) the `machine-controller`. In order to run a locally built one,
-simply edit the `machine-api-controllers` deployment and remove the `machine-controller` container from it.
-
-1. **Build and run aws actuator outside of the cluster**
-
-```sh
-$ go build -o bin/machine-controller-manager sigs.k8s.io/cluster-api-provider-aws/cmd/manager
-```
-
-```sh
-$ ./bin/machine-controller-manager --kubeconfig ~/.kube/config --logtostderr -v 5 -alsologtostderr
-```
-If running in a container with `podman`, or locally without `docker` installed, and encountering issues, see the [hacking-guide](https://github.com/openshift/machine-api-operator/blob/master/docs/dev/hacking-guide.md#troubleshooting-make-targets).
-
-
-1. **Deploy k8s apiserver through machine manifest**:
-
-To deploy the user data secret with kubernetes apiserver initialization (under [config/master-user-data-secret.yaml](config/master-user-data-secret.yaml)):
-
-```sh
-$ kubectl apply -f config/master-user-data-secret.yaml
-```
-
-To deploy the kubernetes master machine (under [config/master-machine.yaml](config/master-machine.yaml)):
-
-```sh
-$ kubectl apply -f config/master-machine.yaml
-```
-
-1. **Pull kubeconfig from created master machine**
-
-The master public IP can be accessed from the AWS Portal. Once done, you
-can collect the kube config by running:
-
-```
-$ ssh -i SSHPMKEY ec2-user@PUBLICIP 'sudo cat /root/.kube/config' > kubeconfig
-$ kubectl --kubeconfig=kubeconfig config set-cluster kubernetes --server=https://PUBLICIP:8443
-```
-
-Once done, you can access the cluster via `kubectl`, e.g.:
-
-```sh
-$ kubectl --kubeconfig=kubeconfig get nodes
-```
-
-## Deploy k8s cluster in AWS with machine API plane deployed
-
-1. **Generate bootstrap user data**
-
-To generate the bootstrap script for the machine API plane, simply run:
-
-```sh
-$ ./config/generate-bootstrap.sh
-```
-
-The script requires the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables to be set.
-It generates the `config/bootstrap.yaml` secret for the master machine
-under `config/master-machine.yaml`.
-
-The generated bootstrap secret contains user data responsible for:
-- deployment of the kube-apiserver
-- deployment of the machine API plane with aws machine controllers
-- generating a worker machine user data script secret for deploying a node
-- deployment of the worker machineset
-
-1. **Deploy machine API plane through machine manifest**:
-
-First, deploy the generated bootstrap secret:
-
-```sh
-$ kubectl apply -f config/bootstrap.yaml
-```
-
-Then, deploy the master machine (under [config/master-machine.yaml](config/master-machine.yaml)):
-
-```sh
-$ kubectl apply -f config/master-machine.yaml
-```
-
-1. **Pull kubeconfig from created master machine**
-
-The master public IP can be accessed from the AWS Portal. Once done, you
-can collect the kube config by running:
-
-```
-$ ssh -i SSHPMKEY ec2-user@PUBLICIP 'sudo cat /root/.kube/config' > kubeconfig
-$ kubectl --kubeconfig=kubeconfig config set-cluster kubernetes --server=https://PUBLICIP:8443
-```
-
-Once done, you can access the cluster via `kubectl`, e.g.:
-
-```sh
-$ kubectl --kubeconfig=kubeconfig get nodes
-```
-
-# Upstream Implementation
-Other branches of this repository may choose to track the upstream
-Kubernetes [Cluster-API AWS provider](https://github.com/kubernetes-sigs/cluster-api-provider-aws/).
-
-In the future, we may align the master branch with the upstream project as it
-stabilizes within the community.
+If you wish to run E2E tests, you can use `make e2e`. Make sure you have a running OpenShift cluster on AWS.
