# 📖 Update new provider docs to be more current #12085

## Docker Image Name

The `IMG` variable controls the image name used when building and pushing the Docker image. It defaults to `controller:latest`, a local image name; set it to a remote image name if you want to push to a registry:

```bash
make docker-push IMG=ghcr.io/your-org/your-repo:dev
```

## Deployment

### Cluster API

Follow the [quick start guide](https://cluster-api.sigs.k8s.io/user/quick-start) up to and including the step of [creating the management cluster](https://cluster-api.sigs.k8s.io/user/quick-start#initialize-the-management-cluster). We will proceed presuming you created a cluster with kind and initialized Cluster API with `clusterctl init`.
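
For reference, a minimal version of those two steps looks roughly like this (the kind cluster name is arbitrary; a bare `clusterctl init` installs just the core components):

```bash
# Create a local management cluster with kind
kind create cluster --name capi-test

# Install the core Cluster API components into it
clusterctl init
```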


Check the status of the manager to make sure it's running properly:

```bash
kubectl describe -n capi-system pod | grep -A 5 Conditions
```
```bash
Conditions:
  Type                          Status
  PodReadyToStartContainers     True
  Initialized                   True
  Ready                         True
  ContainersReady               True
```

```yaml
labels:
  cluster.x-k8s.io/provider: infrastructure-mailgun
```

If you're using kind for your management cluster, you can use the following commands to build your image and load it into the kind cluster's nodes. We use the `IMG` variable to override the default `controller:latest` image name with a specific tag like `controller:dev`, so that Kubernetes doesn't try to pull the latest version of `controller` from Docker Hub.

```bash
cd cluster-api-provider-mailgun

# Build the Docker image
make docker-build IMG=controller:dev

# Load the Docker image into the kind cluster
kind load docker-image controller:dev
```

Now you can apply your provider as well:

```bash
cd cluster-api-provider-mailgun

# Install CRD and controller to current kubectl context
make install deploy IMG=controller:dev

kubectl describe -n cluster-api-provider-mailgun-system pod | grep -A 5 Conditions
```

```text
Conditions:
  Type                          Status
  PodReadyToStartContainers     True
  Initialized                   True
  Ready                         True
  ContainersReady               True
```
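
If the pod isn't becoming ready, the manager logs are the first place to look. A sketch, assuming the kubebuilder-default deployment name (yours may differ):

```bash
kubectl logs -n cluster-api-provider-mailgun-system \
  deployment/cluster-api-provider-mailgun-controller-manager
```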

[label_prefix]: https://github.com/kubernetes-sigs/cluster-api/search?q=%22infrastructure-%22
```yaml
config:
  image: controller:latest # change to remote image name if desired
  label: CAPM
  live_reload_deps: ["main.go", "go.mod", "go.sum", "api", "controllers", "pkg"]
  go_main: cmd/main.go # kubebuilder puts main.go under the cmd directory
```

- Create file `tilt-settings.yaml` in the cluster-api directory:
```yaml
enable_providers:
- mailgun
```

- Bring Tilt up by running `make tilt-up` in the cluster-api directory. This ensures Tilt is set up correctly to use a local registry for your image. You may need to run `make tilt-clean` first if you've been using Tilt with other providers:

```bash
cd cluster-api
make tilt-up
```

As you might have noticed, we are reading variable values from a `ConfigMap` and a `Secret`.
You now have to add those to the manifest, but how do you inject configuration in production?
The convention many Cluster API projects use is environment variables.

`config/manager/credentials.yaml`

```yaml
---
# ...
```
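
As a sketch of what that injection can look like, a manager `Deployment` can pull values from a `Secret` into environment variables. The names and keys below are illustrative assumptions, not the file's actual contents:

```yaml
# Hypothetical sketch: exposing a Secret key as an environment variable
# on the manager container. All names here are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  template:
    spec:
      containers:
      - name: manager
        env:
        - name: MAILGUN_API_KEY
          valueFrom:
            secretKeyRef:
              name: mailgun-credentials
              key: api-key
```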
# Controllers and Reconciliation

Right now, you can create objects with your API types, but those objects don't make any impact on your mailgun infrastructure.
Let's fix that by implementing controllers and reconciliation for your API objects.

The [kubebuilder book][controller] has an excellent introduction to what controllers are and how reconciliation works.

Kubebuilder has created our first controller in `controllers/mailguncluster_controller.go`:

```go
// MailgunClusterReconciler reconciles a MailgunCluster object
type MailgunClusterReconciler struct {
	client.Client
	Scheme *runtime.Scheme
}

// +kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=mailgunclusters,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=mailgunclusters/status,verbs=get;update;patch

func (r *MailgunClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = logf.FromContext(ctx)

	// TODO(user): your logic here

	return ctrl.Result{}, nil
}
```
We're going to be sending mail, so let's add a few extra fields:

```go
// MailgunClusterReconciler reconciles a MailgunCluster object
type MailgunClusterReconciler struct {
	client.Client
	Scheme    *runtime.Scheme
	Mailgun   mailgun.Mailgun
	Recipient string
}
```
Here's a naive example:
```go
func (r *MailgunClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = ctrl.LoggerFrom(ctx)

	var cluster infrav1.MailgunCluster
	if err := r.Get(ctx, req.NamespacedName, &cluster); err != nil {
		return ctrl.Result{}, err
	}

	return ctrl.Result{}, nil
}
```

By returning an error, you request that your controller's `Reconcile()` gets called again.
That may not always be what you want - what if the object's been deleted? So let's check that:

```go
var mailgunCluster infrav1.MailgunCluster
if err := r.Get(ctx, req.NamespacedName, &mailgunCluster); err != nil {
	// import apierrors "k8s.io/apimachinery/pkg/api/errors"
	if apierrors.IsNotFound(err) {
		return ctrl.Result{}, nil
	}
	return ctrl.Result{}, err
}
```

Now, if this were any old `kubebuilder` project you'd be done, but in our case you have one more object to retrieve. While we defined our own cluster object (`MailgunCluster`) to represent all of the infrastructure-provider-specific details for our cluster, we also need to retrieve the upstream [`Cluster` object that is defined by Cluster API itself][cluster]. Luckily, Cluster API [provides a helper for us][getowner].

First, you'll need to import the cluster-api package into your project if you haven't done so yet:

```bash
# In your Mailgun repository's root directory
go get sigs.k8s.io/cluster-api
go mod tidy
```

Now we can add in a call to the `GetOwnerCluster` function to retrieve the cluster object:

```go
// import sigs.k8s.io/cluster-api/util
cluster, err := util.GetOwnerCluster(ctx, r.Client, mailgunCluster.ObjectMeta)
if err != nil {
	return ctrl.Result{}, err
}
```

If our cluster was just created, the Cluster API controller may not have set the ownership reference on our object yet. In that case we return early and wait, logging a message to note that we're waiting for the main Cluster API controller to set the ownership reference. Here's what our `Reconcile()` function looks like now:

```go
func (r *MailgunClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// We change the _ to `log` since we're going to log something now
	log := ctrl.LoggerFrom(ctx)

	var mailgunCluster infrav1.MailgunCluster
	if err := r.Get(ctx, req.NamespacedName, &mailgunCluster); err != nil {
		// import apierrors "k8s.io/apimachinery/pkg/api/errors"
		if apierrors.IsNotFound(err) {
			return ctrl.Result{}, nil
		}
		return ctrl.Result{}, err
	}

	// import sigs.k8s.io/cluster-api/util
	cluster, err := util.GetOwnerCluster(ctx, r.Client, mailgunCluster.ObjectMeta)
	if err != nil {
		return ctrl.Result{}, err
	}

	if cluster == nil {
		log.Info("Waiting for Cluster Controller to set OwnerRef on MailgunCluster")
		return ctrl.Result{}, nil
	}
```
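
For completeness, the scaffolded `SetupWithManager` that registers this reconciler with the manager looks roughly like this (kubebuilder generates the equivalent for you):

```go
// SetupWithManager sets up the controller with the Manager,
// watching MailgunCluster objects.
func (r *MailgunClusterReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&infrav1.MailgunCluster{}).
		Complete(r)
}
```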

### The fun part

_More Documentation: [The Kubebuilder Book][book] has some excellent documentation on many things, including [how to write good controllers!][implement]_
This is where your provider really comes into its own.
In our case, let's try sending some mail:

```go
subject := fmt.Sprintf("[%s] New Cluster %s requested", mailgunCluster.Spec.Priority, cluster.Name)
body := fmt.Sprintf("Hello! One cluster please.\n\n%s\n", mailgunCluster.Spec.Request)

msg := r.Mailgun.NewMessage(mailgunCluster.Spec.Requester, subject, body, r.Recipient)
_, _, err = r.Mailgun.Send(msg)
if err != nil {
return ctrl.Result{}, err
}
```

This is an important thing about controllers: they need to be idempotent. This means a controller must be able to repeat actions on the same inputs without changing the effect of those actions.
So in our case, we'll store the result of sending a message, and then check to see if we've sent one before.
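
This relies on a `MessageID` field on the status type. A minimal sketch of that field, assuming the naming used in this walkthrough (the json tag is illustrative):

```go
// MailgunClusterStatus defines the observed state of MailgunCluster.
type MailgunClusterStatus struct {
	// MessageID stores the ID of the mail we sent,
	// so reconciliation can skip re-sending it.
	// +optional
	MessageID *string `json:"messageID,omitempty"`
}
```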

```go
if mailgunCluster.Status.MessageID != nil {
	// We already sent a message, so skip reconciliation
	return ctrl.Result{}, nil
}

subject := fmt.Sprintf("[%s] New Cluster %s requested", mailgunCluster.Spec.Priority, cluster.Name)
body := fmt.Sprintf("Hello! One cluster please.\n\n%s\n", mailgunCluster.Spec.Request)

msg := r.Mailgun.NewMessage(mailgunCluster.Spec.Requester, subject, body, r.Recipient)
_, msgID, err := r.Mailgun.Send(msg)
if err != nil {
	return ctrl.Result{}, err
}

// patch from sigs.k8s.io/cluster-api/util/patch
helper, err := patch.NewHelper(&mailgunCluster, r.Client)
if err != nil {
	return ctrl.Result{}, err
}
mailgunCluster.Status.MessageID = &msgID
if err := helper.Patch(ctx, &mailgunCluster); err != nil {
	return ctrl.Result{}, errors.Wrapf(err, "couldn't patch cluster %q", mailgunCluster.Name)
}

return ctrl.Result{}, nil
```

Right now, the controller registration in `main.go` probably looks like this:
```go
if err = (&controllers.MailgunClusterReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "Unable to create controller", "controller", "MailgunCluster")
os.Exit(1)
}
```

We're going to use environment variables for this:

```go
// mg (a Mailgun client) and recipient are assumed to be initialized from
// environment variables earlier in main(); that setup is omitted here.

if err = (&controllers.MailgunClusterReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
Mailgun: mg,
Recipient: recipient,
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "Unable to create controller", "controller", "MailgunCluster")
os.Exit(1)
}
```
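
When running the manager locally (for example with `make run`), you'd export those variables first. The variable names below are assumptions for illustration; use whatever names your credentials manifest defines:

```bash
# Hypothetical variable names for local development
export MAILGUN_DOMAIN=example.mailgun.org
export MAILGUN_API_KEY=key-xxxxxxxx
export MAIL_RECIPIENT=ops@example.com

make run
```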