diff --git a/docs/book/src/developer/providers/getting-started/building-running-and-testing.md b/docs/book/src/developer/providers/getting-started/building-running-and-testing.md
index 26d5d611444f..8bc1fa398dbc 100644
--- a/docs/book/src/developer/providers/getting-started/building-running-and-testing.md
+++ b/docs/book/src/developer/providers/getting-started/building-running-and-testing.md
@@ -2,41 +2,20 @@
## Docker Image Name

-The patch in `config/manager/manager_image_patch.yaml` will be applied to the manager pod.
-Right now there is a placeholder `IMAGE_URL`, which you will need to change to your actual image.
-
-### Development Images
-It's likely that you will want one location and tag for release development, and another during development.
-
-The approach most Cluster API projects is using [a `Makefile` that uses `sed` to replace the image URL][sed] on demand during development.
-
-[sed]: https://github.com/kubernetes-sigs/cluster-api/blob/e0fb83a839b2755b14fbefbe6f93db9a58c76952/Makefile#L201-L204
-
-## Deployment
-
-### cert-manager
-
-Cluster API uses [cert-manager] to manage the certificates it needs for its webhooks.
-Before you apply Cluster API's yaml, you should [install `cert-manager`][cm-install]
-
-[cert-manager]: https://github.com/cert-manager/cert-manager
-[cm-install]: https://cert-manager.io/docs/installation/
+The IMG variable sets the name your controller image is built and tagged with. It defaults to `controller:latest`, a local image name; override it with a remote image name when you want to push the image to a registry:

```bash
-kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download//cert-manager.yaml
+make docker-push IMG=ghcr.io/your-org/your-repo:dev
```

+## Deployment
+
### Cluster API

Before you can deploy the infrastructure controller, you'll need to deploy Cluster API itself to the management cluster.
-You can use a precompiled manifest from the [release page][releases], run `clusterctl init`, or clone [`cluster-api`][capi] and apply its manifests using `kustomize`:
+Follow the [quick start guide](https://cluster-api.sigs.k8s.io/user/quick-start) up to and including the step of [creating the management cluster](https://cluster-api.sigs.k8s.io/user/quick-start#initialize-the-management-cluster). We will proceed assuming you created a cluster with kind and initialized Cluster API with `clusterctl init`.
-
-```bash
-cd cluster-api
-make envsubst
-kustomize build config/default | ./hack/tools/bin/envsubst | kubectl apply -f -
-```

Check the status of the manager to make sure it's running properly:

@@ -45,11 +24,11 @@ kubectl describe -n capi-system pod | grep -A 5 Conditions
```

```bash
Conditions:
-  Type              Status
-  Initialized       True
-  Ready             True
-  ContainersReady   True
-  PodScheduled      True
+  Type                        Status
+  PodReadyToStartContainers   True
+  Initialized                 True
+  Ready                       True
+  ContainersReady             True
```

[capi]: https://github.com/kubernetes-sigs/cluster-api
@@ -66,24 +45,36 @@ labels:
  cluster.x-k8s.io/provider: infrastructure-mailgun
```

+If you're using kind for your management cluster, you can use the following commands to build your image and load it directly onto the kind cluster's nodes. We need to use the IMG variable to override the default `controller:latest` image name with a specific tag like `controller:dev`; otherwise Kubernetes would try to pull the latest version of `controller` from Docker Hub instead of using the loaded image.
+
+```bash
+cd cluster-api-provider-mailgun
+
+# Build the Docker image
+make docker-build IMG=controller:dev
+
+# Load the Docker image into the kind cluster
+kind load docker-image controller:dev
+```
+
Now you can apply your provider as well:

```bash
cd cluster-api-provider-mailgun

# Install CRD and controller to current kubectl context
-make install deploy
+make install deploy IMG=controller:dev

kubectl describe -n cluster-api-provider-mailgun-system pod | grep -A 5 Conditions
```

```text
Conditions:
-  Type              Status
-  Initialized       True
-  Ready             True
-  ContainersReady   True
-  PodScheduled      True
+  Type                        Status
+  PodReadyToStartContainers   True
+  Initialized                 True
+  Ready                       True
+  ContainersReady             True
```

[label_prefix]: https://github.com/kubernetes-sigs/cluster-api/search?q=%22infrastructure-%22
@@ -102,6 +93,7 @@ config:
    image: controller:latest # change to remote image name if desired
    label: CAPM
    live_reload_deps: ["main.go", "go.mod", "go.sum", "api", "controllers", "pkg"]
+    go_main: cmd/main.go # kubebuilder puts main.go under the cmd directory
```

- Create file `tilt-settings.yaml` in the cluster-api directory:
@@ -116,15 +108,11 @@ enable_providers:
- mailgun
```

-- Create a kind cluster. By default, Tiltfile assumes the kind cluster is named `capi-test`.
+- Bring Tilt up with the `make tilt-up` command in the cluster-api directory. This ensures Tilt is set up correctly to use a local registry for your image. You may need to run `make tilt-clean` first if you've been using Tilt with other providers.
```bash
-kind create cluster --name capi-test
-
-# If you want a more sophisticated setup of kind cluster + image registry, try:
-# ---
-# cd cluster-api
-# hack/kind-install-for-capd.sh
+cd cluster-api
+make tilt-up
```

- Run `tilt up` in the cluster-api folder
diff --git a/docs/book/src/developer/providers/getting-started/configure-the-deployment.md b/docs/book/src/developer/providers/getting-started/configure-the-deployment.md
index c473feeb86c7..08c2f8d8763e 100644
--- a/docs/book/src/developer/providers/getting-started/configure-the-deployment.md
+++ b/docs/book/src/developer/providers/getting-started/configure-the-deployment.md
@@ -53,7 +53,7 @@ As you might have noticed, we are reading variable values from a `ConfigMap` and
You now have to add those to the manifest, but how to inject configuration in production?
The convention many Cluster-API projects use is environment variables.

-`config/manager/configuration.yaml`
+`config/manager/credentials.yaml`

```yaml
---
diff --git a/docs/book/src/developer/providers/getting-started/controllers-and-reconciliation.md b/docs/book/src/developer/providers/getting-started/controllers-and-reconciliation.md
index 7c179f966ccd..b31256b42e25 100644
--- a/docs/book/src/developer/providers/getting-started/controllers-and-reconciliation.md
+++ b/docs/book/src/developer/providers/getting-started/controllers-and-reconciliation.md
@@ -1,6 +1,6 @@
# Controllers and Reconciliation

-Right now, you can create objects with our API types, but those objects doesn't make any impact on your mailgun infrastrucrure.
+Right now, you can create objects with your API types, but those objects don't have any effect on your Mailgun infrastructure.
Let's fix that by implementing controllers and reconciliation for your API objects.
From the [kubebuilder book][controller]: @@ -25,17 +25,16 @@ Kubebuilder has created our first controller in `controllers/mailguncluster_cont // MailgunClusterReconciler reconciles a MailgunCluster object type MailgunClusterReconciler struct { client.Client - Log logr.Logger + Scheme *runtime.Scheme } // +kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=mailgunclusters,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=mailgunclusters/status,verbs=get;update;patch func (r *MailgunClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { - _ = context.Background() - _ = r.Log.WithValues("mailguncluster", req.NamespacedName) + _ = logf.FromContext(ctx) - // your logic here + // TODO(user): your logic here return ctrl.Result{}, nil } @@ -88,7 +87,7 @@ We're going to be sending mail, so let's add a few extra fields: // MailgunClusterReconciler reconciles a MailgunCluster object type MailgunClusterReconciler struct { client.Client - Log logr.Logger + Scheme *runtime.Scheme Mailgun mailgun.Mailgun Recipient string } @@ -102,7 +101,7 @@ Here's a naive example: ```go func (r *MailgunClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { ctx := context.Background() - _ = r.Log.WithValues("mailguncluster", req.NamespacedName) + _ = ctrl.LoggerFrom(ctx) var cluster infrav1.MailgunCluster if err := r.Get(ctx, req.NamespacedName, &cluster); err != nil { @@ -117,8 +116,8 @@ By returning an error, you request that our controller will get `Reconcile()` ca That may not always be what you want - what if the object's been deleted? 
So let's check that:

```go
-	var cluster infrav1.MailgunCluster
-	if err := r.Get(ctx, req.NamespacedName, &cluster); err != nil {
+	var mailgunCluster infrav1.MailgunCluster
+	if err := r.Get(ctx, req.NamespacedName, &mailgunCluster); err != nil {
		// import apierrors "k8s.io/apimachinery/pkg/api/errors"
		if apierrors.IsNotFound(err) {
			return ctrl.Result{}, nil
@@ -127,19 +126,57 @@ That may not always be what you want - what if the object's been deleted? So let
	}
```

-Now, if this were any old `kubebuilder` project you'd be done, but in our case you have one more object to retrieve.
-Cluster API splits a cluster into two objects: the [`Cluster` defined by Cluster API itself][cluster].
-We'll want to retrieve that as well.
-Luckily, cluster API [provides a helper for us][getowner].
+Now, if this were any old `kubebuilder` project you'd be done, but in our case you have one more object to retrieve. While we defined our own cluster object (`MailgunCluster`) that represents all the infrastructure-provider-specific details for our cluster, we also need to retrieve the upstream [`Cluster` object that is defined by Cluster API itself][cluster]. Luckily, Cluster API [provides a helper for us][getowner].
+
+First, you'll need to import the cluster-api package into your project if you haven't done so yet:
+
+```bash
+# In your Mailgun repository's root directory
+go get sigs.k8s.io/cluster-api
+go mod tidy
+```
+
+Now we can add a call to the `GetOwnerCluster` helper to retrieve the Cluster object:

```go
-	cluster, err := util.GetOwnerCluster(ctx, r.Client, &mg)
+	// import sigs.k8s.io/cluster-api/util
+	cluster, err := util.GetOwnerCluster(ctx, r.Client, mailgunCluster.ObjectMeta)
	if err != nil {
		return ctrl.Result{}, err
	}
```

+If our cluster was just created, the Cluster API controller may not have set the ownership reference on our object yet, so we'll have to return here and try again once it has.
We can leave a log message noting that we're waiting for the main Cluster API controller to set the ownership reference. Here's what our `Reconcile()` function looks like now:
+
+```go
+func (r *MailgunClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
+	// We change the _ to `log` since we're going to log something now
+	log := ctrl.LoggerFrom(ctx)
+
+	var mailgunCluster infrav1.MailgunCluster
+	if err := r.Get(ctx, req.NamespacedName, &mailgunCluster); err != nil {
+		// import apierrors "k8s.io/apimachinery/pkg/api/errors"
+		if apierrors.IsNotFound(err) {
+			return ctrl.Result{}, nil
+		}
+		return ctrl.Result{}, err
+	}
+
+	// import sigs.k8s.io/cluster-api/util
+	cluster, err := util.GetOwnerCluster(ctx, r.Client, mailgunCluster.ObjectMeta)
+	if err != nil {
+		return ctrl.Result{}, err
+	}
+
+	if cluster == nil {
+		log.Info("Waiting for Cluster Controller to set OwnerRef on MailgunCluster")
+		return ctrl.Result{}, nil
+	}
+
+	return ctrl.Result{}, nil
+}
+```
+
### The fun part

_More Documentation: [The Kubebuilder Book][book] has some excellent documentation on many things, including [how to write good controllers!][implement]_
@@ -152,10 +189,10 @@ This is where your provider really comes into its own.
In our case, let's try sending some mail:

```go
-subject := fmt.Sprintf("[%s] New Cluster %s requested", mgCluster.Spec.Priority, cluster.Name)
-body := fmt.Sprint("Hello! One cluster please.\n\n%s\n", mgCluster.Spec.Request)
+subject := fmt.Sprintf("[%s] New Cluster %s requested", mailgunCluster.Spec.Priority, cluster.Name)
+body := fmt.Sprintf("Hello!
One cluster please.\n\n%s\n", mailgunCluster.Spec.Request)

-msg := mailgun.NewMessage(mgCluster.Spec.Requester, subject, body, r.Recipient)
+msg := r.Mailgun.NewMessage(mailgunCluster.Spec.Requester, subject, body, r.Recipient)
_, _, err = r.Mailgun.Send(msg)
if err != nil {
	return ctrl.Result{}, err
@@ -172,28 +209,28 @@ This is an important thing about controllers: they need to be idempotent. This m
So in our case, we'll store the result of sending a message, and then check to see if we've sent one before.

```go
-	if mgCluster.Status.MessageID != nil {
+	if mailgunCluster.Status.MessageID != nil {
		// We already sent a message, so skip reconciliation
		return ctrl.Result{}, nil
	}

-	subject := fmt.Sprintf("[%s] New Cluster %s requested", mgCluster.Spec.Priority, cluster.Name)
-	body := fmt.Sprintf("Hello! One cluster please.\n\n%s\n", mgCluster.Spec.Request)
+	subject := fmt.Sprintf("[%s] New Cluster %s requested", mailgunCluster.Spec.Priority, cluster.Name)
+	body := fmt.Sprintf("Hello! One cluster please.\n\n%s\n", mailgunCluster.Spec.Request)

-	msg := mailgun.NewMessage(mgCluster.Spec.Requester, subject, body, r.Recipient)
+	msg := r.Mailgun.NewMessage(mailgunCluster.Spec.Requester, subject, body, r.Recipient)
	_, msgID, err := r.Mailgun.Send(msg)
	if err != nil {
		return ctrl.Result{}, err
	}

	// patch from sigs.k8s.io/cluster-api/util/patch
-	helper, err := patch.NewHelper(&mgCluster, r.Client)
+	helper, err := patch.NewHelper(&mailgunCluster, r.Client)
	if err != nil {
		return ctrl.Result{}, err
	}
-	mgCluster.Status.MessageID = &msgID
-	if err := helper.Patch(ctx, &mgCluster); err != nil {
-		return ctrl.Result{}, errors.Wrapf(err, "couldn't patch cluster %q", mgCluster.Name)
+	mailgunCluster.Status.MessageID = &msgID
+	if err := helper.Patch(ctx, &mailgunCluster); err != nil {
+		return ctrl.Result{}, errors.Wrapf(err, "couldn't patch cluster %q", mailgunCluster.Name)
	}

	return ctrl.Result{}, nil
@@ -223,7 +260,7 @@ Right now, it probably looks like this:
```go
	if err = 
(&controllers.MailgunClusterReconciler{
		Client: mgr.GetClient(),
-		Log:    ctrl.Log.WithName("controllers").WithName("MailgunCluster"),
+		Scheme: mgr.GetScheme(),
	}).SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "Unable to create controller", "controller", "MailgunCluster")
		os.Exit(1)
@@ -256,7 +293,7 @@ We're going to use environment variables for this:

	if err = (&controllers.MailgunClusterReconciler{
		Client:    mgr.GetClient(),
-		Log:       ctrl.Log.WithName("controllers").WithName("MailgunCluster"),
+		Scheme:    mgr.GetScheme(),
		Mailgun:   mg,
		Recipient: recipient,
	}).SetupWithManager(mgr); err != nil {
diff --git a/docs/book/src/developer/providers/getting-started/implement-api-types.md b/docs/book/src/developer/providers/getting-started/implement-api-types.md
index 5f4b9e5ad8d2..275d4d0524bf 100644
--- a/docs/book/src/developer/providers/getting-started/implement-api-types.md
+++ b/docs/book/src/developer/providers/getting-started/implement-api-types.md
@@ -41,6 +41,9 @@ const (

// MailgunClusterSpec defines the desired state of MailgunCluster
type MailgunClusterSpec struct {
+	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
+	// Important: Run "make" to regenerate code after modifying this file
+
	// Priority is how quickly you need this cluster
	Priority Priority `json:"priority"`
	// Request is where you ask extra nicely
@@ -51,12 +54,15 @@ type MailgunClusterSpec struct {

// MailgunClusterStatus defines the observed state of MailgunCluster
type MailgunClusterStatus struct {
+	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
+	// Important: Run "make" to regenerate code after modifying this file
+
	// MessageID is set to the message ID from Mailgun when our message has been sent
	MessageID *string `json:"response"`
}
```

-As the deleted comments request, run `make manager manifests` to regenerate some of the generated data files afterwards.
+As the comments request, run `make manager manifests` to regenerate some of the generated data files afterwards. ```bash git add . @@ -69,13 +75,19 @@ To enable clients to encode and decode your API, your types must be able to be r [scheme]: https://pkg.go.dev/k8s.io/apimachinery/pkg/runtime#Scheme -By default, Kubebuilder will provide you with a scheme builder like: +By default, Kubebuilder will provide you with a scheme builder (likely in `api/v1alpha1/groupversion_info.go`) like: ```go -import "sigs.k8s.io/controller-runtime/pkg/scheme" +import ( + "k8s.io/apimachinery/pkg/runtime/schema" + "sigs.k8s.io/controller-runtime/pkg/scheme" +) var ( - // SchemeBuilder is used to add go types to the GroupVersionKind scheme + // GroupVersion is group version used to register these objects. + GroupVersion = schema.GroupVersion{Group: "infrastructure.cluster.x-k8s.io", Version: "v1alpha1"} + + // SchemeBuilder is used to add go types to the GroupVersionKind scheme. SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion} // AddToScheme adds the types in this group-version to the given scheme. @@ -83,11 +95,11 @@ var ( ) ``` -and scheme registration that looks like: +and scheme registration (likely in `api/v1alpha1/*_types.go`) that looks like: ```go func init() { - SchemeBuilder.Register(&Captain{}, &CaptainList{}) + SchemeBuilder.Register(&MailgunCluster{}, &MailgunClusterList{}) } ``` @@ -99,10 +111,17 @@ to be imported cleanly into other projects. To mitigate this, use the following schemebuilder pattern: ```go -import "k8s.io/apimachinery/pkg/runtime" +import ( + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/runtime/schema" +) var ( - // schemeBuilder is used to add go types to the GroupVersionKind scheme. + // GroupVersion is group version used to register these objects. 
+	GroupVersion = schema.GroupVersion{Group: "infrastructure.cluster.x-k8s.io", Version: "v1alpha1"}
+
+	// schemeBuilder is used to add go types to the GroupVersionKind scheme.
	schemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)

	// AddToScheme adds the types in this group-version to the given scheme.
@@ -122,7 +141,7 @@ and register types as below:

```go
func init() {
-	objectTypes = append(objectTypes, &Captain{}, &CaptainList{})
+	objectTypes = append(objectTypes, &MailgunCluster{}, &MailgunClusterList{})
}
```
diff --git a/docs/book/src/developer/providers/getting-started/initialize-repo-and-api-types.md b/docs/book/src/developer/providers/getting-started/initialize-repo-and-api-types.md
index a64a365737e2..b8556f468931 100644
--- a/docs/book/src/developer/providers/getting-started/initialize-repo-and-api-types.md
+++ b/docs/book/src/developer/providers/getting-started/initialize-repo-and-api-types.md
@@ -3,7 +3,7 @@
## Create a repository

```bash
-mkdir cluster-api-provider-mailgun
+mkdir -p src/sigs.k8s.io/cluster-api-provider-mailgun
cd src/sigs.k8s.io/cluster-api-provider-mailgun
git init
```
@@ -38,6 +38,7 @@ The domain for Cluster API resources is `cluster.x-k8s.io`, and infrastructure p
Commit your changes so far:

```bash
+git add .
git commit -m "Generate scaffolding."
```
@@ -75,29 +76,6 @@ Create Controller under pkg/controller [y/n]? y
```

-### Add Status subresource
-
-The [status subresource][status] lets Spec and Status requests for custom resources be addressed separately so requests don't conflict with each other.
-It also lets you split RBAC rules between Spec and Status. You will have to [manually enable it in Kubebuilder][kbstatus].
- -Add the `subresource:status` annotation to your `cluster_types.go` `machine_types.go` - -```go -// +kubebuilder:subresource:status -// +kubebuilder:object:root=true - -// MailgunCluster is the Schema for the mailgunclusters API -type MailgunCluster struct { -``` - -```go -// +kubebuilder:subresource:status -// +kubebuilder:object:root=true - -// MailgunMachine is the Schema for the mailgunmachines API -type MailgunMachine struct { -``` - And regenerate the CRDs: ```bash make manifests @@ -110,9 +88,6 @@ git add . git commit -m "Generate Cluster and Machine resources." ``` -[status]: https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#status-subresource -[kbstatus]: https://book.kubebuilder.io/reference/generating-crd.html?highlight=status#status - ### Apply further customizations The cluster API CRDs should be further customized, please refer to [provider contracts](../contracts/overview.md).