📖 Improve comments and fix typos for cronjob tutorial #4724

Open
wants to merge 1 commit into master
23 changes: 11 additions & 12 deletions hack/docs/internal/cronjob-tutorial/api_design.go
@@ -34,12 +34,11 @@ const cronjobSpecExplaination = `

- A deadline for starting jobs (if we miss this deadline, we'll just wait till
the next scheduled time)
- What to do if multiple jobs would run at once (do we wait? stop the old one? run both?)
- What to do if multiple jobs would run at once (do we wait? stop the old one? or run both?)
- A way to pause the running of a CronJob, in case something's wrong with it
- Limits on old job history

Remember, since we never read our own status, we need to have some other way to
keep track of whether a job has run. We can use at least one old job to do
Remember, since we never read our status, we need to have another way to track whether a job has run. We can use at least one old job to do
this.

We'll use several markers (` + "`" + `// +comment` + "`" + `) to specify additional metadata. These
@@ -65,8 +64,8 @@ const cronjobSpecStruct = `
// Specifies how to treat concurrent executions of a Job.
// Valid values are:
// - "Allow" (default): allows CronJobs to run concurrently;
// - "Forbid": forbids concurrent runs, skipping next run if previous run hasn't finished yet;
// - "Replace": cancels currently running job and replaces it with a new one
// - "Forbid": forbids concurrent runs, skipping the next run if the previous run hasn't finished yet;
// - "Replace": cancels the currently running job and replaces it with a new one
// +optional
ConcurrencyPolicy ConcurrencyPolicy` + " `" + `json:"concurrencyPolicy,omitempty"` + "`" + `

@@ -80,15 +79,15 @@ const cronjobSpecStruct = `

// +kubebuilder:validation:Minimum=0

// The number of successful finished jobs to retain.
// This is a pointer to distinguish between explicit zero and not specified.
// The number of successfully finished jobs to retain.
// This pointer distinguishes between explicit zero and not specified.
// +optional
SuccessfulJobsHistoryLimit *int32` + " `" + `json:"successfulJobsHistoryLimit,omitempty"` + "`" + `

// +kubebuilder:validation:Minimum=0

// The number of failed finished jobs to retain.
// This is a pointer to distinguish between explicit zero and not specified.
// This pointer distinguishes between explicit zero and not specified.
// +optional
FailedJobsHistoryLimit *int32` + " `" + `json:"failedJobsHistoryLimit,omitempty"` + "`" + `
}
@@ -111,16 +110,16 @@ const (
// AllowConcurrent allows CronJobs to run concurrently.
AllowConcurrent ConcurrencyPolicy = "Allow"

// ForbidConcurrent forbids concurrent runs, skipping next run if previous
// ForbidConcurrent forbids concurrent runs, skipping the next run if the previous
// hasn't finished yet.
ForbidConcurrent ConcurrencyPolicy = "Forbid"

// ReplaceConcurrent cancels currently running job and replaces it with a new one.
// ReplaceConcurrent cancels the currently running job and replaces it with a new one.
ReplaceConcurrent ConcurrencyPolicy = "Replace"
)

/*
Next, let's design our status, which holds observed state. It contains any information
Next, let's design our status, which holds the observed state. It contains any information
we want users or other controllers to be able to easily obtain.

We'll keep a list of actively running jobs, as well as the last time that we successfully
@@ -134,7 +133,7 @@ const cronjobList = `
// +optional
Active []corev1.ObjectReference` + " `" + `json:"active,omitempty"` + "`" + `

// Information when was the last time the job was successfully scheduled.
// Information about when the job was last successfully scheduled.
// +optional
LastScheduleTime *metav1.Time` + " `" + `json:"lastScheduleTime,omitempty"` + "`" + `
}
32 changes: 16 additions & 16 deletions hack/docs/internal/cronjob-tutorial/controller_implementation.go
@@ -20,7 +20,7 @@ const controllerIntro = `
// +kubebuilder:docs-gen:collapse=Apache License

/*
We'll start out with some imports. You'll see below that we'll need a few more imports
We'll start with some imports. You'll see below that we'll need a few more imports
than those scaffolded for us. We'll talk about each one when we use it.
*/`
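
For orientation, the import block this builds up to looks roughly like the sketch below. The aliases (kbatch for the built-in Job API, batchv1 for the tutorial's own CronJob group) follow the tutorial's conventions, and the module path and exact list are illustrative rather than authoritative:

import (
	"context"
	"fmt"
	"sort"
	"time"

	"github.com/robfig/cron"
	kbatch "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ref "k8s.io/client-go/tools/reference"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	logf "sigs.k8s.io/controller-runtime/pkg/log"

	// batchv1 is the tutorial's own API group; the module path here is a placeholder.
	batchv1 "tutorial.kubebuilder.io/project/api/v1"
)
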

@@ -104,7 +104,7 @@ const controllerReconcileLogic = `log := logf.FromContext(ctx)
var cronJob batchv1.CronJob
if err := r.Get(ctx, req.NamespacedName, &cronJob); err != nil {
log.Error(err, "unable to fetch CronJob")
// we'll ignore not-found errors, since they can't be fixed by an immediate
// We'll ignore not-found errors, since they can't be fixed by an immediate
// requeue (we'll need to wait for a new notification), and we can get them
// on deleted requests.
return ctrl.Result{}, client.IgnoreNotFound(err)
@@ -115,7 +115,7 @@ const controllerReconcileLogic = `log := logf.FromContext(ctx)

To fully update our status, we'll need to list all child jobs in this namespace that belong to this CronJob.
Similarly to Get, we can use the List method to list the child jobs. Notice that we use variadic options to
set the namespace and field match (which is actually an index lookup that we set up below).
set the namespace and field match (which is an index lookup that we set up below).
*/
var childJobs kbatch.JobList
if err := r.List(ctx, &childJobs, client.InNamespace(req.Namespace), client.MatchingFields{jobOwnerKey: req.Name}); err != nil {
@@ -133,7 +133,7 @@ const controllerReconcileLogic = `log := logf.FromContext(ctx)
looking these up can become quite slow as we have to filter through all of them. For a more efficient lookup,
these jobs will be indexed locally on the controller's name. A jobOwnerKey field is added to the
cached job objects. This key references the owning controller and functions as the index. Later in this
document we will configure the manager to actually index this field.</p>
document we will configure the manager to index this field.</p>

</aside>

@@ -148,7 +148,7 @@ const controllerReconcileLogic = `log := logf.FromContext(ctx)
conditions. We'll put that logic in a helper to make our code cleaner.
*/

// find the active list of jobs
// Find the active list of jobs
var activeJobs []*kbatch.Job
var successfulJobs []*kbatch.Job
var failedJobs []*kbatch.Job
@@ -242,7 +242,7 @@ const controllerReconcileLogic = `log := logf.FromContext(ctx)
subresource, we'll use the` + " `" + `Status` + "`" + ` part of the client, with the` + " `" + `Update` + "`" + `
method.

The status subresource ignores changes to spec, so it's less likely to conflict
The status subresource ignores changes to the spec, so it's less likely to conflict
with any other updates, and can have separate permissions.
*/
if err := r.Status().Update(ctx, &cronJob); err != nil {
@@ -368,7 +368,7 @@ const controllerReconcileLogic = `log := logf.FromContext(ctx)
// all start running with no further intervention (if the scheduledJob
// allows concurrency and late starts).
//
// However, if there is a bug somewhere, or incorrect clock
// However, if there is a bug somewhere, or an incorrect clock
// on controller's server or apiservers (for setting creationTimestamp)
// then there could be so many missed start times (it could be off
// by decades or more), that it would eat up all the CPU and memory
@@ -384,19 +384,19 @@ const controllerReconcileLogic = `log := logf.FromContext(ctx)
}
// +kubebuilder:docs-gen:collapse=getNextSchedule

// figure out the next times that we need to create
// Figure out the next times that we need to create
// jobs at (or anything we missed).
missedRun, nextRun, err := getNextSchedule(&cronJob, r.Now())
if err != nil {
log.Error(err, "unable to figure out CronJob schedule")
// we don't really care about requeuing until we get an update that
// we don't care about requeuing until we get an update that
// fixes the schedule, so don't return an error
return ctrl.Result{}, nil
}

/*
We'll prep our eventual request to requeue until the next job, and then figure
out if we actually need to run.
out if we need to run.
*/
scheduledResult := ctrl.Result{RequeueAfter: nextRun.Sub(r.Now())} // save this so we can re-use it elsewhere
log = log.WithValues("now", r.Now(), "next run", nextRun)
@@ -424,7 +424,7 @@ const controllerReconcileLogic = `log := logf.FromContext(ctx)
}

/*
If we actually have to run a job, we'll need to either wait till existing ones finish,
If we have to run a job, we'll need to either wait till existing ones finish,
replace the existing ones, or just add new ones. If our information is out of date due
to cache delay, we'll get a requeue when we get up-to-date information.
*/
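
A minimal sketch of that handling, assuming the cronJob, activeJobs, scheduledResult, log, and r names from the surrounding reconcile code, could look like this:

// Sketch only: runs before creating a new Job, using the scope shown above.
if cronJob.Spec.ConcurrencyPolicy == batchv1.ForbidConcurrent && len(activeJobs) > 0 {
	// Forbid: an older Job is still running, so skip this run and rely on
	// the requeue we already scheduled.
	log.V(1).Info("concurrency policy blocks concurrent runs, skipping", "num active", len(activeJobs))
	return scheduledResult, nil
}

if cronJob.Spec.ConcurrencyPolicy == batchv1.ReplaceConcurrent {
	// Replace: delete the active Jobs so the new one can take their place.
	for _, activeJob := range activeJobs {
		// It doesn't matter if the Job is already gone, so ignore not-found errors.
		if err := client.IgnoreNotFound(r.Delete(ctx, activeJob, client.PropagationPolicy(metav1.DeletePropagationBackground))); err != nil {
			log.Error(err, "unable to delete active job", "job", activeJob)
			return ctrl.Result{}, err
		}
	}
}
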
@@ -447,7 +447,7 @@ const controllerReconcileLogic = `log := logf.FromContext(ctx)
}

/*
Once we've figured out what to do with existing jobs, we'll actually create our desired job
Once we've figured out what to do with existing jobs, we'll create our desired job
*/

/*
Expand Down Expand Up @@ -489,7 +489,7 @@ const controllerReconcileLogic = `log := logf.FromContext(ctx)
}
// +kubebuilder:docs-gen:collapse=constructJobForCronJob

// actually make the job...
// make the job...
job, err := constructJobForCronJob(&cronJob, missedRun)
if err != nil {
log.Error(err, "unable to construct job from template")
@@ -520,11 +520,11 @@ const controllerReconcileLogic = `log := logf.FromContext(ctx)
/*
### Setup

Finally, we'll update our setup. In order to allow our reconciler to quickly
look up Jobs by their owner, we'll need an index. We declare an index key that
Finally, we'll update our setup. To allow our reconciler to quickly
look up Jobs by their owner, we'll need an index. We declare an index key that
we can later use with the client as a pseudo-field name, and then describe how to
extract the indexed value from the Job object. The indexer will automatically take
care of namespaces for us, so we just have to extract the owner name if the Job has
care of namespaces for us, so we just have to extract the owner's name if the Job has
a CronJob owner.

Additionally, we'll inform the manager that this controller owns some Jobs, so that it
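
Putting both pieces together, the setup described above comes out looking roughly like the sketch below; jobOwnerKey, apiGVStr, and the receiver type mirror the tutorial's names, but treat the exact shape as illustrative:

var (
	// jobOwnerKey is the pseudo-field name we'll use with client.MatchingFields.
	jobOwnerKey = ".metadata.controller"
	apiGVStr    = batchv1.GroupVersion.String()
)

func (r *CronJobReconciler) SetupWithManager(mgr ctrl.Manager) error {
	// Index cached Jobs by the name of their owning CronJob so the List call
	// earlier can filter with client.MatchingFields{jobOwnerKey: req.Name}.
	if err := mgr.GetFieldIndexer().IndexField(context.Background(), &kbatch.Job{}, jobOwnerKey,
		func(rawObj client.Object) []string {
			job := rawObj.(*kbatch.Job)
			owner := metav1.GetControllerOf(job)
			if owner == nil {
				return nil
			}
			// Only index Jobs whose controller is a CronJob from our API group.
			if owner.APIVersion != apiGVStr || owner.Kind != "CronJob" {
				return nil
			}
			return []string{owner.Name}
		}); err != nil {
		return err
	}

	return ctrl.NewControllerManagedBy(mgr).
		For(&batchv1.CronJob{}).
		Owns(&kbatch.Job{}). // reconcile the CronJob whenever one of its Jobs changes
		Complete(r)
}
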
4 changes: 2 additions & 2 deletions hack/docs/internal/cronjob-tutorial/e2e_implementation.go
@@ -17,7 +17,7 @@ limitations under the License.
package cronjob

const isPrometheusInstalledVar = `
// isPrometheusOperatorAlreadyInstalled will be set true when prometheus CRDs be found on the cluster
// isPrometheusOperatorAlreadyInstalled will be set to true when prometheus CRDs are found on the cluster
isPrometheusOperatorAlreadyInstalled = false
`

@@ -36,7 +36,7 @@ if !isPrometheusOperatorAlreadyInstalled {

const checkPrometheusInstalled = `
// To prevent errors when tests run in environments with Prometheus already installed,
// we check for its presence before execution.
// We check for its presence before execution.
// Setup Prometheus before the suite if not already installed
By("checking if prometheus is installed already")
isPrometheusOperatorAlreadyInstalled = utils.IsPrometheusCRDsInstalled()
10 changes: 5 additions & 5 deletions hack/docs/internal/cronjob-tutorial/generate_cronjob.go
@@ -128,7 +128,7 @@ func (sp *Sample) UpdateTutorial() {
sp.updateE2E()
}

// CodeGen is a noop for this sample, just to make generation of all samples
// CodeGen is a noop for this sample, just to make the generation of all samples
// more efficient. We may want to refactor `UpdateTutorial` some day to take
// advantage of a separate call, but it is not necessary.
func (sp *Sample) CodeGen() {
@@ -186,7 +186,7 @@ func (sp *Sample) updateSpec() {

err = pluginutil.ReplaceInFile(
filepath.Join(sp.ctx.Dir, "api/v1/cronjob_types.go"),
`// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
`// INSERT ADDITIONAL SPEC FIELDS - desired state of the cluster
// Important: Run "make" to regenerate code after modifying this file

// Foo is an example field of CronJob. Edit cronjob_types.go to remove/update
@@ -379,8 +379,8 @@ manifests: controller-gen ## Generate WebhookConfiguration, ClusterRole and Cust
`
const changedManifestTarget = `.PHONY: manifests
manifests: controller-gen ## Generate WebhookConfiguration, ClusterRole and CustomResourceDefinition objects.
# Note that the option maxDescLen=0 was added in the default scaffold in order to sort out the issue
# Too long: must have at most 262144 bytes. By using kubectl apply to create / update resources an annotation
# Note that the option maxDescLen=0 was added in the default scaffold to sort out the issue
# Too long: must have at most 262144 bytes. By using kubectl apply to create/update resources an annotation
# is created by K8s API to store the latest version of the resource ( kubectl.kubernetes.io/last-applied-configuration).
# However, it has a size limit and if the CRD is too big with so many long descriptions as this one it will cause the failure.
$(CONTROLLER_GEN) rbac:roleName=manager-role crd:maxDescLen=0 webhook paths="./..." output:crd:artifacts:config=config/crd/bases
@@ -602,7 +602,7 @@ var _ = AfterSuite(func() {
Expect(err).NotTo(HaveOccurred())
})
`, suiteTestCleanup)
hackutils.CheckError("updating suite_test.go to cleanup tests", err)
hackutils.CheckError("updating suite_test.go to clean up tests", err)
}

func (sp *Sample) updateKustomization() {
2 changes: 1 addition & 1 deletion hack/docs/internal/cronjob-tutorial/main_revisited.go
@@ -25,7 +25,7 @@ group's package (` + "`" + `batchv1` + "`" + `) to our scheme. This means that
objects in our controller.

If we would be using any other CRD we would have to add their scheme the same way.
Builtin types such as Job have their scheme added by` + " `" + `clientgoscheme` + "`" + `.
Built-in types such as Job have their scheme added by` + " `" + `clientgoscheme` + "`" + `.
*/`
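
As a sketch of what that scheme wiring looks like in main.go, assuming the usual utilruntime and clientgoscheme aliases (for k8s.io/apimachinery/pkg/util/runtime and k8s.io/client-go/kubernetes/scheme) and the tutorial's batchv1 API package:

var scheme = runtime.NewScheme()

func init() {
	// Built-in types such as Job come in through client-go's scheme.
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))
	// Our own CronJob types have to be added explicitly.
	utilruntime.Must(batchv1.AddToScheme(scheme))
}
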

const mainEnableWebhook = `
2 changes: 1 addition & 1 deletion hack/docs/internal/cronjob-tutorial/sample.go
@@ -19,7 +19,7 @@ package cronjob
const cronjobSample = `
schedule: "*/1 * * * *"
startingDeadlineSeconds: 60
concurrencyPolicy: Allow # explicitly specify, but Allow is also default.
concurrencyPolicy: Allow # explicitly specify, but Allow is also the default.
jobTemplate:
spec:
template:
@@ -60,7 +60,7 @@ import (
// +kubebuilder:docs-gen:collapse=Imports

/*
The first step to writing a simple integration test is to actually create an instance of CronJob you can run tests against.
The first step to writing a simple integration test is to create an instance of CronJob you can run tests against.
Note that to create a CronJob, you’ll need to create a stub CronJob struct that contains your CronJob’s specifications.

Note that when we create a stub CronJob, the CronJob also needs stubs of its required downstream objects.
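
A stripped-down stub along those lines might look like the sketch below, where cronjobv1 stands for the tutorial's API package, batchv1 and corev1 are the built-in Job and core APIs, and the names, namespace, and schedule are arbitrary:

cronJob := &cronjobv1.CronJob{
	ObjectMeta: metav1.ObjectMeta{Name: "test-cronjob", Namespace: "default"},
	Spec: cronjobv1.CronJobSpec{
		Schedule: "1 * * * *",
		// The JobTemplate and its PodTemplateSpec are the required
		// downstream stubs mentioned above.
		JobTemplate: batchv1.JobTemplateSpec{
			Spec: batchv1.JobSpec{
				Template: corev1.PodTemplateSpec{
					Spec: corev1.PodSpec{
						RestartPolicy: corev1.RestartPolicyOnFailure,
						Containers: []corev1.Container{
							{Name: "test-container", Image: "test-image"},
						},
					},
				},
			},
		},
	},
}
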
@@ -123,7 +123,7 @@ var _ = Describe("CronJob controller", func() {
` +
"`" + `Eventually()` + "`" + ` will repeatedly run the function provided as an argument every interval seconds until
(a) the assertions done by the passed-in ` + "`" + `Gomega` + "`" + ` succeed, or
(b) the number of attempts * interval period exceed the provided timeout value.
(b) the number of attempts * interval period exceeds the provided timeout value.

In the examples below, timeout and interval are Go Duration values of our choosing.
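
For instance, a check in that style might look like the following sketch; k8sClient and ctx come from the suite setup, cronjobLookupKey is a hypothetical types.NamespacedName for the stub above, and timeout and interval are the Durations just mentioned:

createdCronjob := &cronjobv1.CronJob{}
// Retry the Get (and the assertions) until they pass or the timeout expires.
Eventually(func(g Gomega) {
	g.Expect(k8sClient.Get(ctx, cronjobLookupKey, createdCronjob)).To(Succeed())
	g.Expect(createdCronjob.Spec.Schedule).To(Equal("1 * * * *"))
}, timeout, interval).Should(Succeed())
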
*/
Expand All @@ -140,7 +140,7 @@ var _ = Describe("CronJob controller", func() {
/*
Now that we've created a CronJob in our test cluster, the next step is to write a test that actually tests our CronJob controller’s behavior.
Let’s test the CronJob controller’s logic responsible for updating CronJob.Status.Active with actively running jobs.
We’ll verify that when a CronJob has a single active downstream Job, its CronJob.Status.Active field contains a reference to this Job.
We’ll verify that when a CronJob has a single active downstream Job, its CronJob.Status.Active field contains a reference to this Job.

First, we should get the test CronJob we created earlier, and verify that it currently does not have any active jobs.
We use Gomega's` + " `" + `Consistently()` + "`" + ` check here to ensure that the active job count remains 0 over a duration of time.
@@ -151,7 +151,7 @@ var _ = Describe("CronJob controller", func() {
g.Expect(createdCronjob.Status.Active).To(BeEmpty())
}, duration, interval).Should(Succeed())
/*
Next, we actually create a stubbed Job that will belong to our CronJob, as well as its downstream template specs.
Next, we create a stubbed Job that will belong to our CronJob, as well as its downstream template specs.
We set the Job's status's "Active" count to 2 to simulate the Job running two pods, which means the Job is actively running.

We then take the stubbed Job and set its owner reference to point to our test CronJob.
6 changes: 3 additions & 3 deletions hack/docs/internal/cronjob-tutorial/writing_tests_env.go
@@ -69,7 +69,7 @@ const suiteTestClient = `

const suiteTestDescription = `
/*
One thing that this autogenerated file is missing, however, is a way to actually start your controller.
One thing that this autogenerated file is missing, however, is a way to start your controller.
The code above will set up a client for interacting with your custom Kind,
but will not be able to test your controller behavior.
If you want to test your custom controller logic, you’ll need to add some familiar-looking manager logic
@@ -87,7 +87,7 @@ const suiteTestDescription = `
and it'd be easy to make mistakes.

Note that we keep the reconciler running against the manager's cache client, though -- we want our controller to
behave as it would in production, and we use features of the cache (like indices) in our controller which aren't
behave as it would in production, and we use features of the cache (like indices) in our controller that aren't
available when talking directly to the API server.
*/
k8sManager, err := ctrl.NewManager(cfg, ctrl.Options{
@@ -110,7 +110,7 @@ const suiteTestDescription = `

const suiteTestCleanup = `
/*
Kubebuilder also generates boilerplate functions for cleaning up envtest and actually running your test files in your controllers/ directory.
Kubebuilder also generates boilerplate functions for cleaning up envtest and running your test files in your controllers/ directory.
You won't need to touch these.
*/
