Reconcile by merge instead of replace #1907

Closed
jcechace opened this issue May 18, 2023 · 7 comments

@jcechace
Contributor

Is there an easy way to achieve "merge" updates on a managed dependent resource? If I understand correctly, using the desired() method will reconcile the resource into exactly the state of the returned resource (so essentially a replace). However, I want to keep all the other fields intact. Example:

  1. A dependent Deployment is created. I DO NOT set the imagePullPolicy field, so the default value of IfNotPresent is used. I DO set the image field to example:2.2.
  2. I manually edit the Deployment and change the imagePullPolicy field to Always, as well as the image to example:1.1.
  3. After reconciliation, I want the image field to be reconciled back, but the imagePullPolicy field to be kept intact.

Currently I am returning a new Deployment instance from the desired() method. What I can think of is pulling the current instance of the Deployment out of the context and editing that instead of creating a new one. Would that be OK in terms of race conditions?

Although this goes a bit against the principle of reconciling into exactly the desired state, it provides users with better control over tweaking the created resources (e.g. mounting an additional volume or setting additional environment variables), especially in the early stages of operator development, when things such as the resource templates in the Strimzi operator are not yet supported.
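
For illustration, a minimal sketch of the setup described above, assuming a JOSDK managed dependent resource; the primary resource type `MyApp` and the class name are hypothetical:

```java
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder;
import io.javaoperatorsdk.operator.api.reconciler.Context;
import io.javaoperatorsdk.operator.processing.dependent.kubernetes.CRUDKubernetesDependentResource;

// "MyApp" is a hypothetical primary resource; names here are illustrative only.
public class ExampleDeploymentResource
    extends CRUDKubernetesDependentResource<Deployment, MyApp> {

  public ExampleDeploymentResource() {
    super(Deployment.class);
  }

  @Override
  protected Deployment desired(MyApp primary, Context<MyApp> context) {
    // Only the image is set here; imagePullPolicy is deliberately left out, so the
    // cluster default (IfNotPresent) applies on creation. With a "replace"-style
    // update, a manual change to imagePullPolicy is wiped out on the next reconciliation.
    return new DeploymentBuilder()
        .withNewMetadata()
          .withName(primary.getMetadata().getName())
          .withNamespace(primary.getMetadata().getNamespace())
        .endMetadata()
        .withNewSpec()
          .withNewSelector().addToMatchLabels("app", "example").endSelector()
          .withNewTemplate()
            .withNewMetadata().addToLabels("app", "example").endMetadata()
            .withNewSpec()
              .addNewContainer()
                .withName("example")
                .withImage("example:2.2")
              .endContainer()
            .endSpec()
          .endTemplate()
        .endSpec()
        .build();
  }
}
```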

@csviri csviri self-assigned this May 18, 2023
@metacosm
Collaborator

Like you said, this goes against the reconciliation principle. How is the operator supposed to know what the desired state is if you can manually change the dependent? The desired state should always be computable from the primary resource. I think editing the current resource should be fine since JOSDK already clones the initial resource when it starts processing it.
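
A rough sketch of the "edit the current instance" idea, assuming the Deployment is available as a secondary resource from the Context; `MyApp` and the `freshDeployment` helper are hypothetical:

```java
@Override
protected Deployment desired(MyApp primary, Context<MyApp> context) {
  // Start from the actual Deployment (if it already exists) and only overwrite the
  // fields this operator cares about, leaving e.g. imagePullPolicy untouched.
  return context.getSecondaryResource(Deployment.class)
      .map(current -> {
        current.getSpec().getTemplate().getSpec()
            .getContainers().get(0).setImage("example:2.2");
        return current;
      })
      // hypothetical helper that builds the initial Deployment on first reconciliation
      .orElseGet(() -> freshDeployment(primary));
}
```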

@csviri
Collaborator

csviri commented May 18, 2023

I think editing the current resource should be fine since JOSDK already clones the initial resource when it starts processing it.

The current implementation replaces the spec. I think the proper way to do this is to use server side apply:
https://kubernetes.io/docs/reference/using-api/server-side-apply/
from the DR (dependent resource).

That can be problematic in terms of migration, but I think eventually this is what we want. Will take a look at this; because of migration concerns we need to be cautious.
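
For reference, a hedged sketch of what a server-side-apply based update could look like with the fabric8 client, assuming a client version that exposes the `serverSideApply()` DSL:

```java
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.client.KubernetesClient;

class SsaUpdateSketch {
  // The API server merges `desired` into the actual object and records the fields it
  // sets under the controller's field manager; fields owned by other managers (e.g. a
  // manually set imagePullPolicy) stay intact.
  Deployment apply(KubernetesClient client, Deployment desired) {
    return client.resource(desired).serverSideApply();
  }
}
```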

@csviri
Collaborator

csviri commented May 18, 2023

Note that this is related to matchers, so it depends on what matching we use (equality, ignoring all additions, or an ignore list).

@metacosm
Collaborator

Note that this is related to matchers, so it depends on what matching we use (equality, ignoring all additions, or an ignore list).

Yes, though I don't think that only using a Matcher will work: while we can ignore/accept some changes, if the resource is changed in such a way that both ignored and unignored properties change, the desired version will be used, thus erasing the ignored changes.

🤔 Though maybe we could make the matcher return a merged desired version and that might indeed address the issue?
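
Purely as an illustration of that idea (not the actual JOSDK `Matcher` contract), a hypothetical shape for a matcher that reports a merged desired version on mismatch:

```java
// Hypothetical interface, for illustration only.
interface MergingMatcher<R> {

  MatchOutcome<R> match(R actual, R desired);

  // On a mismatch, instead of returning `desired` as-is (which would erase manual
  // changes to ignored fields), the matcher could hand back a merged resource:
  // `desired` values applied on top of `actual`.
  record MatchOutcome<R>(boolean matched, R mergedDesired) {}
}
```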

@csviri
Collaborator

csviri commented May 19, 2023

Though maybe we could make the matcher return a merged desired version and that might indeed address the issue?

What I mean is that when we have an equality matcher, it is natural to replace the spec (so the spec equals the desired resource spec). But we could probably do better than that with Server Side Apply + some intelligent matcher (one that is aware of the field manager?). We will have to check some corner cases and figure it out.

So for updates, I think Server Side Apply is the solution.

@csviri
Collaborator

csviri commented May 31, 2023

So after giving it some thought, server side apply should solve the update part. It only updates the managed fields, purely from the desired resource (it should be applied to the whole resource, not just the spec).

The problem is with matching: although we now support an equality matcher (which compares whether the spec, or for example the data in config maps, is the same), this goes against the approach defined by SSA.
Note that we also support matchers that ignore additive changes - this is wrong now in the sense that if someone wants to delete something from the desired state, that is evaluated as an additive change compared to the actual resource from the server. But it works in most cases, since removals rarely happen. We also support equality with an ignoreList.

So the real problem is the matching, and basically this can be done correctly this way:
We can just say that if all the fields from the desired state, and only those, are present on the actual state, we are fine; all others are ignored. So we update only if some field values are missing or changed. But what about fields that are present in the actual state and were added by us before? This can be detected by checking the managedFields on the resource:

https://kubernetes.io/docs/reference/using-api/server-side-apply/#field-management

So we can alter the current algorithm so that we check all the "adds" in the JSON diff and see whether those are managed by the controller. If they are, this is a non-match, since the item was removed from the desired state compared to the actual state.

Such an approach should also be applied to values of config maps, for example.
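
As a rough illustration of the managedFields check described above (real field paths are nested `f:` keys inside `FieldsV1`; this sketch only looks one level into `f:spec`):

```java
import io.fabric8.kubernetes.api.model.HasMetadata;
import io.fabric8.kubernetes.api.model.ManagedFieldsEntry;
import java.util.Map;

class ManagedFieldsCheck {

  // Sketch: is the given top-level spec field recorded under our field manager
  // in metadata.managedFields on the actual resource?
  static boolean ownedByController(HasMetadata actual, String fieldManager, String specField) {
    for (ManagedFieldsEntry entry : actual.getMetadata().getManagedFields()) {
      if (!fieldManager.equals(entry.getManager()) || entry.getFieldsV1() == null) {
        continue;
      }
      Map<String, Object> fields = entry.getFieldsV1().getAdditionalProperties();
      Object spec = fields.get("f:spec");
      if (spec instanceof Map<?, ?> specFields && specFields.containsKey("f:" + specField)) {
        return true;
      }
    }
    return false;
  }
}
```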

@csviri
Collaborator

csviri commented Jun 15, 2023

Closing this issue, since it should be covered by #1933.

@csviri csviri closed this as completed Jun 15, 2023