controller/operators: label ConfigMaps, don't assume they are
In the past, OLM moved to using a label selector to filter the informers
that track ConfigMaps in the cluster. However, when this change landed,
pre-existing ConfigMaps on the cluster were never labelled. Therefore,
old clusters hold a mix of objects - ConfigMaps that OLM created and
managed but has now forgotten since they are missing labels, and
conformant objects that carry the label.
We use ConfigMaps to track whether Jobs should be labelled - if a Job
has an OwnerReference to a ConfigMap, and that ConfigMap has an
OwnerReference to an OLM GVK, we know the Job is created and managed by
OLM.
At runtime, the two-hop lookup described above goes through a ConfigMap
informer, so we stay light on client calls during the labelling phase
of startup. However, before the recent labelling work went in, the
ConfigMap informer was *already* filtered by label, so our lookups
dead-ended for the few old ConfigMaps that had never been labelled.
Since startup uses live clients to determine whether there are
unlabelled objects to handle, we ended up in a state where the live
lookup could detect the errant Jobs but the informer-based labellers
could not see them as needing labels.
This commit is technically a performance regression, as it reverts the
unconditional label filtering on the ConfigMap informer - we see all
ConfigMaps on the cluster during startup, but resume filtering as
expected once everything has labels.
Ideally, we can come up with cleanup policies for things like these
Jobs and ConfigMaps in the future; at a minimum, all OLM objects will
be labelled and visible to the OLM operators from here on out.
Signed-off-by: Steve Kuznetsov <[email protected]>