Add ability for Subscription config to configure a specific container in the operator pod #1507
Comments
Hi @dkwon17,

An Extension can have one to many containers, and an RFE (Request for Enhancement) should not be specific to individual container names (such as controller or kube-rbac-proxy), since that would not fit all use cases and users. While the idea is interesting, it raises the question of how we would configure resources for one to many containers via a Subscription.

One potential solution for this use case would be for the content/operator author to create an API, such as a Config CRD. Cluster users could then define the resource configurations, like memory and CPU requests, on its CR, and a ConfigController would watch the CR and update the containers on the cluster based on the information provided.

What do you think of this approach?
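To make the suggested author-owned API concrete, a CR for such a Config CRD might look roughly like the sketch below. Everything here is illustrative: the `OperatorConfig` kind, the `example.my-operator.io` group, and the field layout are hypothetical, not an existing API.

```yaml
# Hypothetical Config CR owned by the operator author (not a real OLM API).
# A ConfigController shipped with the operator would watch this CR and
# patch the named containers in its workloads accordingly.
apiVersion: example.my-operator.io/v1alpha1
kind: OperatorConfig
metadata:
  name: my-operator-config
spec:
  containers:
    - name: controller
      resources:
        limits:
          memory: "2Gi"
    - name: kube-rbac-proxy
      resources:
        requests:
          cpu: "50m"
```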
Sorry for the late comment, just catching up from the holiday break. I'm not sure I see how the above response rules out OLM doing this. The proposed alternative puts a lot of burden on operator authors to implement their own solution, and we'd certainly see a mix of approaches. The proposal calls for applying limits based on container names, which are unique, and even if the cluster extension manages several pods with same-named containers, applying the limits to all of them is consistent with Kubernetes resource configuration. The proposal seems valid to me.
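The merge semantics described above (container names are unique within a pod spec, so overrides keyed by name apply unambiguously, and consistently across pods that share container names) can be sketched as follows. This is an illustrative sketch of the matching logic only, not code from OLM; the function name and dict shapes are assumptions.

```python
# Sketch: apply per-container resource overrides by container name.
# Container names are unique within a pod spec, so name matching is
# unambiguous; the same overrides apply consistently to every pod
# that contains a container with a matching name.

def apply_container_overrides(pod_spec: dict, overrides: list[dict]) -> dict:
    """Merge 'resources' overrides into containers whose name matches."""
    by_name = {o["name"]: o for o in overrides}
    for container in pod_spec.get("containers", []):
        override = by_name.get(container["name"])
        if override and "resources" in override:
            # Merge at the requests/limits level, keeping existing keys.
            container.setdefault("resources", {}).update(override["resources"])
    return pod_spec

pod = {"containers": [
    {"name": "controller", "resources": {"requests": {"cpu": "100m"}}},
    {"name": "kube-rbac-proxy"},
]}
overrides = [{"name": "controller",
              "resources": {"limits": {"memory": "2Gi"}}}]
apply_container_overrides(pod, overrides)
# "controller" keeps its requests and gains the limits;
# "kube-rbac-proxy" is untouched because no override names it.
```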
Hi @bentito,

Thank you for checking this one.

What is the RFE: in the context of OLM v1, something like:

```yaml
spec:
  podOverrides:
    containers:
      - name: my-container
        resources:
          limits:
            memory: "2Gi"
      - name: my-container-2
        resources:
          limits:
            memory: "2Gi"
      - name: my-container-3
        resources:
          limits:
            memory: "2Gi"
```

The idea is for OLM to watch/observe all Pods on the cluster, identify those that match the specified container names, and apply the defined resource configurations.

Key Questions

Let me know your thoughts!

Best regards,
@dkwon17, could you weigh in and clarify between options 1 and 2 in what @camilamacedo86 is asking?
@camilamacedo86 @bentito sorry for the delay. I think what I had in mind was option 1: restricting to the CSV bundle, since the operator pod (the pod that runs the controller) is what I would like to configure.

What I want to achieve is a way to control the resource limits of specific containers in the pod that runs the ConfigController. I don't think it is possible for the ConfigController to watch the CR and update its own pod, is it? If it is possible, do you have a small example?
I implemented a proof of concept as part of #1418. We should revisit that PR when we have time to pick this issue up. |
@joelanford Hey, I'm curious about which part of your POC contains the changes to make this work. Is there a follow up Issue that tracks implementing it? |
Increase the limit to 3Gi. This should allow the observability-operator pod to survive the initial memory spike before usage stabilises. The largest memory spike was noted on the stone-prod-p02 cluster as 1.5Gi. A side effect is that this applies limits to ALL operator pods, including prometheus-operator, perses-operator and webhook-admission. This should not impact scheduling, as limits are not considered during scheduling. Existing proposal for configuring operator resource usage: operator-framework/operator-controller#1507
Today, we can use the Subscription config to configure resource limits and requests for the operator pod: https://github.com/operator-framework/operator-lifecycle-manager/blob/master/doc/design/subscription-config.md#resources
However, the limits and requests are applied to every container of the pod. It does not appear to be possible to configure a specific container of the pod with the Subscription config.
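For reference, per the subscription-config document linked above, today's `spec.config.resources` stanza looks like the following (the package, channel, and source names are placeholders). The key limitation for this issue is that this single `resources` block applies to all containers in the operator pod:

```yaml
# Existing Subscription config (per subscription-config.md#resources).
# The resources block below is applied to EVERY container in the
# operator pod; there is no per-container selector.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: operators
spec:
  channel: stable
  name: my-operator
  source: community-operators
  sourceNamespace: olm
  config:
    resources:
      requests:
        cpu: "100m"
        memory: "100Mi"
      limits:
        memory: "2Gi"
```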
Proposal
My proposal is to add functionality that allows per-container configuration by introducing
config.containers
which can override the default pod spec.

Use case

Our operator pod contains two containers: controller and kube-rbac-proxy.
We would like to configure the memory limit for only the controller container and not for kube-rbac-proxy.
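Under this proposal, the use case above might be expressed like the sketch below. This is the proposed shape only, not an implemented API; the `config.containers` field does not exist in OLM today, and the exact schema would be decided during implementation:

```yaml
# Proposed (not implemented): per-container overrides in Subscription config.
# Only the "controller" container gets a memory limit;
# "kube-rbac-proxy" is left untouched.
spec:
  config:
    containers:
      - name: controller
        resources:
          limits:
            memory: "2Gi"
```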