Better support of resources settings #131
Comments
I think the k8s style is the cleaner of the two options, but would it be considered a breaking change? Since we're moving out
Given that odo v2 is going to GA with the current setting, we need to avoid introducing breaking changes at this point. I'd advise not to change that unless it is really necessary.
I am wondering whether this is the final design or a temporary design only to avoid breaking changes for the odo v2 GA. From a scalability perspective, the k8s style is better, as it groups related functionality properly (e.g. request, limit). In the future, if we add more fields to "resources", the fields might become redundant (e.g. many request and limit fields) if we don't use the k8s style.
We only plan to expose the most common settings here, and we have no intention of duplicating everything that users can configure on K8s. Therefore, we are not expecting a lot of new fields to be added in the future. If we find ourselves in a position in the future where we want to support many other K8s settings, we should probably consider a different approach at that time, e.g. including a link to a k8s yaml.
I do not think devfile needs cpu limits. If no cpu limits are set, limits are determined by the LimitRange set by the administrator of the cluster, or, if LimitRanges are not set, by the cpu available to the node the pod is scheduled to (or by Quota limits). When we set limits in devfiles, we are assuming that those values are not in conflict with the cluster configuration, which assumes that the devfile author knows about the configuration of the cluster, which in turn makes devfiles cluster specific. For instance, if we are running in a namespace with a LimitRange object that sets the maximum cpu, a devfile limit above that cap would conflict with it (see the sketch below). Requests are a different story because they indicate the absolute minimum for the devfile to function. If limits, quotas or node sizes cannot satisfy those, they should cause a failure.
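For context, a minimal LimitRange sketch of the kind of cluster-side configuration being described; the namespace name and values are hypothetical:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
  namespace: dev-team-a        # hypothetical namespace
spec:
  limits:
    - type: Container
      max:
        cpu: "1"               # containers asking for a higher cpu limit are rejected
      default:
        cpu: 500m              # default limit applied when a container sets none
      defaultRequest:
        cpu: 250m              # default request applied when a container sets none
```

A devfile that hard-codes a cpu limit above `max` here would be rejected on this cluster even though it might be fine on another one.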
In short, let's agree to disagree @gorkem.
Forcing me as a sysadmin to maintain multiple LimitRanges for different teams depending on whether they are using odo or not is a pain. And even in the cases where the developers are using odo, they probably won't set these values every time. Most likely the developers will run odo with the application that they are currently developing, and have another microservice in the same namespace that they are developing against. If I have something like this (I hang out a lot with Java developers currently, that's why the high numbers):
That would make the random microservice that my odo developer is developing against increase its cpu/memory usage by more than 100%, and that isn't okay. If I go and bring those numbers to my bosses and tell them that I need 20-30 more worker nodes just because I can't set reasonable defaults due to odo, they will most likely ask me to get rid of odo...
We are in agreement... I think. I do think odo or Che needs to provide a way to set values for cpu limits. However, this issue is specifically about putting them in the devfile. A devfile is related to the application and it should not contain cluster-related items. For instance, setting the limit to 2000m may be OK for one cluster (or a user on the cluster) but it may not work for the next cluster. So we lose the mobility of the application between clusters. That is why I suggested on the issue reported to the odo repo to use the odo config mechanism. Most developers do not need to know about limits and quotas though. I argue that odo and Che should do more and calculate a sensible cpu limit (for instance 75% of what the user is allowed) and use that by default.
Personally I don't care if it's in the devfile or in some other file, as long as the developers can change this value in an easy way. I don't think automatic calculation and configuration is a good idea due to what you wrote in redhat-developer/odo#4266 (comment): we don't know what access the developers have or whether limits are configured in the namespaces.
I don't view CPU or memory limits or requests as cluster-specific things. For me, both are application specific. If I'm writing an application, I should know the minimal requirements for the application to start as well as the maximum resource consumption that the application is allowed (when the application reaches this, I know that something went wrong). LimitRange complicates all this; it is a cluster-specific thing. If my application specifies a limit or request that is outside of the namespace LimitRange, it is completely OK that it gets rejected, because the cluster might not be able to handle my application.
@kadel That is an interesting perspective on those values; I can buy into it.
Adjusting them only if the memory or cpu limits or requests are not set in the Devfile? But if the Devfile defines memory or CPU requests, we need to follow them.
We'll use the first proposed style. |
Is your feature request related to a problem? Please describe.
Currently, the container component has memoryLimit:
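A minimal sketch of what this currently looks like (component name and image are illustrative):

```yaml
components:
  - name: runtime
    container:
      image: registry.example.com/my-app:latest   # illustrative image
      memoryLimit: 512Mi
```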
But according to Che experience, a memory limit alone is not enough. It also makes sense to have memoryRequest, cpuLimit, and cpuRequest.
I suppose the same issue applies to the chePlugin component as well.
Describe the solution you'd like
Support request and limit for cpu and memory (selected proposal):
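A sketch of the selected flat style, extending the existing memoryLimit field; the values are illustrative:

```yaml
components:
  - name: runtime
    container:
      image: registry.example.com/my-app:latest   # illustrative image
      memoryLimit: 512Mi
      memoryRequest: 256Mi
      cpuLimit: 500m
      cpuRequest: 250m
```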
Describe alternatives you've considered
It may make sense to go with the k8s-style format:
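A sketch of how the k8s-style alternative could look, grouping requests and limits the way Kubernetes container specs do; the exact nesting shown here is an assumption, not a confirmed schema:

```yaml
components:
  - name: runtime
    container:
      image: registry.example.com/my-app:latest   # illustrative image
      resources:                                   # assumed k8s-style grouping
        limits:
          memory: 512Mi
          cpu: 500m
        requests:
          memory: 256Mi
          cpu: 250m
```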
Additional context
Add any other context or screenshots about the feature request here.