Define new selector for building image job #311
Conversation
✅ Deploy Preview for kubernetes-sigs-kmm ready!
Hi @erusso7. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Codecov Report
Patch coverage:

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #311      +/-   ##
==========================================
- Coverage   82.27%   82.25%   -0.03%
==========================================
  Files          31       31
  Lines        3075     3082       +7
==========================================
+ Hits         2530     2535       +5
- Misses        448      450       +2
  Partials       97       97
```

... and 2 files with indirect coverage changes

☔ View full report in Codecov by Sentry.
internal/build/job/maker.go (outdated)

```diff
@@ -129,6 +129,11 @@ func (m *maker) specTemplate(
 		kanikoImage += ":" + buildConfig.KanikoParams.Tag
 	}

+	nodeSelector := mld.Selector
```
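For context, here is a minimal, self-contained sketch of how that selector could flow into the generated Job's pod template. The `moduleLoaderData` type and the surrounding function are assumptions for illustration, not the repository's actual `specTemplate` code:

```go
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
)

// moduleLoaderData stands in for the mld argument in the diff above;
// only the field relevant here is modeled.
type moduleLoaderData struct {
	Selector map[string]string
}

// jobSpecWithSelector wires the module's selector into the build Job's
// pod template. A nil or empty map leaves node selection entirely to
// the Kubernetes scheduler.
func jobSpecWithSelector(mld *moduleLoaderData) batchv1.JobSpec {
	nodeSelector := mld.Selector // the line added by this PR

	return batchv1.JobSpec{
		Template: corev1.PodTemplateSpec{
			Spec: corev1.PodSpec{
				NodeSelector:  nodeSelector,
				RestartPolicy: corev1.RestartPolicyNever,
			},
		},
	}
}

func main() {
	spec := jobSpecWithSelector(&moduleLoaderData{
		Selector: map[string]string{"node-role.kubernetes.io/worker": ""},
	})
	fmt.Println(spec.Template.Spec.NodeSelector)
}
```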
why do we need to set the nodeSelector at all for Builds? Why not let kubernetes decide where to run the Build job?
This is a good point. Removing it completely will also solve #140 (comment)
> Why not let kubernetes decide where to run the Build job?

This comes from that issue. You may want to reserve nodes equipped with expensive hardware for some workloads only, for example.
I don't see any mention of nodes with expensive hardware in that issue. They just say that it would be nice if build/sign could also run on other nodes, and not only those with specific hardware. Also, allowing Kubernetes to pick the running node makes use of the scheduler, which probably has much more data to decide which nodes are less "overloaded".
I have understood the issue the same way as @yevgeny-shnaidman did.
> Also, allowing Kubernetes to pick the running node makes use of the scheduler, which probably has much more data to decide which nodes are less "overloaded"

This change makes a user able to do just that, with `selector: {}` in the `build` section.
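To make the nil-versus-empty distinction concrete, here is an illustrative sketch of the fallback semantics being discussed; the function and its name are hypothetical, not this PR's code:

```go
package selector

// effectiveSelector returns the node selector a build Job should use.
// An explicitly set build selector wins, even the empty map produced by
// selector: {} (which leaves scheduling to Kubernetes); a nil build
// selector falls back to the module-level selector.
func effectiveSelector(buildSelector, moduleSelector map[string]string) map[string]string {
	if buildSelector != nil {
		return buildSelector
	}
	return moduleSelector
}
```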
I agree, but why give him an option at all?
/ok-to-test
Force-pushed from e9b3df8 to 9514d5a
/hold
@qbarrand maybe all we need to add to the Build/Sign job node selector is the architecture label. I think all the nodes are labeled with it anyway, and it can be derived from the KernelMapping.
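As a sketch of that suggestion: the kubelet labels every node with the well-known `kubernetes.io/arch` label, so a minimal selector could be derived from the mapping's target architecture (the helper and its parameter are hypothetical):

```go
package selector

// archSelector constrains a build/sign Job to nodes of a given CPU
// architecture via the well-known kubernetes.io/arch node label.
func archSelector(arch string) map[string]string {
	return map[string]string{"kubernetes.io/arch": arch} // e.g. "amd64", "arm64"
}
```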
@qbarrand @yevgeny-shnaidman I would instead just ask for the architecture explicitly. This will also allow building for ARM in an x86 cluster, or any other combination. This is especially handy in the hub-spoke topology. This will require extending the CRD.
@ybettan does kaniko support something like that? The build Dockerfile will probably include a Makefile that needs to know how to cross-compile; not sure if the customer will want to support that.
This PR is blocked by #325 unless we modify the CRD to contain a
Sounds like it's the only way to properly address #140 though.
@qbarrand
As agreed in the community meeting, we will proceed with the following: we add a new selector for build. Regarding the default value, we differentiate between v1 and v2:
v1:
v2:
Force-pushed from 9514d5a to f4c8942
@erusso7: The following test failed, say `/retest` to rerun all failed tests or `/retest-required` to rerun all mandatory failed tests:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Force-pushed from cc8dd8c to 13cb6dc
Signed-off-by: Erusso7 <[email protected]>
Force-pushed from 13cb6dc to b6fb69c
/unhold
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: erusso7, yevgeny-shnaidman

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.
#140