MCO-1501: Add support for custom MCPs in MCN #4876

Merged

Conversation

isabella-janssen
Member

@isabella-janssen commented Feb 24, 2025

- What I did
This work adds support for custom MCPs (MachineConfigPools) in MCN (MachineConfigNode) objects. The update uses the previously existing GetPrimaryPoolForNode function to get the pool a node is associated with and uses that pool's name to populate the MCN spec.
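
The pool name surfaced by this change can also be read directly from the MCN spec. The jsonpath below assumes the field layout shown in the oc describe output in the verification steps (Spec → Pool → Name); for the infra example below it prints infra:
$ oc get machineconfignode <node-name> -o jsonpath='{.spec.pool.name}{"\n"}'
infra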

- How to verify it
Test for standard case

  1. Create a custom MCP named infra and add a worker node to the new MCP (see the example manifest after this list).
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
  Pool:
    Name:  infra
  3. Check that the pool names are properly populated for all machineconfignode objects.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
...
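
A minimal sketch of step 1, following the commonly documented infra-pool pattern; the pool name, machineConfigSelector, and nodeSelector below are assumptions and may need adjusting for your cluster:
$ cat <<EOF | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
spec:
  machineConfigSelector:
    matchExpressions:
      - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker, infra]}
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: ""
EOF
$ oc label node <node-name> node-role.kubernetes.io/infra=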

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
$ oc label node <node-name> node-role.kubernetes.io/custom=
  3. View the machineconfignode object for the node that is part of the infra pool and check that the pool name is populated as infra. Note that the node will not become part of custom, as that label was added after the node joined the infra pool and a node can only be part of one MCP (see the label-check example after this list).
$ oc describe machineconfignode <node-name>
...
Spec:
...
  Pool:
    Name:  infra
  4. Check that the pool names are properly populated for all machineconfignode objects.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...
...
  5. Remove the infra label from that node. Because the node still carries the custom label, it will move to the custom MCP.
$ oc label node <node-name> node-role.kubernetes.io/infra-
  6. View the machineconfignode object for the node previously part of the infra pool and check that the pool name is now populated as custom.
$ oc describe machineconfignode <node-name>
...
Spec:
...
  Pool:
    Name:  custom
  7. Check that the pool names are properly populated for all machineconfignode objects.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...
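
If it is unclear which pool has claimed a node, the role labels on the node and the pools' machine counts can be cross-checked directly; these are standard oc commands offered only as an optional sanity check:
$ oc get node <node-name> --show-labels | tr ',' '\n' | grep node-role
$ oc get mcp infra custom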

Test by deleting MCP

  1. Create a custom MCP named custom and add a worker node to the new MCP. Validate the machineconfignode for the node in the new MCP.
  2. Remove the custom pool label from the node and delete the custom MCP. This will force the node previously part of the pool to rejoin the default worker pool (see the wait example after this list).
$ oc label node <node-name> node-role.kubernetes.io/custom-
$ oc delete mcp/custom
  3. View the machineconfignode object for the node previously part of the custom pool and check that the pool name is now populated as worker.
$ oc describe machineconfignode <node-name>
...
Spec:
...
  Pool:
    Name:  worker
  4. Check that the pool names are properly populated for all machineconfignode objects.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   worker     rendered-worker-xxxxx   rendered-worker-xxxxx   True
...
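
After the custom pool is deleted, the node rolls back to the worker pool's rendered configuration. Assuming the Updated condition reported on the pool by oc get mcp, a wait plus a jsonpath read of the MCN spec is a convenient way to confirm the rollback has finished:
$ oc wait mcp/worker --for=condition=Updated --timeout=20m
$ oc get machineconfignode <node-name> -o jsonpath='{.spec.pool.name}{"\n"}'
worker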

- Description for the changelog
MCO-1501: Added support for custom MCPs in MCN

@openshift-ci-robot added the jira/valid-reference label (Indicates that this PR references a valid Jira ticket of any type.) Feb 24, 2025
@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

- How to verify it

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci bot added the do-not-merge/work-in-progress label (Indicates that a PR should not merge because it is a work in progress.) Feb 24, 2025
Contributor

openshift-ci bot commented Feb 24, 2025

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@openshift-ci bot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) Feb 24, 2025
@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP, named infra for example, and add a worker node to the MCP.
  2. Check the machineconfignode object for the node part of the custom pool.
$ oc describe machineconfignode <node-name>
  1. Check that the pool name matches the custom MCP created in step 1.
...
Spec:
...
 Pool:
   Name:  infra

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP, named infra for example, and add a worker node to the MCP.
  2. Check the machineconfignode object for the node part of the custom pool.
$ oc describe machineconfignode <node-name>
  1. Check that the pool name matches the custom MCP created in step 1.
...
Spec:
...
 Pool:
   Name:  infra

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP, named infra for example, and add a worker node to the new MCP.
  2. View the machineconfignode object for the node that is part of the custom pool and check that the pool name matches the custom MCP created in step 1.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                          POOLNAME   DESIREDCONFIG           CURRENTCONFIG           UPDATED
ip-10-0-13-196.ec2.internal   infra      rendered-master-xxxxx   rendered-master-xxxxx   True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP, named infra for example, and add a worker node to the new MCP.
  2. View the machineconfignode object for the node that is part of the custom pool and check that the pool name matches the custom MCP created in step 1.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                          POOLNAME   DESIREDCONFIG           CURRENTCONFIG           UPDATED
ip-10-0-13-196.ec2.internal   infra      rendered-master-xxxxx   rendered-master-xxxxx   True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP, named infra for example, and add a worker node to the new MCP.
  2. View the machineconfignode object for the node that is part of the custom pool and check that the pool name matches the custom MCP created in step 1.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                          POOLNAME   DESIREDCONFIG           CURRENTCONFIG           UPDATED
ip-10-0-13-196.ec2.internal   infra      rendered-master-xxxxx   rendered-master-xxxxx   True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP, named infra for example, and add a worker node to the new MCP.
  2. View the machineconfignode object for the node that is part of the custom pool and check that the pool name matches the custom MCP created in step 1.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                          POOLNAME   DESIREDCONFIG             CURRENTCONFIG             UPDATED
ip-10-0-13-196.ec2.internal   infra       rendered-master-xxxxx   rendered-master-xxxxx   True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP, named infra for example, and add a worker node to the new MCP.
  2. View the machineconfignode object for the node that is part of the custom pool and check that the pool name matches the custom MCP created in step 1.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                          POOLNAME   DESIREDCONFIG             CURRENTCONFIG             UPDATED
ip-10-0-13-196.ec2.internal   infra      rendered-master-xxxxx   rendered-master-xxxxx   True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP, named infra for example, and add a worker node to the new MCP.
  2. View the machineconfignode object for the node that is part of the custom pool and check that the pool name matches the custom MCP created in step 1.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                          POOLNAME   DESIREDCONFIG           CURRENTCONFIG           UPDATED
ip-10-0-13-196.ec2.internal   infra      rendered-master-xxxxx   rendered-master-xxxxx   True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                          POOLNAME   DESIREDCONFIG           CURRENTCONFIG           UPDATED
ip-10-0-13-196.ec2.internal   infra      rendered-master-xxxxx   rendered-master-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name-2> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                          POOLNAME      DESIREDCONFIG           CURRENTCONFIG           UPDATED
ip-10-0-13-196.ec2.internal   infra         rendered-master-xxxxx   rendered-master-xxxxx   True
ip-10-0-37-255.ec2.internal  custom      rendered-master-xxxxx   rendered-master-xxxxx   True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                          POOLNAME   DESIREDCONFIG           CURRENTCONFIG           UPDATED
ip-10-0-13-196.ec2.internal   infra      rendered-worker-xxxxx   rendered-worker-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name-2> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                          POOLNAME      DESIREDCONFIG           CURRENTCONFIG           UPDATED
ip-10-0-13-196.ec2.internal   infra         rendered-worker-xxxxx   rendered-worker-xxxxx   True
ip-10-0-37-255.ec2.internal   custom      rendered-worker-xxxxx   rendered-worker-xxxxx   True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name-2> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                          POOLNAME      DESIREDCONFIG           CURRENTCONFIG           UPDATED
ip-10-0-13-196.ec2.internal   infra         rendered-worker-xxxxx   rendered-worker-xxxxx   True
ip-10-0-37-255.ec2.internal   custom      rendered-worker-xxxxx   rendered-worker-xxxxx   True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name-2> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                          POOLNAME      DESIREDCONFIG           CURRENTCONFIG           UPDATED
ip-10-0-13-196.ec2.internal   infra         rendered-worker-xxxxx   rendered-worker-xxxxx   True
ip-10-0-37-255.ec2.internal   custom      rendered-worker-xxxxx   rendered-worker-xxxxx   True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG           UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name-2> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                          POOLNAME      DESIREDCONFIG           CURRENTCONFIG           UPDATED
ip-10-0-13-196.ec2.internal   infra         rendered-worker-xxxxx   rendered-worker-xxxxx   True
ip-10-0-37-255.ec2.internal   custom      rendered-worker-xxxxx   rendered-worker-xxxxx   True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG         UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name-2> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                          POOLNAME      DESIREDCONFIG           CURRENTCONFIG           UPDATED
ip-10-0-13-196.ec2.internal   infra         rendered-worker-xxxxx   rendered-worker-xxxxx   True
ip-10-0-37-255.ec2.internal   custom      rendered-worker-xxxxx   rendered-worker-xxxxx   True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name-2> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG             CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra        rendered-infra-xxxxx    rendered-infra-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx   True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name-2> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx   True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name-2> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...
  1. Remove the infra label from the infra node. This will force the node to become part of the custom MCP due to that label on the node.
oc label node <node-name-2> node-role.kubernetes.io/infra-
  1. View the machineconfignode object for the node previously part of the custom pool and check that the pool name is now properly populating as custom.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  custom

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Standard Case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...
  1. Remove the infra label from the infra node. This will force the node to become part of the custom MCP due to that label on the node.
oc label node <node-name-2> node-role.kubernetes.io/infra-
  1. View the machineconfignode object for the node previously part of the custom pool and check that the pool name is now properly populating as custom.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  custom
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Test for standard case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...
...
  1. Remove the infra label from the infra node. This will force the node to become part of the custom MCP due to that label on the node.
oc label node <node-name-2> node-role.kubernetes.io/infra-
  1. View the machineconfignode object for the node previously part of the custom pool and check that the pool name is now properly populating as custom.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  custom
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...

Test by deleting MCP

  1. Create a custom MCP named custom and add a worker node to the new MCP. Validate the machineconfignode for the node in the new MCP.
  2. Delete the custom MCP. This will force the node previously part of the pool to become part of the default worker pool.
$ oc delete mcp/custom
  1. View the machineconfignode object for the node previously part of the custom pool and check that the pool name is now properly populating as worker.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  worker
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   worker     rendered-worker-xxxxx   rendered-worker-xxxxx   True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Test for standard case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...
...
  1. Remove the infra label from the infra node. This will force the node to become part of the custom MCP due to that label on the node.
oc label node <node-name-2> node-role.kubernetes.io/infra-
  1. View the machineconfignode object for the node previously part of the infra pool and check that the pool name is now properly populating as custom.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  custom
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...

Test by deleting MCP

  1. Create a custom MCP named custom and add a worker node to the new MCP. Validate the machineconfignode for the node in the new MCP.
  2. Delete the custom MCP. This will force the node previously part of the pool to become part of the default worker pool.
$ oc delete mcp/custom
  1. View the machineconfignode object for the node previously part of the custom pool and check that the pool name is now properly populating as worker.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  worker
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   worker     rendered-worker-xxxxx   rendered-worker-xxxxx   True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Test for standard case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...
...
  1. Remove the infra label from the infra node. This will force the node to become part of the custom MCP due to that label on the node.
oc label node <node-name-2> node-role.kubernetes.io/infra-
  1. View the machineconfignode object for the node previously part of the infra pool and check that the pool name is now properly populating as custom.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  custom
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...

Test by deleting MCP

  1. Create a custom MCP named custom and add a worker node to the new MCP. Validate the machineconfignode for the node in the new MCP.
  2. Delete the custom MCP. This will force the node previously part of the pool to become part of the default worker pool.
$ oc delete mcp/custom
  1. View the machineconfignode object for the node previously part of the custom pool and check that the pool name is now properly populating as worker.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  worker
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   worker     rendered-worker-xxxxx   rendered-worker-xxxxx   True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Feb 24, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did

  • Added support for custom MCPs in MCN.

Still todo:

  • update unit tests
  • test mco build and push to live cluster

- How to verify it
Test for standard case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...
...
  1. Remove the infra label from the infra node. This will force the node to become part of the custom MCP due to that label on the node.
oc label node <node-name-2> node-role.kubernetes.io/infra-
  1. View the machineconfignode object for the node previously part of the infra pool and check that the pool name is now properly populating as custom.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  custom
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...

Test by deleting MCP

  1. Create a custom MCP named custom and add a worker node to the new MCP. Validate the machineconfignode for the node in the new MCP.
  2. Delete the custom MCP. This will force the node previously part of the pool to become part of the default worker pool.
$ oc delete mcp/custom
  1. View the machineconfignode object for the node previously part of the custom pool and check that the pool name is now properly populating as worker.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  worker
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   worker     rendered-worker-xxxxx   rendered-worker-xxxxx   True
...

- Description for the changelog

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@isabella-janssen
Copy link
Member Author

/test unit

@openshift-ci-robot
Copy link
Contributor

openshift-ci-robot commented Feb 25, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did
This work adds support for custom MCPs in MCN. The update makes use of the previously existing GetPrimaryPoolForNode function to get the pool a node is associated with.

Still todo:

  • update unit tests
  • test mco build and push to live cluster
  • check against story criteria

- How to verify it
Test for standard case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...
...
  1. Remove the infra label from the infra node. This will force the node to become part of the custom MCP due to that label on the node.
oc label node <node-name-2> node-role.kubernetes.io/infra-
  1. View the machineconfignode object for the node previously part of the infra pool and check that the pool name is now properly populating as custom.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  custom
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...

Test by deleting MCP

  1. Create a custom MCP named custom and add a worker node to the new MCP. Validate the machineconfignode for the node in the new MCP.
  2. Remove the custom pool labels from the nodes and delete the custom MCP. This will force the node previously part of the pool to become part of the default worker pool.
$ oc label node <node-name> node-role.kubernetes.io/custom-
$ oc delete mcp/custom
  1. View the machineconfignode object for the node previously part of the custom pool and check that the pool name is now properly populating as worker.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  worker
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   worker     rendered-worker-xxxxx   rendered-worker-xxxxx   True
...

- Description for the changelog
MCO-1501: Added support for custom MCPs in MCN

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Copy link
Contributor

openshift-ci-robot commented Feb 25, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did
This work adds support for custom MCPs in MCN. The update makes use of the previously existing GetPrimaryPoolForNode function to get the pool a node is associated with.

Still todo:

  • update unit tests
  • test mco build and push to live cluster
  • check against story criteria

- How to verify it
Test for standard case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...
...
  1. Remove the infra label from the infra node. This will force the node to become part of the custom MCP due to that label on the node.
oc label node <node-name-2> node-role.kubernetes.io/infra-
  1. View the machineconfignode object for the node previously part of the infra pool and check that the pool name is now properly populating as custom.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  custom
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...

Test by deleting MCP

  1. Create a custom MCP named custom and add a worker node to the new MCP. Validate the machineconfignode for the node in the new MCP.
  2. Remove the custom pool labels from the nodes and delete the custom MCP. This will force the node previously part of the pool to become part of the default worker pool.
$ oc label node <node-name> node-role.kubernetes.io/custom-
$ oc delete mcp/custom
  1. View the machineconfignode object for the node previously part of the custom pool and check that the pool name is now properly populating as worker.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  worker
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   worker     rendered-worker-xxxxx   rendered-worker-xxxxx   True
...

- Description for the changelog
MCO-1501: Added support for custom MCPs in MCN

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot
Copy link
Contributor

openshift-ci-robot commented Feb 25, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did
This work adds support for custom MCPs in MCN. The update makes use of the previously existing GetPrimaryPoolForNode function to get the pool a node is associated with.

- How to verify it
Test for standard case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...
...
  1. Remove the infra label from the infra node. This will force the node to become part of the custom MCP due to that label on the node.
oc label node <node-name-2> node-role.kubernetes.io/infra-
  1. View the machineconfignode object for the node previously part of the infra pool and check that the pool name is now properly populating as custom.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  custom
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...

Test by deleting MCP

  1. Create a custom MCP named custom and add a worker node to the new MCP. Validate the machineconfignode for the node in the new MCP.
  2. Remove the custom pool labels from the nodes and delete the custom MCP. This will force the node previously part of the pool to become part of the default worker pool.
$ oc label node <node-name> node-role.kubernetes.io/custom-
$ oc delete mcp/custom
  1. View the machineconfignode object for the node previously part of the custom pool and check that the pool name is now properly populating as worker.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  worker
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   worker     rendered-worker-xxxxx   rendered-worker-xxxxx   True
...

- Description for the changelog
MCO-1501: Added support for custom MCPs in MCN

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@isabella-janssen isabella-janssen changed the title from "(Not ready for review) MCO-1501: Add support for custom MCPs in MCN" to "MCO-1501: Add support for custom MCPs in MCN" Feb 25, 2025
Copy link
Contributor

openshift-ci bot commented Mar 6, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: isabella-janssen

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@isabella-janssen
Copy link
Member Author

/test e2e-gcp-op-techpreview

@sergiordlr
Copy link

sergiordlr commented Mar 7, 2025

Verified using IPI on AWS

We followed these steps

  1. Create an infra pool
  2. Create a custom pool
  3. Check that MCN reports all nodes in worker pool
  4. Move a node to the infra pool
  5. Check that MCN reports that node in the infra pool now
  6. Move the node to the custom pool
  7. Check that MCN reports that node in the custom pool now
  8. Move back the node to the worker pool
  9. Check that MCN reports that node in the worker pool now

Apart from that we executed these steps too

  1. Move a node to the infra pool
  2. Apply a MC
  3. Check that MCN is reporting the right status in the different phases

All already existing MCN e2e tests passed too.

We were able to successfully apply pinnedimagesets to a custom pool too.

/label qe-approved

@openshift-ci openshift-ci bot added the qe-approved Signifies that QE has signed off on this PR label Mar 7, 2025
@openshift-ci-robot
Copy link
Contributor

openshift-ci-robot commented Mar 7, 2025

@isabella-janssen: This pull request references MCO-1501 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

- What I did
This work adds support for custom MCPs in MCN. The update makes use of the previously existing GetPrimaryPoolForNode function to get the pool a node is associated with.

- How to verify it
Test for standard case

  1. Create a custom MCP named infra and add a worker node to the new MCP.
  2. View the machineconfignode object for the infra node and check that the pool name matches.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
...

Test node with multiple labels

  1. Create 2 MCPs named infra and custom and add one worker node to each new MCP.
  2. Add the custom label to the node that is part of the infra MCP.
oc label node <node-name> node-role.kubernetes.io/custom=
  1. View the machineconfignode object for the node part of the infra pool and check that the pool name is properly populating as infra. Note that the node will not be a part of custom, as that label was added after the node joined the infra pool and the node can only be a part of one MCP.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  infra
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   infra      rendered-infra-xxxxx    rendered-infra-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...
...
  1. Remove the infra label from the infra node. This will force the node to become part of the custom MCP due to that label on the node.
oc label node <node-name-2> node-role.kubernetes.io/infra-
  1. View the machineconfignode object for the node previously part of the infra pool and check that the pool name is now properly populating as custom.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  custom
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx   True
ip-10-0-113-251.us-west-2.compute.internal   custom     rendered-custom-xxxxx   rendered-custom-xxxxx  True
...

Test by deleting MCP

  1. Create a custom MCP named custom and add a worker node to the new MCP. Validate the machineconfignode for the node in the new MCP.
  2. Remove the custom pool labels from the nodes and delete the custom MCP. This will force the node previously part of the pool to become part of the default worker pool.
$ oc label node <node-name> node-role.kubernetes.io/custom-
$ oc delete mcp/custom
  1. View the machineconfignode object for the node previously part of the custom pool and check that the pool name is now properly populating as worker.
$ oc describe machineconfignode <node-name>
...
Spec:
...
 Pool:
   Name:  worker
  1. Check that the pool names are properly populated for all machineconfignode.
$ oc get machineconfignode
NAME                                         POOLNAME   DESIREDCONFIG           CURRENTCONFIG          UPDATED
ip-10-0-101-100.us-west-2.compute.internal   worker     rendered-worker-xxxxx   rendered-worker-xxxxx   True
...

- Description for the changelog
MCO-1501: Added support for custom MCPs in MCN

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@isabella-janssen isabella-janssen force-pushed the mco-1501-customMCPinMCN branch from 4e841c1 to df2762b on March 7, 2025 15:55
Copy link
Contributor

openshift-ci bot commented Mar 7, 2025

@isabella-janssen: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-aws-workers-rhel8 df2762b link false /test e2e-aws-workers-rhel8
ci/prow/e2e-aws-disruptive df2762b link false /test e2e-aws-disruptive
ci/prow/okd-e2e-vsphere df2762b link false /test okd-e2e-vsphere
ci/prow/e2e-azure df2762b link false /test e2e-azure
ci/prow/4.12-upgrade-from-stable-4.11-e2e-aws-ovn-upgrade df2762b link false /test 4.12-upgrade-from-stable-4.11-e2e-aws-ovn-upgrade
ci/prow/e2e-ovirt-upgrade df2762b link false /test e2e-ovirt-upgrade
ci/prow/e2e-azure-ovn-upgrade-out-of-change df2762b link false /test e2e-azure-ovn-upgrade-out-of-change
ci/prow/okd-e2e-aws df2762b link false /test okd-e2e-aws
ci/prow/e2e-ovirt df2762b link false /test e2e-ovirt
ci/prow/e2e-openstack-externallb df2762b link false /test e2e-openstack-externallb
ci/prow/e2e-aws-upgrade-single-node df2762b link false /test e2e-aws-upgrade-single-node
ci/prow/okd-e2e-gcp-op df2762b link false /test okd-e2e-gcp-op
ci/prow/e2e-gcp-op-ocl df2762b link false /test e2e-gcp-op-ocl
ci/prow/e2e-aws-ovn-workers-rhel8 df2762b link false /test e2e-aws-ovn-workers-rhel8
ci/prow/e2e-azure-ovn-upgrade df2762b link false /test e2e-azure-ovn-upgrade
ci/prow/e2e-openstack-parallel df2762b link false /test e2e-openstack-parallel
ci/prow/4.12-upgrade-from-stable-4.11-images df2762b link true /test 4.12-upgrade-from-stable-4.11-images
ci/prow/e2e-aws-single-node df2762b link false /test e2e-aws-single-node
ci/prow/e2e-vsphere-ovn-upi-zones df2762b link false /test e2e-vsphere-ovn-upi-zones
ci/prow/okd-e2e-upgrade df2762b link false /test okd-e2e-upgrade

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@@ -15,6 +15,8 @@ import (
"github.com/openshift/machine-config-operator/pkg/apihelpers"
ctrlcommon "github.com/openshift/machine-config-operator/pkg/controller/common"
"github.com/openshift/machine-config-operator/pkg/daemon/constants"

Copy link
Contributor

nit: extra line

@umohnani8
Copy link
Contributor

changes LGTM

Copy link
Contributor

@yuqi-zhang yuqi-zhang left a comment

Generally lgtm as well! Left 2 comments just to understand some decision reasoning, but neither are blocking and we can go ahead with the current design

@@ -102,14 +106,6 @@ func generateAndApplyMachineConfigNodes(
return nil
}

var pool string
Copy link
Contributor

I'm curious as whether it would make sense to make an equivalent call for GetPrimaryPoolNameForMCN here directly, instead of having the pool passed around multiple levels of functions. Would that be feasible technically?

Copy link
Member Author

I tried this before and agree that in an ideal world this would be a much better pattern. I had issues with the MCP listener getting passed around that's needed for the GetPrimaryPoolNameForMCN function and, at the end of the day, figured the time commitment to debug was not worth it at this stage.

Since I still think there is value in this but do not think it is as high priority as some of the other GA work for MCN, would a fair compromise be to create a minor priority story to revisit updating this pattern when time permits?
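
For readers following along, the refactor being discussed could look roughly like the minimal Go sketch below, where the pool lookup sits behind a small interface so it can be called where the MachineConfigNode spec is built, instead of threading the pool name through several function layers. The names here (poolLookup, PrimaryPoolNameFor, poolNameForMCN) are illustrative assumptions, not the repo's actual helpers.

package example

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// poolLookup stands in for whatever lister-backed helper resolves a node's
// primary MachineConfigPool; the real MCO helper and its signature may differ.
type poolLookup interface {
	PrimaryPoolNameFor(node *corev1.Node) (string, error)
}

// poolNameForMCN resolves the pool name at the point of use, falling back to
// the "unknown" placeholder when the node object is not yet available.
func poolNameForMCN(lookup poolLookup, node *corev1.Node) (string, error) {
	if node == nil {
		// Mirror the PR's behavior: an uninitialized node gets a placeholder
		// rather than an empty (and therefore invalid) pool name.
		return "unknown", nil
	}
	name, err := lookup.PrimaryPoolNameFor(node)
	if err != nil {
		return "", fmt.Errorf("getting primary pool for node %q: %w", node.Name, err)
	}
	return name, nil
}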

// Handle case of nil node
if node == nil {
klog.Error("node object is nil, setting associated MCP to unknown")
return "unknown", nil
Copy link
Contributor

I'm generally ok with unknown being the "pool" for an uninitialized node object, but I think we should generally be careful with introducing keywords (and the infinitesimally small chance that someone actually names a pool unknown, ha).

Is the primary reasoning here that we'd like to differentiate a "erroring" node (which would be "") and an uninitialized node (which would be "unknown")? I would also be fine with having both just be empty string as well.

Copy link
Member Author

Is the primary reasoning here that we'd like to differentiate a "erroring" node (which would be "") and an uninitialized node (which would be "unknown")?

Since the pool name is a required value, it cannot be empty (or ""). In any case when no pool is associated with the node, the pool is filled, temporarily, as "unknown."

I would be fine changing the placeholder value from "unknown" if there is another value that makes more sense.

Copy link
Contributor

I'm fine with this for now. This should be rare and users shouldn't really see this barring something going terribly wrong, so let's keep it as such
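
To pin down the semantics agreed in this thread, here is a small, hypothetical table-driven test against the sketch above: a nil node yields the "unknown" placeholder, while a node with a resolvable pool reports that pool. It is illustrative only and does not correspond to the repo's actual unit tests.

package example

import (
	"testing"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fakeLookup always resolves to a fixed pool name.
type fakeLookup struct{ name string }

func (f fakeLookup) PrimaryPoolNameFor(_ *corev1.Node) (string, error) { return f.name, nil }

func TestPoolNameForMCNPlaceholder(t *testing.T) {
	cases := []struct {
		desc string
		node *corev1.Node
		want string
	}{
		{desc: "nil node falls back to the placeholder", node: nil, want: "unknown"},
		{desc: "node with a resolvable pool reports that pool", node: &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "worker-0"}}, want: "infra"},
	}
	for _, tc := range cases {
		got, err := poolNameForMCN(fakeLookup{name: "infra"}, tc.node)
		if err != nil {
			t.Fatalf("%s: unexpected error: %v", tc.desc, err)
		}
		if got != tc.want {
			t.Errorf("%s: got %q, want %q", tc.desc, got, tc.want)
		}
	}
}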

@djoshy
Copy link
Contributor

djoshy commented Mar 17, 2025

/lgtm

Let's get this in to unblock the remaining MCN work and open a low prio card to track cleaning up the pattern.

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Mar 17, 2025
@djoshy
Copy link
Contributor

djoshy commented Mar 17, 2025

overriding accidentally switched on tests from the master-> main migration (not sure if this is still necessary, but just in case)

/override ci/prow/4.12-upgrade-from-stable-4.11-images
/override ci/prow/okd-images
/override ci/prow/cluster-bootimages

Copy link
Contributor

openshift-ci bot commented Mar 17, 2025

@djoshy: Overrode contexts on behalf of djoshy: ci/prow/4.12-upgrade-from-stable-4.11-images, ci/prow/cluster-bootimages, ci/prow/okd-images

In response to this:

overriding accidentally switched on tests from the master-> main migration (not sure if this is still necessary, but just in case)

/override ci/prow/4.12-upgrade-from-stable-4.11-images
/override ci/prow/okd-images
/override ci/prow/cluster-bootimages

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@djoshy
Copy link
Contributor

djoshy commented Mar 17, 2025

/retest-required

@openshift-ci-robot
Copy link
Contributor

/retest-required

Remaining retests: 0 against base HEAD 13ad337 and 2 for PR HEAD df2762b in total

@openshift-merge-bot openshift-merge-bot bot merged commit d1d9884 into openshift:main Mar 18, 2025
34 of 52 checks passed
@isabella-janssen
Copy link
Member Author

Let's get this in to unblock the remaining MCN work and open a low prio card to track cleaning up the pattern.

Card was created: https://issues.redhat.com/browse/MCO-1610
cc: @yuqi-zhang

@isabella-janssen isabella-janssen deleted the mco-1501-customMCPinMCN branch March 18, 2025 14:03