
add PodReplacementPolicy for Deployments: terminating pods #128546

Conversation

@atiratree (Member) commented Nov 4, 2024

What type of PR is this?

/kind feature

What this PR does / why we need it:

A new status field .status.terminatingReplicas is added to Deployments and ReplicaSets to allow tracking of terminating pods.

This is part 1 - please see the special notes.
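
For illustration, a minimal sketch of how the new field would typically be declared on the apps ReplicaSetStatus (the pointer type matches the ptr.Deref/ptr.Equal usage later in this PR; the exact doc comment and struct tags may differ):

type ReplicaSetStatus struct {
    // ... existing fields: Replicas, FullyLabeledReplicas, ReadyReplicas, AvailableReplicas, ObservedGeneration, Conditions ...

    // TerminatingReplicas is the number of terminating pods for this replica set, i.e. pods
    // with a non-nil .metadata.deletionTimestamp that have not yet reached a terminal phase.
    // Nil when the DeploymentReplicaSetTerminatingReplicas feature gate is disabled or the
    // count has not been reported yet.
    // +optional
    TerminatingReplicas *int32 `json:"terminatingReplicas,omitempty"`
}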

Which issue(s) this PR fixes:

tracking issue: kubernetes/enhancements#3973

Special notes for your reviewer:

Does this PR introduce a user-facing change?

A new status field `.status.terminatingReplicas` is added to Deployments and ReplicaSets to allow tracking of terminating pods when the DeploymentReplicaSetTerminatingReplicas feature-gate is enabled.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot added the release-note, kind/feature, size/XXL, do-not-merge/needs-sig, needs-triage, cncf-cla: yes, needs-priority, area/code-generation, area/test, kind/api-change, sig/api-machinery, sig/apps, and sig/testing labels, and removed the do-not-merge/needs-sig label Nov 4, 2024
@k8s-triage-robot

This PR may require API review.

If so, when the changes are ready, complete the pre-review checklist and request an API review.

Status of requested reviews is tracked in the API Review project.

@atiratree force-pushed the pod-replacement-policy-terminating-pods branch from 804f453 to f4a8480 on November 4, 2024 20:29
@atiratree force-pushed the pod-replacement-policy-terminating-pods branch from f4a8480 to 46560dc on November 4, 2024 22:26
@atiratree (Member, Author)

/test pull-kubernetes-e2e-gce

@soltysh (Contributor) left a comment

/lgtm
/approve
from sig-apps pov

@k8s-ci-robot added the lgtm label Nov 5, 2024
@k8s-ci-robot (Contributor)

LGTM label has been added.

Git tree hash: 55eabbc1e4eb3dd523e255d1fdaa13a8b09cb7c8

@soltysh (Contributor) commented Nov 5, 2024

/triage accepted
/priority important-longterm
/label api-review

@k8s-ci-robot added the triage/accepted, priority/important-longterm, and api-review labels Nov 5, 2024
@atiratree (Member, Author)

seems like the #123946 flake is still present
/test pull-kubernetes-integration

@k8s-ci-robot added the needs-rebase label Nov 8, 2024
@atiratree force-pushed the pod-replacement-policy-terminating-pods branch from 642842f to 444ba7b on November 8, 2024 16:19
@k8s-ci-robot removed the needs-rebase label Nov 8, 2024
terminatingReplicas := int32(0)
for _, rs := range replicaSets {
if rs != nil {
terminatingReplicas += ptr.Deref(rs.Status.TerminatingReplicas, 0)
Member

Defaulting a nil here to 0 unconditionally is not correct.

If this replicaset has been synced by the replicaset controller at all (status.observedGeneration != 0) and TerminatingReplicas is nil, we cannot reason about the sum of terminatingReplicas across the replicasets and have to return nil from GetTerminatingReplicaCountForReplicaSets.

Suggested change
terminatingReplicas += ptr.Deref(rs.Status.TerminatingReplicas, 0)
if rs.Status.ObservedGeneration != 0 && rs.Status.TerminatingReplicas == nil {
// we cannot sum TerminatingReplicas from replicasets that have been synced by the replicaset controller but do not have TerminatingReplicas set
return nil
}
// sum TerminatingReplicas reported by the replicaset controller (or 0 for replicasets not yet synced by the controller)
terminatingReplicas += ptr.Deref(rs.Status.TerminatingReplicas, 0)

Member Author

This seems reasonable at first glance. I will try to update the code handling for this in both PRs and test it.

Member Author

I have updated both PRs to handle the pointer fields. It seems to work fine when we check the ObservedGeneration, as we will get non-nil fields during normal operations.

If we are unable to assess the number of terminating pods, the deployment will stop progressing until we are able to make that decision again. As mentioned in #128546 (comment), this should almost never happen.

Member

I think the current implementation does what we want functionally, but is sort of complicated to read because it's trying to accumulate the list of replicaset names we don't have terminating counts for.

I think this can be much simpler like the following:

terminatingReplicas := int32(0)
for _, rs := range replicaSets {
    switch {
    case rs == nil:
        // No-op
    case rs.Status.ObservedGeneration == 0 && rs.Status.TerminatingReplicas == nil:
        // Replicasets that have never been synced by the controller don't contribute to TerminatingReplicas
    case rs.Status.TerminatingReplicas == nil:
        // If any replicaset synced by the controller hasn't reported TerminatingReplicas, we cannot calculate a sum
        return nil
    default:
        terminatingReplicas += *rs.Status.TerminatingReplicas
    }
}
return &terminatingReplicas

Member

I think this is the only outstanding comment

Member Author

Updated to use the above snippet for better clarity. Thx!

@@ -714,6 +714,17 @@ func GetAvailableReplicaCountForReplicaSets(replicaSets []*apps.ReplicaSet) int3
return totalAvailableReplicas
}

// GetTerminatingReplicaCountForReplicaSets returns the number of terminating pods for all replica sets.
func GetTerminatingReplicaCountForReplicaSets(replicaSets []*apps.ReplicaSet) int32 {
Member

This has to return *int32 to distinguish between scenarios where we can sum across replicasets and scenarios where we cannot. We can special-case handling of replicasets never synced by the controller at all and treat terminatingReplicas as 0 in that case; see the comment below.
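
For context, a minimal sketch of what that signature change looks like, with the body following the switch-based suggestion shown elsewhere in this review (package and import details omitted, as in the other snippets here):

// GetTerminatingReplicaCountForReplicaSets returns the sum of terminating pods across all
// replica sets, or nil when the sum cannot be determined because a replica set that has
// been synced by the controller has not reported .status.terminatingReplicas.
func GetTerminatingReplicaCountForReplicaSets(replicaSets []*apps.ReplicaSet) *int32 {
    terminatingReplicas := int32(0)
    for _, rs := range replicaSets {
        switch {
        case rs == nil:
            // No-op
        case rs.Status.ObservedGeneration == 0 && rs.Status.TerminatingReplicas == nil:
            // Replica sets never synced by the controller contribute 0
        case rs.Status.TerminatingReplicas == nil:
            // A synced replica set without a reported count makes the sum indeterminate
            return nil
        default:
            terminatingReplicas += *rs.Status.TerminatingReplicas
        }
    }
    return &terminatingReplicas
}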

if err != nil {
return err
}

if utilfeature.DefaultFeatureGate.Enabled(features.DeploymentPodReplacementPolicy) {
terminatingPods = controller.FilterTerminatingPods(allPods)
terminatingPods, err = rsc.claimPods(ctx, rs, selector, terminatingPods)
Member

just counting terminating pods already managed by the replicaset is a much smaller change to reason about than starting to claim terminating pods as well... (it's not adding any new API calls, it's just tweaking the data we're already reporting upward into ReplicaSet and Deployment)
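
A rough sketch of the smaller alternative being described, assuming FilterTerminatingPods (added in this PR) and metav1.IsControlledBy are used for the filtering and ownership checks; the status variable name below is illustrative, not the PR's final code:

// Count terminating pods this ReplicaSet already owns instead of claiming them:
// no new API calls, only a different number reported upward into ReplicaSet
// (and, via GetTerminatingReplicaCountForReplicaSets, into Deployment) status.
terminatingCount := int32(0)
for _, pod := range controller.FilterTerminatingPods(allPods) {
    if metav1.IsControlledBy(pod, rs) { // already owned; no claiming/adoption involved
        terminatingCount++
    }
}
newStatus.TerminatingReplicas = &terminatingCount // illustrative status assignment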

logger.Error(err, "failed to calculate .status.terminatingReplicas", "deployment", klog.KObj(deployment))
} else {
status.TerminatingReplicas = terminatingReplica
}
Member

assigning nil is not really an error condition, and not something we need to log

I think this can simplify to

status.TerminatingReplicas = deploymentutil.GetTerminatingReplicaCountForReplicaSets(allRSs)

and then we won't need the context / logger

Member Author

I think it would be better if we could have some reporting when we fail to compute the terminating replicas, as it should work in most scenarios and someone might depend on it.

I guess error logging might be too strong. Would you be okay with logging this with a debug verbosity?

Member

I really don't think we need to log (or should)... the only time we reach this code path and this gets set to nil is when one of the owned replicaset fields is nil. That is observable via the API if anyone is trying to understand why the field is nil.

Member Author

It seemed to be at a similar level to other debug logging we have in the deployment controller.

Ok, I guess users can infer it from observing the API. Updated.

@atiratree force-pushed the pod-replacement-policy-terminating-pods branch from b0e9ae4 to 3b7a1fa on January 14, 2025 22:46
@atiratree force-pushed the pod-replacement-policy-terminating-pods branch from 3b7a1fa to e263b87 on January 23, 2025 21:38
@liggitt (Member) commented Jan 23, 2025

/lgtm
/approve

@k8s-ci-robot added the lgtm label Jan 23, 2025
@k8s-ci-robot (Contributor)

LGTM label has been added.

Git tree hash: b6d53d866f9cb4cf1f60c2b049ef73887928384b

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: atiratree, liggitt, soltysh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label Jan 23, 2025
@k8s-ci-robot merged commit 5aeea45 into kubernetes:master Jan 24, 2025
16 checks passed
@k8s-ci-robot added this to the v1.33 milestone Jan 24, 2025
@@ -42,6 +45,7 @@ func updateReplicaSetStatus(logger klog.Logger, c appsclient.ReplicaSetInterface
rs.Status.FullyLabeledReplicas == newStatus.FullyLabeledReplicas &&
rs.Status.ReadyReplicas == newStatus.ReadyReplicas &&
rs.Status.AvailableReplicas == newStatus.AvailableReplicas &&
ptr.Equal(rs.Status.TerminatingReplicas, newStatus.TerminatingReplicas) &&
Member

I just remembered we run an instance of the replicaset controller tied to an informer watching core/v1/replicationcontrollers objects:

// NewReplicationManager configures a replication manager with the specified event recorder
func NewReplicationManager(ctx context.Context, podInformer coreinformers.PodInformer, rcInformer coreinformers.ReplicationControllerInformer, kubeClient clientset.Interface, burstReplicas int) *ReplicationManager {
    logger := klog.FromContext(ctx)
    eventBroadcaster := record.NewBroadcaster(record.WithContext(ctx))
    return &ReplicationManager{
        *replicaset.NewBaseController(logger, informerAdapter{rcInformer}, podInformer, clientsetAdapter{kubeClient}, burstReplicas,
            v1.SchemeGroupVersion.WithKind("ReplicationController"),
            "replication_controller",
            "replicationmanager",
            podControlAdapter{controller.RealPodControl{
                KubeClient: kubeClient,
                Recorder:   eventBroadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "replication-controller"}),
            }},
            eventBroadcaster,
        ),
    }
}

That instance converts replicationcontroller objects to/from replicaset objects in-memory:

func Convert_v1_ReplicationControllerStatus_To_apps_ReplicaSetStatus(in *v1.ReplicationControllerStatus, out *apps.ReplicaSetStatus, s conversion.Scope) error {
    out.Replicas = in.Replicas
    out.FullyLabeledReplicas = in.FullyLabeledReplicas
    out.ReadyReplicas = in.ReadyReplicas
    out.AvailableReplicas = in.AvailableReplicas
    out.ObservedGeneration = in.ObservedGeneration
    for _, cond := range in.Conditions {
        out.Conditions = append(out.Conditions, apps.ReplicaSetCondition{
            Type:               apps.ReplicaSetConditionType(cond.Type),
            Status:             core.ConditionStatus(cond.Status),
            LastTransitionTime: cond.LastTransitionTime,
            Reason:             cond.Reason,
            Message:            cond.Message,
        })
    }
    return nil
}

func Convert_apps_ReplicaSetStatus_To_v1_ReplicationControllerStatus(in *apps.ReplicaSetStatus, out *v1.ReplicationControllerStatus, s conversion.Scope) error {
    out.Replicas = in.Replicas
    out.FullyLabeledReplicas = in.FullyLabeledReplicas
    out.ReadyReplicas = in.ReadyReplicas
    out.AvailableReplicas = in.AvailableReplicas
    out.ObservedGeneration = in.ObservedGeneration
    for _, cond := range in.Conditions {
        out.Conditions = append(out.Conditions, v1.ReplicationControllerCondition{
            Type:               v1.ReplicationControllerConditionType(cond.Type),
            Status:             v1.ConditionStatus(cond.Status),
            LastTransitionTime: cond.LastTransitionTime,
            Reason:             cond.Reason,
            Message:            cond.Message,
        })
    }
    return nil
}

Because we didn't add the new status field to core/v1/replicationcontroller (and I don't think we should), this updateReplicaSetStatus function will always think we have to update the replication controller (because the TerminatingReplicas field from the API will always be nil).

We need to handle this somehow so that we only populate and check this field when handling actual replicasets.

We also need to improve the fuzz test in TestReplicationControllerConversion to make sure we flag differences between ReplicaSet status and ReplicationController status so that we don't end up with landmines in the replicaset controller.

This has to happen before DeploymentPodReplacementPolicy graduates from alpha.
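
Purely as an illustration of the kind of guard being asked for (not this PR's fix; the kind check and field names below are hypothetical), the shared controller could drop the field whenever it is driving ReplicationController objects, before statuses are compared:

// Hypothetical sketch: core/v1 ReplicationControllerStatus has no terminatingReplicas
// field, so never report one from the replication manager path; otherwise the value read
// back from the API is always nil and updateReplicaSetStatus sees a permanent diff.
if rsc.GroupVersionKind.Kind == "ReplicationController" {
    newStatus.TerminatingReplicas = nil
}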

Labels: api-review, approved, area/code-generation, area/test, cncf-cla: yes, kind/api-change, kind/feature, lgtm, priority/important-longterm, release-note, sig/api-machinery, sig/apps, sig/testing, size/XXL, triage/accepted
Project status: API review completed, 1.33
5 participants