add PodReplacementPolicy for Deployments: terminating pods #128546
Conversation
This PR may require API review. If so, when the changes are ready, complete the pre-review checklist and request an API review. Status of requested reviews is tracked in the API Review project.
Force-pushed from 804f453 to f4a8480
Force-pushed from f4a8480 to 46560dc
/test pull-kubernetes-e2e-gce
/lgtm
/approve
from sig-apps pov
LGTM label has been added. Git tree hash: 55eabbc1e4eb3dd523e255d1fdaa13a8b09cb7c8
/triage accepted
seems like the #123946 flake is still present
Force-pushed from 642842f to 444ba7b
terminatingReplicas := int32(0)
for _, rs := range replicaSets {
	if rs != nil {
		terminatingReplicas += ptr.Deref(rs.Status.TerminatingReplicas, 0)
Defaulting a nil here to 0 unconditionally is not correct.
If this replicaset has been synced by the replicaset controller at all (status.observedGeneration != 0), and TerminatingReplicas is nil, we cannot reason about the sum of terminatingReplicas across the replicasets and have to return nil from GetTerminatingReplicaCountForReplicaSets
Suggested change:
-	terminatingReplicas += ptr.Deref(rs.Status.TerminatingReplicas, 0)
+	if rs.Status.ObservedGeneration != 0 && rs.Status.TerminatingReplicas == nil {
+		// we cannot sum TerminatingReplicas from replicasets that have been synced by the replicaset controller but do not have TerminatingReplicas set
+		return nil
+	}
+	// sum TerminatingReplicas reported by the replicaset controller (or 0 for replicasets not yet synced by the controller)
+	terminatingReplicas += ptr.Deref(rs.Status.TerminatingReplicas, 0)
This seems reasonable at first glance. I will try to update the code handling for this in both PRs and test it.
I have updated both PRs to handle the pointer fields. It seems to work fine when we check the ObservedGeneration, as we will get non-nil fields during normal operations.
If we are unable to assess the number of terminating pods, the deployment will stop progressing until we are able to make that decision again. As mentioned in #128546 (comment), this should almost never happen.
I think the current implementation does what we want functionally, but is sort of complicated to read because it's trying to accumulate the list of replicaset names we don't have terminating counts for.
I think this can be much simpler like the following:
terminatingReplicas := int32(0)
for _, rs := range replicaSets {
	switch {
	case rs == nil:
		// No-op
	case rs.Status.ObservedGeneration == 0 && rs.Status.TerminatingReplicas == nil:
		// Replicasets that have never been synced by the controller don't contribute to TerminatingReplicas
	case rs.Status.TerminatingReplicas == nil:
		// If any replicaset synced by the controller hasn't reported TerminatingReplicas, we cannot calculate a sum
		return nil
	default:
		terminatingReplicas += *rs.Status.TerminatingReplicas
	}
}
return &terminatingReplicas
I think this is the only outstanding comment
Updated to use the above snippet for better clarity. Thx!
@@ -714,6 +714,17 @@ func GetAvailableReplicaCountForReplicaSets(replicaSets []*apps.ReplicaSet) int32
	return totalAvailableReplicas
}

// GetTerminatingReplicaCountForReplicaSets returns the number of terminating pods for all replica sets.
func GetTerminatingReplicaCountForReplicaSets(replicaSets []*apps.ReplicaSet) int32 {
this has to return *int32 to distinguish between scenarios where we can sum across replicasets and scenarios where we cannot. We can special-case handling of replicasets never synced by the controller at all and treat terminatingReplicas as 0 in that case, see the comment below.
	if err != nil {
		return err
	}

	if utilfeature.DefaultFeatureGate.Enabled(features.DeploymentPodReplacementPolicy) {
		terminatingPods = controller.FilterTerminatingPods(allPods)
		terminatingPods, err = rsc.claimPods(ctx, rs, selector, terminatingPods)
just counting terminating pods already managed by the replicaset is a much smaller change to reason about than starting to claim terminating pods as well... (it's not adding any new API calls, it's just tweaking the data we're already reporting upward into ReplicaSet and Deployment)
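To illustrate the count-only approach, here is a hedged sketch (not the PR's code): terminating pods are identified among the pods already controlled by the ReplicaSet, so no adoption or release API calls are introduced. The helper name below is hypothetical.

import (
	apps "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// countOwnedTerminatingPods is a hypothetical helper: it counts pods that are
// already controlled by this ReplicaSet and have a deletion timestamp set,
// without claiming (adopting or releasing) any pods and without extra API calls.
func countOwnedTerminatingPods(rs *apps.ReplicaSet, pods []*v1.Pod) int32 {
	var count int32
	for _, pod := range pods {
		if pod.DeletionTimestamp != nil && metav1.IsControlledBy(pod, rs) {
			count++
		}
	}
	return count
}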
Force-pushed from 444ba7b to b0e9ae4
pkg/controller/deployment/sync.go (outdated)
	logger.Error(err, "failed to calculate .status.terminatingReplicas", "deployment", klog.KObj(deployment))
} else {
	status.TerminatingReplicas = terminatingReplica
}
assigning nil is not really an error condition, and not something we need to log
I think this can simplify to
status.TerminatingReplicas = deploymentutil.GetTerminatingReplicaCountForReplicaSets(allRSs)
and then we won't need the context / logger
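For context, a hedged sketch of how the simplified, feature-gated assignment could sit in the deployment controller's status calculation (the surrounding function shape is assumed; helper and feature names are taken from the discussion above):

// Inside the status calculation (shape assumed): a nil result simply means
// "unknown", so no error handling or logging is needed here.
if utilfeature.DefaultFeatureGate.Enabled(features.DeploymentPodReplacementPolicy) {
	status.TerminatingReplicas = deploymentutil.GetTerminatingReplicaCountForReplicaSets(allRSs)
}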
I think it would be better if we could have some reporting when we fail to compute the terminating replicas, as it should work in most scenarios and someone might depend on it.
I guess error logging might be too strong. Would you be okay with logging this with a debug verbosity?
I really don't think we need to log (or should)... the only time we reach this code path and this gets set to nil is when one of the owned replicaset fields is nil. That is observable via the API if anyone is trying to understand why the field is nil.
It seemed to be at a similar level to other debug logging we have in the deployment controller.
Ok, I guess users can infer it from observing the API. Updated.
Force-pushed from b0e9ae4 to 3b7a1fa
- update internal ReplicaSet and Deployment type documentation to match the versioned API
- made ReplicaSet and Deployment type documentation more consistent
Force-pushed from 3b7a1fa to e263b87
/lgtm
LGTM label has been added. Git tree hash: b6d53d866f9cb4cf1f60c2b049ef73887928384b
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: atiratree, liggitt, soltysh

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@@ -42,6 +45,7 @@ func updateReplicaSetStatus(logger klog.Logger, c appsclient.ReplicaSetInterface
		rs.Status.FullyLabeledReplicas == newStatus.FullyLabeledReplicas &&
		rs.Status.ReadyReplicas == newStatus.ReadyReplicas &&
		rs.Status.AvailableReplicas == newStatus.AvailableReplicas &&
		ptr.Equal(rs.Status.TerminatingReplicas, newStatus.TerminatingReplicas) &&
I just remembered we run an instance of the replicaset controller tied to an informer watching core/v1/replicationcontrollers objects:
kubernetes/pkg/controller/replication/replication_controller.go (lines 52 to 68 in 83bb5d5):
// NewReplicationManager configures a replication manager with the specified event recorder
func NewReplicationManager(ctx context.Context, podInformer coreinformers.PodInformer, rcInformer coreinformers.ReplicationControllerInformer, kubeClient clientset.Interface, burstReplicas int) *ReplicationManager {
	logger := klog.FromContext(ctx)
	eventBroadcaster := record.NewBroadcaster(record.WithContext(ctx))
	return &ReplicationManager{
		*replicaset.NewBaseController(logger, informerAdapter{rcInformer}, podInformer, clientsetAdapter{kubeClient}, burstReplicas,
			v1.SchemeGroupVersion.WithKind("ReplicationController"),
			"replication_controller",
			"replicationmanager",
			podControlAdapter{controller.RealPodControl{
				KubeClient: kubeClient,
				Recorder:   eventBroadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "replication-controller"}),
			}},
			eventBroadcaster,
		),
	}
}
That instance converts replicationcontroller objects to/from replicaset objects in-memory:
kubernetes/pkg/apis/core/v1/conversion.go (lines 132 to 148 in 83bb5d5):
func Convert_v1_ReplicationControllerStatus_To_apps_ReplicaSetStatus(in *v1.ReplicationControllerStatus, out *apps.ReplicaSetStatus, s conversion.Scope) error {
	out.Replicas = in.Replicas
	out.FullyLabeledReplicas = in.FullyLabeledReplicas
	out.ReadyReplicas = in.ReadyReplicas
	out.AvailableReplicas = in.AvailableReplicas
	out.ObservedGeneration = in.ObservedGeneration
	for _, cond := range in.Conditions {
		out.Conditions = append(out.Conditions, apps.ReplicaSetCondition{
			Type:               apps.ReplicaSetConditionType(cond.Type),
			Status:             core.ConditionStatus(cond.Status),
			LastTransitionTime: cond.LastTransitionTime,
			Reason:             cond.Reason,
			Message:            cond.Message,
		})
	}
	return nil
}
kubernetes/pkg/apis/core/v1/conversion.go (lines 183 to 199 in 83bb5d5):
func Convert_apps_ReplicaSetStatus_To_v1_ReplicationControllerStatus(in *apps.ReplicaSetStatus, out *v1.ReplicationControllerStatus, s conversion.Scope) error {
	out.Replicas = in.Replicas
	out.FullyLabeledReplicas = in.FullyLabeledReplicas
	out.ReadyReplicas = in.ReadyReplicas
	out.AvailableReplicas = in.AvailableReplicas
	out.ObservedGeneration = in.ObservedGeneration
	for _, cond := range in.Conditions {
		out.Conditions = append(out.Conditions, v1.ReplicationControllerCondition{
			Type:               v1.ReplicationControllerConditionType(cond.Type),
			Status:             v1.ConditionStatus(cond.Status),
			LastTransitionTime: cond.LastTransitionTime,
			Reason:             cond.Reason,
			Message:            cond.Message,
		})
	}
	return nil
}
Because we didn't add the new status field to core/v1/replicationcontroller (and I don't think we should), this updateReplicaSetStatus function will always think we have to update the replication controller (because the TerminatingReplicas field from the API will always be nil).
We need to handle this to only populate and check this field when handling actual replicasets somehow.
We also need to improve the fuzz test in TestReplicationControllerConversion to make sure we flag differences between ReplicaSet status and ReplicationController status so that we don't end up with landmines in the replicaset controller.
This has to happen before DeploymentPodReplacementPolicy graduates from alpha.
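One hedged way the conversion fuzz test could flag such divergences is a reflection-based comparison of the two status structs, with an explicit allowlist for fields (like TerminatingReplicas) that intentionally exist only on ReplicaSetStatus. The helper below is hypothetical, not the code that was merged:

import "reflect"

// fieldNames returns the struct field names of t; a hypothetical helper for a
// reflection-based guard in TestReplicationControllerConversion.
func fieldNames(t reflect.Type) map[string]bool {
	names := map[string]bool{}
	for i := 0; i < t.NumField(); i++ {
		names[t.Field(i).Name] = true
	}
	return names
}

// The test could compare fieldNames(reflect.TypeOf(apps.ReplicaSetStatus{}))
// with fieldNames(reflect.TypeOf(v1.ReplicationControllerStatus{})) and fail on
// any difference not covered by a maintained allowlist, so a new ReplicaSetStatus
// field cannot silently become a landmine in the shared replicaset controller code.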
What type of PR is this?
/kind feature
What this PR does / why we need it:
A new status field .status.terminatingReplicas is added to Deployments and ReplicaSets to allow tracking of terminating pods. This is part 1 - please see the special notes.
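For illustration, a hedged client-go example of reading the new field once the DeploymentPodReplacementPolicy feature gate is enabled (the clientset, namespace, and deployment name are placeholders); a nil value means the count is currently unknown:

d, err := clientset.AppsV1().Deployments("default").Get(ctx, "example-deployment", metav1.GetOptions{})
if err != nil {
	return err
}
if d.Status.TerminatingReplicas != nil {
	fmt.Printf("terminating replicas: %d\n", *d.Status.TerminatingReplicas)
} else {
	fmt.Println("terminating replicas: unknown")
}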
Which issue(s) this PR fixes:
tracking issue: kubernetes/enhancements#3973
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: