Under certain conditions, a DeleteSnapshot will never get issued for a deleted volumesnapshot #1258
Comments
Can you show the details of the VolumeSnapshotContent in this case?
I actually had it backed up:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  annotations:
    snapshot.storage.kubernetes.io/deletion-secret-name: ...
    snapshot.storage.kubernetes.io/deletion-secret-namespace: ...
    snapshot.storage.kubernetes.io/volumesnapshot-being-deleted: "yes"
  creationTimestamp: "2024-12-23T08:01:21Z"
  deletionGracePeriodSeconds: 0
  deletionTimestamp: "2024-12-23T08:01:34Z"
  finalizers:
  - snapshot.storage.kubernetes.io/volumesnapshotcontent-bound-protection
  generation: 2
  name: snapcontent-...
  ...
spec:
  deletionPolicy: Delete
  driver: openshift-storage.rbd.csi.ceph.com
  source:
    volumeHandle: ...
  sourceVolumeMode: Block
  volumeSnapshotClassName: ocs-...
  volumeSnapshotRef:
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    name: ...
    namespace: ...
    ...
status:
  creationTime: 1734940895115234196
  readyToUse: true
  restoreSize: 32212254720
  snapshotHandle: ...
```

Actually, now that I think about it some more, it's possible #1259 fixes this even without waiting out the entire resync period.
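For context on why the resync period matters, here is a minimal sketch of the usual client-go work-loop pattern (illustrative names only, not the actual csi-snapshotter code): an errored sync is requeued with backoff, but a sync that returns nil drops the key, so only a later watch event or the informer resync brings the content back. Notably, the report says even waiting out the resync did not recover deletion, which is the puzzling part.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

// processNextItem mirrors the common controller pattern: errors requeue the
// key with backoff, while a nil return drops it from the queue entirely.
func processNextItem(queue workqueue.RateLimitingInterface, sync func(key string) error) bool {
	key, quit := queue.Get()
	if quit {
		return false
	}
	defer queue.Done(key)

	if err := sync(key.(string)); err != nil {
		// Failed syncs are retried soon with rate-limited backoff.
		queue.AddRateLimited(key)
		return true
	}
	// A nil return forgets the key: if the sync wrongly concluded that no
	// DeleteSnapshot was needed, nothing retries it until a new watch
	// event or the next informer resync re-adds the key.
	queue.Forget(key)
	return true
}

func main() {
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	queue.Add("snapcontent-example")
	processNextItem(queue, func(key string) error {
		fmt.Println("sync", key, "-> nil, key dropped without requeue")
		return nil
	})
}
```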
Please provide logs if deletion does not happen.

I'll try to reproduce and increase verbosity. Which value should I set it to?

Does this theory make sense?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale

@xing-yang I think the theory is correct; the PR should be ready for review.
What happened:
Some race that I have yet to figure out causes the sidecar controller to opt out of sending a DeleteSnapshot, so the VolumeSnapshot/VolumeSnapshotContent stays pending deletion indefinitely (including after waiting out the resync period).
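To make the suspected failure mode concrete, here is a hypothetical guard (assumed shape and names, not the real external-snapshotter logic): if the sidecar syncs the content from a stale cache before the volumesnapshot-being-deleted annotation is visible, it skips the DeleteSnapshot and returns nil, and with the work-loop pattern shown earlier nothing ever requeues the content.

```go
package main

import "fmt"

// content is a stripped-down stand-in for a VolumeSnapshotContent object.
type content struct {
	beingDeleted bool // deletionTimestamp is set
	annotations  map[string]string
}

// syncDeletedContent is a hypothetical guard, not the actual sidecar code:
// deletion is only issued once the being-deleted annotation is observed, so
// a stale read that misses the annotation "opts out" of the DeleteSnapshot.
func syncDeletedContent(c content) error {
	if c.beingDeleted &&
		c.annotations["snapshot.storage.kubernetes.io/volumesnapshot-being-deleted"] == "yes" {
		fmt.Println("issuing DeleteSnapshot")
		return nil
	}
	// Returning nil means "nothing to do": the key is forgotten, and the
	// content can stay pending deletion indefinitely.
	fmt.Println("skipping DeleteSnapshot, no requeue")
	return nil
}

func main() {
	stale := content{beingDeleted: true, annotations: map[string]string{}}
	_ = syncDeletedContent(stale) // prints "skipping DeleteSnapshot, no requeue"
}
```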
What you expected to happen:
The VolumeSnapshot/VolumeSnapshotContent should be requeued on some condition so that it is eventually deleted properly.
How to reproduce it:
Seems tough to reproduce reliably, but something along the lines of quickly deleting a recently created snapshot.
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`):
- Kernel (e.g. `uname -a`):