
GH-124715: Move trashcan mechanism into Py_Dealloc #132280


Merged (9 commits) on Apr 30, 2025
Changes from 4 commits
73 changes: 3 additions & 70 deletions Include/cpython/object.h
@@ -429,81 +429,14 @@ PyAPI_FUNC(void) _Py_NO_RETURN _PyObject_AssertFailed(
const char *function);


/* Trashcan mechanism, thanks to Christian Tismer.

When deallocating a container object, it's possible to trigger an unbounded
chain of deallocations, as each Py_DECREF in turn drops the refcount on "the
next" object in the chain to 0. This can easily lead to stack overflows,
especially in threads (which typically have less stack space to work with).

A container object can avoid this by bracketing the body of its tp_dealloc
function with a pair of macros:

static void
mytype_dealloc(mytype *p)
{
... declarations go here ...

PyObject_GC_UnTrack(p); // must untrack first
Py_TRASHCAN_BEGIN(p, mytype_dealloc)
... The body of the deallocator goes here, including all calls ...
... to Py_DECREF on contained objects. ...
Py_TRASHCAN_END // there should be no code after this
}

CAUTION: Never return from the middle of the body! If the body needs to
"get out early", put a label immediately before the Py_TRASHCAN_END
call, and goto it. Else the call-depth counter (see below) will stay
above 0 forever, and the trashcan will never get emptied.

How it works: The BEGIN macro increments a call-depth counter. So long
as this counter is small, the body of the deallocator is run directly without
further ado. But if the counter gets large, it instead adds p to a list of
objects to be deallocated later, skips the body of the deallocator, and
resumes execution after the END macro. The tp_dealloc routine then returns
without deallocating anything (and so unbounded call-stack depth is avoided).

When the call stack finishes unwinding again, code generated by the END macro
notices this, and calls another routine to deallocate all the objects that
may have been added to the list of deferred deallocations. In effect, a
chain of N deallocations is broken into (N-1)/(Py_TRASHCAN_HEADROOM-1) pieces,
with the call stack never exceeding a depth of Py_TRASHCAN_HEADROOM.

Since the tp_dealloc of a subclass typically calls the tp_dealloc of the base
class, we need to ensure that the trashcan is only triggered on the tp_dealloc
of the actual class being deallocated. Otherwise we might end up with a
partially-deallocated object. To check this, the tp_dealloc function must be
passed as second argument to Py_TRASHCAN_BEGIN().
*/


PyAPI_FUNC(void) _PyTrash_thread_deposit_object(PyThreadState *tstate, PyObject *op);
PyAPI_FUNC(void) _PyTrash_thread_destroy_chain(PyThreadState *tstate);


/* Python 3.10 private API, invoked by the Py_TRASHCAN_BEGIN(). */

/* To avoid raising recursion errors during dealloc trigger trashcan before we reach
* recursion limit. To avoid trashing, we don't attempt to empty the trashcan until
* we have headroom above the trigger limit */
#define Py_TRASHCAN_HEADROOM 50

/* Helper function for Py_TRASHCAN_BEGIN */
PyAPI_FUNC(int) _Py_ReachedRecursionLimitWithMargin(PyThreadState *tstate, int margin_count);

#define Py_TRASHCAN_BEGIN(op, dealloc) \
do { \
PyThreadState *tstate = PyThreadState_Get(); \
if (_Py_ReachedRecursionLimitWithMargin(tstate, 2) && Py_TYPE(op)->tp_dealloc == (destructor)dealloc) { \
_PyTrash_thread_deposit_object(tstate, (PyObject *)op); \
break; \
}
/* The body of the deallocator is here. */
#define Py_TRASHCAN_END \
if (tstate->delete_later && !_Py_ReachedRecursionLimitWithMargin(tstate, 4)) { \
_PyTrash_thread_destroy_chain(tstate); \
} \
} while (0);
/* For backwards compatibility with the old trashcan mechanism */
#define Py_TRASHCAN_BEGIN(op, dealloc)
#define Py_TRASHCAN_END


PyAPI_FUNC(void *) PyObject_GetItemData(PyObject *obj);
@@ -0,0 +1,3 @@
Protects against stack overflows when calling `Py_DECREF`. Third-party
extension objects no longer need to use the "trashcan" mechanism, as
protection is now built into the `Py_DECREF` macro.
2 changes: 0 additions & 2 deletions Modules/_elementtree.c
@@ -689,7 +689,6 @@ element_dealloc(PyObject *op)

/* bpo-31095: UnTrack is needed before calling any callbacks */
PyObject_GC_UnTrack(self);
Py_TRASHCAN_BEGIN(self, element_dealloc)

if (self->weakreflist != NULL)
PyObject_ClearWeakRefs(op);
@@ -700,7 +699,6 @@ element_dealloc(PyObject *op)

tp->tp_free(self);
Py_DECREF(tp);
Py_TRASHCAN_END
}

/* -------------------------------------------------------------------- */
2 changes: 0 additions & 2 deletions Objects/descrobject.c
@@ -1311,11 +1311,9 @@ wrapper_dealloc(PyObject *self)
{
wrapperobject *wp = (wrapperobject *)self;
PyObject_GC_UnTrack(wp);
Py_TRASHCAN_BEGIN(wp, wrapper_dealloc)
Py_XDECREF(wp->descr);
Py_XDECREF(wp->self);
PyObject_GC_Del(wp);
Py_TRASHCAN_END
}

static PyObject *
2 changes: 0 additions & 2 deletions Objects/dictobject.c
@@ -3262,7 +3262,6 @@ dict_dealloc(PyObject *self)

/* bpo-31095: UnTrack is needed before calling any callbacks */
PyObject_GC_UnTrack(mp);
Py_TRASHCAN_BEGIN(mp, dict_dealloc)
if (values != NULL) {
if (values->embedded == 0) {
for (i = 0, n = values->capacity; i < n; i++) {
@@ -3282,7 +3281,6 @@ dict_dealloc(PyObject *self)
else {
Py_TYPE(mp)->tp_free((PyObject *)mp);
}
Py_TRASHCAN_END
}


2 changes: 0 additions & 2 deletions Objects/exceptions.c
@@ -150,10 +150,8 @@ BaseException_dealloc(PyObject *op)
// bpo-44348: The trashcan mechanism prevents stack overflow when deleting
// long chains of exceptions. For example, exceptions can be chained
// through the __context__ attributes or the __traceback__ attribute.
Py_TRASHCAN_BEGIN(self, BaseException_dealloc)
(void)BaseException_clear(op);
Py_TYPE(self)->tp_free(self);
Py_TRASHCAN_END
}

static int
2 changes: 0 additions & 2 deletions Objects/frameobject.c
@@ -1916,7 +1916,6 @@ frame_dealloc(PyObject *op)
_PyObject_GC_UNTRACK(f);
}

Py_TRASHCAN_BEGIN(f, frame_dealloc);
/* GH-106092: If f->f_frame was on the stack and we reached the maximum
* nesting depth for deallocations, the trashcan may have delayed this
* deallocation until after f->f_frame is freed. Avoid dereferencing
@@ -1941,7 +1940,6 @@ frame_dealloc(PyObject *op)
Py_CLEAR(f->f_locals_cache);
Py_CLEAR(f->f_overwritten_fast_locals);
PyObject_GC_Del(f);
Py_TRASHCAN_END;
}

static int
2 changes: 0 additions & 2 deletions Objects/listobject.c
@@ -550,7 +550,6 @@ list_dealloc(PyObject *self)
PyListObject *op = (PyListObject *)self;
Py_ssize_t i;
PyObject_GC_UnTrack(op);
Py_TRASHCAN_BEGIN(op, list_dealloc)
if (op->ob_item != NULL) {
/* Do it backwards, for Christian Tismer.
There's a simple test case where somehow this reduces
@@ -569,7 +568,6 @@ list_dealloc(PyObject *self)
else {
PyObject_GC_Del(op);
}
Py_TRASHCAN_END
}

static PyObject *
4 changes: 0 additions & 4 deletions Objects/methodobject.c
@@ -166,10 +166,7 @@ static void
meth_dealloc(PyObject *self)
{
PyCFunctionObject *m = _PyCFunctionObject_CAST(self);
// The Py_TRASHCAN mechanism requires that we be able to
// call PyObject_GC_UnTrack twice on an object.
PyObject_GC_UnTrack(m);
Py_TRASHCAN_BEGIN(m, meth_dealloc);
if (m->m_weakreflist != NULL) {
PyObject_ClearWeakRefs((PyObject*) m);
}
@@ -186,7 +183,6 @@ meth_dealloc(PyObject *self)
assert(Py_IS_TYPE(self, &PyCFunction_Type));
_Py_FREELIST_FREE(pycfunctionobject, m, PyObject_GC_Del);
}
Py_TRASHCAN_END;
}

static PyObject *
23 changes: 18 additions & 5 deletions Objects/object.c
@@ -2908,13 +2908,11 @@ Py_ReprLeave(PyObject *obj)
void
_PyTrash_thread_deposit_object(PyThreadState *tstate, PyObject *op)
{
_PyObject_ASSERT(op, _PyObject_IS_GC(op));
_PyObject_ASSERT(op, !_PyObject_GC_IS_TRACKED(op));
_PyObject_ASSERT(op, Py_REFCNT(op) == 0);
#ifdef Py_GIL_DISABLED
op->ob_tid = (uintptr_t)tstate->delete_later;
#else
_PyGCHead_SET_PREV(_Py_AS_GC(op), (PyGC_Head*)tstate->delete_later);
*((PyObject**)op) = tstate->delete_later;
#endif
tstate->delete_later = op;
}
@@ -2933,7 +2931,8 @@ _PyTrash_thread_destroy_chain(PyThreadState *tstate)
op->ob_tid = 0;
_Py_atomic_store_ssize_relaxed(&op->ob_ref_shared, _Py_REF_MERGED);
#else
tstate->delete_later = (PyObject*) _PyGCHead_PREV(_Py_AS_GC(op));
tstate->delete_later = *((PyObject**)op);
op->ob_refcnt = 0;
#endif

/* Call the deallocator directly. This used to try to
@@ -2998,13 +2997,24 @@ _PyObject_AssertFailed(PyObject *obj, const char *expr, const char *msg,
}


/*
When deallocating a container object, it's possible to trigger an unbounded
chain of deallocations, as each Py_DECREF in turn drops the refcount on "the
next" object in the chain to 0. This can easily lead to stack overflows.
To avoid that, if the C stack is nearing its limit, instead of calling
dealloc on the object, it is added to a queue to be freed later when the
stack is shallower */
void
_Py_Dealloc(PyObject *op)
{
PyTypeObject *type = Py_TYPE(op);
destructor dealloc = type->tp_dealloc;
#ifdef Py_DEBUG
PyThreadState *tstate = _PyThreadState_GET();
if (_Py_ReachedRecursionLimitWithMargin(tstate, 2)) {
_PyTrash_thread_deposit_object(tstate, (PyObject *)op);
return;
}
#ifdef Py_DEBUG
#if !defined(Py_GIL_DISABLED) && !defined(Py_STACKREF_DEBUG)
/* This assertion doesn't hold for the free-threading build, as
* PyStackRef_CLOSE_SPECIALIZED is not implemented */
@@ -3046,6 +3056,9 @@ _Py_Dealloc(PyObject *op)
Py_XDECREF(old_exc);
Py_DECREF(type);
#endif
if (tstate->delete_later && !_Py_ReachedRecursionLimitWithMargin(tstate, 4)) {
_PyTrash_thread_destroy_chain(tstate);
}
}


3 changes: 0 additions & 3 deletions Objects/odictobject.c
@@ -1389,16 +1389,13 @@ odict_dealloc(PyObject *op)
{
PyODictObject *self = _PyODictObject_CAST(op);
PyObject_GC_UnTrack(self);
Py_TRASHCAN_BEGIN(self, odict_dealloc)

Py_XDECREF(self->od_inst_dict);
if (self->od_weakreflist != NULL)
PyObject_ClearWeakRefs((PyObject *)self);

_odict_clear_nodes(self);
PyDict_Type.tp_dealloc((PyObject *)self);

Py_TRASHCAN_END
}

/* tp_repr */
2 changes: 0 additions & 2 deletions Objects/setobject.c
@@ -516,7 +516,6 @@ set_dealloc(PyObject *self)

/* bpo-31095: UnTrack is needed before calling any callbacks */
PyObject_GC_UnTrack(so);
Py_TRASHCAN_BEGIN(so, set_dealloc)
if (so->weakreflist != NULL)
PyObject_ClearWeakRefs((PyObject *) so);

@@ -529,7 +528,6 @@ set_dealloc(PyObject *self)
if (so->table != so->smalltable)
PyMem_Free(so->table);
Py_TYPE(so)->tp_free(so);
Py_TRASHCAN_END
}

static PyObject *
3 changes: 0 additions & 3 deletions Objects/tupleobject.c
@@ -207,7 +207,6 @@ tuple_dealloc(PyObject *self)
}

PyObject_GC_UnTrack(op);
Py_TRASHCAN_BEGIN(op, tuple_dealloc)

Py_ssize_t i = Py_SIZE(op);
while (--i >= 0) {
@@ -217,8 +216,6 @@ tuple_dealloc(PyObject *self)
if (!maybe_freelist_push(op)) {
Py_TYPE(op)->tp_free((PyObject *)op);
}

Py_TRASHCAN_END
}

static PyObject *
44 changes: 2 additions & 42 deletions Objects/typeobject.c
@@ -2555,7 +2555,6 @@ subtype_dealloc(PyObject *self)
/* UnTrack and re-Track around the trashcan macro, alas */
/* See explanation at end of function for full disclosure */
PyObject_GC_UnTrack(self);
Py_TRASHCAN_BEGIN(self, subtype_dealloc);

/* Find the nearest base with a different tp_dealloc */
base = type;
@@ -2570,7 +2569,7 @@ subtype_dealloc(PyObject *self)
_PyObject_GC_TRACK(self);
if (PyObject_CallFinalizerFromDealloc(self) < 0) {
/* Resurrected */
goto endlabel;
return;
}
_PyObject_GC_UNTRACK(self);
}
@@ -2592,7 +2591,7 @@ subtype_dealloc(PyObject *self)
type->tp_del(self);
if (Py_REFCNT(self) > 0) {
/* Resurrected */
goto endlabel;
return;
}
_PyObject_GC_UNTRACK(self);
}
@@ -2656,45 +2655,6 @@ subtype_dealloc(PyObject *self)
_Py_DECREF_TYPE(type);
}

endlabel:
Py_TRASHCAN_END

/* Explanation of the weirdness around the trashcan macros:

Q. What do the trashcan macros do?

A. Read the comment titled "Trashcan mechanism" in object.h.
For one, this explains why there must be a call to GC-untrack
before the trashcan begin macro. Without understanding the
trashcan code, the answers to the following questions don't make
sense.

Q. Why do we GC-untrack before the trashcan and then immediately
GC-track again afterward?

A. In the case that the base class is GC-aware, the base class
probably GC-untracks the object. If it does that using the
UNTRACK macro, this will crash when the object is already
untracked. Because we don't know what the base class does, the
only safe thing is to make sure the object is tracked when we
call the base class dealloc. But... The trashcan begin macro
requires that the object is *untracked* before it is called. So
the dance becomes:

GC untrack
trashcan begin
GC track

Q. Why did the last question say "immediately GC-track again"?
It's nowhere near immediately.

A. Because the code *used* to re-track immediately. Bad Idea.
self has a refcount of 0, and if gc ever gets its hands on it
(which can happen if any weakref callback gets invoked), it
looks like trash to gc too, and gc also tries to delete self
then. But we're already deleting self. Double deallocation is
a subtle disaster.
*/
}

static PyTypeObject *solid_base(PyTypeObject *type);
2 changes: 0 additions & 2 deletions Python/bltinmodule.c
@@ -566,11 +566,9 @@ filter_dealloc(PyObject *self)
{
filterobject *lz = _filterobject_CAST(self);
PyObject_GC_UnTrack(lz);
Py_TRASHCAN_BEGIN(lz, filter_dealloc)
Py_XDECREF(lz->func);
Py_XDECREF(lz->it);
Py_TYPE(lz)->tp_free(lz);
Py_TRASHCAN_END
}

static int
2 changes: 1 addition & 1 deletion Python/gc.c
@@ -2201,7 +2201,7 @@ void
PyObject_GC_UnTrack(void *op_raw)
{
PyObject *op = _PyObject_CAST(op_raw);
/* Obscure: the Py_TRASHCAN mechanism requires that we be able to
/* Obscure: the trashcan mechanism requires that we be able to
* call PyObject_GC_UnTrack twice on an object.
*/
if (_PyObject_GC_IS_TRACKED(op)) {
Expand Down