
Commit 690467c

urezki authored and torvalds committed
mm/vmalloc: Move draining areas out of caller context
A caller initiates the drain process from its own context once the drain
threshold is reached or passed. There are at least two drawbacks to doing
so:

a) The caller can be a high-prio or RT task. In that case it can get stuck
   doing the actual drain of all lazily freed areas. This is not optimal,
   because such tasks are usually latency sensitive and control should be
   returned to them as soon as possible in order to drive such workloads
   in time. See commit 96e2db4 ("mm/vmalloc: rework the drain logic").

b) It is not safe to call vfree() while holding a spinlock, due to the
   vmap_purge_lock mutex. There was a report about this from Zeal Robot
   <[email protected]> here:
   https://lore.kernel.org/all/[email protected]

Moving the drain to a separate work context addresses both issues.

v1->v2:
 - Added the "_work" suffix to the drain worker function.

v2->v3:
 - Removed drain_vmap_work_in_progress. Extra queuing is expected under
   heavy load, but it can be disregarded because the work bails out if
   there is nothing to be done.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Uladzislau Rezki (Sony) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Oleksiy Avramchenko <[email protected]>
Cc: Uladzislau Rezki <[email protected]>
Cc: Vasily Averin <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent 651d55c commit 690467c
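
To see why the deferral helps: schedule_work() only marks the item pending
and enqueues it for a kworker thread, so the freeing path returns almost
immediately instead of purging in place, and the drain then runs in a
context that is allowed to sleep. Below is a minimal, self-contained sketch
of that pattern. The demo_* names are hypothetical stand-ins for the vmalloc
internals (vmap_lazy_nr, vmap_purge_lock, lazy_max_pages()); this is an
illustration of the technique, not the kernel code itself.

#include <linux/atomic.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>

/* Hypothetical stand-ins for vmap_lazy_nr, vmap_purge_lock, lazy_max_pages(). */
static atomic_long_t demo_lazy_nr;
static DEFINE_MUTEX(demo_purge_lock);
#define DEMO_LAZY_MAX 1024

static void demo_drain_work_fn(struct work_struct *work);
static DECLARE_WORK(demo_drain_work, demo_drain_work_fn);

/* Runs in kworker context: may sleep, so taking a mutex here is fine. */
static void demo_drain_work_fn(struct work_struct *work)
{
        long nr_lazy;

        do {
                mutex_lock(&demo_purge_lock);
                /* ... walk and free the lazily freed areas (slow) ... */
                atomic_long_set(&demo_lazy_nr, 0);
                mutex_unlock(&demo_purge_lock);

                /* Recheck: more areas may have gone lazy while draining. */
                nr_lazy = atomic_long_read(&demo_lazy_nr);
        } while (nr_lazy > DEMO_LAZY_MAX);
}

/* Fast path: account the freed pages and, at most, kick the worker. */
static void demo_free_area_noflush(long nr_pages)
{
        long nr_lazy = atomic_long_add_return(nr_pages, &demo_lazy_nr);

        if (unlikely(nr_lazy > DEMO_LAZY_MAX))
                schedule_work(&demo_drain_work);        /* does not block */
}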

File tree

1 file changed: +17 −13 lines changed

mm/vmalloc.c

Lines changed: 17 additions & 13 deletions
@@ -791,6 +791,8 @@ RB_DECLARE_CALLBACKS_MAX(static, free_vmap_area_rb_augment_cb,
 
 static void purge_vmap_area_lazy(void);
 static BLOCKING_NOTIFIER_HEAD(vmap_notify_list);
+static void drain_vmap_area_work(struct work_struct *work);
+static DECLARE_WORK(drain_vmap_work, drain_vmap_area_work);
 
 static atomic_long_t nr_vmalloc_pages;
 
@@ -1717,18 +1719,6 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
         return true;
 }
 
-/*
- * Kick off a purge of the outstanding lazy areas. Don't bother if somebody
- * is already purging.
- */
-static void try_purge_vmap_area_lazy(void)
-{
-        if (mutex_trylock(&vmap_purge_lock)) {
-                __purge_vmap_area_lazy(ULONG_MAX, 0);
-                mutex_unlock(&vmap_purge_lock);
-        }
-}
-
 /*
  * Kick off a purge of the outstanding lazy areas.
  */
@@ -1740,6 +1730,20 @@ static void purge_vmap_area_lazy(void)
         mutex_unlock(&vmap_purge_lock);
 }
 
+static void drain_vmap_area_work(struct work_struct *work)
+{
+        unsigned long nr_lazy;
+
+        do {
+                mutex_lock(&vmap_purge_lock);
+                __purge_vmap_area_lazy(ULONG_MAX, 0);
+                mutex_unlock(&vmap_purge_lock);
+
+                /* Recheck if further work is required. */
+                nr_lazy = atomic_long_read(&vmap_lazy_nr);
+        } while (nr_lazy > lazy_max_pages());
+}
+
 /*
  * Free a vmap area, caller ensuring that the area has been unmapped
  * and flush_cache_vunmap had been called for the correct range
@@ -1766,7 +1770,7 @@ static void free_vmap_area_noflush(struct vmap_area *va)
 
         /* After this point, we may free va at any time */
         if (unlikely(nr_lazy > lazy_max_pages()))
-                try_purge_vmap_area_lazy();
+                schedule_work(&drain_vmap_work);
 }
 
 /*
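
A note on the v2->v3 change above: dropping the in-progress flag is safe
because schedule_work() is a no-op for an item that is still pending (it
returns false without queuing it again), and the pending state is cleared
once a kworker starts executing drain_vmap_area_work(). A free that races
with a running drain therefore just re-queues the work, the do/while
recheck picks up areas that went lazy during a purge pass, and a redundant
run bails out quickly because __purge_vmap_area_lazy() finds nothing to
purge.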
