
Commit 8ebe0a5

rikvanriel authored and akpm00 committed
mm,madvise,hugetlb: fix unexpected data loss with MADV_DONTNEED on hugetlbfs
A common use case for hugetlbfs is for the application to create memory pools
backed by huge pages, which then get handed over to some malloc library
(eg. jemalloc) for further management. That malloc library may be doing
MADV_DONTNEED calls on memory that is no longer needed, expecting those calls
to happen on PAGE_SIZE boundaries.

However, currently the MADV_DONTNEED code rounds up any such requests to
HPAGE_PMD_SIZE boundaries. This leads to undesired outcomes when jemalloc
expects a 4kB MADV_DONTNEED, but 2MB of memory get zeroed out, instead.

Use of pre-built shared libraries means that user code does not always know
the page size of every memory arena in use.

Avoid unexpected data loss with MADV_DONTNEED by rounding up only to PAGE_SIZE
(in do_madvise), and rounding down to huge page granularity. That way programs
will only get as much memory zeroed out as they requested.

Link: https://lkml.kernel.org/r/[email protected]
Fixes: 90e7e7f ("mm: enable MADV_DONTNEED for hugetlb mappings")
Signed-off-by: Rik van Riel <[email protected]>
Reviewed-by: Mike Kravetz <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent fba4eaf commit 8ebe0a5

File tree

1 file changed (+11 −1 lines changed)


mm/madvise.c

Lines changed: 11 additions & 1 deletion
@@ -813,7 +813,14 @@ static bool madvise_dontneed_free_valid_vma(struct vm_area_struct *vma,
 	if (start & ~huge_page_mask(hstate_vma(vma)))
 		return false;
 
-	*end = ALIGN(*end, huge_page_size(hstate_vma(vma)));
+	/*
+	 * Madvise callers expect the length to be rounded up to PAGE_SIZE
+	 * boundaries, and may be unaware that this VMA uses huge pages.
+	 * Avoid unexpected data loss by rounding down the number of
+	 * huge pages freed.
+	 */
+	*end = ALIGN_DOWN(*end, huge_page_size(hstate_vma(vma)));
 
 	return true;
 }

@@ -828,6 +835,9 @@ static long madvise_dontneed_free(struct vm_area_struct *vma,
 	if (!madvise_dontneed_free_valid_vma(vma, start, &end, behavior))
 		return -EINVAL;
 
+	if (start == end)
+		return 0;
+
 	if (!userfaultfd_remove(vma, start, end)) {
 		*prev = NULL; /* mmap_lock has been dropped, prev is stale */
 