Commit fe35004

Authored by Satoru Moriya; committed by torvalds
mm: avoid swapping out with swappiness==0
Sometimes we'd like to avoid swapping out anonymous memory. In particular, avoid swapping out pages of important processes or process groups while there is a reasonable amount of pagecache in RAM, so that we can satisfy our customers' requirements.

OTOH, we can control how aggressively the kernel swaps memory pages with /proc/sys/vm/swappiness for global reclaim and /sys/fs/cgroup/memory/memory.swappiness for each memcg. But with the current reclaim implementation, the kernel may swap out even if we set swappiness==0 and there is pagecache in RAM.

This patch changes the behavior with swappiness==0. If we set swappiness==0, the kernel does not swap out at all (for global reclaim, until the amount of free pages and file-backed pages in a zone has been reduced to something very small: nr_free + nr_filebacked < high watermark).

Signed-off-by: Satoru Moriya <[email protected]>
Acked-by: Minchan Kim <[email protected]>
Reviewed-by: Rik van Riel <[email protected]>
Acked-by: Jerome Marchand <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
1 parent c50ac05 commit fe35004

File tree

1 file changed: +3, -3 lines

mm/vmscan.c

Lines changed: 3 additions & 3 deletions
@@ -1761,10 +1761,10 @@ static void get_scan_count(struct mem_cgroup_zone *mz, struct scan_control *sc,
 	 * proportional to the fraction of recently scanned pages on
 	 * each list that were recently referenced and in active use.
 	 */
-	ap = (anon_prio + 1) * (reclaim_stat->recent_scanned[0] + 1);
+	ap = anon_prio * (reclaim_stat->recent_scanned[0] + 1);
 	ap /= reclaim_stat->recent_rotated[0] + 1;
 
-	fp = (file_prio + 1) * (reclaim_stat->recent_scanned[1] + 1);
+	fp = file_prio * (reclaim_stat->recent_scanned[1] + 1);
 	fp /= reclaim_stat->recent_rotated[1] + 1;
 	spin_unlock_irq(&mz->zone->lru_lock);
 
@@ -1777,7 +1777,7 @@ static void get_scan_count(struct mem_cgroup_zone *mz, struct scan_control *sc,
 		unsigned long scan;
 
 		scan = zone_nr_lru_pages(mz, lru);
-		if (priority || noswap) {
+		if (priority || noswap || !vmscan_swappiness(mz, sc)) {
 			scan >>= priority;
 			if (!scan && force_scan)
 				scan = SWAP_CLUSTER_MAX;
