Commit 9acd7af

wenchao-hao authored and kawasaki committed
mm/zswap: defer zs_free() in zswap_invalidate() path
zswap_invalidate() is called on the same process-exit path as zram_slot_free_notify(). The zswap_entry_free() it calls performs zs_free() internally, which is expensive due to zsmalloc's internal locking. Unlike zram, which has a trylock fallback, zswap_invalidate() executes unconditionally, so the latency impact is potentially worse. As with zram, the expensive zs_free() here blocks the process-exit path and delays overall memory release.

Additionally, zswap_entry_free() performs extra work beyond zs_free(): list_lru_del() (which takes its own spinlock), obj_cgroup accounting, and kmem_cache_free() for the entry itself.

Use zs_free_deferred() in the zswap_invalidate() path to defer the expensive zsmalloc handle freeing to a workqueue, allowing the exit path to release memory faster. All other callers (zswap_load(), zswap_writeback_entry(), and the zswap_store() error paths) run in process context and continue to use synchronous zs_free().

Signed-off-by: Wenchao Hao <[email protected]>
1 parent 38f3303 commit 9acd7af

1 file changed: mm/zswap.c (13 additions, 3 deletions)
```diff
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -761,11 +761,16 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
 /*
  * Carries out the common pattern of freeing an entry's zsmalloc allocation,
  * freeing the entry itself, and decrementing the number of stored pages.
+ * When @deferred is true, the zsmalloc handle is queued for async freeing
+ * instead of being freed immediately.
  */
-static void zswap_entry_free(struct zswap_entry *entry)
+static void __zswap_entry_free(struct zswap_entry *entry, bool deferred)
 {
 	zswap_lru_del(&zswap_list_lru, entry);
-	zs_free(entry->pool->zs_pool, entry->handle);
+	if (deferred)
+		zs_free_deferred(entry->pool->zs_pool, entry->handle);
+	else
+		zs_free(entry->pool->zs_pool, entry->handle);
 	zswap_pool_put(entry->pool);
 	if (entry->objcg) {
 		obj_cgroup_uncharge_zswap(entry->objcg, entry->length);
@@ -777,6 +782,11 @@ static void zswap_entry_free(struct zswap_entry *entry)
 	atomic_long_dec(&zswap_stored_pages);
 }
 
+static void zswap_entry_free(struct zswap_entry *entry)
+{
+	__zswap_entry_free(entry, false);
+}
+
 /*********************************
 * compressed storage functions
 **********************************/
@@ -1648,7 +1658,7 @@ void zswap_invalidate(swp_entry_t swp)
 
 	entry = xa_erase(tree, offset);
 	if (entry)
-		zswap_entry_free(entry);
+		__zswap_entry_free(entry, true);
 }
 
 int zswap_swapon(int type, unsigned long nr_pages)
```
