
Commit 08282b1

zzjianhui authored and gregkh committed
mm/userfaultfd: fix hugetlb fault mutex hash calculation
commit 0217c7f upstream.

In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the page index for hugetlb_fault_mutex_hash(). However, linear_page_index() returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash() expects the index in huge page units. This mismatch means that different addresses within the same huge page can produce different hash values, leading to the use of different mutexes for the same huge page. This can cause races between faulting threads, which can corrupt the reservation map and trigger the BUG_ON in resv_map_release().

Fix this by introducing hugetlb_linear_page_index(), which returns the page index in huge page granularity, and using it in place of linear_page_index().

Link: https://lkml.kernel.org/r/[email protected]
Fixes: a08c719 ("mm/filemap: remove hugetlb special casing in filemap.c")
Signed-off-by: Jianhui Zhou <[email protected]>
Reported-by: [email protected]
Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
Acked-by: SeongJae Park <[email protected]>
Reviewed-by: David Hildenbrand (Arm) <[email protected]>
Acked-by: Mike Rapoport (Microsoft) <[email protected]>
Cc: Jane Chu <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: JonasZhou <[email protected]>
Cc: Muchun Song <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Peter Xu <[email protected]>
Cc: SeongJae Park <[email protected]>
Cc: Sidhartha Kumar <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
1 parent 2145c71 commit 08282b1

2 files changed, 18 additions & 1 deletion

include/linux/hugetlb.h

Lines changed: 17 additions & 0 deletions
@@ -796,6 +796,23 @@ static inline unsigned huge_page_shift(struct hstate *h)
 	return h->order + PAGE_SHIFT;
 }
 
+/**
+ * hugetlb_linear_page_index() - linear_page_index() but in hugetlb
+ * page size granularity.
+ * @vma: the hugetlb VMA
+ * @address: the virtual address within the VMA
+ *
+ * Return: the page offset within the mapping in huge page units.
+ */
+static inline pgoff_t hugetlb_linear_page_index(struct vm_area_struct *vma,
+		unsigned long address)
+{
+	struct hstate *h = hstate_vma(vma);
+
+	return ((address - vma->vm_start) >> huge_page_shift(h)) +
+		(vma->vm_pgoff >> huge_page_order(h));
+}
+
 static inline bool order_is_gigantic(unsigned int order)
 {
 	return order > MAX_PAGE_ORDER;

mm/userfaultfd.c

Lines changed: 1 addition & 1 deletion
@@ -573,7 +573,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 		 * in the case of shared pmds. fault mutex prevents
 		 * races with other faulting threads.
 		 */
-		idx = linear_page_index(dst_vma, dst_addr);
+		idx = hugetlb_linear_page_index(dst_vma, dst_addr);
 		mapping = dst_vma->vm_file->f_mapping;
 		hash = hugetlb_fault_mutex_hash(mapping, idx);
 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
