
Commit 0a8321d

riteshharjani authored and maddy-kerneldev committed
powerpc/mem: Move CMA reservations to arch_mm_preinit
commit 4267739 ("arch, mm: consolidate initialization of SPARSE memory model") changed the initialization order of pageblock_order from:

    start_kernel()
      setup_arch()
        initmem_init()
          sparse_init()
            set_pageblock_order()  // this sets the pageblock_order
        xxx_cma_reserve()

to:

    start_kernel()
      setup_arch()
        xxx_cma_reserve()
      mm_core_init_early()
        free_area_init()
          sparse_init()
            set_pageblock_order()  // this sets the pageblock_order

This means pageblock_order is no longer initialized before these CMA reservation function calls, hence we are seeing CMA failures like:

    [    0.000000] kvm_cma_reserve: reserving 3276 MiB for global area
    [    0.000000] cma: pageblock_order not yet initialized. Called during early boot?
    [    0.000000] cma: Failed to reserve 3276 MiB
    ...
    [    0.000000][    T0] cma: pageblock_order not yet initialized. Called during early boot?
    [    0.000000][    T0] cma: Failed to reserve 1024 MiB

This patch moves these CMA reservations to arch_mm_preinit(), which runs in mm_core_init() (after pageblock_order is initialized) but before memblock moves the free memory to the buddy allocator.

Fixes: 4267739 ("arch, mm: consolidate initialization of SPARSE memory model")
Suggested-by: Mike Rapoport <[email protected]>
Reported-and-tested-by: Sourabh Jain <[email protected]>
Closes: https://lore.kernel.org/linuxppc-dev/[email protected]/
Signed-off-by: Ritesh Harjani (IBM) <[email protected]>
Tested-by: Dan Horák <[email protected]>
Signed-off-by: Madhavan Srinivasan <[email protected]>
Link: https://patch.msgid.link/6e532cf0db5be99afbe20eed699163d5e86cd71f.1772303986.git.ritesh.list@gmail.com
1 parent 35e4f2a commit 0a8321d

2 files changed

Lines changed: 14 additions & 10 deletions


arch/powerpc/kernel/setup-common.c

Lines changed: 0 additions & 10 deletions
@@ -35,7 +35,6 @@
 #include <linux/of_irq.h>
 #include <linux/hugetlb.h>
 #include <linux/pgtable.h>
-#include <asm/kexec.h>
 #include <asm/io.h>
 #include <asm/paca.h>
 #include <asm/processor.h>
@@ -995,15 +994,6 @@ void __init setup_arch(char **cmdline_p)
 
 	initmem_init();
 
-	/*
-	 * Reserve large chunks of memory for use by CMA for kdump, fadump, KVM and
-	 * hugetlb. These must be called after initmem_init(), so that
-	 * pageblock_order is initialised.
-	 */
-	fadump_cma_init();
-	kdump_cma_reserve();
-	kvm_cma_reserve();
-
 	early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);
 
 	if (ppc_md.setup_arch)

arch/powerpc/mm/mem.c

Lines changed: 14 additions & 0 deletions
@@ -30,6 +30,10 @@
 #include <asm/setup.h>
 #include <asm/fixmap.h>
 
+#include <asm/fadump.h>
+#include <asm/kexec.h>
+#include <asm/kvm_ppc.h>
+
 #include <mm/mmu_decl.h>
 
 unsigned long long memory_limit __initdata;
@@ -268,6 +272,16 @@ void __init paging_init(void)
 
 void __init arch_mm_preinit(void)
 {
+
+	/*
+	 * Reserve large chunks of memory for use by CMA for kdump, fadump, KVM
+	 * and hugetlb. These must be called after pageblock_order is
+	 * initialised.
+	 */
+	fadump_cma_init();
+	kdump_cma_reserve();
+	kvm_cma_reserve();
+
 	/*
 	 * book3s is limited to 16 page sizes due to encoding this in
 	 * a 4-bit field for slices.
