
Commit 511092c

johnpgarry authored and kawasaki committed
block: use chunk_sectors when evaluating stacked atomic write limits
The atomic write unit max value is limited by any stacked device stripe size. It is required that the atomic write unit is a power-of-2 factor of the stripe size.

Currently we use the io_min limit to hold the stripe size, and check for io_min <= SECTOR_SIZE when deciding if we have a striped stacked device. Nilay reports that this causes a problem when the physical block size is greater than SECTOR_SIZE [0].

Furthermore, io_min may be mutated when stacking devices, and this makes it a poor candidate to hold the stripe size. An example of when io_min may change is when it is less than the physical block size.

Use chunk_sectors to hold the stripe size, which is more appropriate.

[0] https://lore.kernel.org/linux-block/[email protected]/T/#mecca17129f72811137d3c2f1e477634e77f06781

Reviewed-by: Nilay Shroff <[email protected]>
Tested-by: Nilay Shroff <[email protected]>
Signed-off-by: John Garry <[email protected]>
1 parent 15cd6e8 commit 511092c

1 file changed: 33 additions & 23 deletions

block/blk-settings.c
@@ -594,41 +594,50 @@ static bool blk_stack_atomic_writes_boundary_head(struct queue_limits *t,
 	return true;
 }
 
-/* Check stacking of first bottom device */
-static bool blk_stack_atomic_writes_head(struct queue_limits *t,
-		struct queue_limits *b)
+static void blk_stack_atomic_writes_chunk_sectors(struct queue_limits *t)
 {
-	if (b->atomic_write_hw_boundary &&
-	    !blk_stack_atomic_writes_boundary_head(t, b))
-		return false;
+	unsigned int chunk_bytes;
 
-	if (t->io_min <= SECTOR_SIZE) {
-		/* No chunk sectors, so use bottom device values directly */
-		t->atomic_write_hw_unit_max = b->atomic_write_hw_unit_max;
-		t->atomic_write_hw_unit_min = b->atomic_write_hw_unit_min;
-		t->atomic_write_hw_max = b->atomic_write_hw_max;
-		return true;
-	}
+	if (!t->chunk_sectors)
+		return;
+
+	/*
+	 * If chunk sectors is so large that its value in bytes overflows
+	 * UINT_MAX, then just shift it down so it definitely will fit.
+	 * We don't support atomic writes of such a large size anyway.
+	 */
+	if (check_shl_overflow(t->chunk_sectors, SECTOR_SHIFT, &chunk_bytes))
+		chunk_bytes = t->chunk_sectors;
 
 	/*
 	 * Find values for limits which work for chunk size.
 	 * b->atomic_write_hw_unit_{min, max} may not be aligned with chunk
-	 * size (t->io_min), as chunk size is not restricted to a power-of-2.
+	 * size, as the chunk size is not restricted to a power-of-2.
	 * So we need to find highest power-of-2 which works for the chunk
	 * size.
-	 * As an example scenario, we could have b->unit_max = 16K and
-	 * t->io_min = 24K. For this case, reduce t->unit_max to a value
-	 * aligned with both limits, i.e. 8K in this example.
+	 * As an example scenario, we could have t->unit_max = 16K and
+	 * t->chunk_sectors = 24KB. For this case, reduce t->unit_max to a
+	 * value aligned with both limits, i.e. 8K in this example.
 	 */
-	t->atomic_write_hw_unit_max = b->atomic_write_hw_unit_max;
-	while (t->io_min % t->atomic_write_hw_unit_max)
-		t->atomic_write_hw_unit_max /= 2;
+	t->atomic_write_hw_unit_max = min(t->atomic_write_hw_unit_max,
+					max_pow_of_two_factor(chunk_bytes));
 
-	t->atomic_write_hw_unit_min = min(b->atomic_write_hw_unit_min,
+	t->atomic_write_hw_unit_min = min(t->atomic_write_hw_unit_min,
 					  t->atomic_write_hw_unit_max);
-	t->atomic_write_hw_max = min(b->atomic_write_hw_max, t->io_min);
+	t->atomic_write_hw_max = min(t->atomic_write_hw_max, chunk_bytes);
+}
+
+/* Check stacking of first bottom device */
+static bool blk_stack_atomic_writes_head(struct queue_limits *t,
+		struct queue_limits *b)
+{
+	if (b->atomic_write_hw_boundary &&
+	    !blk_stack_atomic_writes_boundary_head(t, b))
+		return false;
 
+	t->atomic_write_hw_unit_max = b->atomic_write_hw_unit_max;
+	t->atomic_write_hw_unit_min = b->atomic_write_hw_unit_min;
+	t->atomic_write_hw_max = b->atomic_write_hw_max;
 	return true;
 }
 
@@ -656,6 +665,7 @@ static void blk_stack_atomic_writes_limits(struct queue_limits *t,
 
 	if (!blk_stack_atomic_writes_head(t, b))
 		goto unsupported;
+	blk_stack_atomic_writes_chunk_sectors(t);
 	return;
 
 unsupported: