Commit 142261c

ps-ushankar authored and kawasaki committed
ublk: speed up ublk server exit handling
Recently, we've observed a few cases where a ublk server is able to complete restart more quickly than the driver can process the exit of the previous ublk server. The new ublk server comes up, attempts recovery of the preexisting ublk devices, and observes them still in state UBLK_S_DEV_LIVE. While this is possible due to the asynchronous nature of io_uring cleanup and should therefore be handled properly in the ublk server, it is still preferable to make ublk server exit handling faster if possible, as we should strive for it to not be a limiting factor in how fast a ublk server can restart and provide service again.

Analysis of the issue showed that the vast majority of the time spent in handling the ublk server exit was in calls to blk_mq_quiesce_queue, which is essentially just a (relatively expensive) call to synchronize_rcu. The ublk server exit path currently issues an unnecessarily large number of calls to blk_mq_quiesce_queue, for two reasons:

1. It tries to call blk_mq_quiesce_queue once per ublk_queue. However,
   blk_mq_quiesce_queue targets the request_queue of the underlying ublk
   device, of which there is only one. So the number of calls is larger
   than necessary by a factor of nr_hw_queues.

2. In practice, it calls blk_mq_quiesce_queue _more_ than once per
   ublk_queue. This is because of a data race where we read
   ubq->canceling without any locking when deciding if we should call
   ublk_start_cancel. It is thus possible for two calls to
   ublk_uring_cmd_cancel_fn against the same ublk_queue to both call
   ublk_start_cancel against the same ublk_queue.

Fix this by making the "canceling" flag a per-device state. This actually matches the existing code better, as there are several places where the flag is set or cleared for all queues simultaneously, and there is the general expectation that cancellation corresponds with ublk server exit. This per-device canceling flag is then checked under a (new) lock (addressing the data race (2) above), and the queue is only quiesced if it is cleared (addressing (1) above). The result is just one call to blk_mq_quiesce_queue per ublk device.

To minimize the number of cache lines that are accessed in the hot path, the per-queue canceling flag is kept. The values of the per-device canceling flag and all per-queue canceling flags should always match.

In our setup, where one ublk server handles I/O for 128 ublk devices, each having 24 hardware queues of depth 4096, here are the results before and after this patch, where teardown time is measured from the first call to io_ring_ctx_wait_and_kill to the return from the last ublk_ch_release:

                                           before    after
number of calls to blk_mq_quiesce_queue:     6469      256
teardown time:                             11.14s    2.44s

There are still some potential optimizations here, but this takes care of a big chunk of the ublk server exit handling delay.

Signed-off-by: Uday Shankar <[email protected]>
Reviewed-by: Ming Lei <[email protected]>
Reviewed-by: Caleb Sander Mateos <[email protected]>
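To make the race in point (2) concrete, here is a minimal userspace sketch of the serialized check-then-set pattern the patch adopts. The names (canceling, cancel_mutex, start_cancel) mirror the patch, but this is illustrative pthread code, not the driver itself; quiesce_calls stands in for the expensive blk_mq_quiesce_queue/synchronize_rcu step.

    /*
     * Illustrative sketch only -- userspace pthread code, not ublk_drv.c.
     * Racy version (the old behavior):
     *
     *     if (!ubq->canceling)             // unlocked read: two racing
     *             ublk_start_cancel(ubq);  // callers can both get here
     *
     * Fixed pattern: check and set the per-device flag under one mutex,
     * so the expensive quiesce step runs at most once per device.
     */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool canceling;                  /* per-device "canceling" flag */
    static pthread_mutex_t cancel_mutex = PTHREAD_MUTEX_INITIALIZER;
    static int quiesce_calls;               /* stand-in for blk_mq_quiesce_queue */

    static void start_cancel(void)
    {
            pthread_mutex_lock(&cancel_mutex);
            if (!canceling) {
                    quiesce_calls++;        /* the expensive step, done once */
                    canceling = true;
            }
            pthread_mutex_unlock(&cancel_mutex);
    }

    static void *cancel_fn(void *arg)
    {
            (void)arg;
            start_cancel();                 /* one racing caller per hw queue */
            return NULL;
    }

    int main(void)
    {
            pthread_t t[24];
            int i;

            for (i = 0; i < 24; i++)
                    pthread_create(&t[i], NULL, cancel_fn, NULL);
            for (i = 0; i < 24; i++)
                    pthread_join(t[i], NULL);
            printf("quiesce calls: %d\n", quiesce_calls);   /* prints 1 */
            return 0;
    }

Without the mutex, two threads can both observe canceling == false and both take the expensive path; with the check and the update under one lock, the quiesce step runs exactly once no matter how many cancel callbacks race in.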
1 parent 9a69d4f commit 142261c

1 file changed: drivers/block/ublk_drv.c
21 additions, 15 deletions
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -216,6 +216,8 @@ struct ublk_device {
         struct completion       completion;
         unsigned int            nr_queues_ready;
         unsigned int            nr_privileged_daemon;
+        struct mutex            cancel_mutex;
+        bool                    canceling;
 };
 
 /* header of ublk_params */
@@ -1578,6 +1580,7 @@ static int ublk_ch_release(struct inode *inode, struct file *filp)
          * All requests may be inflight, so ->canceling may not be set, set
          * it now.
          */
+        ub->canceling = true;
         for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
                 struct ublk_queue *ubq = ublk_get_queue(ub, i);
 
@@ -1706,23 +1709,18 @@ static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq)
         }
 }
 
-/* Must be called when queue is frozen */
-static void ublk_mark_queue_canceling(struct ublk_queue *ubq)
-{
-        spin_lock(&ubq->cancel_lock);
-        if (!ubq->canceling)
-                ubq->canceling = true;
-        spin_unlock(&ubq->cancel_lock);
-}
-
-static void ublk_start_cancel(struct ublk_queue *ubq)
+static void ublk_start_cancel(struct ublk_device *ub)
 {
-        struct ublk_device *ub = ubq->dev;
         struct gendisk *disk = ublk_get_disk(ub);
+        int i;
 
         /* Our disk has been dead */
         if (!disk)
                 return;
+
+        mutex_lock(&ub->cancel_mutex);
+        if (ub->canceling)
+                goto out;
         /*
          * Now we are serialized with ublk_queue_rq()
          *
@@ -1731,8 +1729,12 @@ static void ublk_start_cancel(struct ublk_queue *ubq)
          * touch completed uring_cmd
          */
         blk_mq_quiesce_queue(disk->queue);
-        ublk_mark_queue_canceling(ubq);
+        ub->canceling = true;
+        for (i = 0; i < ub->dev_info.nr_hw_queues; i++)
+                ublk_get_queue(ub, i)->canceling = true;
         blk_mq_unquiesce_queue(disk->queue);
+out:
+        mutex_unlock(&ub->cancel_mutex);
         ublk_put_disk(disk);
 }
 
@@ -1805,8 +1807,7 @@ static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd,
         if (WARN_ON_ONCE(task && task != io->task))
                 return;
 
-        if (!ubq->canceling)
-                ublk_start_cancel(ubq);
+        ublk_start_cancel(ubq->dev);
 
         WARN_ON_ONCE(io->cmd != cmd);
         ublk_cancel_cmd(ubq, pdu->tag, issue_flags);
@@ -1933,6 +1934,7 @@ static void ublk_reset_io_flags(struct ublk_device *ub)
                 ubq->canceling = false;
                 ubq->fail_io = false;
         }
+        ub->canceling = false;
 }
 
 /* device can only be started after all IOs are ready */
@@ -2580,6 +2582,7 @@ static void ublk_cdev_rel(struct device *dev)
         ublk_deinit_queues(ub);
         ublk_free_dev_number(ub);
         mutex_destroy(&ub->mutex);
+        mutex_destroy(&ub->cancel_mutex);
         kfree(ub);
 }
 
@@ -2933,6 +2936,7 @@ static int ublk_ctrl_add_dev(const struct ublksrv_ctrl_cmd *header)
                 goto out_unlock;
         mutex_init(&ub->mutex);
         spin_lock_init(&ub->lock);
+        mutex_init(&ub->cancel_mutex);
 
         ret = ublk_alloc_dev_number(ub, header->dev_id);
         if (ret < 0)
@@ -3003,6 +3007,7 @@ static int ublk_ctrl_add_dev(const struct ublksrv_ctrl_cmd *header)
         ublk_free_dev_number(ub);
 out_free_ub:
         mutex_destroy(&ub->mutex);
+        mutex_destroy(&ub->cancel_mutex);
         kfree(ub);
 out_unlock:
         mutex_unlock(&ublk_ctl_mutex);
@@ -3357,8 +3362,9 @@ static int ublk_ctrl_quiesce_dev(struct ublk_device *ub,
         if (ub->dev_info.state != UBLK_S_DEV_LIVE)
                 goto put_disk;
 
-        /* Mark all queues as canceling */
+        /* Mark the device as canceling */
         blk_mq_quiesce_queue(disk->queue);
+        ub->canceling = true;
         for (i = 0; i < ub->dev_info.nr_hw_queues; i++) {
                 struct ublk_queue *ubq = ublk_get_queue(ub, i);
 