
Commit 6de88f3

Merge branch 'for-6.17/block' into for-next
* for-6.17/block:
  nvme-pci: try function level reset on init failure
  nvmet: pci-epf: Do not complete commands twice if nvmet_req_init() fails
  nvme-tcp: log TLS handshake failures at error level
  docs: nvme: fix grammar in nvme-pci-endpoint-target.rst
  nvme: fix typo in status code constant for self-test in progress
  nvmet: remove redundant assignment of error code in nvmet_ns_enable()
  nvme: fix incorrect variable in io cqes error message
  nvme: fix multiple spelling and grammar issues in host drivers
  md/raid10: fix set but not used variable in sync_request_write()
  md: allow removing faulty rdev during resync
  md/raid5: unset WQ_CPU_INTENSIVE for raid5 unbound workqueue
  md: remove/add redundancy group only in level change
  md: Don't clear MD_CLOSING until mddev is freed
  md: call del_gendisk in control path
2 parents 6527a8e + c20413b commit 6de88f3

18 files changed

Lines changed: 143 additions & 79 deletions


Documentation/nvme/nvme-pci-endpoint-target.rst

Lines changed: 11 additions & 11 deletions
@@ -6,21 +6,21 @@ NVMe PCI Endpoint Function Target
 
 :Author: Damien Le Moal <[email protected]>
 
-The NVMe PCI endpoint function target driver implements a NVMe PCIe controller
-using a NVMe fabrics target controller configured with the PCI transport type.
+The NVMe PCI endpoint function target driver implements an NVMe PCIe controller
+using an NVMe fabrics target controller configured with the PCI transport type.
 
 Overview
 ========
 
-The NVMe PCI endpoint function target driver allows exposing a NVMe target
+The NVMe PCI endpoint function target driver allows exposing an NVMe target
 controller over a PCIe link, thus implementing an NVMe PCIe device similar to a
 regular M.2 SSD. The target controller is created in the same manner as when
 using NVMe over fabrics: the controller represents the interface to an NVMe
 subsystem using a port. The port transfer type must be configured to be
 "pci". The subsystem can be configured to have namespaces backed by regular
 files or block devices, or can use NVMe passthrough to expose to the PCI host an
-existing physical NVMe device or a NVMe fabrics host controller (e.g. a NVMe TCP
-host controller).
+existing physical NVMe device or an NVMe fabrics host controller (e.g. a NVMe
+TCP host controller).
 
 The NVMe PCI endpoint function target driver relies as much as possible on the
 NVMe target core code to parse and execute NVMe commands submitted by the PCIe
@@ -181,10 +181,10 @@ Creating an NVMe endpoint device is a two step process. First, an NVMe target
 subsystem and port must be defined. Second, the NVMe PCI endpoint device must
 be setup and bound to the subsystem and port created.
 
-Creating a NVMe Subsystem and Port
-----------------------------------
+Creating an NVMe Subsystem and Port
+-----------------------------------
 
-Details about how to configure a NVMe target subsystem and port are outside the
+Details about how to configure an NVMe target subsystem and port are outside the
 scope of this document. The following only provides a simple example of a port
 and subsystem with a single namespace backed by a null_blk device.
@@ -234,8 +234,8 @@ Finally, create the target port and link it to the subsystem::
 
     # ln -s /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn \
          /sys/kernel/config/nvmet/ports/1/subsystems/nvmepf.0.nqn
 
-Creating a NVMe PCI Endpoint Device
------------------------------------
+Creating an NVMe PCI Endpoint Device
+------------------------------------
 
 With the NVMe target subsystem and port ready for use, the NVMe PCI endpoint
 device can now be created and enabled. The NVMe PCI endpoint target driver
@@ -303,7 +303,7 @@ device controller::
 
     nvmet_pci_epf nvmet_pci_epf.0: Enabling controller
 
-On the host side, the NVMe PCI endpoint function target device will is
+On the host side, the NVMe PCI endpoint function target device is
 discoverable as a PCI device, with the vendor ID and device ID as configured::
 
     # lspci -n

drivers/md/md.c

Lines changed: 43 additions & 30 deletions
@@ -636,9 +636,6 @@ static void __mddev_put(struct mddev *mddev)
 	    mddev->ctime || mddev->hold_active)
 		return;
 
-	/* Array is not configured at all, and not held active, so destroy it */
-	set_bit(MD_DELETED, &mddev->flags);
-
 	/*
 	 * Call queue_work inside the spinlock so that flush_workqueue() after
 	 * mddev_find will succeed in waiting for the work to be done.
@@ -873,6 +870,16 @@ void mddev_unlock(struct mddev *mddev)
 			kobject_del(&rdev->kobj);
 		export_rdev(rdev, mddev);
 	}
+
+	/* Call del_gendisk after release reconfig_mutex to avoid
+	 * deadlock (e.g. call del_gendisk under the lock and an
+	 * access to sysfs files waits the lock)
+	 * And MD_DELETED is only used for md raid which is set in
+	 * do_md_stop. dm raid only uses md_stop to stop. So dm raid
+	 * doesn't need to check MD_DELETED when getting reconfig lock
+	 */
+	if (test_bit(MD_DELETED, &mddev->flags))
+		del_gendisk(mddev->gendisk);
 }
 EXPORT_SYMBOL_GPL(mddev_unlock);
 
@@ -5774,32 +5781,37 @@ md_attr_store(struct kobject *kobj, struct attribute *attr,
 	struct md_sysfs_entry *entry = container_of(attr, struct md_sysfs_entry, attr);
 	struct mddev *mddev = container_of(kobj, struct mddev, kobj);
 	ssize_t rv;
+	struct kernfs_node *kn = NULL;
 
 	if (!entry->store)
 		return -EIO;
 	if (!capable(CAP_SYS_ADMIN))
 		return -EACCES;
+
+	if (entry->store == array_state_store && cmd_match(page, "clear"))
+		kn = sysfs_break_active_protection(kobj, attr);
+
 	spin_lock(&all_mddevs_lock);
 	if (!mddev_get(mddev)) {
 		spin_unlock(&all_mddevs_lock);
+		if (kn)
+			sysfs_unbreak_active_protection(kn);
 		return -EBUSY;
 	}
 	spin_unlock(&all_mddevs_lock);
 	rv = entry->store(mddev, page, length);
 	mddev_put(mddev);
+
+	if (kn)
+		sysfs_unbreak_active_protection(kn);
+
 	return rv;
 }
 
 static void md_kobj_release(struct kobject *ko)
 {
 	struct mddev *mddev = container_of(ko, struct mddev, kobj);
 
-	if (mddev->sysfs_state)
-		sysfs_put(mddev->sysfs_state);
-	if (mddev->sysfs_level)
-		sysfs_put(mddev->sysfs_level);
-
-	del_gendisk(mddev->gendisk);
 	put_disk(mddev->gendisk);
 }
@@ -6413,15 +6425,10 @@ static void md_clean(struct mddev *mddev)
 	mddev->persistent = 0;
 	mddev->level = LEVEL_NONE;
 	mddev->clevel[0] = 0;
-	/*
-	 * Don't clear MD_CLOSING, or mddev can be opened again.
-	 * 'hold_active != 0' means mddev is still in the creation
-	 * process and will be used later.
-	 */
-	if (mddev->hold_active)
-		mddev->flags = 0;
-	else
-		mddev->flags &= BIT_ULL_MASK(MD_CLOSING);
+	/* if UNTIL_STOP is set, it's cleared here */
+	mddev->hold_active = 0;
+	/* Don't clear MD_CLOSING, or mddev can be opened again. */
+	mddev->flags &= BIT_ULL_MASK(MD_CLOSING);
 	mddev->sb_flags = 0;
 	mddev->ro = MD_RDWR;
 	mddev->metadata_type[0] = 0;
@@ -6516,8 +6523,6 @@ static void __md_stop(struct mddev *mddev)
 	if (mddev->private)
 		pers->free(mddev, mddev->private);
 	mddev->private = NULL;
-	if (pers->sync_request && mddev->to_remove == NULL)
-		mddev->to_remove = &md_redundancy_group;
 	put_pers(pers);
 	clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
 
@@ -6646,10 +6651,8 @@ static int do_md_stop(struct mddev *mddev, int mode)
 		mddev->bitmap_info.offset = 0;
 
 		export_array(mddev);
-
 		md_clean(mddev);
-		if (mddev->hold_active == UNTIL_STOP)
-			mddev->hold_active = 0;
+		set_bit(MD_DELETED, &mddev->flags);
 	}
 	md_new_event();
 	sysfs_notify_dirent_safe(mddev->sysfs_state);
@@ -9456,17 +9459,11 @@ static bool md_spares_need_change(struct mddev *mddev)
 	return false;
 }
 
-static int remove_and_add_spares(struct mddev *mddev,
-				 struct md_rdev *this)
+static int remove_spares(struct mddev *mddev, struct md_rdev *this)
 {
 	struct md_rdev *rdev;
-	int spares = 0;
 	int removed = 0;
 
-	if (this && test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
-		/* Mustn't remove devices when resync thread is running */
-		return 0;
-
 	rdev_for_each(rdev, mddev) {
 		if ((this == NULL || rdev == this) && rdev_removeable(rdev) &&
 		    !mddev->pers->hot_remove_disk(mddev, rdev)) {
@@ -9480,6 +9477,21 @@ static int remove_and_add_spares(struct mddev *mddev,
 	if (removed && mddev->kobj.sd)
 		sysfs_notify_dirent_safe(mddev->sysfs_degraded);
 
+	return removed;
+}
+
+static int remove_and_add_spares(struct mddev *mddev,
+				 struct md_rdev *this)
+{
+	struct md_rdev *rdev;
+	int spares = 0;
+	int removed = 0;
+
+	if (this && test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
+		/* Mustn't remove devices when resync thread is running */
+		return 0;
+
+	removed = remove_spares(mddev, this);
 	if (this && removed)
 		goto no_add;
 
@@ -9522,6 +9534,7 @@ static bool md_choose_sync_action(struct mddev *mddev, int *spares)
 
 	/* Check if resync is in progress. */
 	if (mddev->recovery_cp < MaxSector) {
+		remove_spares(mddev, NULL);
 		set_bit(MD_RECOVERY_SYNC, &mddev->recovery);
 		clear_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
 		return true;

drivers/md/md.h

Lines changed: 24 additions & 2 deletions
@@ -700,11 +700,26 @@ static inline bool reshape_interrupted(struct mddev *mddev)
 
 static inline int __must_check mddev_lock(struct mddev *mddev)
 {
-	return mutex_lock_interruptible(&mddev->reconfig_mutex);
+	int ret;
+
+	ret = mutex_lock_interruptible(&mddev->reconfig_mutex);
+
+	/* MD_DELETED is set in do_md_stop with reconfig_mutex.
+	 * So check it here.
+	 */
+	if (!ret && test_bit(MD_DELETED, &mddev->flags)) {
+		ret = -ENODEV;
+		mutex_unlock(&mddev->reconfig_mutex);
+	}
+
+	return ret;
 }
 
 /* Sometimes we need to take the lock in a situation where
  * failure due to interrupts is not acceptable.
+ * It doesn't need to check MD_DELETED here, the owner which
+ * holds the lock here can't be stopped. And all paths can't
+ * call this function after do_md_stop.
  */
 static inline void mddev_lock_nointr(struct mddev *mddev)
 {
@@ -713,7 +728,14 @@ static inline void mddev_lock_nointr(struct mddev *mddev)
 
 static inline int mddev_trylock(struct mddev *mddev)
 {
-	return mutex_trylock(&mddev->reconfig_mutex);
+	int ret;
+
+	ret = mutex_trylock(&mddev->reconfig_mutex);
+	if (!ret && test_bit(MD_DELETED, &mddev->flags)) {
+		ret = -ENODEV;
+		mutex_unlock(&mddev->reconfig_mutex);
+	}
+	return ret;
 }
 extern void mddev_unlock(struct mddev *mddev);

drivers/md/raid10.c

Lines changed: 0 additions & 3 deletions
@@ -2446,15 +2446,12 @@ static void sync_request_write(struct mddev *mddev, struct r10bio *r10_bio)
 	 * that are active
 	 */
 	for (i = 0; i < conf->copies; i++) {
-		int d;
-
 		tbio = r10_bio->devs[i].repl_bio;
 		if (!tbio || !tbio->bi_end_io)
 			continue;
 		if (r10_bio->devs[i].bio->bi_end_io != end_sync_write
 		    && r10_bio->devs[i].bio != fbio)
 			bio_copy_data(tbio, fbio);
-		d = r10_bio->devs[i].devnum;
 		atomic_inc(&r10_bio->remaining);
 		submit_bio_noacct(tbio);
 	}

drivers/md/raid5.c

Lines changed: 1 addition & 1 deletion
@@ -9040,7 +9040,7 @@ static int __init raid5_init(void)
 	int ret;
 
 	raid5_wq = alloc_workqueue("raid5wq",
-		WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE|WQ_SYSFS, 0);
+		WQ_UNBOUND|WQ_MEM_RECLAIM|WQ_SYSFS, 0);
 	if (!raid5_wq)
 		return -ENOMEM;

drivers/nvme/host/apple.c

Lines changed: 2 additions & 2 deletions
@@ -301,8 +301,8 @@ static void apple_nvme_submit_cmd(struct apple_nvme_queue *q,
 	memcpy(&q->sqes[tag], cmd, sizeof(*cmd));
 
 	/*
-	 * This lock here doesn't make much sense at a first glace but
-	 * removing it will result in occasional missed completetion
+	 * This lock here doesn't make much sense at a first glance but
+	 * removing it will result in occasional missed completion
	 * interrupts even though the commands still appear on the CQ.
	 * It's unclear why this happens but our best guess is that
	 * there is a bug in the firmware triggered when a new command

drivers/nvme/host/constants.c

Lines changed: 2 additions & 2 deletions
@@ -133,7 +133,7 @@ static const char * const nvme_statuses[] = {
 	[NVME_SC_NS_NOT_ATTACHED] = "Namespace Not Attached",
 	[NVME_SC_THIN_PROV_NOT_SUPP] = "Thin Provisioning Not Supported",
 	[NVME_SC_CTRL_LIST_INVALID] = "Controller List Invalid",
-	[NVME_SC_SELT_TEST_IN_PROGRESS] = "Device Self-test In Progress",
+	[NVME_SC_SELF_TEST_IN_PROGRESS] = "Device Self-test In Progress",
 	[NVME_SC_BP_WRITE_PROHIBITED] = "Boot Partition Write Prohibited",
 	[NVME_SC_CTRL_ID_INVALID] = "Invalid Controller Identifier",
 	[NVME_SC_SEC_CTRL_STATE_INVALID] = "Invalid Secondary Controller State",
@@ -145,7 +145,7 @@ static const char * const nvme_statuses[] = {
 	[NVME_SC_BAD_ATTRIBUTES] = "Conflicting Attributes",
 	[NVME_SC_INVALID_PI] = "Invalid Protection Information",
 	[NVME_SC_READ_ONLY] = "Attempted Write to Read Only Range",
-	[NVME_SC_CMD_SIZE_LIM_EXCEEDED ] = "Command Size Limits Exceeded",
+	[NVME_SC_CMD_SIZE_LIM_EXCEEDED] = "Command Size Limits Exceeded",
 	[NVME_SC_ZONE_BOUNDARY_ERROR] = "Zoned Boundary Error",
 	[NVME_SC_ZONE_FULL] = "Zone Is Full",
 	[NVME_SC_ZONE_READ_ONLY] = "Zone Is Read Only",

drivers/nvme/host/core.c

Lines changed: 1 addition & 1 deletion
@@ -4295,7 +4295,7 @@ static void nvme_scan_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 	}
 
 	/*
-	 * If available try to use the Command Set Idependent Identify Namespace
+	 * If available try to use the Command Set Independent Identify Namespace
	 * data structure to find all the generic information that is needed to
	 * set up a namespace. If not fall back to the legacy version.
	 */

drivers/nvme/host/fc.c

Lines changed: 5 additions & 5 deletions
@@ -899,7 +899,7 @@ EXPORT_SYMBOL_GPL(nvme_fc_set_remoteport_devloss);
  *	may crash.
  *
  * As such:
- * Wrapper all the dma routines and check the dev pointer.
+ * Wrap all the dma routines and check the dev pointer.
  *
  * If simple mappings (return just a dma address, we'll noop them,
  * returning a dma address of 0.
@@ -1955,8 +1955,8 @@ nvme_fc_fcpio_done(struct nvmefc_fcp_req *req)
 	}
 
 	/*
-	 * For the linux implementation, if we have an unsucceesful
-	 * status, they blk-mq layer can typically be called with the
+	 * For the linux implementation, if we have an unsuccessful
+	 * status, the blk-mq layer can typically be called with the
	 * non-zero status and the content of the cqe isn't important.
	 */
	if (status)
@@ -2429,7 +2429,7 @@ static bool nvme_fc_terminate_exchange(struct request *req, void *data)
 
 /*
  * This routine runs through all outstanding commands on the association
- * and aborts them. This routine is typically be called by the
+ * and aborts them. This routine is typically called by the
  * delete_association routine. It is also called due to an error during
  * reconnect. In that scenario, it is most likely a command that initializes
  * the controller, including fabric Connect commands on io queues, that
@@ -2622,7 +2622,7 @@ nvme_fc_unmap_data(struct nvme_fc_ctrl *ctrl, struct request *rq,
  * as part of the exchange. The CQE is the last thing for the io,
  * which is transferred (explicitly or implicitly) with the RSP IU
  * sent on the exchange. After the CQE is received, the FC exchange is
- * terminaed and the Exchange may be used on a different io.
+ * terminated and the Exchange may be used on a different io.
  *
  * The transport to LLDD api has the transport making a request for a
  * new fcp io request to the LLDD. The LLDD then allocates a FC exchange

drivers/nvme/host/nvme.h

Lines changed: 1 addition & 1 deletion
@@ -69,7 +69,7 @@ enum nvme_quirks {
 	NVME_QUIRK_IDENTIFY_CNS			= (1 << 1),
 
 	/*
-	 * The controller deterministically returns O's on reads to
+	 * The controller deterministically returns 0's on reads to
	 * logical blocks that deallocate was called on.
	 */
	NVME_QUIRK_DEALLOCATE_ZEROES		= (1 << 2),
