
Commit b8da740

Merge tag 'nvme-6.17-2025-07-22' of git://git.infradead.org/nvme into for-6.17/block
Pull NVMe updates from Christoph:

 "- try PCIe function level reset on init failure (Keith Busch)
  - log TLS handshake failures at error level (Maurizio Lombardi)
  - pci-epf: do not complete commands twice if nvmet_req_init() fails
    (Rick Wertenbroek)
  - misc cleanups (Alok Tiwari)"

* tag 'nvme-6.17-2025-07-22' of git://git.infradead.org/nvme:
  nvme-pci: try function level reset on init failure
  nvmet: pci-epf: Do not complete commands twice if nvmet_req_init() fails
  nvme-tcp: log TLS handshake failures at error level
  docs: nvme: fix grammar in nvme-pci-endpoint-target.rst
  nvme: fix typo in status code constant for self-test in progress
  nvmet: remove redundant assignment of error code in nvmet_ns_enable()
  nvme: fix incorrect variable in io cqes error message
  nvme: fix multiple spelling and grammar issues in host drivers
2 parents 675f940 + 5b2c214 commit b8da740

14 files changed

Lines changed: 75 additions & 43 deletions


Documentation/nvme/nvme-pci-endpoint-target.rst

Lines changed: 11 additions & 11 deletions
@@ -6,21 +6,21 @@ NVMe PCI Endpoint Function Target
 
 :Author: Damien Le Moal <[email protected]>
 
-The NVMe PCI endpoint function target driver implements a NVMe PCIe controller
-using a NVMe fabrics target controller configured with the PCI transport type.
+The NVMe PCI endpoint function target driver implements an NVMe PCIe controller
+using an NVMe fabrics target controller configured with the PCI transport type.
 
 Overview
 ========
 
-The NVMe PCI endpoint function target driver allows exposing a NVMe target
+The NVMe PCI endpoint function target driver allows exposing an NVMe target
 controller over a PCIe link, thus implementing an NVMe PCIe device similar to a
 regular M.2 SSD. The target controller is created in the same manner as when
 using NVMe over fabrics: the controller represents the interface to an NVMe
 subsystem using a port. The port transfer type must be configured to be
 "pci". The subsystem can be configured to have namespaces backed by regular
 files or block devices, or can use NVMe passthrough to expose to the PCI host an
-existing physical NVMe device or a NVMe fabrics host controller (e.g. a NVMe TCP
-host controller).
+existing physical NVMe device or an NVMe fabrics host controller (e.g. a NVMe
+TCP host controller).
 
 The NVMe PCI endpoint function target driver relies as much as possible on the
 NVMe target core code to parse and execute NVMe commands submitted by the PCIe
@@ -181,10 +181,10 @@ Creating an NVMe endpoint device is a two step process. First, an NVMe target
 subsystem and port must be defined. Second, the NVMe PCI endpoint device must
 be setup and bound to the subsystem and port created.
 
-Creating a NVMe Subsystem and Port
-----------------------------------
+Creating an NVMe Subsystem and Port
+-----------------------------------
 
-Details about how to configure a NVMe target subsystem and port are outside the
+Details about how to configure an NVMe target subsystem and port are outside the
 scope of this document. The following only provides a simple example of a port
 and subsystem with a single namespace backed by a null_blk device.
 
@@ -234,8 +234,8 @@ Finally, create the target port and link it to the subsystem::
    # ln -s /sys/kernel/config/nvmet/subsystems/nvmepf.0.nqn \
         /sys/kernel/config/nvmet/ports/1/subsystems/nvmepf.0.nqn
 
-Creating a NVMe PCI Endpoint Device
------------------------------------
+Creating an NVMe PCI Endpoint Device
+------------------------------------
 
 With the NVMe target subsystem and port ready for use, the NVMe PCI endpoint
 device can now be created and enabled. The NVMe PCI endpoint target driver
@@ -303,7 +303,7 @@ device controller::
 
    nvmet_pci_epf nvmet_pci_epf.0: Enabling controller
 
-On the host side, the NVMe PCI endpoint function target device will is
+On the host side, the NVMe PCI endpoint function target device is
 discoverable as a PCI device, with the vendor ID and device ID as configured::
 
    # lspci -n

drivers/nvme/host/apple.c

Lines changed: 2 additions & 2 deletions
@@ -301,8 +301,8 @@ static void apple_nvme_submit_cmd(struct apple_nvme_queue *q,
        memcpy(&q->sqes[tag], cmd, sizeof(*cmd));
 
        /*
-        * This lock here doesn't make much sense at a first glace but
-        * removing it will result in occasional missed completetion
+        * This lock here doesn't make much sense at a first glance but
+        * removing it will result in occasional missed completion
         * interrupts even though the commands still appear on the CQ.
         * It's unclear why this happens but our best guess is that
         * there is a bug in the firmware triggered when a new command

drivers/nvme/host/constants.c

Lines changed: 2 additions & 2 deletions
@@ -133,7 +133,7 @@ static const char * const nvme_statuses[] = {
        [NVME_SC_NS_NOT_ATTACHED] = "Namespace Not Attached",
        [NVME_SC_THIN_PROV_NOT_SUPP] = "Thin Provisioning Not Supported",
        [NVME_SC_CTRL_LIST_INVALID] = "Controller List Invalid",
-       [NVME_SC_SELT_TEST_IN_PROGRESS] = "Device Self-test In Progress",
+       [NVME_SC_SELF_TEST_IN_PROGRESS] = "Device Self-test In Progress",
        [NVME_SC_BP_WRITE_PROHIBITED] = "Boot Partition Write Prohibited",
        [NVME_SC_CTRL_ID_INVALID] = "Invalid Controller Identifier",
        [NVME_SC_SEC_CTRL_STATE_INVALID] = "Invalid Secondary Controller State",
@@ -145,7 +145,7 @@ static const char * const nvme_statuses[] = {
        [NVME_SC_BAD_ATTRIBUTES] = "Conflicting Attributes",
        [NVME_SC_INVALID_PI] = "Invalid Protection Information",
        [NVME_SC_READ_ONLY] = "Attempted Write to Read Only Range",
-       [NVME_SC_CMD_SIZE_LIM_EXCEEDED ] = "Command Size Limits Exceeded",
+       [NVME_SC_CMD_SIZE_LIM_EXCEEDED] = "Command Size Limits Exceeded",
        [NVME_SC_ZONE_BOUNDARY_ERROR] = "Zoned Boundary Error",
        [NVME_SC_ZONE_FULL] = "Zone Is Full",
        [NVME_SC_ZONE_READ_ONLY] = "Zone Is Read Only",
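For context, nvme_statuses[] is a sparse array built from designated initializers and indexed directly by status-code value, so slots between named codes stay NULL; that is why both the constant-name typo and the stray space before the closing bracket are worth fixing at the source. A minimal kernel-style sketch of the lookup such a table implies; status_to_str() is a hypothetical helper for illustration, not the driver's actual accessor:

static const char *status_to_str(unsigned short sc,
                                 const char * const *table, size_t entries)
{
        /* Sparse designated-initializer tables leave unnamed slots NULL,
         * so guard both the bounds and the entry before dereferencing.
         */
        if (sc < entries && table[sc])
                return table[sc];
        return "Unknown";
}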

drivers/nvme/host/core.c

Lines changed: 1 addition & 1 deletion
@@ -4286,7 +4286,7 @@ static void nvme_scan_ns(struct nvme_ctrl *ctrl, unsigned nsid)
        }
 
        /*
-        * If available try to use the Command Set Idependent Identify Namespace
+        * If available try to use the Command Set Independent Identify Namespace
         * data structure to find all the generic information that is needed to
         * set up a namespace. If not fall back to the legacy version.
         */

drivers/nvme/host/fc.c

Lines changed: 5 additions & 5 deletions
@@ -899,7 +899,7 @@ EXPORT_SYMBOL_GPL(nvme_fc_set_remoteport_devloss);
  * may crash.
  *
  * As such:
- * Wrapper all the dma routines and check the dev pointer.
+ * Wrap all the dma routines and check the dev pointer.
  *
  * If simple mappings (return just a dma address, we'll noop them,
  * returning a dma address of 0.
@@ -1955,8 +1955,8 @@ nvme_fc_fcpio_done(struct nvmefc_fcp_req *req)
        }
 
        /*
-        * For the linux implementation, if we have an unsucceesful
-        * status, they blk-mq layer can typically be called with the
+        * For the linux implementation, if we have an unsuccessful
+        * status, the blk-mq layer can typically be called with the
         * non-zero status and the content of the cqe isn't important.
         */
        if (status)
@@ -2429,7 +2429,7 @@ static bool nvme_fc_terminate_exchange(struct request *req, void *data)
 
 /*
  * This routine runs through all outstanding commands on the association
- * and aborts them. This routine is typically be called by the
+ * and aborts them. This routine is typically called by the
  * delete_association routine. It is also called due to an error during
  * reconnect. In that scenario, it is most likely a command that initializes
  * the controller, including fabric Connect commands on io queues, that
@@ -2622,7 +2622,7 @@ nvme_fc_unmap_data(struct nvme_fc_ctrl *ctrl, struct request *rq,
  * as part of the exchange. The CQE is the last thing for the io,
  * which is transferred (explicitly or implicitly) with the RSP IU
  * sent on the exchange. After the CQE is received, the FC exchange is
- * terminaed and the Exchange may be used on a different io.
+ * terminated and the Exchange may be used on a different io.
  *
  * The transport to LLDD api has the transport making a request for a
  * new fcp io request to the LLDD. The LLDD then allocates a FC exchange

drivers/nvme/host/nvme.h

Lines changed: 1 addition & 1 deletion
@@ -69,7 +69,7 @@ enum nvme_quirks {
        NVME_QUIRK_IDENTIFY_CNS = (1 << 1),
 
        /*
-        * The controller deterministically returns O's on reads to
+        * The controller deterministically returns 0's on reads to
         * logical blocks that deallocate was called on.
         */
        NVME_QUIRK_DEALLOCATE_ZEROES = (1 << 2),
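For context, each quirk in this enum is a single bit ORed into ctrl->quirks when a matching device is probed, and tested with a bitwise AND wherever the workaround applies. A hedged sketch of a test site; the helper name is hypothetical, though the condition mirrors how the core driver consumes NVME_QUIRK_DEALLOCATE_ZEROES:

/* Sketch only: quirk flags are tested with a plain bitwise AND. */
static bool deallocate_zeroes_reads(struct nvme_ctrl *ctrl)
{
        return ctrl->quirks & NVME_QUIRK_DEALLOCATE_ZEROES;
}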

drivers/nvme/host/pci.c

Lines changed: 23 additions & 3 deletions
@@ -2064,8 +2064,28 @@ static int nvme_pci_configure_admin_queue(struct nvme_dev *dev)
         * might be pointing at!
         */
        result = nvme_disable_ctrl(&dev->ctrl, false);
-       if (result < 0)
-               return result;
+       if (result < 0) {
+               struct pci_dev *pdev = to_pci_dev(dev->dev);
+
+               /*
+                * The NVMe Controller Reset method did not get an expected
+                * CSTS.RDY transition, so something with the device appears to
+                * be stuck. Use the lower level and bigger hammer PCIe
+                * Function Level Reset to attempt restoring the device to its
+                * initial state, and try again.
+                */
+               result = pcie_reset_flr(pdev, false);
+               if (result < 0)
+                       return result;
+
+               pci_restore_state(pdev);
+               result = nvme_disable_ctrl(&dev->ctrl, false);
+               if (result < 0)
+                       return result;
+
+               dev_info(dev->ctrl.device,
+                        "controller reset completed after pcie flr\n");
+       }
 
        result = nvme_alloc_queue(dev, 0, NVME_AQ_DEPTH);
        if (result)
@@ -2439,7 +2459,7 @@ static ssize_t cmb_show(struct device *dev, struct device_attribute *attr,
 {
        struct nvme_dev *ndev = to_nvme_dev(dev_get_drvdata(dev));
 
-       return sysfs_emit(buf, "cmbloc : x%08x\ncmbsz : x%08x\n",
+       return sysfs_emit(buf, "cmbloc : 0x%08x\ncmbsz : 0x%08x\n",
                          ndev->cmbloc, ndev->cmbsz);
 }
 static DEVICE_ATTR_RO(cmb);
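The new error path combines two stock PCI core helpers: pcie_reset_flr() with probe=false actually performs the Function Level Reset (and returns an error if the function does not advertise FLR), while pci_restore_state() reprograms config space from the copy the driver saved earlier with pci_save_state(). A condensed, hedged sketch of the retry shape; nvme_disable_with_flr_fallback() is an illustrative name, not a function the patch adds, and error logging is trimmed:

static int nvme_disable_with_flr_fallback(struct nvme_dev *dev)
{
        struct pci_dev *pdev = to_pci_dev(dev->dev);
        int ret;

        /* Spec-defined controller reset first. */
        ret = nvme_disable_ctrl(&dev->ctrl, false);
        if (ret >= 0)
                return ret;

        /* Escalate: perform a PCIe FLR (probe=false), then retry once. */
        ret = pcie_reset_flr(pdev, false);
        if (ret < 0)
                return ret;    /* e.g. -ENOTTY if the device lacks FLR */

        pci_restore_state(pdev);    /* redo BARs/MSI from saved config space */
        return nvme_disable_ctrl(&dev->ctrl, false);
}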

drivers/nvme/host/rdma.c

Lines changed: 1 addition & 1 deletion
@@ -877,7 +877,7 @@ static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl, bool new)
 
        /*
         * Only start IO queues for which we have allocated the tagset
-        * and limitted it to the available queues. On reconnects, the
+        * and limited it to the available queues. On reconnects, the
         * queue number might have changed.
         */
        nr_queues = min(ctrl->tag_set.nr_hw_queues + 1, ctrl->ctrl.queue_count);

drivers/nvme/host/tcp.c

Lines changed: 8 additions & 3 deletions
@@ -1745,9 +1745,14 @@ static int nvme_tcp_start_tls(struct nvme_ctrl *nctrl,
                        qid, ret);
                tls_handshake_cancel(queue->sock->sk);
        } else {
-               dev_dbg(nctrl->device,
-                       "queue %d: TLS handshake complete, error %d\n",
-                       qid, queue->tls_err);
+               if (queue->tls_err) {
+                       dev_err(nctrl->device,
+                               "queue %d: TLS handshake complete, error %d\n",
+                               qid, queue->tls_err);
+               } else {
+                       dev_dbg(nctrl->device,
+                               "queue %d: TLS handshake complete\n", qid);
+               }
                ret = queue->tls_err;
        }
        return ret;
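A hedged side note on the dev_err() choice: TLS handshake setup is a per-queue, low-frequency event, so an unconditional error print is fine here. In paths that can retry rapidly (e.g. reconnect loops), kernel code often reaches for the rate-limited print variant instead; the same branch written with that alternative would look like:

/* Alternative sketch, not what the patch does: dev_err_ratelimited()
 * caps repeated failure prints if the handshake can fail in a loop.
 */
if (queue->tls_err)
        dev_err_ratelimited(nctrl->device,
                            "queue %d: TLS handshake complete, error %d\n",
                            qid, queue->tls_err);
else
        dev_dbg(nctrl->device,
                "queue %d: TLS handshake complete\n", qid);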

drivers/nvme/target/core.c

Lines changed: 0 additions & 2 deletions
@@ -581,8 +581,6 @@ int nvmet_ns_enable(struct nvmet_ns *ns)
        if (ns->enabled)
                goto out_unlock;
 
-       ret = -EMFILE;
-
        ret = nvmet_bdev_ns_enable(ns);
        if (ret == -ENOTBLK)
                ret = nvmet_file_ns_enable(ns);
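The removed assignment was a dead store: ret is unconditionally overwritten by nvmet_bdev_ns_enable() before it can ever be read, so the -EMFILE value could not escape the function from that point. Reduced to the two relevant statements, the anti-pattern the cleanup deletes is:

ret = -EMFILE;                  /* dead store: never read */
ret = nvmet_bdev_ns_enable(ns); /* unconditionally reassigns ret */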
