Your configuration file is creating an NVMe-oF target containing 1 subsystem whose
namespace is a remote NVMe-oF device. You’re using the same IP/port as both the listen
address for new connections to the target and as the location of the NVMe-oF device that's
backing it. You’ve essentially created a loop where the target is connecting to itself
for its backing storage.
Change the [Nvme] section to point to a local NVMe SSD (the line you have commented out in
that section is a good example) and it should start working.
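For example, assuming the PCI address from your original mail (0000:07:00.0; adjust it to match your system), the [Nvme] section could look something like this:
  [Nvme]
    TransportId "trtype:PCIe traddr:0000:07:00.0" Nvme0
The subsystem's "Namespace Nvme0n1" line can then stay as it is, since the bdev created from the Nvme0 controller will be named Nvme0n1.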
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Ankit Jain
Sent: Monday, August 21, 2017 3:50 AM
To: Verkamp, Daniel <daniel.verkamp@intel.com>; Storage Performance Development Kit <spdk@lists.01.org>
Subject: Re: [SPDK] Not able to start nvmf_tgt with NVMe device
Hi
Thanks for the response.
As you suggested, I disabled the kernel nvme module, ran setup.sh, and started the target,
but I faced the same issue. Below are the steps I followed.
1.) rmmod nvme
2.) rmmod nvme
rmmod: ERROR: Module nvme is not currently loaded
3.) [root@host-1 app]# lsmod| grep nvme
nvme_rdma 28672 0
nvme_fabrics 20480 1 nvme_rdma
nvme_core 45056 2 nvme_fabrics,nvme_rdma
rdma_cm 57344 5 ib_iser,nvme_rdma,ib_isert,rpcrdma,rdma_ucm
ib_core 204800 15
ib_iser,rdma_rxe,ib_cm,rdma_cm,ib_umad,ib_srp,nvme_rdma,ib_isert,ib_uverbs,rpcrdma,ib_ipoib,iw_cm,ib_srpt,ib_ucm,rdma_ucm
4.) [root@host-1 scripts]# ./setup.sh
0000:07:00.0 (15b7 2001): no driver -> uio_pci_generic
0000:00:04.0 (8086 2f20): ioatdma -> uio_pci_generic
0000:00:04.1 (8086 2f21): ioatdma -> uio_pci_generic
0000:00:04.2 (8086 2f22): ioatdma -> uio_pci_generic
0000:00:04.3 (8086 2f23): ioatdma -> uio_pci_generic
0000:00:04.4 (8086 2f24): ioatdma -> uio_pci_generic
0000:00:04.5 (8086 2f25): ioatdma -> uio_pci_generic
0000:00:04.6 (8086 2f26): ioatdma -> uio_pci_generic
0000:00:04.7 (8086 2f27): ioatdma -> uio_pci_generic
5.) [root@host-1 app]# lsmod | grep uio
uio_pci_generic 16384 0
uio 16384 1 uio_pci_generic
6.) [root@host-1 app]# ./nvmf_tgt/nvmf_tgt -c ../etc/spdk/mynvmf.conf.in <==
Loading configuration file
Starting DPDK 17.05.0 initialization...
[ DPDK EAL parameters: nvmf -c 0x1 --file-prefix=spdk_pid22170 ]
EAL: Detected 6 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
Total cores available: 1
Occupied cpu socket mask is 0x1
copy_engine_ioat.c: 306:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine Offload
Enabled
nvme_rdma.c: 186:nvme_rdma_get_event: *ERROR*: Received event 8 from CM event channel, but
expected event 9
nvme_rdma.c: 514:nvme_rdma_connect: *ERROR*: RDMA connect error
nvme_rdma.c: 758:nvme_rdma_qpair_connect: *ERROR*: Unable to connect the rqpair
nvme_rdma.c:1288:nvme_rdma_ctrlr_construct: *ERROR*: failed to create admin qpair
nvme.c: 320:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller
nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem
nqn.2014-08.org.nvmexpress.discovery on lcore 0 on socket 0
nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem
nqn.2016-06.io.spdk:cnode1 on lcore 0 on socket 0
rdma.c: 955:spdk_nvmf_rdma_create: *NOTICE*: *** RDMA Transport Init ***
rdma.c:1120:spdk_nvmf_rdma_listen: *NOTICE*: *** NVMf Target Listening on 192.168.10.20
port 4420 ***
conf.c: 492:spdk_nvmf_construct_subsystem: *ERROR*: Could not find namespace bdev
'Nvme0n1'
nvmf_tgt.c: 276:spdk_nvmf_startup: *ERROR*: spdk_nvmf_parse_conf() failed
I faced the same issues: “Could not find namespace bdev” and “Failed to construct NVMe
controller”.
7.) Now I commented out “Namespace Nvme0n1” in the configuration file and loaded it.
This time the target started successfully and the initiator connected to it, but the
initiator was unable to list any NVMe device (checked with “nvme list”).
Dmesg output of initiator side
[445474.446352] nvme nvme0: creating 3 I/O queues.
[445474.449927] nvme nvme0: new ctrl: NQN "nqn.2016-06.io.spdk:cnode1", addr
192.168.10.20:4420
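For reference, the initiator-side commands were along these lines (address and NQN as in the dmesg output above; exact options may differ):
  nvme discover -t rdma -a 192.168.10.20 -s 4420
  nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.10.20 -s 4420
  nvme list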
Attaching target configuration file for your reference.
Thanks
Ankit Jain
From: Verkamp, Daniel [mailto:daniel.verkamp@intel.com]
Sent: Friday, August 18, 2017 9:16 PM
To: Storage Performance Development Kit
<spdk@lists.01.org>
Cc: Ankit Jain <Ankit.Jain@wdc.com>
Subject: RE: Not able to start nvmf_tgt with NVMe device
Hi Ankit,
Your configuration file looks correct to me.
However, to use the SPDK NVMe driver (as specified in the [Nvme] section of the
configuration file), you will need to unbind the kernel ‘nvme’ driver and use either
‘vfio-pci’ or ‘uio_pci_generic’ so that the userspace NVMe driver can use it.
You can use scripts/setup.sh from the SPDK repository to configure all NVMe devices on the
system to be used with SPDK. Once the nvme driver has been unbound, you should no longer
see the device in lsblk.
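For example (run from the SPDK repository root; exact device names may differ on your system):
  sudo scripts/setup.sh      # unbinds NVMe devices from the kernel nvme driver
  lsblk | grep nvme          # should now show nothing
  lsmod | grep uio           # uio_pci_generic should be listed (unless vfio-pci is used)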
Thanks,
-- Daniel
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Ankit Jain
Sent: Friday, August 18, 2017 3:53 AM
To: spdk@lists.01.org
Subject: [SPDK] FW: Not able to start nvmf_tgt with NVMe device
Hi
We are trying to set up an NVMe-oF target using SPDK, but are unable to start the nvmf_tgt
application with an NVMe device.
Please find the steps followed below
[root@host-1 nvmf_tgt]# ./nvmf_tgt -c ../../etc/spdk/mynvmf.conf.in
>>>>>>>>>>>>>>>>> Trying to load the config file
Starting DPDK 17.05.0 initialization...
[ DPDK EAL parameters: nvmf -c 0x1 --file-prefix=spdk_pid19063 ]
EAL: Detected 6 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
Total cores available: 1
Occupied cpu socket mask is 0x1
reactor.c: 314:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
copy_engine_ioat.c: 306:copy_engine_ioat_init: *NOTICE*: Ioat Copy Engine Offload
Enabled
EAL: PCI device 0000:07:00.0 on NUMA socket 0
EAL: probe driver: 15b7:2001 spdk_nvme
nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem
nqn.2014-08.org.nvmexpress.discovery on lcore 0 on socket 0
nvmf_tgt.c: 215:nvmf_tgt_create_subsystem: *NOTICE*: allocated subsystem
nqn.2016-06.io.spdk:cnode1 on lcore 0 on socket 0
rdma.c: 955:spdk_nvmf_rdma_create: *NOTICE*: *** RDMA Transport Init ***
rdma.c:1120:spdk_nvmf_rdma_listen: *NOTICE*: *** NVMf Target Listening on 192.168.10.20
port 4420 ***
conf.c: 492:spdk_nvmf_construct_subsystem: *ERROR*: Could not find namespace bdev
'Nvme0n1'
nvmf_tgt.c: 276:spdk_nvmf_startup: *ERROR*: spdk_nvmf_parse_conf() failed
[root@host-1 nvmf_tgt]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
├─sda2 8:2 0 931G 0 part
│ ├─centos-swap 253:1 0 15.7G 0 lvm [SWAP]
│ ├─centos-home 253:2 0 865.3G 0 lvm /home
│ └─centos-root 253:0 0 50G 0 lvm /
└─sda1 8:1 0 500M 0 part /boot
nvme0n1 259:0 0 1.8T 0 disk
The Configuration file used is:
[Nvmf]
MaxQueuesPerSession 4
AcceptorPollRate 10000
[Nvme]
TransportId "trtype:PCIe traddr:0000:07:00.0" Nvme0
#[Malloc]
# NumberOfLuns 1
# LunSizeInMB 512
[Subsystem1]
NQN nqn.2016-06.io.spdk:cnode1
Core 0
#Mode Direct
Listen RDMA 192.168.10.20:4420
#Host nqn.2016-06.io.spdk:init
SN SPDK00000000000001
Namespace Nvme0n1
#NVMe 0000:07:00.0
Can anyone please let me know what went wrong with this configuration when an NVMe device is
used?
Please note that this works fine when Malloc devices are configured instead of NVMe
devices.
Thanks
Ankit Jain