SPDK Trello board
by Shah, Prital B
All,
I just want to highlight that we have an SPDK Trello board for roadmap discussion and current feature design discussions: https://trello.com/spdk
1) To add items to the SPDK backlog, please use this board: https://trello.com/b/P5xBO7UR/things-to-do
2) To discuss feature designs, use the individual feature boards.
We are planning to hold a discussion on JSON-based configuration for SPDK and to brainstorm how we can make it easier for developers to configure SPDK.
You're welcome to join our meeting at +1 (916) 356-2663, bridge 5, Conference ID: 864687063.
The meeting is this Friday, 09/15/2017, at 02:30 PM MST (UTC-7).
Thanks
Prital
3 years, 1 month
SPDK errors
by Santhebachalli Ganesh
Folks,
My name is Ganesh, and I am working on NVMe-oF performance metrics using
SPDK (and the kernel).
I would appreciate your expert insights.
I am observing errors most of the time when the perf queue depth (QD) is
increased to >=64, and sometimes even at <=16.
The errors are not consistent.
Attached are some details.
Please let me know if you have any additional questions.
Thanks.
-Ganesh
3 years, 2 months
nvme drive not showing in vm in spdk
by Nitin Gupta
Hi All
I am new to SPDK development and am currently setting up SPDK; I was
able to set up the back-end storage with NVMe. After starting the VM with
the following command, there is no NVMe drive present:
/usr/local/bin/qemu-system-x86_64 -m 1024 \
  -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on \
  -nographic -no-user-config -nodefaults \
  -serial mon:telnet:localhost:7704,server,nowait \
  -monitor mon:telnet:localhost:8804,server,nowait \
  -numa node,memdev=mem \
  -drive file=/home/qemu/qcows,format=qcow2,if=none,id=disk \
  -device ide-hd,drive=disk,bootindex=0 \
  -chardev socket,id=char0,path=./spdk/vhost.0 \
  -device vhost-user-scsi-pci,id=scsi0,chardev=char0 \
  --enable-kvm
How do I identify which drive is the NVMe drive?
Is there any way to enable NVMe from the qemu command?
PS: I have already specified the NVMe drive in vhost.conf.in.
Regards
Nitin
3 years, 3 months
Qemu command is not working in spdk
by Nitin Gupta
Hi All
I am trying to run the qemu command, but it is not working. Please help.
In terminal 1 (for qemu), I am running the command below, taken from the
same page:
/usr/local/bin/qemu-system-x86_64 -m 1024 \
  -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem \
  -drive file=/root/mgmt.qcow2,if=none,id=disk \
  -device ide-hd,drive=disk,bootindex=0 \
  -chardev socket,id=char0,path=./spdk/vhost.0 \
  -device vhost-user-scsi-pci,id=scsi0,chardev=char0 \
  --enable-kvm
This fails with a GTK error.
In terminal 2:
-bash-4.2# app/vhost/vhost -c etc/spdk/vhost.conf.in
Starting DPDK 17.05.0 initialization...
[ DPDK EAL parameters: vhost -c 0x1 -m 1024 --file-prefix=spdk_pid79776 ]
EAL: Detected 40 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
Total cores available: 1
Occupied cpu socket mask is 0x1
reactor.c: 362:_spdk_reactor_run: *NOTICE*: Reactor started on core 0 on socket 0
EAL: PCI device 0000:d8:00.0 on NUMA socket 1
EAL: probe driver: 8086:a53 spdk_nvme
EAL: PCI device 0000:d9:00.0 on NUMA socket 1
EAL: probe driver: 8086:a53 spdk_nvme
EAL: PCI device 0000:da:00.0 on NUMA socket 1
EAL: probe driver: 8086:a53 spdk_nvme
EAL: PCI device 0000:db:00.0 on NUMA socket 1
EAL: probe driver: 8086:a53 spdk_nvme
gpt.c: 201:spdk_gpt_check_mbr: *ERROR*: Currently only support GPT Protective MBR format
VHOST_CONFIG: vhost-user server: socket created, fd: 14
VHOST_CONFIG: bind to vhost.0
vhost.c: 426:spdk_vhost_dev_construct: *NOTICE*: Controller vhost.0: new controller added
vhost_scsi.c: 874:spdk_vhost_scsi_dev_add_dev: *NOTICE*: Controller vhost.0: defined device 'Dev 0' using lun 'Malloc0'
vhost_scsi.c: 874:spdk_vhost_scsi_dev_add_dev: *NOTICE*: Controller vhost.0: defined device 'Dev 2' using lun 'Nvme0n1'
VHOST_CONFIG: new vhost user connection is 15
VHOST_CONFIG: new device, handle is 0
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_PROTOCOL_FEATURES
VHOST_CONFIG: read message VHOST_USER_GET_QUEUE_NUM
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:16
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:17
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:2 file:18
VHOST_CONFIG: recvmsg failed
VHOST_CONFIG: vhost peer closed
Regards
Nitin
3 years, 3 months
can not construct vhost controller
by Nitin Gupta
Hi All
I am new to SPDK development. Currently I am setting up vhost.conf.in for
an NVMe device.
To set the NVMe device as a LUN, I tried the attached file changes, but
none of them work.
Please see the errors below.
1. Error with the first attached file, vhost.conf.in_first:
VHOST_CONFIG: bind to vhost.0
vhost.c: 426:spdk_vhost_dev_construct: *NOTICE*: Controller vhost.0: new
controller added
vhost_scsi.c: 874:spdk_vhost_scsi_dev_add_dev: *NOTICE*: Controller
vhost.0: defined device 'Dev 0' using lun 'Malloc0'
vhost_scsi.c: 860:spdk_vhost_scsi_dev_add_dev: *ERROR*: Couldn't create
spdk SCSI device 'Dev 2' using lun device 'Nvme0n1' in controller: vhost.0
vhost.c: 768:spdk_vhost_startup: *ERROR*: Cannot construct vhost controllers
2. Error with the vhost.conf.in_second file:
vhost.c: 426:spdk_vhost_dev_construct: *NOTICE*: Controller vhost.0: new
controller added
vhost_scsi.c: 874:spdk_vhost_scsi_dev_add_dev: *NOTICE*: Controller
vhost.0: defined device 'Dev 0' using lun 'Malloc0'
vhost_scsi.c: 860:spdk_vhost_scsi_dev_add_dev: *ERROR*: Couldn't create
spdk SCSI device 'Dev 2' using lun device 'Nvme0' in controller: vhost.0
vhost.c: 768:spdk_vhost_startup: *ERROR*: Cannot construct vhost controllers
3. Error with the vhost.conf.in_third file:
vhost.c: 426:spdk_vhost_dev_construct: *NOTICE*: Controller vhost.0: new
controller added
vhost_scsi.c: 874:spdk_vhost_scsi_dev_add_dev: *NOTICE*: Controller
vhost.0: defined device 'Dev 0' using lun 'Malloc0'
vhost_scsi.c: 860:spdk_vhost_scsi_dev_add_dev: *ERROR*: Couldn't create
spdk SCSI device 'Dev 2' using lun device 'Nvme0n1' in controller: vhost.0
vhost.c: 768:spdk_vhost_startup: *ERROR*: Cannot construct vhost controllers
If I make an entry for the NVMe device like an AIO device, the command
works fine.
Please help me so that I can move ahead.
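For reference, the log above ("defined device 'Dev 0' using lun 'Malloc0'", "'Dev 2' using lun 'Nvme0n1'") suggests a vhost.conf.in of roughly the following shape. This is only a sketch: the section and key names are from SPDK of roughly the v17.x era and may differ in your tree, and the PCI address is an example.

```
# Claim an NVMe controller by PCI address and name its bdevs Nvme0*,
# so namespace 1 shows up as bdev "Nvme0n1".
[Nvme]
  TransportId "trtype:PCIe traddr:0000:d8:00.0" Nvme0

# vhost-scsi controller "vhost.0": each Dev slot is one SCSI device
# backed by one bdev.
[VhostScsi0]
  Name vhost.0
  Dev 0 Malloc0
  Dev 2 Nvme0n1
```

If "Couldn't create spdk SCSI device ... using lun device 'Nvme0n1'" appears, it usually means no bdev with exactly that name was constructed, so checking the bdev name spelling against the NVMe section is a good first step.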
Regards
nitin
3 years, 3 months
Proposal to Dynamic Reconfiguration of SPDK iSCSI Target
by 松本周平 / MATSUMOTO,SHUUHEI
Hello,
This is my first post and I’m very glad to join the SPDK community.
We are working on improving the SPDK iSCSI target for Hitachi's new storage product.
I would like to post a few proposals about the flexibility of the SPDK iSCSI target.
What we want to do is support dynamically adding/removing ports and LUNs for the iSCSI target; by this improvement, we will make the SPDK iSCSI target at least as flexible as SPDK VHOST-SCSI.
I appreciate any feedback.
Thank you,
Shuhei Matsumoto
What we want to do
==================
Adding/removing portals to/from an iSCSI target dynamically
- We make a portal-portal_group pair for each portal (each portal group has only
one portal, and each portal belongs to only one portal group).
- New JSON-RPC commands will add/remove portals to/from an existing iSCSI
target.
Adding an LUN (BDEV) to an existing iSCSI target
- VHOST SCSI supports this function.
- Comparison between VHOST SCSI and the iSCSI target:
  - VHOST SCSI
    - each SCSI controller can have multiple SCSI devices
    - each SCSI device can have only one LUN
    - we can add a SCSI device with an LUN to an existing SCSI controller
  - iSCSI target
    - an iSCSI target can have only one SCSI device
    - a SCSI device can have multiple LUNs
    - we cannot add an additional SCSI device to an existing iSCSI target
-> Hence we would like to add an LUN to an existing SCSI device, the same as
VHOST-SCSI. We should support the following:
  - a JSON-RPC command
  - a Unit Attention function
    - add an LUN to an existing SCSI device
    - a callback function to notify Unit Attention at the hot-add event
Removing an LUN (BDEV) from an existing iSCSI target
- VHOST SCSI supports this function.
- Comparison between VHOST SCSI and the iSCSI target:
  - VHOST SCSI
    - Removing a BDEV will remove the LUN bound to the BDEV from an existing
      SCSI device. The empty SCSI device will remain.
    - Removing a SCSI device will remove the LUN and its bound BDEV along with
      the removed SCSI device.
    - The hot-remove event is notified to the host by a VIRTIO event, through
      the callback function registered at SCSI device creation.
  - iSCSI target
    - Removing a BDEV will remove the LUN bound to the BDEV from an existing
      SCSI device, the same as VHOST SCSI.
    - This hot-remove event is not notified to the host due to the lack of a
      Unit Attention function.
-> Hence we would like to support the following:
  - a Unit Attention function
    - a callback function to notify Unit Attention at the hot-remove event
Design
======
1. iSCSI Portal
1.1 An iscsi_portal_acceptor poller for each portal (currently one global
iscsi_portal_acceptor poller for all portals)
Having one iscsi_portal_acceptor poller walk the whole portal_group_list and
portal_list is not safe when portals and portal_groups are added/removed
dynamically. Currently each portal can be registered into only one
portal_group; hence one iscsi_portal_acceptor per portal is correct.
1.2 Synchronously removing the iscsi_portal_acceptor poller
Currently, removing the iscsi_portal_acceptor does not wait for completion.
When the requester runs on a different core from the to-be-stopped
iscsi_portal_acceptor, the requester will wait for completion using
event_call+semaphore+timer.
To achieve this, an lcore member is added to the spdk_iscsi_conn structure.
Without spdk_iscsi_conn->lcore, the requester cannot know where the
iscsi_portal_acceptor runs. We referred to the vhost_scsi code for the
event_call+semaphore+timer pattern.
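As a sketch of that pattern, in Python threading terms (the real code uses SPDK events and a semaphore on the reactor cores; all names here are illustrative): the requester posts the unregister to the poller's own core and blocks until it has actually run.

```python
import queue
import threading

events = queue.Queue()         # stands in for the target core's event ring
done = threading.Semaphore(0)  # requester blocks on this until completion
stop = threading.Event()
pollers = {"iscsi_portal_acceptor": object()}

def reactor():
    # The reactor on the poller's lcore: drain queued events, then
    # (in the real code) run its registered pollers.
    while not stop.is_set():
        try:
            events.get(timeout=0.01)()
        except queue.Empty:
            pass

def stop_acceptor():
    # Runs on the owning core, so no lock is needed to touch the poller.
    pollers.pop("iscsi_portal_acceptor", None)
    done.release()             # wake the requester: removal is complete

t = threading.Thread(target=reactor)
t.start()
events.put(stop_acceptor)      # requester: "event_call" to the poller's lcore
done.acquire()                 # requester: synchronous wait for completion
stop.set()
t.join()
```

This is why the requester needs spdk_iscsi_conn->lcore: without knowing the owning core, it has no event ring to post the unregister to.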
1.3 Decouple spdk_iscsi_conn and spdk_iscsi_portal to allow removing a portal
that has active connections.
Currently spdk_iscsi_conn refers to an obsolete spdk_iscsi_portal after the
spdk_iscsi_portal is removed. We copy the member data of the iscsi_portal into
the iscsi_conn during login processing; after login completes, the iscsi_conn
refers not to the iscsi_portal but to the copied data.
We do not change the iSCSI target itself; an iSCSI target can be removed
dynamically if it has no active connections.
1.4 Support dynamically adding/removing target ports to/from an existing
iSCSI/SCSI target.
The current spdk_scsi_dev has an array of target ports. However, this array
supports only adding target ports and has some bugs.
2. Internal Data Structure
2.1 PG-IG Mapping (PG: Portal Group, IG: Initiator Group)
Each target node has PG-IG mappings.
The current PG-IG mapping is an array and is not flexible.
We do not change the JSON-RPC parameters, only the internal data structure of
the PG-IG mappings.
Current:
    target --(1..n)--> PG_map --(1..1)--> PG
                         |
                         +--(1..1)--> IG_map --(1..1)--> IG
Our proposal:
    target --(1..n)--> PG_map --(1..1)--> PG
                         |
                         +--(1..m)--> IG_map --(1..1)--> IG
Our proposal makes the accessibility control via PG and IG comprehensible.
We keep the upper limit on the number of PG-IG mappings.
A PG_map must have at least one IG_map; if all IG_maps of a PG_map are
removed, the PG_map is also removed from the target.
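A small Python model of the proposed structure (class and method names are ours, for illustration only): each PG_map holds exactly one PG and one-to-many IG_maps, and a PG_map whose last IG_map is removed disappears with it.

```python
class PgMap:
    """Proposed PG_map: exactly one PG, one or more IG_maps."""
    def __init__(self, pg):
        self.pg = pg          # (1..1) portal group
        self.ig_maps = []     # (1..m) initiator groups

class TargetNode:
    def __init__(self, name):
        self.name = name
        self.pg_maps = []     # (1..n) PG_maps per target

    def add_pg_ig_mapping(self, pg, ig):
        # Reuse the PG_map for this PG if it exists; else create one.
        for m in self.pg_maps:
            if m.pg == pg:
                if ig not in m.ig_maps:
                    m.ig_maps.append(ig)
                return
        m = PgMap(pg)
        m.ig_maps.append(ig)
        self.pg_maps.append(m)

    def remove_pg_ig_mapping(self, pg, ig):
        for m in list(self.pg_maps):
            if m.pg == pg and ig in m.ig_maps:
                m.ig_maps.remove(ig)
                if not m.ig_maps:          # last IG_map gone:
                    self.pg_maps.remove(m)  # PG_map is removed too
                return
```

Compared with the current array, grouping the IG_maps under their PG_map makes "which initiators may connect through which portal group" readable directly from the structure.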
2.2 Global iSCSI Portal List
Currently each portal can be registered into only one portal group, and a
duplicated registration of a portal is detected only when the socket open
system call for the portal fails.
By adding a global portal list, we can detect duplicated registration without
a system call failure.
3. Dynamic Adding/Removing LUNs to/from an existing iSCSI/SCSI target
3.1 UA queue for each I_T (or I_T_L) nexus (UA: Unit Attention)
Each SCSI device must have a queue of UAs for each I_T (or I_T_L) nexus.
Currently the appropriate data structure for the I_T nexus is the
spdk_iscsi_session; hence we add a UA queue for each I_T_L nexus to
the spdk_iscsi_session.
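A minimal Python sketch of the per-nexus UA queue described above (all names are ours; the real state would live in spdk_iscsi_session in C): a hot add queues a UA on every nexus, and the next I/O on that nexus completes with CHECK CONDITION carrying the queued sense.

```python
from collections import deque

class Session:
    """Stands in for spdk_iscsi_session: one per I_T nexus."""
    def __init__(self):
        self.ua_queue = deque()   # pending Unit Attention conditions

class ScsiDev:
    def __init__(self):
        self.sessions = []
        self.luns = {}

    def add_lun(self, lun_id, bdev):
        self.luns[lun_id] = bdev
        # Hot-add: queue a UA on every nexus attached to this device.
        for s in self.sessions:
            s.ua_queue.append("REPORT LUNS DATA HAS CHANGED")

def handle_io(session):
    # The first I/O after the hot add completes with CHECK CONDITION
    # and the queued sense; later I/Os proceed normally.
    if session.ua_queue:
        return ("CHECK CONDITION", session.ua_queue.popleft())
    return ("GOOD", None)
```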
3.2 Lockless concurrency control by event-based callback functions
Currently all connections of an iSCSI target run on the same CPU.
By using SPDK event_call and callback functions, we will implement the Unit
Attention function without using locks.
3.3 How to add an LUN when LUN0 is not active
For the initiator to detect newly added LUNs, the iSCSI target must have
an active LUN0.
When LUN1, LUN2, and LUN3 are active and LUN0 is not, adding an LUN as LUN4
will not let the iSCSI target run correctly; adding the LUN as LUN0 is
required for the iSCSI target to run correctly.
Hence we will make the LUN ID parameter of the ADD LUN command optional.
If the LUN ID parameter is omitted, the new LUN will get the smallest LUN ID
of the unused IDs.
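The ID-selection rule above can be sketched in a few lines (the function name is ours, for illustration): an explicit ID is honored if free, and an omitted ID falls back to the smallest unused one, so a missing LUN0 is always refilled first.

```python
def assign_lun_id(existing_ids, requested_id=None):
    """Pick the LUN ID for a newly added LUN.

    If requested_id is given, it must be free; otherwise choose the
    smallest unused ID, so LUN0 is always the first gap to be refilled.
    """
    used = set(existing_ids)
    if requested_id is not None:
        if requested_id in used:
            raise ValueError("LUN ID %d already in use" % requested_id)
        return requested_id
    lun_id = 0
    while lun_id in used:
        lun_id += 1
    return lun_id
```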
4. JSON-RPC for iSCSI target
4.1 construct_target_node
By our change, this command can skip the registration of PG-IG mappings and
LUNs at target creation; PG-IG mappings and LUNs can then be added/removed later.
4.2 add_pg_ig_mappings_to_target_node
This new command supports adding the specified PG-IG mappings to the existing
target.
4.3 remove_pg_ig_mappings_from_target_node
This new command supports removing the specified PG-IG mappings from the
existing target.
4.4 clear_pg_ig_mappings_from_target_node
This new command supports removing all PG-IG mappings from the existing target.
4.5 add_lun_to_target_node
This new command supports adding an LUN to an existing target.
When a LUN is added, the target does not immediately notify the host. When an
I/O is next sent to an existing LUN, the I/O will get a "check condition"
status, and when REQUEST SENSE is issued, the sense data will indicate
"REPORT LUNS DATA HAS CHANGED".
To add/remove LUNs to/from an existing target correctly, LUN0 is special: if
we remove LUN0, LUN0 has to be assigned to the next added LUN.
To satisfy this requirement, the LUN ID parameter of the ADD LUN command is
optional, and if it is omitted, the smallest free LUN ID is assigned to the
newly added LUN.
4.6 remove_luns_from_target_node
This new command supports removing LUNs from the existing target.
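Since the commands in this section are proposed rather than existing SPDK RPCs, here is only a sketch of what a request might look like on the wire. SPDK's RPC interface is JSON-RPC 2.0; the parameter names below are hypothetical, chosen for illustration.

```python
import json

def make_rpc_request(method, params, req_id=1):
    # Build a JSON-RPC 2.0 request body, as SPDK's rpc.py would send it.
    return json.dumps({"jsonrpc": "2.0", "method": method,
                       "id": req_id, "params": params})

# Proposed command from section 4.5; parameter names are hypothetical.
req = make_rpc_request("add_lun_to_target_node", {
    "name": "iqn.2016-06.io.spdk:Target1",   # example target IQN
    "bdev_name": "Nvme0n1",
    # "lun_id" omitted: the smallest free LUN ID is assigned
})
```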
5. Bug fix of python code (rpc.py) for JSON-RPC
Due to erroneous sorting, we may not be able to construct an iSCSI target node
with multiple LUNs when it is configured via rpc.py. Sorting should be done by
LUN ID but is currently done by LUN name; hence we observed this unintended
failure.
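The effect of the bug can be reproduced in a few lines (the entry names are made up): a lexicographic sort by name scrambles the numeric order, while sorting by the LUN ID itself gives the intended order.

```python
luns = [{"lun_id": 10, "bdev_name": "Malloc10"},
        {"lun_id": 2,  "bdev_name": "Malloc2"},
        {"lun_id": 0,  "bdev_name": "Malloc0"}]

# Buggy: lexicographic sort by name puts "Malloc10" before "Malloc2".
by_name = sorted(luns, key=lambda l: l["bdev_name"])
# Fixed: sort by the numeric LUN ID.
by_id = sorted(luns, key=lambda l: l["lun_id"])
```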
3 years, 3 months
[WIP] Add py-spdk client for SPDK
by We We
Hi, @Paul @Jim
About the topic "[WIP] Add py-spdk client for SPDK": I am sorry that I didn't mention it on the mailing list in advance. Indeed, this is unrelated to my proposal on Trello about blobkv.
Here is my idea of py-spdk:
We can regard py-spdk as a client for SPDK, which provides a Python binding for the native SPDK operations. Take the example of OpenStack/Cyborg as follows:
Cyborg ——————> Cyborg SPDK driver (in Cyborg) —————> py-spdk ——————> SPDK
In the above example, OpenStack Cyborg, as a management layer, calls upon the SPDK module via (1) its driver, which talks to the SPDK client, and (2) the SPDK client, which provides the Python binding.
This is almost a canonical procedure. For example, when Kubernetes needs to use an OpenStack infrastructure, it first calls upon the OpenStack cloud provider (k8s's OpenStack driver); the cloud provider then calls the OpenStack SDKs, which in turn call the OpenStack clients, which then call the OpenStack native APIs. In this case, the OpenStack clients are hosted within the OpenStack community, whereas the cloud providers are hosted in-tree in the Kubernetes community.
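A minimal sketch of what such a binding could look like (the class name, method, and socket path are all hypothetical; the only assumption taken from SPDK is that its RPC transport is JSON-RPC 2.0 over a Unix-domain socket):

```python
import json
import socket

class SpdkClient:
    """Hypothetical py-spdk sketch: a thin JSON-RPC 2.0 binding over
    SPDK's Unix-domain RPC socket."""

    def __init__(self, sock_path="/var/tmp/spdk.sock"):
        self.sock_path = sock_path
        self._id = 0

    def _build(self, method, params=None):
        # Assemble one JSON-RPC 2.0 request body with a fresh id.
        self._id += 1
        req = {"jsonrpc": "2.0", "method": method, "id": self._id}
        if params:
            req["params"] = params
        return json.dumps(req)

    def call(self, method, params=None):
        # Send the request over the RPC socket and decode one reply.
        payload = self._build(method, params)
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(self.sock_path)
            s.sendall(payload.encode())
            return json.loads(s.recv(65536).decode())
```

A management layer such as the Cyborg SPDK driver would then only deal in Python calls like `client.call("get_bdevs")` rather than in raw sockets.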
Although it should be very clear that, unlike the cyborg-spdk-driver, py-spdk should be part of the general SPDK community, it does make sense, as you suggested, that such a module may not be appropriate to host in the master repo of SPDK.
Maybe it is possible to have an individual "spdk/client" repo for all the client-side modules? (We might have other language-binding clients for Java, Go, Perl, ... in the near future.)
Is there anyone else interested in the proposal? Maybe you can review it in https://review.gerrithub.io/#/c/379741/ <https://review.gerrithub.io/#/c/379741/>
Regards,
Helloway
3 years, 3 months
spdk user mode initiator
by Kirubakaran Kaliannan
Hi,
We are evaluating the SPDK target with an NVMe-oF initiator (installing Linux
4.9 only on the initiator but 3.18 on the target).
This is good so far, and we do have customers who don't want to upgrade the
3.18 kernel even on the initiator side.
So, similarly, do we have an SPDK initiator (in user land) that I can
configure on a 3.18 kernel?
If we have one, any pointer to the documentation would help.
Thanks
-kiru
3 years, 3 months
blobfs improvements
by txcy uio
Hello
Are there any improvements for blobfs which optimise the search time in
spdk_fs_file_stat_async() (avoiding the string compare and linear scan of
the linked list)?
Thanks
3 years, 3 months
blobfs questions
by txcy uio
Hello
I have been trying to write a sample app which makes use of the blobfs APIs.
I have been running into some issues with correctly using spdk_fs_init and
spdk_fs_load.
Could someone answer the following?
- When are spdk_fs_init()/spdk_fs_load() used?
- What is the use of the completion callback functions?
- Is it safe to assume that blobfs is ready to be used once spdk_fs_load()
returns successfully?
- spdk_fs_init() and _load() take a send_request_fn as an argument, and the
code sometimes coredumps when this is NULL. How does someone use this?
Thanks
3 years, 4 months