Best practices on driver binding for SPDK in production environments
by Lance Hartmann ORACLE
This email to the SPDK list is a follow-on to a brief discussion held during a recent SPDK community meeting (Tue Jun 26 UTC 15:00).
Lifted and edited from the Trello agenda item (https://trello.com/c/U291IBYx/91-best-practices-on-driver-binding-for-spd...):
During development, many (most?) people rely on running SPDK's scripts/setup.sh to perform a number of initializations, among them unbinding the Linux kernel nvme driver from the NVMe controllers targeted for use by SPDK and then binding them to either uio_pci_generic or vfio-pci. This script is suitable for development environments, but it is not targeted for use in production systems employing SPDK.
I'd like to confer with my fellow SPDK community members on ideas, suggestions, and best practices for handling this driver unbinding/binding. I wrote some udev rules, along with updates to some other Linux system configuration files, to automatically load either the uio_pci_generic or vfio-pci module. I also had to update my initramfs so that when the system comes all the way up, the desired NVMe controllers are already bound to the driver needed for SPDK operation. As a bonus, it should "just work" when a hotplug occurs as well. However, there may be additional considerations I have overlooked, on which I'd appreciate input. Further, there's the matter of how (and whether) to semi-automate this configuration via some kind of script, and how that might vary by Linux distro, to say nothing of deciding between uio_pci_generic and vfio-pci.
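For reference, the core unbind/rebind step that setup.sh performs per controller can be sketched as a small shell function. The BDF, the use of driver_override, and the SYSFS parameterization below are illustrative choices of mine, not an excerpt from setup.sh:

```shell
# Rebind one PCI device (by bus-device-function) from its current driver
# to vfio-pci, roughly the way scripts/setup.sh does. Run as root on a
# real system. SYSFS defaults to /sys but can be overridden for testing.
bind_to_vfio() {
    bdf=$1
    sysfs=${SYSFS:-/sys}

    # Release the device from whatever driver currently owns it (e.g. nvme).
    if [ -e "$sysfs/bus/pci/devices/$bdf/driver" ]; then
        echo "$bdf" > "$sysfs/bus/pci/devices/$bdf/driver/unbind"
    fi

    # Route the device to vfio-pci and ask that driver to claim it.
    echo "vfio-pci" > "$sysfs/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > "$sysfs/bus/pci/drivers/vfio-pci/bind"
}

# Example (as root): bind_to_vfio 0000:30:00.0
```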
And, now some details:
1. I performed this on an Oracle Linux (OL) distro. I'm currently unaware of how and which configuration files might differ depending on the distro. Oracle Linux is RedHat-compatible, so I'm confident my implementation should run similarly on RedHat-based systems, but I've yet to delve into other distros like Debian, SuSE, etc.
2. In preparation for writing my own udev rules, I unbound a specific NVMe controller from the Linux nvme driver by hand. Then, in another window, I launched "udevadm monitor -k -p" so that I could observe the usual udev events when an NVMe controller is bound to the nvme driver. On my system, I observed four (4) udev kernel events (abbreviated/edited output to avoid this becoming excessively long):
(Event 1)
KERNEL[382128.187273] add /devices/pci0000:00/0000:00:02.2/0000:30:00.0/nvme/nvme0 (nvme)
ACTION=add
DEVNAME=/dev/nvme0
…
SUBSYSTEM=nvme
(Event 2)
KERNEL[382128.244658] bind /devices/pci0000:00/0000:00:02.2/0000:30:00.0 (pci)
ACTION=bind
DEVPATH=/devices/pci0000:00/0000:00:02.2/0000:30:00.0
DRIVER=nvme
…
SUBSYSTEM=pci
(Event 3)
KERNEL[382130.697832] add /devices/virtual/bdi/259:0 (bdi)
ACTION=add
DEVPATH=/devices/virtual/bdi/259:0
...
SUBSYSTEM=bdi
(Event 4)
KERNEL[382130.698192] add /devices/pci0000:00/0000:00:02.2/0000:30:00.0/nvme/nvme0/nvme0n1 (block)
ACTION=add
DEVNAME=/dev/nvme0n1
DEVPATH=/devices/pci0000:00/0000:00:02.2/0000:30:00.0/nvme/nvme0/nvme0n1
DEVTYPE=disk
...
SUBSYSTEM=block
3. My udev rule triggers on (Event 2) above: the bind action. Upon this action, my udev rule appends operations to the special udev RUN variable so that udev essentially mirrors what SPDK's scripts/setup.sh does to unbind from the nvme driver and bind to, in my case, the vfio-pci driver.
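For illustration, a rule of roughly this shape can trigger on that bind event (the rules file name, the BDF, and the helper script path here are placeholders, not my exact rules):

```
# /etc/udev/rules.d/99-spdk-vfio.rules  (hypothetical file name)
# On the PCI "bind" event for this controller, schedule a helper that
# unbinds it from nvme and binds it to vfio-pci, as setup.sh would.
ACTION=="bind", SUBSYSTEM=="pci", DRIVER=="nvme", KERNEL=="0000:40:00.0", \
    RUN+="/usr/local/bin/spdk-rebind.sh $kernel"
```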
4. With my new udev rules in place, I succeeded in getting specific NVMe controllers (based on bus-device-function) to unbind from the Linux nvme driver and bind to vfio-pci. However, I made a couple of observations in the kernel log (dmesg). In particular, I was drawn to the following for an NVMe controller at BDF 0000:40:00.0, for which I had a udev rule to unbind from nvme and bind to vfio-pci:
[ 35.534279] nvme nvme1: pci function 0000:40:00.0
[ 37.964945] nvme nvme1: failed to mark controller live
[ 37.964947] nvme nvme1: Removing after probe failure status: 0
One theory I have for the above is that my udev RUN rule was invoked while the nvme driver's probe() was still running on this controller, and the unbind request came in before probe() completed, hence the "nvme1: failed to mark controller live". This has left me wondering whether, instead of triggering on (Event 2) when the bind occurs, I should try to trigger on the "last" udev event, an "add", when the NVMe namespaces are instantiated. Of course, I'd need to know ahead of time how many namespaces exist on that controller so that I could trigger on the last one. I'm wondering if that may help avoid what looks like a complaint in the middle of probe() on that particular controller. Then again, maybe I can just safely ignore it and not worry about it at all? Thoughts?
I discovered another issue during this experimentation that is somewhat tangential to this task, but I’ll write a separate email on that topic.
thanks for any feedback,
--
Lance Hartmann
lance.hartmann(a)oracle.com
1 year, 2 months
Topic from last week's community meeting
by Luse, Paul E
Hi Shuhei,
I was out of town last week and missed the meeting but saw on Trello you had the topic below:
"a few idea: log structured data store , data store with compression, and metadata replication of Blobstore"
I'd be pretty interested in working on that with you, or at least hearing more about it. When you get a chance (no hurry), can you please expand a little on how the conversation went and what you're looking at specifically?
Thanks!
Paul
1 year, 3 months
Add py-spdk client for SPDK
by We We
Hi, all
I have submitted the py-spdk code at https://review.gerrithub.io/#/c/379741/; please take some time to review it. I would be very grateful.
py-spdk is a client that helps upper-level apps communicate with SPDK-based apps (such as nvmf_tgt, vhost, iscsi_tgt, etc.). Should I submit it to a separate repo of my own rather than the SPDK repo? I think it is a relatively independent kit built on top of SPDK.
If you have some thoughts about the py-spdk, please share with me.
Regards,
Helloway
1 year, 3 months
virtio-vhost-user with virtio-scsi: end-to-end setup
by Nikos Dragazis
Dear Stefan,
I hope you are the right person to contact about this, please point me
to the right direction otherwise. I am also Cc:ing the SPDK and
qemu-devel mailing lists, to solicit community feedback.
As part of my internship at Arrikto, I have spent the last few months
working on the SPDK vhost target application. I was triggered by the
“VirtioVhostUser” feature you proposed for QEMU
[https://wiki.qemu.org/Features/VirtioVhostUser] and made my end goal to
have an end-to-end system running, where a slave VM offers storage to a
master VM over vhost-user, backed by an underlying SCSI block device.
My current approach is to use virtio-scsi-based storage inside the
slave VM.
I see that you have managed to move the vhost-user backend inside a VM
over a virtio-vhost-user transport. I have experimented with running the
SPDK vhost app over vhost-user, but have run into quite a few problems
with the virtio-pci driver. Apologies in advance for the rather lengthy
email, I would definitely value any short-term hints you may have, as
well as any longer-term feedback you may offer on my general direction.
My current state is:
I started with your DPDK code at
https://github.com/stefanha/dpdk/tree/virtio-vhost-user, and read about
your effort to integrate the DPDK vhost-scsi application with
virtio-vhost-user, here:
http://mails.dpdk.org/archives/dev/2018-January/088155.html
My initial approach was to replicate your work, but with the SPDK vhost
library running over virtio-vhost-user. I have pushed all of my code to
the following repository; it is still a WIP and I really need to tidy up
the commits:
https://bitbucket.org/ndragazis/spdk.git
Hacks I had to do:
- I use the modified script usertools/dpdk-devbind.py found in your DPDK
repository here: https://github.com/stefanha/dpdk to bind the
virtio-vhost-user device to the vfio-pci kernel driver. The SPDK setup
script in scripts/setup.sh does not handle unclassified devices like
the virtio-vhost-user device. I plan to fix this later.
- I pass the PCI address of the virtio-vhost-user device to the vhost
library, by repurposing the existing -S option; it no longer refers to
the UNIX socket, as in the case of the UNIX transport. This means the
virtio-vhost-user transport is hardcoded and not configurable by the
user. I plan to fix this later.
- I copied your code that implements the virtio-vhost-user transport and
made the necessary changes to abstract the transport implementation.
I also copied the virtio-pci code from DPDK rte_vhost into the SPDK
vhost library, so the virtio-vhost-user driver could use it. I saw
this is what you did as a quick hack to make the DPDK vhost-scsi
application handle the virtio-vhost-user device.
Having done that, I tried to demo my integration end-to-end, and
everything worked fine with a Malloc block device, but things broke
when I switched to a virtio-scsi block device inside the slave. My
attempts to call construct_vhost_scsi_controller failed with an I/O
error. Here is the log:
-- cut here --
$ export VVU_DEVICE="0000:00:06.0"
$ sudo modprobe vfio enable_unsafe_noiommu_mode=1
$ sudo modprobe vfio-pci
$ sudo ./dpdk-devbind.py -b vfio-pci $VVU_DEVICE
$ cd spdk
$ sudo scripts/setup.sh
Active mountpoints on /dev/vda, so not binding PCI dev 0000:00:04.0
0000:00:05.0 (1af4 1004): virtio-pci -> vfio-pci
$ sudo app/vhost/vhost -S "$VVU_DEVICE" -m 0x3 &
[1] 3917
$ Starting SPDK v18.07-pre / DPDK 18.02.0 initialization...
[ DPDK EAL parameters: vhost -c 0x3 -m 1024 --file-prefix=spdk_pid3918 ]
EAL: Multi-process socket /var/run/.spdk_pid3918_unix
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: PCI device 0000:00:06.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 1af4:1017 virtio_vhost_user
EAL: using IOMMU type 8 (No-IOMMU)
EAL: Ignore mapping IO port bar(0)
VIRTIO_PCI_CONFIG: found modern virtio pci device.
VIRTIO_PCI_CONFIG: modern virtio pci detected.
VHOST_CONFIG: Added virtio-vhost-user device at 0000:00:06.0
$ sudo scripts/rpc.py construct_virtio_pci_scsi_bdev 0000:00:05.0 VirtioScsi0
EAL: PCI device 0000:00:05.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 1af4:1004 spdk_virtio
EAL: Ignore mapping IO port bar(0)
[
"VirtioScsi0t0"
]
$ sudo scripts/rpc.py construct_vhost_scsi_controller --cpumask 0x1 vhost.0
VHOST_CONFIG: BAR 2 not availabled
Got JSON-RPC error response
request:
{
"params": {
"cpumask": "0x1",
"ctrlr": "vhost.0"
},
"jsonrpc": "2.0",
"method": "construct_vhost_scsi_controller",
"id": 1
}
response:
{
"message": "Input/output error",
"code": -32602
}
-- cut here --
This was really painful to debug. I managed to find the cause yesterday:
I had bumped into this DPDK bug:
https://bugs.dpdk.org/show_bug.cgi?id=85
and I worked around it, essentially by short-circuiting the point where
the DPDK runtime rescans the PCI bus and corrupts the
dev->mem_resource[] field for the already-mapped-in-userspace
virtio-vhost-user PCI device. I just commented out this line:
https://github.com/spdk/dpdk/blob/08332d13b3a66cb1a8c3a184def76b039052d67...
This seems to be a good enough workaround for now. I'm not sure whether
this bug has been fixed; I will comment on the DPDK bugzilla.
But now I have really hit a roadblock: I get a segfault when I run the
exact same commands shown above, and end up with this backtrace:
-- cut here --
#0 0x000000000046ae42 in spdk_bdev_get_io (channel=0x30) at bdev.c:920
#1 0x000000000046c985 in spdk_bdev_readv_blocks (desc=0x93f8a0, ch=0x0,
iov=0x7ffff2fb7c88, iovcnt=1, offset_blocks=0, num_blocks=8,
cb=0x453e1a <spdk_bdev_scsi_task_complete_cmd>, cb_arg=0x7ffff2fb7bc0) at bdev.c:1696
#2 0x000000000046c911 in spdk_bdev_readv (desc=0x93f8a0, ch=0x0, iov=0x7ffff2fb7c88,
iovcnt=1, offset=0, nbytes=4096, cb=0x453e1a <spdk_bdev_scsi_task_complete_cmd>,
cb_arg=0x7ffff2fb7bc0) at bdev.c:1680
#3 0x0000000000453fe2 in spdk_bdev_scsi_read (bdev=0x941c80, bdev_desc=0x93f8a0,
bdev_ch=0x0, task=0x7ffff2fb7bc0, lba=0, len=8) at scsi_bdev.c:1317
#4 0x000000000045462e in spdk_bdev_scsi_readwrite (task=0x7ffff2fb7bc0, lba=0,
xfer_len=8, is_read=true) at scsi_bdev.c:1477
#5 0x0000000000454c95 in spdk_bdev_scsi_process_block (task=0x7ffff2fb7bc0)
at scsi_bdev.c:1662
#6 0x00000000004559ce in spdk_bdev_scsi_execute (task=0x7ffff2fb7bc0)
at scsi_bdev.c:2029
#7 0x00000000004512e4 in spdk_scsi_lun_execute_task (lun=0x93f830, task=0x7ffff2fb7bc0)
at lun.c:162
#8 0x0000000000450a87 in spdk_scsi_dev_queue_task (dev=0x713c80 <g_devs>,
task=0x7ffff2fb7bc0) at dev.c:264
#9 0x000000000045ae48 in task_submit (task=0x7ffff2fb7bc0) at vhost_scsi.c:268
#10 0x000000000045c2b8 in process_requestq (svdev=0x7ffff31d9dc0, vq=0x7ffff31d9f40)
at vhost_scsi.c:649
#11 0x000000000045c4ad in vdev_worker (arg=0x7ffff31d9dc0) at vhost_scsi.c:685
#12 0x00000000004797f2 in _spdk_reactor_run (arg=0x944540) at reactor.c:471
#13 0x0000000000479dad in spdk_reactors_start () at reactor.c:633
#14 0x00000000004783b1 in spdk_app_start (opts=0x7fffffffe390,
start_fn=0x404df8 <vhost_started>, arg1=0x0, arg2=0x0) at app.c:570
#15 0x0000000000404ec0 in main (argc=7, argv=0x7fffffffe4f8) at vhost.c:115
-- cut here --
I have not yet been able to debug this; it's most probably my bug, but I
am wondering whether there could be a conflict between the two distinct
virtio drivers: (1) the pre-existing one in the SPDK virtio library
under lib/virtio/, and (2) the one I copied into lib/vhost/rte_vhost/ as
part of the vhost library.
I understand that even if I make it work for now, this cannot be a
long-term solution. I would like to re-use the pre-existing virtio-pci
code from the virtio library to support virtio-vhost-user.
Do you see any potential problems in this? Did you change the virtio
code that you placed inside rte_vhost? It seems there are subtle
differences between the two codebases.
These are my short-term issues. On the longer term, I’d be happy to
contribute to VirtioVhostUser development any way I can. I have seen
some TODOs in your QEMU code here:
https://github.com/stefanha/qemu/blob/virtio-vhost-user/hw/virtio/virtio-...
and I would like to contribute, but it’s not obvious to me what
progress you’ve made since.
As an example, I’d love to explore the possibility of adding support for
interrupt-driven vhost-user backends over the virtio-vhost-user
transport.
To summarize:
- I will follow up on the DPDK bug here:
https://bugs.dpdk.org/show_bug.cgi?id=85 about a proposed fix.
- Any hints on my segfault? I will definitely continue troubleshooting.
- Once I’ve sorted this out, how can I start using a single copy of the
virtio-pci codebase? I guess I have to make some changes to comply
with the API and check the dependencies.
- My current plan to contribute towards an IRQ-based implementation of
the virtio-vhost-user transport would be to use the vhost-user kick
file descriptors as a trigger to insert virtual interrupts and handle
them in userspace. The virtio-vhost-user device could exploit the
irqfd mechanism of the KVM for this purpose. I will keep you and the
list posted on this, I would appreciate any early feedback you may
have.
Looking forward to any comments/feedback/pointers you may have. I am
rather inexperienced with this stuff, but it’s definitely exciting and
I’d love to contribute more to QEMU and SPDK.
Thank you for reading this far,
Nikos
--
Nikos Dragazis
Undergraduate Student
School of Electrical and Computer Engineering
National Technical University of Athens
2 years, 3 months
Design policy to support extended LBA and T10 PI in bdevs.
by 松本周平 / MATSUMOTO,SHUUHEI
Hi All,
I've worked on extended LBA in bdevs first.
I will do T10 PI on top of the extended LBA next.
I expect some applications or users will need separate metadata, and I will address that as well.
About extended LBA in bdevs, I would like to hear any feedback before submitting patches.
Any feedback is very much appreciated.
Q1. Which length should spdk_bdev_get_block_size(bdev) describe?
Option 1: length of extended block (data + metadata)
Option 2: length of only data block. user will get length of metadata by spdk_bdev_get_md_size(bdev)
Or any other idea?
The current implementation is Option 1, but the NVMe-oF target currently cuts off the metadata size even when metadata is enabled.
Keeping the current implementation, Option 1, sounds reasonable to me.
Q2. Which behavior should bdev IO APIs have by default?
Option 1: READ_PASS and WRITE_PASS (the upper layer must be aware of extended LBA by default)
Option 2: READ_STRIP and WRITE_INSERT (extended LBA is transparent to the upper layer by default)
Or any other idea?
READ_STRIP reads data and metadata from the target, discards metadata, and transfers only data to the upper layer.
WRITE_INSERT transfers only data from the upper layer, adds metadata, and writes both data and metadata to the target.
READ_PASS reads data and metadata from the target and transfers both data and metadata to the upper layer.
WRITE_PASS transfers data and metadata from the upper layer and writes both data and metadata to the target.
Option 1 looks reasonable to me. I will be able to provide a new bdev module to use extended LBA bdevs transparently.
The new bdev module will do READ_STRIP and WRITE_INSERT internally.
If we take Option 2, we will have to provide a set of ELBA-aware APIs.
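To make the four modes concrete, here is a small standalone sketch (not SPDK code; the block and metadata sizes are arbitrary) of the buffer transforms that WRITE_INSERT and READ_STRIP imply for an extended-LBA format, where each data block is immediately followed by its metadata on the media:

```c
#include <stddef.h>
#include <string.h>

/* WRITE_INSERT: interleave data and per-block metadata into one
 * extended-LBA buffer of nblocks * (blk + md_len) bytes. */
static void write_insert(const char *data, const char *md, char *ext,
                         size_t blk, size_t md_len, size_t nblocks)
{
    for (size_t i = 0; i < nblocks; i++) {
        memcpy(ext + i * (blk + md_len), data + i * blk, blk);
        memcpy(ext + i * (blk + md_len) + blk, md + i * md_len, md_len);
    }
}

/* READ_STRIP: keep only the data portion of each extended block,
 * discarding the interleaved metadata. */
static void read_strip(const char *ext, char *data, size_t blk,
                       size_t md_len, size_t nblocks)
{
    for (size_t i = 0; i < nblocks; i++) {
        memcpy(data + i * blk, ext + i * (blk + md_len), blk);
    }
}
```

READ_PASS and WRITE_PASS would simply move the full interleaved buffer through unchanged.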
Q3. To support applications which require separate metadata, would providing the following APIs be reasonable?
- add md_buf and md_len to parameters
- add the suffix "_md" to the function name.
About T10 PI, I will send questions as separate mail later.
Bdev provides the following APIs:
- int spdk_bdev_read(desc, ch, buf, offset, nbytes, cb, cb_arg)
- int spdk_bdev_readv(desc, ch, iov, iovcnt, offset, nbytes, cb, cb_arg)
- int spdk_bdev_write(desc, ch, buf, offset, nbytes, cb, cb_arg)
- int spdk_bdev_writev(desc, ch, iov, iovcnt, offset, nbytes, cb, cb_arg)
- int spdk_bdev_write_zeroes(desc, ch, offset, len, cb, cb_arg)
- int spdk_bdev_unmap(desc, ch, offset, nbytes, cb, cb_arg)
- int spdk_bdev_reset(desc, ch, cb, cb_arg)
- int spdk_bdev_flush(desc, ch, offset, length, cb, cb_arg)
- int spdk_bdev_nvme_admin_passthru(desc, ch, cmd, buf, nbytes, cb, cb_arg)
- int spdk_bdev_nvme_io_passthru(bdev_desc, ch, cmd, buf, nbytes, cb, cb_arg)
- int spdk_bdev_nvme_io_passthru_md(bdev_desc, ch, cmd, buf, nbytes, md_buf, md_len, cb, cb_arg)
- uint32_t spdk_bdev_get_block_size(bdev)
- uint64_t spdk_bdev_get_num_blocks(bdev)
- int spdk_bdev_read_blocks(desc, ch, buf, offset_blocks, num_blocks, cb, cb_arg)
- int spdk_bdev_readv_blocks(desc, ch, iov, iovcnt, offset_blocks, num_blocks, cb, cb_arg)
- int spdk_bdev_write_blocks(desc, ch, buf, offset_blocks, num_blocks, cb, cb_arg)
- int spdk_bdev_writev_blocks(desc, ch, iov, iovcnt, offset_blocks, num_blocks, cb, cb_arg)
- int spdk_bdev_write_zeroes_blocks(desc, ch, offset_blocks, num_blocks, cb, cb_arg)
- int spdk_bdev_unmap_blocks(desc, ch, offset_blocks, num_blocks, cb, cb_arg)
- int spdk_bdev_flush_blocks(desc, ch, offset_blocks, num_blocks, cb, cb_arg)
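As a concrete (hypothetical) shape for Q3, a read variant with separate metadata might look like the sketch below. The function name, parameter order, and the stub's argument checking are my guesses to illustrate the "_md" suffix proposal, not an agreed SPDK API:

```c
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct spdk_bdev_desc;
struct spdk_io_channel;

typedef void (*spdk_bdev_io_completion_cb)(void *bdev_io, bool success, void *cb_arg);

/* Example no-op completion callback. */
static void
example_done_cb(void *bdev_io, bool success, void *cb_arg)
{
	(void)bdev_io;
	*(int *)cb_arg = success ? 1 : 0;
}

/* Hypothetical Q3 variant: spdk_bdev_read_blocks() plus md_buf/md_len. */
int
spdk_bdev_read_blocks_md(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
			 void *buf, void *md_buf, size_t md_len,
			 uint64_t offset_blocks, uint64_t num_blocks,
			 spdk_bdev_io_completion_cb cb, void *cb_arg)
{
	(void)ch; (void)offset_blocks; (void)num_blocks;
	/* A nonzero metadata length with no metadata buffer is a caller error. */
	if (desc == NULL || buf == NULL || (md_len > 0 && md_buf == NULL)) {
		return -EINVAL;
	}
	/* A real implementation would submit the I/O; this stub just completes. */
	cb(NULL, true, cb_arg);
	return 0;
}
```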
Thanks,
Shuhei
2 years, 3 months
Question on LVS and hotplug
by Sablok, Kunal
Hi,
I have a few questions on LVS.
* Consider an LVS/LVOL created over an nvme_bdev device. When the nvme_bdev is removed (hotplug), the notification goes to LVS, which calls its hotplug remove_cb function. As part of that function, why does LVS issue some read I/Os? Since the drive is being removed (or was physically removed), it can never serve I/Os.
* Also, if the I/O (issued by LVS in the context of base bdev removal) fails, LVS never gets unloaded. Why does LVS never get unloaded in this case?
If I change the nvme_bdev layer to fail new I/Os when the nvme_bdev is removed, then I hit the issue where LVS never gets unloaded because the I/O has failed (I suspect I/Os could fail the same way when the physical device is actually removed).
After making the further changes below in LVS, it started working fine:
In _vbdev_lvs_remove_cb():

    if (lvserrno != 0) {
        SPDK_INFOLOG(SPDK_LOG_VBDEV_LVOL, "Could not remove lvol store bdev\n");
        TAILQ_REMOVE(&g_spdk_lvol_pairs, lvs_bdev, lvol_stores);
        free(lvs_bdev);
    } else {
        TAILQ_REMOVE(&g_spdk_lvol_pairs, lvs_bdev, lvol_stores);
        free(lvs_bdev);
    }
In _spdk_bs_load_ctx_fail(), the change is to call "_spdk_bs_free(ctx->bs);" every time and comment out the "if (ctx->is_load)" check.
Could anybody please comment on this?
Regards,
Kunal
2 years, 3 months
NVMe-oF question with SPDK
by Shi Bingxun
Dear experts,
Could you advise on my issues with SPDK? I'm trying to set up an NVMe over Fabrics environment using SPDK. With the steps listed below, I can successfully connect to the NVMe subsystem from the initiator, so I think my steps are correct. But I found that once the target reboots, the setup information on the target is gone, including the created NVMe bdevs. I'm really confused: won't the created bdevs be saved?
----------------------------------------------------------------------------------------------
On NVMe target:
sudo scripts/setup.sh
./app/nvmf_tgt/nvmf_tgt &
cd scripts
./rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:03:00.0
./rpc.py construct_nvmf_subsystem nqn.2016-06.io.spdk:cnode1 "trtype:RDMA traddr:192.168.1.2 trsvcid:4420" "" -a -s SPDK00000000000001 -n NVMe1n1
On initiator:
nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.1.2 -s 4420
------------------------------------------------------------------------------------------------
After the target reboots, I run the commands below to check the bdev, and it reports "*ERROR*: bdev 'Nvme0n1' does not exist":
./app/nvmf_tgt/nvmf_tgt &
./rpc.py get_bdevs -b NVMe1n1
On the SPDK website, there is a configuration file passed to the nvmf_tgt command. I tried adding "TransportID "trtype:PCIe traddr:0000:03:00.0" NVMe1" to the configuration file, and the NVMe1n1 bdev is available after running the command below. But it looks like the NVMe bdev is simply created again when the command runs; is my understanding correct?
app/nvmf_tgt/nvmf_tgt -c /path/to/nvmf.conf
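For what it's worth, my understanding is that bdevs created via RPC live only in memory, while the configuration file recreates them at every startup. A fragment corresponding to the commands in this thread might look roughly like the following (section and key names follow the legacy SPDK conf format as I understand it; treat the exact spelling as illustrative):

```
[Nvme]
  TransportID "trtype:PCIe traddr:0000:03:00.0" NVMe1

[Subsystem1]
  NQN nqn.2016-06.io.spdk:cnode1
  Listen RDMA 192.168.1.2:4420
  AllowAnyHost Yes
  SN SPDK00000000000001
  Namespace NVMe1n1
```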
Thanks
Neil
2 years, 4 months
SPDK Ring Enqueue/Dequeue Issue
by John Barnard
While testing our NVMf FC transport code, we ran into an issue with an
spdk_ring we are using to share an FC resource between multiple pollers
running on a particular FC port (i.e. a multi-producer, multi-consumer
scenario). We were seeing corruption of the ring and the resources, causing
the I/Os to fail. When we debugged this problem, it came down to a
difference in the rte_ring calls made by the SPDK enqueue/dequeue functions.
Here's the code snippet from lib/env_dpdk/env.c (note the mp/sc mismatch
between the two calls):
size_t
spdk_ring_enqueue(struct spdk_ring *ring, void **objs, size_t count)
{
    int rc;

#if RTE_VERSION < RTE_VERSION_NUM(17, 5, 0, 0)
    rc = rte_ring_mp_enqueue_bulk((struct rte_ring *)ring, objs, count);
    if (rc == 0) {
        return count;
    }
    return 0;
#else
    rc = rte_ring_mp_enqueue_bulk((struct rte_ring *)ring, objs, count, NULL);
    return rc;
#endif
}

size_t
spdk_ring_dequeue(struct spdk_ring *ring, void **objs, size_t count)
{
#if RTE_VERSION < RTE_VERSION_NUM(17, 5, 0, 0)
    return rte_ring_sc_dequeue_burst((struct rte_ring *)ring, objs, count);
#else
    return rte_ring_sc_dequeue_burst((struct rte_ring *)ring, objs, count, NULL);
#endif
}
It seems that the spdk_ring_enqueue function calls the multi-producer
rte_ring_mp_enqueue_bulk(), while the spdk_ring_dequeue function calls the
single-consumer rte_ring_sc_dequeue_burst(). When we changed it to call
rte_ring_mc_dequeue_burst(), the problem went away. So my question is: why
this difference in the rte calls made by SPDK? Is this a bug, or on
purpose? Also, we noticed that there is no ring create flag (in
include/spdk/env.h) for multi-producer, multi-consumer (i.e.
SPDK_RING_TYPE_MP_MC), although it doesn't seem to matter if we pass the
SPDK_RING_TYPE_MP_SC flag to spdk_ring_create() (i.e. it's the call to
rte_ring_mc_dequeue_burst that's critical). Is there a reason this flag
was left out?
I'm not familiar with the history of these functions, but there seems to be
an inconsistency here and it's causing a failure for us.
Thanks,
John Barnard
2 years, 4 months
Community meeting
by Harris, James R
There was an issue with the WebEx meeting for today’s call. A new meeting has been set up. The meeting number is 590 091 137. Same password as always.
Thanks,
-Jim
2 years, 4 months