Re: [SPDK] Buffer I/O error on bigger block size running fio
by Harris, James R
Hi Victor,
Could you provide a few more details? This will help the list to provide some ideas.
1) On the client, are you using the SPDK NVMe-oF initiator or the kernel initiator?
2) Can you provide the fio configuration file or command line? Just so we can have more specifics on “bigger block size” (for example, a job along the lines of the sketch below).
3) Any details on the HW setup – specifically details on the RDMA NIC (or if you’re using SW RoCE).
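For illustration only, here is a guess at the kind of large-block randwrite job being described; the device name, block size, queue depth, and runtime are all assumptions rather than Victor's actual configuration:
```sh
# Hypothetical fio command line for a "bigger block size" randwrite test
# against the NVMe-oF namespace; every value here is an assumption.
fio --name=randwrite-bigbs \
    --filename=/dev/nvme1n1 \
    --ioengine=libaio \
    --direct=1 \
    --rw=randwrite \
    --bs=1M \
    --iodepth=32 \
    --numjobs=4 \
    --time_based --runtime=300
```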
Thanks,
-Jim
From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Victor Banh <victorb(a)mellanox.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Thursday, October 5, 2017 at 11:26 AM
To: "spdk(a)lists.01.org" <spdk(a)lists.01.org>
Subject: [SPDK] Buffer I/O error on bigger block size running fio
Hi
I have an SPDK NVMe-oF setup and keep getting errors with bigger block sizes when running fio randwrite tests.
I am using Ubuntu 16.04 with kernel 4.12.0-041200-generic on both the target and the client.
DPDK is 17.08 and SPDK is 17.07.1.
Thanks
Victor
[46905.233553] perf: interrupt took too long (2503 > 2500), lowering kernel.perf_event_max_sample_rate to 79750
[48285.159186] blk_update_request: I/O error, dev nvme1n1, sector 2507351968
[48285.159207] blk_update_request: I/O error, dev nvme1n1, sector 1301294496
[48285.159226] blk_update_request: I/O error, dev nvme1n1, sector 1947371168
[48285.159239] blk_update_request: I/O error, dev nvme1n1, sector 1891797568
[48285.159252] blk_update_request: I/O error, dev nvme1n1, sector 10833824
[48285.159265] blk_update_request: I/O error, dev nvme1n1, sector 614937152
[48285.159277] blk_update_request: I/O error, dev nvme1n1, sector 1872305088
[48285.159290] blk_update_request: I/O error, dev nvme1n1, sector 1504491040
[48285.159299] blk_update_request: I/O error, dev nvme1n1, sector 1182136128
[48285.159308] blk_update_request: I/O error, dev nvme1n1, sector 1662985792
[48285.191185] nvme nvme1: Reconnecting in 10 seconds...
[48285.191254] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191291] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191305] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191314] ldm_validate_partition_table(): Disk read failed.
[48285.191320] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191327] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191335] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191342] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191347] Dev nvme1n1: unable to read RDB block 0
[48285.191353] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191360] Buffer I/O error on dev nvme1n1, logical block 0, async page read
[48285.191375] Buffer I/O error on dev nvme1n1, logical block 3, async page read
[48285.191389] nvme1n1: unable to read partition table
[48285.223197] nvme1n1: detected capacity change from 1600321314816 to 0
[48289.623192] nvme1n1: detected capacity change from 0 to -65647705833078784
[48289.623411] ldm_validate_partition_table(): Disk read failed.
[48289.623447] Dev nvme1n1: unable to read RDB block 0
[48289.623486] nvme1n1: unable to read partition table
[48289.643305] ldm_validate_partition_table(): Disk read failed.
[48289.643328] Dev nvme1n1: unable to read RDB block 0
[48289.643373] nvme1n1: unable to read partition table
Re: [SPDK] histogram patches
by Harris, James R
Hi John,
Thanks for pushing out the new request. Could you use the original Gerrit commit ID instead of starting a new one? This will allow us to keep the history from the earlier reviews, to make sure all of the previous comments were addressed.
You should just need to:
1) Restore https://review.gerrithub.io/#/c/363114/
2) Modify the Change-Id in your new patch to Change-Id: I6db92dda23ed8cabef7f222aba0c9523e8f75bf7 (this is the same as the original patch Girish submitted)
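For reference, a minimal sketch of what that might look like locally; the branch name and remote are placeholders, and the Change-Id value is the one quoted above:
```sh
# Sketch only: reuse the original Change-Id so Gerrit updates the existing
# review instead of opening a new one. Branch and remote names are placeholders.
git checkout histogram-changes
git commit --amend
# In the editor, set the Change-Id: trailer at the bottom of the commit message to:
#   Change-Id: I6db92dda23ed8cabef7f222aba0c9523e8f75bf7
git push origin HEAD:refs/for/master
```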
-Jim
From: SPDK <spdk-bounces(a)lists.01.org> on behalf of "Meneghini, John" <John.Meneghini(a)netapp.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Sunday, November 26, 2017 at 8:57 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Cc: "Chandrashekar, Girish" <Girish.Chandrashekar(a)netapp.com>
Subject: Re: [SPDK] histogram patches
As agreed during the SPDK Meetup early this month, Girish and I have rebased and updated the Histogram Utility changes and published a new pull request.
https://review.gerrithub.io/#/c/389053/
Thanks to Paul Luse for his help and encouragement in getting this done.
--
John Meneghini
johnm(a)netapp.com
From: "Chandrashekar, Girish" <Girish.Chandrashekar(a)netapp.com>
Date: Tuesday, November 14, 2017 at 1:00 PM
To: John Meneghini <John.Meneghini(a)netapp.com>, "Luse, Paul E" <paul.e.luse(a)intel.com>
Subject: Re: histogram patches
Hi John,
The changes to rdma.c will not have much effect, since they are under #ifdef HISTOGRAM.
spdk/lib/env_dpdk/pci_virtio.c can be ignored, as its only diff is an empty line.
Paul will have to wait, since I still need to address 14 minor comments from Daniel. I will finish this before the weekend, along with a unit test.
Can I use the remote_histogram_changes22 branch to address these comments?
Regards
Girish
From: Girish Chandrashekar <Girish.Chandrashekar(a)netapp.com>
Date: Monday, November 13, 2017 at 10:45 AM
To: "Meneghini, John" <John.Meneghini(a)netapp.com>, "Luse, Paul E" <paul.e.luse(a)intel.com>
Subject: Re: histogram patches
Thanks John, I will test once the patch is ready.
Regards
Girish
From: "Meneghini, John" <John.Meneghini(a)netapp.com>
Date: Monday, November 13, 2017 at 10:44 AM
To: "Luse, Paul E" <paul.e.luse(a)intel.com>, Girish Chandrashekar <Girish.Chandrashekar(a)netapp.com>
Cc: "Meneghini, John" <John.Meneghini(a)netapp.com>
Subject: Re: histogram patches
Sure thing Paul. Thanks for your help.
Girish, I’ll download these patches on Monday and pull them into the VED so you can test them out. In the meantime, you can see Paul’s updates to the pull request in GerritHub at:
https://review.gerrithub.io/#/c/363114/
If everything looks good, please feel free to give Paul the go ahead and these patches can be checked into SPDK.
I’ve already approved these changes.
/John
From: "Luse, Paul E" <paul.e.luse(a)intel.com>
Date: Saturday, November 11, 2017 at 1:55 PM
To: John Meneghini <John.Meneghini(a)netapp.com>
Subject: histogram patches
Hi John,
I rebased the patches last week, so they’re ready for Girish to take a look at, fix anything I may have missed, and address Daniel’s last set of comments. Two things came up during the rebase:
- In rdma.c there were two ‘finish markers’ that I couldn’t find a good home for in the refactored code. I think I got one of them right, but Girish, Ben, or someone else will have to pinpoint where the other marker should go; I have never looked at any of the NVMe-oF code before, so I was essentially guessing.
- The rebase picked up some conflicts in spdk/lib/env_dpdk/pci_virtio.c, which seemed odd since it wasn’t touched in any of the patches as far as I could tell. I just took the latest, so Girish might want to double-check that nothing is missing there.
Other than that it was pretty straightforward.
Thx
Paul
SPDK Trello board
by Shah, Prital B
All,
I just want to highlight that we have an SPDK Trello board at https://trello.com/spdk for roadmap discussion and current feature design discussions.
1) To add items to the SPDK backlog, please use this board: https://trello.com/b/P5xBO7UR/things-to-do
2) To discuss feature designs, use the individual feature boards.
We are planning a discussion on JSON-based configuration for SPDK and brainstorming on how we can make it easier for developers to configure SPDK.
You're welcome to join our meeting at +1 (916) 356-2663, bridge 5, Conference ID: 864687063.
The meeting is this Friday, 09/15/2017, at 02:30 PM MST (UTC-7).
Thanks
Prital
Re: [SPDK] Request for comments regarding latency measurements
by Harris, James R
Agreed - #2 is the way to go.
While we’re on the topic - should we be resetting the stats when spdk_bdev_get_io_stat() is called? This eliminates the ability for two separate applications to monitor stats in parallel – or even a separate application plus some future yet-to-be-written internal load-balancing monitor. I’m thinking that bdev should just keep the running total and let the caller own tracking differences from the last time it called spdk_bdev_get_io_stat().
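As a rough sketch of that model (the RPC name and JSON field names below are assumptions, not the current interface), each monitoring application would sample the running totals itself and compute its own deltas:
```sh
# Sketch: two monitors can each sample the running totals independently and
# compute their own deltas; nothing is reset on read.
# The RPC name and field names are assumptions for illustration.
./scripts/rpc.py get_bdevs_iostat > sample1.json
sleep 10
./scripts/rpc.py get_bdevs_iostat > sample2.json
# Reads completed in the interval, for the first bdev reported:
jq -s '.[1].bdevs[0].num_read_ops - .[0].bdevs[0].num_read_ops' sample1.json sample2.json
```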
-Jim
From: SPDK <spdk-bounces(a)lists.01.org> on behalf of Nathan Marushak <nathan.marushak(a)intel.com>
Reply-To: Storage Performance Development Kit <spdk(a)lists.01.org>
Date: Tuesday, November 14, 2017 at 7:29 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Request for comments regarding latency measurements
So, not being the technical expert so to speak ☺, please take this with a grain of salt. I agree with option #2. It seems like this would allow for keeping stats across the different device types, e.g. NVMe, NVML, NVMe-oF (although NVMe-oF is a transport within NVMe, so it might be covered by either option).
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Luse, Paul E
Sent: Tuesday, November 14, 2017 7:57 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Request for comments regarding latency measurements
FWIW option 2 sounds like the most appropriate to me…
Thx
Paul
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Paul Von-Stamwitz
Sent: Monday, November 13, 2017 7:29 PM
To: spdk(a)lists.01.org<mailto:spdk@lists.01.org>
Subject: [SPDK] Request for comments regarding latency measurements
Background:
bdev.c currently maintains a count of reads/writes/bytes_read/bytes_written in channel->stat. This information is retrieved (and reset) via spdk_bdev_get_io_stat().
Proposal:
Add latency information to channel->stat and enable the option to provide a histogram.
We can measure the latency of each IO and keep a running total on a read/write basis. We can also use the measured latency to keep a running count of reads/writes in their associated histogram “buckets”.
The question is, how do we measure latency?
Option 1:
Measure latency at the NVMe tracker.
Currently, we already timestamp every command placed on the tracker if the abort timeout callback is registered. When we remove a completed IO from the tracker, we can timestamp it again and calculate the latency.
There are several issues here that need to be considered.
We need to get the latency information back up to the bdev layer, most likely through a callback argument, but this would require a change to the NVMe API. If the bdev layer breaks a request down into smaller IOs, it can add up the latencies of the child IOs for io_stat and histogram purposes.
Also, this method does not take into account any latency added by the SPDK bdev/nvme layers (except the poller). If a request was queued before being placed on the tracker, the time it spent queued is not factored into the latency calculation.
Option 2:
Measure latency at the bdev layer.
We can timestamp at submit and again at completion. This would keep all io_stat information local to bdev and would take into account the overhead of most queued operations. Any applications written directly to the NVMe layer would have to calculate their own latencies, but that is currently true for all io_stats.
I’m sure that there are other issues I am missing, but I would appreciate any comments on how best to move forward on this.
Thanks,
Paul
Compilation error while compiling nvme-cli with SPDK
by Prashant Prabhu
Hello,
I started checking out nvme-cli for SPDK on my test system, and I keep running into the following error when I try to make nvme-cli.
I have already successfully run make in the spdk folder under the nvme-cli folder.
I would appreciate any help; I am not sure what I am doing wrong.
/root/.../nvme-cli/spdk/build/lib/libspdk_env_dpdk.a(pci.o): In function `spdk_pci_device_init':
/root/.../nvme-cli/spdk/lib/env_dpdk/pci.c:51: undefined reference to `rte_pci_unmap_device'
/root/.../nvme-cli/spdk/build/lib/libspdk_env_dpdk.a(pci.o): In function `spdk_pci_enumerate':
/root/.../nvme-cli/spdk/lib/env_dpdk/pci.c:173: undefined reference to `rte_pci_register'
/root/.../nvme-cli/spdk/build/lib/libspdk_env_dpdk.a(pci.o): In function `spdk_pci_get_device':
/root/.../nvme-cli/spdk/lib/env_dpdk/pci.c:215: undefined reference to `rte_pci_bus'
/root/.../nvme-cli/spdk/lib/env_dpdk/pci.c:219: undefined reference to `rte_eal_compare_pci_addr'
/root/.../nvme-cli/spdk/build/lib/libspdk_env_dpdk.a(pci.o): In function `spdk_pci_device_cfg_read':
/root/.../nvme-cli/spdk/lib/env_dpdk/pci.c:330: undefined reference to `rte_pci_read_config'
/root/.../nvme-cli/spdk/build/lib/libspdk_env_dpdk.a(pci.o): In function `spdk_pci_device_cfg_write':
/root/.../nvme-cli/spdk/lib/env_dpdk/pci.c:343: undefined reference to `rte_pci_write_config'
/root/.../nvme-cli/spdk/build/lib/libspdk_env_dpdk.a(pci.o): In function `spdk_pci_device_attach':
/root/.../nvme-cli/spdk/lib/env_dpdk/pci.c:130: undefined reference to `rte_pci_register'
/root/.../nvme-cli/spdk/build/lib/libspdk_env_dpdk.a(vtophys.o): In function `spdk_vtophys_iommu_init':
/root/.../nvme-cli/spdk/lib/env_dpdk/vtophys.c:276: undefined reference to `pci_vfio_is_enabled'
/root/.../nvme-cli/spdk/dpdk/build/lib/librte_eal.a(eal_memory.o): In function `rte_eal_hugepage_init':
eal_memory.c:(.text+0xa08): undefined reference to `numa_allocate_nodemask'
eal_memory.c:(.text+0xa12): undefined reference to `numa_available'
eal_memory.c:(.text+0xa56): undefined reference to `get_mempolicy'
eal_memory.c:(.text+0xc8e): undefined reference to `numa_bitmask_free'
eal_memory.c:(.text+0xd3e): undefined reference to `numa_allocate_nodemask'
eal_memory.c:(.text+0xd48): undefined reference to `numa_available'
eal_memory.c:(.text+0xf36): undefined reference to `numa_bitmask_free'
eal_memory.c:(.text+0x2055): undefined reference to `set_mempolicy'
eal_memory.c:(.text+0x2087): undefined reference to `numa_set_localalloc'
eal_memory.c:(.text+0x2091): undefined reference to `numa_bitmask_free'
eal_memory.c:(.text+0x2250): undefined reference to `numa_set_preferred'
collect2: error: ld returned 1 exit status
make: *** [nvme] Error 1
Thanks
Prashant
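One hedged avenue to investigate for the numa_* undefined references is whether the libnuma development package is installed and whether the final link line pulls in -lnuma alongside the DPDK archives; the package names below are distro-dependent assumptions, not a confirmed fix:
```sh
# Sketch only: check for libnuma and rebuild. Package names vary by distro.
sudo apt-get install libnuma-dev      # Debian/Ubuntu
# sudo yum install numactl-devel      # RHEL/CentOS
# The nvme-cli link line should end up with something along the lines of:
#   -Wl,--whole-archive <spdk>/dpdk/build/lib/librte_eal.a ... -Wl,--no-whole-archive -lnuma
make clean && make
```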
Why does SPDK need to unload all ioat devices?
by huangqingxin@ruijie.com.cn
Hi,
When I run setup.sh, I see the following output, and dmesg shows the ioatdma driver being unloaded from the devices:
```sh
[root@localhost scripts]# ./setup.sh
0000:00:04.0 (8086 2f20): ioatdma -> uio_pci_generic
0000:00:04.1 (8086 2f21): ioatdma -> uio_pci_generic
0000:00:04.2 (8086 2f22): ioatdma -> uio_pci_generic
0000:00:04.3 (8086 2f23): ioatdma -> uio_pci_generic
0000:00:04.4 (8086 2f24): ioatdma -> uio_pci_generic
0000:00:04.5 (8086 2f25): ioatdma -> uio_pci_generic
0000:00:04.6 (8086 2f26): ioatdma -> uio_pci_generic
0000:00:04.7 (8086 2f27): ioatdma -> uio_pci_generic
[root@localhost scripts]# dmesg | grep ioat
[ 59.031281] ioatdma: Intel(R) QuickData Technology Driver 4.00
[ 347.890436] ioatdma 0000:00:04.0: Removing dma and dca services
[ 347.907950] ioatdma 0000:00:04.1: Removing dma and dca services
[ 347.924960] ioatdma 0000:00:04.2: Removing dma and dca services
[ 347.941547] ioatdma 0000:00:04.3: Removing dma and dca services
[ 347.957906] ioatdma 0000:00:04.4: Removing dma and dca services
[ 347.974823] ioatdma 0000:00:04.5: Removing dma and dca services
[ 347.990314] ioatdma 0000:00:04.6: Removing dma and dca services
[ 348.006346] ioatdma 0000:00:04.7: Removing dma and dca services
```
I wonder why these unload operations are needed. What can I do if another application needs a DMA channel?
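One hedged option, if another application does need the kernel DMA engine, is to hand the channels back to ioatdma; whether this is the recommended workflow is an assumption, but the mechanics would look roughly like:
```sh
# Rebind a single I/OAT channel to the kernel ioatdma driver (PCI address
# taken from the output above).
echo 0000:00:04.0 > /sys/bus/pci/drivers/uio_pci_generic/unbind
echo 0000:00:04.0 > /sys/bus/pci/drivers/ioatdma/bind
# Or return everything setup.sh claimed back to the kernel drivers:
./setup.sh reset
```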
Large amount of read requests after connection
by Meng Wang
Hello,
Recently, I profiled nvmf with an AIO device and found that there is a large
number of read requests right after connecting to the target. Here are the commands:
```
In server:
./app/nvmf_tgt/nvmf_tgt -t nvmf -c run/aio.conf
In client:
nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnode1" -a 192.168.6.25 -s 4420
```
After the client connects, the server prints out NVMe traces. Here
is an excerpt:
```
......
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2
nsid 0 cdw10 0x00000001
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
0x1257d79000 key 0xa8044f4d len 0x1000
ctrlr.c: 870:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ctrlr data: maxcmd
0x400
ctrlr.c: 871:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: sgls data: 0x100005
ctrlr_bdev.c: 69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*:
Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support
io_type 3
ctrlr_bdev.c: 69:spdk_nvmf_subsystem_bdev_io_type_supported: *INFO*:
Subsystem nqn.2016-06.io.spdk:cnode1 namespace 1 (Nvme1n1) does not support
io_type 8
ctrlr.c: 908:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: ioccsz
0x104
ctrlr.c: 910:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: iorcsz
0x1
ctrlr.c: 912:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: icdoff
0x0
ctrlr.c: 914:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data:
ctrattr 0x0
ctrlr.c: 916:spdk_nvmf_ctrlr_identify_ctrlr: *INFO*: ext ctrlr data: msdbd
0x1
request.c: 60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x0c fuse 0 cid
31 nsid 0 cdw10 0x00000000
ctrlr.c: 749:spdk_nvmf_ctrlr_async_event_request: *INFO*: Async Event
Request
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2
nsid 0 cdw10 0x00000002
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
0x1257d7d000 key 0xa8044f4e len 0x1000
request.c: 60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2
nsid 1 cdw10 0x00000000
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
0x1257d7f000 key 0xa8044f4f len 0x1000
request.c: 60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: Admin cmd: opc 0x06 fuse 0 cid 2
nsid 1 cdw10 0x00000000
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
0x1257d7f000 key 0xa8044f4f len 0x1000
request.c: 60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=2
cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
nsid 1 cdw10 0x00000000
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
0x1258192000 key 0xa8056e4c len 0x1000
request.c: 60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
nsid 1 cdw10 0x00000008
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
0x1243239000 key 0xa8056e4d len 0x1000
request.c: 60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
nsid 1 cdw10 0x00000018
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
0x1257f71000 key 0xa8056e4e len 0x1000
request.c: 60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
cdw0=0x00000000 rsvd1=0 status=0x0000
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
nsid 1 cdw10 0x6fc81a00
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
0x84da42000 key 0xa804ee4c len 0x1000
request.c: 60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
cdw0=0x00000000 rsvd1=0 status=0x0000
......
<millions of lines>
......
```
In this trace, lines such as
```
request.c: 103:nvmf_trace_command: *INFO*: I/O cmd: opc 0x02 fuse 0 cid 1
nsid 1 cdw10 0x6fc81a00
request.c: 118:nvmf_trace_command: *INFO*: SGL: Keyed (Inv): addr
0x84da42000 key 0xa804ee4c len 0x1000
request.c: 60:spdk_nvmf_request_complete_on_qpair: *INFO*: cpl: cid=1
cdw0=0x00000000 rsvd1=0 status=0x0000
```
repeat millions of times. Since I/O opcode 0x02 is Read, this means millions
of read requests are executed after the client connects. As a comparison, we
profiled nvmf with an NVMe bdev on the same host and SSD, and saw merely ~50 read
requests after the client connected. So what are these read requests for? And
why are there so many reads with AIO devices?
Thanks,
--
Meng
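A hedged way to quantify this pattern from a saved copy of such a trace (the log file name is an assumption, and each trace entry is assumed to be a single line in the real log; they are wrapped above only by email formatting):
```sh
# Count the read commands (opcode 0x02) captured in the target log.
grep -c 'I/O cmd: opc 0x02' nvmf.log
# See which starting LBAs (cdw10 holds the low 32 bits) show up most often.
grep 'I/O cmd: opc 0x02' nvmf.log | grep -o 'cdw10 0x[0-9a-f]*' | sort | uniq -c | sort -rn | head
```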
Re: [SPDK] lvol function: hangs up with Nvme bdev.
by Terry_MF_Kao@wistron.com
Hello Tomek,
>Without support for unmap, whole device is written to - taking a long time. This operation has to be only once per lvol store creation. An optional flag could be added to RPC construct_lvol_store
>that allows user to skip unmapping whole device - with caveat that previously present data would still be there after lvol store creation. Would you find such option useful ?
True, I think so.
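As a sketch of the proposal only (the flag name is hypothetical, since the option does not exist yet), such an RPC call might look like:
```sh
# Hypothetical: skip the full-device unmap when building the lvol store.
# --no-unmap is an illustrative name for the proposed flag, not a real option.
./scripts/rpc.py construct_lvol_store Nvme0n1 lvs0 --no-unmap
```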
>Do you see other output than "NVMe DSM: success" ?
No, it looks like it works fine.
1.) RPC get_bdevs()
{
"num_blocks": 3125627568,
"supported_io_types": {
"reset": true,
"nvme_admin": true,
"unmap": true, *****
"read": true,
"write_zeroes": false,
"write": true,
"flush": true,
"nvme_io": true
},
"driver_specific": {
"nvme": {
"trid": {
"trtype": "PCIe",
"traddr": "0000:84:00.0"
},
"ns_data": {
"id": 1
},
"pci_address": "0000:84:00.0",
"vs": {
"nvme_version": "1.1"
},
"ctrlr_data": {
"firmware_revision": "KPYA6B3Q",
"serial_number": "S2EVNAAH600017",
"oacs": {
"ns_manage": 0,
"security": 0,
"firmware": 1,
"format": 1
},
"vendor_id": "0x144d",
"model_number": "SAMSUNG MZWLK1T6HCHP-00003"
},
"csts": {
"rdy": 1,
"cfs": 0
}
}
},
"claimed": true,
"block_size": 512,
"product_name": "NVMe disk",
"name": "Nvme0n1"
}
2.) nvme dsm
# nvme dsm /dev/nvme0n1 -d -s 0 -b 0
NVMe DSM: success
# nvme dsm /dev/nvme0n1 -d -s 0 -b 1
NVMe DSM: success
3.) nvme id-ctrl (following Andrey's suggestion)
# nvme id-ctrl /dev/nvme0 -H | grep Data
[2:2] : 0x1 Data Set Management Supported
>This might be a good idea, but would require some changes in blobstore or a bdev aggregating multiple others underneath. I've added topic for this to the SPDK Community meeting agenda.
I saw it on Trello and look forward to it. Thanks for promoting it!
Best Regards,
Terry