Re: [SPDK] SPDK aio examples
by Bhadauria, Varun
Hello Ben
Thank you for the clarification. I was under the false impression that Linux AIO could be made to use SPDK under the hood, which is clearly not the case, since those requests have to go through the filesystem. BTW, are there any known early filesystem implementations, besides Ceph's RocksDB-based BlueStore, that use SPDK?
Regards,
Varun Bhadauria
On 6/15/16, 4:37 PM, "SPDK on behalf of Walker, Benjamin" <spdk-bounces(a)lists.01.org on behalf of benjamin.walker(a)intel.com> wrote:
>Can you explain a bit more about why you want to use AIO? Are you referring to Linux AIO or POSIX AIO? If you want to do a performance comparison of Linux AIO and the SPDK NVMe driver then the perf tool is your best bet.
>
>You can run the perf tool against a block device using Linux AIO by binding your NVMe device to the kernel ("./scripts/setup.sh reset" will hand them all back to the kernel) and then doing something like:
>
>./perf -q 1 -s 4096 -w read -t 10 /dev/nvme0n1 /dev/nvme1n1
>
>-----Original Message-----
>From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Bhadauria, Varun
>Sent: Wednesday, June 15, 2016 4:30 PM
>To: Storage Performance Development Kit <spdk(a)lists.01.org>
>Subject: [SPDK] SPDK aio examples
>
>Hello
>
>Are there any SPDK examples which use AIO? perf.c has very little documentation on AIO in its usage text.
>
>Regards,
>Varun Bhadauria
>
>
Re: [SPDK] SPDK fio benchmark support
by Walker, Benjamin
The fio plugin for SPDK is new and has only been tested with a single job. We'd welcome patches to make it span more cores when given more jobs, if someone is interested in doing the work. At some point I'm sure we'll get to it ourselves, but it's lower priority than the other work we're doing right now (the NVMf target). The perf example we provide definitely does scale to many cores, though, and it can optionally use libaio as its backend, so it is a great alternative to fio that works today with no extra effort.
As for expected performance scaling across cores with our perf tool: are you seeing the maximum quoted IOPS for all of the devices on the system with only a single core? If you are, adding more cores won't improve performance. If you are not getting the performance you expect from the hardware, I'd love to know the details of the devices and the platform.
To provide a quick example from my own system (and none of the following should be treated as official benchmarking numbers for any reason), I have a Haswell Xeon E7 with 8 Intel P3700 NVMe SSDs attached. Running the perf tool on my system yields about 3 million 4K read IOPS on one core, which is the equivalent of 6 or 7 of my SSDs. Moving to two cores gives me the full hardware-spec IOPS for all 8 SSDs. If I only test against 4 SSDs, I get the same performance (the maximum the SSDs can provide) no matter how many cores I give the tool.
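For reference, an invocation of that shape looks roughly like the following (a sketch, not the exact command I ran; the -q/-s/-w/-t values are illustrative, and -c is the core mask telling perf which cores to run on):

./perf -q 128 -s 4096 -w randread -t 60 -c 0x3

With no device paths on the command line, perf drives the NVMe devices bound to SPDK; passing /dev/nvmeXnY paths instead selects the libaio backend, as in the command earlier in this thread.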
Thanks,
Ben
-----Original Message-----
From: Raj (Rajinikanth) Pandurangan [mailto:rajini.pandu@samsung.com]
Sent: Wednesday, June 15, 2016 11:52 AM
To: Walker, Benjamin <benjamin.walker(a)intel.com>; sbradshaw(a)micron.com; spdk(a)lists.01.org; Robert.Cleveland(a)skhms.com
Cc: Raj (Rajinikanth) Pandurangan <rajini.pandu(a)samsung.com>
Subject: RE: [SPDK] SPDK fio benchmark support
Hello All,
1. I tried to run SPDK fio with multiple NVMe drives, but for some reason I don't see more than one core being used, even when I increase numjobs.
Here is my config file.
=============================================================
[global]
ioengine=/spdk-master_temp_experiment/examples/nvme/fio_plugin/fio_plugin
group_reporting=1
direct=1
verify=0
time_based=1
ramp_time=0
runtime=10
[test1]
iodepth=128
rw=randread
bs=4k
#filename=0000.09.00.0/1:0000.07.00.0/1:0000.0a.00.0/1
filename=0000.09.00.0/1
numjobs=2
[test2]
iodepth=128
rw=randread
bs=4k
numjobs=1
filename=0000.08.00.0/1
================================================================
Has anyone succeeded using multiple cores?
2. I have also tried 'perf', and I noticed that even though multiple cores were being used, the performance numbers didn't scale; I saw roughly the same IOPS as with a single core.
Any insights would really be appreciated.
Thanks,
-Raj P
-----Original Message-----
From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Walker, Benjamin
Sent: Friday, February 26, 2016 2:30 PM
To: sbradshaw(a)micron.com; spdk(a)ml01.01.org; Robert.Cleveland(a)skhms.com
Subject: Re: [SPDK] SPDK fio benchmark support
As a follow-up to this, I pushed some improvements to SPDK's setup scripts so that we now more easily support vfio. That makes it a lot easier to switch back and forth between the unvme driver and the SPDK driver. We'd happily accept more patches to improve vfio support if anyone has suggestions.
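For anyone who hasn't used the scripts, the flow is roughly as follows (a sketch; which kernel module gets used depends on whether vfio-pci is available on your system):

sudo ./scripts/setup.sh          # unbind NVMe devices from the kernel and bind them for SPDK use
sudo ./scripts/setup.sh reset    # hand the devices back to the kernel nvme driver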
Thanks all.
On Thu, 2016-02-18 at 18:59 +0000, Sam Bradshaw (sbradshaw) wrote:
Hi Robert,
We built a fio ioengine plugin for benchmarking SPDK as well as our own userspace NVMe driver. Source and documentation are available here: https://github.com/MicronSSD/unvme
(SPDK fio plugin is ioengine/spdk_fio.c)
If you have any questions on how to use it or interpret benchmark results, feel free to ask.
-Sam
From: SPDK [mailto:spdk-bounces@ml01.01.org] On Behalf Of Robert Cleveland
Sent: Thursday, February 18, 2016 10:24 AM
To: spdk(a)lists.01.org
Subject: [SPDK] SPDK fio benchmark support
Hello all,
Has anyone done any work to make it easy to benchmark the polling-mode driver with something like fio?
Thanks,
Robert Cleveland
SPDK aio examples
by Bhadauria, Varun
Hello
Are there any SPDK examples which use AIO? perf.c has very little documentation on AIO in its usage text.
Regards,
Varun Bhadauria
SPDK fio benchmark support
by Robert Cleveland
Hello all,
Has anyone done any work to make it easy to benchmark the polling-mode driver with something like fio?
Thanks,
Robert Cleveland
[PATCH] nvme: Return negative errno for failure
by Minfei Huang
The conventional rule is to return errno values as negatives, so callers
do not need extra code to adapt to this NVMe library.
Signed-off-by: Minfei Huang <mnghuan(a)gmail.com>
---
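To illustrate the caller-side convention this enables (an illustrative sketch, not part of the patch; buf, lba, lba_count and the io_complete callback are assumed to be set up elsewhere, and <errno.h>, <stdio.h>, <string.h> and spdk/nvme.h are included):

	rc = spdk_nvme_ns_cmd_read(ns, qpair, buf, lba, lba_count,
				   io_complete, NULL, 0);
	if (rc < 0) {
		/* negative errno, usable directly with strerror() */
		fprintf(stderr, "read submit failed: %s\n", strerror(-rc));
	}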
lib/nvme/nvme_ctrlr.c | 24 ++++++++++++------------
lib/nvme/nvme_ctrlr_cmd.c | 38 +++++++++++++++++++-------------------
lib/nvme/nvme_ns.c | 2 +-
lib/nvme/nvme_ns_cmd.c | 36 ++++++++++++++++++------------------
lib/nvme/nvme_qpair.c | 4 ++--
5 files changed, 52 insertions(+), 52 deletions(-)
diff --git a/lib/nvme/nvme_ctrlr.c b/lib/nvme/nvme_ctrlr.c
index 45e3238..a520ba5 100644
--- a/lib/nvme/nvme_ctrlr.c
+++ b/lib/nvme/nvme_ctrlr.c
@@ -239,7 +239,7 @@ static int nvme_ctrlr_set_intel_support_log_pages(struct spdk_nvme_ctrlr *ctrlr)
64, &phys_addr);
if (log_page_directory == NULL) {
nvme_printf(NULL, "could not allocate log_page_directory\n");
- return ENXIO;
+ return -ENXIO;
}
status.done = false;
@@ -253,7 +253,7 @@ static int nvme_ctrlr_set_intel_support_log_pages(struct spdk_nvme_ctrlr *ctrlr)
if (spdk_nvme_cpl_is_error(&status.cpl)) {
nvme_free(log_page_directory);
nvme_printf(ctrlr, "nvme_ctrlr_cmd_get_log_page failed!\n");
- return ENXIO;
+ return -ENXIO;
}
nvme_ctrlr_construct_intel_support_log_page_list(ctrlr, log_page_directory);
@@ -441,7 +441,7 @@ nvme_ctrlr_enable(struct spdk_nvme_ctrlr *ctrlr)
if (cc.bits.en != 0) {
nvme_printf(ctrlr, "%s called with CC.EN = 1\n", __func__);
- return EINVAL;
+ return -EINVAL;
}
nvme_mmio_write_8(ctrlr, asq, ctrlr->adminq.cmd_bus_addr);
@@ -556,7 +556,7 @@ nvme_ctrlr_identify(struct spdk_nvme_ctrlr *ctrlr)
}
if (spdk_nvme_cpl_is_error(&status.cpl)) {
nvme_printf(ctrlr, "nvme_identify_controller failed!\n");
- return ENXIO;
+ return -ENXIO;
}
/*
@@ -600,7 +600,7 @@ nvme_ctrlr_set_num_qpairs(struct spdk_nvme_ctrlr *ctrlr)
}
if (spdk_nvme_cpl_is_error(&status.cpl)) {
nvme_printf(ctrlr, "nvme_set_num_queues failed!\n");
- return ENXIO;
+ return -ENXIO;
}
/*
@@ -758,7 +758,7 @@ nvme_ctrlr_configure_aer(struct spdk_nvme_ctrlr *ctrlr)
}
if (spdk_nvme_cpl_is_error(&status.cpl)) {
nvme_printf(ctrlr, "nvme_ctrlr_cmd_set_async_event_config failed!\n");
- return ENXIO;
+ return -ENXIO;
}
/* aerl is a zero-based value, so we need to add 1 here. */
@@ -1200,7 +1200,7 @@ spdk_nvme_ctrlr_attach_ns(struct spdk_nvme_ctrlr *ctrlr, uint32_t nsid,
}
if (spdk_nvme_cpl_is_error(&status.cpl)) {
nvme_printf(ctrlr, "spdk_nvme_ctrlr_attach_ns failed!\n");
- return ENXIO;
+ return -ENXIO;
}
return spdk_nvme_ctrlr_reset(ctrlr);
@@ -1225,7 +1225,7 @@ spdk_nvme_ctrlr_detach_ns(struct spdk_nvme_ctrlr *ctrlr, uint32_t nsid,
}
if (spdk_nvme_cpl_is_error(&status.cpl)) {
nvme_printf(ctrlr, "spdk_nvme_ctrlr_detach_ns failed!\n");
- return ENXIO;
+ return -ENXIO;
}
return spdk_nvme_ctrlr_reset(ctrlr);
@@ -1277,7 +1277,7 @@ spdk_nvme_ctrlr_delete_ns(struct spdk_nvme_ctrlr *ctrlr, uint32_t nsid)
}
if (spdk_nvme_cpl_is_error(&status.cpl)) {
nvme_printf(ctrlr, "spdk_nvme_ctrlr_delete_ns failed!\n");
- return ENXIO;
+ return -ENXIO;
}
return spdk_nvme_ctrlr_reset(ctrlr);
@@ -1302,7 +1302,7 @@ spdk_nvme_ctrlr_format(struct spdk_nvme_ctrlr *ctrlr, uint32_t nsid,
}
if (spdk_nvme_cpl_is_error(&status.cpl)) {
nvme_printf(ctrlr, "spdk_nvme_ctrlr_format failed!\n");
- return ENXIO;
+ return -ENXIO;
}
return spdk_nvme_ctrlr_reset(ctrlr);
@@ -1347,7 +1347,7 @@ spdk_nvme_ctrlr_update_firmware(struct spdk_nvme_ctrlr *ctrlr, void *payload, ui
}
if (spdk_nvme_cpl_is_error(&status.cpl)) {
nvme_printf(ctrlr, "spdk_nvme_ctrlr_fw_image_download failed!\n");
- return ENXIO;
+ return -ENXIO;
}
p += transfer;
offset += transfer;
@@ -1373,7 +1373,7 @@ spdk_nvme_ctrlr_update_firmware(struct spdk_nvme_ctrlr *ctrlr, void *payload, ui
}
if (spdk_nvme_cpl_is_error(&status.cpl)) {
nvme_printf(ctrlr, "nvme_ctrlr_cmd_fw_commit failed!\n");
- return ENXIO;
+ return -ENXIO;
}
return spdk_nvme_ctrlr_reset(ctrlr);
diff --git a/lib/nvme/nvme_ctrlr_cmd.c b/lib/nvme/nvme_ctrlr_cmd.c
index d074109..74f1d33 100644
--- a/lib/nvme/nvme_ctrlr_cmd.c
+++ b/lib/nvme/nvme_ctrlr_cmd.c
@@ -45,7 +45,7 @@ spdk_nvme_ctrlr_cmd_io_raw(struct spdk_nvme_ctrlr *ctrlr,
req = nvme_allocate_request_contig(buf, len, cb_fn, cb_arg);
if (req == NULL) {
- return ENOMEM;
+ return -ENOMEM;
}
memcpy(&req->cmd, cmd, sizeof(req->cmd));
@@ -66,7 +66,7 @@ spdk_nvme_ctrlr_cmd_admin_raw(struct spdk_nvme_ctrlr *ctrlr,
req = nvme_allocate_request_contig(buf, len, cb_fn, cb_arg);
if (req == NULL) {
nvme_mutex_unlock(&ctrlr->ctrlr_lock);
- return ENOMEM;
+ return -ENOMEM;
}
memcpy(&req->cmd, cmd, sizeof(req->cmd));
@@ -88,7 +88,7 @@ nvme_ctrlr_cmd_identify_controller(struct spdk_nvme_ctrlr *ctrlr, void *payload,
sizeof(struct spdk_nvme_ctrlr_data),
cb_fn, cb_arg);
if (req == NULL) {
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -114,7 +114,7 @@ nvme_ctrlr_cmd_identify_namespace(struct spdk_nvme_ctrlr *ctrlr, uint16_t nsid,
sizeof(struct spdk_nvme_ns_data),
cb_fn, cb_arg);
if (req == NULL) {
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -138,7 +138,7 @@ nvme_ctrlr_cmd_create_io_cq(struct spdk_nvme_ctrlr *ctrlr,
req = nvme_allocate_request_null(cb_fn, cb_arg);
if (req == NULL) {
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -168,7 +168,7 @@ nvme_ctrlr_cmd_create_io_sq(struct spdk_nvme_ctrlr *ctrlr,
req = nvme_allocate_request_null(cb_fn, cb_arg);
if (req == NULL) {
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -195,7 +195,7 @@ nvme_ctrlr_cmd_delete_io_cq(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpai
req = nvme_allocate_request_null(cb_fn, cb_arg);
if (req == NULL) {
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -214,7 +214,7 @@ nvme_ctrlr_cmd_delete_io_sq(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpai
req = nvme_allocate_request_null(cb_fn, cb_arg);
if (req == NULL) {
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -237,7 +237,7 @@ nvme_ctrlr_cmd_attach_ns(struct spdk_nvme_ctrlr *ctrlr, uint32_t nsid,
cb_fn, cb_arg);
if (req == NULL) {
nvme_mutex_unlock(&ctrlr->ctrlr_lock);
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -264,7 +264,7 @@ nvme_ctrlr_cmd_detach_ns(struct spdk_nvme_ctrlr *ctrlr, uint32_t nsid,
cb_fn, cb_arg);
if (req == NULL) {
nvme_mutex_unlock(&ctrlr->ctrlr_lock);
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -291,7 +291,7 @@ nvme_ctrlr_cmd_create_ns(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns_data
cb_fn, cb_arg);
if (req == NULL) {
nvme_mutex_unlock(&ctrlr->ctrlr_lock);
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -316,7 +316,7 @@ nvme_ctrlr_cmd_delete_ns(struct spdk_nvme_ctrlr *ctrlr, uint32_t nsid, spdk_nvme
req = nvme_allocate_request_null(cb_fn, cb_arg);
if (req == NULL) {
nvme_mutex_unlock(&ctrlr->ctrlr_lock);
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -341,7 +341,7 @@ nvme_ctrlr_cmd_format(struct spdk_nvme_ctrlr *ctrlr, uint32_t nsid, struct spdk_
req = nvme_allocate_request_null(cb_fn, cb_arg);
if (req == NULL) {
nvme_mutex_unlock(&ctrlr->ctrlr_lock);
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -368,7 +368,7 @@ spdk_nvme_ctrlr_cmd_set_feature(struct spdk_nvme_ctrlr *ctrlr, uint8_t feature,
req = nvme_allocate_request_null(cb_fn, cb_arg);
if (req == NULL) {
nvme_mutex_unlock(&ctrlr->ctrlr_lock);
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -396,7 +396,7 @@ spdk_nvme_ctrlr_cmd_get_feature(struct spdk_nvme_ctrlr *ctrlr, uint8_t feature,
req = nvme_allocate_request_null(cb_fn, cb_arg);
if (req == NULL) {
nvme_mutex_unlock(&ctrlr->ctrlr_lock);
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -447,7 +447,7 @@ spdk_nvme_ctrlr_cmd_get_log_page(struct spdk_nvme_ctrlr *ctrlr, uint8_t log_page
req = nvme_allocate_request_contig(payload, payload_size, cb_fn, cb_arg);
if (req == NULL) {
nvme_mutex_unlock(&ctrlr->ctrlr_lock);
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -471,7 +471,7 @@ nvme_ctrlr_cmd_abort(struct spdk_nvme_ctrlr *ctrlr, uint16_t cid,
req = nvme_allocate_request_null(cb_fn, cb_arg);
if (req == NULL) {
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -494,7 +494,7 @@ nvme_ctrlr_cmd_fw_commit(struct spdk_nvme_ctrlr *ctrlr,
req = nvme_allocate_request_null(cb_fn, cb_arg);
if (req == NULL) {
nvme_mutex_unlock(&ctrlr->ctrlr_lock);
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -522,7 +522,7 @@ nvme_ctrlr_cmd_fw_image_download(struct spdk_nvme_ctrlr *ctrlr,
cb_fn, cb_arg);
if (req == NULL) {
nvme_mutex_unlock(&ctrlr->ctrlr_lock);
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
diff --git a/lib/nvme/nvme_ns.c b/lib/nvme/nvme_ns.c
index 0ed715e..167d176 100644
--- a/lib/nvme/nvme_ns.c
+++ b/lib/nvme/nvme_ns.c
@@ -61,7 +61,7 @@ int nvme_ns_identify_update(struct spdk_nvme_ns *ns)
}
if (spdk_nvme_cpl_is_error(&status.cpl)) {
nvme_printf(ctrlr, "nvme_identify_namespace failed\n");
- return ENXIO;
+ return -ENXIO;
}
ns->sector_size = 1 << nsdata->lbaf[nsdata->flbas.format].lbads;
diff --git a/lib/nvme/nvme_ns_cmd.c b/lib/nvme/nvme_ns_cmd.c
index cf7f215..5f33eb5 100644
--- a/lib/nvme/nvme_ns_cmd.c
+++ b/lib/nvme/nvme_ns_cmd.c
@@ -237,7 +237,7 @@ spdk_nvme_ns_cmd_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, vo
if (req != NULL) {
return nvme_qpair_submit_request(qpair, req);
} else {
- return ENOMEM;
+ return -ENOMEM;
}
}
@@ -260,7 +260,7 @@ spdk_nvme_ns_cmd_read_with_md(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *q
if (req != NULL) {
return nvme_qpair_submit_request(qpair, req);
} else {
- return ENOMEM;
+ return -ENOMEM;
}
}
@@ -275,7 +275,7 @@ spdk_nvme_ns_cmd_readv(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
struct nvme_payload payload;
if (reset_sgl_fn == NULL || next_sge_fn == NULL)
- return EINVAL;
+ return -EINVAL;
payload.type = NVME_PAYLOAD_TYPE_SGL;
payload.md = NULL;
@@ -288,7 +288,7 @@ spdk_nvme_ns_cmd_readv(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
if (req != NULL) {
return nvme_qpair_submit_request(qpair, req);
} else {
- return ENOMEM;
+ return -ENOMEM;
}
}
@@ -310,7 +310,7 @@ spdk_nvme_ns_cmd_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
if (req != NULL) {
return nvme_qpair_submit_request(qpair, req);
} else {
- return ENOMEM;
+ return -ENOMEM;
}
}
@@ -332,7 +332,7 @@ spdk_nvme_ns_cmd_write_with_md(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *
if (req != NULL) {
return nvme_qpair_submit_request(qpair, req);
} else {
- return ENOMEM;
+ return -ENOMEM;
}
}
@@ -347,7 +347,7 @@ spdk_nvme_ns_cmd_writev(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
struct nvme_payload payload;
if (reset_sgl_fn == NULL || next_sge_fn == NULL)
- return EINVAL;
+ return -EINVAL;
payload.type = NVME_PAYLOAD_TYPE_SGL;
payload.md = NULL;
@@ -360,7 +360,7 @@ spdk_nvme_ns_cmd_writev(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
if (req != NULL) {
return nvme_qpair_submit_request(qpair, req);
} else {
- return ENOMEM;
+ return -ENOMEM;
}
}
@@ -375,12 +375,12 @@ spdk_nvme_ns_cmd_write_zeroes(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *q
uint64_t *tmp_lba;
if (lba_count == 0) {
- return EINVAL;
+ return -EINVAL;
}
req = nvme_allocate_request_null(cb_fn, cb_arg);
if (req == NULL) {
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -403,14 +403,14 @@ spdk_nvme_ns_cmd_deallocate(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpa
struct spdk_nvme_cmd *cmd;
if (num_ranges == 0 || num_ranges > SPDK_NVME_DATASET_MANAGEMENT_MAX_RANGES) {
- return EINVAL;
+ return -EINVAL;
}
req = nvme_allocate_request_contig(payload,
num_ranges * sizeof(struct spdk_nvme_dsm_range),
cb_fn, cb_arg);
if (req == NULL) {
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -433,7 +433,7 @@ spdk_nvme_ns_cmd_flush(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
req = nvme_allocate_request_null(cb_fn, cb_arg);
if (req == NULL) {
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -459,7 +459,7 @@ spdk_nvme_ns_cmd_reservation_register(struct spdk_nvme_ns *ns,
sizeof(struct spdk_nvme_reservation_register_data),
cb_fn, cb_arg);
if (req == NULL) {
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -491,7 +491,7 @@ spdk_nvme_ns_cmd_reservation_release(struct spdk_nvme_ns *ns,
req = nvme_allocate_request_contig(payload, sizeof(struct spdk_nvme_reservation_key_data), cb_fn,
cb_arg);
if (req == NULL) {
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -524,7 +524,7 @@ spdk_nvme_ns_cmd_reservation_acquire(struct spdk_nvme_ns *ns,
sizeof(struct spdk_nvme_reservation_acquire_data),
cb_fn, cb_arg);
if (req == NULL) {
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
@@ -552,12 +552,12 @@ spdk_nvme_ns_cmd_reservation_report(struct spdk_nvme_ns *ns,
struct spdk_nvme_cmd *cmd;
if (len % 4)
- return EINVAL;
+ return -EINVAL;
num_dwords = len / 4;
req = nvme_allocate_request_contig(payload, len, cb_fn, cb_arg);
if (req == NULL) {
- return ENOMEM;
+ return -ENOMEM;
}
cmd = &req->cmd;
diff --git a/lib/nvme/nvme_qpair.c b/lib/nvme/nvme_qpair.c
index 8b3f1a4..b9c6bd3 100644
--- a/lib/nvme/nvme_qpair.c
+++ b/lib/nvme/nvme_qpair.c
@@ -852,7 +852,7 @@ nvme_qpair_submit_request(struct spdk_nvme_qpair *qpair, struct nvme_request *re
if (ctrlr->is_failed) {
nvme_free_request(req);
- return ENXIO;
+ return -ENXIO;
}
nvme_qpair_check_enabled(qpair);
@@ -915,7 +915,7 @@ nvme_qpair_submit_request(struct spdk_nvme_qpair *qpair, struct nvme_request *re
} else {
nvme_assert(0, ("invalid NVMe payload type %d\n", req->payload.type));
_nvme_fail_request_bad_vtophys(qpair, tr);
- return EINVAL;
+ return -EINVAL;
}
nvme_qpair_submit_tracker(qpair, tr);
--
2.7.4 (Apple Git-66)