Topic from last week's community meeting
by Luse, Paul E
Hi Shuhei,
I was out of town last week and missed the meeting but saw on Trello you had the topic below:
"a few idea: log structured data store , data store with compression, and metadata replication of Blobstore"
I'd be pretty interested in working on that with you, or at least hearing more about it. When you get a chance (no hurry), could you expand a little on how the conversation went and what you're looking at specifically?
Thanks!
Paul
1 year, 3 months
Add py-spdk client for SPDK
by We We
Hi all,
I have submitted the py-spdk code at https://review.gerrithub.io/#/c/379741/; please take some time to review it, I would be very grateful.
py-spdk is a client that helps an upper-level app communicate with SPDK-based apps (such as nvmf_tgt, vhost, iscsi_tgt, etc.). Should I submit it to a separate repo that I create, rather than the SPDK repo? I ask because I think it is a relatively independent kit built on top of SPDK.
If you have any thoughts about py-spdk, please share them with me.
Regards,
Helloway
1 year, 3 months
Re: [SPDK] anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
by Isaac Otsiabah
Hi Tomasz, I got the SPDK patch. My network topology is simple, but making the network IP address accessible to the iscsi_tgt application and to vpp is not working. From my understanding, vpp is started first on the target host, and then the iscsi_tgt application is started after the network setup is done (please correct me if this is not the case).
    -------        192.168.2.10
    |     |        initiator
    -------
       |
       |
       |
  --------------------------------------------  192.168.2.0
       |
       |
       |           192.168.2.20
    --------------   vpp, vppctl
    |            |   iscsi_tgt
    --------------
Both systems have a 10G NIC.
(On target Server):
I set up the vpp environment variables through the sysctl command.
I unbound the kernel driver and loaded the DPDK uio_pci_generic driver for the first 10G NIC (device address = 0000:82:00.0).
That worked, so I started the vpp application; from the startup output, the NIC is in use by vpp:
[root@spdk2 ~]# vpp -c /etc/vpp/startup.conf
vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins
load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)
load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)
load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)
load_one_plugin:114: Plugin disabled (default): ixge_plugin.so
load_one_plugin:184: Loaded plugin: kubeproxy_plugin.so (kube-proxy data plane)
load_one_plugin:184: Loaded plugin: l2e_plugin.so (L2 Emulation)
load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)
load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment on IPv4 Infrastructure (RFC5969))
load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface (experimetal))
load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)
load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)
load_one_plugin:184: Loaded plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/memif_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/nat_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
vpp[4168]: load_one_plugin:63: Loaded plugin: /usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so
vpp[4168]: dpdk_config:1240: EAL init args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -w 0000:82:00.0 --master-lcore 0 --socket-mem 64,64
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
DPDK physical memory layout:
Segment 0: IOVA:0x2200000, len:2097152, virt:0x7f919c800000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: IOVA:0x3e000000, len:16777216, virt:0x7f919b600000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: IOVA:0x3fc00000, len:2097152, virt:0x7f919b200000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: IOVA:0x54c00000, len:46137344, virt:0x7f917ae00000, socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: IOVA:0x1f2e400000, len:67108864, virt:0x7f8f9c200000, socket_id:1, hugepage_sz:2097152, nchannel:0, nran
STEP1:
Then, from the vppctl command prompt, I set the IP address for the 10G interface and brought it up. From vpp, I can ping the initiator machine and vice versa, as shown below.
vpp# show int
Name Idx State Counter Count
TenGigabitEthernet82/0/0 1 down
local0 0 down
vpp# set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vpp# set interface state TenGigabitEthernet82/0/0 up
vpp# show int
Name Idx State Counter Count
TenGigabitEthernet82/0/0 1 up
local0 0 down
vpp# show int address
TenGigabitEthernet82/0/0 (up):
192.168.2.20/24
local0 (dn):
/* ping initiator from vpp */
vpp# ping 192.168.2.10
64 bytes from 192.168.2.10: icmp_seq=1 ttl=64 time=.0779 ms
64 bytes from 192.168.2.10: icmp_seq=2 ttl=64 time=.0396 ms
64 bytes from 192.168.2.10: icmp_seq=3 ttl=64 time=.0316 ms
64 bytes from 192.168.2.10: icmp_seq=4 ttl=64 time=.0368 ms
64 bytes from 192.168.2.10: icmp_seq=5 ttl=64 time=.0327 ms
(On Initiator):
/* ping vpp interface from initiator*/
[root@spdk1 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.038 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.031 ms
STEP2:
However, when I start the iscsi_tgt server, it does not have access to the 192.168.2.x subnet above, so I ran these commands on the target server to create a veth pair and then connect it to a vpp host-interface, as follows:
ip link add name vpp1out type veth peer name vpp1host
ip link set dev vpp1out up
ip link set dev vpp1host up
ip addr add 192.168.2.201/24 dev vpp1host
vpp# create host-interface name vpp1out
vpp# set int state host-vpp1out up
vpp# set int ip address host-vpp1out 192.168.2.202
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
192.168.2.20/24
host-vpp1out (up):
192.168.2.202/24
local0 (dn):
vpp# trace add af-packet-input 10
/* From host, ping vpp */
[root@spdk2 ~]# ping -c 2 192.168.2.202
PING 192.168.2.202 (192.168.2.202) 56(84) bytes of data.
64 bytes from 192.168.2.202: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from 192.168.2.202: icmp_seq=2 ttl=64 time=0.067 ms
/* From vpp, ping host */
vpp# ping 192.168.2.201
64 bytes from 192.168.2.201: icmp_seq=1 ttl=64 time=.1931 ms
64 bytes from 192.168.2.201: icmp_seq=2 ttl=64 time=.1581 ms
64 bytes from 192.168.2.201: icmp_seq=3 ttl=64 time=.1235 ms
64 bytes from 192.168.2.201: icmp_seq=4 ttl=64 time=.1032 ms
64 bytes from 192.168.2.201: icmp_seq=5 ttl=64 time=.0688 ms
Statistics: 5 sent, 5 received, 0% packet loss
From the target host, I still cannot ping the initiator (192.168.2.10); the traffic does not go through the vpp interface, so my vpp host-interface connection is not correct.
Please, how does one create the vpp host-interface and connect it so that host applications (i.e. iscsi_tgt) can communicate on the 192.168.2 subnet? In STEP2, should I use a different subnet like 192.168.3.x, turn on IP forwarding, and add a route to the routing table?
Isaac
From: Zawadzki, Tomasz [mailto:tomasz.zawadzki@intel.com]
Sent: Thursday, April 12, 2018 12:27 AM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com>
Cc: Harris, James R <james.r.harris(a)intel.com>; Verkamp, Daniel <daniel.verkamp(a)intel.com>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
Hello Isaac,
Are you using the following patch? (I suggest cherry-picking it.)
https://review.gerrithub.io/#/c/389566/
The SPDK iSCSI target can be started without a specific interface to bind to, by not specifying any target nodes or portal groups. They can be added later via RPC: http://www.spdk.io/doc/iscsi.html#iscsi_rpc.
Please see https://github.com/spdk/spdk/blob/master/test/iscsi_tgt/lvol/iscsi.conf for an example of a minimal iSCSI config.
The suggested flow for starting up the applications is:
1. Unbind interfaces from kernel
2. Start VPP and configure the interface via vppctl
3. Start SPDK
4. Configure the iSCSI target via RPC, at this time it should be possible to use the interface configured in VPP
Please note there is some leeway here; the only hard requirement is that the VPP app is started before the SPDK app.
Interfaces in VPP can be created (like tap or veth) and configured at runtime, and are then available for use in SPDK as well.
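For illustration, the whole flow on the target host could look roughly like this; the interface name, PCI address and IP address are taken from your mail, and the RPC parameters at the end are only placeholder examples:
# 1. unbind the NIC from the kernel driver
./usertools/dpdk-devbind.py -u 0000:82:00.0
# 2. start VPP and configure the interface via vppctl
vpp -c /etc/vpp/startup.conf
vppctl set interface ip address TenGigabitEthernet82/0/0 192.168.2.20/24
vppctl set interface state TenGigabitEthernet82/0/0 up
# 3. start the SPDK iSCSI target with a minimal config (no portal groups or target nodes)
./app/iscsi_tgt/iscsi_tgt -c iscsi.conf &
# 4. configure the iSCSI target via RPC, using the address configured in VPP
./scripts/rpc.py add_portal_group 1 192.168.2.20:3260
./scripts/rpc.py add_initiator_group 2 ANY 192.168.2.0/24
./scripts/rpc.py construct_malloc_bdev -b MyBdev 64 512
./scripts/rpc.py construct_target_node Target1 Target1_alias MyBdev:0 1:2 64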
Let me know if you have any questions.
Tomek
From: Isaac Otsiabah [mailto:IOtsiabah@us.fujitsu.com]
Sent: Wednesday, April 11, 2018 8:47 PM
To: Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
Cc: Harris, James R <james.r.harris(a)intel.com>; Verkamp, Daniel <daniel.verkamp(a)intel.com>; Paul Von-Stamwitz <PVonStamwitz(a)us.fujitsu.com>
Subject: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
Hi Tomasz, Daniel and Jim, I am trying to test VPP, so I built VPP on CentOS 7.4 (x86_64), built SPDK, and tried to run the ./app/iscsi_tgt/iscsi_tgt application.
For VPP, I first unbind the NIC from the kernel and then start the VPP application:
./usertools/dpdk-devbind.py -u 0000:07:00.0
vpp unix {cli-listen /run/vpp/cli.sock}
Unbinding the NIC takes down the interface; however, the ./app/iscsi_tgt/iscsi_tgt -m 0x101 application needs an interface to bind to during startup, so it fails to start. The information at:
"Running SPDK with VPP
VPP application has to be started before SPDK iSCSI target, in order to enable usage of network interfaces. After SPDK iSCSI target initialization finishes, interfaces configured within VPP will be available to be configured as portal addresses. Please refer to Configuring iSCSI Target via RPC method<http://www.spdk.io/doc/iscsi.html#iscsi_rpc>."
is not clear, because the instructions in "Configuring iSCSI Target via RPC method" assume the iscsi_tgt server is already running before one can execute the RPC commands. How do I get the iscsi_tgt server running without an interface to bind to during its initialization?
Please, can any of you explain how to run the SPDK iscsi_tgt application with VPP? For instance, what should change in iscsi.conf after unbinding the NIC, how do I get the iscsi_tgt server to start without an interface to bind to, and what address should be assigned to the Portal in iscsi.conf?
I would appreciate any help. Thank you.
Isaac
2 years, 4 months
Building spdk on CentOS6
by Shahar Salzman
Hi,
Finally got around to looking at supporting the SPDK build on CentOS 6; things look good except for one issue.
SPDK is the latest 18.01.x version, DPDK is 16.07 (+3 DPDK patches to allow compilation) plus some minor patches (mainly memory configuration stuff), and the kernel is a patched 4.9.6.
The build succeeded except for the usage of the DPDK function pci_vfio_is_enabled.
I had to apply the patch below, removing the usage of this function, and then compilation completed without any issues.
It seems that I am missing some sort of DPDK configuration, as I see that the function is built but not packaged into the generated archive.
I went back to square one and ran the instructions in http://www.spdk.io/doc/getting_started.html, but I see no mention of DPDK there. Actually, ./configure requires it.
My next step is to use a more recent DPDK, but shouldn't this work with my version? Am I missing some DPDK configuration?
BTW, as we are not using vhost, on our 17.07 version we simply use CONFIG_VHOST=n in order to skip it, but I would be happier with a better solution.
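For reference, our build sequence is roughly the following; the DPDK path is just ours, adjust it to wherever your DPDK 16.07 build output lives:
# build DPDK 16.07 first (with the three compilation patches applied), then point SPDK at it
./configure --with-dpdk=/path/to/dpdk-16.07/x86_64-native-linuxapp-gcc
# we also skip vhost as mentioned above (CONFIG_VHOST=n on our 17.07 tree)
make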
Shahar
P.S. Here is the patch to remove use of this function:
diff --git a/lib/env_dpdk/vtophys.c b/lib/env_dpdk/vtophys.c
index 92aa256..f38929f 100644
--- a/lib/env_dpdk/vtophys.c
+++ b/lib/env_dpdk/vtophys.c
@@ -53,8 +53,10 @@
#define SPDK_VFIO_ENABLED 1
#include <linux/vfio.h>
+#if 0
/* Internal DPDK function forward declaration */
int pci_vfio_is_enabled(void);
+#endif
struct spdk_vfio_dma_map {
struct vfio_iommu_type1_dma_map map;
@@ -341,9 +343,11 @@ spdk_vtophys_iommu_init(void)
DIR *dir;
struct dirent *d;
+#if 0
if (!pci_vfio_is_enabled()) {
return;
}
+#endif
dir = opendir("/proc/self/fd");
if (!dir) {
2 years, 4 months
SPDK + user space appliance
by Shahar Salzman
Hi all,
Sorry for the delay, had to solve a quarantine issue in order to get access to the list.
Some clarifications regarding the user space application:
1. The application is not the nvmf_tgt, we have an entire applicance to which we are integrating spdk
2. We are currently using nvmf_tgt functions in order to activate spdk, and the bdev_user in order to handle IO
3. This is all in user space (I am used to the kernel/user distinction in order to separate protocol/appliance).
4. The bdev_user will also notify spdk of changes to namespaces (e.g. a new namespace has been added, and can be attached to the spdk subsystem)
I am glad that this is your intention. The question is: do you think it would be useful to create such a bdev_user module, which would allow other users to integrate SPDK into their appliance using such a simple threading model? Perhaps such a module would make integrating SPDK easier.
I am attaching a reference application which does NULL IO via bdev_user.
Regarding the RPC, we have an implementation of it and will be happy to push it upstream.
I am not sure that using the RPC for this type of bdev_user namespaces is the correct approach in the long run, since the user appliance is the one adding/removing namespaces (like hot-plugging a new NVMe device), so it can just call the "add_namespace_to_subsystem" interface directly and does not need an RPC for it.
Thanks,
Shahar
2 years, 6 months
vbdev_agg module (Was: Understanding io_channel)
by Fenggang Wu
Hi Jim,
Our industry partner is pushing forward on this vbdev_agg module along with other efforts, as Tushar and I recently discussed in a separate context.
I think it is a good time to finalize the possible contribution to SPDK too. I notice SPDK now has the logical volume (lvol) feature. I am just wondering how this vbdev_agg module may fit into the current SPDK landscape. What would be the suggested path to the main branch?
Thanks!
-Fenggang
On Tue, Oct 10, 2017 at 4:04 PM Harris, James R <james.r.harris(a)intel.com>
wrote:
>
> > On Oct 10, 2017, at 1:53 PM, Fenggang Wu <fenggang(a)cs.umn.edu> wrote:
> >
> > Hi Jim,
> >
> > Thank you very much for the great answer! It makes perfect sense to me.
> This saves so much time.
> >
> >
> > On Mon, Oct 9, 2017 at 7:04 PM Harris, James R <james.r.harris(a)intel.com>
> wrote:
> > Hi Fenggang,
> >
> > > On Oct 9, 2017, at 12:04 PM, Fenggang Wu <fenggang(a)cs.umn.edu> wrote:
> > >
> > > Hi,
> > >
> > > I am new to SPDK and trying to develop an aggregated virtual block
> device module (vbdev_agg.c) that stripes across multiple base devices. I am
> having difficulty understanding
> >
> > Welcome to SPDK! An aggregated virtual block device module is
> interesting - will this do striping and/or concatenation?
> >
> > Currently I am only considering striping. But I would expect an easy
> extension from striping to concatenation.
>
> I think striping is the more interesting and common use case.
>
> > <snip>
> >
> > Now I have learned from the nvme module and register my agg_disk struct
> as the void* io_device, or better named, the "unique pointer". Currently, a
> space sized for an array of io_channel pointers is allocated after the
> io_channel struct. The io_channel pointers of the base devices are kept in
> the array. They are obtained (get_io_channel(base_dev)) in the create_cb and put
> (put_io_channel(base_dev)) in the destroy_cb.
>
> Yes - that sounds right.
>
> >
> >
> > >
> > > Any suggestions/hints will be appreciated. Thank you very much!
> >
> > If you would like to post your module to GerritHub, I’m sure you’d get
> some good review feedback from myself and others. Please note that this is
> a very active area of development right now. Your questions are really
> appreciated and will help us clarify where we need to improve on example
> code and documentation.
> >
> >
> > Personally I would like to share it or even make some contribution to
> the community if possible. Yet I would have to double check with the
> industry partner supporting my project to see their opinions.
>
> That would be fantastic. A striping module would be generally useful to
> SPDK. Plus if the module is contributed to SPDK, the SPDK project will
> make sure it gets tested automatically as part of the per-patch test suite
> to ensure against regressions. This might be important to your industry
> partner.
>
> Thanks,
>
> -Jim
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
2 years, 8 months
Bdev claim and single writer semantics
by Andrey Kuzmin
Hi all,
while looking at the block device subsystem code, I noticed bdev
claims being used for two purposes at the same time. First (and a
primary one, as far as I understood it) is a perfectly reasonable idea
to provide the bdev subsystem with dependency information so that
block devices could be brought up and shut down in the proper order
(which doesn't seem to be in place yet, though).
On the other hand, claims are somewhat counter-intuitively also used
to ensure single-writer semantics, where only the module that has claimed
the block device has write access. While it may be considered a safety
measure of sorts, it doesn't go well with the case where a
higher-level application is interested in shared access to the
underlying block device(s), in particular when there are
application-level means to ensure access consistency under multiple
writers scenario.
While adding support for the shared access semantics seems to be
pretty straightforward, I thought it reasonable to bring the issue up
here first, looking for insights, comments, and objections. Please let
me know if there are any, or whether I should give a shared-access
patch a shot.
Best regards,
Andrey
2 years, 8 months
SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP testing; Made progress but we have errors
by Isaac Otsiabah
Tomasz, Edward and I have been working on this again today. We did not create any veth or tap device. Instead, in vppctl we set the 10G interface to 192.168.2.10/24 and then brought the interface up, as shown below.
(On Server):
root@spdk1 ~]# vppctl
_______ _ _ _____ ___
__/ __/ _ \ (_)__ | | / / _ \/ _ \
_/ _// // / / / _ \ | |/ / ___/ ___/
/_/ /____(_)_/\___/ |___/_/ /_/
vpp# show int addr
TenGigabitEthernet82/0/0 (up):
192.168.2.10/24
local0 (dn):
The client IP address is 192.168.2.20.
Then we started the iscsi_tgt server and executed the commands below.
python /root/spdk_vpp/spdk/scripts/rpc.py add_portal_group 1 192.168.2.10:3260
python /root/spdk_vpp/spdk/scripts/rpc.py add_initiator_group 2 ANY 192.168.2.20/24
python /root/spdk_vpp/spdk/scripts/rpc.py construct_malloc_bdev -b MyBdev 64 512
python /root/spdk_vpp/spdk/scripts/rpc.py construct_target_node Target3 Target3_alias MyBdev:0 1:2 64
We got these errors from the iscsi_tgt server (shown below):
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2090:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login(discovery) from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on (192.168.2.10:3260,1), ISID=23d000000, TSIH=1, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 1): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2090:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login(discovery) from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on (192.168.2.10:3260,1), ISID=23d000000, TSIH=2, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 2): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2078:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target3 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000001, TSIH=3, CID=0, HeaderDigest=off, DataDigest=off
iscsi.c:2601:spdk_iscsi_op_logout: *NOTICE*: Logout from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target3 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000001, TSIH=3, CID=0, HeaderDigest=off, DataDigest=off
conn.c: 736:spdk_iscsi_conn_read_data: *ERROR*: spdk_sock_recv() failed, errno 104: Connection reset by peer
conn.c: 455:spdk_iscsi_remove_conn: *NOTICE*: Terminating connections(tsih 3): 0
conn.c: 323:spdk_iscsi_conn_construct: *NOTICE*: Launching connection on acceptor thread
iscsi.c:2078:spdk_iscsi_op_login_notify_session_info: *NOTICE*: Login from iqn.1994-05.com.redhat:92bdee055ba (192.168.2.20) on iqn.2016-06.io.spdk:Target3 tgt_node-1 (192.168.2.10:3260,1), ISID=23d000002, TSIH=4, CID=0, HeaderDigest=off, DataDigest=off
(On client):
However, we can see the iSCSI devices from the client machine. Any suggestions on how to get rid of these errors? Were the above steps correct?
Isaac/Edward
From: Zawadzki, Tomasz [mailto:tomasz.zawadzki@intel.com]
Sent: Tuesday, April 17, 2018 7:49 PM
To: Isaac Otsiabah <IOtsiabah(a)us.fujitsu.com>; 'spdk(a)lists.01.org' <spdk(a)lists.01.org>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
Hello Isaac,
Thank you for all the detailed descriptions, it really helps to understand the steps you took.
I see that you are working with two hosts and using network cards (TenGigabitEthernet82). Actually, all you did in "STEP1" is enough for iscsi_tgt to listen on 192.168.2.20; "STEP2" is not required. The only thing left to do on the target is to configure the portal group, initiator group, and target node, as described at http://www.spdk.io/doc/iscsi.html#iscsi_rpc.
"Example: Tap interfaces on a single host" describes the situation where someone would like to try out VPP without using another host and "real" network cards. The same goes for the veth interfaces used in the scripts for per-patch tests; they are done on a single host.
Thinking back, there should be a second example with the exact setup that you have: two hosts using network cards. I will look into it.
Thanks for all the feedback!
PS: The patch with the VPP implementation is merged to master as of today, so there is no need to cherry-pick anymore.
Regards,
Tomek
From: Isaac Otsiabah [mailto:IOtsiabah@us.fujitsu.com]
Sent: Wednesday, April 18, 2018 1:29 AM
To: 'spdk(a)lists.01.org' <spdk(a)lists.01.org>; Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
Subject: RE: anyone ran the SPDK ( app/iscsi_tgt/iscsi_tgt ) with VPP?
Hi Tomasz, after executing the commands from the "Example: Tap interfaces on a single host" section of the document at http://www.spdk.io/doc/iscsi.html#vpp, I get a response when I ping vpp from the target server.
[root@spdk2 ~]# vppctl
_______ _ _ _____ ___
__/ __/ _ \ (_)__ | | / / _ \/ _ \
_/ _// // / / / _ \ | |/ / ___/ ___/
/_/ /____(_)_/\___/ |___/_/ /_/
vpp# tap connect tap0
tapcli-0
vpp# show interface
Name Idx State Counter Count
TenGigabitEthernet82/0/0 1 down
local0 0 down
tapcli-0 2 down drops 8
vpp# set interface state tapcli-0 up
vpp# show interface
Name Idx State Counter Count
TenGigabitEthernet82/0/0 1 down
local0 0 down
tapcli-0 2 up drops 8
vpp# set interface ip address tapcli-0 192.168.2.20/24
vpp# show int addr
TenGigabitEthernet82/0/0 (dn):
local0 (dn):
tapcli-0 (up):
192.168.2.20/24
ip addr add 192.168.2.202/24 dev tap0
ip link set tap0 up
/* pinging vpp from target Server */
[root@spdk2 ~]# ping -c 2 192.168.2.20
PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
64 bytes from 192.168.2.20: icmp_seq=1 ttl=64 time=0.129 ms
64 bytes from 192.168.2.20: icmp_seq=2 ttl=64 time=0.116 ms
My question is: what about the 10G interface shown above as "TenGigabitEthernet82/0/0 1 down"? The document does not say anything about it. Shouldn't I set an IP address for it and bring it up?
Isaac
2 years, 8 months
spdk in docker container
by PR PR
Hi, I am trying to run one of the SPDK examples (reactor_perf) in a docker
container. Our build system uses bazel, so after compiling SPDK natively, I
created a bazel target for linking with SPDK (I set the
alwayslink option to 1 - https://github.com/spdk/spdk/issues/262). The
container is run with the privileged option on the docker command line and has /dev
bind-mounted from the host. Before running the example app, I run
scripts/setup.sh, which succeeds. The example app (reactor_perf) is copied
to a new workspace location and compiled as a bazel target. For some reason,
it fails to allocate huge pages. Some relevant outputs are below. I would appreciate
any pointers.
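For reference, the container is launched roughly like this; the image name is a placeholder, and the hugepage setup is done via scripts/setup.sh as mentioned above:
# allocate hugepages and bind devices (run from the SPDK tree)
./scripts/setup.sh
# run the example inside a privileged container with /dev bind-mounted from the host
docker run -it --privileged -v /dev:/dev <spdk-bazel-image> /ws/bazel-bin/spdk/spdk_test -t 10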
Thanks,
PR
========
After running scripts/setup.sh
========
cat /proc/sys/vm/nr_hugepages
1024
cat /sys/devices/system/node/node*/meminfo | fgrep Huge
Node 0 AnonHugePages: 8429568 kB
Node 0 HugePages_Total: 640
Node 0 HugePages_Free: 512
Node 0 HugePages_Surp: 0
Node 1 AnonHugePages: 3663872 kB
Node 1 HugePages_Total: 384
Node 1 HugePages_Free: 384
Node 1 HugePages_Surp: 0
=======
Run example app without mem_size option
=======
/ws/bazel-bin/spdk/spdk_test -t 10
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: reactor_perf -c 0x1 --no-pci
--file-prefix=spdk_pid28875 ]
EAL: Detected 72 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: Can only reserve 259 pages from 1024 requested
Current CONFIG_RTE_MAX_MEMSEG=256 is not enough
Please either increase it or request less amount of memory.
EAL: FATAL: Cannot init memory
EAL: Cannot init memory
Failed to initialize DPDK
app.c: 372:spdk_app_start: *ERROR*: Unable to initialize SPDK env
=======
with mem_size option
=======
/ws/bazel-bin/spdk/spdk_test -t 10
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: reactor_perf -c 0x1 -m 256 --no-pci
--file-prefix=spdk_pid29246 ]
EAL: Detected 72 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
app.c: 377:spdk_app_start: *NOTICE*: Total cores available: 1
reactor.c: 654:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x1
RING: Cannot reserve memory
reactor.c: 675:spdk_reactors_init: *NOTICE*: Event_mempool creation failed
on preferred socket 0.
RING: Cannot reserve memory
reactor.c: 696:spdk_reactors_init: *ERROR*: spdk_event_mempool creation
failed
app.c: 385:spdk_app_start: *ERROR*: Invalid reactor mask.
PS: I tried disabling huge pages, but SPDK seems to crash, so it looks like
this is not an option.
===========
with --no-huge option
===========
Starting DPDK 17.11.0 initialization...
[ DPDK EAL parameters: reactor_perf -c 0x1 --no-pci --no-huge
--file-prefix=spdk_pid23800 ]
EAL: Detected 72 lcore(s)
EAL: Probing VFIO support...
EAL: Started without hugepages support, physical addresses not available
Unable to unlink shared memory file: /var/run/.spdk_pid23800_hugepage_info.
Error code: 2
invalid spdk_mem_register parameters, vaddr=0x7fbf2603b000 len=67108864
app.c: 377:spdk_app_start: *NOTICE*: Total cores available: 1
reactor.c: 654:spdk_reactors_init: *NOTICE*: Occupied cpu socket mask is 0x1
Bus error (core dumped)
2 years, 8 months
enable spdk on ceph
by 杨亮
hi,
I am trying to enable SPDK on Ceph. I got the error below. Could someone help me? Thank you very much.
1. SPDK code will be compiled by default:
if(CMAKE_SYSTEM_PROCESSOR MATCHES "i386|i686|amd64|x86_64|AMD64|aarch64")
  option(WITH_SPDK "Enable SPDK" ON)
else()
  option(WITH_SPDK "Enable SPDK" OFF)
endif()
2. bluestore_block_path = spdk:5780A001A5KD
3.
ceph-disk prepare --zap-disk --cluster ceph --cluster-uuid $ceph_fsid --bluestore /dev/nvme0n1
ceph-disk activate /dev/nvme0n1p1
This step failed; the error information is below:
[root@ceph-rep-05 ceph-ansible-hxt-0417]# ceph-disk activate /dev/nvme0n1p1
/usr/lib/python2.7/site-packages/ceph_disk/main.py:5689: UserWarning:
*******************************************************************************
This tool is now deprecated in favor of ceph-volume.
It is recommended to use ceph-volume for OSD deployments. For details see:
http://docs.ceph.com/docs/master/ceph-volume/#migrating
*******************************************************************************
warnings.warn(DEPRECATION_WARNING)
got monmap epoch 1
2018-04-26 17:57:21.897 ffffa4090000 -1 bluestore(/var/lib/ceph/tmp/mnt.5lt4X5) _setup_block_symlink_or_file failed to create block symlink to spdk:5780A001A5KD: (17) File exists
2018-04-26 17:57:21.897 ffffa4090000 -1 bluestore(/var/lib/ceph/tmp/mnt.5lt4X5) mkfs failed, (17) File exists
2018-04-26 17:57:21.897 ffffa4090000 -1 OSD::mkfs: ObjectStore::mkfs failed with error (17) File exists
2018-04-26 17:57:21.897 ffffa4090000 -1 ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.5lt4X5: (17) File exists
mount_activate: Failed to activate
Traceback (most recent call last):
File "/sbin/ceph-disk", line 11, in <module>
load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5772, in run
main(sys.argv[1:])
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5718, in main
main_catch(args.func, args)
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5746, in main_catch
func(args)
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3796, in main_activate
reactivate=args.reactivate,
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3559, in mount_activate
(osd_id, cluster) = activate(path, activate_key_template, init)
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3736, in activate
keyring=keyring,
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3187, in mkfs
'--setgroup', get_ceph_group(),
File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 577, in command_check_call
return subprocess.check_call(arguments)
File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/ceph-osd', '--no-mon-config', '--cluster', 'ceph', '--mkfs', '-i', u'0', '--monmap', '/var/lib/ceph/tmp/mnt.5lt4X5/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.5lt4X5', '--osd-uuid', u'8683718d-0734-4043-827c-3d1ec4f65422', '--setuser', 'ceph', '--setgroup', 'ceph']' returned non-zero exit status 250
2 years, 8 months