> -----Original Message-----
> From: Yibo Cai <Yibo.Cai(a)arm.com>
> Sent: Monday, June 27, 2022 8:01 AM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: [SPDK] Re: About SMA + IPU
> So, in a cloud storage environment (e.g., Kubernetes + CSI), it looks like the orchestrator
> is still responsible for most of the control-plane RPCs that aren't covered by the current SMA.
Correct, the orchestrator is responsible for sending the RPCs to configure the target system.
> Do we want to support more commands in SMA?
Yes, we do want to support more commands in SMA, although we're mostly thinking about the initiator side.
> Does it make sense to add a gRPC method to transfer low-level SPDK RPC commands (JSON strings)
> transparently, or do we prefer higher-level abstractions?
Ideally, we'd have a high-level abstraction for everything, but I can see how a passthrough method like that could be useful in some cases (e.g. to use the gRPC transport to send SPDK JSON-RPCs to configure the target). So, if there's a demand for it, I don't see any reason not to add it.
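As a rough illustration of what such a passthrough could carry, here is a minimal Python sketch that serializes an arbitrary SPDK JSON-RPC request into a string payload that a (hypothetical, not-yet-existing) SMA passthrough gRPC method could tunnel to the target:

```python
import json

def build_passthrough_payload(method: str, params: dict, request_id: int = 1) -> str:
    """Serialize an SPDK JSON-RPC request so it could be carried opaquely
    inside a hypothetical SMA passthrough gRPC method."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }
    return json.dumps(request)

# Example: a bdev_nvme_attach_controller request the target would understand
# (addresses and names are made up for illustration).
payload = build_passthrough_payload(
    "bdev_nvme_attach_controller",
    {"name": "Nvme0", "trtype": "TCP", "traddr": "192.0.2.10",
     "adrfam": "IPv4", "trsvcid": "4420",
     "subnqn": "nqn.2016-06.io.spdk:cnode1"},
)
print(payload)
```

The point is that the gRPC layer would not need to understand the payload at all; the target's JSON-RPC server interprets it as usual.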
I'm struggling to understand the use cases of SMA (Storage Management Agent)
and IPU (Infrastructure Processing Unit), and I'd like to ask for help.
Assume a host with an IPU, connected to a remote storage node running SPDK.
If an app on the host CPU needs a storage resource, my understanding is that the
following steps happen:
- the app (or orchestrator) sends SMA CreateDevice/AttachVolume commands to
the SPDK service running on the IPU
- the IPU sends SPDK RPC commands to a storage node (through an HTTP proxy)
to create a bdev and export an NVMe-oF target
- the IPU runs an NVMe initiator to connect to the remote NVMe-oF target
- the IPU exposes the NVMe-oF target as a local NVMe device to the host OS, so
the app (or orchestrator) can make use of it
Is that correct?
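To make the storage-node step concrete, the configuration the IPU would drive maps onto standard SPDK JSON-RPCs. The sketch below just builds that request sequence in Python (bdev names, the NQN, and the address are made up for illustration):

```python
import json

def rpc(method, params, request_id):
    """Build one SPDK JSON-RPC request as a string."""
    return json.dumps({"jsonrpc": "2.0", "id": request_id,
                       "method": method, "params": params})

# A plausible sequence the IPU (or a proxy acting for it) could send to the
# storage node: create a bdev, then export it over NVMe-oF.
requests = [
    rpc("bdev_malloc_create", {"name": "Malloc0", "num_blocks": 262144,
                               "block_size": 512}, 1),
    rpc("nvmf_create_subsystem", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                  "allow_any_host": True}, 2),
    rpc("nvmf_subsystem_add_ns", {"nqn": "nqn.2016-06.io.spdk:cnode1",
                                  "namespace": {"bdev_name": "Malloc0"}}, 3),
    rpc("nvmf_subsystem_add_listener", {"nqn": "nqn.2016-06.io.spdk:cnode1",
        "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                           "traddr": "192.0.2.10", "trsvcid": "4420"}}, 4),
]
for r in requests:
    print(r)
```

After this, the IPU-side SMA service would attach to the exported subsystem and surface it to the host as a local NVMe device.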
SPDK Jenkins CI system will be shut down
The reason is electrical maintenance in the building.
The shutdown is planned for July 29th 10:00 AM GMT; CI will remain down until August 1st 10:00 AM GMT.
How does that affect us?
CI will be unable to pick up changes or run tests during this time.
I have a question.
bdev_nvme_attach_controller adds all namespaces of the NVMe controller as bdevs. I created some namespaces and attached the controller to the system, then later created more namespaces on the same NVMe device. How can I re-scan the NVMe controller so the new namespaces show up as bdevs? Re-attaching is easy, but the earlier namespaces (bdevs) are in use.
Just an FYI - SPDK will be moving to SPDX license identifiers, instead of explicit license text in each file. This follows the precedent of many other open source projects. See https://spdx.dev for more information.
https://review.spdk.io/gerrit/c/spdk/spdk/+/12904 contains the bulk of the text replacement for those that are interested.
https://review.spdk.io/gerrit/c/spdk/spdk/+/12904/7/LICENSE in that patch specifically describes the policy for using the SPDX identifiers moving forward.
If you have any questions, please respond to the patch review on Gerrit. We expect to merge these changes next week, pending review.
P.S. Yes, I have misspelled SPDX as SPDK many times over the last week while preparing this patch and associated communication!
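For anyone curious what the change looks like in practice, each file will open with a one-line identifier instead of the full BSD-3-Clause text. A tiny illustration (the copyright holder below is a placeholder, not taken from the actual patch):

```python
# SPDX-License-Identifier: BSD-3-Clause
# Copyright (C) 2022 Placeholder Contributor.
```

The single `SPDX-License-Identifier` tag is machine-readable and replaces roughly 25 lines of boilerplate license text per file.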
Hi guys, during development I found that there is an RPC that can rescan an aio bdev, but no corresponding RPC is provided for uring bdevs. This feature is very important to us. Do we plan to add a corresponding rescan RPC for io_uring?
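For comparison, the existing aio rescan RPC takes just the bdev name, and a uring equivalent would presumably mirror it. A small Python sketch of the two request payloads (`bdev_uring_rescan` is hypothetical here, since no such RPC exists upstream at the time of writing):

```python
import json

def rpc(method, params, request_id=1):
    """Build one SPDK JSON-RPC request as a string."""
    return json.dumps({"jsonrpc": "2.0", "id": request_id,
                       "method": method, "params": params})

# Existing RPC: re-scan the size of a registered aio bdev.
print(rpc("bdev_aio_rescan", {"name": "Aio0"}))

# Hypothetical uring counterpart being requested here (NOT in SPDK today).
print(rpc("bdev_uring_rescan", {"name": "Uring0"}))
```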