I tested the BlobFS asynchronous API by using the SPDK event framework to execute multiple tasks, each of which writes one file.
But it doesn't work: spdk_file_write_async() reported an error when resizing the file.
The call stack looks like this:
spdk_file_write_async() -> __readwrite() -> spdk_file_truncate_async() -> spdk_blob_resize()
The resize operation must be performed on the metadata thread that invoked spdk_fs_load(), so only the task dispatched to the metadata CPU core succeeds.
That is to say, only one thread can be used to write files. This is hard to use, and performance issues may arise.
Does anyone know more about this?
Thanks very much.
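One possible workaround, sketched below in C-like pseudocode, is to funnel every write that may grow a file onto the thread that called spdk_fs_load(), using spdk_thread_send_msg(). Here md_thread, struct write_ctx, and write_done are hypothetical names, and real code would also need to obtain the io_channel on that thread and handle completions properly:

```
/* Pseudocode sketch: dispatch writes to the metadata thread that
 * called spdk_fs_load(), since the implicit blob resize must
 * happen there. md_thread, write_ctx and write_done are
 * hypothetical names, not SPDK APIs. */
struct write_ctx {
    struct spdk_file *file;
    struct spdk_io_channel *channel;
    void *payload;
    uint64_t offset, length;
};

static void do_write(void *arg)
{
    struct write_ctx *ctx = arg;
    /* Now running on the metadata thread, so the internal
     * spdk_file_truncate_async() -> spdk_blob_resize() path is safe. */
    spdk_file_write_async(ctx->file, ctx->channel, ctx->payload,
                          ctx->offset, ctx->length, write_done, ctx);
}

/* From any other thread: */
spdk_thread_send_msg(md_thread, do_write, ctx);
```

This serializes the resizing writes on one core. If the final file sizes are known up front, another option may be to truncate each file to its final size from the metadata thread first, so that subsequent data writes from other threads do not trigger a resize.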
On behalf of the SPDK community I'm pleased to announce the release of SPDK 21.10!
This release contains the following new features:
- DMA library: Added DMA library providing the necessary infrastructure for handling systems and devices with multiple memory domains to perform DMA transfers between them.
- Fully asynchronous NVMe driver: Removed NVMe driver inline polling during controller initialization and reset. This enables use of SPDK in cluster solutions.
- SPDK container scripts: Added set of scripts to serve as an example of how SPDK can be encapsulated into docker container images. Please see spdk/docker/README.md for details.
- Scheduler API and improvements: The API for implementing scheduler and governor plugins has been made public. Please see include/spdk/scheduler.h for details. Additionally, the dynamic scheduler was improved by favoring performance over power saving in cases with multiple threads with low activity.
- Trace parser library: Added a trace parser library that parses traces recorded by an SPDK application. It supports merging traces from multiple cores, sorting them by timestamp, and constructing trace entries that span multiple buffers.
- IDXD perf tool: Added a standalone tool for measuring IDXD performance.
The full changelog for this release is available at:
This release contains 619 commits from 47 authors with over 38k lines of code changed.
We'd especially like to recognize all of our first time contributors:
Thanks to everyone for your contributions, participation, and effort!
The merge window for SPDK 21.10 release will close by October 22nd.
Please ensure all patches you believe should be included in the release are merged to 'master' branch by this date.
You can do this by adding the hashtag '21.10' to those patches in Gerrit.
The current set of patches that are tagged and need to be reviewed can be seen here:
On October 22nd a new branch 'v21.10.x' will be created, and a patch on it will be tagged as the release candidate.
Then, by October 29th, a formal release will take place, tagging the last patch on the branch as SPDK 21.10.
Between release candidate and formal release, only critical fixes shall be backported to the 'v21.10.x' branch.
Development can continue without disruptions on 'master' branch.
I would be happy to get some feedback on my NVMf target namespace masking
implementation using attach/detach:
The patch introduces namespace masking for NVMe-over-Fabrics
targets by allowing controllers to be (dynamically) attached
to and detached from namespaces, cf. NVMe spec 1.4, section 6.1.4.
Since SPDK only supports the dynamic controller model, a new
controller is allocated on every Fabrics connect command.
This allows controllers of a specific host NQN to be
attached to or detached from a namespace. A host can only
perform operations on an active namespace. Inactive namespaces
can be listed (not currently supported by SPDK), but no
additional information can be retrieved:
"Unless otherwise noted, specifying an inactive NSID in a
command that uses the Namespace Identifier (NSID) field shall
cause the controller to abort the command with status
Invalid Field in Command" - NVMe spec 1.4 - section 6.1.5
Note that this patch does not implement the NVMe Namespace
Attachment command but allows attach/detach via RPCs only.
To preserve current behavior, all controllers are auto-attached.
To not auto-attach controllers, nvmf_subsystem_add_ns
shall be called with "--no-auto-attach". We introduce two new RPCs:
- nvmf_ns_attach_ctrlr <subsysNQN> <NSID> [--host <hostNQN>]
- nvmf_ns_detach_ctrlr <subsysNQN> <NSID> [--host <hostNQN>]
If no host NQN is specified, all controllers
(new and currently connected) will attach to / detach from the
namespace.
The list in spdk_nvmf_ns is used to keep track of the host NQNs
whose controllers should be attached on connect.
The active_ns array in spdk_nvmf_ctrlr is used for fast lookup
to check whether an NSID is active or inactive on command execution.
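For illustration only (these exact invocations are not taken from the patch; the subsystem NQN, bdev name, and host NQN below are made-up examples), usage of the new RPCs via scripts/rpc.py might look like this:

```
# Add a namespace without auto-attaching any controllers
scripts/rpc.py nvmf_subsystem_add_ns --no-auto-attach nqn.2021-10.io.spdk:cnode1 Malloc0

# Expose NSID 1 only to controllers of one specific host
scripts/rpc.py nvmf_ns_attach_ctrlr nqn.2021-10.io.spdk:cnode1 1 --host nqn.2021-10.io.spdk:host1

# Mask NSID 1 from that host again
scripts/rpc.py nvmf_ns_detach_ctrlr nqn.2021-10.io.spdk:cnode1 1 --host nqn.2021-10.io.spdk:host1
```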
> From: Engel, Amit <Amit.Engel(a)Dell.com>
> Recent updates to the NVMe spec have added definitions for in-band
> authentication (NVMe TP 8006) Does SPDK roadmap include adding support for
> this feature ?
Currently no one is allocated to work on this, but there has been some interest. We'd love for someone to attempt it and submit some patches.
> Internal Use - Confidential
> SPDK mailing list -- spdk(a)lists.01.org
> To unsubscribe send an email to spdk-leave(a)lists.01.org
In the SPDK source code, the message pool has 262143 entries (taken from _thread_lib_init()) and the per-thread cache size is SPDK_MSG_MEMPOOL_CACHE_SIZE (1024). This means the first 255 threads created each get a cache of 1024 entries; threads created after that still work, but their cache is NULL and they must go to the global pool for the message objects used to send messages.
In my application, we create around 300 threads during init, so from the 256th thread onward the global pool is referenced for message objects.
Now, when the program runs, some of the threads that received 1024 cache entries at creation (i.e., threads 0-255) remain unused. This means some entries from the global pool are never used, while the threads from 256-300 cannot get free entries from the global pool, and spdk_thread_send_msg() fails for threads in the range 256-300.
I have the following questions:
1. How do we scale the number of threads in the SPDK environment given the hardcoded cache size and pool size?
2. Why is the pool size set to 262143 and the cache size to 1024? Is there a logical explanation behind these numbers?
3. If the user changes these two numbers, are there any issues, be it performance, memory, or anything else?
4. Is there a maximum number of threads that SPDK suggests? If yes, what scaling model does SPDK suggest the user implement?
The SPDK roadmap does not yet include adding support for NVMe in-band authentication. If you (or any other parties) are interested in working on this feature, the SPDK community would welcome it.
On 10/6/21, 1:37 AM, "Engel, Amit" <Amit.Engel(a)Dell.com> wrote:
Recent updates to the NVMe spec have added definitions for in-band authentication (NVMe TP 8006)
Does SPDK roadmap include adding support for this feature ?