This is yet another post on Unraid’s poor SMB performance, but I think I have narrowed the cause down to the Unraid FUSE filesystem. I discovered this about two months ago, but with COVID-19 and no kids’ weekend sporting event duties, I have some time to post.

In this round of testing I compared the performance of “User” shares vs. “Disk” shares. An Unraid “User” share is a volume backed by Unraid’s FUSE filesystem, while a “Disk” share is a volume directly backed by the disk’s native filesystem. As before, I used my DiskSpeedTest utility to automate the testing. Per suggestions from Unraid, I also tested enabling DirectIO and disabling SMB case sensitivity.
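The DiskSpeedTest utility itself isn’t reproduced here, but the idea it automates is simple: time large sequential writes and reads against each mounted share and compare throughput. A minimal sketch of that approach — the mount points, file size, and block size below are placeholders, not my actual test parameters — might look like this:

```python
import os
import time

# Hypothetical mount points for a FUSE-backed "User" share and a native
# "Disk" share, both mounted on the test client; adjust for your setup.
SHARES = {
    "User share": "/mnt/unraid/user_share",
    "Disk share": "/mnt/unraid/disk_share",
}
FILE_SIZE = 1024**3      # 1 GiB test file
BLOCK_SIZE = 1024**2     # 1 MiB per write/read
BLOCK = os.urandom(BLOCK_SIZE)

def measure(share_path):
    """Time one sequential write and read pass; return (write, read) MB/s."""
    test_file = os.path.join(share_path, "speedtest.bin")

    start = time.perf_counter()
    with open(test_file, "wb") as f:
        for _ in range(FILE_SIZE // BLOCK_SIZE):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually left the client
    write_mbps = FILE_SIZE / (time.perf_counter() - start) / 1024**2

    # Caveat: an immediate read-back can be served from the client's page
    # cache; a real test should drop caches or read a separate cold file.
    start = time.perf_counter()
    with open(test_file, "rb") as f:
        while f.read(BLOCK_SIZE):
            pass
    read_mbps = FILE_SIZE / (time.perf_counter() - start) / 1024**2

    os.remove(test_file)
    return write_mbps, read_mbps

for name, path in SHARES.items():
    write_speed, read_speed = measure(path)
    print(f"{name}: write {write_speed:.0f} MB/s, read {read_speed:.0f} MB/s")
```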
The case-insensitive SMB and DirectIO options made no discernible difference, but we can see that the performance of Disk shares is near the performance we get from Ubuntu. One may expect some performance degradation due to the FUSE code needing to perform disk parity operations, but this level of impact is unacceptable compared to other software-based RAID systems, and worse is that the test was performed on the SSD cache volume, where no parity computation is required. This means the performance problem is caused by the Unraid FUSE code, and it affects all user shares.

In my experience, the performance and quality demands of filesystem code require extremely competent and diligent developers. The Unraid FUSE code is proprietary, so code inspection is not possible, but I suspect the code path is less than optimized. Beyond the obvious performance degradation, I’ll offer another example of questionable code behavior: all IO is halted while waiting for a disk to spin up, even if the disk being spun up has nothing to do with the IO being serviced, which is backed by another disk.
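Since the actual code can’t be inspected, here is only a toy model of that behavior, not Unraid’s implementation: it contrasts a single serialized IO path, where an unrelated request stalls behind a simulated spin-up, with independent per-disk paths. The delays are made up for illustration.

```python
import threading
import time

SPIN_UP_SECONDS = 2.0   # simulated time for a sleeping disk to spin up
IO_SECONDS = 0.01       # simulated time per IO once a disk is awake

def serve_io(sleeping):
    """Simulate one IO request against a disk."""
    if sleeping:
        time.sleep(SPIN_UP_SECONDS)  # wait for the platters to spin up
    time.sleep(IO_SECONDS)

def single_io_path():
    """One serialized path: the unrelated IO queues behind the spin-up."""
    start = time.perf_counter()
    serve_io(sleeping=True)    # IO to disk1 triggers a spin-up
    serve_io(sleeping=False)   # unrelated IO to disk2, stuck in the queue
    return time.perf_counter() - start  # latency seen by disk2's caller

def per_disk_io_paths():
    """Independent per-disk paths: only disk1's caller waits."""
    start = time.perf_counter()
    spinner = threading.Thread(target=serve_io, kwargs={"sleeping": True})
    spinner.start()            # disk1 spins up in the background
    serve_io(sleeping=False)   # disk2's IO proceeds immediately
    latency = time.perf_counter() - start  # latency seen by disk2's caller
    spinner.join()
    return latency

print(f"single path, disk2 IO latency:  {single_io_path():.2f}s")
print(f"per-disk paths, disk2 latency:  {per_disk_io_paths():.2f}s")
```

With one serialized path, the IO to the already-spinning disk waits the full spin-up time; with per-disk paths, only the caller that actually needs the sleeping disk waits.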
With the release of Unraid 6.9, we have included support for Multiple Pools, granting you even further control over how you arrange storage devices in your server. So what does this new functionality offer?
Separate Storage for Caching, VMs, and Containers

With a single pool, the same underlying storage for your VMs and containers is used to provide a cache for your data written to shares. This means that when a large copy to a share is taking place, performance in those other applications may suffer. While this may not be noticeable for the average home user, with power users and multi-user environments the cost of resource contention is more obvious and overt.
With multiple pools, you can now create groups of storage devices dedicated to specific functions. Many users leverage SSDs for the cache pool in Unraid 6 to increase performance for applications that live in that pool as Docker Containers or Virtual Machines. However, given that most users only have a 1gbps network, those SSDs are never really pushed when it comes to performance, yet users still have to reserve a chunk of space in that SSD pool for network file-transfer caching. With multiple pools, these users can now set up a pool of HDDs for write caching over the 1gbps network, leaving the full performance and capacity of their SSD pool for the applications that can benefit from it the most.
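The arithmetic behind that claim is simple; the device figures below are ballpark assumptions, not measurements:

```python
# Ballpark sequential-write figures in MB/s; assumptions, not measurements.
LINK_MBPS = 1_000_000_000 / 8 / 1_000_000   # 1gbps link ~= 125 MB/s
HDD_WRITE = 180                             # typical 7200 rpm HDD
SSD_WRITE = 500                             # typical SATA SSD

print(f"1gbps link ceiling: ~{LINK_MBPS:.0f} MB/s")
print(f"HDD vs link:        {HDD_WRITE / LINK_MBPS:.1f}x the ceiling")
print(f"SSD vs link:        {SSD_WRITE / LINK_MBPS:.1f}x the ceiling")
```

Since even a single modern HDD can sustain sequential writes faster than a 1gbps link can deliver them, nothing is lost by moving the network write cache off the SSD pool.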