When are Free Sectors Free?
Hi Al,

I can no longer just write that you always give great answers.
The scale has to be widened to include the areas around "fantastic" and "stellar". Wow!

Seems you are revising your opinion on "best practices", similar to me:
>> Sparse storage. In the old locally-attached HDD days, some apps such as SQL Server would recommend that you predefine/preallocate a chunk of disk space for each database, preferably on a virgin or very lightly used drive.

Yupp, one of the points I was mulling. Current thinking is that on an SSD it is better NOT to use fixed-size virtual disks (.vdi or .vhd), but to let those files shrink and expand as needed. Not only are more sectors left "untouched" from the host OS point of view; saving each of those files should also be faster, since fewer bytes are copied, and all of the empty space in each virtual disk stays on the host disk instead of sitting as empty reserves inside each .vhd. Anything I have overlooked so far?
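
To make the "empty space stays on the host disk" point concrete, here is a minimal sketch (Python on Linux, file name invented, assuming a filesystem with sparse-file support) of how a dynamically allocated / sparse image only consumes host space for the blocks actually written:

import os

path = "demo_sparse.img"            # invented name, just for the demo

with open(path, "wb") as f:
    f.seek(10 * 1024**3 - 1)        # pretend this is a 10 GiB virtual disk
    f.write(b"\0")                  # only one byte is actually written

st = os.stat(path)
print("apparent size:", st.st_size)           # ~10 GiB, what the guest would see
print("host usage   :", st.st_blocks * 512)   # only a few KiB really allocated

os.remove(path)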

On the
https://unix.stackexchange.com/questions/309900/deploy-linux-into-and-boot-from-vhd
topic, it seems a lot has happened since I last looked:
https://www.tenforums.com/tutorials/139119-native-boot-virtual-hard-disk-how-upgrade-windows.html
https://www.tenforums.com/tutorials/53256-hyper-v-native-boot-vhd.html
https://uk.pcmag.com/windows0-2/88253/how-to-run-windows-10-from-a-usb-drive

Might be worth trying some things again to get identical machines going wherever I go, either running on the metal or as a VM. If those .vhds shrink, I can move some of them back to faster storage, and a "copy" is faster.

Perhaps some "reverse bootstrap" is possible: if booting into Win10 from a .vhd works without problems, most of the first OS install could be deleted. Then most of the unnecessary partitions should be gone, and I could work from one huge data disk with several other .vhds to be mounted when necessary and easy to save.

>Let's take a hypothetical example where an app "pre-allocates" 75% of an SSD. An SSD is random-access and has no concept of cylinders so there is no benefit to having those cells physically contiguous. It just assigns a list of cells or blocks to a file. If only a few of those cells actually have any data written to them, then for wear leveling purposes the drive will record that some of those cells were written to once, and the remainder zero times.
...
>>I think with the above I've convinced myself that "free space" on a wear-leveling SSD is not as important for performance as one might think:
>

Following you totally, but I am uncertain whether the controller would reassign/exchange cells between different partitions. No real problem if you look at it from the SSD sector side, but it is another level of indirection that has to be listed/maintained (see the toy sketch below). And keeping the .vhds growable should keep "free space" larger, just in case; the directory housing the .vhds is the largest on my SSD.
Will try to clone into "growable" and test...
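
Roughly the kind of indirection I mean, as a toy sketch (Python, everything here is invented and nothing like a real controller): the FTL maps logical blocks to whichever physical cell is least worn, with no regard for which partition a logical block belongs to:

import random

PHYSICAL_CELLS = 16
ftl = {}                               # logical block address -> physical cell
erase_counts = [0] * PHYSICAL_CELLS    # wear per physical cell
free_cells = set(range(PHYSICAL_CELLS))

def write(lba):
    """Write one logical block: remap it to the least-worn free cell."""
    target = min(free_cells, key=lambda c: erase_counts[c])
    free_cells.discard(target)
    old = ftl.get(lba)
    if old is not None:                # stale copy gets erased and reused later
        erase_counts[old] += 1
        free_cells.add(old)
    ftl[lba] = target

for _ in range(200):
    write(random.randrange(8))         # hammer only 8 logical blocks

print("mapping:", ftl)
print("wear   :", erase_counts)        # wear still spreads over all 16 cells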

>Controller parallelism. This is an example where the physical layout of a file in an SSD's NAND cells may matter. In the early days of SATA SSDs, larger-capacity drives were faster than smaller ones e.g.
... This holds true for free space as well.
>Just one more way the controller can affect performance, in ways you can't control ;)

Ack.
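
Roughly like this back-of-envelope toy (Python, all numbers invented): the more dies/channels the controller can keep busy in parallel, the shorter the same write takes, which is also part of why bigger drives and plenty of free blocks help:

from math import ceil

PAGE_KB = 16         # assumed NAND page size
T_PROG_MS = 1.0      # assumed per-page program time

def write_time_ms(total_mb, parallel_units):
    """Time to program total_mb when `parallel_units` dies can work at once."""
    pages = total_mb * 1024 // PAGE_KB
    return ceil(pages / parallel_units) * T_PROG_MS

for units in (2, 4, 8, 16):
    t_s = write_time_ms(1024, units) / 1000          # writing 1 GiB
    print(f"{units:2d} dies busy -> {t_s:5.1f} s, ~{1024 / t_s:4.0f} MB/s")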

>
>Other Factors
>
>Personally I wouldn't mess with reserved space. I bet it's highly optimized for the drive's expected performance and life.

True. But you could use the same HW "fine-tuned" for the use case. On a desktop the typical user can be called upon to save important data; on a server, willy-nilly happenings that randomly lose data across clients might evoke scepticism ;-)

>Drive internal buffering: Almost all SSDs have a DRAM buffer. I believe even some consumer QLC drives are tiered; they have a DRAM first level buffer, then an SLC second level buffer, before reaching the QLC flash. Enterprise and/or NVMe drives may also be tiered.
>
>Consider RAID: for scratch space or stuff you can afford to lose you can use RAID0. I believe some implementations of redundant RAID can give some parallelism on read operations while providing data safety.

Since witnessing a RAID 10 saying goodbye I argue for JBOD...

>Interface type: I believe NVMe is the fastest available interface, with PCIe Gen4 now available with AMD chipsets. I think Gen4 NVMe SSDs are just being introduced. I seem to recall a Linus Tech Tips video where an NVMe/PCIe external enclosure was used ;)

Yupp, no more huge 2.5" cases to carry around ;-)

>Aside/PSA: Writes on Shingled Magnetic Recording (SMR) HDDs are slow and are managed in ways similar to the techniques SSDs use. Because of long timeouts during certain write operations, these drives can drop out of RAID arrays. There's currently a bit of a scandal about this, because Western Digital for one has been marketing these drives for NAS (where RAID is expected) and users are reporting severe problems with them in that role e.g. https://arstechnica.com/gadgets/2020/04/caveat-emptor-smr-disks-are-being-submarined-into-unexpected-channels/ . Apparently Seagate and Toshiba are also slipping SMR drives into retail channels without warning customers.

Paranoid prejudgemental panic proper p[backspace]behaviour is...

>If you have a NAS or RAID application it's probably best to avoid SMR drives for now, and stick with conventional magnetic recording (CMR) drives.

I welcome them for JBOD USB3. Murphy was right and RAID is best used for insects...

But clearly I must think more at the HW level again; SSDs are hardly news any more, and it should not have taken this long to question my own practices!

regards
thomas