Level Extreme platform
When are Free Sectors Free?
Message
From: 05/05/2020 06:27:08
To: 04/05/2020 18:52:14
General information
Forum: Hardware
Category: Disk drives, Miscellaneous
Thread ID: 01674199
Message ID: 01674243
Views: 43
>>I follow you completely, but I'm uncertain whether the controller would reassign/exchange cells between different partitions. No real problem if you look at it from the SSD sector side, but it's another level of indirection that has to be tracked/maintained. And keeping the .vhd growable should keep "free space" larger, just in case. The directory housing the .vhd files is the largest on my SSD.
>>Will try to clone into a "growable" .vhd and test...
>
>On an HDD a partition is just a contiguous block of cylinders. Functionally it's just a container for a set of files, plus some pre-allocated free space.
>
>An SSD is a block device (Just a Bunch of Blocks) with no concept of cylinders and with no performance advantage to using contiguous blocks. The drive maintains a list of which blocks are allocated to which files; blocks not allocated are free. I believe I convinced myself that SSD storage must be sparse, and I think that implies it doesn't have to explicitly track blocks that are "free", but it might if that performs better.
>
>To support the concept of a partition, you're right, the drive would need to track at least a couple more things:
>
>- a partition table, with a nominal size (not as meaningful with sparse storage)
>- the partition to which a file belongs (attribute of the file entry, not the individual blocks)

All well and good when seen from within the typical directory/file-system view, but raw dd-style "copies" have to be supported as well, and they are. Otherwise sector-by-sector cloning (the last resort when in trouble) might itself be in trouble.
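To make the dd point concrete, here is a minimal sector-copy sketch in Python (paths and the 512-byte sector size are my own illustrative assumptions, not anything from this thread). The point is that it works purely at the block level, with no knowledge of the filesystem on top - whatever mapping the SSD controller keeps internally, it has to honour reads and writes addressed this way:

```python
SECTOR = 512  # classic logical sector size; drives typically expose 512 or 4096

def clone_sectors(src_path, dst_path, count, skip=0, seek=0):
    """Copy `count` sectors from src to dst, like `dd bs=512 skip=... seek=...`.

    The destination must already exist (a block device, image file, etc.).
    """
    with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
        src.seek(skip * SECTOR)   # start offset in the source, in sectors
        dst.seek(seek * SECTOR)   # start offset in the destination
        for _ in range(count):
            buf = src.read(SECTOR)
            if not buf:
                break             # ran off the end of the source
            dst.write(buf)
```

Pointed at /dev/sdX instead of an image file (and run with sufficient privileges), this is exactly the kind of access a sector-level cloning tool relies on.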

>Yes, it's extra work, but it's pointer arithmetic, which is faster than any actual I/O operation. You can see that on NTFS; moving even large files from one folder to another on the same NTFS partition is basically instantaneous, even on an HDD - it's just some pointers getting updated. I haven't tried it between different NTFS partitions. On an HDD that would require a physical copy, but I don't know about an SSD. It would be smart for an SSD to just update pointers in that case - fast, no actual write activity and associated wear, and no need to (eventually) erase the old blocks, which is expensive on an SSD.
>If partitions on SSDs are implemented as a partition table + partition attribute on each file, then I think the controller would have total flexibility to move any blocks/cells around as it sees fit, whether they contain data or are "free". Any free space on the total drive could be considered as "free floating" and could be added to any partition. The OS which "owns" a given partition would just need to request more space, and its nominal "size" could be adjusted. This could even natively support over-provisioning if the hypervisor or OS is brave enough to do that :)
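The partition-table-plus-attribute idea mused about above can be sketched as a toy accounting model (all names invented, purely hypothetical - no real controller is documented to work this way). Free blocks are one shared, free-floating pool; a partition is just a nominal size that the owning OS can ask to grow, and nominal sizes are allowed to over-provision the drive:

```python
class Drive:
    """Toy model: partitions are metadata, free space is one shared pool."""

    def __init__(self, total_blocks):
        self.free_pool = total_blocks   # free-floating, unowned blocks
        self.partitions = {}            # name -> nominal size in blocks
        self.used = {}                  # name -> blocks currently holding data

    def create_partition(self, name, nominal):
        # Over-provisioning is possible: nominal sizes may sum past capacity.
        self.partitions[name] = nominal
        self.used[name] = 0

    def write(self, name, blocks):
        # New data draws on the shared pool, not on blocks pre-assigned
        # to the partition; only the nominal size and the pool can refuse.
        if self.used[name] + blocks > self.partitions[name]:
            raise OSError("out of space (nominal partition limit)")
        if blocks > self.free_pool:
            raise OSError("out of space (drive full)")
        self.free_pool -= blocks
        self.used[name] += blocks

    def grow(self, name, extra):
        # The owning OS just requests a larger nominal size.
        self.partitions[name] += extra
```

Restriction 1 from the list further down corresponds to refusing `create_partition` calls whose nominal sizes sum past `free_pool`; restriction 2 would replace the shared pool with per-partition pools.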

At least when trying it via Total Commander, moving 8 files of 11 GB total size between partitions takes a bit more than a minute on an MX500, while moving them into another directory inside the same partition is nearly instantaneous. This would need verification via mechanisms closer to the OS and on other SSD vendors' drives, but it is a significant hint to me.
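One such mechanism closer to the OS: within a single filesystem, os.rename() is a pure metadata operation, while across filesystems it fails with EXDEV and tools fall back to copy + delete (which is what shutil.move() does). Timing the two paths is a crude way to reproduce the Total Commander observation from a script - a sketch, with hypothetical paths:

```python
import errno
import os
import shutil
import time

def move_and_time(src, dst):
    """Move src to dst, reporting whether it was a rename or a full copy."""
    start = time.perf_counter()
    try:
        os.rename(src, dst)      # same filesystem: directory-entry update only
        how = "rename"
    except OSError as e:
        if e.errno != errno.EXDEV:
            raise
        shutil.move(src, dst)    # cross-filesystem: copy + delete
        how = "copy"
    return how, time.perf_counter() - start
```

Running this on a large file within one partition and then across two partitions of the same SSD should show the same instant-vs-minutes split - and would confirm that the copy is being driven by the OS, regardless of what the controller could theoretically do.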

Further tests should include moving files between .vhds on the same partition but mounted as different devices on a client ;-)

>Musing aside, to get back to your question of "...if the controller would reassign/exchange cells between different partitions":
>
>- I don't see any restrictions on cells/blocks which contain data. For example, the wear leveler/GC could move their contents somewhere else, update the metadata, then erase/prep the original block(s) so they're "free" and ready for the next new write operation. This can effectively move blocks between partitions in that a block may once have been occupied by file A from partition 0 but is now occupied by file B from partition 2. This operation doesn't affect drive or partition "free space" and doesn't change the total size of files currently stored in that partition
>
>- The real question is about free space. I'd like to think it would be sparse and free-floating as above, but the controller designers might decide to impose some restrictions:
>
>1. Track partition free space as well as total space. If free space drops to zero, no free blocks are allowed to be added to the partition and the drive would report "out of space" to the OS. This is normal and expected behaviour with HDDs and conventional OSs so it might be how SSDs which are expected to be compatible with HDD environments behave. However, this would not necessarily prevent over-provisioning unless the drive also enforces that the sum of nominal sizes of all partitions can't exceed the drive capacity (which again is normal for HDDs and which might be implemented in SSDs for compatibility)
>
>2. When a partition is created, blocks which are not initially filled with data but which are "needed" to make up the "full" capacity of the partition are marked as belonging to the partition. This would emulate another HDD behaviour. In this case the drive could enforce that new data could only be written to free space blocks marked as "belonging" to the correct partition. This would not prevent the wear leveler/GC from eventually moving these new data to a different physical block, but it would prevent native over-provisioning. I'd like to think a controller would not do this but there may be some other considerations which cause it to be implemented in actual drives. In effect this would be a sort of "pseudo-sparse" storage
>
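The relocation described in the first bullet above can be shown with a toy flash translation layer (again purely illustrative - not how any specific controller works). The contents move to a fresh physical block, the logical-to-physical map is updated, and the old block goes back to the free set for erase; the logical view never changes, which is why a physical block once used by partition 0 can later serve partition 2:

```python
class ToyFTL:
    """Toy flash translation layer: logical blocks mapped onto physical ones."""

    def __init__(self, physical_blocks):
        self.phys = [None] * physical_blocks   # physical block contents
        self.map = {}                          # logical block -> physical block
        self.free = set(range(physical_blocks))

    def write(self, logical, data):
        target = self.free.pop()               # always write to a fresh block
        self.phys[target] = data
        if logical in self.map:                # old copy becomes garbage
            old = self.map[logical]
            self.phys[old] = None
            self.free.add(old)                 # ready for erase and reuse
        self.map[logical] = target

    def read(self, logical):
        return self.phys[self.map[logical]]

    def gc_move(self, logical):
        """Relocate one logical block, e.g. for wear leveling or GC."""
        self.write(logical, self.read(logical))
```

Note that nothing in this model knows about partitions at all; free and used space just float over the whole device, which is the sparse, unrestricted case.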

My gut guess is that eliminating partitioning as much as possible is the kindest way to treat your silicone (no, Dragan, I am NOT talking about breast implants; don't have any, and so on...), and probably the way to achieve this is by installing via USB starters. If I find time and a tiny SSD that can be played with without danger to data, I might try a few ideas along the links from the "Bare Metal" thread.

>Aside: Having a complex and flexible controller is a two edged sword. One other edge is that hardware data recovery on failed or failing SSDs can be difficult to impossible. On an HDD, partitions are in fixed locations and there's a good chance files are in contiguous cylinders, or a relatively small number of groups of contiguous cylinders if the files are fragmented. Even if you lose the partition table and file metadata, you can make some educated guesses where the files may be. If you lose those on an SSD... :( Having good backups of SSDs is even more important than with HDDs.

BTW and aside: total disaster is not the only mishap that can happen. We use Nextcloud as a container/sync mechanism for documents and up to now have been very happy with it, as it even has better-than-rudimentary versioning built in, though not real source control.

As naming is one of the really hard things not only in programming but with files as well, we used to struggle a bit with the way others would name their documents. The directory date was usually a good way to limit the search result set if a normal search failed. Then something went wrong and all files were stamped with identical creation and last-change times. My own filenames always include the creation date - but even this paranoid was bitten, as not all documents received were put into their own directories, and some kept whatever names their senders gave them...

Probably NOT anything HW/driver related, "just" a bug somewhere when updating Nextcloud.

fun times
thomas