SNIP>
>Also, NTFS reacts very badly to fragmentation. Had a big performance problem today with some tables that were copied en bloc but ended up in the cracks between a million fragments created by TurboDB. Raw file reads from that drive were slower than reads over a 10 Mbit network, and the defragger complained that there was no free space it could use (0%), even though it reported 53% free space on that drive. Defragged by reformatting, and afterwards performance was back to normal levels. *g*
The defragger's 'complaint' sounds like it could well have been legitimate (or, more accurately, 'semi-legitimate'). If there wasn't a sufficiently large area of *contiguous* free space for it to use as an initial staging point, it simply gave up. A more appropriate response would have been to analyze further and **make itself** the contiguous free space, but apparently it doesn't do that. I wonder if the more expensive (than free) defrag utilities do any better. One would hope so!
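The gap between "53% free" and "0% usable" is just the difference between total free space and the largest contiguous free run. A minimal sketch (not NTFS-specific; the bitmap layout here is invented for illustration) shows how a half-empty volume can still have no usable run:

```python
def largest_free_run(bitmap):
    """Return the longest run of free (False) clusters in an allocation bitmap."""
    best = cur = 0
    for used in bitmap:
        cur = 0 if used else cur + 1
        best = max(best, cur)
    return best

# Hypothetical volume of 1,000,000 clusters where every other cluster is
# allocated: 50% free overall, yet no free run longer than one cluster --
# plenty of free space on paper, nothing a defragger can stage files into.
bitmap = [i % 2 == 0 for i in range(1_000_000)]

print(f"free: {bitmap.count(False) / len(bitmap):.0%}")                # free: 50%
print(f"largest contiguous run: {largest_free_run(bitmap)} clusters")  # 1 cluster
```

A smarter defragger would consolidate those slivers into one region first, then use it as the staging area.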
That is likely the same reason your 'en bloc' copy ended up fragmented all over the place in the first place.
I guess your drastic 'defrag' was followed by lots of copying, in which case virtually every file copied should have ended up occupying contiguous space, and your 53% free should now be all in one clump.
cheers