PACK - what to expect?
From: 14/01/2003 14:38:39
To: Dragan Nedeljkovich (Zrenjanin, Serbia), 14/01/2003 13:58:16
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax - Miscellaneous
Thread ID: 00740999
Message ID: 00741586
SNIP
>>
>>I erred as regards PACK MEMO after adding 25,000 memo contents - the FPT was 'optimized' (de-bloated), just not defragmented.
>
>AFAIK, it debloats (at least we're adding a new word here, not overloading an old one) by writing the used blocks into a new file and then killing the old one. I would expect the new file to be contiguous - if you had contiguous free space on the disk. However, the current state-of-the-art defragmenters are just a pale shadow of what they once were. The Windows (W2K) defragmenter is a spayed version of the old Norton Defrag, and it doesn't defragment free space. I still remember Central Point's Compress.exe, which thrashed your disk for quite a while, but in the end you had two blocks: the used space and the unused space, period.

From what I saw yesterday, there is NO writing of used blocks into a new file at all - the fragment count dropped, but the file remained highly fragmented. A real surprise to me, to be sure.
The Disk Defragmenter utility is said by MS to defragment used space, but of course there is no way of knowing for sure.
Regardless, it would be nice if there were some control available to specify how one wants it to run.
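
For what it's worth, here's a rough sketch of how one could watch the FPT shrink across a PACK MEMO (the table name and paths are invented) - it only shows the de-bloating, it says nothing about where NTFS put the blocks:

* Rough sketch (invented table and path) - compare FPT size before and after PACK MEMO
USE c:\data\mytable EXCLUSIVE            && PACK MEMO requires exclusive use
lcFpt = FORCEEXT(DBF(), "fpt")           && memo file belonging to the open table
lnFound = ADIR(laBefore, lcFpt)          && laBefore[1,2] holds the file size in bytes
PACK MEMO                                && rewrites the memo file without the unused blocks
lnFound = ADIR(laAfter, lcFpt)
? "FPT size:", laBefore[1,2], "->", laAfter[1,2]
USE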

>
>IOW, when you're just copying a big file, does it get written in one piece? I doubt it. Even on a so-called "defragmented" disk, since the free space isn't contiguous, it gets written in the available bits and pieces of the disk space.

Well, my observation strongly suggests that a big file WILL be written in one piece, provided that the length given to the OS for the write is the whole size of the file. In other words, NTFS does appear to look for a contiguous run of clusters big enough to hold the full length of the write in question.
BUT it also looks like NTFS takes the FIRST such piece of contiguous space it finds - it does not restrict its initial search to the largest known chunk of free space. I believe overall fragmentation would be noticeably better if it did.
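
A crude way to poke at that from VFP itself, if anyone cares to (the paths are invented, and this only varies the length handed to VFP's low-level file functions - what actually reaches the OS per call, and where it lands, is still the OS's business):

* Rough sketch (invented paths) - same amount of data, handed over in one call vs. many
lcBlock = REPLICATE("X", 8*1024*1024)        && 8 MB string, comfortably under VFP's ~16 MB limit
lnH1 = FCREATE("c:\temp\one_write.dat")
=FWRITE(lnH1, lcBlock)                       && the whole length specified in a single write
=FCLOSE(lnH1)
lnH2 = FCREATE("c:\temp\many_writes.dat")
FOR lnI = 1 TO 8*1024
    =FWRITE(lnH2, REPLICATE("X", 1024))      && same total, dribbled out 1 KB at a time
ENDFOR
=FCLOSE(lnH2)
* Then run the defragmenter's Analyze report and compare the fragment counts of the two files.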
I've also seen that VFP, in EXCLUSIVE mode, will hold off the physical write until it exhausts its cache/buffer RAM. I have seen files of 6 fragments, 2 fragments and a single fragment in such circumstances. I say 'single fragment' only because the file does not show up in the defrag report (Analyze).
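
And for the buffering part, the knob I've been poking at is SYS(3050) - a rough sketch below; the size is a made-up number and the final on-disk layout is of course still up to NTFS:

* Rough sketch (made-up size) - foreground buffer memory affects how long VFP can
* hold data in RAM before the physical writes go out
? SYS(3050, 1)                       && report the current foreground buffer size, in bytes
=SYS(3050, 1, 256*1024*1024)         && request a larger buffer before a big EXCLUSIVE job
USE c:\data\mytable EXCLUSIVE
COPY TO c:\temp\mycopy               && work that may sit in the buffer until it fills
FLUSH                                && push whatever is still buffered out to disk
USE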

cheers