PACK - what to expect?
From: Dragan Nedeljkovich, Zrenjanin, Serbia - 14/01/2003 21:50:45
To: message of 14/01/2003 14:38:39
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax - Miscellaneous
Thread ID: 00740999
Message ID: 00741734
Views: 18

>From what I saw yesterday, there is NO writing of used blocks into a new file at all - the fragment count changed (it dropped), but the file remained highly fragmented. A real surprise to me, to be sure.

I don't know how to find out whether it's a new file or not - short of using something to find the beginning cluster on the disk before and after the Pack Memo.

Try a COPY TO - if the copy is also fragmented, that's it: the free space on your disk is fragmented.
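
Something along these lines is what I mean - just a sketch, and the table names are made up:

USE mytable EXCLUSIVE
COPY TO testcopy WITH CDX  && writes brand-new dbf/fpt/cdx files from scratch
USE IN SELECT("mytable")
* Now run the defragmenter's Analyze and look at testcopy.dbf/.fpt -
* if even these freshly written files come out fragmented, the free
* space itself is scattered.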

>The Disk Defragmenter utility is said by MS to defrag used space, but of course there is no way of knowing for sure.

I see the free (white) space still scattered in several chunks when Defrag finishes, and it still finds something to do if you run it again. So it doesn't really try to do everything; it just does as much as it can, or is allowed to do (or how else would you be motivated to buy a real defragmenter).

>Well my observation strongly suggests that a big file WILL be written in one piece provided that the length given to the OS as needing writing is the whole size of the file. In other words, it does look like NTFS looks for a contiguous set of clusters to hold all that is specified for the length of the write in question.
>BUT it also looks like NTFS will take the FIRST such piece of contiguous space that exists - it does not restrict its initial search for space to the known large chunk of free space. I believe it would make a significant difference overall if it did operate that way (which it does not now).
>I've also seen that VFP, in EXCLUSIVE mode, will hold off the physical write until it exhausts its cache/buffer RAM. I have seen files of 6 fragments, 2 fragments and a single fragment in such circumstances. I say 'single fragment' only on the basis that it is not seen in the defrag report (Analyze).

The point here is that we're not writing a big file. We're starting with a small file (actually two or three small files - the dbf, fpt and cdx) and growing them. The OS will be asked for additional clusters for each of them in turn, and even if the .cdx is created after the fact, I have a strong feeling (can't know for sure) that the dbf and fpt are written sequentially - add a record to the dbf, add the blocks to the fpt, and keep doing that while !eof(). Whenever a record in either the dbf or the fpt overflows the currently allocated space, Fox asks the OS for more - and gets the first available chunk of free space. I suppose that if we had a real file-mapping utility, we'd find the dbf and fpt clusters alternating, followed by the cdx - unless the cdx is also built sequentially while appending the non-deleted records.
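
If you wrote that growth pattern by hand, it would look roughly like this - just an illustration of the alternating requests, not Fox's actual internal PACK code, and the table names and structure are made up:

USE oldtable IN 0 ALIAS src EXCLUSIVE
CREATE TABLE newtable (id I, notes M)  && assume the same structure as the original
SELECT src
SCAN FOR !DELETED()
   SCATTER MEMVAR MEMO
   SELECT newtable
   APPEND BLANK
   GATHER MEMVAR MEMO  && each memo written here may ask the OS for more fpt clusters
   SELECT src
ENDSCAN
* The new dbf and fpt grow in alternating small requests, so their
* clusters end up interleaved on disk.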

That's, at least, how it worked in DOS days, but I doubt it's much different today. Maybe the allocation goes in bigger chunks nowadays, but I think the table is still grown cluster by cluster.
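
For a rough feel of how many allocation requests that means, something like this - ADIR() is real, but the file name and the 4 KB cluster size are just assumptions (check your own drive):

LOCAL laFile[1], lnClusters
IF ADIR(laFile, "mytable.fpt") > 0
   lnClusters = CEILING(laFile[1,2] / 4096)  && assuming 4 KB clusters
   ? "Roughly", lnClusters, "cluster requests if grown one cluster at a time"
ENDIF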

back to same old

the first online autobiography, unfinished by design
What, me reckless? I'm full of recks!
Balkans, eh? Count them.