Fragmentation - why it is usually good for VFP native da
Message
From: 19/01/2003 13:10:05
To: Mike Yearwood, Toronto, Ontario, Canada (19/01/2003 12:04:17)
General information
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax (Miscellaneous)
Thread ID: 00742043
Message ID: 00743277
Views: 18
Hiya Mike,
>Hi Jim
>
>This is all very interesting. I think it was Norton Utilities that had a feature where you could put certain files at the "end" of the drive so there would be less chance of fragmentation. You said VFP does the PACK in RAM and writes out segments as the RAM is filled. Does playing with SYS(3050) affect this? I mean, can VFP's memory footprint be increased so it has a better chance of writing out one contiguous piece?

I don't know, but my guess would be yes, SYS(3050) could affect it.
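Something along these lines is what I have in mind - untested, and the 256 MB figure is purely illustrative, not a recommendation:

```foxpro
* Sketch only: raise VFP's foreground memory buffer before a PACK,
* on the theory that a larger buffer lets VFP write fewer, larger chunks.
lcOldSize = SYS(3050, 1)            && current foreground buffer size (returned as a string)
= SYS(3050, 1, 256 * 1024 * 1024)   && request a larger buffer (illustrative value)
USE MyTable EXCLUSIVE               && table name is made up
PACK
USE
= SYS(3050, 1, VAL(lcOldSize))      && restore the previous setting
```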
I had a table, deliberately fragmented (USE...SHARED), that ended up with the following fragment counts:
DBF - 1,995
CDX - 367
FPT - 3,781

After deleting the last record of the DBF I ran a PACK and the resultant files were:
DBF - 0 (contiguous)
CDX - 19
FPT - 3,781

This on a DBF of 143,067KB, CDX of 24,207KB and FPT of 203,125KB. The system, running VFP7SP1 and with IE and Outlook in use, has 512 meg RAM.
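The sequence I ran was essentially this (a sketch; the table name and path are made up). The DELETE is deliberate: as noted further down in this thread, PACK only does its internal copy/REINDEX if at least one record is dropped.

```foxpro
* Rough sketch of the test sequence described above.
USE C:\Data\BigTable EXCLUSIVE
GO BOTTOM
DELETE      && drop the last record so PACK really copies and reindexes
PACK        && rewrites the DBF and CDX; the FPT is only rewritten in place
USE
```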

But I'm not too fond of a sequence of operations like this on several DBFs. Sure, you can end up with defragmented files (except .FPTs) but there is no real control over where various pieces of each end up relative to each other.

Now I fully realize that we all make use of such operations on a regular basis and that we don't really have much choice at this time. But what I hope comes out of all this is an appreciation that we need more precise control capabilities as regards both the placement and fragmentation of files. And that people will start asking MS for it.

cheers

>
>>>>>I think this should just be better documented in the Help files. So, when working at the level of a single record, leave it as is. When doing time-consuming data processing, do a PACK first, and then proceed.
>>>>
>>>>I agree that we need more/clearer information in the Help. But you may be surprised (as I was) that a PACK or PACK MEMO does NOT result in a defragmented .FPT file! It does get de-bloated, I assume, but the Disk Defragmenter "Analyze" shows that it remains fragmented (though with fewer fragments) after the operation.
>>>>
>>>
>>>This is because PACK just makes a copy of the file on disk. If the free space used to store the copy is unfragmented, the resulting file will be unfragmented too. VFP does not do any specific control of this; it happens at the system level.
>>
>>No, it is not "just" controlled at the system level - VFP has a big influence on the result (how much it is fragmented). This is because, with VFP having the files (DBF, CDX and FPT) open EXCLUSIVE, it knows that no one else can have access. So it can - and does - withhold any WRITE until either it runs out of RAM or hits end-of-file. So if the DBF or CDX fits totally in RAM, then the whole file will be written as a single fragment. If it is too big for RAM, it will be written in a few large chunks.
>>
>>>
>>>>>
>>>>>In addition, in many of our production applications we have automated administrative/maintenance tools that run a PACK and a full REINDEX of every table, check for data integrity and corruption issues, etc. I think if MS made such a utility part of the tools for the next version of VFP, it would be VERY valuable.
>>>>
>>>>Yes, I agree. But this is a separate issue, and be aware that a PACK and/or a full REINDEX will result in a defragmented DBF and CDX (just not the FPT). The PACK will do so, as I learned recently, only if at least 1 record was deleted during the copy operation it does internally. If no records are dropped, then the copy of the table is simply deleted and the internal REINDEX is skipped. However, the FPT processing is still performed.
>>>>
>>>
>>>I did not know this.
>>>As for reindexing, I meant a complete drop of the CDX file and re-creation of the indexes, as many such tools usually do to avoid index corruption issues.
>>
>>Yes, well a reindexing like that will still be as contiguous as possible, because it too operates as described above for the CDX.
>>
>>>
>>>I think it would be extremely good if MS would place (at the system level) the DBF and other files into a defragmented area after a PACK/REINDEX.
>>
>>This it does not have control of, and I think the only way to be sure of achieving that is to defragment the volume, but only after first moving the subject DBFs and CDXs OFF that volume, PACKing them there, and then copying them back.
>>
>>>
>>>In addition, you can schedule a defragmentation of the hard drive each night, for example, just after the VFP database maintenance tools have run. This should take care of the issue. Anyway, it would be extremely good to have all such tools in a single box, or at least to have this issue better documented.
>>
>>Well, if this is done, then there is a chance that some of the VFP files will end up very far from their other pieces. For instance, a .CDX could be near the start of the volume and the DBF/FPT in the big free area.
>>
>>By the way, using a utility called FileMon I was able to confirm that VFP does NOT make a copy of the FPT when it runs PACK or PACK MEMO. It simply re-writes parts of the existing file.
>>
>>cheers