Windows systems - is file fragmentation bad?
General information
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax, Miscellaneous
Thread ID: 00736741
Message ID: 00737831
Views: 24
>George,
>
>Well, the more interesting thing to keep in mind is that FP and VFP tables will typically be very fragmented just as a result of day-to-day inserts.
>And even after the volume is defragmented, that fragmentation starts again, and all subsequent inserts end up in fragments until the next defragmentation is done.
>
>Now add to this that defragmentation, at least as performed by the utility delivered with Win2000 and WinXP, does all of its work on a "best fit" basis ONLY. It considers no other factors (directory residency, creation date, last update date, file name similarities) in making all the files contiguous.
>So there is a good chance that related tables can be dispersed all over the HD (partition)! This can be detrimental to application performance. In fact I suspect just that to have occurred in one of the tests described in the other thread mentioned in my first reply.
>
>One final thing is worth considering - both VFP and Windows itself cache much of the data that is ever transferred from the HD to the system. In fact the OS will keep such data in RAM and remember that it is there even after the application that used it has shut down! Until, of course, some other application needing to read its own data comes along. That's why we find we have to re-boot between benchmark tests of VFP - just shutting down VFP and restarting it is not enough, because the data it last used remains available to the restarted instance of VFP. To me that's really remarkable!
>
>As Walter has said (and I too), all this is academic because we have no way to control any of it, save running (or not) the defrag utility from time to time, with its MINIMAL control. By the way, I would bet that none of this applies to a RAID5 configuration. I worked on a large application with such a disk complex; performance was terrific, and we never defragged in that case.
>
>cheers

Jim,

I hate to be contrary, but I think there are some things we can do to at least minimize fragmentation. By setting up solid file maintenance routines and running them on a regular basis, fragmentation of all files can be kept to a minimum.

I don't think many would argue that periodically re-building indexes from scratch is a bad idea; in fact, it's a good one. As noted in your other thread (item 6, I believe), this causes the OS to attempt to create the new file in a contiguous block. A sketch of what I mean follows.
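
A minimal sketch of that kind of rebuild, assuming a table named CUSTOMER with tags on cust_id and company (the table and tag names are only illustrations):

USE customer EXCLUSIVE
DELETE TAG ALL                        && discard the old, possibly bloated CDX
INDEX ON cust_id TAG cust_id          && re-create each tag from scratch;
INDEX ON UPPER(company) TAG company   && the OS writes a brand-new CDX file
USE

Because the old CDX is thrown away rather than rebuilt in place, the OS gets a chance to allocate the new index file contiguously.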

One admittedly older idea is to periodically sort the table into its natural order. By that I mean that if the table has an index on, say, a style number, periodically sorting the table on style number puts it in its natural order. You then delete the old file, rename the new one, and re-build the index, and the result is as stated above. Something along the lines of the sketch below.
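
Roughly (a sketch only, assuming a free table STYLES.DBF with no memo fields; the table, file, and tag names are purely illustrative):

USE styles EXCLUSIVE
SORT TO styles_tmp ON style_no        && write the rows out in "natural" order
USE                                   && close the original table
DELETE FILE styles.dbf                && discard the old, fragmented copy
DELETE FILE styles.cdx                && and its structural CDX (skip if none exists)
RENAME styles_tmp.dbf TO styles.dbf
USE styles EXCLUSIVE
INDEX ON style_no TAG style_no        && rebuild the index on the fresh file
USE

Since both the sorted DBF and the new CDX are freshly created files, the OS again tries to write each one in a contiguous block.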

Naturally, there would be exceptions, but the overall result ends up being improved performance.
George

Ubi caritas et amor, Deus ibi est (Where charity and love are, there God is)