Windows systems - is file fragmentation bad?
General information
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax, Miscellaneous
Thread ID: 00736741
Message ID: 00737695
Views: 21
George,

Well, the more interesting thing to keep in mind is that FP and VFP tables will typically be very fragmented simply as a result of day-to-day inserts.
And even after the volume is defragmented, the fragmentation starts right back up, so all subsequent inserts go into fragments until the next defragmentation is run.
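
Just to make the point concrete, here is a minimal sketch of the kind of day-to-day activity I mean (the table and field names are made up). Because each INSERT grows the .dbf (and any .cdx/.fpt) a little at a time, the new extents of the two tables get interleaved on a busy volume and both files end up fragmented:

* Hypothetical tables, for illustration only
CREATE TABLE customer (cname C(30))
CREATE TABLE orders (cordno C(10))
LOCAL i
FOR i = 1 TO 10000
    * Each pair of inserts extends both files by a small amount,
    * so their newly allocated clusters get interleaved on the volume.
    INSERT INTO customer (cname) VALUES ("Customer " + TRANSFORM(i))
    INSERT INTO orders (cordno) VALUES (TRANSFORM(i))
ENDFOR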

Now add to this that defragmentation, at least as performed by the utility delivered with Win2000 and WinXP, does all of its work on a "best fit" basis ONLY. It considers no other factors (directory residency, creation date, last update date, file name similarities) in making all the files contiguous.
So there is a good chance that related tables can end up dispersed all over the HD (partition)! This can be detrimental to application performance. In fact I suspect exactly that occurred in one of the tests described in the other thread mentioned in my first reply.

One final thing is worth considering: both VFP and Windows itself cache much of the data that is transferred from the HD to the system. In fact the OS will keep such data in RAM, and remember that it is there, even after the application that used it has shut down - until, of course, some other application needing to read its own data comes along. That's why we find we have to re-boot between benchmark tests of VFP: just shutting down VFP and restarting it is not enough, because the data it last used remains available to the restarted instance of VFP. To me that's really remarkable!
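
You can see the caching effect in the timings themselves. A quick sketch (table and field names are hypothetical): the second run of the very same query comes back far faster because the data is already in the Windows cache, and even quitting and restarting VFP between the two runs does not change that - only a re-boot gives a true "cold" figure:

* Hypothetical benchmark sketch - mytable/somefield are made-up names
LOCAL t1, t2
t1 = SECONDS()
SELECT * FROM mytable WHERE somefield = "X" INTO CURSOR csrCold
? "First (cold) run:   ", SECONDS() - t1
t2 = SECONDS()
SELECT * FROM mytable WHERE somefield = "X" INTO CURSOR csrWarm
? "Second (cached) run:", SECONDS() - t2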

As Walter has said (and I have too), all this is academic because we have no way to control any of it, save running (or not) the defrag utility from time to time, with its MINIMAL control. By the way, I would bet that none of this applies to a RAID5 configuration. I worked on a large application with such a disk complex; performance was terrific and we never defragged in that case.
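
(If you do want to schedule it, WinXP at least lets you run an analysis-only pass from the command line - as far as I know Win2000 only ships the MMC snap-in - something like:

defrag c: -a

but that is about the full extent of the control you get.)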

cheers

>>George,
>>SNIP
>>>
>>>I'm well aware of the premise of this thread. The answer is still the same: fragmented files are not, and cannot be, more efficient than those that are not, for the reason I stated. If you want to know the exact reasons why, then post back.
>>
>>I just completed some testing on this today, and though I'm still thinking through all the results, it is pretty clear that your statement above is not ALWAYS true.
>>
>>Have a look at thread #737567 for (limited) details.
>>
>>cheers, and Happy New Year
>
>Jim,
>
>As programmers we are compelled always to examine the worst case scenario. With any fragmented file the worst case is the drive's maximum seek time per cluster read. In reality, however, we know that this isn't the case, but still the best we can hope for would be the average seek time per cluster read.
>
>With an unfragmented file, however, the worst possible case would be that each cluster read would result in the average seek time. In this case, the reality is probably closer to the minimum seek time.
>
>While it may not always be the case that an unfragmented file results in faster retrieval, the instances where it does not are the exception, not the rule. As programmers, we design to the rule.
>
>In regards to SQL Server, it's really an apples and oranges comparison, since SQL Server by-passes the NT/2000/XP file handling and deals directly with the drive.