Windows systems - is file fragmentation bad?
Message
From: 01/01/2003 09:51:30
To: 31/12/2002 08:03:05

General information
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax, Miscellaneous
Thread ID: 00736741
Message ID: 00737148
Views: 21
Hi Jim,

I'll have more to say on the whole topic later today or tomorrow, but I do want to comment on the IDC report...

>Seems to be a good one, results of a NSTL test commissioned by Executive Software (Diskeeper). VERY interesting in the gains derived in NT vs Win2k. I was surprised...

Yes, I was surprised too at that.
Now the whole premise of the recommendation is that the HD got so full that regular contiguous space was exhausted and thereafter the system was forced to fragment everything.
It is also noteworthy that the HD capacities tested are on the low side and that workstations seem to have a single HD that is partitioned. There was also no mention of how the extra HDs in server#2 were used, if at all.

In shops where I have worked (networked workstations), the local HDs saw little use for 'permanent' storage. Most files were kept on network drives, and the local HD was used for working on such files and for any related temporary files created while doing so (Save was always back to the network drive).
The costs of hardware upgrades are also well out of line with today's prices.

All in all, I believe they grossly overstated the case. It reads more like a sales pitch for Diskeeper than anything else. I doubt that a workstation's disk ever gets that full, and it can be manually de-junked if it ever does. I also expect that a network administrator/manager would stay on top of the server HD space situation and take action well before all contiguous space was used up.

But I will agree with one thing - it does prove that fragmentation is bad overall. I'm still leaning towards the idea that it is good for VFP tables and their applications. When I think about it, it had better be, because every file we create (DBF, CDX, FPT) is ALWAYS FRAGMENTED until a defrag is done, and even after a defrag, all subsequent record inserts are fragmented too.

I'm trying to dream up some simple tests to check this out and measure, at least roughly, the impact of files as originally laid down (fragmented) versus defragged.
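As a starting point, a rough test along those lines can be sketched in Python (not VFP): one file is grown through many separate append cycles, the way DBF/CDX/FPT files grow as records are inserted, and a second file of the same size is written in a single pass, the way a defragged copy would be laid down. Everything here is an assumption for illustration: the file names, the CHUNK and COUNT sizes are arbitrary, actual on-disk fragmentation depends entirely on the filesystem and free-space layout, and OS caching can easily swamp any difference unless caches are flushed between write and read.

```python
import os
import time
import tempfile

CHUNK = 8 * 1024   # arbitrary 8 KB write unit, standing in for a batch of records
COUNT = 1000       # ~8 MB per test file; sizes are illustrative only

def write_appended(path):
    """Grow the file via many open/append cycles, like incremental record inserts.
    On a busy disk this layout is more likely to end up fragmented."""
    for _ in range(COUNT):
        with open(path, "ab") as f:
            f.write(b"x" * CHUNK)

def write_single_pass(path):
    """Write the whole file in one pass, like a freshly defragged copy."""
    with open(path, "wb") as f:
        f.write(b"x" * CHUNK * COUNT)

def timed_read(path):
    """Return the wall-clock time for one sequential read of the file."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    return time.perf_counter() - start

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        a = os.path.join(d, "appended.dat")
        b = os.path.join(d, "onepass.dat")
        write_appended(a)
        write_single_pass(b)
        # Treat these numbers as indicative only; cache effects dominate
        # unless the OS cache is flushed between writing and reading.
        print("appended :", timed_read(a))
        print("one-pass :", timed_read(b))
```

Both files are byte-identical in content and size, so any timing difference comes from layout and caching rather than the data itself.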

cheers


>
>>Great Jim. I'll read that one over later today.
>>And I continue to marvel at how Google searches rarely come to MY mind. I'll have to work on that.
>>
>>Happy New Year, Jim
>
>And the same to you!