Macro substitution
From: 28/12/2002 08:52:20
To: 28/12/2002 01:08:03
Forum: Visual FoxPro
Category: Coding, syntax & commands, Miscellaneous
Thread ID: 00735756
Message ID: 00736275
Views: 25
SNIP
>
>Hmmm... I'd be hesitant to extrapolate results you got on a mainframe "many years ago" to modern environments.

Yes, that's a fair concern. But on the other hand, we *did* have caching on HDs, we *did* have two sets of heads on each HD, and we could control our block sizes and vary buffering per dataset (file). We could not only control fragmentation but also place datasets at specific locations on a disk if we wanted to.

Let me ask the opposite: why do people say that contiguous space is best for everything, all the time? Can you say why in the following context?
All tables used regularly (read, read/update or insert) in the app, all on the same HD with a 4K cluster size (8 sectors):
table   space used
a1.cdx  40 meg
a1.dbf 350 meg
a1.fpt 700 meg

b2.cdx  70 meg
b2.dbf 750 meg

c3.cdx 120 meg
c3.dbf 600 meg
c3.fpt 990 meg

50 misc. other tables totalling 200 megs of space.
Let's assume their names start with "d" and higher :)
With fragmented files, b2.cdx and b2.dbf records would be within a few clusters at most from a1.cdx/a1.dbf/a1.fpt records and from c3.cdx/c3.dbf/c3.fpt records. The chances that RELATED records would be in close proximity are high.

Now with contiguous files (defragged to the sequence/layout shown above), the middle record of b2.cdx is about 410 megs away from its b2.dbf record; that record is around 1480 megs from its related a1.cdx record, which is 195 megs from its a1.dbf record, which in turn is 525 megs from its a1.fpt record. It is then 820 megs from its related c3.cdx record, which is 360 megs from its c3.dbf record, which is 795 megs from its c3.fpt record.

In both cases there would be a minimum of 7 head movements.

In the case of fragmented files, that head movement would cover a few megs of data at most.
In the case of contiguous files the accumulated head movement amounts to covering 4585 megs.
Which sounds better to you?
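The head-travel arithmetic above can be sanity-checked with a short script. This is just a sketch under the post's own assumptions: the files are laid out contiguously in the order listed, each access targets the middle record of its file, and the chain of related files is the one the post walks through. The total it produces lands in the same multi-gigabyte ballpark as the 4585 megs quoted (a couple of individual hops come out differently, presumably because the post measures them from different starting records).

```python
# Sanity check of the contiguous-layout head-travel figures (sizes in megs).
# Assumes the defragged disk lays the files out contiguously, in the order listed.
files = [
    ("a1.cdx", 40), ("a1.dbf", 350), ("a1.fpt", 700),
    ("b2.cdx", 70), ("b2.dbf", 750),
    ("c3.cdx", 120), ("c3.dbf", 600), ("c3.fpt", 990),
]

# Offset of each file's middle record from the start of the layout.
midpoint = {}
offset = 0
for name, size in files:
    midpoint[name] = offset + size / 2
    offset += size

# Follow the post's chain of related records and accumulate head travel.
chain = ["b2.cdx", "b2.dbf", "a1.cdx", "a1.dbf", "a1.fpt",
         "c3.cdx", "c3.dbf", "c3.fpt"]
travel = sum(abs(midpoint[b] - midpoint[a])
             for a, b in zip(chain, chain[1:]))
print(f"total head travel: about {travel:.0f} megs")
```

With fragmented files, by contrast, the pieces of all these tables are interleaved, so each of those same seven hops covers at most a few megs.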

Possibly, in the old days, a fragmented file required additional head movements back to the file-system area to learn where the next piece was. *IF* that is the case, I've just gotta believe that NTFS (at least, and probably FAT too) has been improved by now to load/keep such information in RAM for any opened file (M/F systems did, even way back then <s>).