Two loop or not to loop?

From: 01/09/2004 15:31:49
To: 01/09/2004 14:56:11
Forum: Visual FoxPro
Category: Coding, syntax & commands / Miscellaneous
Thread ID: 00938414
Message ID: 00938519
Views: 33
Hi Thomas,

You raise one of my favourite topics... "Defragged as well?".

If records are just being added in this process, defragging should have zero effect.
Sure, the CDXs (for the existing records and, to a point, for new ones too) being fragmented could have some impact, but this should be marginal, considering that the applicable index nodes should be cached once they have been accessed the first time.

Certainly any new records added to any of the DBF, FPT or CDX files WILL OCCUPY FRAGMENTED SPACE once they are written.

In many circumstances defragging can have a detrimental effect on throughput, especially when traditional xbase commands are involved. SQL goes some way, at least per my simple observations, towards alleviating this problem. But in fact a DBF, a CDX and an FPT will only be in a defragmented state from the time the defrag is done *until* the first record beyond the current part-empty sector is written. ALL records after that will land in sector fragments.
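
To make the xbase-versus-SQL contrast concrete, here is a minimal sketch; the table and field names (src, dest, newdest) are made up for illustration, not taken from this thread:

    * Row-at-a-time xbase commands: each APPEND/GATHER touches
    * the DBF, FPT and CDX separately, record by record.
    USE src IN 0
    USE dest IN 0
    SELECT src
    SCAN
        SCATTER MEMVAR
        SELECT dest
        APPEND BLANK
        GATHER MEMVAR
        SELECT src
    ENDSCAN

    * Set-based SQL alternative: one statement, one pass, and the
    * result table is written out contiguously rather than piecemeal.
    SELECT * FROM src INTO TABLE newdest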


cheers
>Jos,
>
>SWAGing again:
>I believe processing would be faster if you write to each file in turn.
>Reason: the index file has to be updated for each record and more of the index could be cached.
>I think the size of the tables and the .cdx is the main factor here, since the machine seems well equipped. OTOH, since there is possible index degradation after ordered inserts, this might be less of a handicap than I assume.
>
>The basics (just to be sure...):
>Have you reindexed everything recently?
>Defragged as well?
>More than 30% free space on your disk(s)?
>
>
>To get a rough estimate:
>Back up your data (again <g>),
>Test 1: delete all indices and compare run times
>Test 2: delete all but a handful of records and compare run times
>Test 3: delete all but a handful of records, put the tables onto a RAM disk and compare run times
>
>Since the machine is no slouch: do you have more than one disk in it?
>If so, try distributing the writes to different disks: it sometimes does wonders <g>.
>
>Each disk can then write at its own max speed,
>giving you potentially n times (n = number of disks) the throughput of the slowest disk.
>If the disks vary much in speed,
>adjust the amount written to each accordingly.
>Roughly the effect of RAID 0, with less risk...
>
>Perhaps more ideas after more data from your side,
>and perhaps a dirty idea to cut the work of rewriting,
>at least for test purposes <g>
>
>rgds & doloopdidoop (couldn't resist)
>
>thomas
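
A minimal sketch of Test 1 from the list quoted above, timing the same inserts with and without index tags; the table and field names (mydata, val) are hypothetical, and this should only be run against a scratch copy of the data:

    * Exclusive access is required so the index tags can be dropped.
    USE mydata EXCLUSIVE
    lnStart = SECONDS()
    FOR lnI = 1 TO 100000
        INSERT INTO mydata (val) VALUES (lnI)
    ENDFOR
    ? "With CDX:   ", SECONDS() - lnStart

    DELETE TAG ALL          && drop every tag in the structural CDX
    lnStart = SECONDS()
    FOR lnI = 1 TO 100000
        INSERT INTO mydata (val) VALUES (lnI)
    ENDFOR
    ? "Without CDX:", SECONDS() - lnStart
    USE

It is a rough estimate only, since the second run appends to a table the first run has already grown, but a large gap between the two times points at the CDX maintenance as the bottleneck.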
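And a hedged sketch of the quoted multiple-disk idea, alternating inserts between two scratch tables on different physical drives; the paths and names are invented for illustration, and how much it helps depends on the OS's write caching:

    * Two scratch tables on different physical disks.
    USE c:\data\out1 IN 0 ALIAS out1
    USE d:\data\out2 IN 0 ALIAS out2
    FOR lnI = 1 TO 100000
        IF MOD(lnI, 2) = 0      && alternate the target on each record
            INSERT INTO out1 (val) VALUES (lnI)
        ELSE
            INSERT INTO out2 (val) VALUES (lnI)
        ENDIF
    ENDFOR
    * The halves can be recombined afterwards, e.g. with APPEND FROM.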