Need a better way to do this
Date: 06/02/2005 09:19:26
Forum: Visual FoxPro
Category: Coding, syntax and commands / Miscellaneous
Thread ID: 00983858
Message ID: 00984302
Views: 25
Hi Kevin,

Some numbers/a description of your hardware would help with visualizing <g>.
Your approach seems appropriate even for slow/busy LANs, as long as it is
not used to gather data automatically.

Just for kicks I'd try something along the lines of:
local laID[1], lcUserID, lnBatchNo, lnStart, lnRun
lnBatchNo = .nBatchNo              && presumably resolved by an enclosing WITH / object reference
lcUserID  = oApp.cUserId
* collect the IDs of the currently selected local records
select ID from (alias()) into array laID
select Resh
SET ORDER TO Id
lnStart = Seconds()
For lnRun = 1 To Alen(laID, 1)
   = Seek(laID[m.lnRun])
   REPLACE ;
       BatchNo WITH m.lnBatchNo, ;
       Used_By WITH m.lcUserID
next
Wait Window Str((Seconds() - m.lnStart) * 1000) + "ms for No. of Items = " + Str(Alen(laID, 1))
This eliminates any possible table-change/selection cost from the measurement
(which should only become problematic if there are other issues anyway...).

If that STILL takes too long, first comment out each field of the REPLACE separately
and measure again (on the assumption that only one of the fields is used in MANY indexes),
and also take a measurement with the REPLACE commented out completely, just to get the cost of the seeks alone. Repeat the runs to get stable numbers and post them <g>.

This is only to gather information: I'd have guessed that a SET RELATION based approach could be marginally faster, given your partial information about the data/sizes/locations, but that difference should amount to only a few ms. For only a few hundred records, mixing the approaches of "select(Resh)" and "replace ... IN" shouldn't be a problem either, even though it is unnecessary; you should settle on one way of addressing the Resh table.
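For illustration, here is a minimal sketch of what the SET RELATION variant could look like. It assumes your local records sit in a cursor I'll call LocalCur with a matching ID field, and that Resh has the "Id" tag used above; LocalCur is a placeholder name, not something from your app:

local lnBatchNo, lcUserID
lnBatchNo = .nBatchNo              && same sources as in the loop above
lcUserID  = oApp.cUserId

select Resh
SET ORDER TO Id

select LocalCur                    && hypothetical alias for your local table/cursor
SET RELATION TO ID INTO Resh       && each move in LocalCur seeks the matching Resh record

SCAN
   IF !EOF("Resh")                 && only touch Resh when the relation found a match
      REPLACE ;
          BatchNo WITH m.lnBatchNo, ;
          Used_By WITH m.lcUserID ;
          IN Resh
   ENDIF
ENDSCAN
SET RELATION TO

This saves the array and the explicit seeks, but as said, the difference should be a few ms at best.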

If you need to block changes in Resh even sooner (is a machine gathering the data?), you might try to integrate the REPLACE into the step where you build the local table: just before a record is copied to your local table, execute the REPLACE in a UDF. As long as the result set is Rushmore-prefiltered, your UDF should be called only for the 100 resulting records, keeping performance high.
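A hedged sketch of that idea, assuming the local table is built with a SQL SELECT whose other conditions are Rushmore-optimizable. The alias ReshUpd, the helper TagIt() and the Status filter are invented for illustration, not anything from your app:

* open Resh a second time so the UDF never moves the pointer the query is scanning with
USE Resh AGAIN ALIAS ReshUpd IN 0 SHARED
SET ORDER TO Id IN ReshUpd

SELECT * FROM Resh ;
    WHERE Status = "OPEN" ;                            && optimizable filter (placeholder)
      AND TagIt(Resh.ID, m.lnBatchNo, oApp.cUserId) ;  && called only for records passing the filter
    INTO CURSOR LocalCur

USE IN ReshUpd

FUNCTION TagIt(tnID, tnBatchNo, tcUserID)
   * mark the Resh record via the second alias while it is copied into the local set
   IF SEEK(m.tnID, "ReshUpd")
      REPLACE BatchNo WITH m.tnBatchNo, Used_By WITH m.tcUserID IN ReshUpd
   ENDIF
   RETURN .T.                                          && the UDF itself never filters anything out
ENDFUNC

Opening Resh AGAIN under a second alias keeps the REPLACE from disturbing the record pointer the query is using on the original alias.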

One other thing: you might reindex the Resh table if records were added to it "ordered", as this can lead to unbalanced index trees; if the index file shrinks noticeably afterwards, you might win a few cycles. Also take a look at the "physical" state of the Resh dbf/cdx: are those files heavily fragmented and/or compressed on the server? If so, I'd defragment, though there are differing opinions on this. If the server disk is filled close to capacity, things can slow down dramatically when other users hit the same disk: keep about 30% free for adequate performance. Monitor the network as well, since vfp is not the only possible culprit there <g>...
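If you want to see whether the cdx has bloated, a minimal maintenance sketch, assuming you can get exclusive access to Resh for a moment and that the cdx sits next to the dbf (the path is illustrative):

LOCAL laBefore[1], laAfter[1]
= ADIR(laBefore, "Resh.cdx")        && column 2 of the array holds the file size in bytes

USE Resh EXCLUSIVE
REINDEX                             && rebuilds and compacts every tag in the cdx
USE

= ADIR(laAfter, "Resh.cdx")
? "cdx size before/after:", laBefore[1,2], laAfter[1,2]

Some prefer DELETE TAG ALL plus recreating the tags with INDEX ON instead of REINDEX, since REINDEX reuses the existing index header; either way, the interesting number is how much the cdx shrinks.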

Feed us more data, please...

regards

thomas