Two loop or not to loop?
From: 02/09/2004 03:15:49
To: 01/09/2004 18:43:57
General information
Forum: Visual FoxPro
Category: Coding, syntax and commands, Miscellaneous
Thread ID: 00938414
Message ID: 00938658
Views: 27
Hi Jim,


>But I am trying to cover the "average" use/user, because I think the general "rule of thumb" regarding defragging is wrong, particularly in those cases (the majority I suspect) where other processes also consume free space on the system.

I am lucky to sometimes be in a position where it is cost effective to check even small perf differences (data mining [we need those results FAST!] or apps running at a few thousand seats [a few minutes spared for each person translate into enough budget for me <g>]). For "average" users that is not the case IMHO, since measuring/enhancing takes TIME. Here I follow simple rules of thumb:

1) Check indices and the programming metaphor.
2) Try slow/fast CPUs/disks; throw hardware at the problem: get faster/more disks, perhaps some RAM.
3) On servers, check for network problems; if none exist, goto 2.
4) Put the tables/apps in separate partitions, so you can experiment with less danger.
5) If you are working from NTFS, compress lookup/read-only tables.
6) Reindex often if machine-ordered inputs are happening (see the sketch after this list).
7) Automate "ordered defrags".
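
A minimal sketch of rule 6 in VFP, assuming a hypothetical table "orders" that can be opened exclusively; the tag names and key expressions are placeholders. Recreating the tags from scratch writes a compact, freshly balanced structural CDX:

* Rebuild the structural CDX of a table that receives machine-ordered inserts.
LOCAL lcTable
lcTable = "orders"                 && hypothetical table name
USE (lcTable) EXCLUSIVE
DELETE TAG ALL                     && throw away the bloated tag entries
INDEX ON order_id TAG order_id     && hypothetical key expressions
INDEX ON cust_id  TAG cust_id
USE

A plain REINDEX rebuilds the existing tags too, but dropping and recreating them is usually what leaves the smaller file, so it is the better candidate for an automated job (rule 7).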

One area where vfp makes things harder from my current point of view is the location of data:
You have fewer maintenance problems if everything in a dbc is in the same directory. In the old FPD/FPW days it was much easier to separate .dbf and .cdx, which in certain situations is great for performance but difficult for "hacking/xcopy" maintenance. In dbc-based situations I sometimes just use temporary indices on other disks (see the sketch below). [IMHO you could get better ROI for your efforts by targeting that area, even if I know you only from your posts <g>.]
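
A minimal sketch of that trick, assuming a hypothetical customer table and a D: drive as the "other disk"; the paths and the key expression are placeholders. A standalone .idx lives outside the dbc, so it can be created and discarded freely:

USE customer                                     && table stays with its dbc
INDEX ON UPPER(lastname) TO d:\tmpidx\cust_tmp   && standalone idx on another spindle (directory must exist)
* ... run the heavy SEEK / ordered processing here ...
SET INDEX TO                                     && detach the temporary index again
ERASE d:\tmpidx\cust_tmp.idx

The dbc and its structural CDX stay untouched, so the "hacking/xcopy" maintenance of the main directory still works.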

Regarding the administrators / OSs used: from a perf point of view on local machines it is good that vfp8 isn't supported on NT. NT is/was a beautifully stable system for development, but the caching on W2K and XP is greatly enhanced for machines with more than 512 MB of RAM (the actual threshold might be lower; I "verified" it only with runs at 160 and 512 MB on a system loading 70-90 MB at startup).

>Even printing on such systems consumes space, albeit temporarily. In these cases even "ordered defragging" will do little for the performance, likely still hurting it more than helping it.
Temp files have their own disk/partition in my setup schemas. Vacuum often <g>.
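
A minimal sketch of how to check and steer that in VFP; the D:\VFPTEMP path is a placeholder:

? SYS(2023)    && where VFP currently writes its temporary work files
* To move them, put a line like this into CONFIG.FPW (read at startup):
*    TMPFILES = D:\VFPTEMP
* SORTWORK, PROGWORK and EDITWORK can be pointed there as well, if your setup still uses them.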

>I would be very interested to learn, though, the utilities that offer "ordered defragging".
>I e-mailed one vendor a couple of years ago asking them to include features for:
>1) defrag putting named files contiguously to the outside of the platter, with unnamed ones legitimately going anywhere but preferably towards the inside so that free space was central.
>2) purposefully fragment named files, allowing "grouping" so that certain files were fragged together and allowing specification of # sectors to contiguously fill for each file in the "group".
>3) control over the positioning of free space relative to named files per #1 or #2 above.
I doubt you will succeed here: too specific is your quest <g>.

regds

thomas