10 Things to Avoid in VFP Development
Message
From: 31/12/1999 21:31:08
To: Walter Meester, Hoogkarspel, Netherlands (31/12/1999 13:22:13)
General information
Forum: Visual FoxPro
Category: Other, Miscellaneous
Thread ID: 00310318
Message ID: 00311078
Views: 25
>Try this for a 1,000,000-record table (same argument you used). Since it is the norm that the tables are NOT filtered, and they *can* filter them. Also, these main tables seldom reach such amounts of records. (Most mid-sized companies don't have 1,000,000 clients, articles, or employees.)
>

1M records is not at all unreasonable for a mid-sized business, at least from my POV. I'll use the company I work for - Weatherhill, a relatively small niche publishing house also providing distribution services for other companies, doing $6-8M in gross business annually - not a giant by any means. At an average list price of $22.95, that'd be ~260K books shipped annually if we sold at list, and typical discounts run in the 40% range. Given that the average order is 7 books, representing 3 line items, that's over 60,000 invoices annually, and over 100K line items. We have more customer orders and inquiries than invoices, and that's before counting returns. We have accounting requirements to keep at least 3 years of data on-line; we keep about twice that available, because we do statistical analysis using line item detail records, and we have unique sales requirements where, in some circumstances, books may be returnable up to 5 years after they're sold.
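The arithmetic above is easy to sanity-check. This is a rough sketch using only the figures quoted in the paragraph (gross revenue, average list price, typical discount, order size); the exact variable names are mine:

```python
# Back-of-the-envelope check of the volume figures above, using the
# numbers quoted in the text. The 40% discount is applied to estimate
# how many units are actually shipped at the discounted price.
gross_low = 6_000_000          # low end of annual gross, in dollars
avg_list = 22.95               # average list price per book
discount = 0.40                # typical trade discount
books_per_order = 7
line_items_per_order = 3

avg_net = avg_list * (1 - discount)           # ~$13.77 actually billed per book
books = gross_low / avg_net                   # ~435K books shipped
invoices = books / books_per_order            # ~62K invoices
line_items = invoices * line_items_per_order  # ~187K line items

print(f"{books:,.0f} books, {invoices:,.0f} invoices, {line_items:,.0f} line items")
```

Even at the low end of the revenue range, the "over 60,000 invoices" and "over 100K line items" claims hold comfortably.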

In the shipping industry, it's not unusual to deal with mapping tables based on 5 digit postal code to 5 digit postal code rate mapping; there are over 78K valid 5 digit postal codes. Typical rate charts from the national-level trucking firms and LTL freight carriers run in the 800K-1M record range. I provide rating systems for some of them - I may have 10-15 national-level carriers on-hand now, and a similar number of regionals. Even end-users with a single warehouse will have 70-80K records per carrier in the base rating tables they use when arranging shipments, and typically they'll have business with 4-8 different carriers.
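The same kind of sizing applies to the rating tables above. A quick sketch, again using only the figures quoted (78K valid ZIP codes, 70-80K base rating records per carrier, 4-8 carriers per shipper):

```python
# Rough sizing of base rating tables for a single-warehouse shipper,
# using the figures quoted in the text. Per-carrier counts sit a bit
# below the full 78K because not every destination ZIP is rated.
valid_zip5_codes = 78_000      # valid 5-digit postal codes
per_carrier = 75_000           # ~70-80K base rating records per carrier
carriers_low, carriers_high = 4, 8

total_low = per_carrier * carriers_low    # 300,000 records
total_high = per_carrier * carriers_high  # 600,000 records
print(f"{total_low:,} to {total_high:,} rating records on hand")
```

So even a single-warehouse end user is carrying a third of a million to well over half a million rating records before any rate-chart data from national carriers is added.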

With the advent of mass mailing and emailing, it's not unusual for people with even one man shows to have lists of hundreds of thousands of potential contact names and addresses. Charities, churches, political organizations, the guy selling Ronco(tm)'s latest handy dandy widget...

You need to open your eyes and look at the world around you. We live in an era where storing and examining huge data stockpiles has become a routine way of doing business for a lot of people.

>It still depends on the selectivity of the filter. If only a few rows are going to be displayed, yep, you've got a performance problem. But if you want to display all 1,000,000 records, p-views are definitely not an attractive alternative. Since the UI design says that all rows must be accessible by default, SET FILTER is the better choice.

If I had an end-user who really needed to deal with all 1M records, or even a significant fraction of them, through a browse a dozen or two records at a time, I'd be looking into psychiatric care for the individual, or at least some general help clarifying the data needed to do the task at hand. I have situations where end users work with candidate record sets that may be that large, but when they work with them through a browse or grid, they're either dealing with a distillation of the data set, or they're making discrete shifts in context, so that at any given moment they see a much smaller recordset with a much narrower, much more clearly defined scope of reference, rather than a significant fraction of the entire domain. YMMV. P-views and SQL SELECTs work well for my applications: they let the user deal with convenient sets of records, they scale well, and they're easily moved to a backend environment when data set size or network performance becomes a major issue.
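The parameterized approach described above can be sketched in any SQL environment. This Python/sqlite3 fragment is a stand-in for a VFP parameterized view against a backend (the table, column, and function names are invented for the example): the scope and page size are parameters, so only the small, clearly defined record set the user is currently working with comes across, instead of the full table being filtered client-side:

```python
import sqlite3

# Stand-in backend: a customers table with far more rows than anyone
# should ever page through in a grid.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, region TEXT)"
)
conn.executemany(
    "INSERT INTO customers (name, region) VALUES (?, ?)",
    [(f"cust{i:06d}", "NE" if i % 4 == 0 else "SW") for i in range(100_000)],
)

def fetch_page(region, limit=25, offset=0):
    """Analogue of a parameterized view: scope (region) and page size are
    parameters, so only a narrow, well-defined result set is returned."""
    return conn.execute(
        "SELECT id, name FROM customers WHERE region = ? "
        "ORDER BY name LIMIT ? OFFSET ?",
        (region, limit, offset),
    ).fetchall()

page = fetch_page("NE")   # a couple dozen NE customers, not 100K rows
print(len(page), page[0][1])
```

Shifting context - a different region, the next page - is just another call with different parameters, which is exactly the discrete, narrow-scope behavior argued for above.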

Ad-hoc data distillation is one area where backends pay off very well. Rather than relying on the programmer to understand the exact patterns of use and reference to make performance acceptable, a backend can look at the history of user inquiries and use it as a basis to modify the storage and indexing of data to better fit required behavior, without programmer intervention. SET FILTER doesn't address this issue at all. Things like OLAP, which can summarize and reorganize the data according to user specifications, in many cases operating behind the scenes in an off-line environment, make SQL approaches to data manipulation very attractive.
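The point about adapting storage to observed inquiry patterns can be illustrated with any SQL engine. Here is a sketch in Python/sqlite3 (the table and index names are invented for the example): once an index matching the common inquiry exists, the optimizer's plan switches from a full table scan to an index search, with no change to the application code issuing the query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE invoices (id INTEGER PRIMARY KEY, customer TEXT, total REAL)"
)

def plan(sql, *params):
    """Return SQLite's query-plan description for a statement."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql, params).fetchall()
    return " ".join(row[-1] for row in rows)

query = "SELECT * FROM invoices WHERE customer = ?"

before = plan(query, "ACME")   # no suitable index yet: full table scan
conn.execute("CREATE INDEX idx_inv_customer ON invoices (customer)")
after = plan(query, "ACME")    # now satisfied via the index

print(before)   # ... SCAN ...
print(after)    # ... USING INDEX idx_inv_customer ...
```

The query text never changed; the storage layer did. That separation - tuning access paths behind a stable query interface - is precisely what a SET FILTER approach, bound to physical table order and client-side evaluation, cannot offer.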

I'm sure some people will continue to use xBASE effectively; for me, and the people I work with, it doesn't make sense, and it won't put us in a position to exploit the information technologies now reaching the marketplace.

If I had to page down through a million records in a grid or browse, my fingers would cramp up if nothing else.
EMail: EdR@edrauh.com
"See, the sun is going down..."
"No, the horizon is moving up!"
- Firesign Theater


NT and Win2K FAQ | cWashington WSH/ADSI/WMI site
MS WSH site | WSH FAQ Site
Wrox Press | Win32 Scripting Journal
eSolutions Services, LLC

The Surgeon General has determined that prolonged exposure to the Windows Script Host may be addictive to laboratory mice and codemonkeys