Optimizing Query
Message
08/11/2001 04:18:19
General information
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexes and SQL syntax
Miscellaneous
Thread ID: 00578767
Message ID: 00578951
Views: 25
Daniel,

Everything you said makes sense, and I would do it this way, but unfortunately I cannot. For the Weekly and Monthly kinds of selects we implemented a Data Warehouse idea, so they run fast. This one is an On Demand (one-time) job, so this query will execute only once. In our system we have a Query application, where each query definition is stored in a MetaFile. That's why I cannot split this query into pieces, even if I'd love to. My manager told me that the query without the zipcode restriction (it was a bug in my app, which I fixed) took 4 hours to finish. I don't expect the performance to be much better... :(

>Nadya,
>
>If I understand you correctly, the result of your query may be 800,000 records. Since you do a SELECT * INTO TABLE, there may be a lot of data to write to disk. This takes time and may not necessarily be an optimization problem when you're talking about so much data.
>
>It's difficult to propose a solution without knowing the data. But here are some ideas on how to improve query performance:
>
>- instead of one big SQL SELECT, split it into several smaller queries. I haven't benchmarked this since FPW 2.6, but multi-table joins used to have a *very* bad impact on query performance. It was always faster to run several queries, or to use INLIST() in the WHERE clause, than to run one big multi-table SELECT.
>
>- if you know the data and know which selection (either through a JOIN or through a WHERE clause) reduces the result set the most, run that query first and use the reduced result set in your subsequent queries.
>
>- SELECT only key fields (PKs and FKs) in your first queries to get pointers. Get the necessary data from the other tables once you have your final result set. This may dramatically reduce the number of records and volume of data to be read and written.
>
>As I said, this requires some knowledge of the data but hopefully gives you some ideas :)
>
>HTH
>>>I'll try at home with the test table. The temp table has 110 records. The main table has 5 million records. I think the resulting set would be huge, maybe 800,000 recs. This is just a plain test; the live query uses 7 joins to tables, each of which has ~3-5 million recs... :( <<
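To make Daniel's first suggestion concrete: when the filter is a short, fixed list of values, INLIST() in the WHERE clause can stand in for a join to a small lookup table. A minimal sketch in VFP SELECT - SQL, where orders and status are hypothetical stand-in names (the real schema isn't shown in this thread):

* Hypothetical names; replaces a join to a tiny status lookup table.
SELECT * ;
  FROM orders ;
  WHERE INLIST(status, "OPEN", "HOLD", "BACK") ;
  INTO CURSOR curOpen

INLIST() only accepts a small, fixed number of comparison values (about two dozen), so it fits genuinely short lists, not arbitrary join conditions.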
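For the second and third suggestions, the pattern is: run the most selective filter first, carry only keys through the middle steps, and widen to full rows at the very end. Again a sketch with hypothetical names (customers, orders, cust_id, order_id, zipcode):

* Step 1: most selective filter first, key fields only
* (assumes zipcode is a character field).
SELECT cust_id ;
  FROM customers ;
  WHERE zipcode = "10001" ;
  INTO CURSOR curKeys

* Step 2: drive the big table off the small key set instead of
* joining the multimillion-record tables to each other directly.
SELECT order_id, cust_id ;
  FROM orders ;
  WHERE cust_id IN (SELECT cust_id FROM curKeys) ;
  INTO CURSOR curOrders

* Step 3: fetch full-width rows only for the final, reduced set.
SELECT o.order_id, c.* ;
  FROM curOrders o ;
  INNER JOIN customers c ON c.cust_id = o.cust_id ;
  INTO TABLE result

Each intermediate cursor stays small, so the joins only ever see the reduced key set, and the only full-width write to disk is the final INTO TABLE.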
If it's not broken, fix it until it is.


My Blog