Finding Duplicate Records Across 21 tables
Message
From: 02/11/1998 10:33:21
To: 02/11/1998 10:27:15
General information
Forum: Visual FoxPro
Category: Coding, syntax and commands
Subcategory: Miscellaneous
Thread ID: 00153490
Message ID: 00153514
Views: 24
>>>>>I work for an auditing firm where we are looking for duplicate financial transactions spanning approx. 50M records over 21 separate tables. We are looking for matches based on, say, ABS(GROSS) and INVOICE across all 21 tables. Basically, each record in each table has to be checked against the current table and against each record in all the other tables. The results are compiled into a separate table with a blank record between groupings. Currently this takes 72 hours. There's got to be a better way! Any suggestions?
>>>>
>>>>Would it be faster to UNION ALL the tables together and then run GROUP BY + HAVING queries?
>>>
>>>Each table is ~1.5G in size, ~30G in all. Wouldn't the UNION ALL put me over the 2G file limit?
>>
>>You said 50M, so I assumed that was the size. What are you using now?
>
>50M referred to the number of records. Currently using 21 tables with ~2M records in each (~1.5G file size each). Simply running procedural code that checks one record against another.

I have a feeling you may still want to consider cascading UNIONs, but concatenating only the key columns that actually need to be checked for duplicates (here, INVOICE and ABS(GROSS)). That gives you a chance to stay within 2G, or at least to reduce the number of intermediate tables to 2-3 (you can ORDER BY some key value to eliminate the need for cross-checking afterward). Then you run GROUP BY + HAVING against either the single resulting table, or cascade it against the 2-3 tables.
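
For illustration, here is a minimal sketch of that approach in VFP SQL. The table names T01..T21, the key fields INVOICE and GROSS, and the cursor names ALLKEYS, DUPKEYS and DUPDETAIL are all assumptions based on the description above, so adjust them to the real schema:

* Step 1: extract only the match keys from each table, tagged with the
* source table and record number, so the combined file stays small.
SELECT "T01" AS src, RECNO() AS recid, ;
       invoice, ABS(gross) AS absgross ;
  FROM t01 ;
 UNION ALL ;
SELECT "T02", RECNO(), invoice, ABS(gross) ;
  FROM t02 ;
  INTO TABLE allkeys
* ...cascade T03..T21 the same way, keeping every result under 2G.

* Step 2: keep only the key values that occur more than once overall.
SELECT invoice, absgross ;
  FROM allkeys ;
 GROUP BY invoice, absgross ;
HAVING COUNT(*) > 1 ;
  INTO TABLE dupkeys

* Step 3: join back to list every table/record behind each duplicate,
* ordered so matching transactions come out grouped together.
SELECT a.src, a.recid, a.invoice, a.absgross ;
  FROM allkeys a INNER JOIN dupkeys d ;
    ON a.invoice = d.invoice AND a.absgross = d.absgross ;
 ORDER BY a.invoice, a.absgross ;
  INTO TABLE dupdetail

The final ORDER BY groups matching records together, which replaces most of the record-by-record cross-checking; a short loop over DUPDETAIL can still insert the blank separator rows if the report table needs them.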
Edward Pikman
Independent Consultant