Selecting 80,000 from SQL 2K
Message
16/08/2002 11:17:17
General information
Forum: Visual FoxPro
Category: Client/server, Misc
Thread ID: 00690068
Message ID: 00690488
Views: 13
>The program is simple; it marries two systems:
>1. a program written in VB using SQL.
>2. an accounting system written in VFP.
>
>The idea is to prevent fraud. Checks are prepared and written in the VB program to a SQL table. Then once a week (the one day checks are written) the check information is transferred to the VFP program and processed for transmission to a bank. The total number of records is approximately 80,000 from four tables. The information is then processed for the accounting system. I considered breaking up the two processes, but it appears that really does not change the required number of records transferred.
>
>With 24 locations writing approx. 5,000 checks each month, I think the number of records I'm getting is small. And of course the accounting staff work only 8-5, so the hit on the system occurs during the middle of the day.
>
>I think the DBA should consider setting up a warehouse.
>
John,

That might be a good idea. Here are a few others to consider. In each case, I'm assuming that the hardware in use is sufficient (multiple processors, fast SCSI drives in a RAID array, etc.).

The first thing I'd look at (if I were the DBA) is the table structures themselves. With apologies to Markus and Rick, this has a "double impact" <s>.

First, if the tables aren't designed as efficiently as possible, fewer records fit in each packet sent across the wire, and more packets have to be sent than might otherwise be the case. More Packets = Higher Server Load.

Second, SQL Server stores its data on 8K "pages". Again, an inefficiently designed table requires that more pages be retrieved to get all of the data. In this case, More Pages = Higher Server Load.
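To put rough numbers on that (the row sizes below are made up for illustration; a SQL 2K data page holds about 8,060 bytes of row data):

```sql
-- Illustrative arithmetic only -- your actual row sizes will differ.
-- Usable space per 8K page: ~8,060 bytes.
--
-- 120-byte rows:  8060 / 120 ~= 67 rows per page
--                 80,000 rows ~= 1,195 pages to read
--
-- Trim the row to 80 bytes:
--                 8060 / 80  ~= 100 rows per page
--                 80,000 rows ~= 800 pages -- roughly a third fewer reads
```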

Things I'd look at would include not only the design of the table itself, but the data types as well. For example, if only the date portion of a date/time value is significant, I'd use a smalldatetime rather than a regular datetime. Does the check number have to be an int instead of a smallint? And so on.
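As a sketch of the difference (table and column names here are invented, not from your schema):

```sql
-- Wider than it needs to be: 8-byte datetime, 4-byte int
CREATE TABLE Checks_Wide (
    CheckNo   int      NOT NULL,  -- 4 bytes
    CheckDate datetime NOT NULL,  -- 8 bytes; time portion unused
    Amount    money    NOT NULL   -- 8 bytes
)

-- Same information in fewer bytes per row
CREATE TABLE Checks_Narrow (
    CheckNo   smallint      NOT NULL,  -- 2 bytes; only if values stay under 32,767
    CheckDate smalldatetime NOT NULL,  -- 4 bytes; accurate to the minute
    Amount    money         NOT NULL
)
```

That's 6 bytes saved per row before you've touched the design itself; across 80,000 rows, the savings show up in both packets and pages.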

One other consideration just came to mind: is there an index on the retrieval criteria you're using? If so, is it clustered? A clustered index would perform better in your case, since a weekly batch pull is a range retrieval, and a clustered index stores the rows physically in index order.
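For example (the index and column names are assumptions, since I don't know your schema), if the weekly transfer pulls by check date:

```sql
-- With the rows physically ordered by CheckDate, the week's
-- 80,000 records come back as one contiguous range of pages
-- instead of scattered single-page reads.
CREATE CLUSTERED INDEX IX_Checks_CheckDate
    ON Checks (CheckDate)
```

Remember there can be only one clustered index per table, so it should go on the column the big retrieval actually filters on.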
George

Ubi caritas et amor, deus ibi est