Level Extreme platform
Performance Issues
General information
Forum: Visual FoxPro
Category: Other, Miscellaneous
Thread ID: 00517292
Message ID: 00517362
Views: 10
>We are looking for documented benchmarks for VFP 6.0/7.0, by Microsoft or any third-party source. We would also like to know if anyone has large-scale applications operating on tables greater than 40 million records. Today a customer asked if VFP 6.0/7.0 could handle 1,000 transactions per minute, and whether VFP 6.0/7.0 could handle file sizes of 40 million records. The most our applications have handled very well is in the 10-million-record range in a real working environment.
>
>Thanks,
>
>Alfred M. Menendez
>CEO
>Fermen Corporation

Alfred,

Suppose you find a benchmark: I'll bet it's not representative of the possibilities. I'll just refer to the "Speed tip needed" thread and my message there from today. Assuming that what's in there is understandable and proven to work, anyone can guess that no benchmark will have incorporated these underlying topics. Ok, unless I wrote the benchmark {g}.

Looking at my account info, chances are fair that we have one of the largest apps ever built in VFP, with 100,000 transactions a day and (over 100) users not noticing each other. I think somewhere in my info there is talk of having the assignment right now for the processing of 220,000 sales order lines a day, resulting in far over 1M transactions a day and (just guessing) over 10M records a day.

When you're thinking of native VFP tables: 1. IMO this is the fastest way, but 2. 40M records of one byte each already make a 40 MB table. This implies that a record length of 50 bytes drives you right to the 2 GB limit for one table.
Unless you are very sure never to reach those 50 bytes in a table holding 40M records, don't do this! Use remote tables (Oracle etc.) instead.
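A quick check of that arithmetic (a sketch; 2 GB is VFP's per-file limit taken as a round number, the record counts are the ones from the question):

```python
# How record length drives a 40M-record native table into VFP's 2 GB file limit.
MAX_TABLE_BYTES = 2_000_000_000    # VFP's per-file limit, taken as a round 2 GB

def table_bytes(records: int, record_length: int) -> int:
    """Rough data size of a table, ignoring the small file header."""
    return records * record_length

records = 40_000_000
print(table_bytes(records, 1))     # 40,000,000 bytes: 40 MB at one byte per record
print(table_bytes(records, 50))    # 2,000,000,000 bytes: right at the limit
print(MAX_TABLE_BYTES // records)  # 50: the longest record length that still fits
```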

Now, whether I'm right or not: IMO the processing of native (DBF) tables is the fastest there is, but it depends on how much one can benefit from large-scale SQL selects. Note this applies to retrieval only, leaving you to weigh the gain on retrieval against the loss on writing. Only when a lot can be written from stored procedures (the server deals with it all) can you gain on writing too (again IMO, and I'm not too experienced with this yet); whether that can be achieved depends largely on the app.

My 1M transactions a day come down to 694 transactions per minute, or (guessed) 6,944 record writes per minute. IMO this may lead to a server which can't cope, especially when it has to deal with processing too (stored procedures). Note that when the transactions are divided over more PCs and the server acts as a file server only (as opposed to a DB server like Oracle), your (and my) chances are better. However, this by itself leads to smaller-scale SQL selects, giving huge overhead on the formation of the (too?) many SQL commands. So look for the balance here.
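Those per-minute figures follow directly from the daily totals (a trivial sketch, using the 1M transactions and 10M record writes mentioned above):

```python
# Back-of-the-envelope throughput: daily totals spread over a 1440-minute day.
MINUTES_PER_DAY = 24 * 60

def per_minute(per_day: int) -> int:
    """Average rate per minute for a given daily volume."""
    return per_day // MINUTES_PER_DAY

print(per_minute(1_000_000))    # 694 transactions per minute
print(per_minute(10_000_000))   # 6944 record writes per minute
```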

Our ERP app never, just never, reads records it doesn't use. I mean, a LOCATE FOR without a preceding SEEK and a WHILE clause is simply forbidden at ours. Once that is arranged, you might just as well skip the DB-server principle, because it exists mainly to skip the unneeded records without stressing the network; that the DB server thereby stresses itself instead is something not everybody thinks about much, but it is very true. Skip the DB-server principle?
Yes, if it weren't for the 2 GB limit on native tables.
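The SEEK-plus-WHILE discipline is the same idea as a range scan over a sorted index instead of a full scan. In Python terms (an analogy with hypothetical data, not VFP code):

```python
import bisect

# Records as a CDX index tag would present them: sorted on the key (made-up data).
orders = [(1, "A"), (3, "B"), (3, "C"), (3, "D"), (7, "E")]
keys = [k for k, _ in orders]

def seek_while(key):
    """SEEK to the first match via the index, then read WHILE the key still matches."""
    i = bisect.bisect_left(keys, key)               # the SEEK: jump, don't scan
    out = []
    while i < len(orders) and orders[i][0] == key:  # the WHILE: stop at first mismatch
        out.append(orders[i])
        i += 1
    return out

def locate_for(key):
    """LOCATE FOR equivalent: touches every record in the table."""
    return [r for r in orders if r[0] == key]

print(seek_while(3))   # same result as locate_for(3), but without reading all rows
```

Both return the same rows; the difference is that the first reads only the matching range (plus one probe), which is exactly why the unindexed form is forbidden at ours.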

So it leads to needing the DB server anyhow, leaving you to find the proper balance between the overhead of forming the SQL commands and an app which should do as much as possible itself, once the transactions have to be divided over as many PCs as possible. Note that in a normal situation (like yours?) there is not much to decide, because the many users are already there with their processing PCs, while in my situation all the transactions are initiated by EDI and *can* be performed on one PC, that PC getting stressed. So my situation allows the choice.

With the DB server a given, it should contain at least one physical disk for the data and one for the indexes, giving a factor of 100 better response. This also implies that the DBMS should allow for dividing things this way.
Never use RAID-5, because it's too slow; use duplexing instead, not for safety reasons but for response (you get the safety too).
The next thing to do is have two servers, one with the data and one with the indexes. Duplex each of the two servers (real duplexing, not mirroring!).
Give the servers 15,000 rpm disks and oversize them as much as possible. I mean, when the database will hold 10 GB on one disk, give it an 80 GB disk. Head access times are improved by a factor of 8 this way.

There are a few more topics of this kind, and once they are all implemented they will give a response better by a factor of a few thousand. Really.
For 100% sure the 100 Mbit network will then be the bottleneck, so make it a Gbit network.

For my own challenge I'm pretty sure I will succeed, but note this includes changing each SEEK etc. into one SQL command. This is because the app sadly is built around native DBF and the logic can't be changed (far too large). If I succeed, you will succeed even better, supposing that your (new) app has the right logic around SQL.
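Turning one SEEK-style lookup into one SQL command looks roughly like this (a sketch using Python's sqlite3 as a stand-in backend with made-up data; in VFP itself the same statement would go to the remote server, e.g. through SQLEXEC()):

```python
import sqlite3

# In-memory stand-in for the remote DB server (hypothetical schema and data).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (cust_id INTEGER, descr TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, "A"), (3, "B"), (3, "C"), (7, "D")])
con.execute("CREATE INDEX ix_cust ON orders (cust_id)")  # the index the SEEK relied on

# The SEEK + WHILE range read becomes a single parameterized SELECT;
# the server uses its index, and only matching rows cross the network.
rows = con.execute(
    "SELECT cust_id, descr FROM orders WHERE cust_id = ?", (3,)
).fetchall()
print(rows)   # [(3, 'B'), (3, 'C')]
```

The point is that each lookup is one round trip: one command out, only the needed rows back.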

It may be too late for you, but in a few months I will have a representative benchmark myself.
Note that you should get around ODBC, which VFP7 allows for, I think. Maybe that is even possible right now. Others will know.

HTH, regards,