Any plans to increase the table size from 2 GB (Microsoft
Message
From:
02/02/2004 11:38:37

General information
Forum: Visual FoxPro
Category: Other / Miscellaneous
Thread ID: 00872674
Message ID: 00872993
Views: 17
Martin,

Very good point on the deleted records. I've encountered that problem before, and it was definitely the deleted records in the temporary file that were slowing down the USE in the environment I was discussing.
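For anyone who hits the same thing, here's a rough sketch of the cleanup, assuming a temporary table named tmpwork (the name is made up):

USE tmpwork EXCLUSIVE            && exclusive access is required for PACK
PACK                             && physically remove the deleted records
INDEX ON DELETED() TAG deltag    && lets Rushmore optimize SET DELETED ON
USE                              && close the table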

I'd also point out that if for whatever reason you are unable to optimize a query in VFP, then a table of any size will create a lot more problems than an equivalent un-optimized query would in SQL Server.
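If you're not sure whether VFP is actually optimizing a given query, SYS(3054) will report it. A quick sketch (the table and field names here are invented):

=SYS(3054, 11)    && display Rushmore optimization level for SQL SELECTs
SELECT * FROM orders WHERE order_id = 12345 INTO CURSOR crsHit
* With an index tag on order_id, VFP reports "Rushmore optimization
* level: Full"; "None" means it fell back to a full table scan.
=SYS(3054, 0)     && turn the display back off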

For single-record seeks and optimized LOCATEs I agree 100%: VFP can be lightning fast even with VERY large tables. What I'm pointing out is that if you have to operate across the bulk of the records in a 2 GB table, I doubt you will have 200 users with split-second access.
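For reference, that single-record case is just an indexed SEEK, which is effectively a binary search on the tag rather than a scan (names below are placeholders):

USE customers ORDER TAG custid   && assumes an existing index on cust_id
IF SEEK("C10042")                && jumps straight to the record via the tag
    ? customers.company
ELSE
    ? "Not found"
ENDIF
USE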

One example I have right now is a production planning app that generates 14 million+ records every single day and then has to process ALL of those records to build POs, work orders, and pick tickets. That processing generates millions more records every hour. Even with only a few dozen users, VFP is not the ideal database platform when you're actually processing and reporting against large tables.

Basically, what I'm saying is that hundreds of users having to process 75%+ of a 2 GB table is a lot different from hundreds of users each accessing a record or two every few minutes.
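To make the contrast concrete: the bulk case looks more like the sketch below, and no index tag saves you when most rows have to be touched (table and field names are invented):

USE planrecs                      && hypothetical multi-million-row table
SCAN FOR NOT processed            && visits nearly every record; the cost
    REPLACE processed WITH .T.    && grows with table size, not index depth
ENDSCAN                           && (the REPLACE stands in for real work)
USE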

Greg

>Hi, Greg.
>
>[snip]
>>I've also found that when you get over about 50 million records that VFP can be slow even doing a USE (table)... Just a few months ago I fixed a VFP app that was taking users 5-10 minutes just to get into the program. Come to find out every workstation was opening a 10 million record 'temporary' file when the program launched. Once I deleted and packed the table the app opened right up.
>[snip]
>
>Just wanted to point out that in my own experience it is not the raw record count that slows things down, but having too many deleted records. The delay when USEing the table can be prevented with a proper index on DELETED(), since it is basically caused by the scan needed to find the first non-deleted record.
>
>All in all, I had tables with more than 30 million records and searches were pretty fast (not only VFP searches, but phDBase searches also).
>
>However, in the end, for such intensive DB work, SQL Server or another relational engine is the solution. For the past few years I have used DBFs only for very small tasks, or for some very small apps I wrote for non-profit orgs.
>
>My .02,