My personal opinion on "big" is the following:
I would call anything below about 20K records small, 20K - 1M records medium, and anything above 1M records large, where "large" doesn't imply that VFP can't handle it.
As mentioned in other replies, data corruption can be caused by users shutting down their computers incorrectly. Even in those cases, though, you can eliminate most problems by using FLUSH. In my experience, TableUpdate() alone isn't enough; it should be followed by FLUSH.
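A minimal sketch of that pattern, assuming a buffered cursor aliased "orders" (the alias and error handling are just illustrative):

   * Commit buffered changes, then force the OS write cache to disk.
   IF TABLEUPDATE(.T., .F., "orders")
      FLUSH
   ELSE
      * The update failed (e.g. a conflict); inspect the error.
      LOCAL ARRAY laErr[1]
      AERROR(laErr)
      MESSAGEBOX("Update failed: " + laErr[1, 2])
   ENDIF

The point is the ordering: TableUpdate() only commits the buffer to the table; FLUSH afterwards pushes the buffered disk writes out, so a user powering off the machine right after saving is much less likely to leave a damaged table.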
Hilmar.
>A client wants to run a VFP system with
>Three Tables...
> Customer table up to 50,000 records
> Person table, linked to above, up to 300,000 records
> Order Form Table, linked to Customer, up to 700,000 records
>
>Under LAN I am afraid of data corruption and speed issues, as the software does many different queries that are not indexed.
>
>Under SQL I am not concerned with data corruption, but am still concerned with speed, especially if 30 people query the Order Form Table simultaneously.
>
>Has anyone ever dealt with tables of these sizes and this many users? ALL input would be helpful.
>
>Thanks,
>Glenn
Difference in opinions hath cost many millions of lives: for instance, whether flesh be bread, or bread be flesh; whether whistling be a vice or a virtue; whether it be better to kiss a post, or throw it into the fire... (from Gulliver's Travels)