>In today's environment the standard PC is 32 MB of RAM
>(and growing) and a 6 GB hard drive (and growing).
>
>VFP has a serious limitation in that a table may not
>exceed 2GB.
(clip)
>Does anyone know if the table size constraint will ever
>be removed? I have heard there will probably never be a
>true VFP compiler because of things like macro expansion,
>but I sure wish M$ would expand the 2GB limit.
>
>Any reason why it can not be increased?
Keep in mind that the largest partition size available under FAT16 is also 2 Gig. NT can exceed this, as can Win98 with FAT32, but most machines running Win95 are still bound by the FAT16 restriction. Because of this OS-level limit, arguing for a larger file size cap is currently moot.
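The FAT16 ceiling falls out of simple arithmetic: cluster numbers are 16 bits, and the largest cluster size Win95's FAT16 supports is 32 KB. A quick sketch (the exact count is a few clusters lower in practice because some values are reserved):

```python
# FAT16 partition ceiling, from first principles.
MAX_CLUSTERS = 2 ** 16        # 16-bit cluster numbers
MAX_CLUSTER_SIZE = 32 * 1024  # 32 KB, the largest FAT16 cluster under Win95

max_partition = MAX_CLUSTERS * MAX_CLUSTER_SIZE
print(max_partition)                   # 2147483648 bytes
print(max_partition // 2 ** 30, "GB")  # 2 GB
```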
Also note that the 2 Gig cap is baked into the structure of the .dbf file itself. While Microsoft has been slowly breaking away from the original .dbf format, changing this cap would require a major kernel change, for much the same reason that the real field names in a file are still 10 characters long (long field names require a separate .dbc file to store them - they map back to 10 character names).
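To make the format argument concrete, here is a minimal sketch of the classic dBASE III-style header layout that .dbf files still carry. Each field descriptor reserves only 11 bytes for the name (10 characters plus a NUL), which is where the 10-character limit comes from, and a table's total size (header length plus record count times record length) has to fit in signed 32-bit arithmetic, which tops out at 2 GB. The offsets below are the standard ones; the synthetic values are made up for illustration:

```python
import struct

def parse_header(buf):
    """Pull the size-related fields out of a .dbf header."""
    version = buf[0]
    n_records, = struct.unpack_from("<I", buf, 4)    # 32-bit record count
    header_len, = struct.unpack_from("<H", buf, 8)   # bytes before first record
    record_len, = struct.unpack_from("<H", buf, 10)  # bytes per record
    return version, n_records, header_len, record_len

def field_name(descriptor):
    # Only 11 bytes are reserved for the name (10 chars + NUL terminator),
    # which is why real .dbf field names cap at 10 characters.
    return descriptor[:11].split(b"\x00", 1)[0].decode("ascii")

# Synthetic table: 1,000,000 records of 300 bytes each.
hdr = bytearray(32)
hdr[0] = 0x03
struct.pack_into("<I", hdr, 4, 1_000_000)
struct.pack_into("<H", hdr, 8, 65)
struct.pack_into("<H", hdr, 10, 300)

_, n, hlen, rlen = parse_header(hdr)
size = hlen + n * rlen
print(size)              # 300000065 bytes
print(size < 2**31 - 1)  # still under the signed 32-bit ceiling: True
print(field_name(b"CUST_ID".ljust(11, b"\x00")))  # CUST_ID
```

Growing past 2 GB would mean widening those offsets and counts, i.e. a new file format, which is exactly the kernel retooling described below.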
In short, breaking certain characteristics of the .dbf file structure, such as the 2 Gig limit, would likely require a major retooling of VFP's kernel.