>I was wondering (and I am sure that some of you must know) why exactly MS puts a 2 gig limit on the size of VFP tables. We are quickly approaching this limit on one of our tables and it has forced us to move the app over to SQL Server. Hopefully there is a better reason for Microsoft limiting the VFP file size than trying to get VFP apps to migrate to SQL.
>
>I am experiencing the frustration of watching my app lose performance as I "upgrade" to a client server solution.
>
>Anyone know why there is a 2 gig limit, and does anyone know why MS hasn't ripped Rushmore out of fox and put it into SQL Server yet?
Yep - it has to do with some underlying C/C++ data structures common to the DBF format: file offsets are stored as signed 4-byte (32-bit) integers, which leaves only 31 bits for a positive value and caps the addressable file size at 2^31 - 1 bytes, or roughly 2GB. Raising the limit would mean changing the data type of every internal pointer and the structures that contain them, and would break a ton of things required for backwards compatibility - a major overhaul.
Maybe if VFP goes 64-bit. In the meantime, it can use a backend to access larger tables; migrating the data to SQL Server by no means requires that you give up VFP! You can find plenty of people who've integrated SQL Server data into VFP apps, and if only a small number of tables are bumping their heads against the 2GB limit, remember that a VFP DBC can contain both remote and local (native) data tables. So migrating just the problem table to SQL Server might suit your needs...with remote views replacing the problematic table while everything else remains DBFish...