>Some other (DB) systems allow you to have multiple 'fragments' for a data file (e.g. Omnis) - i.e. overcome the file-size limit by having additional physical files for the table which are used transparently when accessed. However, this won't get over the 4-byte index offset limitation...
>
There's nothing to prevent you from vertically or horizontally partitioning your tables, and if you truly need large files, you can move the data to a backend product like SQL Server or Oracle and continue to access it from your VFP application via SPT or remote views; at that point, the limit you're stuck with is probably on the order of 2G records per table. I haven't tried a table that large on a backend (I don't have one), so even that limit may not exist in practice.
You might want to look at some of the projects where people have used VFP for data warehousing; at least at one time, Val Matison of Matison Consulting in Toronto did a project involving data warehousing with native VFP tables that circumvented the limits by using vertical partitioning of data sets.
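As a rough illustration of the horizontal-partitioning idea, routing rows into per-year fragments in native VFP might look something like this (the table naming scheme and field names are invented for the example):

```foxpro
* Hypothetical sketch: split one big history table into per-year
* fragments (hist1997, hist1998, ...) so no single DBF approaches
* the 2GB cap. Assumes the current record's values are in memvars.
LOCAL cTable
cTable = "hist" + STR(YEAR(m.transdate), 4)   && pick the fragment by year
IF NOT USED(cTable)
    USE (cTable) IN 0 SHARED                  && open the fragment on demand
ENDIF
INSERT INTO (cTable) FROM MEMVAR              && route the row to its fragment
```

Queries then UNION the fragments you need, which is where a scheme like Val's vertical partitioning earns its keep.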
>I'd be interested to see any articles on methods for using very large tables - an existing program I have looks like it will exceed the 2GB limit soon.
>
>>I suspect that MS isn't going to change the 2GB native file size limit in the near future, since doing so would affect the underlying low-level file handling and the size of the types used to represent offsets in a file. Right now, at 2GB, a 4-byte signed integer can represent any offset; going beyond that would change, at a minimum, every data structure within VFP that contains a file offset.
>>
>>You're free to make the request, though; it just happens that there are good, pragmatic reasons for the limit in this case. We'd lose a great deal of backward file-format compatibility with such a change, since more than just the headers of a VFP(n+some value) table would be affected.
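For reference, the offset arithmetic behind that 2GB figure is easy to check from the Command Window:

```foxpro
* The largest file offset a 4-byte signed integer can hold:
? 2^31 - 1      && 2147483647 bytes, i.e. just under 2GB
```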