Hi, Tore.
>But I have found two big problems. PHDbase does not handle very big tables; in my case ca. 250000 records is the absolute max. At a given point it gets unstable, and a phd('INDEX') call simply crashes VFP. According to the specs this should not be a problem, but in reality it is. Personally I have been able to work around this limitation by splitting the big table into two smaller tables, but it is much slower and requires much more programming.
>
>The other problem is that it does not find all records, even if they are unique. This is repeatable, even after I create a new index. I found this out by accident, and it is really a serious problem because you can never trust that you find all your data.
I didn't detect any of those problems back when I used PHDBase most heavily. That was some years ago, in an application for a medical database that was updated via CDs, where the main article table contained between 300K and 700K records. The central DB had over a million records, and the only limit we approached was the 2 GB one.
However, the updates were done in batch from another (non-PhDB-indexed) database, and then the indexes were rebuilt from scratch (it took a full day to do it). After that, the main DB, as well as the segments distributed on CD, were read-only. Maybe that was the reason for its stability.
Regards,