So Nobody here could answer my question... and I thought
General information
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax, Miscellaneous
Thread ID: 00421860
Message ID: 00424280
Views: 21
Hey Ed:

I really do appreciate your response to my problem... I really do... BUT WHAT THE HELL ARE YOU TALKING ABOUT? Too much booze and wild women at DevCon? <g> There are no objects, no COM, no nothing. Only one program, that's it, ONE old-fashioned program that is reading from a table and populating or updating info in 12 other tables. That's it.

I have since solved this problem. The problem is that VFP cannot handle more than 512 MB (no matter what Microsoft says) without going into a tailspin. I have a workaround that flushes the buffers every 100,000 records, and the program is working like a charm. I am already on month 4, having processed over 19.2 million records; each month takes 30 hours to process.
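
For anyone curious, the workaround is nothing exotic. Here is a minimal sketch of the kind of loop involved; "source", "address" and the addr_id field/tag are placeholder names, and only the 100,000-record flush interval comes from the real program:

    * Sketch only - table, field and tag names are placeholders.
    USE source EXCLUSIVE IN 0          && "table a"
    USE address EXCLUSIVE IN 0         && one of the 12 relational tables
    SELECT source
    DO WHILE NOT EOF("source")
       SCATTER MEMVAR                  && copy the current source record into m. variables
       IF INDEXSEEK(m.addr_id, .T., "address", "addr_id")
          SELECT address
          GATHER MEMVAR                && key found - update the existing record
       ELSE
          INSERT INTO address FROM MEMVAR   && key not found - add a new record
       ENDIF
       SELECT source
       IF MOD(RECNO(), 100000) = 0
          FLUSH                        && write buffered data and indexes to disk
       ENDIF
       SKIP
    ENDDO

SYS(1104), which purges VFP's cached program and data memory, is another candidate for the periodic flush step.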

To everyone who bothered to take a crack at this, Thanks

Mohammed



>>guys were smart... legends in your own mind... Like Drew speedy
>>
>>Any and all thoughts, suggestions and help would be greatly appreciated.
>>
>>Have VFP 6 (Enterprise Edition, Service Pack 4 installed) running on an NT Server 4.0, Dell 6400 box with RAID 5, 4 processors, 168 GB of disk space and 2 GB of memory.
>>
>>Reading a table (table a); this table has 4.8 million records (size is 1.3 GB). From this table I am populating 12 relational tables. The largest of these are an address table, which has about 3.8 million records (approx. 454 MB plus a 500 MB CDX), and a case table, same as address, ALL under the 2 GB limit.
>>
>>Doing a simple DO WHILE on "table a", storing info in a cursor (created with CREATE CURSOR...), then doing INDEXSEEK()s on the relational tables and either inserting or replacing info as needed.
>>
>>Should work OK, right... wrong. VFP keeps giving me the error "Not enough memory for file map". (There is a 2 GB permanent swap file.) This would start when record number 1.2 million was reached; now it's happening at record number 465,000.
>>
>
>The system itself sounds badly configured; VFP is single-threaded, so multiple CPUs are unlikely to have any significant effect, and at one time, pre-VFP 6, VFP had problems with more than 384 MB of available RAM. You may be running into processor address space problems; Slot 1 and Slot A only support up to 512 MB per processor, so the excess may be causing a great deal of trouble. This problem isn't addressable by throwing more RAM at it or by enlarging the swap file; in fact, both can cause problems. You might try setting some limits on VFP's default allocation of memory, since the default is based on the available memory and might well try grabbing more than 2 GB. Xeon processors can address more RAM, but still run into the same problems with older VFP and more than 384 MB. See the MSKB for more details.
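
For reference, capping VFP's default memory grab is normally done with SYS(3050). A minimal sketch, where the 256 MB figure is only an illustration and would need tuning for the actual machine:

    * Cap VFP's memory buffers; 256 MB is an example value, not a recommendation.
    SYS(3050, 1, 256 * 1024 * 1024)    && foreground buffer (application has focus)
    SYS(3050, 2, 256 * 1024 * 1024)    && background buffer (application minimized)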
>
>If you insist on running this configuration, at least try tuning NT Server Enterprise, which, rather than dividing the 4 GB Win32 address space into 2 GB for the app and 2 GB for the OS, can give 3 GB to the app and 1 GB to the OS when directed to do so. Unless you plan to run other products besides the VFP app, you'll probably see virtually no difference between one CPU and two, and the return from CPUs beyond two is even smaller. If the RAID is hardware-based, there's little CPU overhead for it.
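
For what it's worth, the 3 GB/1 GB split on NT Server 4.0 Enterprise Edition is switched on with the /3GB option in BOOT.INI, roughly like this (the ARC path shown is just an example):

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows NT Server 4.0, Enterprise Edition" /3GB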
>
>Obviously, if the query is running on a workstation other than the server, the server's capacities have nothing to do with the VFP issues - VFP runs at the local station. No matter how big and nasty you build the server, if you try to run the app from a 16 MB P-100, VFP still runs on the local system. (Exceptions, such as remote VFP COM servers, either via DCOM or hosted under MTS, do exist, but only when specifically designed for this capability.)
>
>At least you did make some salescritter's day, though. < g,d&r >
>
>If you're running this at the server, my guess is that the intermediate bitmaps are too large to fit - an intermediate result of more than 2 GB of tuples, or more than 2 GB of data in an interim result, could cause this. Try disabling Rushmore temporarily and see if the error goes away; if so, the culprit is the Rushmore process, which does some moderately hairy matrix manipulations to reduce a recordset based on the indexes that affect Rushmore-optimizable statements.
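
Disabling Rushmore for a test run is a one-liner; a minimal sketch:

    SET OPTIMIZE OFF     && turn Rushmore off for the current data session
    * ... rerun the problem loop/query here ...
    SET OPTIMIZE ON      && restore the default afterwards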
>
>If you're using lots of objects, it could also be a garbage collection issue - see the ManualGarbageCollection entry on the wiki for more details. Steve Black mentioned it briefly in his "OOP Heuristics" session at DevCon (the highlight of DevCon, as far as sessions went, from my POV). It's likely to be behind some anomalies reported with my API classes, which embed a reference to a Heap object in themselves; I'll be uploading a new version, and possibly NETRESOURCE, that will prevent the problem. The new CLSHEAP goes up tonight or tomorrow, and NETRESOURCE probably next week.
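
The usual manual fix for that kind of embedded-reference problem is to break the reference by hand before releasing the container. A sketch, where MyApiClass and its oHeap property are purely hypothetical names:

    loApi = CREATEOBJECT("MyApiClass")   && hypothetical class that embeds a Heap reference
    * ... use loApi ...
    loApi.oHeap = .NULL.                 && break the embedded reference explicitly
    RELEASE loApi                        && now the object can actually be destroyed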
>
>>Buffering has been switched off in the project manager and all files are opened exclusive.