The fastest way to move data from one table to another?
Message
From
05/01/2009 03:22:55
 
General information
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax

Environment versions
Visual FoxPro: VFP 9 SP2
OS: Windows XP
Network: Windows 2003 Server
Database: MS SQL Server

Miscellaneous
Thread ID: 01370929
Message ID: 01370951
Views: 48
This message has been marked as one that helped answer the initial question of the thread.
>Hi everybody,
>
>I have two tables with identical structures (Current and History). I need to move all data from Current into History and also clean the data a bit during this process. I'm doing this in a SCAN loop, but the process is very slow (I have more than 4 million records - 1.6GB). Do you have other ideas for moving the data?

What is "slow"? What is your CPU speed and RAM? Can you give us actual times?

Have you experimented with setting SYS(3050) to large values (if you have more than, say, 2GB RAM), and/or calling SYS(1104) after intermediate operations?
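
For example, a minimal sketch (the 768MB figure is an arbitrary assumption - tune it to your machine):

    * Enlarge VFP's foreground buffer pool (value in bytes; VFP rounds
    * it to a multiple of 256K). 768MB here is only an example figure.
    =SYS(3050, 1, 768 * 1024 * 1024)

    * ... run an intermediate operation here ...

    * Then purge VFP's cached data so the next step starts clean
    =SYS(1104)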

Is antivirus scanning the 1.6GB table before your code processes it? That would take considerable time.

If the large table is on a server, maybe you could copy it to your local hard drive and process it there, with exclusive use. That might be a lot faster than processing it across the network.
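
For example, a sketch assuming Current is a free table named current.dbf on the share (the paths below are hypothetical; copy the .cdx and .fpt companion files too if they exist):

    * Copy the table and its companion files to the local drive
    COPY FILE \\server\data\current.dbf TO c:\work\current.dbf
    COPY FILE \\server\data\current.cdx TO c:\work\current.cdx  && structural index
    COPY FILE \\server\data\current.fpt TO c:\work\current.fpt  && memo file, if any

    * Open it exclusively - no lock contention, no network round trips
    USE c:\work\current.dbf EXCLUSIVE ALIAS Current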

Assuming the above don't yield enough improvement:

- I'm guessing your "clean data a bit" means you have to check each row and clean it if necessary. The checking code runs against every row (i.e. 4 million times), so you should optimize it as much as possible. Post your exact code here if you'd like specific suggestions; there's a rough sketch of one approach after this list.

- If only a small fraction of rows actually need cleaning, the cleaning code itself runs far less often than the check, so it's probably not as critical to optimize.

- One thing you could experiment with is processing groups of rows one after another, rather than all at once. IOW, rather than processing 1.6GB in one pass, try processing the first 10% of the records (say 160MB), then the second 10%, and so on; there's a batching sketch below. The idea is to avoid overflowing the actual physical RAM on your machine and spilling into your local page file. In Task Manager, if the Commit Charge Total value exceeds the Physical Memory Total value, your machine is paging to disk and slowing down by probably a couple of orders of magnitude.
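
On the first point, here's a minimal sketch of one way to speed up the check: if your "needs cleaning" test can be written as an indexable expression, an index on that expression lets Rushmore limit the SCAN to just the dirty rows. The field name and the test below are purely hypothetical:

    SELECT Current
    * One-time cost; Rushmore reuses this tag to optimize the FOR clause
    INDEX ON EMPTY(ccustname) TAG dirty

    * Visits only the rows matching the condition, not all 4 million
    SCAN FOR EMPTY(ccustname)
        REPLACE ccustname WITH "UNKNOWN"
    ENDSCAN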
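And here's a rough sketch of the batching idea (the 10% batch size is just an assumption, and the cleaning step is whatever your real code does):

    USE History IN 0
    USE Current IN 0
    SELECT Current
    lnBatch = CEILING(RECCOUNT() / 10)  && roughly 10% of the table per pass

    FOR lnStart = 1 TO RECCOUNT() STEP lnBatch
        GO lnStart
        SCAN NEXT lnBatch
            SCATTER MEMO NAME oRow
            * ... clean oRow's fields here ...
            INSERT INTO History FROM NAME oRow
        ENDSCAN
        =SYS(1104)  && purge cached data between batches
    ENDFOR

Time a single batch first; if one pass through 10% of the records runs proportionally faster than the full SCAN did, paging was likely your bottleneck.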
Regards. Al

"Violence is the last refuge of the incompetent." -- Isaac Asimov
"Never let your sense of morals prevent you from doing what is right." -- Isaac Asimov

Neither a despot, nor a doormat, be

Every app wants to be a database app when it grows up