Importing large files
Forum: Visual FoxPro
Category: Coding, syntax & commands, Miscellaneous
Thread ID: 00663825, Message ID: 00664014
Hi!

Hmm, what kind of preprocessing is done in that step? Could it be moved to SQL Server? If yes, then follow the advice to export the TXT file directly to SQL Server. The quickest way is to use the BCP utility (bulk copy), if the TXT file format is suitable for it. Then run a SQL script on the server to process the data and split it into the 5 tables, or do similar tasks there. We did such work using exactly this approach, and in our experience this way is the best from many points of view.
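
For example, something like this on the SQL Server side (all names here are just placeholders, and I assume the TXT layout can be described either by a bcp format file for the fixed-width SDF columns or by delimiters):

-- bcp from the command prompt; -f points to a format file describing the fixed-width columns
-- bcp MyDB.dbo.ImportStage in weekly_file.txt -f weekly_file.fmt -S MyServer -T
-- or the T-SQL equivalent (BULK INSERT), shown here for a tab-delimited file:
BULK INSERT MyDB.dbo.ImportStage
   FROM 'C:\imports\weekly_file.txt'
   WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', TABLOCK)

After that, the split into the 5 final tables is just a set of INSERT ... SELECT statements from the staging table.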

As for the plain "REPLACE ... ALL" command - you're right, it is extremely slow because of buffering, caching etc., and especially because of the work with memo fields. I believe that if you do this at SQL Server directly on a varchar(1000) field, it will be MUCH quicker.

If you still want to go the route of VFP pre-processing of the data with an intermediate VFP cursor, let me know - I can give a few recommendations related to such large replacements and data changes. The main thing is to get rid of the memo field altogether. Is it really necessary to join the comment fields into a memo field in the VFP cursor? Is it significant for the pre-processing? If not, then do not do this in VFP - a memo field is very slow with large data. Just export to SQL Server with all 4 comment fields only, then join them at SQL Server (it can be done with a single SQL Server command - see the example below). I believe SQL Server will do this with varchar(1000) much quicker, because there is no memo field.
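
For example (column names are taken from your VFP cursor just for illustration; adjust them to your real staging table):

-- one set-based UPDATE on the server replaces the row-by-row REPLACE ... ALL in VFP
UPDATE ImportStage
   SET fmitemdesc = LTRIM(RTRIM(fcitem1)) + LTRIM(RTRIM(fcitem2))
                  + LTRIM(RTRIM(fcitem3)) + LTRIM(RTRIM(fcitem4))
-- if any of the comment columns can be NULL, wrap each in ISNULL(..., '') first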

Hope this helps.

>We've thought about that. The problem is, there is considerable pre-processing that happens between this cursor and the final five tables in SQL that the data is distributed in.
>
>Donna
>
>>>We need to import large text files (1.5 million rows) regularly (weekly). We create a cursor and APPEND FROM text_file TYPE SDF. Very fast so far. The problem is, there is a description in this file that is up to 1000 characters wide. We import it into 4 char(250) fields and then, after the import, issue a replace statement to fill in the memo field.
>>>
>>>REPLACE fmitemdesc with ALLTRIM(fcitem1)+ALLTRIM(fcitem2)+ALLTRIM(fcitem3)+ALLTRIM(fcitem4) ALL
>>>
>>>The above replace statement is EXTREMELY slow - about 100 records every 3 seconds!! It takes forever to do these large files.
>>>
>>>Is there a different way to import from a text file and get the info into a single memo field? It is going from the cursor into SQL Server in a varchar(1000) field.
>>
>>Just a couple of general ideas you could try:
>>
>>- if the final target is SQL Server, maybe you could open an ADO connection to SQL Server and update the varchar column in a Recordset directly from the 4 VFP cursor fields, rather than needing to do the intermediate memo step
>>
>>- I understand there is a Bulk Copy (bcp) utility in SQL Server which is fast for importing large amounts of data.
Vlad Grynchyshyn, Project Manager, MCP
vgryn@yahoo.com
ICQ #10709245
The professional level of a programmer can be determined by the level of stupidity of his/her bugs

It is not appropriate to say that a question is "foolish". There can only be foolish answers. Everybody has passed through a period of knowing nothing about something.