VFP versus C++
Message
From: Al Doman, M3 Enterprises Inc., North Vancouver, British Columbia, Canada
Date: 27/10/2003 00:54:12
General information
Forum: Visual FoxPro
Category: Other
Title: Miscellaneous
Thread ID: 00842594
Message ID: 00842804
Views: 25
>Hi to all who are interested in this topic.
>
>One of my customers has a colleague who claims that the application I've written should have been written in C++, which would give better performance. I wonder whether he's right and, if not, what arguments I could use.
>
>The core of the app hardly uses DBMS functions. Instead, it uses low-level functions like FOPEN(), FGETS() and FPUTS(), and string-manipulation functions like STRTRAN(). It reads tab-delimited files not with APPEND FROM but with FGETS(), because the fields of the import files vary and the files can be huge. It also writes files with FPUTS() after doing some sophisticated string manipulation. The speed is still phenomenal: thousands of records/lines are processed in a second or so. But the import files can contain 100,000,000 records or even more, so it may still take a while before they're all processed.
>
>The big question now is: Will C++ or Delphi do this job significantly faster??

Well, I don't know about your case but I do know the last time I tried to use VFP for some "relatively" heavy string manipulation I was quite disappointed.

I have a client who suspected a computer of theirs was being used after hours to browse porn sites. I copied that user's INDEX.DAT file for analysis - it was about 8MB.

If you open a sample in the VFP editor or Notepad (if you use IE there's one on your HD; under W2K/XP it's in \Documents and Settings\USER\Local Settings\Temporary Internet Files\Content.IE5\), you can see it's binary data but contains URLs in plain text. The one I had contained a couple of hundred URLs. I wrote a quick-and-dirty routine that pulled the file into a character variable with FILETOSTR(), then used standard AT(), RIGHT(), etc. functions to look for strings starting with "http://". On a 1GHz Athlon with 512MB RAM it took about 45 seconds per URL (in VFP7 SP1).
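A minimal sketch of that kind of routine (the functions are standard VFP; the file name and exact logic are illustrative, not my original code):

```foxpro
* Quick-and-dirty URL scan over the whole file held as one string.
* "index.dat" is an assumed local copy of the file being analyzed.
lcFile = "index.dat"
lcData = FILETOSTR(lcFile)          && pull the whole file into a variable
lnPos  = AT("http://", lcData)      && rescans from the start on each call
DO WHILE lnPos > 0
    * Carve off everything from the match onward, then trim at the
    * first non-printable byte to approximate the end of the URL.
    lcRest = SUBSTR(lcData, lnPos)
    lnEnd  = LEN(lcRest) + 1
    FOR lnI = 1 TO LEN(lcRest)
        IF ASC(SUBSTR(lcRest, lnI, 1)) < 32
            lnEnd = lnI
            EXIT
        ENDIF
    ENDFOR
    ? LEFT(lcRest, lnEnd - 1)       && the extracted URL
    * Chop off the scanned portion and search again - this repeated
    * SUBSTR() on a multi-megabyte string is where the time goes.
    lcData = SUBSTR(lcData, lnPos + 7)
    lnPos  = AT("http://", lcData)
ENDDO
```

Every pass copies most of an 8MB string, which is exactly the kind of work the Coverage Profiler flagged.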

I fired up the Coverage Profiler and found that with "large" strings both AT() and RIGHT()/SUBSTR() are slow. However, I couldn't figure out a way to parse the URLs out of the character variable/string without using them. If someone can post a reasonably fast routine based on the technique outlined above, it would be enlightening, because it sure escapes me.

I eventually used a completely different technique. I created a cursor with a 1-byte character field, used low-level file functions to pull the INDEX.DAT file into the cursor one character at a time, then used SKIP etc. to move back and forth in the "string". Still wasn't instantaneous but was a big improvement and was "fast enough" for the job.
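In outline (again from memory, with assumed file and cursor names), the cursor-based version looked something like this:

```foxpro
* Load the file into a cursor, one character per record.
* "index.dat" is an assumed local copy of INDEX.DAT.
CREATE CURSOR cChars (cByte C(1))
lnHandle = FOPEN("index.dat")       && low-level open, read-only by default
DO WHILE NOT FEOF(lnHandle)
    APPEND BLANK
    REPLACE cByte WITH FREAD(lnHandle, 1)
ENDDO
= FCLOSE(lnHandle)

* The record pointer now acts as a string position: SKIP moves it
* forward or back cheaply, with none of the big-string SUBSTR() cost.
GO TOP
lcWindow = ""
SCAN
    lcWindow = RIGHT(lcWindow + cByte, 7)   && sliding 7-byte window
    IF lcWindow == "http://"
        * A URL starts 6 records back; from here you can SKIP forward
        * collecting characters until the first non-printable byte.
    ENDIF
ENDSCAN
```

Record navigation stays cheap no matter how big the file is, which is why this beat the single-string approach even with the overhead of one record per byte.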

/SET RANT ON

I believe parsing the INDEX.DAT file should be basically instantaneous with modern tools. I remember my first IBM-type PC, a true-blue IBM PC (4.77MHz 8088, 640K RAM), on which I did FoxBASE+ programming. I used the BRIEF editor and regularly edited procedure files of 200 or 300K. I could search for any string I wanted, and BRIEF would find it faster than I could get my finger off the "Find" key. On the machine I have now, an 8MB file should be child's play.

/SET RANT OFF
Regards. Al

"Violence is the last refuge of the incompetent." -- Isaac Asimov
"Never let your sense of morals prevent you from doing what is right." -- Isaac Asimov

Neither a despot, nor a doormat, be

Every app wants to be a database app when it grows up