>Hi Samuel,
>
>>I mean the Compiler running on top of the Data Layer for 64 bits (not VFP Runtime at all), already supports those commands.
>>Of course if we don't surpass VFP speed, we can always use the VFP9TableLayer functionality, compiling your code.
>
>Not sure I got you right. Are you saying that the new VFPCompiler release would compile and run my code if I used the vfp9t.dll data layer?
>
>
>>To translate the PDF you could use http://open-source.onestop.net/2005/07/pdftohtml.html or grab the command-line version on sourceforge.net and turn that into HTML.
>
>I found an easier solution: http://www.freeware995.com/bin/pdfedit.exe
>
>>Then use Google translator.
>
>Did it. You can view the results at http://indot.net/dotnetprocontest07/dotnetprocontest_english.html
>
>>We'll implement APPEND FROM, anyone knowing .NET tricks for speedy parsing is welcome to provide a .NET implementation written of course in VFP.
>
>Sorry, my knowledge of .NET is not that deep. As a starting point I'd try something like this:
>
> // requires: using System; using System.IO; using System.Text;
> static void AppendFromDelimited(string tcFilename, string tcDelimiter, int tnCodepage)
> {
>     double lnSec0 = System.DateTime.Now.Subtract(System.DateTime.Today).Duration().TotalMilliseconds / 1000;
>
>     char[] lcDelimiter = tcDelimiter.ToCharArray();
>
>     string[] laLine;
>     int i;
>
>     StreamReader loSR = new StreamReader(tcFilename, Encoding.GetEncoding(tnCodepage));
>     while (loSR.Peek() > -1)
>     {
>         laLine = loSR.ReadLine().Split(lcDelimiter);
>         for (i = 0; i < laLine.Length; i++)
>         {
>             // Here each field should be written to the database.
>             //Console.WriteLine(laLine[i]);
>         }
>         //Console.WriteLine();
>     }
>     loSR.Close();
>
>     double lnSec1 = System.DateTime.Now.Subtract(System.DateTime.Today).Duration().TotalMilliseconds / 1000;
>
>     Console.WriteLine("Done in " + (lnSec1 - lnSec0) + " seconds");
>     Console.Read();
> }
>
>
>Unfortunately this sample code already takes 1.5 seconds. VFP9's APPEND FROM ... DELIMITED finishes in 2 seconds.
>
>Markus
Markus,
I don't see how this contest relates to data handling at all :) Both VFP and .NET could bypass treating it as 'data' and work at a low level directly.
Apart from that, .NET doesn't need to read it line by line with a StreamReader. It could use a CSV provider and do it the way VFP's APPEND FROM does, with data readers rather than DataTables (to be faster), and it could asynchronously process parts of the reading and writing in multiple threads (of course you could do an extreme low-level/high-level/multithreading mix in VFP too, but what for, just 500K rows :).
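A rough sketch of that pipelined idea (Java standing in for .NET here; the class and method names are made up, and a real version would write each row to the table instead of just counting rows): a reader thread feeds lines into a bounded queue while the calling thread splits them, so file I/O and parsing overlap.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PipelinedAppendSketch {
    // Sentinel object marking end of input; compared by identity, so it can
    // never collide with a real data line.
    private static final String EOF = new String("<eof>");

    // Hypothetical sketch: one thread reads lines into a bounded queue while
    // the calling thread splits them into fields.
    static long pipelinedAppend(String fileName, String delimiter) throws Exception {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);
        Thread reader = new Thread(() -> {
            try (BufferedReader in = new BufferedReader(new FileReader(fileName))) {
                String line;
                while ((line = in.readLine()) != null) {
                    queue.put(line);
                }
                queue.put(EOF);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
        reader.start();

        long rows = 0;
        String line;
        while ((line = queue.take()) != EOF) {
            String[] fields = line.split(java.util.regex.Pattern.quote(delimiter));
            // here the fields would be written to the table
            rows++;
        }
        reader.join();
        return rows;
    }
}
```

Whether the thread handoff pays off for only 500K rows is exactly the question; the queue adds overhead per line, so it helps most when the per-row write is the slow part.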
If this were .NET 1.1, I could say that VFP would do it at least 2-3 times faster (search for a similar VFP/.NET comparison thread here from the 1.1 days; .NET was nowhere near it then).
Cetin
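For what it's worth, Markus's Split-per-line loop could also be tightened by scanning delimiters with indexOf instead of allocating a string array for every line. A rough sketch of the idea (Java standing in for the C# above; the names are placeholders and the database write is left as a comment; the method returns a field count only so the parse can be sanity-checked):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class AppendFromDelimitedSketch {
    // Hypothetical sketch: walk each line with indexOf() instead of split(),
    // skipping the per-line String[] allocation of split-based parsing.
    static long appendFromDelimited(String fileName, char delimiter) throws IOException {
        long fields = 0;
        try (BufferedReader reader = new BufferedReader(new FileReader(fileName), 1 << 16)) {
            String line;
            while ((line = reader.readLine()) != null) {
                int start = 0;
                int end;
                while ((end = line.indexOf(delimiter, start)) >= 0) {
                    String field = line.substring(start, end);
                    // here each field would be written to the table
                    start = end + 1;
                    fields++;
                }
                // the tail of the line is the last field
                fields++;
            }
        }
        return fields;
    }
}
```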