How to pass a date to a DLL?
Message
General information
Forum:
Visual FoxPro
Category:
Windows API functions
Miscellaneous
Thread ID:
00065737
Message ID:
00066355
Views:
58
>>>>
>>>>Ok, I got it! But why not simply:
>>>>
>>>>bitand(lnRight, 0x7f)*256 + lnLeft
>>>>
>>>>Vlad
>>>
>>>Traditionally, a low-level bit shift is much faster than multiplication by a floating-point value. But since this is FoxPro, and we are actually calling functions, the performance gains are probably lost. The readability is also reduced using my technique, whereas in C++ it would be:
>>>
>>>unsigned char lcShort[2];
>>>short x;
>>>// ...
>>>x = ((lcShort[1] & 0x7f) << 8) | lcShort[0];
>>>x = (lcShort[1] & 0x80) ? (-x) : (x);
>>>which would be thousands of times faster than the equivalent FoxPro and tens of times faster than the equivalent C++ using multiplies instead of bit operations.
>>>
>>>Of course in C++, the initial reason we had to unpack a short becomes moot :-)
>>>
>>>Peter
>>
>>Peter & Vlad,
>>
>>I assumed that using BITOR() would be faster than addition; however, when I ran a series of tests, the opposite turned out to be true. The only explanation that made any sense to me is that the mathematical operations are handled by the internal math coprocessor, while this type of operation would be handled by the main processor. While I haven't tested shift (VFP's) vs. multiplication, it wouldn't surprise me in the least if the same applied there.
>>
>>George
>
>Actually, my theory is that the BITOR()/BITRSHIFT() is in effect calling a function, whereas the addition/multiply is more highly optimized in the interpreter.
>
>Peter

Well, Peter, from my assembly language experience, I can tell you that BITOR(), BITRSHIFT(), BITNOT(), etc. probably map directly to the processor's instruction set. For example, on the 6502, EOR was the instruction that executed an exclusive or. Admittedly, processor technology has come a long way since then. Still, I would think that math operations would require more instructions and, therefore, take longer. Given this, regardless of how well optimized the threaded p-code floating-point routines are, it would seem impossible for them to be faster. The only explanation I can think of is that the internal math coprocessor is faster than the main processor, since it has less to do.

George

Ubi caritas et amor, deus ibi est