>#DEFINE LOOP_LENGTH 100000000
>
>LOCAL Start AS Datetime
>LOCAL Stop AS Number && DATETIME() subtraction yields elapsed seconds, not a Datetime
>LOCAL LoopIndex AS Number
>
>LOCAL Buffer1 AS String
>LOCAL Buffer2 AS String
>LOCAL TicksPerSecond AS Number
>LOCAL ElapsedSeconds AS Number
>
>DECLARE INTEGER QueryPerformanceFrequency IN kernel32 STRING @Frequency
>DECLARE INTEGER QueryPerformanceCounter IN kernel32 STRING @PerformanceCount
>
>* 8-byte buffers to receive the 64-bit LARGE_INTEGER values
>m.Buffer1 = SPACE(8)
>m.Buffer2 = SPACE(8)
>
>QueryPerformanceFrequency(@m.Buffer1)
>
>* "8RS": 8 bytes, Reversed (Intel) byte order, Sign-bit shift suppressed
>m.TicksPerSecond = CTOBIN(m.Buffer1, "8RS")
>
>m.Start = DATETIME()
>QueryPerformanceCounter(@m.Buffer1)
>
>FOR m.LoopIndex = 1 TO LOOP_LENGTH
>   * dummy loop body
>ENDFOR
>
>m.Stop = DATETIME() - m.Start
>QueryPerformanceCounter(@m.Buffer2)
>
>m.ElapsedSeconds = (CTOBIN(m.Buffer2, "8RS") - CTOBIN(m.Buffer1, "8RS")) / m.TicksPerSecond
>
>? TEXTMERGE("<<m.Stop>> (based on DateTime) versus <<m.ElapsedSeconds>> (based on Win32)")

A better comparison would be to measure a few code stretches or single lines, each repeated a high double-digit number of times, and then compare the variance of the result sets per measured line between GetTickCount and QueryPerformanceCounter ;-)
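That variance comparison can be sketched in a language-neutral way; here is a minimal Python analogue (the `measure` helper, the workload, and the repetition count are all illustrative choices of mine, and `time.perf_counter` versus `time.time` merely stand in for QueryPerformanceCounter versus GetTickCount, which are Win32-only):

```python
import statistics
import time

def measure(clock, reps=50):
    # Time the same small code stretch `reps` times with the given clock.
    samples = []
    for _ in range(reps):
        start = clock()
        sum(i * i for i in range(10_000))  # the code stretch under test
        samples.append(clock() - start)
    return samples

# High-resolution clock (stand-in for QueryPerformanceCounter)
fine = measure(time.perf_counter)
# Coarser wall clock (stand-in for GetTickCount / DATETIME granularity)
coarse = measure(time.time)

# The timer whose samples for the same code stretch show the lower
# variance is the more trustworthy one for per-line measurements.
print("fine   variance:", statistics.variance(fine))
print("coarse variance:", statistics.variance(coarse))
```

On Windows one would DECLARE GetTickCount from kernel32 the same way QueryPerformanceCounter is declared above and collect both sample sets from the identical code stretch.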