Charlie,
I think you should adjust your client's expectations of what they want to see. What's the true readability of a value that's changing 100 times per second? Is 10 times a second enough?
Sampling theory says that you need to sample at least twice as fast as the frequency of the value being monitored. If you want to resolve 0.01-second increments, you need to sample at twice that rate, the equivalent of a 5ms timer.
If they really have to have that sort of resolution, you are going to have to do it in C++.
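Either way, the safer design is to never accumulate timer ticks at all: derive the elapsed time from a monotonic clock on every tick, then truncate to the 0.01s increment for display. Then timer jitter can make an update arrive late, but it can never make the displayed time drift. A minimal C++ sketch of that idea (the helper name `elapsed_hundredths` is mine, not from any library):

```cpp
#include <chrono>

// Hypothetical helper: hundredths of a second elapsed since `start`,
// computed from the monotonic clock itself rather than by counting
// timer ticks, so timer jitter never accumulates into the display.
long elapsed_hundredths(std::chrono::steady_clock::time_point start) {
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                  std::chrono::steady_clock::now() - start).count();
    return ms / 10;  // truncate to the 0.01 s increment being displayed
}
```

Call this from a ~5ms timer and repaint only when the returned value changes; the clock ticking on screen then stays honest no matter how sloppy the timer interval is.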
>Customer wants to see a stop watch ticking real-time as lap time is being recorded. If I set the timer interval to 10ms, the elapsed time calculated may not land on the 0.01-second increments the customer wants, so the clock may look like it's jumping. Simply calibrating the timer by adjusting the interval property seems to work, but needs to be tested.