All of this turns out to be expected behavior. For whatever reason, SQL Server's datetime type only stores time with an accuracy of about 3.33 ms (one 1/300-second tick), so any millisecond value you assign gets rounded to the nearest tick. In my tests, some records came back unchanged while others did not. If you go into SSMS and force-assign datetime values with milliseconds, after a few tries you will see that SQL Server silently adjusts your values on its own, so that they always end in .000, .003, or .007.
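The rounding rule can be modeled outside of SQL Server. The sketch below is my own Python approximation of the documented behavior (round to the nearest 1/300-second tick, half-up), not code from SQL Server itself:

```python
def sql_server_datetime_ms(ms: int) -> int:
    """Approximate how SQL Server's legacy datetime type rounds a
    millisecond value: to the nearest 1/300-second tick, so stored
    values always end in .000, .003, or .007."""
    ticks = int(ms * 3 / 10 + 0.5)    # nearest 1/300-second tick (half-up)
    return int(ticks * 10 / 3 + 0.5)  # convert the tick back to milliseconds

# A few examples of what survives the round trip:
print(sql_server_datetime_ms(1))    # 1 ms is stored as 0
print(sql_server_datetime_ms(2))    # 2 ms is stored as 3
print(sql_server_datetime_ms(5))    # 5 ms is stored as 7
print(sql_server_datetime_ms(999))  # 999 ms rolls over to the next second
```

This is why some records look fine (their milliseconds already landed on a tick) while others come back changed.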
SQL Server 2008 introduces a datetime2 type with precision down to 100 ns, but the client is still on SQL Server 2005.
The recommended approach is therefore to split the field in two: keep the datetime truncated to whole seconds, and save the milliseconds in a secondary NUMERIC(3, 0) column.
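As a minimal sketch of that two-column approach (the function names are mine, and the column mapping is an assumption, not the client's actual schema):

```python
from datetime import datetime, timedelta

def split_ms(dt: datetime) -> tuple[datetime, int]:
    """Split a timestamp into a whole-second datetime (safe for a
    SQL Server 2005 DATETIME column, since 0 ms is never rounded)
    and a 0-999 value for a companion NUMERIC(3, 0) column."""
    return dt.replace(microsecond=0), dt.microsecond // 1000

def join_ms(dt: datetime, ms: int) -> datetime:
    """Recombine the two stored columns into the original timestamp."""
    return dt + timedelta(milliseconds=ms)

original = datetime(2011, 5, 4, 12, 30, 15, 123000)  # …12:30:15.123
seconds_part, ms_part = split_ms(original)
assert join_ms(seconds_part, ms_part) == original    # lossless round trip
```

Since the datetime column never carries fractional seconds, SQL Server has nothing to round, and the milliseconds survive intact in the numeric column.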