Sergey, thanks for the reply. When I run a query using your suggestion and some other techniques, I am NOT seeing the millisecond portion of the datetime value.
SELECT
    CONVERT(VARCHAR(32), INSERTED_ON, 121) AS cnv_121,
    CONVERT(FLOAT, INSERTED_ON) AS cnv_float,
    CONVERT(VARCHAR(32), INSERTED_ON, 109) AS cnv_109
FROM tbl

cnv_121                  cnv_float         cnv_109
2003-01-22 16:04:33.000  37641.6698263889  Jan 22 2003  4:04:33:000PM
2003-01-22 16:04:33.000  37641.6698263889  Jan 22 2003  4:04:33:000PM
2003-01-22 16:04:33.000  37641.6698263889  Jan 22 2003  4:04:33:000PM
...
2003-01-22 16:05:02.000  37641.670162037   Jan 22 2003  4:05:02:000PM
Any ideas why this would be the case? It's hard to believe that every insert happens exactly when the millisecond value is at 000, yet that is the case for all 3,904 rows in this particular table.
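For what it's worth, one common cause of uniform .000 values is the client application formatting timestamps to seconds precision before building the INSERT, rather than letting the GETDATE() default fire. A minimal Python sketch of that truncation (the specific format mask is an assumption about the app, not something taken from this thread):

```python
from datetime import datetime

# A value with real millisecond precision, as GETDATE() would produce
now = datetime(2003, 1, 22, 16, 4, 33, 123000)

# Many client layers format datetimes with a seconds-precision mask
# before sending the INSERT, silently dropping the milliseconds.
literal = now.strftime("%Y-%m-%d %H:%M:%S")
print(literal)  # 2003-01-22 16:04:33 -- stored as ...33.000
```

If the column were truly populated by the GETDATE() default, you would expect a mix of .000, .003, and .007 endings rather than .000 on every row.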
>Datetime is always stored with an accuracy of one three-hundredth of a second, rounded to increments of .000, .003, or .007 seconds. In EM, datetime values are shown without milliseconds, or even without the time part if it's 00:00:00.000. The following selects will return the actual values stored in a datetime field, including milliseconds.
>SELECT CONVERT(VARCHAR(32), fld_dt, 121) FROM mytable
>
>SELECT fld_dt FROM mytable
>
>
>>I have an application that has several datetime fields, but the level of precision stops at the seconds level. These columns are assigned a default value of GETDATE(), which returns a value down to the millisecond, but that is NOT the case for the actual columns in a table.
>>
>>Is there some type of global setting that defines the level of precision stored for a datetime column in a table?
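As a postscript: the one-three-hundredth-of-a-second storage Sergey describes can be sketched outside SQL Server. This Python approximation (the half-up rounding at the tick boundary is my assumption, not documented SQL Server semantics) shows why genuine GETDATE() values always end in .000, .003, or .007:

```python
from datetime import datetime, timedelta

def round_to_datetime_tick(dt):
    """Approximate SQL Server's datetime storage: time is kept as
    1/300-second ticks since midnight, so milliseconds snap to the
    familiar .000 / .003 / .007 increments."""
    midnight = dt.replace(hour=0, minute=0, second=0, microsecond=0)
    # Exact integer microseconds since midnight (avoids float error)
    us = (dt - midnight) // timedelta(microseconds=1)
    # One tick = 10000/3 microseconds; round half up (an assumption)
    ticks = (us * 3 + 5000) // 10000
    return midnight + timedelta(microseconds=ticks * 10000 // 3)

print(round_to_datetime_tick(datetime(2003, 1, 22, 16, 4, 33, 1000)))
# 2003-01-22 16:04:33  (.001 snaps down to .000)
```

Run against a sweep of millisecond inputs, every output lands on (roughly) a .000, .003, or .007 boundary, matching the increments in the quoted reply.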