Level Extreme platform
SCAN Confused?
Message
From: Jonathan Cochran, Alion Science and Technology, Maryland, United States (06/06/2001 16:29:42)
To: 06/06/2001 15:37:06

General information
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax
Title: Miscellaneous
Thread ID: 00515720
Message ID: 00516030
Views: 13
This message has been marked as a message which has helped answer the initial question of the thread.
I've tried the same thing. Although the datetimes look the same, they aren't. Go to one of the duplicates, save it to a variable, skip to the next record, and save it to another variable. Now, if you do an equality test on the two, FoxPro says they are different.
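The "display the same, compare different" behavior isn't specific to FoxPro; any floating-point representation shows it. A minimal sketch in Python (not VFP code) of the same effect:

```python
# Two values that display identically at normal precision
# but fail an exact equality test.
a = 0.1 + 0.2
b = 0.3

print(f"{a:.4f}", f"{b:.4f}")  # both display as 0.3000
print(a == b)                  # False: the stored values differ
```

VFP datetimes are stored as floating-point day fractions, so two datetime values can likewise look identical in a BROWSE yet compare as unequal.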

It appears to work fine if I do the following:
ldCount = {^2001-01-01 12:00:00AM}  && starting datetime
lnCount = 0
SCAN
   REPLACE PKEY WITH ldCount + lnCount  && always offset from the original value
   lnCount = lnCount + 1
ENDSCAN
or:
ldCount = {^2001-01-01 12:00:00AM}  && starting datetime
SCAN
   * Rebuild the datetime from its whole components to drop any fractional part
   ldCount = DATETIME( YEAR(ldCount), MONTH(ldCount), DAY(ldCount), HOUR(ldCount), MINUTE(ldCount), SEC(ldCount) )
   REPLACE PKEY WITH ldCount
   ldCount = ldCount + 1  && add one second
ENDSCAN
It definitely looks like a rounding issue to me. The first version avoids the small rounding error by always adding the counter to the initial datetime from scratch, so nothing accumulates. The second forces the datetime back to whole seconds after each addition, discarding any fractional remainder before it can build up.
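Since VFP stores a datetime internally as a floating-point day value (one second = 1/86400 of a day), the difference between the two patterns can be sketched numerically. This is Python rather than FoxPro, and the day number is a hypothetical stand-in for the 2001-01-01 start value:

```python
# Repeatedly adding one second (1/86400 of a day) to a float accumulates a
# tiny rounding error on every addition; computing "base + n seconds" from
# scratch each time does not.
SECOND = 1.0 / 86400.0   # one second as a fraction of a day
base = 730486.0          # hypothetical internal day number for the start date

accumulated = base
for _ in range(100_000):
    accumulated += SECOND          # the ldCount = ldCount + 1 pattern

direct = base + 100_000 * SECOND   # the ldCount + lnCount pattern

print(accumulated == direct)       # False: the loop has drifted
print(abs(accumulated - direct))   # the accumulated drift, in days
```

After roughly 90,000 iterations the drift is large enough that rounding to whole seconds can map two consecutive loop values onto the same stored second, which would produce exactly the kind of duplicates seen in the test.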

>Carol,
>
>As you have heard by now, you should be using an integer as your primary key. :)
>
>However, that doesn't make the behavior you are having any less disturbing (to me, anyway). I tried the test on my machine. I got 24 duplicate values, starting at record 92,000. Some notes:
>
>1) I get the same exact 24 duplicates every time, even when trying the following variations.
>
>2) Using 'do while not eof()' instead of SCAN gets the same bad results
>
>3) RLOCK(), REPLACE, UNLOCK does not solve the problem
>
>4) I don't think it is a Novell or network issue; it does it on my local hard drive as well
>
>5) I don't think speed is an issue. I put '? RECNO()' in my loop, started it running, and went to lunch. It took A LOT longer to run; still the same 24 duplicates.
>
>6) I put a WAIT statement for the record numbers between 92,000 and 92,030; letting the system pause during each loop made no difference. I also printed each ldCount to the screen in this range: ldCount was correct (no duplicates) but the stored values were not.
>
>7) Performing a FLUSH every 1,000 records, or every record for that matter, made no difference.
>
>8) I didn't go as far as to try a database transaction for each record.
>
>9) Using a straight counter (starting at 1) and storing in an integer field did NOT create duplicates.
>
>In the example duplicates you posted, you had a January 8 date in your 92,000th record. There are 86,400 seconds in a day... shouldn't you have a January 2 date in your 92,000th record?
>
>Anyway, I was bothered by the apparent bug, and this didn't take long to do. I was hoping Vlad was incorrect in his assessment, but it looks like he may be correct...