Speeding up a query
Message
From: Mike Yearwood (Toronto, Ontario, Canada), 05/10/2001 11:35:56
To: Jerry Tovar (Dana Corporation Dana It, Maumee, Ohio, United States), 04/10/2001 16:57:04
General information
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax
Miscellaneous
Thread ID: 00564341
Message ID: 00564826
Views: 23
>What I ended up doing for now is:
>
>*Create a cursor.
>SELECT mytable
>lnrecno = RECCOUNT() - 100
>GO lnrecno
>SCAN REST
> SCATTER MEMVAR
>* INSERT INTO mycursor FROM MEMVAR
>ENDSCAN
>
>This greatly improved the speed of getting the last X number of records. The table I am querying contains over a million records. Indexing on RECNO() is the real answer but I cannot take the time to do this now.
>
>
>
>>I know there is something else that can be done, if I think of it I will post it
>>
>>
>>>Thanks for the help, I was trying to avoid indexing on recno() but it sounds like I'll have to.
>>>

Actually, what you did is best. If you add a tag on RECNO(), you will improve your SQL code, but it won't be as fast as what you have now, and you might also run into a major problem!
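
For reference, here is a rough sketch of the SQL version that a RECNO() tag would let Rushmore optimize. The alias mytable and the cursor name mycursor are carried over from your post; the exact query is my assumption.

* Sketch only: assumes mytable is open and has a tag on RECNO()
SELECT mytable
lnFirst = RECCOUNT() - 100
SELECT * FROM mytable ;
    WHERE RECNO() > lnFirst ;
    INTO CURSOR mycursor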

To prove the problem, try this code ...

USE SOMETABLE EXCLUSIVE
SET MULTILOCKS ON                && required before enabling table buffering
INDEX ON RECNO() TAG recnum
=CURSORSETPROP("Buffering", 5)   && table buffering on
APPEND BLANK                     && buffered record is assigned a negative RECNO()
=TABLEUPDATE()                   && commit the buffered append
=CURSORSETPROP("Buffering", 1)   && buffering off
SET ORDER TO recnum
? SEEK(RECCOUNT())               && <<<<< FAILS!!!!
? SEEK(-1)                       && <<<<< WORKS?????

I wrote about this in the July 1997 FoxPro Advisor. With table or row buffering enabled, APPEND BLANK assigns a negative record number to each new record, and the index tags are updated as the user enters the record. TABLEUPDATE() changes the negative numbers to positive record numbers, but it doesn't update the index. That may be a bug, but it's not significant, because I suspect few people have tags based on RECNO(). It's almost a de facto standard to have a unique primary key for every record. You could compute the primary key generated 100 records before the latest one and extract those records with a query, or with a SEEK and a SCAN/ENDSCAN.
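
A minimal sketch of that primary-key approach, assuming a sequentially assigned integer key field named pk with a tag of the same name (both names, and the no-gaps assumption behind the "minus 100" arithmetic, are hypothetical):

* Sketch: grab roughly the last 100 records by primary key instead of RECNO()
SELECT mytable
SET ORDER TO pk
GO BOTTOM                          && highest key = most recently added record
lnCutoff = mytable.pk - 100        && key generated 100 records before the latest

* Rushmore can use the pk tag to optimize this query ...
SELECT * FROM mytable WHERE pk > lnCutoff INTO CURSOR mycursor

* ... or position with SEEK and walk the rest of the order
SELECT mytable
SET NEAR ON
=SEEK(lnCutoff + 1)
SET NEAR OFF
SCAN REST
    * copy or process each record here
ENDSCAN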

You might also consider...

GO RECCOUNT()-100
COPY REST TO mycursor.dbf

OR

COPY NEXT 100 TO mycursor.dbf

Unless you are adding the last 100 records to an existing cursor?
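
If you are adding them to an existing cursor, here is a minimal sketch along the lines of your own SCAN loop (it assumes mycursor is already open with a matching structure):

* Sketch: append the last 100 records of mytable into an existing cursor
SELECT mytable
GO RECCOUNT() - 100
SCAN REST
    SCATTER MEMVAR
    INSERT INTO mycursor FROM MEMVAR
ENDSCAN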

HTH