Maybe we're being fussy...
Hello,
I have an application that runs queries against fairly large tables (250,000 - 1.2M records). The queries can have conditions on any number of fields from one or two related tables. The output of the SELECT is a cursor containing the minimum fields needed to reference back into the original tables (for reports, etc.). My experience is that when the result set is small (the SELECT returns a few thousand records or less), this process is very fast. However, when the result set is large (same conditions, only now the SELECT returns up to *all* records for the
table), the process is very slow; and, in fact, adding index tags (yes, optimized) for the conditions in an attempt to speed up the SELECT does almost nothing (probably because nearly all records meet the conditions anyway; my theory).
Given that everything is optimized, and since the query time seems proportional to the size of the result set, my guess is that building the result cursor is where the time goes. In fact, I have done a few tests indicating that a SELECT .. WHERE .. INTO CURSOR .. from a single table is roughly as fast (that is, as slow) as a COPY .. FOR .. TO ...
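Roughly, the two versions I timed look like the sketch below (table, field, and key names here are simplified placeholders, not my actual schema; the date filter is just an example condition):

* Version 1: query related tables into a cursor, keeping only the
* key fields needed to reach back into the source tables.
SELECT inv.cInvoiceKey, cust.cCustKey ;
    FROM invoice inv, customer cust ;
    WHERE inv.cCustKey = cust.cCustKey ;
      AND inv.dInvDate >= {^2000-01-01} ;
    INTO CURSOR csrResult

* Version 2: single-table copy with an equivalent filter --
* in my tests, about the same speed as Version 1.
USE invoice
COPY TO temp FIELDS cInvoiceKey FOR dInvDate >= {^2000-01-01}

Both versions have to materialize every qualifying row, which is why I suspect the time is in writing the result set rather than in finding the rows.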
Is there a way to select from related tables that gets around this?
I'm not necessarily asking for a solution, maybe just a change in philosophy. ;)
As always, many thanks for any help!
Mark
"It hit an iceberg and it sank. Get over it."
Robert Ballard, discoverer of the Titanic wreckage.