>Hi Stephen,
>
>I guess the transaction table is quite long, and the query is scanning the whole table to get those values. Adding an index in general does not help for SQL here.
>
>If there are a lot of duplicate periods and the number of distinct periods is limited, you can speed up the whole process by using xBase commands. It is more work, but it does not need as much bandwidth, and bandwidth is what is currently slowing things down.
>
>The basic idea is as follows (code not tested):
>
>CREATE CURSOR Result (Period I)
>SELECT HistoryTable
>SET ORDER TO Period
>SET NEAR ON
>GO TOP
>DO WHILE !EOF("HistoryTable")
>   nPeriod = HistoryTable.Period
>   INSERT INTO Result VALUES (nPeriod)
>   SELECT HistoryTable
>   =SEEK(nPeriod + 1, "HistoryTable")
>ENDDO
>
>The idea is that each seek skips a whole chunk of duplicate periods at once.
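For what it's worth, that skip idea applies to any sorted structure; here is a rough Python sketch of it (the sorted list stands in for the indexed Period column, so the names and data are purely illustrative):

```python
from bisect import bisect_right

def distinct_periods(sorted_periods):
    """Collect the distinct values of a sorted list by seeking past
    each run of duplicates, like SEEK with SET NEAR ON on an index."""
    result = []
    i = 0
    while i < len(sorted_periods):
        value = sorted_periods[i]
        result.append(value)
        # Binary-search for the first entry greater than the current
        # value: one O(log n) "seek" jumps over the whole duplicate run.
        i = bisect_right(sorted_periods, value, i)
    return result

print(distinct_periods([1, 1, 1, 2, 3, 3, 3, 3, 7]))  # [1, 2, 3, 7]
```

With few distinct values, the work is roughly one binary search per distinct value instead of one visit per row.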
I have this schema in SQL Server:
CREATE TABLE [TestPeriod] (
    [TestDate] [smalldatetime] NULL,
    [period] [int] NULL
) ON [PRIMARY]
GO
I am filling it with the datetime and the month, one row per minute, for the past 25 years.
I have over 5 million rows so far, and it takes less than a second to pull the distinct periods. This is while data is still being inserted by a while loop:
declare @d datetime
set @d = dateadd(yy, -25, getdate())
print @d
WHILE @d < getdate()
BEGIN
    set @d = dateadd(n, 1, @d)
    insert into TestPeriod (TestDate, period)
    values (@d, datepart(m, @d))
END
PRINT 'Too many rows!'
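As a side note on size: one insert per minute over 25 years works out to roughly 13 million rows once the loop finishes, which a quick stdlib Python calculation (the dates used here are illustrative) confirms:

```python
import datetime as dt

def fill_loop_rows(end: dt.datetime) -> int:
    """Rows the minute-by-minute fill loop produces: one insert per
    minute from 25 years before `end` up to `end`."""
    try:
        start = end.replace(year=end.year - 25)
    except ValueError:
        # `end` is Feb 29 and the year 25 back is not a leap year.
        start = end.replace(year=end.year - 25, day=28)
    return int((end - start).total_seconds() // 60)

print(fill_loop_rows(dt.datetime(2004, 6, 1)))  # 13150080
```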
I have no index on the table because this is a load operation.
I guess that dragging the DBF data down the pipe is what takes the time? I can't see a seek being the way to go for finding the next largest value when you're in 2 million rows of data.