>>>Hi Larry, this is not exactly true. You can have the same response time and yet one process can be more efficient than the other. To gauge efficiency, look at the statistics to see how many reads the statement performed: the more efficient one will issue fewer reads to SQL Server, and fewer reads means fewer chances of encountering a lock somewhere.
>>
>>Alexandre,
>>This implies that there is a performance boost when dealing with higher volumes of rows.
>>
>>Fewer reads per row operation * More rows = Delta
>>
>>As the number of rows increases, my delta should increase which translates into a performance gain. However, if fewer reads never translate into any kind of performance gain, then it isn't more efficient. It is simply a different way, IMO.
>>
>>As for locks, I am talking about queries only. Unless I want to repeat the same query over and over again and issue a HOLDLOCK in my SELECT statement, I shouldn't need to worry about locks other users are holding.
>Larry, this is not true. Take an example: if you do an UPDATE inside a transaction, you hold an exclusive lock on the records you have updated. If someone then runs a SELECT that for some reason touches one of those records, that person will be blocked.
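The blocking scenario described in the quoted text can be sketched in T-SQL as follows (the `Orders` table, column names, and values are hypothetical, just for illustration):

```sql
-- Connection 1: an open transaction holds an exclusive (X) lock
-- on the updated row until it commits or rolls back.
BEGIN TRANSACTION;
UPDATE Orders SET Status = 'Shipped' WHERE OrderID = 42;
-- ...transaction left open, lock still held...

-- Connection 2 (default READ COMMITTED isolation): this SELECT
-- blocks on the locked row until connection 1 finishes.
SELECT * FROM Orders WHERE OrderID = 42;
```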
If you use the defaults of SS, yes, this is true. The default of SS (at least 2000) is READ COMMITTED. However, for simple querying, I change the transaction isolation level to READ UNCOMMITTED, and then no locking interferes. This makes SS act much like VFP: the SELECT reads the data as it currently stands, without waiting on locks other users hold (at the risk of reading uncommitted changes).
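A minimal sketch of the two points above, checking reads via statistics and querying without lock waits (the `Customers` table is hypothetical):

```sql
-- Report logical/physical reads per statement, so the more
-- efficient query (fewer reads) can be identified.
SET STATISTICS IO ON;

-- Allow dirty reads: SELECTs on this connection will not
-- wait on locks held by other users' open transactions.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT * FROM Customers;

-- The NOLOCK table hint gives the same effect for a single query:
SELECT * FROM Customers WITH (NOLOCK);
```

Note the session-level SET applies to every statement on that connection until changed, while the NOLOCK hint scopes the behavior to one table reference in one query.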
Larry Miller
MCSD
LWMiller3@verizon.net

Accumulate learning by study, understand what you learn by questioning. -- Mingjiao