CLOSE ALL
SET SAFETY OFF
CREATE DATABASE junk
CREATE TABLE junktable NAME junktable (junkfield C(10))
INDEX ON junkfield TAG junkfield FOR NOT DELETED()
FOR xx = 1 TO 1000000
   IF MOD(xx,10000) = 0   && Create special key every 10,000 records
      INSERT INTO junktable VALUES (REPLICATE("X",10))
      IF xx = 100000      && Delete one of them just for the heck of it
         DELETE
      ENDIF
   ELSE                   && Otherwise create a random value and delete 2% of them
      INSERT INTO junktable VALUES (SYS(2015))
      IF RAND() <= 0.02
         DELETE
      ENDIF
   ENDIF
ENDFOR
CLEAR
SYS(3054,1)
? "Filtered Index with Set Deleted Off:"
SET DELETED OFF
lnSec = SECONDS()
SELECT * FROM junktable WHERE junkfield = REPLICATE("X",10) INTO ARRAY myarr   && No optimization... 0.485 seconds
? SECONDS() - lnSec, "seconds"
? "Filtered Index with Set Deleted On:"
SET DELETED ON
lnSec = SECONDS()
SELECT * FROM junktable WHERE junkfield = REPLICATE("X",10) INTO ARRAY myarr   && Fully optimizable... 0.000 seconds
? SECONDS() - lnSec, "seconds"

I discovered this by mistake. I was answering a question regarding key uniqueness at www.foxite.com and mentioned that a filter of NOT DELETED() could be put on their candidate key to help solve their problem. I didn't go the extra mile, though, and neglected to mention that this option is not optimizable. Andy Kramek jumped in and pointed that out, so, just for the heck of it, I wrote a little program to test that assumption, since I, like Andy and every other developer out there, had carried that notion in my head for more than a decade without ever questioning it. I wrote and ran the program in VFP9, and SYS(3054) said the query WAS fully optimized. Upon running the same program in VFP6, it was NOT. Andy ran the program in VFP7 and VFP8 (which I don't have), and it was not optimizable there either. So this is a behavior that was added in VFP9.