Bypassing the 16 field limit in index
Message
From:  Mike Yearwood (Toronto, Ontario, Canada)  01/02/2014 16:41:09
To:  01/02/2014 14:53:12
General information
Forum:  Microsoft SQL Server
Category:  Indexing
Environment versions
SQL Server:  SQL Server 2008
Application:  Web
Miscellaneous
Thread ID:  01592711
Message ID:  01592888
Views:  41
>>Your users can decide to query on any combination of up to two dozen columns? My preferred use case is to build dynamic SQL and allow the user to query any way they want. SQL will build all the execution plans and cache them based on the queries your users construct. Take the user's query and sort the columns alphabetically. That will increase the chance of reusing an existing execution plan.
>
>...much more actually. We have hundreds of lists, and some of them have 25 to 35 fields. Users can enter criteria in one or many columns. It is the same as here: if you go to your replies, you will see a list with several columns. Imagine having many lists like that, with 25 to 35 fields that are not always the same, since the table has over 200 fields and other lists show different ones; add to that 2 to 5 million records.
>
>As I mentioned, the possibilities are in the trillions. There is just no way I can predict what they will enter and search for. I am not sure what you mean by dynamic SQL. Each command creates new SQL based on what has to be searched.

That sounds like dynamic SQL to me. Many will say never to create dynamic SQL, but it can be done safely.
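
Here is a rough sketch of what I mean (the table and column names are just made up for illustration). Keep the user's values out of the SQL string, pass them through sp_executesql, and append the predicates in a fixed alphabetical column order so the same combination of filters always produces the same SQL text and can reuse a cached plan:

DECLARE @City   nvarchar(50) = N'Toronto';   -- user-entered criteria (NULL = not filtered)
DECLARE @Status int          = NULL;

DECLARE @sql    nvarchar(max) = N'SELECT CustomerID, City, Status FROM dbo.Customer WHERE 1 = 1';
DECLARE @params nvarchar(500) = N'@pCity nvarchar(50), @pStatus int';

-- Append one predicate per filled-in column, in alphabetical column order,
-- so identical filter combinations generate identical SQL text.
IF @City IS NOT NULL   SET @sql += N' AND City = @pCity';
IF @Status IS NOT NULL SET @sql += N' AND Status = @pStatus';

-- Parameterized execution: no concatenation of user values into the string,
-- and the plan built for this exact text is cached and reused.
EXEC sys.sp_executesql @sql, @params, @pCity = @City, @pStatus = @Status;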

>
>>My understanding is that INCLUDE lets SQL return column data from the index without accessing the table - covering the query. That is going to be complex if you have 2 dozen indexes.
>
>There are about 200 indexes, and we have about 40 major indexes which combine 2 to 16 fields with zero to sometimes 40 included fields.
>
>>The execution plan shows that it used index xfield2field1. I suppose SQL uses the first index it can find. After deleting the xfield2field1 index, the above SQL used index xfield2, which is what I would have preferred. That's basically how Fox optimizes. I do not see any large benefit to it using xfield2field1.
>
>Well, we need to have an index on each field as is, as this is used widely. And, in many places, we need composite indexes. And, yes, as you mentioned, it is sometimes frustrating that SQL Server will bypass a solo index and decide to use a non-optimized composite index instead.

Can you show me an example of where a composite index is needed?
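
For reference, this is the distinction I am picturing, with made-up table and index names, between a composite key and a single-key index that covers by way of INCLUDE (included columns do not count against the 16-key-column limit):

-- Composite key: both columns are part of the key, and both count toward the 16-key limit.
CREATE NONCLUSTERED INDEX xField2Field1 ON dbo.BigTable (Field2, Field1);

-- Single-key index that still covers queries wanting Field1 in the output:
-- Field1 is carried as an INCLUDEd column, not a key, so the key stays
-- narrow to seek and the 16-key-column limit is not touched.
CREATE NONCLUSTERED INDEX xField2 ON dbo.BigTable (Field2) INCLUDE (Field1);

-- A query filtered only on Field2 can be satisfied entirely from xField2.
SELECT Field1, Field2 FROM dbo.BigTable WHERE Field2 = 'text2';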

>
>>If you do not have the simple index on field1 it will advise you to add it. Why? Are composite indexes not good enough?
>
>I need both.

Not based on what I just saw.

>
>>There may be a little benefit in using one composite index to handle several queries. How long would it take SQL to search the composite index of 16 keys to find that the 10th key is the one to use? It must have to bypass 14 of the keys to find the two keys needed below.
>
>Most of the composite indexes have 4 to 8 fields, and they serve several purposes that need to stay as is. This is a very large application and we have to think very globally about what it does.

It seems to me that having every field indexed with a simple index and no covering will handle every possible need.
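
Roughly like this, again with hypothetical names: one simple index per searchable field, and the optimizer is free to seek two of them and intersect the row locators when the user filters on two columns (whether it actually chooses the intersection depends on its cost estimates):

-- One simple, single-key index per searchable field.
CREATE NONCLUSTERED INDEX xField1  ON dbo.BigTable (Field1);
CREATE NONCLUSTERED INDEX xField24 ON dbo.BigTable (Field24);

-- For a two-column filter the optimizer can seek each index separately
-- and intersect (join) the two sets of row locators, rather than
-- scanning the table or needing a composite index.
SELECT Field1, Field24
FROM   dbo.BigTable
WHERE  Field1  = 'text1'
  AND  Field24 = 'text24';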

>
>>Is one composite index of 16 keys better than 16 simple (non-composite) indexes? In your case, you have two dozen keys. So that means you'd need at least 2 composite indexes (16 keys in one and 8 in the other). Are two composite indexes better than 2 dozen simple indexes? My *guess* would be to add only the 2 dozen simple indexes, avoid composite keys and not INCLUDE any other columns.
>
>Without composite indexes, in many places, this will bring the system down. There is just no way we can handle such a design without having composite indexes. It would simply create timeouts, deadlocks and so on. We need a response time of 0.2 to 2 seconds at most on those very big queries, and we are trying to find an intermediate point which will fit both.
>
>>If you have 2 composite indexes and your user queries field1='text1' and field24='text24'
>>
>>SQL will have to exclude 22 keys across 2 composite indexes. It will have to merge join the results. Can that be any faster than using indexes field1 and field24 and merge joining the results? The latter sounds simpler to me. What about 3 indexes? Is that only one merge join?
>>
>>The 24 indexes will do exactly what I expect. For 2 columns it will do 2 index seeks and 1 merge join; for 3 it will do three index seeks and two merge joins. I have not done any timing tests. Maybe 2 fully covered composite indexes are better? Can SQL update two composite indexes faster than 24 simple indexes? That may be a benefit too. It would make a nice article to contrast 2 fully covered composite indexes versus 24 non-covered simple indexes versus 24 fully covered simple indexes.
>
>Updates have never been an issue, no matter how intensive they are or how many indexes we have. Whenever a transaction needs to update the table, it is instant.

Good

> The problem lies in the flexibility the client wants: to search practically everywhere on anything. Not every application allows this. If you take a look at this site, I do. However, on very big tables with dozens and dozens of fields, and very big lists with so many choices, many applications will simply allow simple searches and not offer that many possibilities.

I understand that need. There is room to find ways to meet the client's wants, but at the moment I am not aware of what those wants are. Handling large lists can always be done with joins to tables containing the user's selected choices. I'd be interested to learn more.
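
As a sketch of that last idea, with made-up names: load whatever the user ticked in the list into a small keyed temp table and join to it, rather than building an enormous IN list:

-- Load the user's selected choices into a small keyed temp table...
CREATE TABLE #SelectedChoices (ChoiceValue nvarchar(50) PRIMARY KEY);

INSERT INTO #SelectedChoices (ChoiceValue)
VALUES (N'text1'), (N'text7'), (N'text24');

-- ...then join to it, so even a very large selection stays a simple,
-- index-friendly join instead of a huge IN (...) clause.
SELECT t.Field1, t.Field24
FROM   dbo.BigTable AS t
JOIN   #SelectedChoices AS s
       ON s.ChoiceValue = t.Field1;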