Looking for best index
Message
From
28/02/2016 12:14:36
To
28/02/2016 11:35:56
General information
Forum: Microsoft SQL Server
Category: Other
Environment versions
SQL Server: SQL Server 2014
OS: Windows 8.1
Network: Windows 2008 Server
Miscellaneous
Thread ID: 01632184
Message ID: 01632249
Views: 44
>>I do realize that this has nothing to do with selecting the best index for your particular query timing out.
>>
>>I was heavily involved for over a dozen years in designing data and statistical models for vehicles, engines, optional equipment [packs] and their used prices (think Blue Book in the States), even if this was last century. One game changer was moving away from finding/identifying the specific model as a function of correct information entered by a user, to the idea that ***all*** user input is saddled with errors (in very few cases approaching zero), and ranking all models according to their "error" distance from the user-given input (not too easy to implement on last century's hardware). No idea how involved you are there and how much influence you have, but consider such an approach if it is not already implemented.
>
>What exactly did the concept involve?

Calculating weighted differences for each "attribute" on the one hand, and somewhat heavy-handed inclusion in / exclusion from the set the weighted differences are calculated upon, as well as ordering both the "heavily recalculated" and the "excluded" set. The calculation was done in coroutines (a Modula-2 implementation of parallel execution under DOS, GemDOS, OS/2 and the NT family). As this was a data entry task, such minimal calculation steps had to run between keypresses and had to implement a multi-level strategy, probably best described by analogy to multi-sweep garbage collection strategies in languages like C# or Java. It was heavily tailored to the input task and somewhat adjustable if the structure and/or quality of the data input source was known. This made manual "control" feasible: previously, cars were sometimes not correctly identified, throwing off the total price calculation of the statistical model (done via multiple regression and factorial decomposition from a time series estimate reached by combining "typical" time series), especially if not many cars of that particular make, model and year were sampled.
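
A rough sketch in Python (not the original Modula-2 code; the attribute names, weights and slice size are made up) of the two ideas above: a weighted per-attribute "error" distance used to rank every candidate model against possibly faulty user input, and a generator so the scoring can run in small slices between keypresses:

from typing import Dict, Iterator, List, Tuple

# Hypothetical weights: how much a mismatch on each attribute "costs".
WEIGHTS = {"make": 5.0, "model": 3.0, "year": 1.0, "engine_kw": 0.5}

def attribute_distance(candidate: Dict, user_input: Dict) -> float:
    # Weighted sum of per-attribute differences: absolute difference for
    # numeric fields, 0/1 mismatch for text fields, no penalty for fields
    # the user has not entered yet.
    total = 0.0
    for attr, weight in WEIGHTS.items():
        given = user_input.get(attr)
        if given is None:
            continue
        cand = candidate[attr]
        if isinstance(given, (int, float)):
            total += weight * abs(cand - given)
        else:
            total += weight * (0.0 if str(cand).lower() == str(given).lower() else 1.0)
    return total

def rank_incrementally(candidates: List[Dict], user_input: Dict,
                       slice_size: int = 50) -> Iterator[List[Tuple[float, Dict]]]:
    # Score the candidates slice by slice so the caller can interleave this
    # work with keyboard handling; each yield is the ranking built so far.
    scored: List[Tuple[float, Dict]] = []
    for start in range(0, len(candidates), slice_size):
        for cand in candidates[start:start + slice_size]:
            scored.append((attribute_distance(cand, user_input), cand))
        scored.sort(key=lambda pair: pair[0])
        yield scored

The candidate with the smallest distance is the best guess for the car the user meant; candidates beyond some distance threshold would fall into the "excluded" set mentioned above.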

As the correlation-based statistical methods include the concept of squared difference from the arithmetic mean, making sure data was not mistakenly put into false "bins" helped reduce model errors a lot ;-))
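
A toy numeric illustration of that point (made-up prices, not the original data): a single luxury car mis-identified as a compact inflates the within-bin squared error around the mean far beyond what the correctly binned data shows.

from statistics import mean

def sum_squared_error(prices):
    # Sum of squared differences from the bin's arithmetic mean.
    m = mean(prices)
    return sum((p - m) ** 2 for p in prices)

compact_prices = [7800, 8100, 7950, 8200]   # correctly identified compacts
luxury_prices = [24500, 25200, 24900]       # correctly identified luxury cars

correct = sum_squared_error(compact_prices) + sum_squared_error(luxury_prices)

# Same cars, but one luxury car was entered into the compact bin:
misbinned = (sum_squared_error(compact_prices + [24500])
             + sum_squared_error(luxury_prices[1:]))

print(f"within-bin squared error, correct bins:  {correct:,.0f}")
print(f"within-bin squared error, one car wrong: {misbinned:,.0f}")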