Looking for best index
Message
From
29/02/2016 03:39:07
 
 
To
28/02/2016 17:50:13
General information
Forum:
Microsoft SQL Server
Category:
Other
Environment versions
SQL Server:
SQL Server 2014
OS:
Windows 8.1
Network:
Windows 2008 Server
Miscellaneous
Thread ID:
01632184
Message ID:
01632272
Views:
30
>>Calculating weighted differences for each "attribute" on the one hand, and somewhat heavy-handed in/exclusion from the set the weighted differences are calculated upon, as well as ordering the "heavily recalculated" and the "excluded" sets. The calculation was done in coroutines (a Modula-II implementation of parallel execution under DOS, GemDOS, OS/2 and the NT family). As this was a data entry task, such minimal calculation tasks had to run between keypresses and had to implement a multi-level strategy, probably best described by analogy to multi-sweep garbage collection strategies in languages like C# or Java. It was heavily tailored to the input task and somewhat adjustable if the structure and/or quality of the data input source was known. This made manual "control" feasible, as previously cars were sometimes not correctly identified, throwing off the total price computed by the statistical model (done via multiple regression and factorial decomposition from a time series estimate reached by combining "typical" time series) - especially if not many cars of that particular make, model and year were sampled.
>>
>
>A simple "take a look at Data Mining models and algorithms" might have sufficed :)

I interpreted Michel's question to revolve around the things found to be working. I did not always find the final solution on the first try, but I was far ahead of other well-known companies trying their approaches - and since such information/expertise is the thing the company I was working for lives on, they did NOT just blindly put their trust in me: a lot of their money was burned to make certain nobody else could create a better prediction model.

If your line was meant as a hint on how I should have proceeded: googling for such keywords was hard, as:
a) Google did not exist
b) Yahoo and other earlier search engines did not exist
c) the Internet was at a much earlier stage
d) nobody wanted to sell a book/article on such a topic in the late '80s, when I got involved ;-))

>Some companies out there have paid hundreds of thousands of dollars for those types of calculations. Some wound up saving money in the long run...and some were just hundreds of thousands of dollars poorer at the end :)

The bolded part is certainly true. "Just poorer": no. When the regulators showed up late in the game, having spent a lot MORE money on "other" research than just on the final approach put an end to the investigation better than any bullshit bingo ever could have.

>Any company looking to go down the road you described will need to make sure the key business users are prepared to spend some time in a dual-effort sanity check. Even the savvy business users wind up discovering a thing or two in the process of evaluating results.

When the German used car market moved into areas literally never seen before (the fall of the East German wall siphoning off nearly every used car on the West German market, a subsidy to wreck old cars and buy new ones, plus the economic situation triggering that subsidy), working from a mathematical model with parameters at least half understood was better than trying to guesstimate from the always unevenly distributed data points available when asked for the worth (not the price!) of a specific car, perhaps stolen or smashed by another driver...
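
To illustrate the per-attribute weighted-difference filtering described in the quoted text at the top of this message, here is a minimal sketch. The attribute names, weights and cut-off are invented for illustration, and it is written in Python rather than the original Modula-II coroutine implementation:

    # Hypothetical sketch: score reference cars against the entered car by a
    # weighted per-attribute difference, exclude the worst matches, and order
    # the rest before feeding them to the price estimate.
    WEIGHTS = {"year": 3.0, "mileage_km": 0.0001, "engine_kw": 0.5}  # invented weights

    def weighted_difference(entered, reference):
        # Sum of weighted absolute differences over the chosen attributes.
        return sum(w * abs(entered[a] - reference[a]) for a, w in WEIGHTS.items())

    def comparable_set(entered, references, cutoff=50.0):
        scored = [(weighted_difference(entered, r), r) for r in references]
        kept = [(s, r) for s, r in scored if s <= cutoff]        # exclusion step
        return [r for s, r in sorted(kept, key=lambda x: x[0])]  # ordering step

The original ran these scoring passes incrementally between keypresses; the sketch only shows the scoring, exclusion and ordering logic itself.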
