>Calculating weighted differences for each "attribute" on the one hand, and somewhat heavy-handed inclusion in/exclusion from the set the weighted differences are calculated upon on the other, as well as ordering the "heavily recalculated" and the "excluded" sets. The calculation was done in coroutines (the Modula-2 implementation of parallel execution under DOS, GemDOS, OS/2 and the NT family).
>
>As this was a data entry task, these minimal calculation tasks had to run between keypresses and had to implement a multi-level strategy, probably best described by analogy to the multi-sweep garbage collection strategies of languages like C# or Java. It was heavily tailored to the input task and somewhat adjustable if the structure and/or quality of the data input source was known.
>
>This made manual "control" feasible: previously, cars were sometimes not correctly identified, which threw off the total price computed by the statistical model (multiple regression and factorial decomposition applied to a time series estimate obtained by combining "typical" time series), especially if only a few cars of that particular make, model and year had been sampled.
>
>As the correlation-based statistical methods rely on the squared difference from the arithmetic mean, making sure data was not mistakenly put into the wrong "bins" helped reduce model errors a lot ;-))
Very interesting and very detailed, thanks for sharing this.
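
For anyone curious what the between-keypress scheme could look like, here is a minimal sketch in Python, with generators standing in for the Modula-2 coroutines. All names, the record layout and the slice size are made up for illustration; nothing here is from the original program. The generator rescores a bounded slice of records each time it is resumed, visiting recently edited records first, which is the part that resembles an incremental, multi-sweep GC pass.

```python
# Hypothetical sketch, not the original Modula-2 code: a Python generator
# plays the role of a coroutine, doing a bounded slice of work each time
# the input loop resumes it between keypresses.

def weighted_difference(record, reference, weights):
    """Weighted sum of per-attribute differences against a reference record."""
    return sum(w * abs(record[attr] - reference[attr])
               for attr, w in weights.items())

def incremental_rescore(records, reference, weights, slice_size=20):
    """Yield after every `slice_size` rescored records.

    Recently edited ("dirty") records are rescored first -- the
    "heavily recalculated" set -- and the remaining records are swept
    afterwards, much like young vs. old generations in a GC.
    """
    dirty = [r for r in records if r.get("dirty")]
    clean = [r for r in records if not r.get("dirty")]
    done = 0
    for rec in dirty + clean:
        rec["score"] = weighted_difference(rec, reference, weights)
        rec["dirty"] = False
        done += 1
        if done % slice_size == 0:
            yield  # hand control back to the input loop until the next keypress

# Usage inside an input loop:
#   worker = incremental_rescore(records, reference, weights)
#   ... on each keypress: next(worker, None)  # one slice of background work
```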
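
And on the last point, the effect of a wrong bin is easy to demonstrate numerically. A toy example with made-up prices (nothing from the actual data set): the residual sum of squares against each bin's arithmetic mean explodes once a single expensive car is filed under the cheap bin, because the deviation enters squared.

```python
from statistics import mean

def rss(bins):
    """Residual sum of squares: squared deviation from each bin's mean."""
    total = 0.0
    for prices in bins.values():
        m = mean(prices)
        total += sum((p - m) ** 2 for p in prices)
    return total

correct = {"bin_a": [10_000, 10_500, 9_800],
           "bin_b": [31_000, 29_500, 30_200]}
misfiled = {"bin_a": [10_000, 10_500, 9_800, 31_000],  # one car in the wrong bin
            "bin_b": [29_500, 30_200]}

print(f"RSS, correct bins:  {rss(correct):,.0f}")   # ~1.4 million
print(f"RSS, one misfiled:  {rss(misfiled):,.0f}")  # ~330 million
```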