>Hi,
>Sorry, I didn't explain properly; my thoughts are as follows.
>
>1. Conversion of a whole word rather than each individual character has to be faster.
>
>2. Assuming that the first and last characters of the word are correct, e.g. wluod = would, searching a dictionary for a 5-character word starting with "w" and ending with "d" greatly reduces the decision set.
>
>3. This would probably work with spell checking as well.
>
>4. This idea may work with NEAR finds as well.
>
>This has probably been researched before, but I thought I would mention it just in case it hadn't.
>
>As we know, processing speed depends upon what perspective we look at things from, i.e. the bigger the chunk of data that is accurately converted, the faster the program runs.
>
>Who knows, someone may apply this to some other process and come up with a significant increase in speed.
In an OCR application, this would surely make a lot of sense. I was just trying to show that the original set of assumptions (that the first and last character of each word remain in place and the text is still readable) is wrong, because the example given was weak. The words weren't sufficiently scrambled; too many phonemes remained intact (ch, th and other pairs were often left together). Once those are broken, the readability goes down the drain.
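To illustrate what I mean by "sufficiently scrambled", here is a rough Python sketch (the sentence and seed are just examples I made up) that keeps only the first and last letters fixed and fully shuffles everything in between, which tends to break up digraphs like "ch" and "th":

```python
import random

def scramble_inner(word, rng):
    """Keep the first and last letters, shuffle all inner letters."""
    if len(word) <= 3:
        return word  # nothing meaningful to shuffle
    inner = list(word[1:-1])
    rng.shuffle(inner)
    return word[0] + "".join(inner) + word[-1]

rng = random.Random(42)
sentence = "the phonemes should be thoroughly scrambled before testing readability"
print(" ".join(scramble_inner(w, rng) for w in sentence.split()))
```

Run it a few times with different seeds: once the familiar letter pairs are split apart, the output gets much harder to read than the usual demonstration text.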
And who knows, maybe your ideas are already applied in existing OCR and spell-checking software.
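Your point 2 is easy to sketch, by the way. A toy Python filter (the word list here is just a made-up sample, not a real dictionary) that narrows the candidates by length, first letter and last letter:

```python
def candidates(garbled, dictionary):
    """Return dictionary words matching the garbled word's
    length, first letter and last letter."""
    return [w for w in dictionary
            if len(w) == len(garbled)
            and w[0] == garbled[0]
            and w[-1] == garbled[-1]]

words = ["would", "world", "wound", "water", "work", "weird"]
print(candidates("wluod", words))
# → ['would', 'world', 'wound', 'weird']
```

Even this crude filter cuts the search space sharply; a real spell checker would then rank the survivors, e.g. by letter overlap or word frequency.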