Friday fun with words
Message
From
27/09/2004 10:22:44
Dragan Nedeljkovich (Online)
Now officially retired
Zrenjanin, Serbia
To
25/09/2004 21:52:13
Neil Mc Donald
Cencom Systems P/L
The Sun, Australia
General information
Forum:
Politics
Category:
Other
Miscellaneous
Thread ID:
00945823
Message ID:
00946248
Views:
23
>Hi,
>Sorry I didn't explain properly; my thoughts are as follows.
>
>1. Conversion of a whole word rather than each individual character has to be faster.
>
>2. Assuming that the first and last characters of the word are correct, e.g. wluod = would, searching a dictionary for a 5-character word starting with "w" and ending with "d" greatly reduces the decision set.
>
>3. This would probably work with spell checking as well.
>
>4. This idea may work with NEAR finds as well.
>
>This has probably been researched before, but I thought I would mention it just in case it hadn't.
>
>As we know, processing speed is dependent upon what perspective we look at things from, i.e. the bigger the chunk of data that is accurately converted, the faster the program runs.
>
>Who knows, someone may apply this to some other process and come up with a significant increase in speed.

In an OCR application, this would surely make a lot of sense. I was just trying to prove that the original set of assumptions (the first and last character of each word remain in place and the text is still readable) is wrong, because the example given was weak. The words weren't sufficiently scrambled; too many phonemes remained intact (ch, th and other letter pairs were often left together). Once these are broken, the readability goes down the drain.

And who knows, maybe your ideas are already applied in existing OCR and spell-checking software.
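The dictionary-lookup idea in point 2 above can be sketched roughly like this. The tiny word list is a stand-in for a real dictionary, and the anagram check on the middle letters is my own addition to narrow the candidates further:

```python
def candidates(scrambled, dictionary):
    """Keep only dictionary words with the same length, first letter,
    and last letter as the scrambled word, whose middle letters are an
    anagram of the scrambled middle -- a much smaller decision set
    than checking every word character by character."""
    first, last, n = scrambled[0], scrambled[-1], len(scrambled)
    middle = sorted(scrambled[1:-1])
    return [w for w in dictionary
            if len(w) == n and w[0] == first and w[-1] == last
            and sorted(w[1:-1]) == middle]

# Tiny stand-in dictionary (an assumption, not real data)
words = ["would", "world", "wound", "weird", "water"]
print(candidates("wluod", words))  # -> ['would']
```

With a real dictionary, the length plus first/last-letter filter alone would typically cut tens of thousands of entries down to a handful before any per-character work is done.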

back to same old

the first online autobiography, unfinished by design
What, me reckless? I'm full of recks!
Balkans, eh? Count them.