Perhaps some old photographic formulas might help, though I can't remember where I put my textbook. I'm thinking of the equations relating image size to distance and focal length. As focal length and object distance change, the image SIZE (and perspective) changes. The robot would "see," for example, a larger square as it approaches. Computationally, you'd probably want the program to "weigh" the images by summing the number of contiguous blue pixels to estimate how close the block might be. A distant small block would not be counted as part of the foreground because its pixels would not be contiguous with the closer image. Perfect focus might not be an issue, so exact shape (square or spherical) might matter less than blue image size and location.
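For what it's worth, the photographic relation I was thinking of is the pinhole approximation: projected size ≈ f · H / D (focal length times actual size over distance), so a nearer block projects a bigger blob. Here's a rough sketch of the contiguous-pixel idea in C++ — a plain flood fill over a binary "blue" mask that returns the largest connected blob's pixel count. All the names here are my own invention, not any particular vision library, and a real implementation would get the mask from a color-threshold pass on the camera frame.

```cpp
#include <cstddef>
#include <queue>
#include <utility>
#include <vector>

// Size (in pixels) of the largest contiguous "blue" region in a binary
// mask, where mask[y][x] == true means the pixel was classified as blue.
// The bigger the blob, the closer the block is assumed to be; small,
// detached blobs are effectively ignored as background.
std::size_t largestBlueBlob(const std::vector<std::vector<bool>>& mask) {
    if (mask.empty() || mask[0].empty()) return 0;
    const int h = static_cast<int>(mask.size());
    const int w = static_cast<int>(mask[0].size());
    std::vector<std::vector<bool>> seen(h, std::vector<bool>(w, false));
    std::size_t best = 0;
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            if (!mask[y][x] || seen[y][x]) continue;
            // Breadth-first flood fill over 4-connected neighbours.
            std::size_t count = 0;
            std::queue<std::pair<int, int>> q;
            q.push({y, x});
            seen[y][x] = true;
            while (!q.empty()) {
                auto [cy, cx] = q.front();
                q.pop();
                ++count;
                const int dy[4] = {-1, 1, 0, 0};
                const int dx[4] = {0, 0, -1, 1};
                for (int i = 0; i < 4; ++i) {
                    const int ny = cy + dy[i], nx = cx + dx[i];
                    if (ny >= 0 && ny < h && nx >= 0 && nx < w &&
                        mask[ny][nx] && !seen[ny][nx]) {
                        seen[ny][nx] = true;
                        q.push({ny, nx});
                    }
                }
            }
            if (count > best) best = count;
        }
    }
    return best;
}
```

The robot could then treat the blob count as a crude range estimate: track it frame to frame, and a growing count means the block is getting closer.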
Another idea would be a sequence of "memory" images stored in RAM of known reference locations. As the robot approaches a known location, the camera image and the memory image would exhibit greater and greater similarity ... much like the way a human child learns.
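A very crude way to score that camera-versus-memory similarity would be a normalized pixel difference — the score climbs toward 1.0 as the live frame converges on the stored one. This is just my own sketch (hypothetical names, grayscale frames as flat byte arrays, both the same size); a serious version would need to tolerate small shifts and lighting changes.

```cpp
#include <cstdlib>
#include <vector>

// Similarity in [0, 1] between two equal-sized grayscale images stored as
// flat vectors of 8-bit intensities. 1.0 means the images are identical;
// values near 0 mean they disagree almost everywhere.
double imageSimilarity(const std::vector<unsigned char>& camera,
                       const std::vector<unsigned char>& memory) {
    if (camera.size() != memory.size() || camera.empty()) return 0.0;
    long long diff = 0;
    for (std::size_t i = 0; i < camera.size(); ++i)
        diff += std::abs(static_cast<int>(camera[i]) -
                         static_cast<int>(memory[i]));
    // 255 is the maximum possible per-pixel difference.
    return 1.0 - static_cast<double>(diff) / (255.0 * camera.size());
}
```

The robot would keep one stored image per reference location and steer toward whichever one its current frame matches best, watching the score rise as it closes in.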
Too bad you're using C++ ... I seem to recall some functions in Java that involved hash tables and pattern-matching features of something called an "agent." But that was several years ago ...