I am developing a mobile robot that uses a top-mounted camera to capture information about its surroundings. To simplify the problem, I am using special blue square-faced obstacles (all the same size) which can be scattered around a room. The idea is that the robot could calculate its distance from these objects and build a "bird's-eye view" map of its environment, relative to itself.
However, I am unsure how to arrive at an algorithm for calculating these distances. From the robot's perspective, it has a camera image of the known-size blue blocks to work with: it knows the physical size of the blocks and the size of their appearance in the image. It should therefore be possible to compare the on-screen size of each block against its known physical size and calculate the robot's distance from the block, and possibly the blocks' distances from each other. Make sense?
Finally, any calculation would also need to handle situations where the objects the robot sees are not of a known size.
If anyone has any ideas in this connection, your comments would be appreciated. FYI, this is not driven by Fox <s> - this is a C++ API.
Best
-=Gary