I have the ambitious aim of giving my quadcopter stereo vision, so I have recently been testing the suitability of on-board stereo processing via OpenCV. So far, with an unoptimized build of OpenCV, processing is far too slow to do in realtime. I also just made the last day of author registration for IGARSS; I hope it went through in time.
Some USB devices arrived from DealExtreme: a USB GPS running at 1 Hz / 38400 baud, and a Ralink Wireless-N dongle which has no prebuilt driver in Linux, or at least not one that autoloads when the device is plugged in. I will investigate the wireless dongle later, but for now it frees up a slot to try out stereo. So here is the rig.
The GStreamer capture and the OpenCV stereo_match sample both work as expected: it takes about 300 ms to grab frames and about 12300 ms to compute the disparity map with default settings. Much can be improved on the OpenCV side to reduce the processing time. For comparison, my laptop running the same set of tools (GStreamer for Windows with ksvideosrc as the frame source, and OpenCV 2.1) captured frames in 250 ms and produced the disparity map in 300 ms.
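To see why an unoptimized disparity computation is so expensive, here is a minimal sum-of-absolute-differences (SAD) block matcher in plain NumPy. This is only a sketch of the kind of search a stereo matcher performs, not OpenCV's actual implementation; the function name, block size, and disparity range are illustrative. The triple-nested loop (every pixel, every candidate disparity, every patch comparison) is exactly the cost that optimized builds vectorize away.

```python
import numpy as np

def sad_disparity(left, right, block=5, max_disp=16):
    """Naive SAD block matching on rectified grayscale pairs.

    For each pixel in the left image, slide a patch leftward across
    the right image and keep the shift with the lowest SAD cost.
    All names and defaults here are illustrative, not OpenCV's API.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1,
                         x - half:x + half + 1].astype(int)
            best_cost, best_d = None, 0
            # Candidate disparities, clamped so the patch stays in-bounds.
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(int)
                cost = np.abs(patch - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Even on a tiny image this does on the order of width × height × max_disp patch comparisons, which is why a pure-C-loop (or pure-Python) version is hopeless on an embedded board while an SSE/NEON-optimized build runs orders of magnitude faster.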
So a realtime stereo-vision-based quadcopter may not be possible, but data capture and indoor map building with post-processing is definitely feasible. I also got a quote for the Hokuyo laser scanner; though significantly more expensive than the stereo rig (a few hundred thousand yen), it has a larger field of view and more reliable point-cloud production, independent of features on the target surface.