Entirely depends on the application. The Euclid device might be a little spendy for the 'maker' market, but it includes a fisheye camera + IMU combo for SLAM (like Google's Tango tablet) and a 30 fps stereo depth camera for densely reconstructing an environment. It seems perfectly targeted at students learning robotics.
Heh - RealSense is not exactly one depth camera. This particular form factor uses active IR stereo (the R200 camera), so it also works outdoors, where it loses the projected texture (though outdoors it's usually not needed anyway).
Actually, I don't think so. It's kind of hard to get any technical information on the RealSense, and I know that Intel is making different versions of it, so please correct me if I'm wrong and they have one that really does stereo. On http://www.intel.com/content/www/us/en/architecture-and-tech... they say the following:
"The Intel® RealSense™ Camera F200 is actually three cameras in one—a 1080p HD camera, an infrared camera, and an infrared laser projector"
So this just looks like the first Kinect: the infrared camera observes the projected pattern, and the RGB camera is there to capture the color information. You can't really match an infrared image (which is also covered with a laser pattern) against a visible-light image, as they will look very different. So you would need yet another camera (infrared or visible light) in order to do stereo.
The robot is using the R200, an active stereo camera, not the F200. Even the F200 uses a fundamentally different technique (coded light, a projected Gray code) rather than the structured light the Kinect uses.
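For anyone curious about the difference in principle: a stereo camera (active or passive) recovers depth by triangulation from the disparity between matched pixels in two views. A toy sketch in Python — the focal length and baseline numbers here are made up for illustration, not actual R200 specs:

```python
# Stereo triangulation: Z = f * B / d
#   f = focal length in pixels, B = baseline between the two cameras in metres,
#   d = disparity (horizontal pixel offset of the same point in the two images).
# The IR projector in an *active* stereo system just adds texture so that
# matching works on blank surfaces; the depth math is unchanged.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth in metres for one matched pixel pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: f = 600 px, B = 7 cm, d = 42 px gives roughly 1 m of depth.
print(depth_from_disparity(600.0, 0.07, 42.0))
```

This is also why matching an IR image against an RGB image doesn't work: the two views have to show the same texture for the disparity search to find correspondences.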
Source: I work as a computer vision engineer on these products for Intel RealSense.
With the drone, the scan spaces don't overlap, and it only needs to detect obstacles, not fine details. It's trickier when they are scanning all sides of an object at the same time, because the projectors' structured-light patterns will tend to interfere with each other.
It's not just musicians: the OSC protocol is used a lot in HCI research for prototyping (TUIO is built on top of OSC). For my master's thesis, I implemented a system that let a user seamlessly transfer content (an open picture, a playing video, etc.) from device to device in a room (phones, pads, tabletop computers, etc.), and the protocol gluing everything together was built on OSC.
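Part of OSC's appeal for prototyping is how simple the wire format is — you can hand-roll messages without a library. A minimal encoder for the OSC 1.0 message format (function names are my own, and this sketch only covers the int/float/string argument types):

```python
import struct

def _osc_string(s: str) -> bytes:
    """OSC-string: UTF-8 bytes, null-terminated, padded to a 4-byte boundary."""
    b = s.encode("utf-8") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode an OSC 1.0 message: address pattern, type tag string, arguments."""
    tags = ","                      # type tag string always starts with ','
    payload = b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # big-endian float32
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # big-endian int32
        elif isinstance(a, str):
            tags += "s"
            payload += _osc_string(a)
        else:
            raise TypeError(f"unsupported OSC argument type: {type(a)}")
    return _osc_string(address) + _osc_string(tags) + payload

# '/fader/1' with one float argument; send the resulting bytes over UDP
# (OSC's usual transport) to whatever is listening.
packet = osc_message("/fader/1", 0.5)
```

Everything being 4-byte aligned and big-endian is what makes OSC so easy to glue heterogeneous devices together with — any language with a byte-packing primitive can speak it.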
Thanks, both, for taking the time to check this out and offer these insights and links. I was so blown away by the UI on the Star Wars demo that I was sure they'd developed it themselves. Looks like I didn't need to build these at all! (Although I will be rebuilding them using SVG and writing tutorials on them as I go along.)