Therefore, when the Microsoft Xbox Kinect sensor came to Europe, I could not resist buying one. If you have been living in a cave during the Christmas 2010 season, someone should tell you that the Kinect is a combined IR emitter and camera that can map an environment in three dimensions, so that the Xbox can precisely locate your various body parts and let you control a game with your bare hands.
I have seen very appealing projects involving the Kinect. Short of direct Gibsonian brain-computer communications, gesturing in the air is the ultimate interface, and many computer geeks have been seen drooling at the sight of Tom Cruise scanning criminal records in Minority Report.
Controlling what happens on a computer screen is fine, but having a solid piece of hardware respond to your hand-waving is much cooler. Here is what I could do with the limited resources (in terms of time and money) that I could gather.
The robot moves on two independent wheels. The user controls the two motors by moving his hands forward and backward, as if pushing and pulling two levers. The driving interface is therefore similar to that of a tank or other tracked vehicle. The speed of each motor is proportional to the displacement of the corresponding hand from a predefined "neutral" position.
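To make the mapping concrete, here is a minimal sketch of how such a proportional mapping can be written; the function name and the normalization range are illustrative choices of mine, not the actual code of my program:

```cpp
// Hypothetical sketch: map a hand's forward/backward displacement
// from the neutral position to an NXT-style motor power.
int handToPower(double handZ, double neutralZ, double range)
{
    // Normalize the displacement to [-1, 1].
    double d = (handZ - neutralZ) / range;
    if (d > 1.0)  d = 1.0;
    if (d < -1.0) d = -1.0;
    // Mindstorms NXT motor power runs from -100 (full reverse)
    // to +100 (full forward).
    return (int)(d * 100.0);
}
```

The same function is applied independently to each hand, one per motor, which is what gives the tank-like feel.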
Well, being able to control the robot just by moving your hands is awesome; however, grasping two real levers would give better control over the mechanism. In particular, what's missing is the feedback that physical controls provide about the neutral position: finding the spot where both motors stay still is quite difficult (although I set a threshold in my program to help with this).
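The threshold is just a dead zone around the neutral position, something along these lines, with an arbitrarily chosen width:

```cpp
// Hypothetical dead zone: small displacements around neutral are
// treated as "stop", so the robot does not creep while the hands rest.
const double DEAD_ZONE = 0.15;  // fraction of the full displacement range

int powerWithDeadZone(double d)  // d: normalized displacement in [-1, 1]
{
    if (d > -DEAD_ZONE && d < DEAD_ZONE)
        return 0;
    return (int)(d * 100.0);
}
```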
In order to glue the drivers together, I modified a pair of sample programs found on the net. The first acquires body joint coordinates from the PrimeSense software that interfaces with the Kinect drivers, and sends those coordinates on a specific UDP port. It is adapted from a very useful and interesting application by Stephen Howell that lets the Kinect control the Scratch educational program. Stephen's article is, in fact, the main source of information that I used to make the Kinect work.
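For the curious, the hand-off between the two programs amounts to little more than this; a minimal Winsock sketch, where the port number and the plain-text message format are assumptions of mine, not necessarily what the actual programs use:

```cpp
// Hypothetical sketch of the sending side: pack the two hands'
// coordinates into a text datagram and fire it at a local UDP port.
#include <winsock2.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

void sendJoints(double lx, double ly, double lz,
                double rx, double ry, double rz)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);  // in reality, once per process

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    sockaddr_in dest = {};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(5000);                   // assumed port
    dest.sin_addr.s_addr = inet_addr("127.0.0.1"); // same machine

    char msg[128];
    int n = sprintf_s(msg, "%f %f %f %f %f %f", lx, ly, lz, rx, ry, rz);
    sendto(s, msg, n, 0, (sockaddr*)&dest, sizeof(dest));

    closesocket(s);
}
```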
The second program, based on sample code for controlling the Lego Mindstorms kit via USB and Bluetooth, receives the joint coordinates and transforms them into commands for the robot.
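The receiving side is the mirror image; again a sketch under the same assumed port and message format, with setMotor() standing in for whatever function the Mindstorms sample actually exposes for driving a motor:

```cpp
// Hypothetical sketch of the receiving side: read joint datagrams,
// map each hand's depth to a motor power, and drive the two motors.
#include <winsock2.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

int  handToPower(double handZ, double neutralZ, double range); // see above
void setMotor(char port, int power);  // stand-in for the Mindstorms call

void receiveLoop(SOCKET s)  // s: a UDP socket already bound to the port
{
    const double NEUTRAL_Z = 1.5;  // meters from the sensor (assumed)
    const double RANGE     = 0.4;  // full-lever displacement (assumed)

    char buf[128];
    double lx, ly, lz, rx, ry, rz;
    for (;;)
    {
        int n = recvfrom(s, buf, sizeof(buf) - 1, 0, NULL, NULL);
        if (n <= 0)
            continue;
        buf[n] = '\0';
        if (sscanf_s(buf, "%lf %lf %lf %lf %lf %lf",
                     &lx, &ly, &lz, &rx, &ry, &rz) != 6)
            continue;
        setMotor('B', handToPower(lz, NEUTRAL_Z, RANGE));  // left motor
        setMotor('C', handToPower(rz, NEUTRAL_Z, RANGE));  // right motor
    }
}
```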
The two programs that I linked above are just quick-and-dirty modifications of existing programs; I invite you to refer to the original sources (see above) to understand how things really work.
The code compiles directly on Windows with Visual Studio compilers; other compilers and operating systems can be targeted with straightforward modifications (namely, to the socket and USB communication functions).