LATEST VERSION 1.0 (25.06.2012)
ATTENTION!
This module is OBSOLETE! Use UKinect (OpenNI + NiTE 2.x) instead.

Description

[Image: openni1]

Human body posture detection is an important problem that has recently been tackled using various approaches. The most common ones are based either on depth map generation or on classification of human body parts in a camera image. The rapid growth of entertainment technologies related to body posture recognition has resulted in the availability of cheap and reliable 3D sensors, such as Microsoft Kinect [http://www.microsoft.com/en-us/kinectforwindows/], Asus Xtion Pro Live [http://www.asus.com/Multimedia/Motion_Sensor/Xtion_PRO_LIVE/] and SoftKinetic DepthSense 311 [http://www.softkinetic.com/]. Most of these solutions are based on the structured light method, while some others use an alternative technique based on time-of-flight. Currently, Kinect is the sensor most often used in interactive robotics research projects.

Kinect is a low-cost device for 3D measurements that provides both a 2D colour image and a structured-light-based depth map. The RGB camera provides VGA resolution (640x480 px), while the depth sensor's resolution is limited to 300x200 px; the depth image is, however, interpolated inside the device to VGA size. The depth sensor range is limited to 0.4 - 6.5 m. Moreover, Kinect is equipped with a 4-microphone array, a 3-axis accelerometer and a motor controlling the tilt angle of the sensor head. Communication with the sensor uses a standard USB interface. The manufacturer of the depth map technology used in Kinect (PrimeSense [http://www.primesense.com]) supports the OpenNI library described below. OpenKinect is an open community of people working on free, open source libraries that enable the Kinect to be used with Windows, Linux and Mac. The primary focus of this group is the libfreenect software [https://github.com/OpenKinect/libfreenect]. This library currently supports access to the RGB and depth images, the motor, the accelerometer and the LED; access to the audio is under development. OpenNI (Open Natural Interaction) is a multi-language, cross-platform framework that forms a standard API enabling communication with both vision and audio sensors and with perception software. This project breaks the dependency between the sensor and the software: OpenNI's API enables applications to be written and ported with no additional effort to operate on top of different modules.

[Image: openni3]

OpenNI is an open source API that is publicly available [http://www.OpenNI.org]. Currently OpenNI supports the Kinect and Xtion sensors. Moreover, the main partner of the OpenNI organization (PrimeSense) provides NiTE - the Natural Interaction Middleware. It allows applications to perceive the world in 3D and to comprehend, translate and respond to human movements without any wearable equipment or controllers. NiTE provides human body detection, skeleton extraction and simple gesture recognition. A person can be detected and tracked with her/his wire-frame skeleton, which gives the location of the body parts in the Kinect image. It is also possible to create a colour recognition system using another module [http://lirec.ict.pwr.wroc.pl/~flash/?q=node/75]. The colour of the object held in the hand is sampled in a small region around the end-point of the forearm, while the colour of the shirt (jacket) is sampled around the crossing of the torso lines.

[Image: openni2]

Module functions

UKinectOpenNI.new(flag1, flag2, flag3);
flag1 - activateImageComponent (true or false)
flag2 - activateDepthComponent (true or false)
flag3 - activateUserComponent (true or false)
UKinectOpenNI.refreshData(); - obtain new data from the Kinect; needs to be called in a loop if you want results in real time
UKinectOpenNI.getSkeleton(baseImage); - draw the detected skeletons on the skeleton image, using baseImage as background (for example the camera image or the depth map image)
UKinectOpenNI.image; - access to the UImage with the camera image
UKinectOpenNI.depth; - access to the UImage with the depth map image; one unit of pixel value corresponds to 25 mm
UKinectOpenNI.skeleton; - access to the UImage with the skeleton image (see getSkeleton)
UKinectOpenNI.matchDepthToImage(true or false); - enable/disable matching (calibrating) the depth map to the camera image

UKinectOpenNI.numUsers; - number of detected user(s)
UKinectOpenNI.getUsersID(); - obtain the list of tracked users' IDs
UKinectOpenNI.getVisibleUsersID(jointNumber); - obtain the list of visible users' IDs; a user is considered visible if the given joint (jointNumber, see the list of joint numbers below) is visible
UKinectOpenNI.jointConfidence; - access to the joint identification confidence level, a float in the range 0-1
UKinectOpenNI.getJointPosition(userID, jointNumber); - get the vector defining the position of the given joint for the given user; userID is one of the IDs returned by getVisibleUsersID()
UKinectOpenNI.getJointImageCoordinate(userID, jointNumber); - get the 2D image coordinates of the given joint for the given user
UKinectOpenNI.getDepthXY(x,y); - get the depth map value at pixel (x,y)
UKinectOpenNI.getDepthMedianFromArea(x1, y1, x2, y2); - get the median depth value from the rectangular area defined by (x1,y1) and (x2,y2); see the sketch below
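
A minimal sketch of how the two depth queries above might be used (it assumes the module is loaded and constructed as in the examples below; the pixel coordinates are arbitrary):

loadModule("UKinectOpenNI");
var Global.Kinect=UKinectOpenNI.new(true,true,true);
tag1:loop {
   Kinect.refreshData();                                                    // fetch a fresh depth frame
   echo("depth at (320,240): " + Kinect.getDepthXY(320,240));               // single-pixel depth value
   echo("median depth of central area: " + Kinect.getDepthMedianFromArea(300,220,340,260));  // median over a 40x40 px area
},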

UKinectOpenNI.setLed(color); - change the Kinect LED colour: 1 - green; 2 - red; 3 - orange; 4 - flashing green; 5 - fast flashing green; 6 - red/orange; 7 - fast red/orange (see the colour-cycling sketch below)
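
For illustration, the colour codes could be cycled as in this minimal sketch (it assumes the module is already loaded as in the examples below; the sleep duration is arbitrary):

loadModule("UKinectOpenNI");
var Global.Kinect=UKinectOpenNI.new(true,true,true);
for (var color : [1,2,3,4,5,6,7]) {
   Kinect.setLed(color);   // switch to the next documented colour code
   sleep(1s);              // keep each colour visible for a second
};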

UKinectOpenNI.getAccelerometer(); - access the gravity vector coordinates XYZ:
X - towards the Kinect's right,
Y - towards the Kinect's bottom,
Z - towards the Kinect's rear
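
A minimal sketch reading the gravity vector (as an illustration it assumes that refreshData() also refreshes the accelerometer reading and that the returned vector prints as a list):

loadModule("UKinectOpenNI");
var Global.Kinect=UKinectOpenNI.new(true,true,true);
tag1:loop {
   Kinect.refreshData();
   echo("gravity vector [X,Y,Z]: " + Kinect.getAccelerometer());
},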

UKinectOpenNI.motorMove(absoluteAngle); - move the motorized head to the given absolute tilt angle (0° < absoluteAngle < 26°); a short sketch follows
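
A short sketch of the motor control (the angles are illustrative and kept inside the documented range; the sleeps simply give the head time to move):

loadModule("UKinectOpenNI");
var Global.Kinect=UKinectOpenNI.new(true,true,true);
Kinect.motorMove(1);      // tilt the head close to the lowest documented position
sleep(2s);
Kinect.motorMove(25);     // tilt the head close to the highest documented position
sleep(2s);
Kinect.motorMove(13);     // return roughly to the middle of the range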

UKinectOpenNI.fps; - module performance in frames per second (max. 30)

jointNumbers

HEAD = 1,
NECK = 2,
TORSO = 3,
WAIST = 4,
LEFT_COLLAR = 5,
LEFT_SHOULDER = 6,
LEFT_ELBOW = 7,
LEFT_WRIST = 8,
LEFT_HAND = 9,
LEFT_FINGERTIP = 10,
RIGHT_COLLAR = 11,
RIGHT_SHOULDER = 12,
RIGHT_ELBOW = 13,
RIGHT_WRIST = 14,
RIGHT_HAND = 15,
RIGHT_FINGERTIP = 16,
LEFT_HIP = 17,
LEFT_KNEE = 18,
LEFT_ANKLE = 19,
LEFT_FOOT = 20,
RIGHT_HIP = 21,
RIGHT_KNEE = 22,
RIGHT_ANKLE = 23,
RIGHT_FOOT = 24 
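
These joint numbers are passed to the joint-related functions above. As an example, a minimal sketch printing the head position of the first visible user (HEAD = 1, using the same setup as the examples below):

loadModule("UKinectOpenNI");
var Global.Kinect=UKinectOpenNI.new(true,true,true);
tag1:loop {
   Kinect.refreshData();
   var users = Kinect.getVisibleUsersID(1);                             // users whose head joint is visible
   if (users.size>0) {
      echo("head position: " + Kinect.getJointPosition(users[0],1));    // 1 = HEAD
   };
},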

How to use in urbiscript

Example 1

loadModule("UKinectOpenNI");                         // call the librairy <strong>UKinetOpenNI</strong>
var Global.Kinect=UKinectOpenNI.new(true,true,true);// creation of a Global value <strong>Kinect</strong> to facilitate the programming
Kinect.matchDepthToImage(true);
tag1:loop {                                        //loop which allows to have results in real time, and a skeleton tracking on depth images.
   Kinect.refreshData();
   Kinect.getSkeleton(Kinect.depth);
}, 

Example 2

loadModule("UKinectOpenNI"); 
var Global.Kinect=UKinectOpenNI.new(true,true,true);
Kinect.matchDepthToImage(true);
tag1:loop {
   Kinect.refreshData();                        // results in real time
   Kinect.getSkeleton(Kinect.image);            // skeleton tracking on the camera image
   if (Kinect.getVisibleUsersID(3).size>0) {  //if the torso is detected
      Kinect.setLed(2);                      // Kinect LED color becomes red
   } else {
      Kinect.setLed(1);                    //else it becomes green
   };
},

Example 3

loadModule("UKinectOpenNI");
var Global.Kinect=UKinectOpenNI.new(true,true,true);
Kinect.matchDepthToImage(true);
var users;
tag1:loop {
   Kinect.refreshData();
   Kinect.getSkeleton(Kinect.image);
   users=Kinect.getVisibleUsersID(3);
   if (users.size>0) {                                                           // if the torso is detected, i.e. there is someone in view
      echo("position of right hand: " + Kinect.getJointPosition(users[0],15));   // users[0] = first visible user's ID, 15 = RIGHT_HAND joint
      Kinect.setLed(2);
   } else {
      Kinect.setLed(1);
   };
},

Example 4

loadModule("UImageTool");
loadModule("UKinectOpenNI");
var Global.Tool=UImageTool.new();
Tool.createImage(640,480,0,0,0);
Tool.updateImage;
var Global.Kinect=UKinectOpenNI.new(true,true,true);
Kinect.matchDepthToImage(true);
tag1:loop {
   Kinect.refreshData();
   Kinect.getSkeleton(Kinect.image);
   var users=Kinect.getVisibleUsersID(3);
   if (users.size>0) {
      var handPosition = Kinect.getJointImageCoordinate(users[0],9);   // 9 = LEFT_HAND joint
      if (handPosition.size>0) {
         Tool.setImage(Kinect.image);                                  // copy the camera image into the tool
         Tool.imgMedianBlur(11);                                       // smooth the image before sampling
         var pixelVal = Tool.getPixelValue(handPosition[0],handPosition[1]);   // sample the colour at the hand position
         Tool.putCircle(handPosition[0],handPosition[1],40,pixelVal[0],pixelVal[1],pixelVal[2],-1);   // draw a circle of the sampled colour at the hand position
         Tool.updateImage; Kinect.setLed(2);
      };
   } else {
      Tool.createImage(640,480,0,0,0);
      Tool.updateImage; Kinect.setLed(1);
   };
},

Download

LINK

EMYS and FLASH are Open Source and distributed according to the GPL v2.0 © Rev. 0.8.0, 27.04.2016
