Camshift for Hand Tracking

Camshift uses color data to locate the largest sub-region of an image whose color probability distribution most closely matches a template color histogram. To do that, we first obtain a region of the image to sample for color data and construct a histogram of the hue values contained in that region. Each new frame is then sampled around the previous window to find the new window that best fits the histogram signature.
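The histogram and back-projection step described above can be sketched as follows. This is a minimal illustration in plain numpy (the actual implementation is not specified in the text; the 16-bin quantization, the 0-179 hue range, and the toy image are all assumptions for the example):

```python
import numpy as np

def hue_histogram(hue_roi, bins=16):
    # Normalized histogram of hue values in the sampled region
    # (hues assumed in 0..179, as in a typical HSV representation).
    hist, _ = np.histogram(hue_roi, bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def back_project(hue_img, hist, bins=16):
    # Replace each pixel's hue with the probability of that hue
    # under the template histogram.
    idx = np.clip((hue_img.astype(int) * bins) // 180, 0, bins - 1)
    return hist[idx]

# Toy example: a "hand" patch of hue ~10 on a background of hue ~90.
img = np.full((8, 8), 90)
img[2:6, 2:6] = 10

hist = hue_histogram(img[2:6, 2:6])   # sample the hand region
prob = back_project(img, hist)        # hand pixels -> high probability
```

The search window would then be shifted toward the region of highest probability mass in `prob`, which is what the camshift iteration does on each new frame.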
Gesture Recognition

To determine the current gesture, we first compute the convex hull of the hand and obtain a sequence of convexity defects for the hull. Only those defects having depths greater than a predefined threshold value are taken into account and used to count the number of fingers; this step disregards the insignificant defects formed over the rest of the hand. Once the gesture is identified, this information, along with the velocity information obtained in the predictive windowing step of the camshift module, is used to control the hand and feet motion of a digital puppet.
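The defect-filtering step can be sketched as below. The exact mapping from defects to a finger count is not given in the text; this sketch assumes the common heuristic that each sufficiently deep defect is a valley between two extended fingers, so k significant defects imply k + 1 fingers. The threshold value and the sample depths are illustrative, not from the source:

```python
def count_fingers(defect_depths, depth_threshold=20.0):
    # Keep only defects deeper than the threshold; shallow defects
    # along the palm and wrist are treated as noise and ignored.
    significant = [d for d in defect_depths if d > depth_threshold]
    # k deep valleys between fingers imply k + 1 extended fingers;
    # no significant defects is taken here as a closed fist.
    return len(significant) + 1 if significant else 0

# Hypothetical defect depths (in pixels) from an open-hand contour:
# four deep valleys between five fingers, plus shallow noise.
depths = [45.0, 52.0, 48.0, 50.0, 6.0, 3.0]
fingers = count_fingers(depths)
```

In practice the depths would come from the convexity defects computed on the hand contour, and the threshold would be tuned to the hand's scale in the image.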
Instead of working with the entire captured image, we provide the gesture recognition module with the back projection of the tracked hand obtained from the camshift module. This effectively suppresses defects arising from other, irrelevant objects in the scene.
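One simple way to realize this hand-off is to zero out the back projection outside the tracked window before passing it on, so that skin-colored objects elsewhere in the frame cannot contribute spurious defects. This is a hedged sketch of that idea; the window format `(x, y, w, h)` and the helper name are assumptions:

```python
import numpy as np

def restrict_to_window(back_proj, window):
    # Keep back-projection probabilities only inside the tracked
    # hand window (x, y, w, h); everything else is zeroed.
    x, y, w, h = window
    masked = np.zeros_like(back_proj)
    masked[y:y + h, x:x + w] = back_proj[y:y + h, x:x + w]
    return masked

# Toy back projection: uniform probability everywhere.
prob = np.ones((6, 6))
masked = restrict_to_window(prob, (1, 1, 3, 3))
```

The gesture module then extracts the hand contour from `masked` rather than from the full frame.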