HANDYTRACK: HPC FOR HAND GESTURE DATASET GENERATION AND DEEP LEARNING TRAINING FOR DETECTION AND TRACKING


The project is dedicated to Artificial Intelligence, specifically Computer Vision and Machine Learning: training neural networks for hand gesture recognition and generating training data from 3D models to reduce development time and improve recognition accuracy.

Start date: 01/06/2021

Duration in months: 18

Problem Description

Hand tracking and gesture recognition are critical components of AR (Augmented Reality) as they allow users to interact with virtual objects in a natural and intuitive way. Current tools are either limited to predefined gestures or require developing expensive and time-consuming custom solutions.

Goals

Reduce time to solution

Challenges

To enhance the user experience in AR, two main challenges need to be addressed: reducing latency and detecting hand gestures accurately across different scenarios. AR experiences should be smooth and responsive, but standard products are hardware-dependent and calibrated for one hand at a time.

Innovation results

The HandyTrack solution utilizes HPC to enhance hand gesture detection and tracking algorithms. It uses deep learning and 3D models to generate large synthetic datasets, reducing training time and enhancing accuracy. It is a hardware-agnostic framework, allowing for unlimited dynamic gesture recognition.
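The core idea of generating labeled training data from a 3D hand model can be sketched as below. This is a minimal illustration only: the joint count, the pinhole projection parameters, and the gesture names are assumptions for the sketch, not details taken from the HandyTrack implementation.

```python
# Hypothetical sketch: sampling 3D hand poses and projecting them to 2D
# keypoints yields labeled samples with perfect ground truth, with no
# manual annotation. All constants here are illustrative assumptions.
import random

HAND_JOINTS = 21  # a common hand-skeleton convention (wrist + 4 joints per finger)

def random_pose(rng):
    """Sample a random 3D joint configuration (metres, camera frame)."""
    return [(rng.uniform(-0.1, 0.1),
             rng.uniform(-0.1, 0.1),
             rng.uniform(0.3, 0.6)) for _ in range(HAND_JOINTS)]

def project(points_3d, focal=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of 3D joints to 2D pixel coordinates."""
    return [(focal * x / z + cx, focal * y / z + cy) for x, y, z in points_3d]

def generate_dataset(n_samples, gestures, seed=0):
    """Return (2D keypoints, gesture label) pairs for network training."""
    rng = random.Random(seed)
    return [(project(random_pose(rng)), rng.choice(gestures))
            for _ in range(n_samples)]

samples = generate_dataset(1000, ["pinch", "grab", "point"])
```

Because every sample is synthesized, the dataset size is limited only by compute, which is where HPC resources pay off.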

Business impact

HPC significantly reduces training time by up to 80% compared to traditional computing, enabling faster development and deployment of hand gesture recognition. It reduces costs by 99.6% for custom gesture generation, offers a natural user experience, and allows for an unlimited hand gesture dataset.
