
Gestural teleoperation of a mobile robot based on visual recognition of sign language static handshapes

Type:  
In Proceedings


Year: 2009
Authors: Costas Tzafestas; N. Mitsou; N. Georgakarakos; O. Diamanti; P. Maragos; Stavroula-Evita Fotinea
Book title: Proc. of 18th IEEE International Symposium on Robot and Human Interactive Communication
Address: Japan
Date: Sept. 27-Oct. 2
Abstract:
This paper presents results achieved within the DIANOEMA research project, in which visual analysis and sign recognition techniques have been explored on Greek Sign Language (GSL) data, aiming, besides GSL modelling, at a pilot application in mobile robot teleoperation. A small vocabulary of hand signs is designed to enable desktop-based teleoperation at a high level of supervisory telerobotic control. Real-time visual recognition of the hand images is performed by training a multi-layer perceptron (MLP) neural network. Various shape descriptors of the segmented hand-posture images have been explored as inputs to the MLP network, including Fourier shape descriptors of the contour of the segmented hand-sign images, moments, compactness, eccentricity, and the histogram of curvature. The paper examines which of these shape descriptors are best suited for real-time recognition of hand signs, in relation to the number and choice of hand postures, in order to achieve maximum recognition performance. The hand-sign recognizer has been integrated into a graphical user interface and successfully applied in a pilot application for real-time desktop-based gestural teleoperation of a mobile robot vehicle.
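As a rough illustration of the kind of features the abstract mentions, the sketch below computes three of them from a closed 2-D hand contour: normalized Fourier descriptor magnitudes, compactness, and eccentricity. This is a minimal NumPy sketch under assumed conventions (contour as an ordered point array, translation/scale-normalized Fourier magnitudes), not the paper's exact feature extraction; the resulting vector would be the input to an MLP classifier.

```python
import numpy as np

def shape_descriptors(contour, n_fourier=8):
    """Compute a small feature vector from a closed 2-D contour.

    contour: (N, 2) array of ordered (x, y) boundary points.
    Returns: n_fourier Fourier magnitudes + [compactness, eccentricity].
    """
    # Fourier descriptors of the contour, treated as a complex signal.
    z = contour[:, 0] + 1j * contour[:, 1]
    coeffs = np.fft.fft(z)
    # Drop the DC term (translation invariance) and divide by |F1|
    # (scale invariance); magnitudes are also rotation invariant.
    mags = np.abs(coeffs[1:n_fourier + 1]) / (np.abs(coeffs[1]) + 1e-12)

    # Compactness = P^2 / (4*pi*A): 1 for a circle, larger otherwise.
    d = np.diff(np.vstack([contour, contour[:1]]), axis=0)
    perimeter = np.sum(np.hypot(d[:, 0], d[:, 1]))
    x, y = contour[:, 0], contour[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    compactness = perimeter ** 2 / (4 * np.pi * area)

    # Eccentricity from the central second moments of the boundary points.
    centered = contour - contour.mean(axis=0)
    cov = np.cov(centered.T)
    evals = np.sort(np.linalg.eigvalsh(cov))  # ascending
    eccentricity = np.sqrt(1.0 - evals[0] / evals[1])

    return np.concatenate([mags, [compactness, eccentricity]])

# Sanity check on a circle: compactness ~ 1, eccentricity ~ 0.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
feat = shape_descriptors(circle)
```

Such descriptors are attractive for real-time use because they are cheap to compute per frame and invariant to the hand's position and distance from the camera, leaving the MLP to discriminate only the posture shapes themselves.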