Mistry, J. and Inden, B. ORCID: 0000-0001-6048-6856, 2018. An approach to sign language translation using the Intel Realsense camera. In: 10th Computer Science and Electronic Engineering Conference (CEEC’18), University of Essex, Colchester, 19-21 September 2018.
Full text: 12053_Inden.pdf (post-print, 1 MB)
Abstract
An Intel RealSense camera is used for translating static manual American Sign Language gestures into text. The system uses palm orientation and finger joint data as inputs for either a support vector machine or a neural network whose architecture has been optimized by a genetic algorithm. A data set consisting of 100 samples of 26 gestures (the letters of the alphabet) is extracted from 10 participants. When comparing the different learners in combination with different standard preprocessing techniques, the highest accuracy of 95% is achieved by a support vector machine with a scaling method, as well as principal component analysis, used for preprocessing. The highest performing neural network system reaches 92.1% but produces predictions much faster. We also present a simple software solution that uses the trained classifiers to enable user-friendly sign language translation.
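The best-performing pipeline described above (feature scaling, then principal component analysis, then a support vector machine) can be sketched with scikit-learn. The feature dimensionality, synthetic data, and all hyperparameters below are illustrative assumptions, not the authors' exact configuration:

```python
# Sketch of a scaling + PCA + SVM pipeline for 26-class gesture data.
# Feature layout (palm orientation plus finger joint data) and all
# hyperparameters are assumptions; the data here is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes = 26           # letters of the alphabet
samples_per_class = 100  # as in the paper's data set
n_features = 60          # hypothetical RealSense feature vector length

# Synthetic stand-in for the extracted palm/joint features.
X = np.vstack([
    rng.normal(loc=c, scale=2.0, size=(samples_per_class, n_features))
    for c in range(n_classes)
])
y = np.repeat(np.arange(n_classes), samples_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = Pipeline([
    ("scale", StandardScaler()),      # the "scaling method" preprocessing step
    ("pca", PCA(n_components=0.95)),  # retain 95% of the variance
    ("svm", SVC(kernel="rbf")),
])
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```

On real gesture features, `n_components` and the SVM kernel would be tuned by cross-validation rather than fixed as above.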
Item Type: Conference contribution
Creators: Mistry, J. and Inden, B.
Date: September 2018
Divisions: Schools > School of Science and Technology
Record created by: Jonathan Gallacher
Date Added: 26 Sep 2018 12:44
Last Modified: 26 Sep 2018 12:44
URI: https://irep.ntu.ac.uk/id/eprint/34581