Design and Implementation of an Innovative System for Automatic Recognition of ASL using Machine Learning

Grade Level at Time of Presentation

Senior

Major

Computer Science General Area

Institution

Morehead State University

KY House District #

99

KY Senate District #

27

Department

Department of Computer Science & Information Systems

Abstract

Design and Implementation of an Innovative System for Automatic Recognition of ASL using Machine Learning

Joshua Webb (undergraduate student researcher) and Sherif Rashad (faculty mentor)

Department of Computer Science & Information Systems

Deaf and hearing-impaired persons learn American Sign Language (ASL) as their natural language. There is a need for innovative technology that enables them to communicate without difficulty, anytime and anywhere, with persons who do not know ASL. In this research project we explore the problem of automatically converting ASL to speech using motion sensors and machine learning. The goal of the project is to design a smart system that captures and recognizes hand gestures using Leap Motion sensors and machine learning algorithms. The proposed system will work adaptively, learning new signs to expand and improve its ASL dictionary. The system will have a wide range of applications in healthcare, education, gamification, entertainment, and other areas.
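
To make the recognition pipeline concrete, the following is a minimal sketch (not the project's actual implementation) of how hand-gesture feature vectors, such as those that could be derived from Leap Motion hand and finger tracking, might be classified with a standard machine learning algorithm. The feature layout, the sign labels, the randomly generated placeholder data, and the choice of a random-forest classifier from scikit-learn are all illustrative assumptions.

    # Illustrative sketch only: classify hand-gesture feature vectors
    # with a standard machine learning classifier (scikit-learn).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    N_FEATURES = 30  # assumed layout, e.g., fingertip positions and palm orientation
    SIGNS = ["A", "B", "C", "HELLO", "THANK_YOU"]  # hypothetical sign labels

    # Placeholder data standing in for recorded gesture samples; a real system
    # would extract these features from the motion sensor for each gesture.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, N_FEATURES))
    y = rng.integers(0, len(SIGNS), size=500)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    # Train a classifier on the labeled gesture features.
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    # Evaluate, then recognize a new gesture sample.
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    new_sample = rng.normal(size=(1, N_FEATURES))
    print("recognized sign:", SIGNS[int(clf.predict(new_sample)[0])])

In a sketch of this kind, the adaptive dictionary expansion described in the abstract could be approximated by collecting labeled samples of a new sign and retraining the classifier on the enlarged dataset.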
