Northern Kentucky University
Gesture-based American Sign Language (ASL) Translation System
Grade Level at Time of Presentation
Senior
Major
Mechatronic Engineering Technology
2nd Student Grade Level at Time of Presentation
Senior
2nd Student Major
Mechatronic Engineering Technology
KY House District #
67
KY Senate District #
24
Faculty Advisor/Mentor
Dr. Mahdi Yazdanpour
Department
Department of Physics, Geology and Engineering Technology
Abstract
According to the World Health Organization (WHO), over 5% of the world's population experiences disabling hearing loss, and approximately 9 million people in the U.S. are either functionally deaf or have mild-to-severe hearing loss. In this research, we designed and implemented a translation interface that converts American Sign Language (ASL) gestures, captured by a pair of soft robotic gloves, into text and speech in real time.
We used a combination of flex sensors, tactile sensors, and accelerometers to recognize hand gestures and to record hand and finger positions, movements, and orientations. The digitized gestures were transmitted wirelessly to our translation interface and compared against the patterns stored in our dataset using a supervised Support Vector Machine (SVM) classification model. When a captured gesture matched a predefined pattern, the associated letter, word, or phrase was shown on an embedded display and voiced by a text-to-speech conversion module.
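To make the recognition step concrete, the sketch below shows how such a supervised SVM classification stage could be prototyped in Python with scikit-learn. The channel counts, gesture labels, training data, and function names are hypothetical placeholders for illustration only; they are not the project's actual firmware, dataset, or model parameters.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Assumed per-glove channel counts (illustrative, not the actual hardware).
N_FLEX, N_TACTILE, N_ACCEL = 5, 5, 6

def to_feature_vector(flex, tactile, accel):
    # Concatenate one sampled frame of flex, tactile, and accelerometer
    # readings into a single feature vector for the classifier.
    return np.concatenate([flex, tactile, accel])

# Placeholder training set: each row is one digitized gesture frame and each
# label is the letter, word, or phrase that frame encodes.
rng = np.random.default_rng(seed=0)
X_train = rng.normal(size=(200, N_FLEX + N_TACTILE + N_ACCEL))
y_train = rng.choice(["A", "B", "HELLO", "THANK YOU"], size=200)

# Scale the features, then fit an RBF-kernel SVM; SVC handles the
# multiclass case internally.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
model.fit(X_train, y_train)

# At run time, a frame received wirelessly from the gloves is classified and
# the predicted token is forwarded to the display and text-to-speech module.
frame = to_feature_vector(rng.normal(size=N_FLEX),
                          rng.normal(size=N_TACTILE),
                          rng.normal(size=N_ACCEL))
print(model.predict([frame])[0])

In this sketch the scaler and classifier are chained in a single pipeline so that the same normalization learned from the training patterns is applied to every incoming frame before matching.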
This project aimed to develop an accessible, easy-to-use solution that helps individuals who are deaf or have speech impairments communicate directly with non-signers. These gloves can also be integrated with immersive learning technologies to enhance higher education and expand access to active learning opportunities for underrepresented students.