Herviyana, Herviyana (2025) DEVELOPMENT OF A SPEECH-TO-TEXT COMMUNICATION SYSTEM FOR THE DEAF AND DUMB USING COMPUTER VISION. S1 thesis, Universitas Andalas.
Text (Abstrak / Abstract): ABSTRAK.pdf - Published Version, Download (232kB)
Text (Bab 1 Pendahuluan / Chapter 1: Introduction): BAB 1.pdf - Published Version, Download (200kB)
Text (Bab 5 Kesimpulan dan Saran / Chapter 5: Conclusions and Suggestions): BAB 5.pdf - Published Version, Download (176kB)
Text (Daftar Pustaka / Bibliography): DAFTAR PUSTAKA.pdf - Published Version, Download (181kB)
Text (Skripsi Full Text / Full Thesis): FULL TEXT.pdf - Published Version, Restricted to Repository staff only, Download (1MB), Request a copy
Abstract
Verbal communication remains a significant challenge for people with hearing impairments, so a system is needed to bridge two-way interaction effectively. This research develops a communication system that integrates Computer Vision (CV) for sign language classification and Speech-to-Text (STT) for speech recognition, enabling more effective communication between people with hearing and speech disabilities and the general public. The system uses a camera as a sensor to detect hand gestures, which are processed with MediaPipe for keypoint extraction and classified by a Convolutional Neural Network (CNN) model. The model was trained on 71 classes of sign language images, consisting of 24 classes of SIBI images and 47 classes of BISINDO images, with a total of 2,160 raw images before augmentation and 6,390 images after augmentation. Test results show that the CNN recognizes gestures with high accuracy: 95.91% for SIBI and 92.64% for BISINDO. The system is equipped with STT to convert speech into text. The integrated CV and STT output is presented through a responsive website on both PCs and smartphones, while gesture detection results are displayed in real time on a 16x2 LCD connected to a NodeMCU ESP8266 via the MQTT protocol. This study shows that integrating CV and STT can provide a solution that supports verbal communication for people with speech and hearing disabilities. Future work will focus on developing more efficient IoT-based portable devices.
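The keypoint-extraction step described in the abstract can be sketched in outline. MediaPipe's hand model yields 21 landmarks per detected hand; before such landmarks are fed to a CNN classifier they are typically normalized. The specific scheme below (wrist-relative translation plus max-magnitude scaling) is an illustrative assumption, not necessarily the preprocessing the thesis implements.

```python
# Sketch of hand-keypoint normalization prior to CNN classification.
# Assumes MediaPipe-style input: 21 (x, y) landmarks per hand, with
# landmark index 0 being the wrist. The normalization scheme here is
# a common convention, assumed for illustration.

WRIST = 0  # index of the wrist landmark in MediaPipe's hand model


def normalize_keypoints(landmarks):
    """Translate landmarks so the wrist sits at the origin, then scale
    so the largest coordinate magnitude is 1. Returns a flat 42-element
    feature vector (21 points x 2 coordinates) for a classifier."""
    wx, wy = landmarks[WRIST]
    rel = [(x - wx, y - wy) for x, y in landmarks]
    max_abs = max(max(abs(x), abs(y)) for x, y in rel) or 1.0
    scaled = [(x / max_abs, y / max_abs) for x, y in rel]
    return [coord for point in scaled for coord in point]


if __name__ == "__main__":
    # Hypothetical landmark positions, for illustration only
    pts = [(0.5 + 0.01 * i, 0.5 - 0.005 * i) for i in range(21)]
    feats = normalize_keypoints(pts)
    print(len(feats))          # 42 features per hand
    print(feats[0], feats[1])  # wrist maps to (0.0, 0.0)
```

A vector like this, computed per frame, would be the input to the CNN's classification head; the resulting class label is what the abstract describes being published over MQTT to the NodeMCU ESP8266 for display on the 16x2 LCD.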
Item Type: Thesis (S1)
Supervisors: Dr. Meqorry Yusfi, M.Si
Uncontrolled Keywords: Sign Language, Computer Vision, MediaPipe, CNN Model, Speech-to-Text
Subjects: Q Science > QC Physics
Divisions: Fakultas Matematika dan Ilmu Pengetahuan Alam > S1 Fisika
Depositing User: S1 Fisika
Date Deposited: 29 Aug 2025 04:33
Last Modified: 29 Aug 2025 04:33
URI: http://scholar.unand.ac.id/id/eprint/507284