Optimizing American Sign Language Recognition with Binarized Neural Networks: A Comparative Study with Traditional Models
Abstract
Sign language is crucial for communication among individuals with hearing or speech impairments, and automated recognition systems support both learning and translation across sign language variants. However, these systems often carry high computational demands that limit their deployment.
This study proposes a Binarized Neural Network (BNN) architecture designed to drastically reduce memory footprint and computational requirements by restricting weights and activations to +1 and -1. We conduct a comprehensive comparative analysis between our proposed BNN and traditional Convolutional Neural Networks (CNNs) for American Sign Language (ASL) recognition. Results demonstrate that BNNs can achieve competitive accuracy while offering significant advantages in deployment on edge devices and low-power hardware, paving the way for more accessible and real-time sign language translation technologies.
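The core idea behind the binarization described above can be sketched in a few lines of NumPy. This is an illustrative example, not the paper's implementation: it shows deterministic sign binarization of weights and activations to ±1, and how the resulting dot product reduces to an XNOR-and-popcount operation (encoding +1 as bit 1 and -1 as bit 0), which is the source of the memory and compute savings. All variable names here are hypothetical.

```python
import numpy as np

def binarize(x):
    # Deterministic sign binarization: values >= 0 map to +1, else to -1.
    return np.where(x >= 0, 1, -1).astype(np.int8)

rng = np.random.default_rng(0)
w = rng.standard_normal(64)   # hypothetical full-precision weights
a = rng.standard_normal(64)   # hypothetical full-precision activations

wb, ab = binarize(w), binarize(a)

# With values restricted to +/-1, the dot product can be computed with
# XNOR and popcount: encode +1 as bit 1 and -1 as bit 0; each matching
# bit pair contributes +1 to the sum, each mismatch contributes -1.
w_bits = wb > 0
a_bits = ab > 0
matches = np.count_nonzero(w_bits == a_bits)  # popcount of XNOR
n = len(wb)
xnor_dot = 2 * matches - n

# The bitwise result equals the ordinary +/-1 dot product.
assert xnor_dot == int(np.dot(wb.astype(int), ab.astype(int)))
```

On real binary hardware the boolean comparison becomes a single XNOR over packed bit words followed by a popcount instruction, replacing full-precision multiply-accumulates.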