Sign to Speech Converter with Wireless Communication
Sign to Speech Converter is a wearable glove that speech-impaired people can use to communicate with people who don't understand sign language. The controller attached to the glove reads the gestures, interprets them through machine learning, and produces a voice output. It also includes a Bluetooth module, so anyone wearing the glove can communicate with another person within a range of 20–30 meters using sign language. The project was part of my final-semester project during my Bachelor's degree at the School of Engineering and Applied Science, Ahmedabad University. It was done in a team of two: I was responsible for defining the gestures, coding the Arduino, and writing the machine learning algorithm, while my teammate Rachana did a great job building the circuit and hardware for the glove and making Bluetooth communication feasible.
Being tech enthusiasts, we have always believed that technology should be available to everyone and used for good, whether for able-bodied or specially-abled people. The main benefit of technology is to empower people and make them better: not to replace them, but to support them. Personally, my father has been very active in social services in my state in India, and I often had the opportunity to accompany him to different places. One day, we visited a school for deaf and speech-impaired students for one of his projects, and when the students tried to communicate with us, we realized that they always needed someone to interpret the sign language so that we could understand what they were trying to say. That is where the idea came to mind: how could we design a solution that helps them overcome these barriers and is also better than the systems that already exist?
As part of the project, we designed and prototyped a wearable glove with embedded sensors, all connected to a micro-controller. The user simply performs sign-language gestures, and the micro-controller produces a speech output. The system is powered by an ML algorithm that recognizes the gestures and predicts the output with 96.93% accuracy. The ML algorithm makes it one system for all: regardless of hand size, the glove provides an accurate output. To remove the barrier of remote communication, we also implemented a Bluetooth protocol so that a speech-impaired person can communicate with others within a range of 20–30 meters.
We started with some desk research to identify how big the problem is and what solutions already exist in the market. During our research, we found that in India, speech impairment is listed as the fifth most common disability, with a prevalence of 7%.
We also researched existing systems and read several IEEE papers. Most existing systems relied on some form of computer or tablet; no handy, portable systems were available. As our target group was mainly people in rural areas, we had to make a system that is simple, easily available, and able to work without any internet connectivity.
For our primary research, we went back to the same school to observe the students there, understand how they use sign language to communicate with others, and study their behaviour. We also interviewed one of the faculty members there, who gave us some really interesting insights.
Considering all the challenges, we started ideating around a system with three goals: effective, independent, and easy to carry. During that process, we arrived at a solution inspired by a wearable-glove project from MIT. From there, everything came down to the engineering needed to build a quick prototype.
Why did we use Machine Learning?
We stayed connected with our users to test and iterate on the prototypes. One issue surfaced during the first test: while building the product, we had used my hand to define the gestures, and the sensors were threaded onto the glove rather loosely. So every test failed, because if a sensor moved from its place, or if someone's hand size differed from mine, the glove did not work at all. We started ideating on a solution and saw the opportunity to use machine learning to provide a predictive output. We tried different types of algorithms to see which worked best for our problem, and found that K-Nearest Neighbours performed best, with ~97% accuracy. (We recorded 33,000 instances from 20 different people with different hand sizes for our model.) Introducing ML removed virtually all the failure cases: even with a different hand size, or with sensors shifted out of place, the glove worked most of the time.
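The K-Nearest Neighbours approach can be sketched as follows. This is a minimal illustration using scikit-learn, not the project's actual code: the five-sensor feature layout, the gesture centres, and the synthetic data (simulating hand-size variation as noise) are all assumptions for demonstration.

```python
# Illustrative sketch: classifying glove gestures with K-Nearest Neighbours.
# The 5-flex-sensor feature layout and the synthetic data below are
# assumptions for demonstration, not the project's real 33,000-sample dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)

# Simulate flex-sensor readings for 3 gestures: each gesture is a cluster of
# 5-sensor vectors; hand-size variation appears as noise around the centre.
centres = np.array([
    [200, 800, 800, 800, 800],   # hypothetical gesture A
    [800, 200, 200, 800, 800],   # hypothetical gesture B
    [200, 200, 200, 200, 200],   # hypothetical gesture C (fist)
])
samples_per_gesture = 300
X = np.vstack([c + rng.normal(0, 60, (samples_per_gesture, 5)) for c in centres])
y = np.repeat(np.arange(len(centres)), samples_per_gesture)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# Fit KNN and evaluate on the held-out split.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
accuracy = knn.score(X_test, y_test)
print(f"test accuracy: {accuracy:.3f}")
```

KNN suits this problem because a new hand's readings simply land near the recorded examples that are most similar to it; no explicit model of hand geometry is needed.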
During our primary research, we also found that it is really difficult for speech-impaired people to communicate with someone who is not in front of them. The only existing option is a camera-enabled device for making video calls. But, again, since our target group was mainly people from rural areas, it was difficult for them to access such high-end devices and internet connectivity. We saw another opportunity and implemented a wireless communication protocol so they can talk to someone at a remote distance. Currently, the system supports one-sided communication over Bluetooth within a range of 20–30 meters.
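The one-way link boils down to the transmitter glove sending the recognized word as a small packet that the receiver validates and speaks aloud. Below is a hedged sketch of such a packet format; the frame layout (start byte, length, payload, checksum) is an assumption for illustration, not the project's actual protocol.

```python
# Illustrative sketch of a one-way link: the transmitter sends the recognised
# word as a small framed packet over a Bluetooth serial module's data line.
# This frame layout is a hypothetical example, not the project's real protocol.
START = 0x7E  # assumed frame delimiter

def frame_word(word: str) -> bytes:
    """Wrap a recognised word in a start byte, length, payload, and checksum."""
    payload = word.encode("ascii")
    checksum = sum(payload) & 0xFF
    return bytes([START, len(payload)]) + payload + bytes([checksum])

def parse_frame(frame: bytes) -> str:
    """Validate and decode a frame on the receiver side."""
    if frame[0] != START:
        raise ValueError("bad start byte")
    length = frame[1]
    payload = frame[2:2 + length]
    if (sum(payload) & 0xFF) != frame[2 + length]:
        raise ValueError("checksum mismatch")
    return payload.decode("ascii")
```

On real hardware, the framed bytes would be written to the Bluetooth module's serial port on the transmitter and read back on the receiver, where the decoded word is passed to speech output.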
Benefits of this system
This is a very low-budget prototype that serves many of our users' needs. Its machine learning algorithm makes it one system for all, which, along with the Bluetooth connectivity, is our USP. The system is portable and runs on a 9 V battery; no other source of electricity is needed. The words and gestures are stored in the micro-controller itself. Right now it works with American Sign Language, but we can make it available for other sign languages simply by adding their words and gestures.
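The "add words and gestures per sign language" idea can be pictured as a lookup table keyed by language, mirroring how phrases live in the micro-controller's memory. The language codes, gesture IDs, and phrases below are invented examples, not the project's stored vocabulary.

```python
# Illustrative sketch: phrases stored per sign language, keyed by the
# classifier's gesture ID. All entries here are invented examples.
PHRASES = {
    "ASL": {0: "hello", 1: "thank you", 2: "yes"},
    # Supporting another sign language means adding its gesture-to-word map:
    "ISL": {0: "namaste", 1: "dhanyavaad", 2: "haan"},
}

def phrase_for(language: str, gesture_id: int) -> str:
    """Return the phrase to synthesise for a classified gesture."""
    return PHRASES[language].get(gesture_id, "unknown gesture")
```

Because the table is the only language-specific part, swapping or extending sign languages never touches the sensing or classification code.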
The outcomes we loved most about the project are its accuracy and its ability to transmit and receive signs from the transmitter to the receiver. We also felt fortunate for the joy and happiness we experienced during and after the project, because we could do our bit, with some innovation, for people who actually need it, instead of doing a repetitive kind of project like yet another social network or home automation system.
CURIOUS ABOUT MORE DETAIL?
Ask us for the entire technical report and we will send it to you by email.