We’ve seen the Raspberry Pi used to bridge all sorts of connections, but today we’re sharing a Pi project that bridges a gap in human connection. Maker and developer Prabhjot Singh has created what he calls Deaf Link. It uses a Raspberry Pi as a translation hub that converts sign language into audible speech, and speech into sign language using a robotic hand.
The first mode is sign-to-speech. To capture sign language, the Pi uses a camera module. The camera feed is processed with OpenCV and MediaPipe, and a Google TensorFlow model trained on hundreds of sign language images identifies each sign. Once a sign is identified, it’s translated to audio and output through a speaker.
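To get a feel for how a pipeline like that fits together, here’s a minimal Python sketch: MediaPipe extracts hand landmarks from the camera feed, a TensorFlow/Keras classifier guesses the sign, and pyttsx3 speaks the result. The model file, label list, and use of pyttsx3 for text-to-speech are assumptions for illustration, not details taken from Singh’s build.

```python
# Hypothetical sign-to-speech loop: camera -> MediaPipe landmarks
# -> TensorFlow classifier -> spoken output. The model path, labels and
# the pyttsx3 TTS engine are assumptions, not from Singh's project.
import cv2
import mediapipe as mp
import numpy as np
import pyttsx3
import tensorflow as tf

LABELS = ["hello", "thanks", "yes", "no"]            # placeholder label set
model = tf.keras.models.load_model("sign_model.h5")  # hypothetical trained model
tts = pyttsx3.init()

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)  # assumes the camera is exposed as a V4L2 device

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        # Flatten the 21 (x, y, z) hand landmarks into one feature vector.
        lm = result.multi_hand_landmarks[0].landmark
        features = np.array([[p.x, p.y, p.z] for p in lm]).flatten()[None, :]
        probs = model.predict(features, verbose=0)[0]
        word = LABELS[int(np.argmax(probs))]
        tts.say(word)          # speak the recognised sign through the speaker
        tts.runAndWait()

cap.release()
```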
The second mode is speech-to-sign, which is where things get a little crazy. Audio is picked up by a microphone and run through Google’s speech-to-text API. The resulting text is handed to an MQTT broker, which passes it along to an Arduino. The Arduino drives servos in the robotic hand to recreate the text as sign language.
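A minimal sketch of that hand-off might look like the following, assuming a Python script on the Pi that uses the speech_recognition library (which wraps Google’s speech-to-text service) and paho-mqtt to publish the recognised text. The broker address and topic name are placeholders; the Arduino end would subscribe to the same topic and map each word to servo positions.

```python
# Hypothetical speech-to-sign front end: listen on the microphone, send the
# audio to Google's speech-to-text service, then publish the transcript over
# MQTT so the Arduino driving the hand's servos can pick it up.
import paho.mqtt.publish as publish
import speech_recognition as sr

BROKER = "localhost"        # MQTT broker running on the Pi (assumed)
TOPIC = "deaflink/sign"     # topic the Arduino subscribes to (assumed)

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    while True:
        audio = recognizer.listen(source)
        try:
            text = recognizer.recognize_google(audio)   # Google speech-to-text
        except sr.UnknownValueError:
            continue                                    # nothing intelligible
        # Publish the transcript; the Arduino maps each word to servo poses.
        publish.single(TOPIC, text.lower(), hostname=BROKER)
        print("Published:", text)
```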
The main board driving the project is a Raspberry Pi 4 Model B alongside an Arduino Nano 33 IoT. The Pi is connected to a Raspberry Pi Camera Module 3 and a Razer Seiren Mini microphone. The Arduino is connected to the six servo motors inside the prosthetic hand. Everything is mounted inside a custom housing.
Software-wise, Singh is using a few tools including MQTT, OpenCV and TensorFlow. If you want to know more about how this project is programmed, you’re in luck. Singh has made the whole project open source, and the project guide shared to Hackster explains how it all goes together.
If you want to see this Raspberry Pi project in action, you can check out the demo video shared to YouTube by Singh. Be sure to follow Singh for more cool creations as well as any future updates on this one.