Google Is Training Its AI to Help People With Speech Impairments


Voice assistants have become a convenience many of us take for granted: they can perform a variety of actions after hearing a wake word or command, and they are developing quickly, changing our lives and making everyday tasks easier. However, millions of people live with speech impairments caused by neurological conditions such as stroke, ALS, multiple sclerosis, and traumatic brain injury. For them, voice assistants can be frustrating and difficult to use.

To change that, Google has revealed Project Euphonia, an effort under its AI for Social Good program that uses artificial intelligence to improve speech recognition technology. In other words, Google is training its AI to better understand diverse speech patterns, including impaired speech.

One of the most important announcements at Google I/O concerned the company’s efforts to improve speech recognition software to aid people with speech impairments.

Voice interfaces and assistants are now commonplace and are used in many daily tasks. However, they are not as accessible as we might think: they remain largely off-limits to individuals with speech disorders or impairments caused by neurological conditions such as ALS.

Hence, Google decided to launch a new initiative: Project Euphonia.

Project Euphonia

Under this project, Google has collaborated with the ALS Therapy Development Institute (ALS TDI) and the ALS Residence Initiative (ALSRI). The main idea is that if an affected person’s friends and family can understand what they are saying, then with proper training a voice assistant should be able to understand them as well.

Project Euphonia focuses on remedying the problem of AI bias created by limited training data. Google Assistant is built to respond to the majority of voices and accents, but it does not work as well for people outside that majority. Google wants to change that.

AI Training

For Project Euphonia, Google has been calling for volunteers who can provide data for AI training. The company has asked people around the world to record and submit voice samples. These volunteers will help improve the algorithm and can submit their voice samples here.

Along with this, the AI is also being trained to recognize facial expressions and gestures, which would be helpful for people who cannot speak at all. One of the volunteers, Dimitri Kanevsky, who is also a speech researcher, has recorded 15,000 phrases for the cause.

Live Relay

Although the work is still in progress, Google has unveiled a prototype called Live Relay. It uses speech recognition and text-to-speech conversion to help anyone with hearing or speech impairments. Google says the project is still in the research phase and has not given a timetable for its release.

Google CEO Sundar Pichai said, “Fundamental AI research which enables new products for people with disabilities is an important way we drive our mission forward. These projects will ultimately result in products that work better for all of us. It’s the perfect example of what we mean by building a more helpful Google for everyone.”
