Some patients with speech deficits, such as individuals with laryngectomy or brain injury, cannot vocalize. Others, such as Special Forces soldiers, may need to rely on voiceless communication for covert operations or noisy environments. Imagine a wearable technology that gives voice to the voiceless, or provides silent communication.
We are collaborating with researchers from BAE Systems, the Massachusetts General Hospital Voice Center, and Northeastern University (Boston, USA) to develop a non-vocal speech communication device that translates the facial muscle signals produced when silently mouthing words into automated speech. Current algorithms can process continuous speech from both unimpaired subjects and subjects with speech dysfunction at less than a 10% error rate on a 2,500-word vocabulary. Improvements to the detection technology and the signal processing algorithms are under development to yield a practical implementation with a reduced error rate and support for prosodic elements of speech.
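As a rough illustration of the kind of front-end processing such a device relies on, the sketch below computes windowed root-mean-square (RMS) amplitude features from a single sEMG channel. This is a generic, minimal example of sEMG feature extraction, not the group's actual algorithm; the sampling rate, window, and step sizes are assumptions chosen only for illustration.

```python
import numpy as np

def semg_rms_features(signal, fs, win_ms=25.0, step_ms=10.0):
    """Frame a single-channel sEMG signal into overlapping windows
    and compute the RMS amplitude of each window.

    Window and step sizes here are illustrative defaults, not
    parameters from the published system."""
    win = int(fs * win_ms / 1000)    # samples per analysis window
    step = int(fs * step_ms / 1000)  # samples between window starts
    n_frames = 1 + max(0, (len(signal) - win) // step)
    return np.array([
        np.sqrt(np.mean(signal[i * step : i * step + win] ** 2))
        for i in range(n_frames)
    ])

# Synthetic example: 1 s of noise at an assumed 2 kHz sampling rate
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
feats = semg_rms_features(x, fs=2000)
print(feats.shape)  # one RMS value per 10 ms step
```

In a real system, feature vectors like these (typically from multiple facial and neck sensors) would feed a speech recognizer in place of acoustic features.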
Our preliminary results are detailed in the publication list below.
Deng Y, Patel R, Heaton JT, Colby G, Gilmore LD, Cabrera J, Roy SH, De Luca CJ, Meltzner GS. Disordered Speech Recognition Using Acoustic and sEMG Signals, Interspeech, Brighton UK, Sept 2009.
Colby G, Heaton JT, Gilmore LD, Sroka J, Deng Y, Cabrera J, Roy SH, De Luca CJ, Meltzner GS. Sensor Subset Selection for Surface Electromyography Based Speech Recognition, Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), Taipei, Taiwan, 2009.
Meltzner GS, Sroka J, Heaton JT, Gilmore LD, Colby G, Roy SH, Chen N, and De Luca CJ. Speech Recognition for Vocalized and Subvocal Modes of Production Using Surface EMG Signals from the Neck and Face, Interspeech 2008, Brisbane, Australia, Sept 2008.