System for Non-vocal Speech Communication

Some patients with speech deficits, such as individuals with laryngectomy or brain injury, cannot vocalize. Others, such as Special Forces soldiers, may need to rely on voiceless communication for covert operations or in noisy environments. Imagine a wearable technology that gives voice to the voiceless, or enables silent communication.

We are collaborating with researchers from BAE Systems, the Massachusetts General Hospital Voice Center, and Northeastern University (Boston, USA) to develop a non-vocal speech communication device that translates the facial muscle signals produced when silently mouthing words into automated speech. Current algorithms can process continuous speech from unimpaired subjects and subjects with speech dysfunction with less than a 10% error rate over a 2,500-word vocabulary. Improvements in the detection technology and in the signal-processing algorithms are under development to yield a practical implementation with a lower error rate and support for prosodic elements of speech.
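The publications below describe the underlying approach: surface EMG (sEMG) signals recorded from the face and neck are converted into features and decoded into words. The sketch that follows is only a minimal illustration of that idea, not the project's actual pipeline; the sampling rate, windowed RMS features, and the scikit-learn SVM classifier are assumptions chosen for brevity, whereas the published system performs continuous recognition over a large vocabulary.

# Minimal illustrative sketch of a surface-EMG word classifier.
# This is NOT the project's actual pipeline; the sensor layout, window
# parameters, feature choice, and classifier are assumptions made for
# illustration only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 2000   # assumed sEMG sampling rate (Hz)
WIN = 200   # 100 ms analysis window (samples)
HOP = 100   # 50 ms hop between windows (samples)

def rms_features(emg):
    """emg: (n_samples, n_channels) array -> flattened windowed RMS features."""
    feats = []
    for start in range(0, emg.shape[0] - WIN + 1, HOP):
        win = emg[start:start + WIN]
        feats.append(np.sqrt(np.mean(win ** 2, axis=0)))  # per-channel RMS
    return np.concatenate(feats)

def train_word_classifier(recordings, labels):
    """recordings: list of equal-length (n_samples, n_channels) arrays,
    one per silently mouthed word; labels: the corresponding word identities."""
    X = np.stack([rms_features(r) for r in recordings])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf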

Our preliminary results are detailed in the Publications list below.


“This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. D13PC00074. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA); or its Contracting Agent, the U.S. Department of the Interior, National Business Center, Acquisition Services Directorate, Sierra Vista Branch.”

Publications

Deng Y, Patel R, Heaton JT, Colby G, Gilmore LD, Cabrera J, Roy SH, De Luca CJ, Meltzner GS. Disordered Speech Recognition Using Acoustic and sEMG Signals, Interspeech 2009, Brighton, UK, Sept 2009.

Colby G, Heaton JT, Gilmore LD, Sroka J, Deng Y, Cabrera J, Roy SH, De Luca CJ, Meltzner GS. Sensor Subset Selection for Surface Electromyography Based Speech Recognition, Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), Taipei, Taiwan, 2009.

Meltzner GS, Sroka J, Heaton JT, Gilmore LD, Colby G, Roy SH, Chen N, and De Luca CJ. Speech Recognition for Vocalized and Subvocal Modes of Production Using Surface EMG Signals from the Neck and Face, Interspeech 2008, Brisbane, Australia, Sept 2008.


Support

DARPA
NIDCD
The views expressed in these materials do not necessarily reflect the official policies of the U.S. Department of Defense, U.S. Department of the Interior, U.S. Department of Veterans Affairs, U.S. Department of Health and Human Services, the NIH or its components; nor does the inclusion of trade names/logos/trademarks/or references to outside entities constitute or imply an endorsement by any Federal entity.
