Machine Design

What's that you say? Computers hear subaudible speech

NASA scientists are computerizing silent reading by using the nerve signals in the throat that control speech.

They found that small, button-sized sensors, stuck under the chin and on either side of the Adam's apple, gather nerve signals and send them to a computer program for translation into words. According to NASA scientists, this subvocal speech could be used in spacesuits, in noisy places like airport towers to capture traffic-control commands, and in traditional voice-recognition programs to increase accuracy.

In initial experiments, researchers subvocally repeated six words and ten digits; word-recognition accuracy was 92%. New noncontact sensors, able to read muscle signals even through layers of clothing, are now being tested.

"We use an amplifier to strengthen the electrical nerve signals. These are processed to remove noise and let us see useful parts of the signals to distinguish one word from another," says NASA scientist Chuck Jorgensen.

After the signals are amplified, software examines them to recognize each word and sound. "The scientific meat of what we're doing resides in sensors, signal processing, and pattern recognition," says Jorgensen. "We will continue to expand the vocabulary with sets of English sounds, usable by a full speech-recognition computer program," he adds.
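The pipeline Jorgensen describes — amplify the weak nerve signals, filter out noise, then match the cleaned signal against known words — can be sketched in a few lines. This is only an illustration, not NASA's actual algorithm: the gain value, the moving-average filter, and the nearest-template classifier are all stand-in assumptions.

```python
# Illustrative sketch of an amplify -> denoise -> classify pipeline.
# All names and numbers are hypothetical, not NASA's implementation.

GAIN = 50.0  # assumed amplifier gain


def amplify(samples, gain=GAIN):
    """Strengthen the weak electrical nerve signals."""
    return [s * gain for s in samples]


def denoise(samples, window=3):
    """Suppress noise with a simple moving-average filter."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out


def classify(signal, templates):
    """Return the vocabulary word whose stored template is closest
    to the processed signal (sum of squared differences)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda word: distance(signal, templates[word]))
```

In use, each vocabulary word (the six words and ten digits of the experiments) would have a stored template, and a new recording would be run through `classify(denoise(amplify(raw)), templates)` to pick the best match.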
