One of the biggest mysteries of language is how the brain produces speech: how it converts thought into words that are then issued through our vocal system. For many years, scientists, neurologists and linguists have tried, with little success, to understand how this mechanism works. Now a group of experts has used artificial intelligence to build a brain-computer interface (BCI) that translates brain activity into words, a device that could restore speech to people with neurological disorders.
Using artificial intelligence, the researchers designed a brain-computer interface (BCI) that translates brain activity into words.
For the authors, this is a "proof of principle", a first step toward restoring the ability to speak to people who cannot do so because of conditions such as Parkinson's disease, stroke and amyotrophic lateral sclerosis (ALS), or because their vocal tract has been damaged by cancer.
The neural decoder uses the kinematic and sound representations encoded in the cerebral cortex to synthesize speech. The study, by K. Gopal Anumanchipalli, Josh Chartier and Edward F. Chang, professors at US universities, was published in the scientific journal Nature.
Such devices aim to "read" a person's intentions directly from the brain and use that information to control external devices or to move paralyzed limbs, the authors note.
BCI projects assist people who have lost the ability to speak by interpreting the muscle movements they can still make. The most famous case is Stephen Hawking, who activated an infrared sensor mounted on his eyeglasses to move a cursor on a computer screen, while an Intel system produced his characteristic metallic voice.
The technology behind the new BCI virtually reproduces the organs involved in speech, the lips, jaw, tongue and larynx, and translates brain signals into words through their correlation with the movements of these articulators.
The device could restore the ability to speak to people who suffer from neurodegenerative diseases.
"We thought that if the language centers in the brain to encode the movements mostly sounds, we are trying to do the same to decode these signals," said Anumanchipalli.
To reconstruct speech, instead of converting brain signals directly into sound, the researchers used a two-step approach. First, they developed a recurrent neural network that decoded neural signals into movements of the vocal tract. Then those movements were used to synthesize speech.
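The two-step pipeline can be sketched in code. This is a minimal illustration, not the authors' implementation: the dimensions are hypothetical, and plain linear maps stand in for the two trained recurrent networks so the example stays self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions per time step: 256 cortical recording channels,
# 33 vocal-tract kinematic features, 32 acoustic features.
N_NEURAL, N_KINEMATIC, N_ACOUSTIC = 256, 33, 32

# Random linear maps stand in for the two trained recurrent networks.
W_decode = rng.normal(scale=0.01, size=(N_NEURAL, N_KINEMATIC))
W_synth = rng.normal(scale=0.01, size=(N_KINEMATIC, N_ACOUSTIC))

def decode_kinematics(neural):
    """Stage 1: brain activity -> vocal-tract movements."""
    return neural @ W_decode

def synthesize_acoustics(kinematics):
    """Stage 2: vocal-tract movements -> acoustic features for a synthesizer."""
    return kinematics @ W_synth

# One second of simulated cortical activity sampled at 200 Hz.
neural = rng.normal(size=(200, N_NEURAL))
kinematics = decode_kinematics(neural)
acoustics = synthesize_acoustics(kinematics)
print(kinematics.shape, acoustics.shape)  # (200, 33) (200, 32)
```

The point of the intermediate kinematic stage is that, as the study argues, cortical activity maps more naturally onto articulator movements than onto raw sound.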
"We have shown that the use of brain activity to control a computer simulated version member of the vocal tract to allow us to generate synthetic speech with more natural and accurate than trying to get it just the brain the sound sounds," says Chang.
To calibrate the device, the researchers recorded cortical activity from five volunteers while they spoke aloud. From these recordings, they analyzed the brain signals that control the movements of the speech articulators, and the artificial-intelligence system transformed them into sounds and words through a synthesizer.
To evaluate the intelligibility of the synthesized speech, the researchers ran listening tasks based on identifying single words and transcribing sentences. In the first task, which covered 325 words, they found that listeners identified words more accurately as word length increased and as the number of answer choices (10, 25 or 50) decreased, consistent with how natural speech is perceived.
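A closed-set identification test of this kind is easy to score. The sketch below uses invented words and trial counts purely for illustration; it shows how accuracy is computed and why a smaller answer pool raises the chance level, which is one reason pool size affects the scores.

```python
def chance_level(pool_size):
    """Probability of a correct answer by guessing alone."""
    return 1.0 / pool_size

def accuracy(responses, answers):
    """Fraction of trials where the listener picked the word played."""
    correct = sum(r == a for r, a in zip(responses, answers))
    return correct / len(answers)

# Invented example trials: the word played vs. the word chosen.
answers = ["speech"] * 6 + ["brain"] * 4
responses = ["speech"] * 5 + ["reach"] + ["brain"] * 3 + ["rain"]

print(accuracy(responses, answers))  # 0.8
for pool in (10, 25, 50):
    print(pool, chance_level(pool))
```

Reported accuracies are meaningful only relative to these chance levels: 80% correct is far above chance with 50 choices, but less impressive with 10.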
The system is more promising than current speech synthesizers, which rely on head or eye movements to select letters one at a time on a computer screen to spell out words.
However, those processes are far slower than the normal flow of natural speech. Even so, before the new decoder can be used on a regular basis, many problems still need to be overcome.