Ever Closer to Artificial Mind Reader

A device which reads the thoughts of brain-damaged patients could become a reality, scientists claimed, after proving they could tell what someone was hearing just by decoding their brain waves.

In the video embedded above, each word spoken to a group of patients by an electronic voice is replicated twice by a computer that analysed the patients’ brain waves to ‘guess’ what they had heard.

Researchers demonstrated that the brain breaks down words into complex patterns of electrical activity, which can be decoded and translated back into an approximate version of the original sound.

Because the brain is believed to process thought in a similar way to sound, scientists hope the breakthrough could lead to an implant which can interpret imagined speech in patients who cannot talk.

Any such device is a long way off because researchers would have to make the technology much more accurate and find a way to apply it to sounds which the patient merely thinks of, rather than hears.

It would also require electrodes to be placed beneath the skull onto the brain itself, because no sensors exist which could detect the tiny patterns of electrical activity non-invasively.

But the proof-of-concept study published in the journal PLoS Biology could offer hope to thousands of brain-damaged patients who face the daily agony of being unable to communicate with their loved ones.

Prof Robert Knight, one of the researchers from the University of California at Berkeley, said: “This is huge for patients who have damage to their speech mechanisms because of a stroke or Lou Gehrig’s disease and can’t speak.

“If you could eventually reconstruct imagined conversations from brain activity, thousands of people could benefit.”

The team studied 15 epilepsy patients who were undergoing exploratory surgery to find the cause of their seizures, a process in which a series of electrodes are connected to the brain through a hole in the skull.

While the electrodes were attached, the researchers monitored activity in the temporal lobe – a speech-processing area of the brain – as the patients listened to five to ten minutes of conversation.

By breaking down the conversation into its component sounds, they were able to build two computer models which matched distinct signals in the brain to individual sounds.
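The study’s actual models are not detailed in the article; a minimal sketch of the general idea of “stimulus reconstruction”, assuming (hypothetically) a simple linear mapping fitted by least squares between electrode activity and the sound’s spectrogram, might look like this:

```python
import numpy as np

# Hypothetical illustration of linear stimulus reconstruction.
# Each row of X is the recorded neural activity across electrodes at one
# time step; each row of S is the audio spectrogram at the same step.
rng = np.random.default_rng(0)
n_steps, n_electrodes, n_freq_bins = 200, 16, 8

# Simulated data: the heard sound drives neural activity linearly, plus noise.
S = rng.standard_normal((n_steps, n_freq_bins))            # "heard" sound
mixing = rng.standard_normal((n_freq_bins, n_electrodes))  # assumed forward map
X = S @ mixing + 0.1 * rng.standard_normal((n_steps, n_electrodes))

# Fit a decoder W mapping brain activity back to sound: S ≈ X @ W.
W, *_ = np.linalg.lstsq(X, S, rcond=None)

# Reconstruct the spectrogram and measure how well it matches the original.
S_hat = X @ W
corr = np.corrcoef(S.ravel(), S_hat.ravel())[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```

On this toy data the decoder recovers the spectrogram almost perfectly; with real neural recordings the mapping is far noisier, which is why the reconstructions are only approximations of the original sound.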

They then tested the models by playing a recording of a single word to the patients, and predicting from the brain activity what the word they had heard was.

The better of the two programmes was able to produce a close enough approximation of the word that scientists could guess what it was, from a list of two options, 90 per cent of the time.
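The two-option test described above amounts to comparing the reconstruction against templates for each candidate word and picking the closer match. A hedged sketch, using correlation as the (assumed) similarity measure:

```python
import numpy as np

# Hypothetical sketch of two-option word identification: compare a noisy
# reconstructed spectrogram against templates for two candidate words and
# report which template correlates better with it.
def identify(reconstruction, template_a, template_b):
    """Return 'A' or 'B' depending on which template better matches."""
    corr_a = np.corrcoef(reconstruction.ravel(), template_a.ravel())[0, 1]
    corr_b = np.corrcoef(reconstruction.ravel(), template_b.ravel())[0, 1]
    return "A" if corr_a > corr_b else "B"

rng = np.random.default_rng(1)
word_a = rng.standard_normal((20, 8))  # template spectrogram for word A
word_b = rng.standard_normal((20, 8))  # template spectrogram for word B

# An imperfect reconstruction of word A, as a decoder might produce.
noisy = word_a + 0.5 * rng.standard_normal((20, 8))

print(identify(noisy, word_a, word_b))  # → A
```

Even a rough reconstruction can support a reliable two-way choice like this, which is consistent with the 90 per cent figure being achievable well before the audio itself is intelligible.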

Researchers said it could be made more accurate by studying patients’ brain signals during a longer conversation, or examining other parts of the brain involved in speech-processing.

Dr Brian Pasley, who led the study, compared the method to a pianist who could watch a piano being played in a soundproof room and “hear” the music just by watching the movement of the keys.

Any concerns about sinister “mind-reading” devices which could spy on a person’s secret thoughts would be misguided, he added, because the technique would rely on a patient consciously “hearing” a word in their mind.

He said: “This is just to understand how the brain converts sound into meaning, and that is a very complicated process. The clinical application would be down the road if we could find out more about those imaginary processes.

“This research is based on sounds a person actually hears, but to use this for a prosthetic device these principles would have to apply to someone who is imagining speech.”

Jan Schnupp, Professor of Neuroscience at Oxford University, described the study as “remarkable”.

He said: “Neuroscientists have long believed that the brain essentially works by translating aspects of the external world, such as spoken words, into patterns of electrical activity.

“But proving that this is true by showing that it is possible to translate these activity patterns back into the original sound (or at least a fair approximation of it) is nevertheless a great step forward, and it paves the way to rapid progress toward biomedical applications.”

Via: The Telegraph
