Brain scanner and AI algorithm read minds

Using brain scanners and AI, US researchers have managed to at least roughly capture certain types of thoughts of willing subjects. In certain experimental situations, a decoder they developed was able to use so-called fMRI images to roughly reproduce what was going through the participants’ heads, the team writes in the journal “Nature Neuroscience”.

This brain-computer interface, which does not require surgery, could one day help people who have lost their ability to speak, for example due to a stroke, the researchers hope. However, experts are skeptical. The study authors from the University of Texas emphasize that their technology cannot be used to secretly read minds.

Brain-computer interfaces (BCI) are based on the principle of reading human thoughts through technical circuits, processing them and translating them into movement or speech. For example, paralyzed people could use mind control to control an exoskeleton, or people with locked-in syndrome could communicate with their outside world. However, many of the corresponding systems currently under investigation require the surgical implantation of electrodes.

Hours of training for the AI
In the new approach, a computer forms words and sentences based on brain activity. The researchers trained this speech decoder by having three test subjects listen to stories for 16 hours while lying in a functional magnetic resonance imaging (fMRI) scanner. With such an fMRI, changes in blood flow in brain regions can be visualized, which in turn are an indicator of the activity of the neurons.

In the next step, the subjects heard new stories while their brains were re-examined in the fMRI tube. The previously trained speech decoder was now able to create strings of words from the fMRI data which, the researchers say, largely correctly reproduced the content of what was heard. The system did not translate the information recorded in the fMRI into individual words. Instead, it used the connections recognized during training, together with artificial intelligence (AI), to map the measured brain activity to the most likely sentences in the new stories.

Rainer Goebel, head of the Department of Cognitive Neuroscience at Maastricht University in the Netherlands, explains this approach in an independent assessment: “A central idea of the work was to use an AI language model to narrow down the number of possible sentences that are consistent with a pattern of brain activity.”
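The idea Goebel describes can be illustrated with a toy sketch (not the authors’ actual pipeline, and all names, feature choices, and dimensions here are invented for illustration): an encoding model predicts the brain response a candidate sentence would evoke, and the decoder keeps whichever candidate best matches the measured fMRI pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
N_VOXELS = 50  # hypothetical number of fMRI voxels

def predict_response(sentence, weights):
    """Stand-in encoding model: map crude sentence features to a predicted voxel pattern."""
    feats = np.array([len(sentence),
                      sentence.count(" ") + 1,
                      sum(map(ord, sentence)) % 97], dtype=float)
    return weights @ feats

def score(candidate, measured, weights):
    """Negative squared error between predicted and measured brain activity."""
    pred = predict_response(candidate, weights)
    return -np.sum((pred - measured) ** 2)

# Simulate a measured response evoked by the sentence the subject actually heard.
weights = rng.normal(size=(N_VOXELS, 3))
true_sentence = "she has not started learning to drive"
measured = predict_response(true_sentence, weights) + rng.normal(scale=0.1, size=N_VOXELS)

# Candidate sentences (in the real system, proposed by a language model).
candidates = [
    "she has not started learning to drive",
    "the weather was sunny all week",
    "he bought a loaf of bread",
]
best = max(candidates, key=lambda c: score(c, measured, weights))
print(best)  # the candidate whose predicted response best fits the measurement
```

In the actual study the candidates are generated and re-ranked continuously by a language model as the story unfolds, which is why the output captures the gist of a sentence rather than its exact words.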

System is not yet flawless
At a press conference about the study, co-author Jerry Tang illustrated the results of the tests: The decoder reproduced the sentence “I don’t have my driver’s license yet” as “She hasn’t even started learning to drive yet.” According to Tang, the example illustrates a difficulty: “The model is very bad with pronouns – but we don’t know why yet.”

In general, the decoder succeeds because many of the selected sentences in the new, untrained stories contain words from the original text or at least carry a similar meaning, according to Rainer Goebel. “But it also produced quite a lot of errors, which is very bad for a full-fledged brain-computer interface, because for critical applications – for example, communication with locked-in patients – it is especially important not to generate false statements.” Further errors occurred when the subjects were asked to make up a story themselves, or to watch a short silent animated film while the decoder attempted to reproduce the events in it.

For Goebel, the results of the presented system are on the whole too poor for it to serve as a reliable interface: “I would venture to predict that fMRI-based BCIs will (unfortunately) remain limited to research work with few test subjects – as was the case in this study.”

Hoping for more accurate measuring techniques
Christoph Reichert of the Leibniz Institute of Neurobiology is also skeptical: “Looking at the examples of the presented and reconstructed text, it quickly becomes clear that this technology is still a long way from reliably generating an ‘imagined’ text from brain data. Nevertheless, the research indicates what could be possible if measurement techniques improve.”

There are also ethical concerns: depending on future developments, measures to protect mental privacy may become necessary, the authors themselves write. However, tests with the decoder showed that the subjects had to cooperate both during training and during later use. “If they were counting in their heads, naming animals or thinking of a different story during decoding, the process was sabotaged,” Jerry Tang explains. The decoder also performed poorly when the model had been trained on another person.

Source: Krone
