The brain remains one of the least understood organs, despite continuing advances in neuroscience, which in recent years has produced a steady stream of discoveries, including in Spain. Some of the most promising initiatives involve brain-computer interfaces, through projects such as Stentrode, the brain implant used to send emails with the mind and to treat Parkinson’s, or the one that has enabled a patient with advanced ALS to communicate again.
[The Chinese technology that worries the US: how telepathy and telekinesis have been made a reality]
One of the most anticipated, more because of the marketing surrounding Elon Musk than because of its true potential, is Neuralink, with which the Tesla founder showed monkeys playing video games a few years ago, and whose next presentation has this week been delayed again, until the end of November. However, the most decisive may be the one just presented by a team from the University of Texas, which, in a scientific article published in bioRxiv, details the development of an algorithm that can ‘read’ the words a person is hearing or thinking without the need for an implant or invasive surgery.
Although other scientists had already managed to reconstruct language and images from signals coming from brain implants, this new decoder works with functional magnetic resonance imaging (fMRI), the large scanning machines used in hospitals. It is a major finding that opens up new possibilities for developing assistive technologies for those who cannot speak or type, but it also raises numerous ethical concerns about privacy.
How does it work?
Instead of focusing on word-for-word deciphering, the system devised by Alexander Huth, a neuroscientist at the University of Texas at Austin and co-author of the study, tries to discern the high-level meaning of a sentence or thought. Thus, it does not translate every sentence literally, but it manages to convey the general idea of what is being thought quite accurately. “Twenty years ago, if you had asked any cognitive neuroscientist in the world if this was possible, they would have laughed at you,” says Huth in The Scientist.
To train their algorithm, Huth and the rest of the team of neuroscientists used fMRI brain recordings, which measure changes in blood flow within the brain as indicators of brain activity. These recordings were made while three study subjects —one woman and two men, all between 20 and 30 years old— listened to 16 hours of different podcasts and audiobooks, including several TED talks.
It was important for the subjects to listen to material as varied as possible, in order to build a ‘decoder’ that was as precise as possible and applicable to the widest range of concepts and situations. By analyzing those 16 hours of changes in each individual’s cerebral blood flow, the algorithm made a series of predictions of what future fMRI readings would look like. These ‘guesses’ were then checked against the actual fMRI recording, with the candidates closest to the real readings determining the words the decoder ultimately generated.
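The guess-and-check loop described above can be sketched in miniature. This is a toy illustration, not the authors’ implementation: the “encoding model” here is just a random embedding per word standing in for the learned mapping from meaning to voxel activity, and all names and the vocabulary are invented for the example. The idea it demonstrates is the same: predict the brain response for each candidate continuation, and keep the one closest to the observed signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary and "encoding model": a fixed random
# vector per word stands in for the predicted voxel response pattern.
VOCAB = ["the", "dog", "ran", "home", "cat", "sat"]
EMBED = {w: rng.normal(size=8) for w in VOCAB}

def predict_response(words):
    """Toy encoding model: predicted fMRI response for a word sequence."""
    return sum(EMBED[w] for w in words)

def decode_step(prefix, observed):
    """Extend the prefix with the candidate word whose predicted
    response is closest (Euclidean distance) to the observed signal."""
    best = min(
        VOCAB,
        key=lambda w: np.linalg.norm(predict_response(prefix + [w]) - observed),
    )
    return prefix + [best]

# Simulate noiseless "observed" data for a true sentence, then decode it.
truth = ["the", "dog", "ran", "home"]
decoded = []
for t in range(len(truth)):
    observed = predict_response(truth[: t + 1])
    decoded = decode_step(decoded, observed)

print(decoded)  # recovers the true sequence in this noiseless toy setting
```

In the real study the observed signal is noisy and temporally smeared, so the decoder recovers the gist rather than the exact words; here the noiseless setup is used only to make the candidate-scoring mechanism visible.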
After completing this filtering process, the researchers scored the similarity between the stimuli presented to the subject and what the ‘decoder’ generated. The results indicated that the algorithm ended up producing, from the fMRI recordings, a complete story that matched “pretty well”, according to Huth, the story narrated in that day’s podcast or audiobook.
However, it did not always hit the target, and some flaws remain to be polished. For example, the algorithm is not particularly good with pronouns and often confuses first and third person. “It knows what’s happening pretty precisely, but not who’s doing things,” says Huth. This may also be due to peculiarities of the English language, which has no grammatical gender and tends towards neutral phrasing.
The potential application of this discovery is enormous, since, unlike the systems known until now, it does not involve complex surgery or implanting any device in the brain. Reducing the expense and inconvenience of the large fMRI machines is the hard part, but it is a challenge that Huth and his team are willing to take on.
There are neuroimaging alternatives such as magnetoencephalography (MEG), a similar technique that records the magnetic fields produced by the brain’s electrical currents. MEG is potentially more portable than fMRI, which could make it the key, together with the ‘decoder’ developed by these University of Texas scientists, to enabling people with paralysis or in advanced stages of ALS to communicate.
Beyond its practical applications, what matters is the new knowledge this project offers about how the organ that governs our nervous system works. The results of the study reveal, for example, which parts of the brain are responsible for creating meaning. Among their findings was that two apparently distinct brain regions, the prefrontal cortex and the temporoparietal cortex, represented the same information to the decoder, which worked equally well using recordings from either area.
Nor was the algorithm limited to identifying verbal stimuli. Although trained with subjects who exclusively listened to spoken language, the scientists tested its effectiveness by projecting a silent film during one of the experiments. The decoder successfully reconstructed what was happening, as well as a participant’s imagined experience of telling a story themselves rather than listening to it. “The fact that those things overlap so much [in the brain] is something we’re beginning to appreciate,” Huth concluded.
To ensure that their discovery is not used for purposes beyond research or assistance for people who are paralyzed and unable to communicate, the neuroscientists responsible for this study conducted several tests to confirm that it does not work without the participant’s voluntary cooperation.
[DARPA creates the first mind-controlled prosthetic arm that restores the sense of touch]
To verify this, while the audio was played, the researchers asked subjects to distract themselves by performing other mental tasks, such as counting, listing animals, or imagining a different story than the one they were listening to. Of all of them, the most effective strategy to confuse the algorithm and cause it to give inaccurate readings was, curiously, to think of animals.
They also concluded that a decoder trained on one subject’s brain scans was useless for others, so a person would have to undergo long training sessions before the algorithm could accurately read their thoughts. Even so, further developments in this technology may bring closer things that until recently seemed dystopian, such as soldiers receiving orders telepathically, directly in the brain.