Brain Scans Used to Decode Thoughts
Using MRI scans and artificial intelligence, researchers have developed a method to decode a continuous stream of words from brain activity.
According to research published in the journal Nature Neuroscience, the system reconstructs the general idea of what a person hears or imagines rather than trying to reproduce every word.
According to Alexander Huth, a research author and assistant professor of neuroscience and computer science at The University of Texas at Austin, “It’s getting at the ideas behind the words, the semantics, the meaning.”
However, this technology is unable to read minds. It only functions when a participant actively collaborates with researchers.
Nevertheless, language-decoding systems might one day be able to assist those who are unable to speak due to a disease or brain injury. They also support research into how words and thoughts are processed by the brain.
Earlier language-decoding attempts have relied on sensors placed directly on the surface of the brain. The sensors pick up signals in regions engaged in articulating words.
The Texas team’s methodology is an attempt to “decode more freeform thought,” says Marcel Just, a psychology professor at Carnegie Mellon University who was not involved in the new research.
That could give the technology uses beyond communication, he says.
Understanding mental illness, which is fundamentally a brain disorder, is one of the biggest challenges in medical science, according to Just. “I believe this general strategy will eventually resolve that puzzle.”
The goal of the new study is to comprehend how the brain interprets words.
Three participants spent up to 16 hours each in a functional MRI scanner, which looks for activity patterns across the brain.
Participants wore headphones through which podcasts were played. They mostly just lay there listening to stories from The Moth Radio Hour, according to Huth.
Those word streams activated more than just the speech- and language-related regions of the brain.
“It turns out that a significant portion of the brain is doing something,” Huth says: areas we use for navigation, areas we use for mental arithmetic, and areas we use for processing how something feels when we touch it.
After participants had listened to hours of stories in the scanner, the MRI data were fed to a computer, which learned to match particular patterns of brain activity to particular streams of words.
Participants then listened to new stories in the scanner, and the computer tried to reconstruct those stories from the participants’ brain activity alone.
An early predecessor of the well-known natural language processing program ChatGPT helped the system assemble its output into understandable phrases.
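The pipeline described above can be sketched in miniature. This is not the researchers’ code; it is a toy illustration, with made-up data, of the general idea: an "encoding model" learns to predict brain activity from word features, and decoding then keeps whichever candidate sentence (as a language model might propose) yields predicted activity that best matches the recording.

```python
# Toy sketch of the decoding idea (illustrative only, not the study's method).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 training words, each a 16-dim semantic feature
# vector, paired with recorded activity across 50 brain "voxels".
n_train, n_feat, n_vox = 200, 16, 50
X_train = rng.normal(size=(n_train, n_feat))            # word features
W_true = rng.normal(size=(n_feat, n_vox))               # unknown brain mapping
Y_train = X_train @ W_true + 0.1 * rng.normal(size=(n_train, n_vox))

# Fit a ridge-regression encoding model: word features -> predicted activity.
lam = 1.0
W_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                        X_train.T @ Y_train)

def score(candidate_feats, observed):
    """Correlation between predicted and observed activity (higher = better)."""
    pred = candidate_feats @ W_hat
    return np.corrcoef(pred.ravel(), observed.ravel())[0, 1]

# A new "story": observed activity generated from a true feature sequence.
true_feats = rng.normal(size=(5, n_feat))
observed = true_feats @ W_true + 0.1 * rng.normal(size=(5, n_vox))

# Two candidates a language model might propose: a close paraphrase of the
# truth, and an unrelated sentence. The decoder keeps the better-scoring one.
paraphrase = true_feats + 0.2 * rng.normal(size=(5, n_feat))
unrelated = rng.normal(size=(5, n_feat))
best = max([("paraphrase", paraphrase), ("unrelated", unrelated)],
           key=lambda c: score(c[1], observed))
print(best[0])
```

Because the decoder scores meaning-level features rather than exact words, a paraphrase can outscore the literal transcript — which matches the study’s observation that the output captures the gist, not the verbatim sentence.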
A rephrased version of what a participant heard came out of the system.
According to Huth, if a participant heard someone say, “I didn’t even have my driver’s license yet,” the decoded version might have been, “She hadn’t even learned to drive yet.” The decoded version was often imperfect, he says.
In a different test, the system was able to decode sentences that a participant had merely imagined saying.
In a third experiment, participants watched silent videos that told a story.
“We didn’t tell the subjects to try to describe what’s happening,” Huth says. Even so, the system produced a verbal description of what was happening in the video.
For now, an experimental communication system for paralyzed people, being developed by a team led by Dr. Edward Chang at the University of California, San Francisco, is faster and more precise than the MRI method.
According to David Moses, a researcher in Chang’s group, “People get a sheet of electrical sensors implanted directly on the surface of the brain.” That captures brain activity right at its source.
The sensors pick up activity in the parts of the brain that normally control speech. At least one user of the system can accurately produce 15 words per minute using only his thoughts.
With an MRI-based approach, by contrast, “no one has to get surgery,” Moses notes.
Both methods require a person’s cooperation before their thoughts can be read. In the Texas study, participants could thwart the system simply by telling themselves a different story.
Future iterations, though, might bring up moral dilemmas.
The prospect is both thrilling and unsettling, Huth says. What if you could read out what someone is merely thinking in their head? That could be dangerous.
“This is all about the user having a new way of communicating, a new tool that is totally in their control,” he claims. “That is the goal, and we have to make sure that it stays the goal.”