Scientists Have Developed An AI System That Decodes Thoughts And Converts Them Into Text

Scientists at the University of Texas have breached what may be the last truly private domain: the contents of the human mind can now, to a degree, be read.

Researchers have developed a noninvasive way to convert human thoughts into text. The “semantic decoder”, while currently cumbersome, could one day be miniaturized and made portable, so that the sanctum sanctorum of the human body could be monitored from anywhere.

The researchers’ paper, published Monday in Nature Neuroscience, indicated that “a brain-computer interface that decodes continuous language from non-invasive recordings could have many scientific applications and practical uses.”

As the MIT Technology Review suggested, practical applications could include surveillance and interrogation, in addition to helping people who cannot speak communicate. The current technology depends on the subject’s cooperation and can be actively resisted.

This new decoder uses both functional magnetic resonance imaging of the brain and artificial intelligence.

The team, led by Jerry Tang, a PhD student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin, trained GPT-1, an early predecessor of the language models behind ChatGPT, on a dataset of English sentences drawn from hundreds of narrated stories.

The stories were featured on the podcast “Modern Love,” produced by the New York Times.

The AI model developed by the researchers found brain patterns that corresponded to specific words. It could then fill in the gaps using its predictive ability, “generating word sequences, scoring each candidate’s likelihood of evoking the recorded brain reactions and selecting the best candidate.”
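The generate-score-select loop described above resembles a beam search. The sketch below is purely illustrative: the function names, the toy vocabulary, and the overlap-based scoring are stand-ins invented for this example, not the authors’ actual code, which uses a language model to propose words and an fMRI encoding model to score them.

```python
def propose_next_words(sequence):
    # Stand-in for a language model proposing plausible next words.
    return ["the", "she", "drive", "learn", "not"]

def brain_response_likelihood(sequence, recording):
    # Stand-in for an encoding model that scores how well a candidate
    # word sequence predicts the recorded brain response. Here it is
    # just a toy overlap count between the words and the "recording".
    return sum(1 for w in sequence if w in recording)

def decode(recording, steps=3, beam_width=2):
    beam = [[]]  # start with a single empty candidate sequence
    for _ in range(steps):
        # Extend every candidate in the beam with each proposed word...
        candidates = [seq + [w] for seq in beam for w in propose_next_words(seq)]
        # ...score each extension against the recording, and keep only
        # the best-scoring candidates for the next round.
        candidates.sort(key=lambda s: brain_response_likelihood(s, recording),
                        reverse=True)
        beam = candidates[:beam_width]
    return beam[0]

print(decode({"she", "learn", "drive"}))  # → ['she', 'she', 'she']
```

Keeping a small beam rather than a single best guess is what lets such a decoder recover from a locally wrong word choice a few steps later.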

When the test subjects were scanned again, the decoder recognized and deciphered their thoughts.

The reconstructions were not word-for-word translations; they paraphrased the gist of the originals, sometimes with a little imaginative license.

One test subject heard a speaker say, “I haven’t got my license yet,” which the decoder rendered as, “she’s not even begun to learn how to drive.”

Another test subject heard the words, “I didn’t know whether to run away, cry, or scream. Instead, I said, ‘Leave me alone!’” Those thoughts were decoded as: “Started screaming and crying, and then just said, ‘I told you to leave me alone.’”

The Texas researchers tested the decoder not only on verbal thoughts but also on non-narrative visual ones.

The test subjects watched four Pixar shorts, each lasting 4-6 minutes. The films were “self-contained” and almost entirely devoid of language. The subjects’ brain responses were recorded to determine whether the thought decoder could make sense of what they had seen. According to reports, the model showed promise.

Huth said on a University of Texas at Austin podcast that “for a noninvasive technique, this is a big leap forward” compared to previous methods, which typically recovered only single words or short sentences.

Huth added, “We are getting the model to decode continuous language for extended periods with complex ideas.”

Researchers are aware of the ethical issues raised by this technology.

“We have taken the concern that it could potentially be misused very seriously and worked to prevent that,” said Tang, adding that the team wants people to use this technology only when they want to and when it is helpful.

Even if bad actors were to get their hands on this technology today, they wouldn’t have a great deal of success.

The decoder produces meaningful results only for people on whom it has already been trained, a process that involves scanning the subject for several hours. Used on an unwilling passerby, it would yield nothing intelligible. A large enough dataset, however, could eventually eliminate the need for such person-specific training.

If an authoritarian government or criminal actor managed to do the impossible today and get hold of both this technology and the individual on whom it was trained, the captive would still have ways of protecting their mental secrets.

Researchers say test subjects could resist the decoder’s mind-reading attempts by imagining animals or silently telling their own stories.

Tang said that despite the technology’s current limitations and subjects’ ability to resist it, “it is important to be proactive and enact policies that protect the people and their privacy,” adding, “It is important to regulate the use of these devices.”

Tang, speaking to the MIT Technology Review, said: “Nobody should have their brain decoded without consent.”

TheBlaze covered a World Economic Forum conference in January that emphasized the “age of brain transparency.”

Nita Farahany, a professor of law and philosophy at Duke Law School and faculty chair of the Duke MA program in bioethics and science policy, said, “What you feel, what you think: it’s just data,” and artificial intelligence can decode its large-scale patterns.

In her Jan. 19 presentation, entitled “Ready for Brain Transparency?”, Farahany explained that when people think or feel emotions, “neurons in the brain emit tiny little electrical discharges.” As a particular thought takes form, hundreds of thousands of neurons fire in characteristic patterns that can be decoded with EEG (electroencephalography) and AI-powered devices.

Farahany expressed an optimism similar to that of the UT researchers when she said that widespread adoption of the technologies would “change the ways that we interact and understand ourselves.”