AI Trained to Read Electrical Waves in the Brain for Speech Recognition: A Remarkable Advance in Computational Neuroscience.

Many readers likely recall the iconic image of the late theoretical physicist Stephen Hawking, confined to a computer-controlled wheelchair. That computer had been trained to read the movements of Hawking’s eyes: when his gaze fixed on a word or letter, the computer knew what was required of it. At the time, only a few such computers existed, and the first one Hawking used had been designed specifically to suit the needs of that unique mind.

Something of that experience likely comes to mind when reading the lengthy news item recently published on the website of the American Association for the Advancement of Science, which oversees the famous journal Science: a scientific team has succeeded in training artificial intelligence to read the electrical waves in the brain, to the point of recognizing certain words and sentences within those waves.

As a reminder, the brain works throughout its life by transmitting electrical waves that travel between the neurons composing it and accompany all of its functions. By extension, modern medicine defines death as the cessation of the flow of electrical waves in the brain and in the heart, which also works through electrical waves.

Likewise, when the brain performs a function, electrical waves move within it in parallel with the task it performs. In addition, there are specific centers whose electrical waves are associated with the specialized task they carry out, such as the centers that control the movement of a particular limb, memory, speech, pronunciation, and so on.

It is also common to hear about electroencephalogram (EEG) readings in a variety of states of health and disease. Now, a team specializing in “computational neuroscience” has managed to train artificial intelligence to carefully read the traces of the brain’s electrical waves, especially during speech and dialogue.

According to a comment from Professor Martin Schrempf, a specialist in “computational neuroscience” at the Massachusetts Institute of Technology (better known by its acronym, MIT), this achievement opens a horizon of development: “By focusing on advanced models and appropriate approaches, we may be able to decipher what one thinks.” Also according to Schrempf, most scientific teams so far have focused on monitoring the patterns of electrical waves related to speech or thinking by developing intelligent brain-computer interface (BCI) platforms, with special attention to the electrical waves in the brain’s specialized speech regions.
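To make the idea of such a BCI pipeline concrete, here is a minimal sketch using entirely synthetic EEG data; the helper name `band_power` and all parameters are invented for this illustration, not taken from any published system. It classifies one-second windows of electrical signals by their spectral power in classic frequency bands:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs = 256                                      # sampling rate in Hz (assumed)
n_trials, n_channels, n_samples = 200, 8, fs  # one-second windows

# Synthetic EEG: two "words" whose windows differ in alpha-band power.
labels = rng.integers(0, 2, n_trials)
t = np.arange(n_samples) / fs
eeg = rng.normal(0.0, 1.0, (n_trials, n_channels, n_samples))
for i, y in enumerate(labels):
    eeg[i] += (0.5 + 0.4 * y) * np.sin(2 * np.pi * 10 * t)  # 10 Hz component

def band_power(x, lo, hi):
    """Mean spectral power in the [lo, hi] Hz band, per channel."""
    spec = np.abs(np.fft.rfft(x, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(x.shape[-1], d=1 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spec[..., mask].mean(axis=-1)

# Features: power in the theta, alpha, and beta bands for each channel.
X = np.hstack([band_power(eeg, lo, hi) for lo, hi in [(4, 8), (8, 13), (13, 30)]])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

Real speech-decoding BCIs use far richer signals, features, and models, but the overall structure is the same: windows of electrical activity go in, a label comes out.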

In this regard, a new achievement came from a team specializing in “computational neuroscience” at the University of Texas at Austin, led by Professor Alexander Huth. The team developed intelligent algorithms that trained artificial intelligence to read data collected by applying functional magnetic resonance imaging (fMRI) to the centers specialized in producing words in the brain.

To illustrate, an fMRI scan works by detecting changes in blood flow to areas of the brain; because active neurons draw more oxygenated blood, those changes serve as an indirect, delayed measure of neural activity.
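As a rough, hypothetical illustration of that indirectness (not code from the study), the signal an fMRI scanner records can be modeled as neural events convolved with a slow hemodynamic response function, so the measured response peaks several seconds after a word is heard:

```python
import numpy as np
from scipy.stats import gamma

t = np.arange(0, 30, 1.0)                # seconds, 1 s sampling (assumed)

# A canonical double-gamma hemodynamic response function (SPM-style shape).
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.sum()

# Hypothetical neural events: words heard at t = 5 s and t = 12 s.
neural = np.zeros(60)
neural[[5, 12]] = 1.0

# The recorded BOLD trace is (approximately) the events convolved with the HRF.
bold = np.convolve(neural, hrf)[: len(neural)]
print(np.argmax(bold))  # the peak lags the first word by several seconds
```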


The role of generative language models

The research focused on three volunteers who listened to 16 hours of radio broadcasts while their brains were scanned with functional magnetic resonance imaging (fMRI). The team was able to build maps linking changes in the activity of the brain’s speech regions to the content carried by the words during each scan, which usually lasts a few seconds.
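A hedged sketch of what such a “map” might look like computationally: an encoding model that learns, for each voxel, a regularized linear mapping from features of the heard words to the measured response. The stimulus features and scan data below are random stand-ins invented for this example; in the actual study both were real recordings:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_timepoints, n_features, n_voxels = 500, 64, 100

word_feats = rng.normal(size=(n_timepoints, n_features))  # stimulus features
true_map = rng.normal(size=(n_features, n_voxels))        # hidden "ground truth"
voxels = word_feats @ true_map + rng.normal(scale=2.0, size=(n_timepoints, n_voxels))

# Fit one regularized linear map per voxel (Ridge fits them all at once).
model = Ridge(alpha=10.0).fit(word_feats[:400], voxels[:400])
pred = model.predict(word_feats[400:])

# Evaluate: correlation between predicted and measured responses, per voxel.
corrs = [np.corrcoef(pred[:, v], voxels[400:, v])[0, 1] for v in range(n_voxels)]
print("median voxel correlation:", round(float(np.median(corrs)), 3))
```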

Next, the same team trained the AI to observe the reaction of each of the three volunteers’ brains to the meanings of specific sentences. At first, the AI could not pick up the connections and patterns needed to recognize meanings. The team then integrated the large language models behind generative chatbots, technically known as “generative pre-trained transformers” (GPT), with the speech maps drawn from the functional magnetic resonance scans. The result? The artificial intelligence was able to identify the meanings in the minds of the human volunteers as they listened to the sentences carrying those meanings.
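A toy sketch of how that integration could work, under loose assumptions: a language model proposes candidate sentences, the encoding model predicts the brain response each would evoke, and the candidate whose prediction best matches the measured scan is kept. Here, deterministic hash-based “embeddings” and a random linear map stand in for the real GPT and the fitted encoding model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_features, n_voxels = 32, 50
encoding_map = rng.normal(size=(n_features, n_voxels))  # stands in for a fitted model

def embed(sentence: str) -> np.ndarray:
    """Deterministic toy embedding; a real system would query a language model."""
    seed = abs(hash(sentence)) % (2**32)
    return np.random.default_rng(seed).normal(size=n_features)

def predict_bold(sentence: str) -> np.ndarray:
    """Predicted brain response for a candidate sentence."""
    return embed(sentence) @ encoding_map

candidates = [
    "the dog ran across the yard",
    "she opened the letter slowly",
    "rain fell on the quiet street",
]

# Simulate a measured scan: the volunteer actually heard the second sentence.
measured = predict_bold(candidates[1]) + rng.normal(scale=0.5, size=n_voxels)

# Keep the candidate whose predicted response best matches the measured scan.
scores = [np.corrcoef(predict_bold(c), measured)[0, 1] for c in candidates]
print("decoded:", candidates[int(np.argmax(scores))])
```

The design choice worth noting is that the decoder never reads words directly out of the scan; it searches over language-model proposals for the one most consistent with the measured response.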

In a parallel track, the announcement of that experiment and its results sparked comments delving into how dangerous it could be to individual freedom and privacy. What happens when the brain is no longer a fortress in which ideas lie hidden, unknown to others unless you speak them aloud or announce them?

In this regard, Nita Farahany, a specialist in bioethics at Duke University in North Carolina, believes that despite the limitations of the study and its results, it should urge workers in “computational neuroscience” to cooperate with thinkers and academics in order to draw up, together, the safeguards that must be put in place so that this kind of scientific achievement does not turn into a lasting nightmare for privacy and individual freedom.
