summary: Researchers have enabled a person to speak simply by thinking: deep electrodes in the participant’s brain sent electrical signals to a computer, which then voiced the imagined syllables.
This technology offers hope for paralyzed people to regain speech, and the research marks an important step towards brain-computer interfaces for spontaneous communication.
Key Facts:
- Technology: Deep electrodes send brain signals to a computer, which generates sound.
- Participant: The experiment involved an epilepsy patient with implanted depth electrodes.
- Future impact: It could eventually allow paralyzed people to communicate through thought.
source: Tel Aviv University
A scientific breakthrough by researchers at Tel Aviv University and the Tel Aviv Sourasky Medical Center (Ichilov Hospital) has demonstrated that it may be possible to make silent people speak, using only their thoughts.
In one experiment, a silent participant imagined saying one of two syllables; deep electrodes implanted in his brain sent electrical signals to a computer, which then spoke the syllables.
The study was led by Dr. Ariel Tankus of the Faculty of Medicine and Health Sciences at Tel Aviv University and Tel Aviv Sourasky Medical Center (Ichilov Hospital), and Dr. Ido Strauss, also of the Faculty of Medicine and Health Sciences at Tel Aviv University and director of the Functional Neurosurgery Unit at Ichilov Hospital.
The results of this study have been published in the journal Neurosurgery.
These findings offer hope that people who are completely paralyzed by diseases such as ALS, brain stem stroke or brain injury may one day be able to regain the ability to speak spontaneously.
“The patients studied were epilepsy patients who had been admitted to the hospital to undergo surgery to remove an epileptic focus in their brain,” explains Dr. Tankus. “Of course, to do this, we need to pinpoint the location of the focus, the source of the ‘short circuit’ that sends powerful electrical waves through the brain.
“This situation concerns a small proportion of epilepsy patients who do not respond well to drug therapy and who require neurosurgical intervention, and an even smaller proportion of epilepsy patients in whom the suspected lesion is located deep in the brain rather than on the cortical surface.
“Electrodes need to be implanted deep in the brain to pinpoint the exact location, after which the patient is hospitalized to await the next seizure.
“When a seizure occurs, the electrodes tell the neurologist or neurosurgeon where the focus of the seizure is, enabling them to operate with precision. From a scientific point of view, this represents a rare glimpse deep inside the living human brain.”
“Fortunately, an epilepsy patient admitted to Ichilov Hospital agreed to take part in the experiment, which may eventually help completely paralyzed people to express themselves again through artificial speech.”
In the first phase of the experiment, with deep electrodes already implanted in the patient’s brain, researchers from Tel Aviv University asked him to say the two syllables /a/ and /e/ out loud.
The researchers recorded his brain activity as he produced these sounds. Using deep learning and machine learning, the researchers trained an artificial intelligence model to identify specific brain cells whose electrical activity indicates a desire to produce /a/ or /e/.
Once the computer had learned to recognise the patterns of electrical activity associated with these two syllables in the patient’s brain, he was asked to imagine saying /a/ and /e/, and the computer translated the electrical signals and played pre-recorded /a/ or /e/ sounds in response.
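The paper itself does not publish code, but the two-phase protocol described above maps naturally onto a standard supervised-learning pipeline. The sketch below is a minimal illustration in Python with scikit-learn, not the authors’ implementation: the feature representation (one vector of per-channel high-frequency power per trial), the channel and trial counts, and the classifier choice are all assumptions for illustration.

```python
# Minimal sketch of the two-phase protocol (illustration only, not the
# authors' code). Assumption: each trial is reduced to one feature vector
# of per-channel high-frequency power; the study decoded high-frequency
# and spiking activity from depth electrodes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Phase 1: overt speech. Placeholder data standing in for recorded
# trials: 200 trials x 16 channels, labeled 0 for /a/ and 1 for /e/.
X_overt = rng.normal(size=(200, 16))
y_overt = rng.integers(0, 2, size=200)

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X_overt, y_overt, cv=5).mean()
print(f"cross-validated overt-speech accuracy: {acc:.2f}")
clf.fit(X_overt, y_overt)

# Phase 2: imagined ("silent") speech. The model trained on overt trials
# classifies features from an imagined-speech trial, and the predicted
# label selects a pre-recorded sound to play back.
def decode_and_play(trial_features: np.ndarray, play_sound) -> None:
    label = clf.predict(trial_features.reshape(1, -1))[0]
    play_sound("a.wav" if label == 0 else "e.wav")  # hypothetical files
```

On the random placeholder data the cross-validated accuracy will hover near chance; with real trial features the same pipeline would reflect how separable the two syllables actually are in the recorded activity.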
“My research area deals with speech encoding and decoding – how individual brain cells are involved in speech processes such as producing speech, hearing speech and imagining speech, or ‘silent speech,’” Dr. Tankus says.
“In this experiment, for the first time in history, we were able to link the activity of individual cells in the brain regions we recorded from with specific speech sounds.
“This allows us to distinguish between the electrical signals that represent the /a/ and /e/ sounds. Our research is currently focused on two components of speech, two syllables.
“Of course, our goal is to achieve full speech, but even two different syllables would allow a completely paralyzed person to signal ‘yes’ and ‘no.’ For example, in the future it may be possible to train a computer for ALS patients in the early stages of the disease, while they are still able to speak.”
“The computer could learn to recognise the electrical signals in the patient’s brain, and would still be able to interpret them even after the patient has lost the ability to move their muscles. This is just one example.”
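In software terms, this “learn while the patient can still speak, interpret later” idea amounts to persisting the trained decoder and reloading it once overt speech is no longer possible. A minimal sketch, assuming the scikit-learn model from the previous snippet and joblib for serialization (an implementation choice for illustration, not something the study specifies):

```python
# Sketch: save the decoder trained on overt speech, reload it later for
# silent use (joblib is an assumed tooling choice; any serialization works).
import joblib

joblib.dump(clf, "speech_decoder.joblib")         # while speech is intact
clf_later = joblib.load("speech_decoder.joblib")  # after speech is lost
```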
“Our study is an important step towards developing brain-computer interfaces that replace the brain’s speech control pathways, enabling completely paralyzed people to once again communicate spontaneously with their surroundings.”
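To make “replacing the brain’s speech control pathways” concrete, one plausible shape for the online side of such an interface is a loop that band-passes each window of raw signal into a high-frequency band, averages power per channel, classifies the result, and speaks it. The sampling rate, the 80–300 Hz band, and the per-cue windowing below are illustrative assumptions, not the published system:

```python
# Illustrative real-time decoding loop (assumed design, not the published
# system): raw window -> high-frequency power -> class -> spoken sound.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 2000  # assumed sampling rate in Hz
SOS = butter(4, [80, 300], btype="bandpass", fs=FS, output="sos")

def high_freq_power(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, n_channels) raw voltages for one trial."""
    filtered = sosfiltfilt(SOS, window, axis=0)   # keep the 80-300 Hz band
    return np.log((filtered ** 2).mean(axis=0))   # log power per channel

def run_loop(windows, clf, play_sound):
    """windows: iterable yielding one raw window per imagined syllable."""
    for window in windows:
        label = clf.predict(high_freq_power(window).reshape(1, -1))[0]
        play_sound("a.wav" if label == 0 else "e.wav")
```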
About this BCI and Neurotechnology Research News
author: Ariel Tankus
source: Tel Aviv University
contact: Ariel Tankus – Tel Aviv University
image: Image courtesy of Neuroscience News
Original Research: Closed access.
“A Speech Neuroprosthesis in the Frontal Lobe and Hippocampus: Decoding High-Frequency Activity into Phonemes” by Ariel Tankus et al. Neurosurgery
Abstract
A Speech Neuroprosthesis in the Frontal Lobe and Hippocampus: Decoding High-Frequency Activity into Phonemes
Background and Objectives:
Speech disorders caused by injury or disease are devastating. Here, we report a novel speech neuroprosthesis that artificially produces speech components based on high-frequency activity in brain regions not previously exploited in neuroprosthetics: the anterior cingulate cortex, orbitofrontal cortex, and hippocampus.
Methods:
A 37-year-old male neurosurgical epilepsy patient with intact speech, who had depth electrodes implanted for clinical reasons, silently controlled the neuroprosthesis almost immediately and in a natural way to voluntarily produce two vowel sounds.
Results:
In the first set of trials, the participant used the neuroprosthesis to artificially produce the different vowel sounds with 85% accuracy. In subsequent trials, performance improved consistently, likely due to neural plasticity. We show that a neuroprosthesis trained on overt speech data can be controlled silently.
Conclusion:
The results demonstrate the clinical feasibility of directly decoding high-frequency activity, including spiking activity, in the above-mentioned areas for the silent production of phonemes, which could serve as part of a neuroprosthesis replacing lost speech control pathways. This may pave the way for a strategy of implanting neuroprostheses at earlier stages of a disease (e.g., amyotrophic lateral sclerosis), while speech production is still intact, to improve training and allow silent speech control even at later stages.