
New AI turns brainwaves into complex speech

In scary/cool science news of the day, US scientists have taken a significant step forward in creating computers that communicate directly with our brains.

Neuroscientists and speech experts at the University of California San Francisco (UCSF) have developed an artificial intelligence that’s able to read human brainwaves and convert them to speech.

The breakthrough, first reported in the journal Nature, has implications both for people with conditions that have cost them the power of speech (MS or stroke sufferers, for example), and for the future robot apocalypse. They can literally read our thoughts now, guys. Is it time to be concerned?

All jokes aside, this tech is completely ground-breaking for the seamless integration of machines into human mechanisms, potentially reducing the need for user input in programming. Finally, MS Paint will understand that what you managed to create IRL wasn’t actually what was in your head.

Brainwave-to-speech technology has made rapid progress in the past ten years, having previously stalled at translating raw brain data into words. The complexity of neurological pathways, and the individuality of each person's brain patterns, meant that it was generally only possible to generate one word at a time.

However, whilst people’s thought patterns are unique and enormously complex, the signals from the brain to the mouth, jaw, and tongue that produce speech are fairly universal. Hence, rather than studying only the brain, the UCSF scientists used two AIs to analyse both neurological signals and the movements of the mouth during speech.

The video below shows an X-ray view of what our mouth and throat actually look like when we talk.

When the information collected by these AIs was fed into a synthesiser, something that more or less resembled complex sentences came out.
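To make that two-stage idea concrete, here's a minimal, hypothetical sketch in Python (using PyTorch): one network guesses the movements of the vocal tract from brain recordings, and a second turns those movements into spectrogram-like features that a separate synthesiser could convert into audio. The layer sizes, dimensions and names are illustrative assumptions, not the UCSF team's actual models.

```python
import torch
import torch.nn as nn

# Hypothetical stage 1: decode articulatory movements (jaw, tongue, lips)
# from recorded neural activity. All dimensions are illustrative only.
class NeuralToArticulation(nn.Module):
    def __init__(self, n_electrodes=256, n_articulators=33):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, 128, batch_first=True)
        self.out = nn.Linear(128, n_articulators)

    def forward(self, neural_signals):           # (batch, time, electrodes)
        hidden, _ = self.rnn(neural_signals)
        return self.out(hidden)                  # (batch, time, articulators)

# Hypothetical stage 2: map articulatory movements to acoustic features
# (e.g. a spectrogram) for a downstream speech synthesiser.
class ArticulationToAcoustics(nn.Module):
    def __init__(self, n_articulators=33, n_acoustic_features=80):
        super().__init__()
        self.rnn = nn.LSTM(n_articulators, 128, batch_first=True)
        self.out = nn.Linear(128, n_acoustic_features)

    def forward(self, articulation):             # (batch, time, articulators)
        hidden, _ = self.rnn(articulation)
        return self.out(hidden)                  # (batch, time, acoustic features)

# Chain the two stages on dummy data: brain signals in, spectrogram-like features out.
brain_recording = torch.randn(1, 200, 256)       # 200 time steps of simulated neural data
articulation = NeuralToArticulation()(brain_recording)
acoustics = ArticulationToAcoustics()(articulation)
print(acoustics.shape)                            # torch.Size([1, 200, 80])
```

The appeal of splitting the problem this way is that the second stage deals only with the fairly universal mechanics of the vocal tract, leaving the messy, person-specific brain decoding to the first stage.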

The result wasn’t perfect. Whilst the artificial neural network did well at decoding fricatives – sounds like the ‘sh’ in ‘ship’ – it often stumbled at plosives, such as the ‘b’ in ‘bob’. The resulting sounds are distinctly human, but perhaps a human who’s had one too many G&Ts.

In order to get the data for the anatomical AI, the scientists had native English-speaking volunteers read out lists of one hundred words, then interpret sentences fed back to them from the algorithm.

The neurological AI was a little more complicated to manage. The team worked with five people who already had high-density electrode arrays installed in their skulls for the treatment of epilepsy. These devices need to be placed surgically under the skull – an invasive, dangerous procedure which limits this tech’s chances of hitting the market anytime soon.

But Professors Grayden and Oxley, and other teams around the world including Elon Musk’s start-up Neuralink, are working on less-invasive methods.

Their solution is to attach electrodes to tiny, hollow, tube-like scaffolds and then float those scaffolds into blood vessels in the brain. They could sit there, reading brain signals, without interfering with blood flow or the brain. Like a little tiny brain pet.

A human trial is planned for later this year. Any volunteers?
