
A groundbreaking AI system, named DeWave, has been developed by Australian researchers, offering a first-of-its-kind, non-invasive method to translate silent thoughts into text. The technology uses a snug-fitting cap that records brain activity via electroencephalogram (EEG), and the system decodes those signals into text.

This research marks a significant breakthrough in translating raw EEG waves directly into language. According to computer scientist Chin-Teng Lin from the University of Technology Sydney, DeWave represents a pioneering effort in the field, introducing innovative approaches to neural decoding.

So far, DeWave has achieved just over 40 percent accuracy in experiments, a notable improvement over previous benchmarks for translating thoughts from EEG recordings. The team's goal is to raise that accuracy to around 90 percent, on par with conventional language translation or speech recognition software.

Traditional methods for translating brain signals into language require either invasive surgery to implant electrodes or bulky, expensive MRI machines. DeWave's non-invasive approach, which also does not rely on eye-tracking, represents a significant advance, although it still faces the challenge that the same thought can be represented by quite different EEG patterns across individuals.

DeWave’s encoder, after extensive training, transforms EEG waves into a code. This code is then matched to specific words in DeWave’s ‘codebook’. This system is the first to incorporate discrete encoding techniques in brain-to-text translation, integrating with large language models to open new frontiers in neuroscience and AI.
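
The article does not give implementation details, but discrete encoding of this kind is commonly done with vector quantization: each EEG feature vector is replaced by the index of its nearest entry in a learned codebook. The sketch below illustrates that idea in Python; the codebook size, feature dimension, and all variable names are assumptions for illustration, not DeWave's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

CODEBOOK_SIZE = 512   # number of discrete codes (assumed)
FEATURE_DIM = 128     # dimensionality of each EEG feature vector (assumed)

# Stand-in for a learned codebook; in a trained system these vectors
# would be learned jointly with the EEG encoder.
codebook = rng.normal(size=(CODEBOOK_SIZE, FEATURE_DIM))

def quantize(eeg_features: np.ndarray) -> np.ndarray:
    """Map each EEG feature vector to the index of its nearest codebook entry."""
    # Squared Euclidean distance from every feature vector to every code.
    dists = ((eeg_features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)   # one discrete code per EEG segment

# Example: features extracted from a short window of EEG (e.g. one per word).
eeg_window = rng.normal(size=(10, FEATURE_DIM))
codes = quantize(eeg_window)
print(codes)   # discrete indices that a 'codebook' stage can map toward words
```

The appeal of this design is that once brain activity is reduced to a sequence of discrete codes, it can be treated much like a sequence of word tokens and handed to standard language-modelling machinery.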

The team, which includes Lin, drew on trained language models such as BERT and GPT, and further trained DeWave with an open-source large language model to turn the identified words into coherent sentences.
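
Purely as an illustration of this step, the path from discrete codes to sentences can be pictured as a sequence-to-sequence model: the EEG codes are embedded and fed to a language model that predicts word tokens. The module below is a minimal, hypothetical PyTorch sketch; it is not the architecture of DeWave or of the open-source language model the team used, and all sizes and names are assumptions.

```python
import torch
import torch.nn as nn

CODEBOOK_SIZE = 512   # discrete EEG codes (assumed, matching the sketch above)
VOCAB_SIZE = 30_000   # word-piece vocabulary of the language model (assumed)
D_MODEL = 256

class CodesToText(nn.Module):
    """Toy seq2seq mapping from discrete EEG codes to word tokens."""
    def __init__(self):
        super().__init__()
        self.code_embedding = nn.Embedding(CODEBOOK_SIZE, D_MODEL)   # embeds EEG codes
        self.token_embedding = nn.Embedding(VOCAB_SIZE, D_MODEL)     # embeds target words
        self.seq2seq = nn.Transformer(d_model=D_MODEL, batch_first=True)
        self.lm_head = nn.Linear(D_MODEL, VOCAB_SIZE)                # predicts word logits

    def forward(self, codes, target_tokens):
        src = self.code_embedding(codes)            # (batch, code_len, d_model)
        tgt = self.token_embedding(target_tokens)   # (batch, text_len, d_model)
        hidden = self.seq2seq(src, tgt)
        return self.lm_head(hidden)                 # logits over the word vocabulary

model = CodesToText()
codes = torch.randint(0, CODEBOOK_SIZE, (1, 10))    # one window of EEG codes
target = torch.randint(0, VOCAB_SIZE, (1, 12))      # teacher-forced word tokens
logits = model(codes, target)
print(logits.shape)   # torch.Size([1, 12, 30000])
```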

DeWave performs best at translating verbs. Nouns, by contrast, are often rendered as pairs of semantically similar words rather than exact matches, which the researchers attribute to semantically similar words producing similar brain-wave patterns during processing. Despite these challenges, the model yields meaningful results, aligning keywords and forming similar sentence structures.

The research was tested on a relatively large sample, which suggests greater reliability than earlier technologies evaluated on only small samples. The team acknowledges, however, that more work remains, particularly because EEG signals recorded through a cap are noisy, and they stress that continued effort on this challenging but valuable problem is worthwhile.

This research was presented at the NeurIPS 2023 conference, and a preprint is available on arXiv.