The new technology is a major step in creating less invasive, yet still highly capable, BMIs.
Mapping neural activity to corresponding behaviors is a major goal for neuroscientists developing brain–machine interfaces (BMIs): devices that read and interpret brain activity and transmit instructions to a computer or machine. A major limitation in the development of BMIs is that the devices require invasive brain surgery to read out neural activity. But now, a collaboration at Caltech has developed a new type of minimally invasive BMI that reads out brain activity corresponding to the planning of movement. Using functional ultrasound (fUS) technology, the new BMI can accurately map brain activity from precise regions deep within the brain at a resolution of 100 micrometers (a single neuron is approximately 10 micrometers in size). The new fUS technology is a major step in creating less invasive, yet still highly capable, BMIs. The work was published in the journal Neuron on March 22.
Non-invasive techniques like functional magnetic resonance imaging (fMRI) can image the entire brain but require bulky and expensive machinery. Electroencephalography (EEG) does not require surgery but can only measure activity at low spatial resolution. The new technology was developed with the aid of non-human primates, which were taught to do simple tasks that involved moving their eyes or arms in certain directions when presented with certain cues. As the primates completed the tasks, the fUS measured brain activity in the posterior parietal cortex (PPC), a region of the brain involved in planning movement. The Andersen lab has studied the PPC for decades and has previously created maps of brain activity in the region using electrophysiology. To validate the accuracy of fUS, the researchers compared brain imaging activity from fUS to previously obtained detailed electrophysiology data.
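To give a rough sense of what such a validation step could look like, here is a minimal sketch that compares an fUS activity map against an electrophysiology-derived reference map by spatial correlation. The arrays, their sizes, and the correlation measure are illustrative assumptions, not the authors' published analysis; in practice the two maps would be registered to the same coordinate frame within the PPC.

```python
# Illustrative sketch (not the authors' analysis): comparing an fUS activity
# map to an electrophysiology-derived map via spatial correlation.
# Both maps are hypothetical stand-in arrays.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
fus_map = rng.normal(size=(64, 96))                    # hypothetical fUS activity map
ephys_map = fus_map + 0.5 * rng.normal(size=(64, 96))  # noisy stand-in reference map

# Flatten both maps and compute a pixel-wise Pearson correlation.
r, p = pearsonr(fus_map.ravel(), ephys_map.ravel())
print(f"spatial correlation between maps: r = {r:.2f} (p = {p:.1e})")
```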
Next, with the support of the T&C Chen Brain–Machine Interface Center at Caltech, the team aimed to see whether the activity-dependent changes in the fUS images could be used to decode the intentions of the non-human primate, even before it initiated a movement. The ultrasound imaging data and the corresponding tasks were then processed by a machine-learning algorithm, which learned which patterns of brain activity correlated with which tasks. Once the algorithm was trained, it was presented with ultrasound data collected in real time from the non-human primates. The algorithm predicted, within a few seconds, what behavior the non-human primate was going to carry out (eye movement or reach), the direction of the movement (left or right), and when it planned to make the movement.
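The decoding step described above can be sketched as a standard supervised-learning pipeline: each trial's fUS frame becomes a feature vector, a classifier is trained on labeled trials, and the trained model is then applied to newly acquired frames. The sketch below uses PCA followed by a linear discriminant classifier as an illustrative choice; the data shapes, class labels, and preprocessing are assumptions for demonstration, not the published pipeline.

```python
# Illustrative sketch (not the published pipeline): decoding planned movement
# from functional ultrasound (fUS) frames with a simple linear classifier.
# Each trial is summarized as one 2-D fUS image; labels encode the planned
# movement (e.g., 0 = left saccade, 1 = right saccade, 2 = left reach,
# 3 = right reach). All data here are random stand-ins.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

n_trials, height, width = 400, 64, 96           # hypothetical fUS image size
X = rng.normal(size=(n_trials, height, width))  # stand-in for real fUS frames
y = rng.integers(0, 4, size=n_trials)           # stand-in for movement labels

# Flatten each image into a feature vector, reduce dimensionality with PCA,
# then fit a linear discriminant classifier on the low-dimensional features.
decoder = make_pipeline(
    PCA(n_components=30),
    LinearDiscriminantAnalysis(),
)

X_flat = X.reshape(n_trials, -1)
scores = cross_val_score(decoder, X_flat, y, cv=5)
print(f"cross-validated decoding accuracy: {scores.mean():.2f}")

# Once trained, the same pipeline can classify a newly acquired frame,
# mimicking the real-time prediction step described in the text.
decoder.fit(X_flat, y)
new_frame = rng.normal(size=(1, height * width))
print("predicted intention class:", decoder.predict(new_frame)[0])
```

On real fUS data, the classifier would be trained on the activity recorded during the planning period of each trial, so that its prediction can be made before the movement itself begins.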
Source: Caltech news release