Meta's Fundamental AI Research (FAIR) team has announced the release of TRIBE v2, a massive foundation model trained on multimodal brain data. The model represents a significant leap in neuro-AI: it can create a "digital twin" of human neural activity.
TRIBE v2 substantially outperforms its predecessor, with a reported 70x improvement in resolution when predicting brain responses to sensory stimuli. By training on a diverse dataset of fMRI, EEG, and ECoG signals, the model can forecast how specific sights and sounds will be processed by the primary and secondary visual and auditory cortices.
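Meta has not published the details of the training pipeline, but the underlying task is a classic neural encoding problem: map features of a stimulus to measured brain responses, then use the fitted mapping to predict responses to new stimuli. The sketch below illustrates that general recipe with a simple ridge-regression baseline; all shapes, data, and the feature dimensionality are placeholders, not TRIBE v2's actual configuration.

```python
import numpy as np

# Hypothetical shapes: 1,000 stimulus embeddings (e.g., from a vision or
# audio backbone) mapped to responses of 500 cortical voxels. Synthetic
# data stands in for real recordings.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 256))   # stimulus features
Y = rng.standard_normal((1000, 500))   # measured brain responses (e.g., fMRI betas)

# Ridge regression: the standard baseline for neural encoding models.
lam = 10.0
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# "Neural forecasting": predict voxel responses to an unseen stimulus.
x_new = rng.standard_normal((1, 256))
y_pred = x_new @ W
print(y_pred.shape)                     # (1, 500)
```

A foundation model like TRIBE v2 presumably replaces the linear map with a deep network trained across subjects and modalities, but the input-to-response structure of the task is the same.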
This "neural forecasting" capability has profound implications for Brain-Computer Interfaces (BCI). TRIBE v2 can be used to denoise messy signals from wearable BCI hardware, allowing for more precise control of external devices. In the long term, Meta envisions this technology as a cornerstone for neural accessibility tools, helping individuals with speech or motor impairments communicate more fluently.
The architecture of TRIBE v2 leverages a novel Sparse Transformer variant that mimics the modular connectivity of the human brain. This allows the model to generalize across different individuals while maintaining a high degree of personalization, a combination that has proven difficult to achieve in large-scale neural models.
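Meta has not released the model's sparsity pattern, but "modular connectivity" suggests attention that is mostly confined within groups of tokens, with a few hub tokens linking groups, loosely analogous to cortical areas connected by long-range pathways. The sketch below shows one way to express such a mask in a single attention step; the module count, hub scheme, and dimensions are assumptions, not TRIBE v2's design.

```python
import torch
import torch.nn.functional as F

# Modular sparse attention: tokens attend only within their module,
# plus a small set of hub tokens that bridge modules.
T, D, n_modules, n_hubs = 32, 16, 4, 2
module_id = torch.arange(T) % n_modules

mask = module_id[:, None] == module_id[None, :]   # within-module links
mask[:, :n_hubs] = True                           # every token reads the hubs
mask[:n_hubs, :] = True                           # hubs read every token

q, k, v = (torch.randn(T, D) for _ in range(3))
scores = (q @ k.T) / D**0.5
scores = scores.masked_fill(~mask, float("-inf")) # disallow cross-module links
out = F.softmax(scores, dim=-1) @ v
print(out.shape)                                  # torch.Size([32, 16])
```

Restricting attention this way cuts compute and, in principle, lets per-module parameters specialize while shared hub pathways carry the cross-individual structure.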