Meta AI Unveils TRIBE v2 Brain Prediction Model
- Meta AI introduces TRIBE v2, a tri-modal model predicting human brain activity from video, audio, and text.
- System trained on 1,000+ fMRI hours to predict neural responses for novel stimuli and subjects with high accuracy.
- Model enables digital neuroscience experiments, replicating decades of empirical research results in a virtual environment.
Meta AI researchers have introduced TRIBE v2, a foundation model designed to bridge the gap between artificial intelligence and human neuroscience. By integrating three distinct data streams—video, audio, and language—this tri-modal system can predict how a human brain will respond to a given stimulus with high accuracy.
The model was trained on a massive, unified dataset containing over 1,000 hours of functional Magnetic Resonance Imaging (fMRI) scans, which measure brain activity by detecting changes in blood flow. By analyzing data from 720 different subjects, TRIBE v2 has learned to generalize brain patterns across new tasks and individuals, significantly outperforming previous mathematical models used to map neural responses.
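To make the setup concrete, the following is a minimal sketch of the general class of fMRI encoding models the article describes, not Meta's actual TRIBE v2 code: features extracted from each modality (video, audio, text) are fused and mapped to voxel responses, here with a simple ridge regression on synthetic data. All array sizes and the linear readout are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stimulus features per modality (time points x feature dims).
# Real systems would use learned video/audio/text representations.
n_train, n_test = 400, 100
video = rng.standard_normal((n_train + n_test, 32))
audio = rng.standard_normal((n_train + n_test, 16))
text = rng.standard_normal((n_train + n_test, 24))
X = np.hstack([video, audio, text])  # fused tri-modal feature matrix

# Synthetic "brain": voxel responses as a noisy linear readout of the features.
n_voxels = 50
W_true = rng.standard_normal((X.shape[1], n_voxels))
Y = X @ W_true + 0.5 * rng.standard_normal((X.shape[0], n_voxels))

X_tr, X_te = X[:n_train], X[n_train:]
Y_tr, Y_te = Y[:n_train], Y[n_train:]

# Ridge regression readout: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(X.shape[1]), X_tr.T @ Y_tr)
Y_hat = X_te @ W

def voxel_corr(a, b):
    """Per-voxel Pearson correlation between predicted and measured responses."""
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

# Encoding accuracy is typically reported as held-out correlation per voxel.
r = voxel_corr(Y_hat, Y_te)
print(f"mean held-out voxel correlation: {r.mean():.3f}")
```

Generalizing across subjects and stimuli, as TRIBE v2 reportedly does, would replace this per-dataset linear readout with a model trained jointly over many subjects, but the evaluation idea—predict held-out voxel responses and score them by correlation—is the same.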
Perhaps most impressively, the system facilitates "in-silico" neuroscience, allowing researchers to conduct experiments entirely within a computer simulation. TRIBE v2 successfully recovered classic findings from decades of biological research, such as how the brain processes complex language or visual scenes. This digital twin of human cognition provides a powerful tool for exploring the fine-grained topography of how our senses integrate information.
By transforming AI into a unifying framework for brain study, Meta aims to move beyond fragmented models toward a comprehensive understanding of human cognition. This research suggests that the architectures used in modern AI are becoming increasingly aligned with the functional organization of the human brain itself.