MIT Researcher Uses Neural Cellular Automata to Visualize Music
- MIT graduate student Mariano Salcedo develops an AI system that uses Neural Cellular Automata for real-time music visualization
- A web-based interface lets users shape the visual performance by adjusting the relationship between audio energy and the AI-generated visuals
- Research presented at AAAI 2026 explores applications of self-organized systems beyond music, including biological modeling
Mariano Salcedo, a graduate student in MIT's Music Technology and Computation program, is redefining the intersection of acoustics and visual art. By leveraging Neural Cellular Automata (NCA), Salcedo has created a framework where music doesn’t just accompany visuals but actively drives their evolution. Unlike traditional visualizations that rely on static filters, this system treats pixels as living cells that react to auditory stimuli.
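To make the pixels-as-cells idea concrete, here is a minimal sketch of a generic NCA update step in Python. It is not Salcedo's implementation: the grid size, channel count, and random MLP weights (`W1`, `W2`) are illustrative stand-ins for a trained update rule.

```python
import numpy as np

# Minimal Neural Cellular Automata step (illustrative sketch only).
# Each pixel holds a small state vector; cells perceive their neighbors
# via fixed convolution kernels and update themselves locally.

H, W, CHANNELS = 64, 64, 8          # grid size and per-cell state size
rng = np.random.default_rng(0)

# Sobel filters estimate local gradients -- the cell's "senses".
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = SOBEL_X.T
IDENTITY = np.zeros((3, 3), dtype=np.float32)
IDENTITY[1, 1] = 1.0

def perceive(state):
    """Stack identity + gradient features for every cell -> (H, W, 3*CHANNELS)."""
    feats = []
    for kernel in (IDENTITY, SOBEL_X, SOBEL_Y):
        for c in range(state.shape[-1]):
            # 'wrap' padding keeps the grid toroidal; purely a demo choice.
            padded = np.pad(state[..., c], 1, mode="wrap")
            out = sum(
                kernel[i, j] * padded[i:i + H, j:j + W]
                for i in range(3) for j in range(3)
            )
            feats.append(out)
    return np.stack(feats, axis=-1)

# A tiny random "update rule" stands in for the trained network.
W1 = rng.normal(0, 0.1, (3 * CHANNELS, 32)).astype(np.float32)
W2 = rng.normal(0, 0.1, (32, CHANNELS)).astype(np.float32)

def step(state, update_rate=0.5):
    """One NCA update: perceive, compute a delta, apply it stochastically."""
    delta = np.maximum(perceive(state) @ W1, 0) @ W2   # per-cell ReLU MLP
    mask = rng.random((H, W, 1)) < update_rate         # random cell firing
    return state + delta * mask

state = rng.normal(0, 0.1, (H, W, CHANNELS)).astype(np.float32)
for _ in range(10):
    state = step(state)
```

The key property this illustrates is locality: no cell sees the whole grid, yet global patterns emerge from many small neighborhood-level updates.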
The technology utilizes self-organized systems, which are collections of individual parts that interact locally to create complex, emergent behaviors—much like a flock of birds or a biological organism. Salcedo’s web interface allows users to bridge the gap between signal processing and generative art. By adjusting internal parameters, performers can synchronize the energy of a music stream with the growth patterns of the visual automata.
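One plausible way to couple the two domains is to measure the energy of each incoming audio frame and use it to set how many cells fire per update step. The sketch below assumes hypothetical names and controls (`frame_energy`, `energy_to_update_rate`, a performer-facing `sensitivity` knob); the actual parameters of Salcedo's web interface are not described in the source.

```python
import numpy as np

# Illustrative audio-energy-to-growth mapping (hypothetical parameter
# names; not the actual controls of Salcedo's interface).

def frame_energy(samples: np.ndarray) -> float:
    """Root-mean-square energy of one audio frame of float samples."""
    return float(np.sqrt(np.mean(samples ** 2)))

def energy_to_update_rate(energy: float,
                          sensitivity: float = 4.0,
                          floor: float = 0.05) -> float:
    """Map audio energy to the fraction of cells that fire per NCA step.

    `sensitivity` is the performer-facing knob: higher values make the
    visuals react more aggressively to loud passages; `floor` keeps the
    grid faintly alive even in silence.
    """
    return float(np.clip(floor + sensitivity * energy, 0.0, 1.0))

# Example: a quiet frame barely perturbs the grid; a loud one drives growth.
rng = np.random.default_rng(1)
quiet = 0.02 * rng.standard_normal(1024)
loud = 0.4 * rng.standard_normal(1024)
print(energy_to_update_rate(frame_energy(quiet)))   # near the floor
print(energy_to_update_rate(frame_energy(loud)))    # clipped to 1.0
```

Under this kind of wiring, quiet passages leave the automata nearly frozen while loud ones accelerate growth, which matches the described behavior of visuals that the music actively drives rather than merely accompanies.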
While the current application focuses on enhancing the listener's experience, the underlying research has broader implications for modeling complex systems. Salcedo presented his findings, titled 'Artificial Dancing Intelligence,' at the AAAI conference in early 2026. His work highlights a shift away from massive generative models toward smaller, specialized architectures capable of simulating natural phenomena and reducing cultural bias in digital music expression.