MIT Uses Generative AI to See Through Walls
- MIT researchers use generative AI to reconstruct 3D objects through obstructions using wireless signals.
- New Wave-Former system improves object shape reconstruction accuracy by 20% over existing methods.
- RISE system utilizes human movement to map entire indoor environments with a single stationary radar.
MIT researchers have developed a breakthrough method for wireless vision, allowing robots to perceive objects hidden behind walls or under debris. By combining millimeter wave (mmWave) signals, the high-frequency radio waves used in 5G and advanced Wi-Fi standards, with generative AI, the team overcame the physical limitations of signal reflection.
In traditional wireless sensing, signals often bounce off surfaces in a single direction (specularity), leaving massive gaps in the resulting data. To solve this, the researchers created Wave-Former, a system that uses a generative model to predict and fill in the missing parts of a 3D shape. Since large-scale wireless datasets are rare, the team cleverly adapted existing visual datasets to simulate the noisy, mirror-like properties of radio reflections.
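The specularity problem described above can be illustrated with a toy simulation. This is a hypothetical sketch, not the researchers' actual pipeline: it mimics mirror-like mmWave reflection by keeping only the surface points whose normals face the sensor, showing how much of an object's shape a single radar view can actually "see" (the gap a generative model like Wave-Former would then have to fill in). All function and variable names here are illustrative.

```python
import numpy as np

def simulate_specular_dropout(points, normals, sensor_pos, cos_thresh=0.95):
    """Keep only points whose surface normal points back toward the
    sensor -- a crude stand-in for specular (mirror-like) reflection,
    which returns energy only from surfaces nearly perpendicular
    to the line of sight."""
    # Unit vectors from each surface point toward the sensor.
    to_sensor = sensor_pos - points
    to_sensor /= np.linalg.norm(to_sensor, axis=1, keepdims=True)
    # Cosine of the angle between each normal and its sensor direction.
    alignment = np.sum(normals * to_sensor, axis=1)
    mask = alignment >= cos_thresh
    return points[mask], mask

# Toy example: a unit sphere sampled at random. Only the small cap
# facing the sensor survives the simulated specular reflection.
rng = np.random.default_rng(0)
pts = rng.normal(size=(5000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # project onto unit sphere
normals = pts.copy()  # for a unit sphere, the normal equals the position
visible, mask = simulate_specular_dropout(
    pts, normals, sensor_pos=np.array([0.0, 0.0, 10.0])
)
print(f"{mask.mean():.0%} of surface points produce a return")
```

Running this shows that only a few percent of the sphere reflects back to the sensor, which is exactly the kind of sparse, incomplete point cloud that motivates using a generative model to predict the missing geometry.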
Beyond individual objects, the researchers introduced RISE, a system that maps entire rooms by tracking how signals bounce off moving people. These ghost signals, usually discarded as interference, are instead used to reconstruct full indoor scenes. This technology offers a privacy-preserving alternative to cameras for smart homes and could transform warehouse logistics by allowing robots to verify the contents of sealed packages without opening them.