Robots That See

My research seeks to endow robots with the ability to observe their environment and build probabilistic, interpretable, and compact internal representations from sensory data. My early contributions introduced Bayesian frameworks for occupancy modelling, yielding scalable and differentiable maps of the environment. This work expanded into spatiotemporal modelling of motion patterns, allowing robots to anticipate dynamic elements such as pedestrian movement. Later projects developed photorealistic scene reconstructions under challenging conditions, such as low-light or underwater environments, as well as methods to align and fuse multiple representations for multi-robot collaboration. Together, these representations let robots reason about their surroundings with quantified uncertainty, which is critical for navigation and interaction: uncertainty-aware photorealistic modelling, for example, helps robots operate in poorly lit caves, while underwater reconstructions support marine operations. Collectively, these developments provide the foundational "vision" robots need to engage with complex real-world scenarios.
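To make the Bayesian occupancy idea concrete, here is a minimal sketch assuming a standard log-odds occupancy grid (a textbook formulation, not the specific models from the papers below): each cell's occupancy belief is updated by adding log-odds evidence from range observations, so Bayesian fusion reduces to simple addition.

    import numpy as np

    # Illustrative log-odds occupancy grid; a textbook sketch, not the
    # specific Bayesian occupancy models developed in this research.
    class OccupancyGrid:
        def __init__(self, shape, p_hit=0.7, p_miss=0.4):
            # Log-odds per cell; 0.0 corresponds to p(occupied) = 0.5.
            self.log_odds = np.zeros(shape)
            self.l_hit = np.log(p_hit / (1.0 - p_hit))     # evidence for occupancy
            self.l_miss = np.log(p_miss / (1.0 - p_miss))  # evidence against

        def update(self, hit_cells, miss_cells):
            # Bayes' rule in log-odds form is additive, so each noisy
            # range observation fuses with a constant-time update.
            for ij in hit_cells:
                self.log_odds[ij] += self.l_hit
            for ij in miss_cells:
                self.log_odds[ij] += self.l_miss

        def probability(self):
            # Map log-odds back to per-cell occupancy probabilities.
            return 1.0 / (1.0 + np.exp(-self.log_odds))

    grid = OccupancyGrid((4, 4))
    grid.update(hit_cells=[(1, 2)], miss_cells=[(0, 0), (0, 1)])
    print(grid.probability()[1, 2])  # > 0.5, so the cell is likely occupied

Keeping beliefs in log-odds form is what makes such maps cheap to update online; differentiable variants replace the discrete grid with a continuous, learnable occupancy function.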

Selected Papers:

PhotoReg: Photometrically Registering 3D Gaussian Splatting Models. Z. Yuan, T. Zhang, M. Johnson-Roberson, W. Zhi. Under review at ICRA 2025.