Robots That See
My research seeks to endow robots with the ability to observe their environment and build probabilistic, interpretable, and compact internal representations from sensory data. My early contributions introduced Bayesian frameworks for occupancy modelling, yielding scalable and differentiable maps of the environment. This work expanded into spatiotemporal modelling of motion patterns, allowing robots to anticipate dynamic elements such as pedestrian movements. More recent projects developed photorealistic scene reconstructions under challenging conditions, such as low-light or underwater environments. Further methods align and fuse multiple such representations, enhancing multi-robot collaboration. These representations enable robots to reason about their surroundings with confidence and robustness, which is critical for navigation and interaction: uncertainty-aware photorealistic modelling, for example, helps robots operate in poorly lit caves, while underwater models support oceanic tasks. Collectively, these developments provide the foundational "vision" robots need to engage in complex real-world scenarios.
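To give a flavour of what a probabilistic occupancy representation looks like in practice, the sketch below shows a classic Bayesian log-odds occupancy update. It is a minimal illustration of the general idea of fusing noisy observations into a probabilistic map, not the Bayesian Hilbert Maps method from the papers listed below; the probabilities `p_hit` and `p_miss` are assumed sensor-model values chosen for the example.

```python
import numpy as np

# Illustrative sketch only: a standard Bayesian log-odds occupancy update for a
# single map cell, showing how repeated noisy observations are fused into a
# probabilistic belief. This is not the Bayesian Hilbert Maps formulation.

def logit(p):
    """Map a probability to log-odds."""
    return np.log(p / (1.0 - p))

def update_cell(log_odds, observed_occupied, p_hit=0.7, p_miss=0.4):
    """Fuse one observation of a cell into its running log-odds belief.

    p_hit / p_miss are assumed inverse sensor-model probabilities of the cell
    being occupied given a 'hit' or 'miss' observation.
    """
    p = p_hit if observed_occupied else p_miss
    return log_odds + logit(p)

def occupancy_probability(log_odds):
    """Convert log-odds back to an occupancy probability in [0, 1]."""
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))

# Example: a cell observed occupied three times, then free once.
belief = 0.0  # log-odds 0.0 corresponds to an uninformative prior of 0.5
for obs in [True, True, True, False]:
    belief = update_cell(belief, obs)
print(occupancy_probability(belief))  # > 0.5: the cell is likely occupied
```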
Selected Papers:
Continuous Occupancy Map Fusion with Fast Bayesian Hilbert Maps. W. Zhi, R. Senanayake, L. Ott, F. Ramos. ICRA 2019.
Spatiotemporal Learning of Directional Uncertainty in Urban Environments with Kernel Recurrent Mixture Density Networks. W. Zhi, R. Senanayake, L. Ott, F. Ramos. RA-L + IROS 2019.
Kernel Trajectory Maps for Multi-Modal Probabilistic Motion Prediction. W. Zhi, L. Ott, F. Ramos. CoRL 2019.
DarkGS: Learning Neural Illumination and 3D Gaussians Relighting for Robotic Exploration in the Dark. T. Zhang, K. Huang, W. Zhi, M. Johnson-Roberson. IROS 2024.
PhotoReg: Photometrically Registering 3D Gaussian Splatting Models. Z. Yuan, T. Zhang, M. Johnson-Roberson, W. Zhi. Under review at ICRA 2025.