Weiming (William) Zhi

Postdoctoral Fellow @ Robotics Institute, Carnegie Mellon University

4121 Newell-Simon Hall, Carnegie Mellon University, Pittsburgh, PA

wzhi@andrew.cmu.edu

LinkedIn

I am a postdoctoral fellow working with Matthew Johnson-Roberson at the Robotics Institute, Carnegie Mellon University.

Before CMU, I was a PhD candidate at the School of Computer Science, University of Sydney, Australia, advised by Fabio Ramos and working closely with Lionel Ott (now at ETH Zurich). My doctoral thesis won the School's Outstanding Thesis Award. During my PhD, I also spent time conducting robotics research at NVIDIA's Seattle Robotics Lab.

My research has been recognised with several awards and accolades, including the Best Paper Award at the Learning for Dynamics and Control Conference (L4DC) in 2022, and selection as a Robotics: Science and Systems (RSS) Pioneer in 2020.

Before starting my PhD, I received my Bachelor of Engineering (First Class Honours) from the University of Auckland, New Zealand. As an undergraduate, I spent time conducting machine learning research at CSIRO.

Robots that Learn to See and Act in Unstructured Environments

Vision: Today’s robots are mainly confined to structured and predictable environments, such as carefully engineered factory floors. I envision a future where artificially intelligent robots, whether manipulators or mobile platforms, operate seamlessly in everyday settings like homes and offices. This demands innovations in both the theoretical foundations and practical systems throughout the robot technology stack.

Focus: My research advances robotics in unstructured environments by focusing on two essential aspects of autonomy: the ability to see (robot perception) and the ability to act (motion generation), along with their interaction. Robot perception seeks to extract meaningful information from sensor data, while motion generation leverages these representations to enable robots to interact naturally with their surroundings. 

Robotics is Interdisciplinary: To function effectively in the diverse and unpredictable real world, robots must learn from data, interpret their surroundings, reason about potential outcomes, and adapt their behaviour accordingly. My work develops robust machine learning methods for perception and motion generation, incorporating structure that enables reasoning about the system's confidence and provides theoretical guarantees. I draw on principles from probabilistic modelling to quantify uncertainty, control theory to understand dynamic systems, and 3D vision to maintain geometric consistency in representations.

Research Themes