Research Projects

Motion generation and diffeomorphic learning for generalised imitation learning, motion planning, and efficient ODE learning

We develop a suite of methods that utilise differentiable bijections (diffeomorphisms) to morph vector fields and distributions for generalised imitation learning, motion planning, and ODE learning.

Global and reactive motion generation


We introduce Geometric Fabric Command Sequences (GFCS), which produce manipulator motion that is reactive, responding quickly to local changes in the environment, while still finding global solutions. We formulate motion generation as a global optimisation problem over a short sequence of intermediate command states built atop Geometric Fabrics. We hypothesise that solutions to different problems are highly transferable, so good candidate solutions can be predicted from previously solved problems. We take a self-supervised approach: we continuously learn an implicit generative model to generate good candidates, use these candidates in the optimisation, and then use the subsequent optimisation solutions as training labels.
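
As a concrete picture of this loop, the sketch below shows the self-supervised cycle in miniature: a generative model proposes candidate command sequences, an optimiser refines them, and the solutions become labels for the next round. The problem encoding, cost, generator, and optimiser are all illustrative stand-ins, not the actual GFCS components.

```python
# Minimal sketch of the self-supervised loop; all components are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
D = 7 * 3          # e.g. 3 intermediate command states for a 7-DoF arm
dataset = []       # (problem, solution) pairs used as self-supervised labels

def cost(problem, candidate):
    # Stand-in objective: distance of the command sequence to a
    # problem-dependent optimum (a real system would roll out fabrics).
    return np.sum((candidate - problem) ** 2)

def generator(problem, n):
    # Stand-in implicit generative model: perturb the stored solution of
    # the nearest past problem; with no data, fall back to random proposals.
    if dataset:
        nearest = min(dataset, key=lambda d: np.sum((d[0] - problem) ** 2))
        return nearest[1] + 0.1 * rng.standard_normal((n, D))
    return rng.standard_normal((n, D))

def optimise(problem, candidates, iters=100, step=0.05):
    # Stand-in global optimiser: random search seeded with the candidates.
    best = min(candidates, key=lambda c: cost(problem, c))
    for _ in range(iters):
        trial = best + step * rng.standard_normal(D)
        if cost(problem, trial) < cost(problem, best):
            best = trial
    return best

for episode in range(20):
    problem = rng.standard_normal(D)          # new scene / goal encoding
    candidates = generator(problem, n=8)       # predict good seeds
    solution = optimise(problem, candidates)   # refine globally
    dataset.append((problem, solution))        # label for the next round
```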

Diffeomorphisms for generalised imitation learning


We learn diffeomorphisms, as integral curves of smooth ODEs, to construct vector fields that reproduce motion trajectories shown in expert demonstrations. By applying further diffeomorphisms, these vector fields generalise to novel environments, handling changes such as new obstacles or user-defined bias, all while maintaining asymptotic stability.
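
The core mechanism can be illustrated with a small sketch: pulling a globally stable base field back through a diffeomorphism yields a new field whose trajectories are reshaped but whose stability is preserved. The map phi below is a hand-picked bijection, standing in for one learned from demonstrations.

```python
# Minimal sketch of morphing a stable base field by a diffeomorphism.
import numpy as np

def phi(x):
    # Simple smooth bijection on R^2 (illustrative, not learned).
    return np.array([x[0] + 0.3 * np.tanh(x[1]), x[1]])

def jac_phi(x):
    # Analytic Jacobian of phi.
    return np.array([[1.0, 0.3 * (1 - np.tanh(x[1]) ** 2)],
                     [0.0, 1.0]])

def base_field(y):
    # Globally asymptotically stable base dynamics: y_dot = -y.
    return -y

def morphed_field(x):
    # Pullback: x_dot = J_phi(x)^{-1} f(phi(x)); since phi is a bijection,
    # the equilibrium and its asymptotic stability are preserved.
    return np.linalg.solve(jac_phi(x), base_field(phi(x)))

# Integrate the morphed field with explicit Euler from some start state.
x = np.array([2.0, -1.5])
for _ in range(200):
    x = x + 0.05 * morphed_field(x)
print(x)  # converges towards the equilibrium at the origin
```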


Diffeomorphisms for parallelised motion planning


We morph the sampling distribution of sampling-based motion planners with a diffeomorphism constructed from a learned representation of the environment. The morphing can be performed highly efficiently by leveraging GPU computation, and yields more informed samples for faster motion planning.
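
A minimal sketch of the idea, assuming PyTorch for the GPU batching: uniform base samples are pushed through a smooth bijection that concentrates them around a promising region. The warp and its centre are illustrative stand-ins for the map constructed from the learned environment representation.

```python
# Minimal sketch: batched sample morphing on the GPU (stand-in warp).
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

def warp(q, centre, strength=0.5):
    # Radial contraction towards `centre`; for strength < 1 the map is
    # smooth and invertible, so it is a valid diffeomorphism.
    sq_dist = ((q - centre) ** 2).sum(-1, keepdim=True)
    scale = 1 - strength * torch.exp(-sq_dist)
    return centre + scale * (q - centre)

# Draw a large batch of uniform base samples and warp them in parallel.
n, dim = 100_000, 6
base = torch.rand(n, dim, device=device) * 2 - 1   # uniform in [-1, 1]^6
centre = torch.zeros(dim, device=device)            # promising region (stand-in)
informed = warp(base, centre)                       # biased samples for the planner
```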


Diffeomorphisms for ODE learning


We propose an alternative approach to learning ODEs: we view the desired target dynamics as a vector field obtained by morphing a base vector field with a diffeomorphism. If the base vector field can be easily integrated, our approach attains speed-ups of up to two orders of magnitude when integrating the learned ODEs.
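
A small sketch of where the speed-up comes from: if the learned field is a base field morphed by a diffeomorphism phi, its flow is phi^{-1} composed with the base flow composed with phi. A linear base field has the closed-form flow e^{At}, so the learned ODE can be evaluated at any time in a single step instead of many solver steps. The map phi here is hand-picked for illustration, not learned.

```python
# Minimal sketch: closed-form integration of a diffeomorphically morphed ODE.
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 2.0], [-2.0, -1.0]])   # base linear dynamics y_dot = A y

def phi(x):
    # Hand-picked smooth bijection on R^2 (stand-in for a learned map).
    return np.array([x[0], x[1] + 0.5 * np.sin(x[0])])

def phi_inv(y):
    return np.array([y[0], y[1] - 0.5 * np.sin(y[0])])

def flow(x0, t):
    # Flow of the morphed ODE: phi^{-1}(e^{At} phi(x0)); no numerical
    # integration of the (possibly stiff, nonlinear) morphed field needed.
    return phi_inv(expm(A * t) @ phi(x0))

print(flow(np.array([1.0, 0.5]), t=3.0))
```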

Anticipatory navigation under uncertainty

Critical for the coexistence of humans and robots in dynamic environments is the ability of agents to understand each other's actions and anticipate their movements. We develop methods to learn continuous, probabilistic, multi-modal representations of the movement of dynamic agents in an environment, and incorporate these representations into an anticipatory navigation framework.

Stochastic Process Anticipatory Navigation (SPAN)


SPAN is a framework that enables nonholonomic robots to navigate in environments with crowds. We predict continuous-time stochastic processes to model the future movement of pedestrians. These predictions are used to conduct chance-constrained collision checking, which is incorporated into a time-to-collision control problem. The video on the left shows a SPAN-controlled robot smoothly navigating through an overly aggressive simulated crowd. The video on the right shows SPAN in action (controlled robot in red) on a real-world indoor dataset, with sampled predictions of future motion overlaid (in purple).
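
A minimal sketch of the chance-constrained collision check, with a toy stochastic process standing in for SPAN's learned pedestrian predictions: sample future paths, estimate the collision probability empirically, and accept a candidate plan only if that probability stays below a threshold.

```python
# Minimal sketch of chance-constrained collision checking (toy predictions).
import numpy as np

rng = np.random.default_rng(0)

def sample_pedestrian_futures(n, horizon, dt=0.1):
    # Stand-in stochastic process: constant velocity plus Brownian noise.
    start, vel = np.array([2.0, 0.0]), np.array([-0.5, 0.1])
    steps = vel * dt + 0.05 * rng.standard_normal((n, horizon, 2))
    return start + np.cumsum(steps, axis=1)    # (n, horizon, 2) sampled paths

def chance_constrained_ok(robot_path, futures, radius=0.4, delta=0.05):
    # Collision if any sampled future comes within `radius` of the robot
    # path; accept the plan only if P(collision) <= delta (empirically).
    dists = np.linalg.norm(futures - robot_path[None], axis=-1)  # (n, horizon)
    p_collide = np.mean(np.any(dists < radius, axis=1))
    return p_collide <= delta

horizon = 20
robot_path = np.linspace([0.0, 0.0], [1.0, 0.0], horizon)  # candidate plan
futures = sample_pedestrian_futures(n=500, horizon=horizon)
print(chance_constrained_ok(robot_path, futures))
```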

Probabilistic Trajectory Prediction with Structural Constraints


Recent advances in motion prediction often rely on machine learning to extrapolate patterns from observed trajectories, with no mechanism to directly incorporate known rules. We propose a novel framework that combines probabilistic learning and constrained trajectory optimisation to produce constraint-compliant trajectory distributions which closely resemble a learned prior.
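
The sketch below illustrates the combination under simplifying assumptions: trajectories drawn from a stand-in Gaussian prior are pushed, by a small projected-gradient trajectory optimisation, to satisfy a known rule (here, a keep-out half-plane y >= 0) while staying close to the prior sample.

```python
# Minimal sketch: constraint-compliant trajectories close to a learned prior.
import numpy as np

rng = np.random.default_rng(1)
T = 30
nominal = np.stack([np.linspace(0, 5, T),
                    0.3 * np.sin(np.linspace(0, 3, T))], axis=1)

def sample_prior(n):
    # Stand-in learned prior: Gaussian noise around a nominal path.
    return nominal + 0.2 * rng.standard_normal((n, T, 2))

def make_compliant(traj, iters=400, step=0.05, smooth=0.5):
    # Projected-gradient trajectory optimisation: stay close to the prior
    # sample, keep second differences small (smoothness), and project onto
    # the known rule y >= 0 after every step.
    x = traj.copy()
    for _ in range(iters):
        grad = x - traj
        acc = x[:-2] - 2 * x[1:-1] + x[2:]     # finite-difference acceleration
        grad[:-2] += smooth * acc
        grad[1:-1] -= 2 * smooth * acc
        grad[2:] += smooth * acc
        x -= step * grad
        x[:, 1] = np.maximum(x[:, 1], 0.0)      # enforce the rule exactly
    return x

compliant = np.array([make_compliant(t) for t in sample_prior(50)])
print((compliant[:, :, 1] >= 0).all())          # True: rule satisfied
```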

Continuous and multi-modal probabilistic motion prediction


We present Kernel Trajectory Maps (KTMs) to capture motion trajectories in an environment. KTMs leverage the expressiveness of kernels from non-parametric modelling, conditioning on a sequence of observed coordinates by projecting input trajectories onto a set of representative trajectories. The output is a mixture of continuous stochastic processes, where each realisation is a functional trajectory that can be sampled at arbitrary time resolution.
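
A minimal sketch of the projection step, with a deliberately simple trajectory distance: the observed trajectory is featurised by its kernel similarity to each representative trajectory, and these features would then feed the mixture output.

```python
# Minimal sketch of KTM-style kernel projection (stand-in distance/kernel).
import numpy as np

rng = np.random.default_rng(0)

def traj_distance(a, b):
    # Stand-in trajectory distance: mean pointwise Euclidean distance
    # (assumes equal-length, time-aligned trajectories).
    return np.mean(np.linalg.norm(a - b, axis=-1))

def kernel_features(query, representatives, gamma=2.0):
    # RBF similarity of the query to every representative trajectory.
    return np.array([np.exp(-gamma * traj_distance(query, r) ** 2)
                     for r in representatives])

representatives = rng.standard_normal((50, 10, 2))    # 50 reps, 10 steps, 2-D
observed = rng.standard_normal((10, 2))               # observed coordinates
features = kernel_features(observed, representatives)  # input to mixture head
print(features.shape)   # (50,)
```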

Probabilistic and continuous environment representations

Quantifying the uncertainty in an environment is central to the safe and robust operation of robots alongside humans. We develop methods to: (1) represent long-term spatiotemporal motion patterns as a continuous and probabilistic map, and (2) efficiently represent occupancy in an environment in a probabilistic and continuous manner, and merge these occupancy representations in multi-agent scenarios.

Spatiotemporal, continuous and probabilistic directional maps


We propose a method to build a continuous map that captures spatiotemporal movements. At a given coordinate in space and time, our method provides a multi-modal probability density function over the possible directions an object can move in. We achieve this by projecting data into a high-dimensional feature space using sparse approximate kernels, passing the features through a long short-term memory (LSTM) network, and training a mixture density network (MDN) with von Mises distributions on its output.
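
The sketch below shows the shape of this pipeline in PyTorch, with random Fourier features standing in for the sparse approximate kernels; the dimensions and feature map are illustrative assumptions.

```python
# Minimal sketch: feature projection -> LSTM -> von Mises mixture density head.
import math
import torch
import torch.nn as nn
import torch.distributions as D

class DirectionalMDN(nn.Module):
    def __init__(self, feat_dim=64, hidden=32, n_components=4):
        super().__init__()
        # Fixed random projection standing in for sparse approximate kernels.
        self.register_buffer("W", torch.randn(3, feat_dim // 2))  # (x, y, t) in
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3 * n_components)  # logits, mu, kappa
        self.k = n_components

    def forward(self, xyt):                     # xyt: (batch, time, 3)
        proj = xyt @ self.W
        feats = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
        h, _ = self.lstm(feats)
        out = self.head(h[:, -1])               # density at the final step
        logits, mu, kappa = out.split(self.k, dim=-1)
        mix = D.Categorical(logits=logits)
        comp = D.VonMises(loc=mu,
                          concentration=nn.functional.softplus(kappa) + 1e-3)
        return D.MixtureSameFamily(mix, comp)    # density over direction angles

model = DirectionalMDN()
dist = model(torch.randn(8, 5, 3))               # 8 queries, 5-step history
angles = torch.rand(8) * 2 * math.pi - math.pi   # observed directions
loss = -dist.log_prob(angles).mean()             # negative log-likelihood
loss.backward()
```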

Fast continuous and probabilistic occupancy maps, and their decentralised fusion in multi-agent scenarios


Mapping the occupancy of an environment is central to robot autonomy. Traditional occupancy grid maps discretise the environment into independent cells, neglecting important spatial correlations and failing to capture the continuous nature of the real world. Here, we propose a fast variant of probabilistic continuous occupancy maps, named Fast Bayesian Hilbert Maps (Fast-BHM), and develop a method to merge Fast-BHMs built by a team of robots in a decentralised manner.
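
A minimal sketch of the decentralised fusion step, assuming each robot maintains a Gaussian approximation over the same map weights with a shared prior: posteriors merge by adding natural parameters, with the shared prior subtracted so it is not double-counted. Dimensions and the local posteriors are illustrative stand-ins.

```python
# Minimal sketch: natural-parameter fusion of Gaussian weight posteriors.
import numpy as np

dim = 100                                    # number of map features (stand-in)
prior_prec = np.eye(dim)                     # shared zero-mean prior: N(0, I)

def fuse(mus, precs):
    # Fuse k Gaussian posteriors that share one prior: precisions add, and
    # the common prior precision is removed (k - 1) times; with a zero-mean
    # prior, the precision-weighted means simply add.
    k = len(mus)
    prec = sum(precs) - (k - 1) * prior_prec
    eta = sum(P @ m for P, m in zip(precs, mus))
    mu = np.linalg.solve(prec, eta)
    return mu, prec

rng = np.random.default_rng(0)
# Stand-in local posteriors from two robots.
mu_a, prec_a = rng.standard_normal(dim), prior_prec * 3
mu_b, prec_b = rng.standard_normal(dim), prior_prec * 2
mu, prec = fuse([mu_a, mu_b], [prec_a, prec_b])  # merged global map weights
```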