Research

Physics-based non-prehensile manipulation in clutter with human-in-the-loop

We are interested in problems where a robot needs to reach for objects in cluttered environments, such as a shelf or a fridge. These problems are challenging because the robot must plan physics-based, non-prehensile interactions with many movable obstacles in a high-dimensional search space. Our approach is to take high-level input from a human supervisor and integrate it into the motion planning algorithm. The aim of such a system is to let the robot solve these challenging problems faster and with a higher success rate, while reducing the human's engagement with the system.

We have papers published at ICRA 2020 (https://arxiv.org/abs/1904.03748) and in RA-L (https://ieeexplore.ieee.org/document/9131745/).

Human-like Planning (HLP) for Reaching in Cluttered Environments

We used virtual reality to capture human participants reaching for a target object on a tabletop cluttered with obstacles. From these demonstrations, we devised a qualitative representation of the task space that abstracts the decision-making irrespective of the number of obstacles. Based on this representation, the demonstrations were segmented and used to train decision classifiers. Using these classifiers, our planner produced a list of waypoints in task space. These waypoints provided a high-level plan, which could be transferred to an arbitrary robot model and used to initialise a local trajectory optimiser, as sketched below.
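To make the pipeline concrete, here is a minimal sketch in Python. The qualitative features, the three discrete decisions, and the random-forest classifier are illustrative assumptions; the paper's actual representation, segmentation, and classifiers differ.

```python
# A minimal sketch of the HLP pipeline, assuming illustrative qualitative
# features and three discrete decisions (e.g. pass left / centre / right).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in training data: qualitative task-space features at each decision
# point (e.g. gap widths, obstacle occupancy around the target) and the
# discrete choice a human made there.
X = rng.random((200, 6))
y = rng.integers(0, 3, size=200)          # 0 = left, 1 = centre, 2 = right

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def plan_waypoints(start, target, features, n_segments=4):
    """Query the decision classifier along the reach to build a coarse
    list of task-space waypoints (the high-level plan)."""
    start, target = np.asarray(start, float), np.asarray(target, float)
    lateral = {0: -0.1, 1: 0.0, 2: 0.1}   # illustrative offsets in metres
    waypoints = [start]
    for i in range(1, n_segments + 1):
        alpha = i / n_segments
        point = (1 - alpha) * start + alpha * target
        # In the real pipeline the features are recomputed at each segment.
        choice = int(clf.predict(features[None, :])[0])
        waypoints.append(point + np.array([0.0, lateral[choice], 0.0]))
    return waypoints                      # seed for a local trajectory optimiser

print(plan_waypoints([0.0, 0.0, 0.1], [0.6, 0.0, 0.1], rng.random(6)))
```

The key design point is that the classifier output lives in task space, so the same high-level plan can seed local optimisers for different robot models.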

Real-time and robust physics-based manipulation

Real-time and robust physics-based manipulation is the central goal of this project.

We built faster physics models through parallel-in-time integration and coarse deep neural network models (ISRR 2019 and CVS 2020).
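The parallel-in-time idea can be illustrated with the standard Parareal scheme, shown below on a toy damped point mass standing in for a pushed object. The dynamics and solver settings are illustrative assumptions; here both solvers are simple Euler integrators, whereas the papers pair a more expensive fine model with cheap coarse ones, including learned networks.

```python
# Minimal sketch of Parareal: a cheap coarse solver sweeps sequentially,
# while expensive fine solves run in parallel across time slices and are
# folded back in with the Parareal correction.
import numpy as np

def f(x):
    pos, vel = x
    return np.array([vel, -2.0 * vel])    # toy sliding-with-friction dynamics

def integrate(x0, dt, steps):
    """Explicit Euler: fine solver takes many small steps, coarse takes one."""
    x = x0.copy()
    h = dt / steps
    for _ in range(steps):
        x = x + h * f(x)
    return x

def parareal(x0, dt, n_slices, iters, fine_steps=100):
    coarse = lambda x: integrate(x, dt, 1)
    fine = lambda x: integrate(x, dt, fine_steps)
    U = [x0]                               # initial guess: coarse sweep
    for n in range(n_slices):
        U.append(coarse(U[n]))
    for _ in range(iters):
        F = [fine(U[n]) for n in range(n_slices)]     # parallelisable
        G_old = [coarse(U[n]) for n in range(n_slices)]
        U_new = [x0]
        for n in range(n_slices):
            # Parareal update: U[n+1] = C(U_new[n]) + F(U_old[n]) - C(U_old[n])
            U_new.append(coarse(U_new[n]) + F[n] - G_old[n])
        U = U_new
    return U

traj = parareal(np.array([0.0, 1.0]), dt=0.1, n_slices=10, iters=2)
print(traj[-1])   # converges to the fine solution as iters grows
```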

Our robot adapts its actions to the given task under uncertainty, pushing fast or slow (WAFR 2018).
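As a rough illustration of this speed/accuracy trade-off, the hypothetical sketch below picks the fastest push whose Monte-Carlo success estimate stays high. The noise model, growth rate, and threshold are invented for illustration and are not the uncertainty model from the paper.

```python
# Hypothetical speed/accuracy trade-off: faster pushes save time but
# inflate positional uncertainty; prefer the fastest push that is still
# likely enough to land within the goal tolerance.
import numpy as np

def success_probability(speed, tolerance, n_samples=1000,
                        rng=np.random.default_rng(3)):
    """Monte-Carlo estimate: positional error grows with push speed."""
    error = rng.normal(0.0, 0.01 + 0.05 * speed, size=n_samples)
    return np.mean(np.abs(error) < tolerance)

def choose_push_speed(tolerance, speeds=(0.05, 0.1, 0.2, 0.4)):
    for speed in sorted(speeds, reverse=True):    # try fast pushes first
        if success_probability(speed, tolerance) > 0.9:
            return speed
    return min(speeds)    # fall back to the slowest, most accurate push

print(choose_push_speed(tolerance=0.03))          # picks a faster push when safe
```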

We achieved real-time manipulation through a novel online stochastic trajectory optimization algorithm (Humanoids 2018).
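The sketch below shows the general shape of such an online stochastic trajectory optimiser, in the style of MPPI: sample control perturbations, roll them through a model, take a cost-weighted average, execute the first control, and warm-start the next iteration. The toy dynamics, cost, and update rule are assumptions, not the specific algorithm from the paper.

```python
# MPPI-style online stochastic trajectory optimisation on a toy
# single-integrator system driven to a goal state.
import numpy as np

rng = np.random.default_rng(1)
H, K, LAM = 15, 64, 1.0                    # horizon, samples, temperature

def step(x, u):
    return x + 0.1 * u                     # toy "physics" model

def cost(x):
    return (x - 1.0) ** 2                  # drive the state to 1.0

def mppi_iteration(x0, U):
    noise = rng.normal(0.0, 0.3, size=(K, H))
    costs = np.zeros(K)
    for k in range(K):                     # rollouts are parallelisable
        x = x0
        for t in range(H):
            x = step(x, U[t] + noise[k, t])
            costs[k] += cost(x)
    w = np.exp(-(costs - costs.min()) / LAM)   # cost-weighted averaging
    w /= w.sum()
    return U + w @ noise

x, U = 0.0, np.zeros(H)
for _ in range(50):                        # online loop: replan every step
    U = mppi_iteration(x, U)
    x = step(x, U[0])                      # execute only the first control
    U = np.roll(U, -1); U[-1] = 0.0        # warm-start the next iteration
print(x)                                   # approaches the goal state 1.0
```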

Learning deep manipulation policies in cluttered and occluded spaces

Imagine trying to reach for a salt shaker at the back of a cluttered, occluded shelf. A task that seems intuitive to humans, and even to animals, remains an open research question. This project aims to combine deep image-based policies with model-based look-ahead planning in an abstract representation of the task. The goal is a reactive behaviour that generalizes over a variety of manipulation tasks while also reasoning about the long-term effects of physics interactions; a toy sketch of this combination follows.
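In the hypothetical sketch below, a stand-in policy proposes candidate actions and an abstract forward model scores their multi-step consequences; every component here is a placeholder for the learned policy and abstract task model described above.

```python
# Combining a reactive policy with model-based look-ahead: the policy
# proposes actions, an abstract (non-pixel) model rolls them forward,
# and the proposal with the best long-term value is executed.
import numpy as np

rng = np.random.default_rng(2)

def policy(state, n_proposals=5):
    """Stand-in for a deep image-based policy: candidate actions."""
    return state + rng.normal(0.0, 0.2, size=(n_proposals, state.shape[0]))

def abstract_model(state, action):
    """Stand-in for an abstract forward model of the cluttered scene."""
    return 0.9 * state + 0.1 * action

def value(state, goal):
    return -np.linalg.norm(state - goal)

def lookahead_select(state, goal, depth=3):
    """Score each policy proposal by rolling the model forward."""
    best_action, best_score = None, -np.inf
    for action in policy(state):
        s = state
        for _ in range(depth):             # greedy multi-step rollout
            s = abstract_model(s, action)
        score = value(s, goal)
        if score > best_score:
            best_action, best_score = action, score
    return best_action

state, goal = np.zeros(3), np.array([0.5, -0.2, 0.3])
for _ in range(20):                        # reactive closed loop
    state = abstract_model(state, lookahead_select(state, goal))
print(state)                               # converges towards the goal
```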

Manipulation planning under changing external forces

This project focuses on manipulation planning for forceful human-robot collaboration. We aim to develop a general-purpose manipulation planning framework in which robots assist humans with forceful tasks (e.g., drilling, cutting) in a safe, comfortable, and natural manner. The project sits at the intersection of robotic manipulation and grasping, multi-robot systems, motion planning, and human-robot collaboration.

Check out our most recent paper in AURO: https://link.springer.com/article/10.1007%2Fs10514-020-09930-z

Comfort-based physical human-robot collaboration

This project aims to enhance robots' decision-making capabilities when physically engaging with humans. To this end, we design techniques that incorporate human comfort (defined in terms of ergonomics, safety perception, and biomechanical compatibility) into planning and control for physical human-robot collaborative tasks. This work is carried out in collaboration with partners from Biomedical Sciences. Our most recent results are under review at ACM THRI and IEEE RA-L, but you can check our latest work presented at Humanoids (eprints.whiterose.ac.uk/137727/).