Please browse around to see some of our recent work.
Videos and robotic demonstrations are organized from the most to the least recent publication. To find demonstrations related to a particular research project, please use the sidebar menu.
IROS 2018 (lead author: Lipeng Chen)
Imagine grasping a wooden board while your friend drills holes into it and cuts pieces off of it. You would predict the forces your friend will apply to the board and choose your grasps accordingly; for example, you would rest your palm firmly against the board to hold it stable against the large drilling forces. We developed a manipulation planner to enable a robot to grasp objects similarly. ArXiv preprint (2018) here.
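The idea can be caricatured with a toy grasp selector (a minimal Python sketch; the grasp names, force capacities, and the single-number force model are illustrative stand-ins, not the planner from the paper):

```python
# Hypothetical sketch: pick the grasp whose force capacity best covers the
# predicted external force. All numbers below are made up for illustration.

def grasp_score(max_resisted_force, predicted_force):
    """Margin between what the grasp can resist and what we predict will be
    applied; positive means the grasp can hold the board stable."""
    return max_resisted_force - predicted_force

def choose_grasp(candidate_grasps, predicted_force):
    """Pick the candidate grasp with the largest force margin."""
    return max(candidate_grasps,
               key=lambda g: grasp_score(g["max_force"], predicted_force))

grasps = [
    {"name": "fingertip pinch", "max_force": 10.0},  # precise but weak
    {"name": "palm support",    "max_force": 60.0},  # palm rests on the board
]
best = choose_grasp(grasps, predicted_force=40.0)  # e.g. a drilling force
print(best["name"])  # palm support
```

A real planner reasons about full wrenches and contact geometry over a sequence of operations; the sketch only shows the predict-then-select structure.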
Humanoids-18 (lead author: Wissam Bejjani, in collaboration with Matteo Leonetti)
Manipulation in clutter requires solving complex sequential physics-based decision-making problems. In our recent work, we interleave real-time planning and execution in a closed-loop setting, using a Receding Horizon Planner (RHP). To this aim, we learn a value-function-based heuristic that estimates the cost-to-go from the planning horizon to the goal, enabling efficient planning; the heuristic is further optimized through reinforcement learning. ArXiv preprint here.
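In spirit, receding-horizon planning with a value-function heuristic looks like the sketch below (a toy Python example using exhaustive search over short action sequences on a 1-D task; the dynamics, cost, and hand-written value function are stand-ins, not the learned heuristic from the paper):

```python
# Toy receding-horizon planning: search action sequences up to a short
# horizon, then use a value function to estimate the cost-to-go beyond it.
import itertools

def rhp_step(state, actions, step, cost, value_fn, horizon):
    """Evaluate all |actions|^horizon sequences (fine for a toy problem);
    return the first action of the cheapest one, where sequence cost is the
    accumulated step cost plus the value estimate at the horizon state."""
    best_seq, best_total = None, float("inf")
    for seq in itertools.product(actions, repeat=horizon):
        s, total = state, 0.0
        for a in seq:
            total += cost(s, a)
            s = step(s, a)
        total += value_fn(s)  # heuristic cost-to-go beyond the horizon
        if total < best_total:
            best_seq, best_total = seq, total
    return best_seq[0]

# Closed loop on a 1-D point that must reach GOAL.
GOAL = 5
step = lambda s, a: s + a
cost = lambda s, a: abs(GOAL - step(s, a))  # per-step distance to goal
value = lambda s: abs(GOAL - s)             # stand-in for a learned heuristic

state = 0
for _ in range(6):
    state = step(state, rhp_step(state, (-1, 0, 1), step, cost, value, horizon=2))
print(state)  # 5
```

The key structural point is that only the first action of each plan is executed before re-planning, which is what keeps the loop closed.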
WAFR 18 (lead author: Wisdom Agboh)
During non-prehensile manipulation, our current robot controllers/policies typically provide slow actions (e.g. pushing at quasi-static speeds). While this has numerous advantages (e.g. the validity of a simple analytical pushing model), it prevents a robot from realizing its full potential.
As humans, when we interact with objects we sometimes act fast, and at other times we act slowly to avoid undesired events.
How can a robot exhibit such an adaptive behavior?
In this work, we model the problem as a Markov decision process (MDP) with action-dependent uncertainty in the state transition function, where the uncertainty is proportional to the pushing speed.
We solve the MDP online, in real time. Our experiments show that the robot exhibits adaptive behavior during non-prehensile manipulation: pushing slowly for high-accuracy tasks and pushing fast for tasks that permit inaccuracy.
ArXiv preprint (2018) here.
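The speed/accuracy trade-off behind this can be illustrated with a toy model (a Python sketch; the noise constant, candidate speeds, and tolerance are made-up values, not parameters from the paper):

```python
# Toy model of action-dependent uncertainty: the standard deviation of a
# push outcome grows linearly with pushing speed, so the fastest acceptable
# speed depends on the accuracy the task requires. Numbers are illustrative.
import random

K_NOISE = 0.05  # assumed proportionality constant between speed and noise std

def push_outcome(target, speed, rng):
    """One simulated push: reaches the target up to Gaussian noise whose
    std is proportional to speed (the action-dependent uncertainty)."""
    return target + rng.gauss(0.0, K_NOISE * speed)

def expected_error(target, speed, trials=2000, seed=0):
    """Monte-Carlo estimate of the mean absolute error of a push."""
    rng = random.Random(seed)
    return sum(abs(push_outcome(target, speed, rng) - target)
               for _ in range(trials)) / trials

def choose_speed(tolerance, speeds=(0.1, 0.5, 1.0, 2.0)):
    """Adaptive behavior: pick the fastest speed whose expected error is
    still within the task's accuracy tolerance."""
    ok = [v for v in speeds if expected_error(1.0, v) <= tolerance]
    return max(ok) if ok else min(speeds)

# Slow pushes are more accurate; fast pushes are quicker but noisier.
print(choose_speed(tolerance=0.05))
```

A tight tolerance forces the slow, quasi-static regime; a loose one lets the robot push fast, which is the adaptive behavior described above.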
Humanoids 18 (lead author: Wisdom Agboh)
Our task in this work is to grasp a target object in a cluttered environment as shown in the video below.
Previous work in this area has focused on open-loop planning followed by blind execution (without feedback). This approach can easily lead to task failures due to uncertainty.
A major reason for not re-planning continuously during execution is the long planning times this domain requires.
To address this problem, we propose a stochastic trajectory optimizer and use it within a model predictive control (MPC) setting for manipulation in highly cluttered scenes.
ArXiv Preprint (2018) here.
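As a rough illustration of a sampling-based ("stochastic") trajectory optimizer inside an MPC loop (a toy 1-D Python sketch; the sampling scheme, dynamics, and cost are illustrative stand-ins for the optimizer in the paper):

```python
# Toy MPC with a stochastic trajectory optimizer: perturb a nominal action
# sequence with Gaussian noise, keep the cheapest sample, execute only the
# first action, then warm-start and re-optimize. All values are illustrative.
import random

GOAL = 5.0

def dynamics(state, action):
    return state + action  # toy model: the action directly moves the object

def trajectory_cost(state, actions):
    s, cost = state, 0.0
    for a in actions:
        s = dynamics(s, a)
        cost += abs(GOAL - s) + 0.01 * a * a  # distance to goal + effort
    return cost

def optimize(state, nominal, samples=200, noise=0.5, rng=None):
    """Sample Gaussian perturbations of the nominal action sequence and
    return the lowest-cost candidate (never worse than the nominal)."""
    rng = rng or random.Random(0)
    best, best_cost = list(nominal), trajectory_cost(state, nominal)
    for _ in range(samples):
        cand = [a + rng.gauss(0.0, noise) for a in nominal]
        c = trajectory_cost(state, cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best

# MPC loop: optimize, execute the first action, shift, re-optimize.
state, plan = 0.0, [0.0] * 4
for _ in range(10):
    plan = optimize(state, plan)
    state = dynamics(state, plan[0])  # closed loop: execute one action only
    plan = plan[1:] + [0.0]           # warm-start the next optimization
print(round(state, 2))
```

Re-optimizing from the observed state at every step is what lets this style of controller absorb the uncertainty that open-loop execution cannot.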