Learning Object Affordances for Full-body Mobile Manipulation
About this project
Project status: Started in 2023
A mobile robot manipulator, or mobile manipulator for short, comprises one or more robot arms mounted on a movable base. Thanks to their combined reach and manipulation capabilities, such robots have the potential to execute a wide range of tasks in many industries, ranging from healthcare and logistics to lab automation and assisted living.
The variety of potential application domains, uncontrolled environments, and the wide range of objects to be handled make it challenging to create versatile, general-purpose autonomy controllers for mobile manipulators. To accomplish a task, a mobile manipulator needs to represent the environment it operates in and exploit that model to synthesize appropriate motion trajectories. For example, navigating through a door requires identifying the door handle and then planning and executing arm and base trajectories that respect the physical constraints imposed by the door, handle, and hinges. In this project, we will research novel methods for efficient semantic scene representation that enable mobile manipulators to interact with their environment in complex ways. We aim to devise general-purpose approaches that identify the motions afforded by everyday objects, exploring data-driven methods to infer object affordances and synthesize motion plans.
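To make the door example concrete, the sketch below illustrates how an inferred affordance, here parameterized by a hinge axis and a grasp point (all names and values hypothetical), can be turned into an end-effector trajectory: the handle must trace an arc about the hinge, which is exactly the revolute-joint constraint the planner has to respect. This is a minimal illustration of the idea, not the project's method.

import numpy as np

def rotation_about_axis(axis, angle):
    """Rodrigues' formula: 3x3 rotation by `angle` about a unit axis."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * K @ K

def door_opening_waypoints(hinge_point, hinge_axis, grasp_position,
                           open_angle=np.deg2rad(80), n_steps=20):
    """End-effector positions tracing the arc the door handle follows
    as the door swings about its hinge (a revolute-joint constraint)."""
    waypoints = []
    for angle in np.linspace(0.0, open_angle, n_steps):
        R = rotation_about_axis(hinge_axis, angle)
        # Rotate the grasp point about the hinge line.
        waypoints.append(hinge_point + R @ (grasp_position - hinge_point))
    return np.stack(waypoints)

# Example: vertical hinge through the origin, handle 0.8 m away along x.
arc = door_opening_waypoints(hinge_point=np.array([0.0, 0.0, 1.0]),
                             hinge_axis=np.array([0.0, 0.0, 1.0]),
                             grasp_position=np.array([0.8, 0.0, 1.0]))
print(arc.shape)  # (20, 3)

The resulting waypoints constrain only the end effector; a whole-body planner would still have to coordinate arm and base motion to track them.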
In summary, this project focuses on designing and deploying a scalable neural reconstruction method for mobile robot manipulation, along with an approach to infer motion constraints from object affordances. Together, these contributions can extend a mobile manipulator's ability to operate in new scenes with unknown objects.
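One way to picture the interface between the two contributions is a record that perception produces and planning consumes. The sketch below, with entirely hypothetical field names, shows such a record for a revolute articulation like the door above.

from dataclasses import dataclass
import numpy as np

@dataclass
class RevoluteAffordance:
    """Hypothetical output of affordance inference for one articulated
    part, e.g. a door or cabinet detected in the reconstructed scene."""
    grasp_pose: np.ndarray       # 4x4 pose where the gripper should attach
    axis_point: np.ndarray       # a point on the estimated hinge axis, shape (3,)
    axis_direction: np.ndarray   # unit direction of the hinge axis, shape (3,)
    angle_range: tuple           # feasible opening angles (min, max), radians

# A planner could consume this directly: the axis fields define the
# kinematic constraint and angle_range bounds the trajectory, as in the
# door-opening sketch above.
door = RevoluteAffordance(
    grasp_pose=np.eye(4),
    axis_point=np.array([0.0, 0.0, 1.0]),
    axis_direction=np.array([0.0, 0.0, 1.0]),
    angle_range=(0.0, np.deg2rad(80)),
)
print(door.angle_range)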