Below is a representative list of current and recent projects that the IRLab has led or contributed to. Please contact the relevant academic staff for more information.
We also host a seminar series. Recordings of some recent talks are available on our YouTube channel.
-
BURG: Benchmarks for UndeRstanding Grasping. This project seeks to obtain a deeper understanding of object manipulation by providing: (1) task-oriented, part-based modelling of grasping and (2) BURG - our castle of setups, tools and metrics for community building around an objective benchmark protocol. The idea is to boost grasping research by focusing on complete tasks and parts of (rigid and deformable) objects, facilitating knowledge transfer to novel objects, different sources, environmental constraints, and grippers, and providing a versatile and scalable system. We will also focus on community building by sharing tools for reproducible performance evaluation, including collecting data and scenarios from different labs for studying manipulation across different robot embodiments. Birmingham's involvement is led by Dr. Ales Leonardis and Dr. Mohan Sridharan.
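To make the idea of a shareable, objective benchmark protocol concrete, here is a minimal illustrative sketch; the GraspScenario record, its field names, and the success-rate metric are all hypothetical and are not part of the actual BURG toolset.

```python
# Hypothetical sketch of a shareable benchmark record: a declarative scene
# setup plus an objective metric, so different labs can reproduce the same
# grasping trial. All names here are illustrative only.
from dataclasses import dataclass, field

@dataclass
class GraspScenario:
    scene_id: str
    objects: list                      # object models placed in the scene
    target: str                        # object to grasp
    gripper: str                       # end-effector used for the trial
    results: list = field(default_factory=list)

    def record(self, success: bool):
        self.results.append(success)

    def success_rate(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 0.0

trial = GraspScenario("kitchen_01", ["mug", "sponge"], target="mug",
                      gripper="parallel_jaw")
for outcome in (True, True, False):    # three repeated trials
    trial.record(outcome)
print(f"{trial.scene_id}: success rate = {trial.success_rate():.2f}")
```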
-
Explainable Reasoning, Learning and Ad Hoc Multiagent Collaboration. This basic science project seeks to develop algorithms and architectures for long-duration autonomy in a team of heterogeneous agents operating without prior coordination in complex, adversarial environments. We will develop an architecture that tightly couples the complementary strengths of non-monotonic logical reasoning, probabilistic reasoning, and data-driven learning to represent and reason with multi-level, incomplete commonsense domain knowledge, learn from observations, and provide relational descriptions of the agents' decisions and beliefs. We will define a taxonomy of possible communication types for agents in ad hoc teams performing multi-step tasks, quantify the value of communication through network protocols, controlled vocabularies, or visual and audio signals, and design new algorithms and test-beds for evaluating agent designs. We will also develop algorithms integrating graph-based representations and deep learning, extending ad hoc teamwork to open teams in which agents may enter or leave at any time, without prior announcement or opportunities to calibrate teamwork strategies. This project is funded by the US Office of Naval Research and is led by Dr. Mohan Sridharan.
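As a purely illustrative sketch of how non-monotonic (default) logical reasoning and probabilistic reasoning can be coupled in an agent's decision loop, consider the toy example below; the rules, teammate types, and likelihood numbers are invented for illustration and are not part of the project's architecture.

```python
# A toy coupling of default (non-monotonic) rules with a probabilistic
# belief over an ad hoc teammate's type. All rules, types, and numbers
# here are invented for illustration.

# Default conclusions hold unless a defeating observation is made.
DEFAULTS = {"teammate_helps": True}
EXCEPTIONS = {("teammate_helps", "ignored_request")}

def logical_conclusion(fact, observations):
    """Return the default conclusion unless an observed exception defeats it."""
    if any((fact, obs) in EXCEPTIONS for obs in observations):
        return not DEFAULTS[fact]
    return DEFAULTS[fact]

def update_belief(belief, obs, likelihood):
    """Bayesian update of the belief over teammate types given one observation."""
    posterior = {t: belief[t] * likelihood[t].get(obs, 1e-6) for t in belief}
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

belief = {"cooperative": 0.5, "adversarial": 0.5}
likelihood = {"cooperative": {"shared_resource": 0.8, "ignored_request": 0.1},
              "adversarial": {"shared_resource": 0.2, "ignored_request": 0.7}}

observations = ["shared_resource", "ignored_request"]
for obs in observations:
    belief = update_belief(belief, obs, likelihood)

# Decisions can draw on both layers: symbolic defaults gate the plan,
# while the probabilistic belief resolves residual uncertainty.
print("default says teammate helps:", logical_conclusion("teammate_helps", observations))
print("belief over teammate type:", belief)
```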
-
Understanding Scenes and Events through Joint Parsing, Cognitive Reasoning and Lifelong Learning. This project seeks to develop machines that represent visual knowledge in probabilistic compositional models organised in spatial, temporal, and causal hierarchies, and use task-oriented representations for efficient inference. These machines will also acquire massive visual commonsense via web-scale, continuous, lifelong learning from large and small data in a weakly supervised manner, and achieve deep understanding of scenes and events through joint parsing and cognitive reasoning about appearance, geometry, function, physics, causality, and the intents and beliefs of agents. Furthermore, these machines will understand human needs and values, interact with humans effectively, and answer human queries about the scene. This is a USA-UK DoD-MURI project, with Birmingham's involvement led by Dr. Ales Leonardis.
-
PRINCESS: Probabilistic Representation of Intent Commitments to Ensure Software Survival. This is a project under the DARPA BRASS (Building Resource Adaptive Software Systems) program. It is led by Charles River Analytics and involves partners from the University of Birmingham, Harvard University and the University of Southern California. The overall aim of the project is to develop adaptive software systems for operation in dynamic and uncertain environments, underpinned by rigorous formal verification methods which ensure the correctness of the adaptation process. The work will involve developing techniques that combine verification with probabilistic modelling and inference, with a particular emphasis on the use of probabilistic programming languages. Birmingham's involvement is led by Dr. David Parker.
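For readers unfamiliar with quantitative verification, the toy example below illustrates the style of property such methods check: a bound on the probability of reaching a failure state in a small Markov chain. The transition matrix, state names, and threshold are invented, and the code is not connected to the project's tools.

```python
# Illustrative sketch (not project code): checking a quantitative property
# of a small discrete-time Markov chain, the kind of model that
# probabilistic model checkers analyse. States: 0=ok, 1=degraded, 2=failed.
import numpy as np

P = np.array([[0.90, 0.08, 0.02],   # transition probabilities from 'ok'
              [0.50, 0.30, 0.20],   # from 'degraded'
              [0.00, 0.00, 1.00]])  # 'failed' is absorbing

def prob_reach_failed(P, steps, start=0, failed=2):
    """Probability of reaching the (absorbing) failed state within `steps`."""
    dist = np.eye(P.shape[0])[start]  # start with all mass on `start`
    for _ in range(steps):
        dist = dist @ P               # propagate the state distribution
    return dist[failed]

# Check a bounded-reachability property, roughly "P<=0.3 [ fail within 10 ]":
p = prob_reach_failed(P, steps=10)
print(f"P(fail within 10 steps) = {p:.3f}; property holds: {p <= 0.3}")
```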
-
Scalable Inference of Affordance, Activity, and Intent from Spatio-Temporal Input. This project seeks to develop architectures for robots that, given streams of noisy perceptual inputs and incomplete domain knowledge, enable plausible and scalable inference about other agents' activities, intentions, and affordances. Towards this objective, we will develop a hierarchical, distributed representation for activities, concepts, defaults, and affordances. We will also develop two systems that operate over this representation: one that combines the capabilities of action languages, declarative programming, and probabilistic reasoning, and another that performs incremental, data-driven abductive inference guided by heuristic rules. This project is funded by the US Office of Naval Research, and is led by Dr. Mohan Sridharan.
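A minimal, purely hypothetical sketch of heuristic-guided, incremental abduction over an observation stream follows; the candidate activity hypotheses, their expected observations, and the scoring rule are all invented for illustration and do not reflect the project's representation.

```python
# Hypothetical sketch of incremental, heuristic-guided abduction: maintain
# candidate explanations of an observed activity and re-score them as each
# new observation arrives, keeping the best-covering hypothesis.
CANDIDATES = {
    "making_tea":    {"grasp_kettle", "pour_water", "open_tea_box"},
    "washing_up":    {"grasp_kettle", "turn_on_tap", "scrub_dish"},
    "making_coffee": {"grasp_kettle", "pour_water", "grind_beans"},
}

def best_explanation(observed):
    """Heuristic score: observations explained minus unexplained expectations."""
    def score(hypo):
        expected = CANDIDATES[hypo]
        return len(observed & expected) - 0.5 * len(expected - observed)
    return max(CANDIDATES, key=score)

stream = ["grasp_kettle", "pour_water", "open_tea_box"]
observed = set()
for obs in stream:                      # incremental, data-driven updates
    observed.add(obs)
    print(f"after {obs!r}: best explanation = {best_explanation(observed)}")
```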
-
Towards Early Inference of Human Intent and Affordances for Human-Robot Collaboration. This project seeks to develop an architecture for reliable early inference of human intent and affordances, enabling robots to offer preemptive assistance to humans whose capabilities do not extend to the tasks they intend to perform. The modeling of intent and affordances will be achieved without observing the humans perform the tasks of interest; instead, the robot will use observations of exploratory movements and generic actions. This project was funded by the Asian Office of Aerospace Research and Development (AOARD)/US Air Force Office of Scientific Research (AFOSR), and was led by Dr. Mohan Sridharan.
-
STRANDS: Spatio-Temporal Representations and Activities For Cognitive Control in Long-Term Scenarios. This project explored how a mobile robot can perform intelligent autonomous behaviour for long periods (up to four months) in human-populated environments. In order to support task behaviour, and to facilitate such long run-times, STRANDS robots extracted quantitative and qualitative spatio-temporal structure from sensory experience, building representations that allowed them to understand and exploit the dynamics of everyday activities. The project was coordinated by Dr. Nick Hawes, who collaborated with Dr. Jeremy Wyatt in the IRLab.
-
CoDyCo: Whole-body Compliant and Dynamical Contacts in Cognitive Humanoids. This project explored multi-contact control strategies for humanoid robots operating in environments where contact interactions can be non-rigid (compliant) and unpredictable. It included human studies and work on control and learning, as well as software development and implementation on the iCub robot. Birmingham's involvement was led by Dr. Michael Mistry.
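As background on compliant control, the snippet below sketches a standard joint-space impedance law of the kind commonly used as a building block in such work; the gains and joint values are arbitrary and the code is not drawn from CoDyCo.

```python
# Minimal sketch (assumed textbook formulation, not CoDyCo code) of a
# joint-space impedance law: the commanded torque pulls each joint toward
# a target posture with spring-damper gains that set the compliance.
import numpy as np

def impedance_torque(q, dq, q_des, K, D):
    """tau = K (q_des - q) - D dq : stiffness K and damping D set compliance."""
    return K @ (q_des - q) - D @ dq

q     = np.array([0.10, -0.20])   # current joint positions (rad)
dq    = np.array([0.05,  0.00])   # joint velocities (rad/s)
q_des = np.array([0.00,  0.00])   # compliant equilibrium posture

K = np.diag([20.0, 20.0])         # low stiffness -> soft contact behaviour
D = np.diag([ 2.0,  2.0])         # damping for stable transients

print("commanded torque:", impedance_torque(q, dq, q_des, K, D))
```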
-
GeRT: Generalizing Robot Manipulation Tasks. The objective of this project was to develop new methods to cope with novelty in manipulation tasks by enabling the robot to autonomously generalize its manipulation skills to new objects. The basic idea is that some successful implementations of a certain robot manipulation task, such as serving a drink, are given as input. These programs then constitute a database of prototypes representing that class of task. When confronted with a novel instance of the same task, the robot needs to generalize from the prototypes. In this way, prototypical task plans may be mapped to a new plan that is suitable for handling different geometric, kinematic, and dynamic task settings, hence solving a task that is physically substantially different but similar at an abstract level. Birmingham's involvement was led by Dr. Richard Dearden, Dr. Jeremy Wyatt and Dr. Rustam Stolkin.
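The prototype-and-generalize idea can be illustrated with a toy example; the plan representation, task names, and simple symbol-substitution scheme below are hypothetical simplifications, not the GeRT system.

```python
# Hypothetical sketch of the prototype idea: store abstract task plans,
# retrieve one for a novel instance of the same task class, and instantiate
# its steps with the new scene's objects.
PROTOTYPES = {
    "serve_drink": ["grasp(container)", "transport(container, glass)",
                    "tilt(container)", "place(container)"],
    "stack_items": ["grasp(item)", "transport(item, pile)", "place(item)"],
}

def generalize(task, bindings):
    """Map a prototypical plan onto a new instance by symbol substitution."""
    plan = PROTOTYPES[task]
    for abstract, concrete in bindings.items():
        plan = [step.replace(abstract, concrete) for step in plan]
    return plan

# A novel instance of 'serve a drink' with a carton instead of a bottle:
new_plan = generalize("serve_drink", {"container": "juice_carton",
                                      "glass": "mug"})
print(new_plan)
```

In a real system the mapping would of course be far richer than string substitution, adapting grasps and motions to new geometry and dynamics; the sketch only shows the retrieve-then-instantiate structure.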
-
CogX: Cognitive Systems that Self-Understand and Self-Extend. This project was concerned with building a theory, together with robotic implementations, of how cognitive systems, particularly robots, can understand what they know and how what they know changes when they act. The project was coordinated by the IRLab, ran from 1st May 2008 until the end of June 2012, and was led by Jeremy Wyatt, Richard Dearden, Aaron Sloman and Nick Hawes.