Multi-Agent Reasoning and Learning

Aim

Develop novel algorithms and frameworks for multi-agent reasoning and learning, enabling multiple autonomous and heterogeneous agents to collaborate toward a specified objective.

Objectives

  1. Explore the state of the art in multi-agent collaboration and identify open problems.
  2. Develop novel frameworks that combine knowledge-based and data-driven methods to address these open problems.
  3. Implement and evaluate the algorithms in the context of challenging simulated and/or real-world environments and tasks.

Description

This project seeks to enable multiple agents (robots, AI systems) with different capabilities to collaborate toward achieving a shared objective in dynamically changing environments. In particular, we will focus on addressing open problems such as scalability and collaboration without prior coordination, while operating under resource constraints (e.g., computation, memory, communication) and in the presence of open-world uncertainty. We will do so by developing novel frameworks that combine knowledge-based and data-driven methods for reasoning and learning, and embed fundamental principles such as refinement, ecological rationality, and explainable agency. Such agents can be deployed in a broad range of applications, including many tasks related to defence and security scenarios. Examples include multiple drones, ground vehicles, and/or AI systems collaborating with each other and with humans to aid in disaster rescue operations, patrol a specified area, or perform distributed sensing and intelligence gathering in areas of interest.


Research theme: 

Principal supervisor: 

Prof Mohan Sridharan
University of Edinburgh, School of Informatics
m.sridharan@ed.ac.uk