Automation is playing an increasingly important role in diverse areas, including environmental monitoring, energy management, healthcare, manufacturing, intelligence gathering, and defense. Policies for automated agents operating in complex and uncertain environments must address several decision-making trade-offs, and they should adapt to the state of the environment as information is collected. We focus on foundational research on sequential decision-making in such uncertain environments.
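As a concrete illustration of sequential decision-making under uncertainty, the sketch below runs value iteration on a tiny Markov decision process; the two-state model, its transition probabilities, and its rewards are hypothetical examples chosen for this sketch, not a system from this research.

```python
# Value iteration on a small MDP: repeatedly apply the Bellman optimality
# update until the value estimates stop changing. (Illustrative sketch.)

def value_iteration(transitions, rewards, gamma=0.9, tol=1e-8):
    """transitions[s][a] = list of (prob, next_state); rewards[s][a] = reward."""
    n = len(transitions)
    V = [0.0] * n
    while True:
        V_new = [
            max(
                rewards[s][a] + gamma * sum(p * V[s2] for p, s2 in transitions[s][a])
                for a in range(len(transitions[s]))
            )
            for s in range(n)
        ]
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new
        V = V_new

# Hypothetical 2-state, 2-action MDP: in state 0, acting (action 1) earns a
# reward and may reach state 1, where a high steady reward is available.
transitions = [
    [[(1.0, 0)], [(0.5, 0), (0.5, 1)]],  # state 0: wait / act
    [[(1.0, 1)], [(1.0, 0)]],            # state 1: stay / leave
]
rewards = [[0.0, 1.0], [2.0, 0.0]]
V = value_iteration(transitions, rewards)
```

Because the Bellman update is a contraction with modulus gamma, the iteration converges to the unique optimal value function regardless of initialization.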
Our research in networked multi-agent systems focuses on the development and analysis of distributed algorithms for collective behavior by a set of agents that have access only to local information and communicate over a potentially time-varying graph. Distributed estimation and decision-making are our major themes in multi-agent systems. In these problems, an individual agent often cannot estimate or decide accurately using local information alone, and performance improves with access to global information. Distributed algorithms seek to match the performance of a centralized algorithm with global information while using only local information and minimal communication with other agents.
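A canonical example of this idea is distributed average consensus: each agent repeatedly averages its value with those of its graph neighbors, and on a connected graph all agents converge to the global average without any agent ever seeing all the data. The sketch below is a minimal illustration on a fixed ring graph (the graph, step size, and starting values are hypothetical), not a specific algorithm from this research.

```python
# Distributed average consensus: each agent nudges its value toward its
# neighbors' values each round; all values converge to the global mean.

def consensus_step(values, neighbors, step=0.2):
    """One synchronous round: agent i moves toward its neighbors' values."""
    return [
        v + step * sum(values[j] - v for j in neighbors[i])
        for i, v in enumerate(values)
    ]

def run_consensus(values, neighbors, rounds=200, step=0.2):
    for _ in range(rounds):
        values = consensus_step(values, neighbors, step)
    return values

# Example: 4 agents on a ring graph, each holding a local measurement.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
values = [1.0, 5.0, 3.0, 7.0]
final = run_consensus(values, neighbors)
# Every agent approaches the global average (4.0) using only local exchanges.
```

The step size must be small enough relative to the graph's node degrees for the iteration to be stable; here 0.2 suffices for a ring where every agent has two neighbors.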
From elderly assistance to robotic classmates to autonomous cars, robots are increasingly expected to coexist and collaborate with a variety of users. In settings such as search and rescue, collaborative manufacturing, and construction, despite rapid progress in robots' autonomous task-execution abilities, human intervention in the form of high-level reasoning and planning remains vital in a priori unknown environments.
Uncrewed aerial vehicles (UAVs) are increasingly deployed in mission-critical applications such as search and rescue, medical deliveries to remote locations, infrastructure inspection, and reconnaissance and surveillance. They must be extremely resilient, as the loss of a vehicle poses significant financial, security, or personnel risks; self-driving cars must exhibit comparable robustness. Our research in this area focuses on the design of control algorithms that handle a broad class of disturbances while providing a desired level of performance.
Cognitive control is a person’s ability to allocate their attention, thought, and action towards their desired intentions and goals. Our work contributes to advancing the understanding of cognitive control and predictive capacity of cognitive models via a rigorous mathematical approach. We have focused on two fundamental decision-making trade-offs: the speed-accuracy trade-off and the exploration-exploitation trade-off.
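The exploration-exploitation trade-off is often formalized as a multi-armed bandit problem: a decision-maker must balance sampling poorly known options (exploration) against choosing the option currently believed best (exploitation). The sketch below illustrates this with a simple epsilon-greedy policy on hypothetical Gaussian-reward arms; it is an illustrative model of the trade-off, not the specific cognitive model studied in this work.

```python
# Epsilon-greedy policy on a 3-armed Gaussian bandit: with probability
# epsilon pick a random arm (explore), otherwise pick the arm with the
# highest estimated mean reward (exploit).
import random

def epsilon_greedy(true_means, epsilon=0.1, horizon=5000, seed=0):
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # times each arm was pulled
    estimates = [0.0] * n_arms   # running mean reward per arm
    total = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                          # explore
        else:
            arm = max(range(n_arms), key=estimates.__getitem__)  # exploit
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return estimates, counts, total

estimates, counts, total = epsilon_greedy([0.2, 0.5, 0.9])
# Over a long horizon, pulls concentrate on the best arm (true mean 0.9).
```

Larger epsilon speeds up learning which arm is best but wastes more pulls on inferior arms once the ranking is clear, which is precisely the trade-off a cognitive model of exploration must capture.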