Sarra Alqahtani

Dr. Sarra Alqahtani is an assistant professor in the Computer Science Department at Wake Forest University. She received her Ph.D. from the University of Tulsa in 2015 and spent two years there as a postdoctoral research associate.

Dr. Alqahtani specializes in the safety and explainability of reinforcement learning (RL) and multi-agent reinforcement learning (MARL), with a focus on their applications in environmental sciences. She has been awarded an NSF CRII grant to support her work on safe MARL and a NASA grant to develop RL- and MARL-based navigation algorithms that enable drones to survey Amazonian rainforests and detect illegal gold mining activities. Her research aims to ensure the reliability and interpretability of RL and MARL systems in complex, real-world environments.

Dr. Alqahtani’s work has been published in top-tier venues such as AAMAS, IJCAI, and Nature, contributing to the advancement of safe and explainable AI for real-world applications.


Teaching

Classes taught:

  • CSC 112 – Fundamentals of Computer Science
  • CSC 790 – Advanced Topics

Research

Security Assurance in Multi-Agent Reinforcement Learning

Deep reinforcement learning (DRL) policies are known to be vulnerable to adversarial perturbations of their observations, similar to adversarial examples for deep learning classifiers. However, an attacker usually cannot directly modify another agent’s observations. This raises a natural question: is it possible to attack an RL agent simply by choosing an adversarial policy acting in a multi-agent environment so as to create natural observations that are adversarial? In this research, we investigate the attack strategies an adversary can use to disturb autonomous systems built for multi-agent environments.
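
To make the threat model concrete, here is a minimal, self-contained sketch of an adversarial-policy attack. Everything in it is illustrative rather than the method used in this research: the ToyTwoAgentEnv, the hand-coded victim heuristic, and the random-search trainer are all assumptions. The point it demonstrates is that the adversary never touches the victim's observation vector; it only acts in the shared environment, and its learned behavior is what makes the victim's natural observations adversarial.

```python
# Hypothetical sketch: training an adversarial POLICY (not an
# observation perturbation) against a frozen victim in a shared
# environment. All names and dynamics here are invented for
# illustration; they are not the environments or algorithms used
# in this research.
import numpy as np

rng = np.random.default_rng(0)

class ToyTwoAgentEnv:
    """Victim walks on a line toward a goal at 1.0; the adversary's
    position is part of the victim's observation and can mislead it."""
    def reset(self):
        self.victim, self.adversary, self.t = 0.0, 0.0, 0
        return self._obs()
    def _obs(self):
        # Victim observes its own position, the goal, and the adversary.
        return np.array([self.victim, 1.0, self.adversary])
    def step(self, victim_action, adversary_action):
        self.victim += np.clip(victim_action, -0.1, 0.1)
        self.adversary += np.clip(adversary_action, -0.1, 0.1)
        self.t += 1
        reward = -abs(1.0 - self.victim)   # victim wants to reach the goal
        return self._obs(), reward, self.t >= 50

def victim_policy(obs):
    # Frozen, pre-trained victim: a hand-coded heuristic standing in
    # for a learned network, slightly distracted by the adversary.
    return 0.1 * np.sign(obs[1] - obs[0]) - 0.05 * np.sign(obs[2] - obs[0])

def rollout(env, adv_w):
    obs, total, done = env.reset(), 0.0, False
    while not done:
        adv_action = float(adv_w @ obs)    # linear adversary policy
        obs, r, done = env.step(victim_policy(obs), adv_action)
        total += r
    return total

# Train the adversary by simple random search to MINIMIZE victim return.
env, best_w = ToyTwoAgentEnv(), rng.normal(size=3)
best = rollout(env, best_w)
for _ in range(200):
    cand = best_w + 0.1 * rng.normal(size=3)
    score = rollout(env, cand)
    if score < best:                       # lower victim return = stronger attack
        best, best_w = score, cand
print(f"victim return under trained adversary: {best:.2f}")
```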

Enhancing Safety and Explainability in Reinforcement Learning and Multi-Agent Systems

This project focuses on advancing the safety and explainability of reinforcement learning (RL) and multi-agent reinforcement learning (MARL) systems to enable their deployment in real-world, safety-critical applications. Ensuring safety in RL and MARL poses significant challenges due to the inherent risks of continuous exploration and the complexity of algorithms that are difficult to interpret. To address these challenges, our work leverages the synergy between safety and explainability, using interpretability as a tool to enhance trust and identify potential vulnerabilities. One approach constructs robust behaviors through exploration, creating a dynamic safety policy (a “firewall”) that prevents unsafe decisions even in evolving environments. Another approach combines local and global explanations to provide a comprehensive understanding of RL agents’ behaviors, enabling targeted identification and correction of weaknesses without requiring full model retraining. Through these efforts, this research aims to transition MARL algorithms from simulations to real-world applications, ensuring they are safe, robust, and resilient in complex, dynamic environments.
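
A minimal sketch may help fix intuition for the “firewall” idea. The SafetyFirewall class, the hand-coded is_unsafe predicate, and the fallback rule below are illustrative stand-ins for a safety policy that would in practice be learned from exploration data; they are assumptions, not the project's actual design.

```python
# Hypothetical sketch of a safety "firewall": a layer that sits between
# a learned policy and the environment and vetoes actions it predicts
# to be unsafe. The safety check here is a hand-coded distance rule;
# in practice it would be a learned dynamic safety policy.
class SafetyFirewall:
    def __init__(self, policy, is_unsafe, fallback_action):
        self.policy = policy              # the (possibly unsafe) learned policy
        self.is_unsafe = is_unsafe        # predicate: (state, action) -> bool
        self.fallback = fallback_action   # conservative override action
        self.overrides = 0

    def act(self, state):
        action = self.policy(state)
        if self.is_unsafe(state, action):
            self.overrides += 1           # log the intervention for later explanation
            return self.fallback(state)
        return action

# Toy usage: a point agent on a line must never cross the wall at x = 1.0.
policy = lambda s: 0.3                    # always move right (unsafe near the wall)
is_unsafe = lambda s, a: s + a > 1.0      # predicted constraint violation
fallback = lambda s: min(0.0, 1.0 - s)    # stop (or back off if already past)

shield = SafetyFirewall(policy, is_unsafe, fallback)
state = 0.0
for t in range(10):
    state += shield.act(state)
print(f"final state {state:.2f}, interventions: {shield.overrides}")
```

Because the wrapper logs every intervention, the same mechanism doubles as an explanation hook: the override count and the states that triggered it point directly at where the underlying policy is unsafe, which is the kind of targeted weakness identification described above.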

Autonomous Coordination and Communication in Platoons

The design of control algorithms for platoons of vehicles is challenging, particularly because coordination between vehicles is obtained through diverse communication channels. Many modern vehicles are already equipped with Adaptive Cruise Control (ACC) to regulate certain driving functions. ACC can be extended to leverage inter-vehicle communication, creating a tightly coupled vehicle stream in the form of a platoon. This extension is called Cooperative Adaptive Cruise Control (CACC) and typically assumes full communication among the distinct vehicles. In this research, we develop deep reinforcement learning algorithms to coordinate the autonomous operation of a platoon under different communication levels. The ultimate goal is to build more robust and reliable CACC controllers even under communication impairments, such as those caused by jamming attacks.
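
The ACC/CACC distinction can be illustrated with a toy longitudinal simulation, sketched below under invented assumptions: the controller gains, vehicle dynamics, and packet-drop model are illustrative, not the DRL controllers developed in this project. Each follower uses its predecessor's broadcast acceleration when a packet arrives and degrades to sensor-only ACC when the packet is dropped, as it might be under jamming.

```python
# Hypothetical sketch: a platoon where each follower runs CACC when a
# V2V packet arrives and falls back to ACC when it is dropped. Gains,
# dynamics, and the drop model are toy values for illustration only.
import numpy as np

rng = np.random.default_rng(1)
DT, DESIRED_GAP = 0.1, 10.0

def acc_control(gap, rel_speed):
    # ACC: spacing feedback from on-board sensors only.
    return 0.5 * (gap - DESIRED_GAP) + 0.8 * rel_speed

def cacc_control(gap, rel_speed, pred_accel):
    # CACC: same feedback plus a feed-forward term from V2V comms.
    return acc_control(gap, rel_speed) + 1.0 * pred_accel

def simulate(p_drop, steps=600, n=4):
    pos = np.array([-i * DESIRED_GAP for i in range(n)], float)
    vel = np.full(n, 20.0)
    gap_err = 0.0
    for t in range(steps):
        acc = np.zeros(n)
        acc[0] = 2.0 * np.sin(0.02 * t)        # leader speeds up and brakes
        for i in range(1, n):
            gap = pos[i - 1] - pos[i]
            rel = vel[i - 1] - vel[i]
            if rng.random() < p_drop:          # packet lost: fall back to ACC
                acc[i] = acc_control(gap, rel)
            else:                              # packet received: full CACC
                acc[i] = cacc_control(gap, rel, acc[i - 1])
            gap_err += abs(gap - DESIRED_GAP)
        vel += acc * DT
        pos += vel * DT
    return gap_err / (steps * (n - 1))

for p in (0.0, 0.5, 1.0):
    print(f"packet drop {p:.0%}: mean spacing error {simulate(p):.2f} m")
```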

Autonomous Navigation in Unknown Environments

The ability to perform autonomous exploration is essential for unmanned aerial vehicles (UAVs) operating in unknown environments that are difficult to describe beforehand. Algorithms for autonomous exploration often focus on optimizing time and full coverage in a greedy fashion; such algorithms can collect irrelevant data and waste time navigating areas with no important information. In this research project, we aim to improve the efficiency of exploration by maximizing the probability of detecting valuable information. We explore different optimization theories to address this hard problem, including the robustness theory of Probabilistic Metric Temporal Logic (P-MTL), ergodicity theory, and deep reinforcement learning. In this project, we target several environmental and conservation navigation problems, such as detecting areas occupied by illegal Artisanal Small-scale Gold Mining (ASGM) activities in the Amazonian rainforest. Our preliminary results from the robustness of P-MTL show that our approach outperforms a greedy exploration approach from the literature by 38% in terms of ASGM coverage.
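
The contrast between greedy coverage and information-driven exploration can be sketched in a few lines of code. The probability map, the scoring rule, and the budgeted loop below are hypothetical stand-ins for the P-MTL robustness machinery, not this project's algorithms; they only illustrate why weighting candidate locations by expected value tends to detect more activity than nearest-first coverage.

```python
# Hypothetical sketch: greedy nearest-first coverage vs. exploration
# that weighs each candidate cell by the prior probability of finding
# something valuable (e.g. ASGM activity) against travel cost. The
# prior map and scoring rules are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
N = 12
prior = rng.random((N, N)) ** 3          # sparse "hot spots" of likely activity
prior /= prior.sum()

def explore(score_fn, budget=40):
    pos, visited, detected = (0, 0), set(), 0.0
    for _ in range(budget):
        candidates = [(r, c) for r in range(N) for c in range(N)
                      if (r, c) not in visited]
        nxt = max(candidates, key=lambda cell: score_fn(pos, cell))
        visited.add(nxt)
        detected += prior[nxt]           # expected information gathered
        pos = nxt
    return detected

dist = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])

greedy = explore(lambda p, c: -dist(p, c))                    # nearest-first coverage
informed = explore(lambda p, c: prior[c] / (1 + 0.01 * dist(p, c)))  # value vs. cost

print(f"greedy coverage detects {greedy:.1%} of expected activity")
print(f"information-driven detects {informed:.1%} of expected activity")
```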

Contact Info

Assistant Professor

Denton Family Faculty Fellow