Please use this identifier to cite or link to this item: https://hdl.handle.net/11264/1843
Full metadata record
DC Field | Value | Language
dc.contributor.author | Contractor, Faizan | -
dc.contributor.other | Royal Military College of Canada | en_US
dc.date.accessioned | 2024-05-27T14:57:52Z | -
dc.date.available | 2024-05-27T14:57:52Z | -
dc.date.issued | 2024-05-27 | -
dc.identifier.uri | https://hdl.handle.net/11264/1843 | -
dc.description.abstract | Multi-Agent Reinforcement Learning (MARL) trains multiple Reinforcement Learning (RL) agents to either achieve a common goal or compete against each other. Popular methods in cooperative MARL with partially observable environments only allow agents to act independently during execution, which may limit the coordinated effect of the trained policies. However, by facilitating the sharing of critical information such as network topology, known or suspected threats, and event logs, effective communication can lead to more informed decision-making in the cyber battle-space. While a game-theoretic approach has shown success in real-world applications, its applicability to cybersecurity is an active area of research. The aim of this thesis is to demonstrate the importance and effectiveness of communication between blue agents and to show that relaying key information allows these agents to stop a malicious actor from compromising hosts across subnets. This thesis also aims to contribute to the development of techniques that can enhance autonomous cyber defence on an enterprise network. The results demonstrate that, through Differentiable Inter-Agent Learning, the defender agents play sequential games in the Cyber Operations Research Gym and learn to communicate to prevent imminent cyber threats. The tactical policies learned by the autonomous RL agents to achieve this coordination are akin to those of human experts who communicate with one another during an incident response to avert cyber threats. | en_US
dc.language.iso | en | en_US
dc.subject | Multi-Agent Reinforcement Learning | en_US
dc.subject | MARL | en_US
dc.subject | Learning to Communicate | en_US
dc.subject | Cybersecurity | en_US
dc.subject | Cyber Defence | en_US
dc.subject | Autonomous Cyber Defence | en_US
dc.subject | Autonomous Cyber Operations | en_US
dc.title | Learning to Communicate in Multi-Agent Reinforcement Learning for Autonomous Cyber Defence | en_US
dc.title.translated | Apprendre à Communiquer entre Multi-Agent en Apprentissage par Renforcement pour une Défense Autonome en Cybersécurité | en_US
dc.contributor.supervisor | Al-Mallah, Ranwa | -
dc.date.acceptance | 2024-05-21 | -
thesis.degree.discipline | Electrical and Computer Engineering/Génie électrique et informatique | en_US
thesis.degree.name | MASc (Master of Applied Science/Maîtrise ès sciences appliquées) | en_US
Appears in Collections: Theses

Files in This Item:
File | Description | Size | Format
Thesis_Faizan_Learning_to_Communicate.pdf | Thesis Final | 2.11 MB | Adobe PDF


Items in eSpace are protected by copyright, with all rights reserved, unless otherwise indicated.