
Designing Robust, Dynamic, AI-Powered Access Control for Insider Threat Mitigation

Balogun, Olusesi
Abstract

Insider threats continue to pose a significant and evolving challenge to organizational security because of the privileged access and prior knowledge insiders possess. These threats are often difficult to detect, deter, or mitigate, and can lead to severe data breaches and operational disruptions. Access control mechanisms such as Attribute-Based Access Control (ABAC) systems have gained widespread adoption in both corporate and governmental environments for regulating access to sensitive resources, owing to their flexibility, scalability, and contextual awareness. Recent efforts have sought to enhance ABAC by leveraging machine learning, giving rise to Machine Learning-Based Access Control (MLBAC). However, both ABAC and MLBAC remain vulnerable to critical threats, including attribute forgery, policy leakage, adversarial manipulation, and bias, all of which can be exploited by malicious insiders.
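To make the ABAC model concrete, the following is a minimal, hypothetical sketch of an attribute-based decision point: a request is permitted only if some policy rule's attribute conditions all match. The attribute names, rule set, and permit-overrides behavior here are illustrative assumptions, not taken from the dissertation.

```python
# Hypothetical ABAC policy decision point (illustrative only).
# A policy is a list of rules; each rule maps attribute names to required values.

def abac_decide(request, policy):
    """Return 'permit' if any rule's attribute conditions all match the request."""
    for rule in policy:
        if all(request.get(attr) == value for attr, value in rule.items()):
            return "permit"
    return "deny"

# Illustrative policy: attribute names and values are assumptions.
policy = [
    {"role": "analyst", "department": "finance", "resource_sensitivity": "low"},
    {"role": "manager", "department": "finance", "resource_sensitivity": "high"},
]

request = {"role": "analyst", "department": "finance", "resource_sensitivity": "low"}
print(abac_decide(request, policy))  # permit
```

Production ABAC engines (e.g., XACML-based systems) support richer condition operators and rule-combining algorithms; this sketch keeps only exact-match conditions with permit-overrides semantics.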

This dissertation proposes a series of advancements to harden ABAC and MLBAC frameworks against insider threats. First, we integrate deception-based mechanisms such as honey attributes into the ABAC model to proactively detect insider activity targeting sensitive assets. Next, we introduce a novel Moving Target Defense (MTD) strategy within ABAC that dynamically mutates policy rules using correlated attributes, thereby reducing the predictability and exploitability of static access configurations. Building on this, we examine the vulnerability of MLBAC systems to black-box adversarial attacks, an area underexplored in the existing literature, and propose a more robust learning-based framework that incorporates access constraints and attribute dynamism into its objective function.
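The MTD idea above can be sketched as follows: a policy rule is periodically rewritten by substituting an attribute condition with a correlated one, so that any rule an insider has observed may already be stale. The correlation table and attribute values below are invented for illustration; the dissertation's actual correlation and mutation methods are not reproduced here.

```python
# Hypothetical sketch of MTD-style policy mutation (illustrative only).
# CORRELATED maps an (attribute, value) condition to an assumed equivalent one.

CORRELATED = {
    ("role", "manager"): ("clearance", "level-3"),   # assumed correlation
    ("clearance", "level-3"): ("role", "manager"),
}

def mutate_rule(rule):
    """Replace one attribute condition with a correlated equivalent, if known."""
    mutated = dict(rule)
    for attr, value in rule.items():
        swap = CORRELATED.get((attr, value))
        if swap:
            # Drop the observed condition and enforce the correlated one instead.
            del mutated[attr]
            mutated[swap[0]] = swap[1]
            break
    return mutated

rule = {"role": "manager", "department": "finance"}
print(mutate_rule(rule))  # {'department': 'finance', 'clearance': 'level-3'}
```

The mutated rule grants the same population access (under the assumed correlation) while changing its observable surface, which is the property that frustrates an insider's reconnaissance.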

Lastly, this dissertation investigates the fairness and bias limitations inherent in MLBAC systems. We identify both data-driven and model-induced biases and introduce a fairness-aware adversarial training framework, named Fair-MLBAC, that ensures equitable access decisions across feature values and model layers. By combining deception, adaptability, robustness, and fairness, this work presents a holistic framework for insider threat detection and mitigation that addresses both practical and theoretical gaps in modern access control systems.
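The kind of data-driven bias that motivates Fair-MLBAC can be illustrated with a simple grant-rate comparison: if permit rates differ sharply across values of a feature, the access model may be treating those groups inequitably. The feature, records, and metric below are illustrative assumptions, not the dissertation's actual fairness criterion.

```python
# Hypothetical fairness probe for access decisions (illustrative only).
# Measures the largest gap in permit rates between values of a given feature.

def grant_rate_gap(decisions, feature):
    """Max difference in permit rate between any two values of `feature`."""
    rates = {}  # feature value -> (permits, total)
    for record in decisions:
        value = record[feature]
        granted, total = rates.get(value, (0, 0))
        rates[value] = (granted + (record["decision"] == "permit"), total + 1)
    permit_rates = [g / t for g, t in rates.values()]
    return max(permit_rates) - min(permit_rates)

# Invented decision log: finance requests are permitted twice as often as HR.
decisions = [
    {"department": "finance", "decision": "permit"},
    {"department": "finance", "decision": "permit"},
    {"department": "hr", "decision": "permit"},
    {"department": "hr", "decision": "deny"},
]
print(grant_rate_gap(decisions, "department"))  # 0.5
```

A fairness-aware training objective, as the abstract describes, would penalize such gaps during learning rather than merely measuring them after the fact.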

Date
2025-12-12
Keywords
Insider Threat, Attribute Based Access Control (ABAC), Authorization, Defensive Deception, Honey Attribute, Sensitivity Estimation, Machine Learning Based Access Control (MLBAC), Adversarial Attacks
Citation
Balogun, Olusesi. "Designing Robust, Dynamic, AI-Powered Access Control for Insider Threat Mitigation." PhD diss., Georgia State University, 2025. https://doi.org/10.57709/pt8z-9516