
A Survey on XAI for 5G and Beyond Security: Technical Aspects, Challenges and Research Directions

This survey examines how Explainable AI (XAI) can strengthen security in 5G and future 6G networks, where AI models increasingly drive tasks such as intrusion detection, traffic classification, PHY-layer authentication, and anomaly detection. While these models perform well, their opacity makes it difficult for operators to understand or trust their decisions. The paper reviews the major families of XAI techniques, including feature attribution, saliency maps, surrogate models, and attention-based explanations, and discusses how each can reveal which inputs drive a model's predictions. This is particularly valuable in security applications, where knowing why an alert was raised helps operators verify correctness, detect misconfigurations, and uncover model bias.
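To make the feature-attribution idea concrete, here is a minimal sketch, assuming scikit-learn, that applies permutation importance to a toy intrusion-detection classifier. The flow features, data, and labels are hypothetical stand-ins and not taken from the survey.

```python
# Minimal feature-attribution sketch for an intrusion detector.
# Hypothetical example: feature names and synthetic data are invented
# for illustration, not drawn from the surveyed paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical network-flow features.
feature_names = ["duration_s", "bytes_sent", "pkt_rate", "port_entropy"]
X = rng.normal(size=(2000, 4))
# Synthetic ground truth: "attack" flows have high packet rate + port entropy.
y = ((X[:, 2] + X[:, 3]) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Permutation importance: shuffle one feature at a time and measure the
# accuracy drop. Large drops mark the inputs the model's alerts depend on.
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:>14s}: {imp:.3f}")
```

On the synthetic data above, pkt_rate and port_entropy should dominate the ranking, which is exactly the kind of sanity check an operator can run to confirm an alert is driven by plausible signals rather than spurious ones.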

Several network-specific use cases are highlighted, including explainable intrusion detection, RAN security, device authentication, and secure resource management. The survey also identifies open challenges: designing lightweight XAI methods that fit edge devices (a surrogate-model sketch follows below), developing standardized metrics for evaluating explanation quality, and ensuring that explanations themselves do not leak sensitive information. Overall, the work argues that XAI will be essential to building trustworthy, transparent, and secure AI-driven networks in 5G and beyond.
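One way to keep explanations lightweight enough for edge deployment is the global surrogate technique mentioned above: fit a small, interpretable model to mimic the black box's outputs. Below is a minimal sketch, assuming scikit-learn; the black-box model, features, and data are hypothetical placeholders.

```python
# Minimal global-surrogate sketch: a depth-limited decision tree is fit
# to a black-box detector's predictions, yielding a compact rule set.
# All names and data here are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
feature_names = ["duration_s", "bytes_sent", "pkt_rate", "port_entropy"]
X = rng.normal(size=(2000, 4))
y = ((X[:, 2] + X[:, 3]) > 1.0).astype(int)

# Stand-in for an opaque, high-capacity detector.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Surrogate: train on the black box's *predictions*, not the true labels,
# so the tree approximates the model's behavior rather than the task.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=feature_names))
```

The printed rule set is small enough to audit by hand, and the fidelity score gives a rough measure of how faithfully the explanation tracks the black box, one candidate for the standardized evaluation metrics the survey calls for.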
