
Published Research

Explainable AI for Intrusion Detection Systems: Enhancing Trust, Transparency, and Real-Time Threat Response

Soojal Kumar

International Journal of AI, Big Data, Computational and Management Studies · Published Apr 21, 2026

XAI · Intrusion Detection · Cybersecurity · Machine Learning · SHAP · LIME

Research Framework

Explainable IDS

Machine Learning Detection

SHAP + LIME Explanations

Analyst-Focused Threat Response

Publication Details

Journal: International Journal of AI, Big Data, Computational and Management Studies
Volume: 7 · Issue: 2 · Pages: 115-123 · Year: 2026
ISSN: 3050-9416
DOI: 10.63282/3050-9416.IJAIBDCMS-V7I2P119
Received: 02/03/2026 · Revised: 06/04/2026
Accepted: 14/04/2026 · Published: 21/04/2026

Publisher / Group: Noble Scholar Research Group

Abstract

The growing sophistication of cyber threats has exposed the limitations of conventional intrusion detection systems that depend on static signatures and rule-based detection. Although machine learning has improved the ability to identify malicious traffic patterns, many high-performing models remain difficult to interpret, reducing trust and limiting operational adoption. This study develops and evaluates a real-time explainable intrusion detection framework that combines predictive accuracy with transparent decision support.

Using the NSL-KDD and CICIDS2017 benchmark datasets, the study implemented Random Forest and Deep Neural Network models under a stratified training, validation, and testing protocol with repeated experimental runs. Data preprocessing included normalization, feature engineering, imbalance correction, and hyperparameter optimization. Explainability was integrated through SHAP and LIME to generate both global and case-specific interpretations of model predictions.
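The preprocessing steps named above (normalization and imbalance correction) can be sketched in plain Python. The function names and the specific choices of min-max scaling and random oversampling are illustrative assumptions, not the paper's exact pipeline.

```python
import random

def min_max_normalize(rows):
    """Scale each feature column to [0, 1], the kind of normalization
    commonly applied to NSL-KDD / CICIDS2017 features before training."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [
        [(v - l) / (h - l) if h > l else 0.0 for v, l, h in zip(row, lo, hi)]
        for row in rows
    ]

def random_oversample(X, y, seed=0):
    """Duplicate minority-class rows until all classes match the
    majority count -- one simple form of imbalance correction."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(rows) for rows in by_class.values())
    X_out, y_out = [], []
    for label, rows in by_class.items():
        resampled = rows + [rng.choice(rows) for _ in range(target - len(rows))]
        X_out += resampled
        y_out += [label] * len(resampled)
    return X_out, y_out
```

For example, `min_max_normalize([[0, 10], [5, 20], [10, 30]])` maps each column onto `[0.0, 0.5, 1.0]`, and oversampling a 3-vs-1 class split yields 3 rows per class.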

The results show that both models achieved strong classification performance, while the Deep Neural Network produced higher recall and ROC-AUC under more complex traffic conditions. Random Forest delivered lower inference latency and competitive precision. The inclusion of explainability introduced only modest processing overhead while significantly improving interpretability, alert transparency, and analyst usability.

The study contributes a unified evaluation of predictive performance, explanation quality, and real-time response efficiency, supported by a deployment-oriented framework for practical security environments. The findings indicate that effective intrusion detection systems should be judged not only by accuracy, but also by how clearly and rapidly they support human decision-making.

Research Aim & Objectives

Develop intrusion detection models using Random Forest and Deep Neural Network approaches.

Integrate SHAP and LIME to provide interpretable model outputs.

Compare predictive accuracy, latency, and explanation quality across models.

Evaluate the effect of explainability on operational performance.

Propose a deployment-oriented framework for real-time cybersecurity environments.
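One of the quantities the objectives compare, inference latency, can be measured with a simple wall-clock harness. The helper name `mean_inference_latency_ms`, the warmup policy, and the toy model below are hypothetical, not taken from the study.

```python
import time

def mean_inference_latency_ms(predict, batches, warmup=2):
    """Average per-batch wall-clock inference time in milliseconds.
    'predict' is any callable model (e.g. a Random Forest or DNN
    wrapper); a few warmup calls are run first and not timed."""
    for b in batches[:warmup]:
        predict(b)
    t0 = time.perf_counter()
    for b in batches:
        predict(b)
    return (time.perf_counter() - t0) * 1000.0 / len(batches)

# Illustrative use with a dummy constant-prediction model.
latency = mean_inference_latency_ms(lambda b: [0] * len(b), [[1, 2, 3]] * 5)
```

Timing after warmup avoids counting one-off costs (JIT compilation, cache warming) that would not recur in a long-running IDS deployment.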

Methodology / Framework

A structured view of how the study moves from benchmark datasets to explainable, deployment-oriented IDS decision support.

Step 1 · Datasets: NSL-KDD and CICIDS2017
Step 2 · Preprocessing: normalization, feature engineering, imbalance correction
Step 3 · Model Training: Random Forest and Deep Neural Network models
Step 4 · Explainability: SHAP and LIME for global and case-specific explanations
Step 5 · Evaluation: accuracy, recall, ROC-AUC, latency, explanation quality
Step 6 · Deployment Framework: analyst triage, transparency, and real-time threat response
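The SHAP-style attributions in the explainability step are grounded in Shapley values. The sketch below computes them exactly for a model with very few features by enumerating coalitions, with "absent" features replaced by baseline values; SHAP itself uses efficient approximations of this quantity. The toy model `f`, the input, and the zero baseline are illustrative assumptions.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for the prediction f(x).
    v(S) evaluates f with features in S taken from x and the rest
    from the baseline; phi_i is the classic weighted average of
    i's marginal contributions over all coalitions S."""
    n = len(x)

    def v(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            for S in combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += w * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi
```

For a toy model `f(z) = z[0] * z[1] + z[2]` at `x = [1, 2, 3]` with a zero baseline, the attributions come out to approximately `[1.0, 1.0, 3.0]`: the interaction term is split evenly between the first two features, and the sum of attributions equals `f(x) - f(baseline)` (the efficiency property that makes SHAP explanations additive).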

Key Contributions

Comparative evaluation using NSL-KDD and CICIDS2017.

Measurement of SHAP and LIME impact on real-time IDS workflows.

Comparison of explainable and non-explainable model configurations.

Deployment-oriented architecture linking detection, explanation, analyst triage, and response.

Findings / Impact

Deep Neural Network showed stronger recall and ROC-AUC in complex traffic conditions.

Random Forest delivered lower inference latency and competitive precision.

SHAP and LIME improved transparency and analyst usability.

The work emphasizes that intrusion detection systems should be evaluated on accuracy, interpretability, and operational usefulness together, not accuracy alone.
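ROC-AUC, one of the reported metrics, has a direct pairwise-ranking interpretation that can be computed without any ML library: it is the probability that a randomly chosen positive example is scored above a randomly chosen negative one. This sketch is illustrative, not the study's evaluation code.

```python
def roc_auc(y_true, scores):
    """ROC-AUC via its ranking interpretation: the fraction of
    (positive, negative) pairs where the positive gets the higher
    score, counting ties as half a win. Equivalent to the area
    under the ROC curve."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum(
        1.0 if p > q else 0.5 if p == q else 0.0
        for p in pos for q in neg
    )
    return wins / (len(pos) * len(neg))

# Classic example: 3 of the 4 positive/negative pairs are ranked
# correctly, so the AUC is 0.75.
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # -> 0.75
```

This O(P·N) pairwise form is fine for illustration; production metric implementations sort scores once instead.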

Visual Research Framework

Stage 1: Network Traffic → Stage 2: ML Detection Model → Stage 3: Threat Prediction → Stage 4: SHAP/LIME Explanation → Stage 5: Analyst Review → Stage 6: Response Action

Contact: s_kumar18@u.pacific.edu