Agentic AI with Cybersecurity: How to focus on Risk Analysis via the MCP (Model–Control–Policy) Model

Authors

  • Ankush Gupta, Senior Solution Architect, Bothell, Washington. Author
  • Soumya Remella, Senior Technical Program Manager. Author

DOI:

https://doi.org/10.63282/3050-9246.IJETCSIT-V7I1P102

Keywords:

Agentic Artificial Intelligence, Cybersecurity Risk Analysis, Model–Control–Policy (MCP) Framework, NIST AI RMF, EU AI Act, MITRE ATLAS, OWASP LLM Top 10, Zero-Trust Architecture, Red Teaming, Governance And Compliance, Secure-By-Design, Risk Quantification

Abstract

Rapidly advancing Agentic Artificial Intelligence (AI) systems, equipped to autonomously reason, call upon tools, and perform self-directed tasks, have transformed enterprise productivity as well as the threat landscape. In contrast to static machine learning (ML) pipelines, agentic systems purposefully interpret goals in real time and make decisions, introducing contextual feedback loops that expand the attack surface and give rise to new classes of cyber-physical and data-centric risks. Current cybersecurity and governance models, such as NIST SP 800-53, MITRE ATLAS, and the OWASP Top 10 for LLMs, cover parts of this spectrum but lack an integrated model that captures AI behaviors while also incorporating organizational systemic control logic and governance obligations. This research presents an integrated Model–Control–Policy (MCP) risk-analysis model for agentic AI settings. The Model layer characterizes the technical sources of risk arising from model design, data provenance, and adversarial vulnerability. The Control layer includes runtime safety checks, access controls, and automatic containment mechanisms that ensure safe operation within defined limits. The Policy layer maps these controls to organizational governance and compliance regimes (for example, the EU AI Act and the NIST AI RMF) and to applicable cross-border regulatory requirements. Taken together, the MCP model provides a multi-layered analytical framework for businesses to assess, monitor, and mitigate AI risks in a traceable, accountable manner.
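The three-layer structure described above can be sketched as a simple data model. This is an illustrative sketch only; all class and field names below are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ModelLayer:
    """Technical risk sources: model design, data provenance, adversarial exposure."""
    data_provenance_verified: bool
    adversarial_test_passed: bool

@dataclass
class ControlLayer:
    """Runtime safeguards bounding what an agent may do at execution time."""
    allowed_tools: set = field(default_factory=set)

    def authorize(self, tool: str) -> bool:
        # Deny-by-default containment: only explicitly allowed tools may run.
        return tool in self.allowed_tools

@dataclass
class PolicyLayer:
    """Maps runtime controls to governance regimes (e.g., NIST AI RMF, EU AI Act)."""
    regime_mappings: dict = field(default_factory=dict)
```

A deny-by-default `authorize` check in the Control layer is one way to realize the "safe operation within defined limits" property the abstract describes, while the Policy layer's mapping table is where traceability to a regime such as the NIST AI RMF would be recorded.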

The study uses quantitative risk scoring, red-teaming simulations, and MITRE ATLAS mapping to analyze the MCP model within high-risk enterprise scenarios, including autonomous incident response, data classification, and cross-tenant chatbot systems. We find a 4.6× decrease in the number of successful exploits, a 37% reduction in the fraction of false escalations, and quantifiable gains in governance traceability. The MCP model integrates technical and policy aspects, providing a reproducible basis for controllable autonomy in AI systems. By integrating multilevel controls, continuous risk quantification, and compliance-aware governance, the MCP framework enables a structured approach to cybersecurity risk assessment for agentic AI systems. It provides a practical pathway toward AI architectures that are adaptive, transparent, and ethically aligned, while remaining responsive to regulatory and organizational policy requirements. In doing so, MCP supports the development of resilient AI systems with demonstrable accountability and regulatory conformance.
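The paper's own scoring method is not reproduced on this page; the following is a generic likelihood-times-impact sketch of the kind of quantitative risk scoring the abstract refers to, with all scales, function names, and numeric values chosen for illustration.

```python
def risk_score(likelihood: float, impact: float) -> float:
    """Composite inherent risk on a 1-25 scale (both inputs on 1-5 ordinal scales)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be in [1, 5]")
    return likelihood * impact

def residual_risk(inherent: float, control_effectiveness: float) -> float:
    """Risk remaining after mitigating controls; effectiveness is in [0, 1]."""
    if not 0 <= control_effectiveness <= 1:
        raise ValueError("control_effectiveness must be in [0, 1]")
    return inherent * (1.0 - control_effectiveness)

# Example: a hypothetical prompt-injection scenario scored before and after
# applying Control-layer mitigations.
inherent = risk_score(4, 5)                 # high likelihood, severe impact
residual = residual_risk(inherent, 0.8)     # strong controls reduce the score
```

Comparing inherent and residual scores per scenario is one simple way such a framework can express mitigation effect as a ratio, analogous in spirit to the exploit-reduction figures reported in the abstract.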


References

[1] A. Gupta, “A Strategic Approach—Enterprise-Wide Cyber Security Quantification via Standardized Questionnaires and Risk Modelling Impacting Financial Sectors Globally,” International Journal of AI, BigData, Computational and Management Studies, vol. 3, no. 2, pp. 44–57, 2022.

[2] National Institute of Standards and Technology (NIST), Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST AI 100-1, Gaithersburg, MD, USA, 2023.

[3] European Commission, Artificial Intelligence Act: Risk-Based Regulatory Framework for Trustworthy AI, Brussels, 2024.

[4] National Institute of Standards and Technology (NIST), Security and Privacy Controls for Information Systems and Organizations, NIST Special Publication 800-53 Revision 5, Gaithersburg, MD, USA, 2020.

[5] MITRE Corporation, Adversarial Threat Landscape for Artificial Intelligence Systems (ATLAS), Bedford, MA, USA, 2023.

[6] Open Worldwide Application Security Project (OWASP), Top 10 for Large Language Model Applications, Version 1.1, 2024.

[7] H. Taherdoost, “Blockchain Technology and Artificial Intelligence Together: A Comprehensive Review,” Applied Sciences, vol. 12, no. 24, pp. 12948–12961, 2022.

[8] H. Luo, W. Wei, S. Zhang, and P. Li, “BC4LLM: Trusted Artificial Intelligence When Blockchain Meets Large Language Models,” arXiv preprint arXiv:2310.06278, 2023.

[9] T. Nguyen, M. Dey, and S. U. Khan, “AI Governance in High-Stakes Systems: Principles and Operational Models,” IEEE Transactions on Technology and Society, vol. 4, no. 1, pp. 50–64, 2023.

[10] R. Wallace and J. Patel, “A Unified Model of Zero-Trust AI: Frameworks for Autonomous Risk Governance,” IEEE Access, vol. 12, pp. 113265–113281, 2024.

[11] A. Kim, R. Green, and F. Rahman, “Mapping Adversarial Threats to AI Risk Controls in the MITRE ATLAS Framework,” Journal of Information Security Research, vol. 11, no. 3, pp. 140–156, 2023.

[12] D. Clarke and E. S. Martin, “Evaluating Governance-Centric Models for AI Assurance under the EU AI Act,” International Journal of Computational Ethics and Policy, vol. 2, no. 4, pp. 190–204, 2024.

[13] P. Shah, A. Gupta, and S. Rahimi, “Quantitative Risk Assessment Models for Agentic AI Systems in Critical Infrastructure,” IEEE Transactions on Dependable and Secure Computing, vol. 21, no. 5, pp. 415–428, 2025.

[14] L. Fernandez and J. Yu, “AI Red Teaming and Adversarial Validation: A Structured Review,” ACM Computing Surveys, vol. 56, no. 7, pp. 1–32, 2024.

[15] S. R. Bhosale and N. Choudhury, “Integrating Federated Privacy and Governance in Agentic AI Frameworks,” IEEE Transactions on Information Forensics and Security, vol. 20, pp. 2025–2038, 2025.

[16] A. Gupta, “A Centralized Authentication and Authorization Framework for Enterprise Security Modernization,” vol. 16, no. 3, July–September 2025. Available: https://www.ijsat.org/research-paper.php?id=8034.

Published

2026-01-05

Issue

Section

Articles

How to Cite

1.
Gupta A, Remella S. Agentic AI with Cybersecurity: How to focus on Risk Analysis via the MCP (Model–Control–Policy) Model. IJETCSIT [Internet]. 2026 Jan. 5 [cited 2026 Jan. 28];7(1):8-15. Available from: https://ijetcsit.org/index.php/ijetcsit/article/view/543
