From Breaches to Bank Frauds: Exploring Generative AI and Deep Learning In Modern Cybercrime

Authors

  • Anam Haider Khan, Master's in Cybersecurity, Georgia Institute of Technology; Software Developer, Expedia Group, USA.

DOI:

https://doi.org/10.63282/3050-9246.IJETCSIT-V4I2P116

Keywords:

Generative AI, Cybercrime, Deep Learning, Bank Fraud, Synthetic Identity, Large Language Models (LLMs), Generative Adversarial Networks (GANs), Adversarial Machine Learning, Deepfakes, AI-Driven Phishing, Automated Breach Pathways

Abstract

The emergence of Generative Artificial Intelligence (AI) and advanced deep learning models has fundamentally altered the dynamics of modern cybercrime. Threat actors now leverage large language models, generative adversarial networks, and reinforcement learning agents to automate reconnaissance, craft adaptive phishing campaigns, produce polymorphic malware, and execute highly personalized financial frauds at unprecedented scale. As a result, traditional rule-based and signature-driven security systems are increasingly ineffective against AI-generated attack variants that continuously evolve in real time. This paper provides a systematic investigation into the role of generative and deep learning technologies in accelerating contemporary cyber threats, with a specific focus on data breaches, banking frauds, synthetic identities, and deepfake-based impersonation attacks. We develop a prototype AI-driven attack generation and fraud simulation framework to empirically demonstrate how these models can be weaponized to bypass modern defenses. Experimental evaluations reveal significant increases in attack success rates, evasion capability, and automation efficiency when compared to conventional cyberattack methods. The findings underscore the urgent need for AI-augmented defense mechanisms, behavioral analytics, risk-aware deception strategies, and regulatory oversight to mitigate this new class of intelligent and autonomous cybercrime.
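The abstract's claim that AI-generated attack variants evade rule-based and signature-driven detectors can be illustrated with a minimal adversarial-perturbation sketch. This is not the paper's simulation framework; the toy logistic fraud scorer, its weights, and the feature values below are illustrative assumptions, and the perturbation step follows the Fast Gradient Sign Method surveyed in reference [12].

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear fraud scorer (hypothetical weights): sigmoid(w.x + b)
# gives the probability that a transaction feature vector x is fraud.
w = np.array([2.0, -1.5, 0.8])
b = -0.5

def fraud_score(x):
    return sigmoid(w @ x + b)

def fgsm_perturb(x, eps):
    # Fast Gradient Sign Method: step each feature opposite the sign
    # of the gradient of the fraud score, pushing x toward "benign".
    # For sigmoid(w.x + b), the gradient w.r.t. x is s * (1 - s) * w.
    s = fraud_score(x)
    grad = s * (1.0 - s) * w
    return x - eps * np.sign(grad)

# Hypothetical transaction features initially flagged as fraudulent.
x = np.array([1.2, 0.3, 0.9])
print("original score:", fraud_score(x))       # high fraud probability

x_adv = fgsm_perturb(x, eps=0.6)
print("perturbed score:", fraud_score(x_adv))  # score drops below threshold
```

The same mechanism, scaled up from a linear scorer to deep detectors and automated by generative models, is what renders static rules and signatures increasingly ineffective.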

References

[1] Alazab, M., Awajan, A., Mesleh, A., Alazab, M., Abraham, A., & Jatana, V. (2020). Intelligent mobile malware detection using deep learning models. Information Sciences, 115, 35–47. https://doi.org/10.1016/j.ins.2020.06.034

[2] Bae, H., & Kim, H. (2019). Detecting deep learning-based cyberattacks using adversarial training. IEEE Access, 7, 116297–116307. https://doi.org/10.1109/ACCESS.2019.2932838

[3] Bulusu, S., Kailkhura, B., Li, B., Varshney, P. K., & Song, D. (2020). Anomalous instance detection in deep learning: A survey. ACM Computing Surveys, 54(2), 1–33. https://doi.org/10.1145/3446375

[4] Carlini, N., & Wagner, D. (2018). Audio adversarial examples: Targeted attacks on speech-to-text. 2018 IEEE Security and Privacy Workshops, 1–7. https://doi.org/10.1109/SPW.2018.00009

[5] Chandra, R., Gupta, R., & Singh, A. (2020). Generative adversarial networks in cybersecurity: A survey. IEEE Access, 8, 118692–118733. https://doi.org/10.1109/ACCESS.2020.3004967

[6] Chen, T., Liu, S., Xu, X., & Zhang, W. (2021). Deepfake generation and detection: A survey. Multimedia Tools and Applications, 80, 3135–3165. https://doi.org/10.1007/s11042-020-08976-1

[7] Goodfellow, I., McDaniel, P., & Papernot, N. (2018). Making machine learning robust against adversarial inputs. Communications of the ACM, 61(7), 56–66. https://doi.org/10.1145/3134599

[8] Huang, L., Joseph, A., Nelson, B., Rubinstein, B. I., & Tygar, J. (2018). Adversarial machine learning. Proceedings of the 2011 ACM Workshop on Artificial Intelligence and Security, 43–58. (Reissued). https://doi.org/10.1145/2046684.2046692

[9] Hussain, S., & Prieto, J. (2020). AI-powered financial fraud detection: A survey. IEEE Access, 8, 37301–37325. https://doi.org/10.1109/ACCESS.2020.2975465

[10] Kim, J., Shim, H., & Kim, H. (2021). Phishing detection using contextual LSTM networks. Computers & Security, 103, 102159. https://doi.org/10.1016/j.cose.2020.102159

[11] Kietzmann, J., Lee, L., & McCarthy, I. (2020). Deepfakes: Trick or treat? Business Horizons, 63(2), 135–146. https://doi.org/10.1016/j.bushor.2019.11.006

[12] Kurakin, A., Goodfellow, I., & Bengio, S. (2018). Adversarial examples in the physical world. arXiv:1607.02533. https://arxiv.org/abs/1607.02533

[13] Li, Y., Chang, M. C., & Lyu, S. (2019). In Ictu Oculi: Exposing AI-created fake images. 2018 IEEE International Workshop on Information Forensics and Security, 1–7. https://doi.org/10.1109/WIFS.2018.8630787

[14] Lin, J., Xu, Z., Liu, Y., & Chen, J. (2020). A survey on deep reinforcement learning for cybersecurity. IEEE Access, 8, 116980–117000. https://doi.org/10.1109/ACCESS.2020.3003713

[15] Mittal, S., & Tyagi, A. (2021). Defensive distillation for robust deep neural networks. Journal of Information Security and Applications, 58, 102726. https://doi.org/10.1016/j.jisa.2021.102726

[16] Mohammadi, M., Al-Fuqaha, A., Sorour, S., & Guizani, M. (2018). Deep learning for IoT big data and streaming analytics. IEEE Communications Surveys & Tutorials, 20(4), 2923–2960. https://doi.org/10.1109/COMST.2018.2844341

[17] Nguyen, T. T., & Kim, H. (2019). Detecting network intrusions using deep learning models. IEEE Access, 7, 185638–185654. https://doi.org/10.1109/ACCESS.2019.2960612

[18] Rigaki, M., & Garcia, S. (2018). Bringing a GAN to a knife-fight: Adapting malware communication to avoid detection. 2018 IEEE Security and Privacy Workshops, 70–75. https://doi.org/10.1109/SPW.2018.00019

[19] Tolosana, R., Vera-Rodriguez, R., Fierrez, J., & Ortega-Garcia, J. (2020). Deepfakes and beyond: A survey of face manipulation and fake detection. Information Fusion, 64, 131–148. https://doi.org/10.1016/j.inffus.2020.06.001

[20] Yamin, M. M., & Katt, B. (2021). Cyber fraud in banking: Attack techniques, detection, and prevention. Journal of Information Security and Applications, 59, 102842. https://doi.org/10.1016/j.jisa.2021.102842

Published

2023-06-30

Section

Articles

How to Cite

Khan AH. From Breaches to Bank Frauds: Exploring Generative AI and Deep Learning In Modern Cybercrime. IJETCSIT [Internet]. 2023 Jun. 30 [cited 2025 Dec. 5];4(2):161-72. Available from: https://ijetcsit.org/index.php/ijetcsit/article/view/485
