The Convergence of Deep Learning and DeepFake: A Study on AI-Generated Media Manipulation

Authors

  • Sundar Tiwari, Independent Researcher, USA
  • Writuraj Sarma, Independent Researcher, USA
  • Saswata Dey, Independent Researcher, USA

DOI:

https://doi.org/10.63282/3050-9246.IJETCSIT-V2I1P104

Keywords:

DeepFake, Deep Learning, Generative Adversarial Networks (GANs), Media Manipulation, Autoencoders, Face Swapping, AI Ethics

Abstract

Artificial intelligence and machine learning, and deep learning in particular, have profoundly reshaped digital media. One advance that has attracted considerable controversy because of its dual-use character is DeepFake technology. DeepFakes are produced by generative models, including generative adversarial networks (GANs) and autoencoders, that synthesize realistic content mimicking real people. This paper investigates deep learning in relation to DeepFake technologies, covering their background, development, applications, and impacts up to February 2021. It introduces the classes of algorithms used to create DeepFakes, surveys the entertainment, political, and cyber-security domains in which DeepFakes have been applied, and reviews the counter-measures developed to address AI-generated media forgery. The paper contributes methodologically to the emerging research on DeepFakes by combining empirical investigation, a synthesis of the literature, and an assessment of DeepFake models in their deployment contexts. Ethical issues, regulatory requirements, and directions for further research on AI media synthesis and detection are discussed at the end of the paper.
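As background to the autoencoder-based face swapping mentioned in the abstract, the sketch below illustrates the shared-encoder, dual-decoder arrangement popularized by early DeepFake tools: a single encoder learns a latent code common to two identities, each identity has its own decoder trained to reconstruct its own faces, and a swap is obtained by decoding identity A's latent code with identity B's decoder. This is a minimal sketch assuming PyTorch; the 64x64 crops, layer sizes, and toy training loop are illustrative placeholders rather than the specific models examined in the paper.

# Minimal sketch of the shared-encoder / dual-decoder autoencoder idea behind
# classic DeepFake face swapping (hypothetical shapes and layers; PyTorch assumed).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 3x64x64 face crop to a compact latent vector shared by both identities."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face for one specific identity from the shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=2e-4,
)
loss_fn = nn.L1Loss()

# Placeholder batches standing in for aligned face crops of identities A and B.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(3):  # toy loop; real training runs for many epochs on real crops
    opt.zero_grad()
    # Each decoder learns to reconstruct its own identity from the shared latent space.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "swap": encode identity A, then decode with identity B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))  # B's appearance driven by A's pose/expression
print(swapped.shape)  # torch.Size([8, 3, 64, 64])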

Published

2021-03-30

Issue

Vol. 2 No. 1 (2021)

Section

Articles

How to Cite

Tiwari S, Sarma W, Dey S. The Convergence of Deep Learning and DeepFake: A Study on AI-Generated Media Manipulation. IJETCSIT [Internet]. 2021 Mar. 30 [cited 2025 Sep. 13];2(1):28-35. Available from: https://ijetcsit.org/index.php/ijetcsit/article/view/171
