Impact of AI-Generated Content on AI Technology: Exploring Model Collapse and Its Implications

Authors

  • Elfindah Princes, Bina Nusantara University

DOI:

https://doi.org/10.53748/jmis.v2i2.55

Keywords:

Artificial Intelligence, Generative AI, Model Collapse, Model Autography Disorder, Habsburg AI, AI Ethics, AI Training

Abstract

Purpose - This research investigates the phenomenon of "model collapse" in Generative Adversarial Networks (GANs) that occurs when AI models are trained on AI-generated content. The study focuses on the implications of model collapse for the quality of AI outputs, explores new concepts such as "Model Autography Disorder" (MAD) and "Habsburg AI," and discusses the broader ethical and social impacts of AI self-consumption.

Methodology - The study uses a mixed-methods approach, combining simulation experiments with qualitative interviews. GAN models were trained on AI-generated data to simulate model collapse, and various techniques were applied to mitigate it. Expert interviews provided insight into the ethical considerations and future directions for generative AI development.

Findings - The research demonstrates that training on synthetic data significantly degrades the performance and diversity of AI outputs. Although some mitigation techniques show promise, none fully prevents the collapse. Concepts such as MAD and Habsburg AI offer deeper insight into the risks of AI self-consumption and its broader implications for AI-driven systems.

Novelty - The introduction of new terms such as "Model Autography Disorder" and "Habsburg AI" adds unique perspectives to the discourse on AI sustainability. The study is among the first to examine the ethical and technical challenges posed by AI self-consumption and its long-term effects on AI-generated content.

Research Implications - This study underscores the necessity for stricter guidelines on using AI-generated content in training models to prevent model collapse. It also highlights the need for hybrid training methods and ongoing ethical consideration to ensure the quality, reliability, and sustainability of AI-driven systems.
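The self-consuming training loop behind the methodology can be sketched with a toy one-dimensional model (an illustrative assumption standing in for the paper's actual GAN setup): each generation fits a Gaussian to the previous generation's outputs, resamples from the fitted model, and reuses those samples as the next training set, with a mild curation bias toward "typical" outputs. Output diversity, measured by the fitted standard deviation, shrinks generation over generation.

```python
import random
import statistics

def fit(samples):
    # "Model" here is just a Gaussian summarized by sample mean and stdev,
    # a stand-in for training a generative model on the data.
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean, stdev, n, rng):
    # Draw n synthetic samples from the fitted "model".
    return [rng.gauss(mean, stdev) for _ in range(n)]

rng = random.Random(42)
n = 1000
data = generate(0.0, 1.0, n, rng)  # generation 0: "real" data, N(0, 1)
stdevs = []

for generation in range(10):
    mean, stdev = fit(data)
    stdevs.append(stdev)
    synthetic = generate(mean, stdev, n, rng)
    # Mild curation bias: keep only the 90% of outputs closest to the mean,
    # mimicking preferential reuse of typical AI-generated content.
    synthetic.sort(key=lambda x: abs(x - mean))
    data = synthetic[: int(0.9 * n)]

# Fitted stdev falls from roughly 1.0 toward zero across generations:
# the model progressively forgets the tails of the original distribution.
print([round(s, 3) for s in stdevs])
```

The truncation step is what drives the collapse: each fit-sample-curate cycle multiplies the spread by a factor below one, so diversity decays geometrically. This mirrors, in miniature, why partial mitigation (here, a fairly generous 90% retention rate) slows but does not prevent the loss of output diversity.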

Published

02-01-2024

How to Cite

Elfindah Princes. (2024). Impact of AI-Generated Content on AI Technology: Exploring Model Collapse and Its Implications. Journal of Multidisciplinary Issues, 4(1), 2–14. https://doi.org/10.53748/jmis.v2i2.55