Navigating the Challenges of AI-Generated Content: Examining Public Trust, Accuracy, and Ethical Implications
DOI:
https://doi.org/10.53748/jmis.v2i2.54

Keywords:
Artificial Intelligence, Public Trust, Model Collapse, AI Inbreeding, Content Accuracy, Detection Tools, Online Information, Media Industry, Ethical Guidelines

Abstract
Purpose - This research analyzes how the growth of AI-generated content affects the accuracy and reliability of online information. Specifically, it examines the challenges of detecting AI-generated content, given the limitations of detection tools such as ZeroGPT and OpenAI's Text Classifier, and explores how these challenges may influence public trust in online information.

Methodology - This study employs a mixed-methods approach, combining quantitative survey data with qualitative case-study analysis of AI-generated content controversies, such as articles published by CNET and Microsoft. Data were analyzed using Structural Equation Modeling (SEM) to evaluate the relationships between AI usage and user trust.

Findings - The results indicate a positive but statistically insignificant relationship between AI usage and public trust. Issues such as model collapse and AI inbreeding make it harder to maintain content accuracy, which in turn undermines the trustworthiness of AI-generated information.

Novelty - This research contributes to the growing body of knowledge on AI-generated content by focusing on its impact on public trust, a relatively underexplored area. The study also introduces "model collapse" and "AI inbreeding" as critical factors that may undermine the reliability of AI-generated information.

Research Implications - The findings have practical implications for media industries and AI developers. Enhancing AI algorithms to improve content accuracy and reliability, combined with stronger human oversight, could help mitigate the risks associated with AI-generated content and restore public trust in online information. The study also calls for more advanced detection tools and ethical guidelines to govern the use of AI in information dissemination.
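The "model collapse" and "AI inbreeding" dynamics discussed above can be illustrated with a toy simulation (not part of the study's methodology): a simple generative model, here a one-dimensional Gaussian, is repeatedly refit on its own synthetic output instead of fresh human data. Over many such generations the fitted distribution's diversity (its standard deviation) degrades, a minimal sketch of why recursive training on AI-generated content erodes information quality. All names and parameters here are illustrative assumptions.

```python
import random
import statistics

def fit_and_resample(data):
    # "Train" a toy generative model: fit a Gaussian to the data via
    # maximum likelihood, then emit a fresh synthetic dataset from it.
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)  # MLE estimate (slightly biased low)
    return [random.gauss(mu, sigma) for _ in range(len(data))]

random.seed(42)
# Generation 0: "human-written" data drawn from a true distribution.
real = [random.gauss(0.0, 1.0) for _ in range(50)]

# Each subsequent model sees only the previous model's output
# ("AI inbreeding"); estimation error compounds across generations.
gen = real
for _ in range(2000):
    gen = fit_and_resample(gen)

# Diversity of the final generation has collapsed relative to the original.
print("original spread:", round(statistics.pstdev(real), 3))
print("after recursion:", round(statistics.pstdev(gen), 3))
```

The shrinkage arises because each refit introduces finite-sample estimation error, and sampling from the fitted model makes that error permanent; with no fresh human data entering the loop, the variance drifts toward zero.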
License
Copyright (c) 2022 Journal of Multidisciplinary Issues
This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors retain copyright and grant the journal right of first publication, with the work simultaneously licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0) that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.