This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Assessing Quality and Human Acceptance of AI-generated Book Endorsements
Citations: 0 · Authors: 3 · Year: 2026
Abstract
As generative AI technologies such as large language models (LLMs) become more common in daily life, it is increasingly important to evaluate AI-generated content and human responses using real-world examples. Understanding the quality and psychological acceptance of such content is essential for the responsible use of these technologies. This study focused on endorsements for actual commercial books. We compared four generation processes: human-only, AI-only, AI-generated with human revision, and human-created with AI revision. We also examined how disclosing the generation process affected user evaluations. The results showed that AI-generated content enhanced purchase intention more than human-created content. However, when the content was identified as AI-generated, participants tended to show resistance. This resistance was reduced when humans were also involved in the generation process. These findings suggest that human-AI collaboration produces content broadly acceptable across diverse user groups and may serve as a standard approach for future content generation.
Related Works
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller
1999 · 5,633 citations
An experiment in linguistic synthesis with a fuzzy logic controller
1975 · 5,587 citations
A FRAMEWORK FOR REPRESENTING KNOWLEDGE
1988 · 4,551 citations
Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
2023 · 3,454 citations