OpenAlex · Updated hourly · Last updated: 30.03.2026, 18:33

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Regulation of Appropriate Prompts for Users in Text‐Based Generative Artificial Intelligence Programs

2024 · 4 citations · Software: Practice and Experience · Open Access
Open full text at publisher

4 citations · 3 authors · 2024

Abstract

ABSTRACT Background: The principle of transparency is of great significance in the governance of text‐based generative artificial intelligence (AI) technology. It requires not only that the operating principles and algorithms of text‐based generative AI be interpretable, but also that text‐based generative AI programs fulfill basic prompting obligations to users, especially because the truth and accuracy of the generated output cannot be guaranteed. Aims: This study explores the classification and frequency of prompt methods in text‐based generative AI and proposes that laws should require different prompt rules for different user categories, addressing a gap in existing regulations. Methods: The experiment was conducted from June 1 to 15, 2024 at a school, a scientific research company, a media organization hall, and a railway station lounge, using the Kimi, Tongyi, ERNIE Bot, and iFLYTEK Spark programs, among others, as the text‐based generative AI programs. Discussion: The results show that minor users aged 8‐17 score only 6 points in their perception of the authenticity of the output of generative AI programs; their awareness of possible falsity during use reaches 4 points; and the degree to which they are misled is as high as 13 points. Conclusion: The study concludes that for individuals needing special protection, such as minors, prompts should accompany every instance of content generation, while for other user groups prompts should be issued when necessary. To enhance prompt effectiveness, the program should display permanent prompts in prominent positions on the interface, using noticeable fonts and clear, well‐designed wording.
The research reveals that minors have an insufficient perception of the authenticity of generative AI output and face a significant risk of being misled, underscoring the importance of a clear prompt for this group every time content is generated.

Topics

Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI · Explainable Artificial Intelligence (XAI)