This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Discursive behavior of generative language models in geopolitical and humanitarian contexts
Citations: 0
Authors: 1
Year: 2026
Abstract
The large-scale deployment of generative Large Language Models (LLMs) raises growing concerns about their discursive behavior when responding to value-laden geopolitical and humanitarian prompts. This study examines whether and how widely used LLMs exhibit systematic differences in tone and framing when exposed to identical prompts. Five widely deployed models (ChatGPT, Gemini, Claude, Copilot, and DeepSeek) were queried using ten open-ended prompts in Italian between March and June 2025. Responses were analyzed through a structured coding scheme based on predefined tone and framing categories. Rather than assuming discursive neutrality as an inherent property of language models, this study conceptualizes neutrality as a contextual and operational construct, observable through comparative patterns of discursive positioning. The results indicate that discursive neutrality should not be assumed a priori, as it varies systematically across models and prompting contexts. Distinct and recurrent discursive profiles emerge, reflecting differences in framing strategies, levels of assertiveness, and ethical positioning. The analysis situates these findings within broader discussions on model training regimes, alignment strategies, and design choices, highlighting their implications for accountability, transparency, and governance in AI-mediated communication. Methodological limitations, including the interpretive role of human coders, are explicitly addressed. Finally, the study proposes a structured and reproducible evaluation framework for auditing discursive behavior in generative AI systems, enabling systematic comparison of model-specific discursive profiles in value-laden contexts. Overall, the findings underscore the importance of critical and transparent assessment of generative models when they are deployed in journalism, education, and policy-relevant domains.
Related works
The global landscape of AI ethics guidelines
2019 · 4,575 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,867 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,415 citations
Fairness through awareness
2012 · 3,278 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations