This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The Trust Paradox in AI: Structural Substitution and the Risk of Digital Colonization in Canada
Citations: 0
Authors: 1
Year: 2026
Abstract
Large language models are increasingly used for learning and decision-making, yet little empirical work has examined how they respond to geographically ambiguous prompts that do not specify a national context. This study examines whether AI-generated responses default to United States–specific institutional frameworks and how such defaults operate for users in Canada. Using a prompt-based experimental design, 72 geographically ambiguous prompts spanning ten structural domains and one values-based domain were administered five times each using the free version of ChatGPT (ChatGPT 5.2-5 Mini) under typical access conditions, including observed transitions between available free-tier models, yielding 360 responses. Analysis shows a clear divergence between institutional framing and value expression. While responses to values-based prompts consistently emphasized communitarian and public-oriented principles familiar to Canadian users, structurally framed prompts frequently relied on U.S.-specific laws, agencies, and procedures presented without qualification. This combination appears to increase user trust while simultaneously introducing foreign institutional assumptions, making national defaulting more difficult to notice in practice. By documenting this pattern, the study contributes to research on digital colonization by identifying institutional defaulting as a distinct form of epistemic influence. The findings have implications for education, digital sovereignty, and AI governance, particularly for students and other users who rely on free-tier AI systems for everyday informational and institutional guidance.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations