This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Supporting Novice Creativity in Design Education Through Human-Centred Explainable AI
Citations: 0 · Authors: 2 · Year: 2026
Abstract
Generative artificial intelligence tools are reshaping design by enabling novice designers to produce professional-quality user interfaces rapidly. For novices, however, exposure to AI-generated outputs far beyond their own capabilities can inhibit creative growth. In this work, we investigate AI overperformance, in which superior AI outputs lower the creative confidence of novices, and explore whether human-centred, explainable AI interfaces can mitigate such effects while sustaining creative agency. We conducted a within-subjects experiment with 75 novice designers using a web-based research platform. Participants completed mobile app design tasks under three conditions: Human-Only (baseline), AI Overmatch (exposure to superior AI outputs), and XAI-Enhanced (exposure to AI outputs with an embedded explainable interface). A repeated-measures ANOVA indicated that creative self-efficacy varied significantly across conditions, F = 24.67, p < 0.001, η² = 0.18. Creative self-efficacy was significantly lower in the AI Overmatch condition (M = −1.18, SD = 0.32) than in the Human-Only condition (M = 0.08, SD = 0.15), and significantly higher in the XAI-Enhanced condition (M = 0.42, SD = 0.18), which was also accompanied by gains in creative performance across both ideation and output quality. The AI Overmatch condition thus significantly reduced creative self-efficacy and originality, but this negative effect was mitigated by the XAI-Enhanced interface, which enhanced both confidence and idea quality. Mediation analysis demonstrated that expectancy disconfirmation explains the negative impact of AI overperformance on human creativity. These findings provide constructive design principles for educational AI tools and contribute to HCI theory by demonstrating that pedagogically oriented, transparent AI supports human–AI collaboration without diminishing human agency.
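The one-way repeated-measures ANOVA reported above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' analysis: the per-condition standard deviation (0.3), the random seed, and the helper name `rm_anova_oneway` are assumptions introduced here for demonstration; only the condition means loosely mirror those reported in the abstract.

```python
import numpy as np

def rm_anova_oneway(data):
    """One-way repeated-measures ANOVA.

    data: (n_subjects, k_conditions) array, one score per subject
    per within-subjects condition. Returns (F, df_cond, df_error).
    """
    n, k = data.shape
    grand = data.mean()
    # Between-conditions sum of squares
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
    # Between-subjects sum of squares (partialled out in a within design)
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ss_error = ss_total - ss_cond - ss_subj
    df_cond, df_error = k - 1, (n - 1) * (k - 1)
    f_stat = (ss_cond / df_cond) / (ss_error / df_error)
    return f_stat, df_cond, df_error

rng = np.random.default_rng(0)
n = 75  # number of participants, as in the study
# Hypothetical condition means: Human-Only, AI Overmatch, XAI-Enhanced
means = [0.08, -1.18, 0.42]
scores = np.column_stack([rng.normal(m, 0.3, n) for m in means])

f_stat, df1, df2 = rm_anova_oneway(scores)
print(f"F({df1}, {df2}) = {f_stat:.2f}")
```

With 75 subjects and 3 conditions, the degrees of freedom are 2 and 148; a library implementation such as `statsmodels.stats.anova.AnovaRM` would give the same F for balanced data like this.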
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,792 cit.
Generative Adversarial Nets
2023 · 19,896 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,331 cit.
"Why Should I Trust You?"
2016 · 14,605 cit.
Generative adversarial networks
2020 · 13,213 cit.