This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
What Needs Attention? Prioritizing Drivers of Developers' Trust and Adoption of Generative AI
Citations: 0
Authors: 8
Year: 2025
Abstract
Generative AI (genAI) tools promise productivity gains, yet miscalibrated trust and usage friction still hinder adoption. Moreover, genAI can be exclusionary, failing to adequately support diverse users. One such aspect of diversity is cognitive diversity, which leads to diverging interaction styles (e.g., a risk-averse developer may gate genAI outputs behind tests/review; a risk-tolerant one may prototype directly and fix issues post-hoc). When an individual's cognitive styles are unsupported, additional usability barriers arise. Thus, to design tools that developers trust and use, we must first understand which factors shape their trust and intentions to use genAI at work. We developed a theoretical model of developers' trust and adoption of genAI through a large-scale survey (N = 238) conducted at GitHub and Microsoft. Using Partial Least Squares-Structural Equation Modeling (PLS-SEM), we found that aspects related to genAI's system/output quality (e.g., presentation, safety/security, performance), functional value (e.g., educational/practical benefits), and goal maintenance (ability to sustain alignment with task goals) significantly influence trust, which, alongside developers' cognitive styles (i.e., risk tolerance, technophilic motivations, computer self-efficacy), affects adoption. An Importance-Performance Matrix Analysis (IPMA) identified high-importance factors where genAI underperforms, revealing targets for design improvement. We bolster these findings by qualitatively analyzing developers' reported challenges and risks of genAI use to uncover why these gaps persist in development contexts. We offer practical guidance for designing genAI tools that support effective, trustworthy, and inclusive developer-AI interactions.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,612 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,876 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,431 citations
Fairness through awareness
2012 · 3,292 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 citations