This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Acceptability Scale for the Use of Large Language Models (LLMs) by Project Teams: Development and Preliminary Validation
Citations: 0
Authors: 5
Year: 2026
Abstract
The use of Large Language Models (LLMs) in organizational contexts has grown rapidly, particularly in project management activities. Despite this expansion, a notable methodological gap can be observed in the literature: the absence of psychometrically validated instruments capable of measuring the acceptability of these technologies prior to their effective adoption, especially in project-oriented governance contexts. Traditional technology adoption models predominantly focus on a posteriori assessment of individual use, providing limited support for prospective analyses that inform strategic decision-making and organizational coordination mechanisms. In response to this gap, this study aims to develop and validate a psychometric scale that indirectly measures the acceptability of LLM use by project management teams, through outcome beliefs and with behavioral predispositions serving as structural proxies of the latent construct, focusing on a priori judgments that precede the effective adoption of the technology. The initial scale, composed of 17 items, underwent content validation and was administered to a sample of 154 project management professionals. The latent structure was examined through Exploratory and Confirmatory Factor Analyses, resulting in the refinement of the instrument to 13 items distributed across two correlated factors. The results indicate that LLM acceptability is adequately represented by a bidimensional structure comprising the dimensions Intention/Predisposition and Trust/Perceived Benefit, both demonstrating high internal consistency and good statistical fit, with nomological validity evidenced by significant associations with respondents’ self-reported LLM usage frequency. These findings reinforce the conceptualization of acceptability as a prospective and multidimensional construct, relevant for supporting governance decisions and the adoption of artificial intelligence-based technologies in project-oriented organizational systems.
The indirect measurement approach adopted here is theoretically grounded in the premise that a priori acceptability is not directly observable but is constituted by cognitive and dispositional beliefs formed prior to use.
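The abstract reports high internal consistency for the two retained factors. As a minimal sketch of how such a check is typically computed, the snippet below estimates Cronbach's alpha on simulated responses; the item counts per factor, loadings, factor correlation, and noise level used here are illustrative assumptions, not values taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a hypothetical 13-item scale with two correlated latent
# factors (7 items on factor 1, 6 on factor 2) for a sample of 154
# respondents, mirroring the item and sample counts in the abstract.
# Loadings (0.8), factor correlation (0.6), and error SD (0.5) are
# illustrative assumptions.
n = 154
latent = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n)
loadings = np.zeros((13, 2))
loadings[:7, 0] = 0.8
loadings[7:, 1] = 0.8
items = latent @ loadings.T + rng.normal(0.0, 0.5, size=(n, 13))

def cronbach_alpha(x: np.ndarray) -> float:
    """Internal-consistency estimate for a block of item scores."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print("alpha, factor 1:", round(cronbach_alpha(items[:, :7]), 2))
print("alpha, factor 2:", round(cronbach_alpha(items[:, 7:]), 2))
```

With reliable items, both estimates land well above the conventional 0.7 threshold; in the actual study, such coefficients would be computed on the observed responses to the 13 retained items rather than simulated data.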
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,539 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,426 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,921 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,586 citations