This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
AI-powered LCNC implementations and gender: a comparative study of role attribution bias
Citations: 0
Authors: 7
Year: 2025
Abstract
This study investigates whether AI-powered Low-Code/No-Code (LCNC) solutions may unintentionally generate gender-biased responses. We developed four AI-powered LCNC implementations (spreadsheet-based, workflow-based, web-application-based, and mobile-application-based) using different generative AI models, including those from OpenAI, DeepSeek, Claude, and Google DeepMind, and evaluated their outputs in response to prompts designed to surface potential gendered associations in roles, traits, and personal preferences. Our analysis consists of two parts. First, we applied a mixed-methods structured content analysis to systematically identify potential stereotypical patterns in the responses of the AI models. Second, we compared the outputs across the different AI models for each prompt to explore variations in gender bias-related behavior. Our findings raise an ethical concern: without appropriate policies and guidelines in place, AI-powered LCNC solutions may replicate or even amplify existing societal biases. This work contributes to ongoing discussions on responsible AI integration and bias-aware design, especially within the evolving LCNC ecosystem.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,563 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,861 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,407 citations
Fairness through awareness
2012 · 3,273 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations