This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Toward AI-Augmented Technical Interviewing: A Conceptual Framework for Assessing Developer Competencies in AI-Mediated Software Development
0
Citations
1
Author
2026
Year
Abstract
This paper presents a conceptual framework, the AI-Augmented Interview Framework (AAIF), requiring empirical validation before deployment. No interviews have been conducted; all thresholds, weights, and KPI linkages are conjectures pending empirical testing. The accelerating adoption of AI-powered development tools (GitHub Copilot, ChatGPT, Claude) is transforming software engineering practice. Industry surveys indicate that over 75% of professional developers now use AI coding assistants regularly (noting potential self-selection bias in survey samples), yet fewer than one in four organizations assess AI fluency during technical interviews. AAIF proposes a structured five-stage interview methodology (Stage 0 fundamentals gate plus four AI-augmented stages) for evaluating developer competencies in AI-mediated environments. The framework assesses: (1) toolchain fluency and prompt engineering, (2) AI output evaluation and critical reasoning, (3) system-oriented problem solving with AI integration, and (4) meta-reasoning about AI limitations, ethics, and failure modes. We develop evaluation rubrics with behaviorally anchored rating scales, propose configurable decision thresholds, and provide an integrated risk framework addressing bias, fairness, legal compliance, and ethical dimensions. The novelty lies in the systematic integration of established methods from industrial-organizational psychology, software engineering, and risk management for the specific and underexplored problem of assessing developers who use AI tools. A detailed four-phase empirical validation protocol is proposed as a key contribution.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,490 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,376 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,832 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,553 citations