This is an overview page with metadata for this scientific work. The full article is available from the publisher.
From Policy to Pipeline: A Governance Framework for AI Development and Operations Pipelines
Citations: 0
Authors: 3
Year: 2025
Abstract
Artificial intelligence systems increasingly operate in high-risk domains where regulatory frameworks such as the EU AI Act, NIST AI RMF, and ISO/IEC 42001 impose explicit evidence and accountability requirements. However, existing engineering practice remains largely manual, retrospective, and decoupled from operational pipelines, resulting in inconsistent provenance, limited reproducibility, and inadequate clause-level traceability. This paper introduces Governance as Evidence for AI Pipelines (GEAP), a pipeline-native governance framework that expresses regulatory and organizational policies as machine-interpretable Governance as Code rules. GEAP integrates governance directly into a unified SDLC–MLOps execution spine by enforcing promotion decisions at five gates—Data, Training, Validation, Release, and Operations—each of which emits signed, content-addressed artifacts into a tamper-evident Evidence Backbone. These artifacts are assembled into a per-run Conformity Bundle, from which the proposed Clause-to-Artifact Traceability mechanism deterministically renders clause coverage across multiple regulatory regimes without manual crosswalks or duplicated documentation. The framework further introduces quantitative governance metrics that measure adequacy, completeness, stability, and evidence hygiene. A detailed synthetic case study of an intensive-care sepsis early-warning system demonstrates GEAP’s ability to standardize promotion control, detect policy violations, and produce replayable, audit-ready compliance manifests in a high-risk clinical context. The results show that governance can operate as a deterministic, reproducible, and verifiable pipeline property rather than an external documentation exercise, enabling more disciplined, transparent, and accountable AI deployment practices.
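The gate-and-evidence mechanism summarized above can be sketched in a few lines. This is a minimal illustrative sketch only: the function name, rule format, thresholds, and the use of SHA-256 for content addressing are assumptions for exposition, not the paper's actual implementation (which also covers signing and bundle assembly).

```python
import hashlib
import json

# The five promotion gates named in the abstract.
GATES = ["Data", "Training", "Validation", "Release", "Operations"]

def evaluate_gate(gate: str, metrics: dict, rules: dict) -> dict:
    """Evaluate machine-interpretable rules (metric name -> minimum
    threshold) at one gate and emit a content-addressed evidence record."""
    violations = [name for name, minimum in rules.items()
                  if metrics.get(name, float("-inf")) < minimum]
    record = {"gate": gate, "metrics": metrics, "rules": rules,
              "passed": not violations, "violations": violations}
    # Deterministic serialization + hashing makes the record
    # content-addressed, hence tamper-evident and replayable.
    payload = json.dumps(record, sort_keys=True).encode()
    record["artifact_id"] = hashlib.sha256(payload).hexdigest()
    return record

# Example: a hypothetical Validation-gate rule requiring AUROC >= 0.85.
evidence = evaluate_gate("Validation",
                         metrics={"auroc": 0.87},
                         rules={"auroc": 0.85})
print(evidence["passed"])  # True for this run
```

Because the record is serialized deterministically before hashing, re-running the same gate on the same inputs reproduces the same artifact identifier, which is the property the abstract relies on for replayable, audit-ready compliance manifests.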
Related Works
The global landscape of AI ethics guidelines
2019 · 4,563 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,861 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,407 citations
Fairness through awareness
2012 · 3,273 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations