This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Operationalizing Responsible AI Policies with LLMs: an End-to-End Monitoring Prototype
Citations: 0
Authors: 4
Year: 2026
Abstract
As AI governance requirements continue to emerge, policy experts and engineers are increasingly responsible for demonstrating that AI systems comply with ethical and regulatory expectations. In practice, this work involves interpreting high-level, frequently ambiguous policy language and translating it into concrete, testable compliance practices. These processes are time-consuming and difficult to scale as regulations and systems evolve. We present the Responsible AI Monitoring Platform (RAMP), a human-centered system that supports experts in making AI governance work more systematic and transparent. RAMP extracts policy statements from governance documents, decomposes them into atomic obligations, and proposes system-specific rules linked to available evaluations, while surfacing gaps where requirements remain unverifiable or underspecified. Human experts remain the final decision-makers, ensuring that extracted policies are reviewed before downstream use. In a pilot focused on conversational systems, RAMP provides interpretable compliance evidence and decision-oriented summaries through an interactive dashboard, supporting traceability in responsible AI workflows.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,703 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,883 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,498 citations
Fairness through awareness
2012 · 3,300 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,185 citations