This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
When Control Succeeds but Discernment Fails: Preparing for AI-Assisted Safety Research
Citations: 0
Authors: 1
Year: 2025
Abstract
AI systems are increasingly used in AI safety research, yet discernment (our ability to reliably judge correctness or catch subtle errors, which is central to safety progress) may not keep pace. Even with AI control mechanisms preventing overt misbehavior, flaws in AI-assisted safety research may go undetected, a risk amplified in AI safety research due to its complexity and the difficulty of establishing ground truth. This can fuel a feedback loop in which AI control, paradoxically, helps erode the conditions for effective risk management, diminishing our ability to identify, understand, and act upon risks. We argue that a near-term control success coupled with scalable oversight failure is likely and warrants urgent governance preparation. We recommend empirical tests, enhanced transparency and auditing, and strengthened human discernment capacity as necessary complements to AI control for achieving robustly safe advanced AI.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,611 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,877 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,431 citations
Fairness through awareness
2012 · 3,292 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 citations