This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Adversarial Ontology: A Threat Taxonomy for AI-Mediated Clinical Classification
Citations: 0 · Authors: 1 · Year: 2026
Abstract
Healthcare cybersecurity concentrates on two threat classes: data confidentiality breaches and adversarial perturbation of machine learning models. This leaves a third attack surface unexamined: the ontological layer, the classification systems (ICD-10, CPT, SNOMED CT) through which clinical encounters become structured data. This analysis is limited to administrative AI in primary care contexts. Recent deployment of administrative AI has demonstrated that this layer is already subject to measurable distortion: documentation tools inflate symptom levels across all six RDoC domains (estimated increases of 30--51%; Castro et al. 2026) while reducing clinical interventions (adjusted OR 0.83), and coding assistants shift evaluation-and-management levels upward by 8--13 percentage points. Each documented distortion mechanism constitutes a potential attack vector. This paper presents a six-class threat taxonomy for ontological attacks on clinical AI systems: (1) Ontology Poisoning, (2) Cascade Injection, (3) Semantic Confusion Attacks, (4) Documentation Flooding, (5) Knowledge Supply Chain Compromise, and (6) Feedback Loop Exploitation. Each class is characterised by attack surface, access level, detectability, harm profile, and analogous traditional attack. I argue that existing security frameworks (NIST CSF 2.0, MITRE ATT&CK) and regulatory instruments (EU AI Act, NIS2) lack coverage for ontological attacks. I propose ontological integrity, the fidelity of classification systems under AI mediation, as a security property requiring dedicated monitoring. Three limitations bound the analysis: the taxonomy is anticipatory, the threat classes derive from one pipeline architecture, and economic incentives relative to traditional cybercrime remain unquantified.