OpenAlex · Updated hourly · Last updated: 01.05.2026, 16:12

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Identifying Key Issues in Artificial Intelligence Litigation: A Machine Learning Text Analytic Approach

2025 · 0 citations · Applied Sciences · Open Access
Open full text at the publisher

Citations: 0 · Authors: 3 · Year: 2025

Abstract

The rapid proliferation of artificial intelligence (AI) systems across high-stakes domains—with global AI adoption accelerating since 2023—has created an urgent need to identify which AI challenges and issues are materializing into real-world harms so that policymakers can develop targeted regulations, organizations can implement effective risk management, and accountability mechanisms can address actual rather than speculative problems. Public concern has risen sharply: 52% of Americans now feel more concerned than excited about AI (up from 38% in 2022), and 80% believe government should maintain AI safety rules even if development slows. Yet existing approaches exhibit critical limitations that impede evidence-based governance. Ethics frameworks, while establishing normative principles across 84+ published guidelines, remain aspirational rather than empirical. Survey-based studies capture perceptions from over 48,000 respondents globally but measure expectations rather than documented harms. Incident databases catalog over 1,200 AI failures but depend on media coverage, systematically overrepresenting high-profile cases while underrepresenting routine organizational problems. This study addresses this gap by analyzing 347 AI-related U.S. litigation cases using machine learning text analytics, providing empirical evidence of AI problems that have crossed the threshold from abstract concern into documented legal conflict. Employing Latent Dirichlet Allocation (LDA) and Non-negative Matrix Factorization (NMF) topic modeling with coherence validation (NMF achieving 0.276 NPMI vs. LDA's 0.164), the analysis identifies nine distinct AI issue areas with specific case distributions: cybersecurity vulnerabilities and data breaches (116 cases, 33.4%), intellectual property and AI ownership (61 cases, 17.6%), AI misrepresentation and inflated claims (59 cases, 17.0%), criminal justice and algorithmic due process (37 cases, 10.7%), employment automation (33 cases, 9.5%), privacy and surveillance (31 cases, 8.9%), platform accountability (21 cases, 6.1%), algorithmic bias (19 cases, 5.5%), and government AI deployment (6 cases, 1.7%). The findings reveal a systematic mismatch between AI ethics discourse—which emphasizes fairness and transparency—and litigation patterns, where data security (33.4%) and intellectual property (17.6%) dominate while algorithmic bias comprises only 5.5% of cases. Most disputes are addressed through existing legal frameworks (First Amendment, Lanham Act, FOIA, Title VII) rather than AI-specific regulation, underscoring the urgent need for governance mechanisms aligned with empirically documented AI challenges.


Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education · Artificial Intelligence in Law