OpenAlex · Updated hourly · Last updated: 06.04.2026, 11:01

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

From Automation to Validation: Assessing Human–LLM Agreement in Systematic Reviews of Venture Capital Investment Strategies

2026 · 0 citations · 3 authors

Open full text at the publisher

Abstract

The exponential expansion of venture capital (VC) research has amplified the need for scalable, reproducible evidence-synthesis methods. Large Language Models (LLMs) can automate title-and-abstract screening, yet their reliability compared with expert reviewers remains uncertain. Building upon our previous CINTI 2025 study, which explored prompt and model effects on LLM-assisted screening, this paper advances from automation toward validation by introducing a human-verified gold standard. Using 246 manually classified VC records, four deep-semantic model executions (ChatGPT and Claude via API and web) were evaluated against human inclusion decisions. The ensemble achieved 61 % overall accuracy, precision = 70.6 %, recall = 93.3 %, and Cohen’s κ = 0.72, indicating substantial agreement. Cases of full model unanimity (YYYY or NNNN) reached 88–98 % alignment with human judgments, while mixed outputs showed only ≈ 10 % reliability. These results confirm that LLMs can effectively support systematic screening when consensus and uncertainty are properly managed. The findings establish agreement strength as a quantitative reliability proxy and provide an empirical benchmark for hybrid human–AI review workflows in venture-capital evidence synthesis.
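The agreement metrics named in the abstract (accuracy, precision, recall, Cohen's κ) and the YYYY/NNNN unanimity rule can be sketched as below. This is a minimal illustration, not the paper's code: the confusion-matrix counts and function names are assumptions chosen for demonstration, not the study's data.

```python
def screening_metrics(tp, fp, fn, tn):
    """Return (accuracy, precision, recall, kappa) from a 2x2 confusion
    matrix of model inclusion decisions vs. the human gold standard."""
    n = tp + fp + fn + tn
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # Cohen's kappa corrects observed agreement for the chance agreement
    # expected from each rater's marginal include/exclude rates.
    p_o = accuracy
    p_e = ((tp + fn) / n) * ((tp + fp) / n) + ((tn + fp) / n) * ((tn + fn) / n)
    kappa = (p_o - p_e) / (1 - p_e)
    return accuracy, precision, recall, kappa

def consensus(votes: str) -> str:
    """Route a four-execution vote string (e.g. 'YYNY') by unanimity."""
    if set(votes) == {"Y"}:
        return "include"       # unanimous YES: high alignment with humans
    if set(votes) == {"N"}:
        return "exclude"       # unanimous NO
    return "human review"      # mixed outputs: defer to the reviewer

acc, prec, rec, kappa = screening_metrics(tp=84, fp=35, fn=6, tn=121)
print(f"precision={prec:.1%} recall={rec:.1%} kappa={kappa:.2f}")
print(consensus("YYYY"), consensus("YNNY"))
```

Routing only unanimous votes to automatic decisions and sending mixed votes to a human reviewer is the pattern the abstract's findings support: unanimity is treated as a quantitative reliability proxy.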


Topics

Private Equity and Venture Capital
FinTech, Crowdfunding, Digital Finance
Artificial Intelligence in Healthcare and Education