This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Automated abstraction of clinical parameters of multiple myeloma from real-world clinical notes using large language models
Citations: 0
Authors: 10
Year: 2026
Abstract
Real-world evidence (RWE) is increasingly recognized as a valuable form of oncology research, but extracting fit-for-purpose real-world data (RWD) from electronic health records (EHRs) remains challenging. Manual abstraction from free-text clinical documents, although the gold standard for information extraction, is resource-intensive. RWD generation using natural language processing (NLP) has been limited by performance ceilings and annotation requirements, both of which recent large language models (LLMs) improve on. Multiple myeloma (MM) is the second most common hematological malignancy, with many opportunities for RWE to expand knowledge of the disease and its treatment. We evaluated new NLP workflows for abstracting MM-related clinical data fields from de-identified EHRs. NLP workflows (BERT- and Llama-based, using various prompt types) were developed for 12 MM-specific data fields and evaluated against manually curated data from 125 clinical notes. Statistical analysis was conducted to evaluate which characteristics of the models and data were associated with F1 scores. For 200 randomly selected patients, three illustrative data fields (MM status, transplant status, and extramedullary disease) were extracted, each with a corresponding timestamp, from all patient notes within a 120-day window of the index MM diagnosis date using the best-performing Llama workflow. The abstracted data field labels were then plotted on a timeline to display how frequently these data fields are documented in patient records. Average F1 across the 12 data fields was 0.82 for the best Llama workflow and 0.65 for the best BERT workflow. Best-workflow performance varied across the data fields (F1 = 0.59–0.99). Statistical analysis showed that model size, inter-rater reliability (IRR), variable type, and prompt design significantly predicted workflow performance, in descending order of significance (p < 0.05).
Performance improvements from larger LLMs and chain-of-thought prompting were greater in data fields that were more difficult to abstract. IRR can be used to prioritize NLP resources, increasing the efficiency of RWD generation without sacrificing data quality. Strategic selection of NLP tools using the proposed framework has the potential to inform the planning of RWD generation, ultimately accelerating insights from RWE.
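Two steps of the abstract's evaluation pipeline can be made concrete: restricting a patient's notes to the 120-day window around the index MM diagnosis date, and scoring one data field's extracted labels against manually curated labels with F1. The sketch below is illustrative only and is not the paper's implementation; all function and variable names are assumptions.

```python
from datetime import date, timedelta

def notes_in_window(notes, index_date, window_days=120):
    # Keep notes dated within +/- window_days of the index MM diagnosis date.
    # `notes` is a list of (note_date, text) tuples; names are illustrative.
    lo = index_date - timedelta(days=window_days)
    hi = index_date + timedelta(days=window_days)
    return [(d, t) for d, t in notes if lo <= d <= hi]

def f1(gold, pred):
    # F1 for a single data field, computed over (note_id, label) pairs
    # so that both the note and the assigned label must match.
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

For example, with an index date of 2024-01-15, a note from 2024-01-10 falls inside the 120-day window while one from 2024-12-01 does not; and a workflow that recovers one of two gold labels exactly scores F1 = 2/3 (precision 1.0, recall 0.5).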