OpenAlex · Updated hourly · Last updated: May 1, 2026, 00:37

Zaifu Zhan

47 works · 127 citations

University of Minnesota · US

Relevant works

Most-cited publications in Health & MedTech

Large language models for disease diagnosis: a scoping review

2025 · 61 citations · npj Artificial Intelligence

Benchmarking of Large Language Models for the Dental Admission Test

2025 · 10 citations · Health Data Science

Automating expert-level medical reasoning evaluation of large language models

2025 · 5 citations · npj Digital Medicine

Mitigating Ethical Issues for Large Language Models in Oncology: A Systematic Review

2025 · 2 citations · JCO Clinical Cancer Informatics

Benchmarking GPT-5 for biomedical natural language processing

2025 · 2 citations · arXiv

CancerLLM: a large language model in cancer domain

2026 · 1 citation · npj Digital Medicine

An evaluation of DeepSeek Models in Biomedical Natural Language Processing

2025 · 1 citation · arXiv

Automating Expert-Level Medical Reasoning Evaluation of Large Language Models

2025 · 1 citation · arXiv

To Reason or Not to: Selective Chain-of-Thought in Medical Question Answering

2026 · 0 citations · arXiv

Can Large Language Models Self-Correct in Medical Question Answering? An Exploratory Study

2026 · 0 citations · arXiv

MedCL-Bench: Benchmarking stability-efficiency trade-offs and scaling in biomedical continual learning

2026 · 0 citations · arXiv

An Underexplored Frontier: Large Language Models for Rare Disease Patient Education and Communication -- A scoping review

2026 · 0 citations · arXiv

Auditing frontier general-purpose large language models in biomedical tasks: reasoning gains, extraction limits, and benchmark reliability

2026 · 0 citations · Research Square
