OpenAlex · Updated hourly · Last updated: 26.03.2026, 03:30

Girish N. Nadkarni

950 works · 48,498 citations

Icahn School of Medicine at Mount Sinai · US

Relevant works

Most-cited publications in health & medtech

Comparing ChatGPT and GPT-4 performance in USMLE soft skill assessments

2023 · 307 citations · Scientific Reports

Transforming Cardiovascular Care With Artificial Intelligence: From Discovery to Practice

2024 · 155 citations · Journal of the American College of Cardiology

Large Language Models and Empathy: Systematic Review

2024 · 117 citations · Journal of Medical Internet Research

Large Language Models Are Poor Medical Coders — Benchmarking of Medical Code Querying

2024 · 110 citations · NEJM AI

Large language models for generating medical examinations: systematic review

2024 · 94 citations · BMC Medical Education

Artificial intelligence-enabled decision support in nephrology

2022 · 85 citations · Nature Reviews Nephrology

Assessing GPT-4 multimodal performance in radiological image analysis

2024 · 66 citations · European Radiology

Evaluating the role of ChatGPT in gastroenterology: a comprehensive systematic review of applications, benefits, and limitations

2023 · 52 citations · Therapeutic Advances in Gastroenterology

Artificial Intelligence in Cardiovascular Care—Part 2: Applications

2024 · 51 citations · Journal of the American College of Cardiology

Evaluating and addressing demographic disparities in medical large language models: a systematic review

2025 · 50 citations · International Journal for Equity in Health

Applications of large language models in psychiatry: a systematic review

2024 · 49 citations · Frontiers in Psychiatry

Large language models: a primer and gastroenterology applications

2024 · 41 citations · Therapeutic Advances in Gastroenterology

Large Language Models (LLMs) and Empathy – A Systematic Review

2023 · 40 citations

Implications of the Use of Artificial Intelligence Predictive Models in Health Care Settings

2023 · 39 citations · Annals of Internal Medicine

Multi-model assurance analysis showing large language models are highly vulnerable to adversarial hallucination attacks during clinical decision support

2025 · 37 citations · Communications Medicine