This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
ChatHTN: a consultation model for hypertension
Citations: 0
Authors: 4
Year: 2026
Abstract
The rapid development of large language models (LLMs) has greatly advanced natural language processing (NLP). While these models perform remarkably well in tasks such as text generation and translation, they still face challenges in highly specialized domains such as hypertension, where domain expertise and personalization are crucial. To address these limitations, we introduce a generative framework for intelligent hypertension consultation that combines a domain-specific knowledge graph, a multi-task fine-tuning strategy, and a specialized dataset. The knowledge graph enhances the model’s medical knowledge, while the multi-task fine-tuning mechanism optimizes tasks like medical entity recognition and etiology classification to ensure consistency. To further strengthen reasoning ability, we construct HTN-5M, a large-scale Chinese dataset that embeds chain-of-thought (CoT) reasoning in a structured triplet format (input, output, CoT), supporting both question-answer generation and auxiliary learning. Experimental results demonstrate that our approach outperforms strong baselines, achieving a 16.25% improvement over DeepSeek-LLM-67B-base on the CMB benchmark and an approximate 10% gain over HuatuoGPT across three medical datasets.
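The abstract describes HTN-5M samples as structured (input, output, CoT) triplets. A minimal sketch of such a record might look as follows; the field names, class name, and sample content are illustrative assumptions, not taken from the HTN-5M dataset itself.

```python
from dataclasses import dataclass

@dataclass
class CoTSample:
    """Hypothetical HTN-5M-style record: question, reasoning, answer."""
    input: str   # patient question
    cot: str     # intermediate chain-of-thought reasoning
    output: str  # final consultation answer

def to_triplet(s: CoTSample) -> tuple[str, str, str]:
    """Flatten a sample into the (input, output, CoT) triplet order."""
    return (s.input, s.output, s.cot)

sample = CoTSample(
    input="My blood pressure reads 150/95 mmHg. Is that high?",
    cot="150/95 mmHg exceeds the common 140/90 mmHg threshold, "
        "so the reading is elevated.",
    output="Yes, 150/95 mmHg is above the typical 140/90 mmHg "
           "threshold; a physician should evaluate it.",
)
```

Keeping the reasoning (`cot`) as a separate field, rather than folding it into the answer, is what lets a triplet serve both question-answer generation and auxiliary learning, as the abstract describes.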
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,557 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,447 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,944 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,797 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations