OpenAlex · Updated hourly · Last updated: 28.03.2026, 16:02

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Domain-Specific Health Text Generation Through Low-Rank Adaptation of a Transformer Architecture

2025 · 0 citations
Open full text at the publisher

Citations: 0

Authors: 6

Year: 2025

Abstract

The growing demand for accessible and reliable health information has motivated the adaptation of domain-specific large language models (LLMs). LLMs perform well on general natural language processing (NLP) tasks but require fine-tuning for healthcare applications. In this work, Mistral-7B, a 7.3B-parameter Transformer model, is fine-tuned for health text generation and noncritical symptom understanding using three parameter-efficient methods: Low-Rank Adaptation (LoRA), Quantized Low-Rank Adaptation (QLoRA), and Rank-Optimized Reliable Adaptation (RoRA). A synthetic dataset comprising medical question answering, symptom descriptions, and home remedies was curated from public sources. Experimental results demonstrate that RoRA achieved the highest BLEU-4 (0.52), ROUGE-L (0.65), and F1-score (0.84), outperforming baselines such as BERT, RoBERTa, and LLaMA-7B while maintaining low GPU memory usage. This work supports the use of fine-tuned LLMs for safe and efficient health communication, especially in low-resource settings. It also demonstrates that lightweight adaptation using Parameter-Efficient Fine-Tuning (PEFT) can deliver high-quality outputs while minimizing computational demands.
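The low GPU memory usage the abstract attributes to these PEFT methods follows from the core LoRA idea: the frozen weight matrix W is left untouched, and only a low-rank update (α/r)·BA is trained. The sketch below illustrates this with NumPy; the layer dimensions, rank, and scaling factor are hypothetical (the abstract does not report the paper's actual hyperparameters), and it is a minimal illustration, not the authors' implementation.

```python
import numpy as np

# Illustrative LoRA sketch: a frozen weight W plus a trainable low-rank
# update (alpha / r) * B @ A. Dimensions and rank are assumed values.
d_out, d_in, r, alpha = 1024, 1024, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.02      # trainable down-projection
B = np.zeros((d_out, r))                       # zero-init: no change at start

def lora_forward(x):
    # Adapted forward pass: h = W x + (alpha / r) * B (A x).
    # Computing A @ x first keeps the cost low-rank throughout.
    return W @ x + (alpha / r) * (B @ (A @ x))

# Trainable-parameter comparison: LoRA trains r*(d_in + d_out) values
# instead of the full d_in * d_out matrix.
lora_params = r * (d_in + d_out)   # 16,384
full_params = d_in * d_out         # 1,048,576
print(f"LoRA trains {lora_params:,} of {full_params:,} params "
      f"({100 * lora_params / full_params:.2f}%)")
```

Because B starts at zero, the adapted layer initially reproduces the frozen model exactly; QLoRA follows the same scheme but stores W in quantized (e.g. 4-bit) form to cut memory further.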

Similar works

Authors

Institutions

Topics

Topic Modeling · Machine Learning in Healthcare · Artificial Intelligence in Healthcare and Education