---
license: mit
datasets:
- hongzhouyu/FineMed-SFT
language:
- en
- zh
base_model:
- meta-llama/Llama-3.1-8B
library_name: transformers
tags:
- medical
---
<div align="center">
<h1>
FineMedLM
</h1>
</div>
<div align="center">
<a href="https://github.com/hongzhouyu/FineMed" target="_blank">GitHub</a> | <a href="https://arxiv.org/abs/2501.09213" target="_blank">Paper</a>
</div>
# Introduction
**FineMedLM** is a medical chat LLM built on Llama-3.1-8B with supervised fine-tuning (SFT) on meticulously crafted synthetic data. Further training with Direct Preference Optimization (DPO) equips the model with stronger deep-reasoning capabilities, culminating in [FineMedLM-o1](https://huggingface.co/hongzhouyu/FineMedLM-o1).
For more information, visit our [GitHub repository](https://github.com/hongzhouyu/FineMed).
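As a rough illustration of what the preference-tuning stage looks like, here is a minimal sketch using `trl`'s `DPOTrainer` (recent `trl` versions). This is not the authors' actual training pipeline; the preference dataset name and hyperparameters below are placeholders, and the real recipe is described in the paper.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Start from the SFT checkpoint; DPO then optimizes over preference pairs.
model = AutoModelForCausalLM.from_pretrained("hongzhouyu/FineMedLM")
tokenizer = AutoTokenizer.from_pretrained("hongzhouyu/FineMedLM")

# Hypothetical preference dataset with the "prompt"/"chosen"/"rejected"
# columns that DPOTrainer expects.
preference_data = load_dataset("your-org/medical-preference-pairs", split="train")

args = DPOConfig(
    output_dir="finemedlm-dpo",
    beta=0.1,  # strength of the KL penalty toward the reference model
    per_device_train_batch_size=2,
)
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=preference_data,
    processing_class=tokenizer,
)
trainer.train()
```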
# Usage
You can use FineMedLM the same way you would use `Llama-3.1-8B-Instruct`:
(⚠️**Note**: use the system prompt shown below to get better inference results.)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

main_model_name = "hongzhouyu/FineMedLM"

# device_map="auto" places the model weights across available devices.
model = AutoModelForCausalLM.from_pretrained(main_model_name, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(main_model_name)
prompt = (
"""The following are multiple choice questions (with answers) about health. Think step by step and then finish your answer with "the answer is (X)" where X is the correct letter choice.
Question:
Polio can be eradicated by which of the following?
Options:
A. Herbal remedies
B. Use of antibiotics
C. Regular intake of vitamins
D. Administration of tetanus vaccine
E. Attention to sewage control and hygiene
F. Natural immunity acquired through exposure
G. Use of antiviral drugs
Answer: Let's think step by step.
"""
)
messages = [
{"role": "system", "content": "You are a helpful professional doctor. The user will give you a medical question, and you should answer it in a professional way."},
{"role": "user", "content": prompt}
]
# Render the conversation with the chat template and append the assistant
# header so the model begins generating its answer.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)

# The chat template already inserts the BOS token, so do not add special
# tokens a second time here.
model_inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False).to(model.device)

print("-----start generate-----")
generated_ids = model.generate(
    **model_inputs,  # passes attention_mask along with input_ids
    max_new_tokens=2048,
    eos_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens, dropping the echoed prompt.
answer = tokenizer.decode(
    generated_ids[0][model_inputs.input_ids.shape[-1]:], skip_special_tokens=True
)
print(answer)
```
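For quick experiments, recent `transformers` versions also accept chat-format messages directly in the text-generation `pipeline`. A minimal sketch (the user question below is illustrative, not from the original example):

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="hongzhouyu/FineMedLM", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful professional doctor. The user will give you a medical question, and you should answer it in a professional way."},
    {"role": "user", "content": "What measures are effective for eradicating polio?"},
]

# The pipeline applies the chat template internally and returns the full
# conversation, with the model's reply appended as the last message.
result = pipe(messages, max_new_tokens=512)
print(result[0]["generated_text"][-1]["content"])
```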
# Citation
```
@misc{yu2025finemedlmo1enhancingmedicalreasoning,
title={FineMedLM-o1: Enhancing the Medical Reasoning Ability of LLM from Supervised Fine-Tuning to Test-Time Training},
author={Hongzhou Yu and Tianhao Cheng and Ying Cheng and Rui Feng},
year={2025},
eprint={2501.09213},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.09213},
}
```