LLaVA-Med v1.5 Mistral for Chest X-Ray Analysis
Project Page: SelfSynthX
Paper on arXiv: Enhancing Cognition and Explainability of Multimodal Foundation Models with Self-Synthesized Data
This model is a fine-tuned multimodal foundation model based on LLaVA-Med v1.5 Mistral-7B, optimized for analyzing chest X-ray images and detecting pneumonia using the Chest X-Ray Images (Pneumonia) dataset from Kaggle.
Key Details
- Base Model: LLaVA-Med v1.5 Mistral-7B
- Dataset: Chest X-Ray Images (Pneumonia)
- Innovation:
  - Self-Synthesized Data: Enhances interpretability by generating human-understandable diagnostic insights.
  - Domain-Specific Fine-Tuning: Optimized on medical imaging for accurate pneumonia classification.
  - Iterative Training: Uses rejection sampling to improve diagnostic accuracy and explanation quality (see the sketch after this list).
- Intended Use: Assisting in pneumonia diagnosis from chest X-ray images with detailed, explainable outputs.
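
To make the iterative-training idea concrete, here is a minimal sketch of a rejection-sampling filter in the spirit of the paper. The helpers generate_explanations and consistency_score are hypothetical placeholders for illustration, not part of this repository or the transformers API:

def rejection_sample(image, label, generate_explanations, consistency_score,
                     n_candidates=8, threshold=0.8):
    """Keep only self-synthesized explanations whose predicted label matches
    the ground truth and whose quality score clears a threshold; the accepted
    examples feed the next fine-tuning round."""
    accepted = []
    for candidate in generate_explanations(image, n=n_candidates):
        if candidate.predicted_label == label and consistency_score(candidate) >= threshold:
            accepted.append(candidate)
    return accepted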
How to Use
from PIL import Image
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration
# Load the fine-tuned model and its processor
model_id = "YuchengShi/llava-med-v1.5-mistral-7b-chest-xray"
model = LlavaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
).to("cuda")
processor = AutoProcessor.from_pretrained(model_id)
# Single-turn conversation with an image placeholder for the X-ray
conversation = [
{
"role": "user",
"content": [
{"type": "text", "text": "Can you analyze this chest X-ray?"},
{"type": "image"},
],
},
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
# Open a local chest X-ray image; convert to RGB since X-rays are often grayscale
image_file = "chest-xray/test1.png"
raw_image = Image.open(image_file).convert("RGB")
inputs = processor(images=raw_image, text=prompt, return_tensors='pt').to("cuda", torch.float16)
# Greedy decoding, capped at 200 new tokens
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
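
If the fp16 checkpoint does not fit in GPU memory, the model can also be loaded in 4-bit with bitsandbytes. This is a generic transformers loading option, not a configuration specific to this checkpoint:

from transformers import BitsAndBytesConfig

# Optional: 4-bit loading to reduce GPU memory (requires the bitsandbytes package)
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map="auto",
)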
Training & Evaluation
- Training: Fine-tuned with LoRA on the Chest X-Ray Images (Pneumonia) dataset using iterative rejection sampling; a configuration sketch follows below.
- Evaluation: Achieves robust pneumonia classification with interpretable diagnostic explanations.
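
As a rough illustration of the LoRA setup mentioned above: the exact rank, target modules, and hyperparameters used for this checkpoint are not published, so the values below are placeholders, shown with the standard peft API:

from peft import LoraConfig, get_peft_model

# Hypothetical LoRA configuration; rank, alpha, and target modules are illustrative only
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()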
Citation
If you use this model, please cite:
@inproceedings{shi2025enhancing,
  title={Enhancing Cognition and Explainability of Multimodal Foundation Models with Self-Synthesized Data},
  author={Yucheng Shi and Quanzheng Li and Jin Sun and Xiang Li and Ninghao Liu},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=lHbLpwbEyt}
}