# med-gpt-oss-20b
A fine-tuned version of OpenAI's GPT-OSS-20B model for medical reasoning and instruction following.
## Model Details

- Base Model: openai/gpt-oss-20b
- Model Type: Causal Language Model
- Languages: English
- License: Apache 2.0
## Usage

### Main Model

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "Tonic/med-gpt-oss-20b",
    device_map="auto",
    torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained("Tonic/med-gpt-oss-20b")

# Generate text
input_text = "What are the common symptoms of iron-deficiency anemia?"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
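
GPT-OSS is a chat-tuned model, so for conversational use it is usually better to go through the tokenizer's chat template than to pass raw strings. A minimal sketch, assuming the fine-tune keeps the base model's chat template (the system prompt here is illustrative, not something shipped with the model):

```python
# Build a chat-formatted prompt; the system message is an example, not a
# prompt distributed with this model.
messages = [
    {"role": "system", "content": "You are a careful medical reasoning assistant."},
    {"role": "user", "content": "Explain the first-line treatment for type 2 diabetes."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant turn marker
    return_tensors="pt",
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```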
## Training Information

### Training Configuration

- Base Model: openai/gpt-oss-20b
- Dataset: FreedomIntelligence/medical-o1-reasoning-SFT
- Training Config: GPT-OSS Configuration
- Trainer Type: SFTTrainer
### Training Parameters

- Batch Size: 4
- Gradient Accumulation Steps: 16 (effective batch size 4 × 16 = 64 per device)
- Learning Rate: 2e-4
- Max Epochs: 1
- Sequence Length: 2048
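
The training script itself is not published in this repository, but as a rough sketch, the parameters above map onto a TRL `SFTTrainer` setup like the following. The dataset column names, the prompt formatting, and `bf16=True` are assumptions, and argument names vary slightly across TRL versions:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Flatten each example into a single "text" field. The column names
# (Question, Complex_CoT, Response) are assumptions based on the public
# dataset card, not something documented in this repository.
def to_text(example):
    return {
        "text": f"Question: {example['Question']}\n"
                f"Reasoning: {example['Complex_CoT']}\n"
                f"Answer: {example['Response']}"
    }

dataset = load_dataset("FreedomIntelligence/medical-o1-reasoning-SFT", "en", split="train")
dataset = dataset.map(to_text, remove_columns=dataset.column_names)

# Hyperparameters from the list above; everything else is a TRL default.
config = SFTConfig(
    output_dir="med-gpt-oss-20b",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=16,
    learning_rate=2e-4,
    num_train_epochs=1,
    max_seq_length=2048,  # renamed to max_length in newer TRL releases
    bf16=True,
)

trainer = SFTTrainer(
    model="openai/gpt-oss-20b",
    args=config,
    train_dataset=dataset,
)
trainer.train()
```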
### Training Infrastructure

- Hardware: GPU (H100/A100)
- Monitoring: Trackio integration
- Experiment: med-track
## Model Architecture

This is a fine-tuned version of the GPT-OSS-20B model with the following specifications:

- Base Model: openai/gpt-oss-20b
- Parameters: ~21B total (Mixture-of-Experts, ~3.6B active per token)
- Context Length: 2048 (fine-tuning sequence length)
- Languages: English
- Architecture: Mixture-of-Experts Transformer causal language model
## Performance

The model is intended to provide:

- Medical Reasoning: step-by-step reasoning over medical questions, following the chain-of-thought style of the fine-tuning data
- Instruction Following: answering medical instructions and questions in a conversational format
## Limitations

- Context Length: Limited by the model's maximum sequence length
- Bias: May inherit biases from the training data
- Factual Accuracy: May generate incorrect or outdated information
- Safety: Should be used responsibly with appropriate safeguards
## Training Data

The model was fine-tuned on:

- Dataset: FreedomIntelligence/medical-o1-reasoning-SFT
- Size: ~20K samples
- Format: reasoning traces (question, chain of thought, final response)
- Languages: English
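
To inspect the raw data, a quick sketch (the `"en"` config name and the expected column names come from the public dataset card, and are assumptions as far as this repository is concerned):

```python
from datasets import load_dataset

# Load the English split and inspect its structure.
ds = load_dataset("FreedomIntelligence/medical-o1-reasoning-SFT", "en", split="train")
print(ds)            # row count and column names
print(ds[0].keys())  # expected: Question, Complex_CoT, Response
```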
## Citation

If you use this model in your research, please cite:

```bibtex
@misc{med_gpt_oss_20B,
  title={{med-gpt-oss-20b}},
  author={Joseph "Tonic" Pollack},
  year={2024},
  url={https://huggingface.co/Tonic/med-gpt-oss-20b}
}
```
## License

This model is licensed under the Apache 2.0 License.
## Acknowledgments

- Base Model: GPT-OSS-20B by OpenAI
- Training Framework: PyTorch, Transformers, PEFT
- Monitoring: Trackio integration
- Quantization: torchao library
## Support

For questions and support:

- Open an issue on the Hugging Face repository
- Check the model documentation
- Review the training logs and configuration
## Repository Structure

```text
Tonic/med-gpt-oss-20b/
├── README.md (this file)
├── config.json
├── pytorch_model.bin
├── tokenizer.json
└── tokenizer_config.json
```
## Usage Examples

### Text Generation

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Tonic/med-gpt-oss-20b")
tokenizer = AutoTokenizer.from_pretrained("Tonic/med-gpt-oss-20b")

text = "The most common causes of chest pain are"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Conversation

```python
def chat_with_model(prompt, max_new_tokens=100):
    # Tokenize the prompt and generate a response
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

response = chat_with_model("Hello, how are you today?")
print(response)
```
### Advanced Usage

```python
# Sampling-based generation with explicit decoding parameters
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id
)
```
## Monitoring and Tracking

This model was trained with comprehensive monitoring:

- Trackio Space: N/A
- Experiment: med-track
- Dataset Repository: https://huggingface.co/datasets/FreedomIntelligence/medical-o1-reasoning-SFT
- Training Logs: Available in the experiment data
## Deployment

### Requirements

```bash
pip install torch transformers accelerate
```
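
As a quick smoke test after installing the dependencies, a sketch using the high-level `pipeline` API:

```python
from transformers import pipeline

# Load the model through the text-generation pipeline; device_map="auto"
# places weights across the available GPUs.
pipe = pipeline(
    "text-generation",
    model="Tonic/med-gpt-oss-20b",
    device_map="auto",
)
print(pipe("What is the mechanism of action of metformin?", max_new_tokens=64)[0]["generated_text"])
```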
### Hardware Requirements

- Main Model: GPU with 16GB+ VRAM recommended at the base model's native MXFP4 precision; loading a full bfloat16 checkpoint requires roughly 40GB+
## Changelog

- v1.0.0: Initial release with fine-tuned model