---
base_model: HuggingFaceTB/SmolLM2-135M-Instruct
library_name: peft
language:
- en
pipeline_tag: text-generation
datasets:
- flwrlabs/alpaca-gpt4
---
# Model Card for BlossomTune-SmolLM2-135M-Instruct-NLP

This PEFT adapter was trained using Flower, a friendly federated AI framework.
## Model Details

Please check the following GitHub project for model details and evaluation (work in progress!):

https://github.com/ethicalabs-ai/BlossomTuneLLM
## How to Get Started with the Model

Load the adapter on top of the base model as follows:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the PEFT adapter weights on top of it.
base_model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")
model = PeftModel.from_pretrained(base_model, "ethicalabs/BlossomTune-SmolLM2-135M-Instruct-NLP")
```
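
Below is a minimal generation sketch built on the snippet above. The tokenizer name, chat-template usage, and sampling parameters are assumptions based on the base model's standard instruct setup, not part of this card:

```python
from transformers import AutoTokenizer

# Assumption: the adapter does not change the vocabulary, so the base model's tokenizer is used.
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")

# Format a single-turn prompt with the instruct chat template.
messages = [{"role": "user", "content": "Give three tips for staying healthy."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

# Generate a response and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```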
## Evaluation Results (Accuracy)

- STEM: 25.30 %
- Social Sciences: 27.17 %
- Humanities: 23.95 %
- Average: 25.47 %
## Communication Budget

7899.38 MB