---
library_name: transformers
license: apache-2.0
datasets:
- kurakurai/luth-sft
language:
- fr
- en
base_model:
- Qwen/Qwen3-1.7B
pipeline_tag: text-generation
---

# Luth-1.7B-Instruct

**Luth-1.7B-Instruct** is a French fine-tuned version of [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B), trained on the [Luth-SFT](https://huggingface.co/datasets/kurakurai/luth-sft) dataset. Fine-tuning markedly improves the model's French capabilities in instruction following, math, and general knowledge, while its English capabilities remain stable and even improve in some areas.

Our evaluation, training, and data scripts are available on [GitHub](https://github.com/kurakurai/Luth), along with the [blog post](https://huggingface.co/blog/MaxLSB/luth) we wrote.

## Model Details

Luth was trained with full fine-tuning on the Luth-SFT dataset using [Axolotl](https://github.com/axolotl-ai-cloud/axolotl). The resulting model was then merged with the base Qwen3-1.7B model (a sketch of a typical merge is shown below). This process retained the model's English capabilities while improving its performance on most selected benchmarks in both French and English.
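
The exact merge recipe lives in the scripts on GitHub; as an illustration, the sketch below shows one common approach, a linear interpolation of the two checkpoints' weights. The local checkpoint path and the `alpha` value are hypothetical placeholders, not the actual values used for Luth.

```python
# Minimal sketch of a linear weight merge between the base model and a
# fine-tuned checkpoint. The checkpoint path and alpha are placeholders,
# not the actual recipe used for Luth.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B", torch_dtype=torch.bfloat16)
tuned = AutoModelForCausalLM.from_pretrained("./luth-sft-checkpoint", torch_dtype=torch.bfloat16)

alpha = 0.5  # placeholder: fraction of weight given to the fine-tuned model
base_state = base.state_dict()
merged = {
    name: (1.0 - alpha) * base_state[name] + alpha * param
    for name, param in tuned.state_dict().items()
}

# Load the interpolated weights and save the merged checkpoint.
tuned.load_state_dict(merged)
tuned.save_pretrained("./luth-merged")
```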

## Benchmark Results

We used [LightEval](https://github.com/huggingface/lighteval) for evaluation, with custom tasks for the French benchmarks. All models were evaluated with `temperature=0` (greedy decoding; see the sketch below). In the tables that follow, the best score per benchmark is underlined.
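
For reference, `temperature=0` corresponds to deterministic greedy decoding; with `transformers` the equivalent is disabling sampling, so `generate()` picks the highest-probability token at every step.

```python
# Greedy decoding, the transformers equivalent of temperature=0:
# do_sample=False makes generate() take the argmax token at each step.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kurakurai/Luth-1.7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("kurakurai/Luth-1.7B-Instruct")

inputs = tokenizer("La capitale de la France est", return_tensors="pt")
outputs = model.generate(**inputs, do_sample=False, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```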

### Evaluation Visualizations

**French Evaluation:**

*(figure: French benchmark scores)*

**English Evaluation:**

*(figure: English benchmark scores)*

### French Benchmark Scores

| Benchmark       | Qwen3-1.7B | SmolLM2-1.7B-Instruct | Qwen2.5-1.5B-Instruct | Luth-1.7B-Instruct |
|-----------------|------------|-----------------------|-----------------------|--------------------|
| ifeval-fr       | 54.53      | 31.24                 | 32.90                 | <u>57.67</u>       |
| gpqa-diamond-fr | 26.90      | 21.83                 | 28.93                 | <u>38.58</u>       |
| mmlu-fr         | 28.46      | 33.73                 | 46.25                 | <u>49.66</u>       |
| math-500-fr     | 60.80      | 11.20                 | 32.20                 | <u>64.00</u>       |
| arc-chall-fr    | 33.28      | 28.57                 | 32.68                 | <u>35.16</u>       |
| hellaswag-fr    | 24.86      | <u>49.58</u>          | 34.34                 | 31.93              |

### English Benchmark Scores

| Benchmark       | Qwen3-1.7B   | SmolLM2-1.7B-Instruct | Qwen2.5-1.5B-Instruct | Luth-1.7B-Instruct |
|-----------------|--------------|-----------------------|-----------------------|--------------------|
| ifeval-en       | <u>68.39</u> | 48.24                 | 39.93                 | 65.80              |
| gpqa-diamond-en | <u>31.82</u> | 24.75                 | 30.30                 | 31.82              |
| mmlu-en         | 52.74        | 50.27                 | 59.81                 | <u>60.19</u>       |
| math-500-en     | 69.20        | 22.40                 | 56.00                 | <u>70.00</u>       |
| arc-chall-en    | 36.09        | <u>42.32</u>          | 41.04                 | 42.24              |
| hellaswag-en    | 46.96        | <u>66.94</u>          | 64.48                 | 58.55              |

## Code Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("kurakurai/Luth-1.7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("kurakurai/Luth-1.7B-Instruct")

# Build the prompt with the model's chat template.
messages = [
    {"role": "user", "content": "Quelle est la capitale de la France?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Generate, then decode only the newly generated tokens.
outputs = model.generate(**inputs, max_new_tokens=100)
print(
    tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
)
```
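
The same steps can be expressed with the high-level `pipeline` API, which applies the chat template automatically when given a list of messages. A minimal sketch, assuming a recent `transformers` version with chat support in `text-generation` pipelines:

```python
# Equivalent sketch using the pipeline API; the chat template is applied
# automatically, and the assistant reply is the last message returned.
from transformers import pipeline

generator = pipeline("text-generation", model="kurakurai/Luth-1.7B-Instruct")
messages = [{"role": "user", "content": "Quelle est la capitale de la France?"}]
result = generator(messages, max_new_tokens=100)
print(result[0]["generated_text"][-1]["content"])
```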

## Citation

```bibtex
@misc{luth2025kurakurai,
  title        = {Luth-1.7B-Instruct},
  author       = {Kurakura AI Team},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/kurakurai/Luth-1.7B-Instruct}},
  note         = {Qwen3-1.7B fine-tuned on French datasets}
}
```