loubnabnl (HF staff) committed
Commit e617673 · verified · 1 Parent(s): 3e119f2

Update README.md

Files changed (1):
  1. README.md +63 -37
README.md CHANGED
@@ -1,57 +1,83 @@
  ---
  library_name: transformers
- model_name: SmolLM2-Instruct-16k-SeaLong-LongAlign-ST10k-v2-rope500k-1e-4
- tags:
- - generated_from_trainer
- - trl
- - sft
- licence: license
  ---

- # Model Card for SmolLM2-Instruct-16k-SeaLong-LongAlign-ST10k-v2-rope500k-1e-4

- This model is a fine-tuned version of [None](https://huggingface.co/None).
- It has been trained using [TRL](https://github.com/huggingface/trl).

- ## Quick start

- ```python
- from transformers import pipeline

- question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
- generator = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-Instruct-16k-SeaLong-LongAlign-ST10k-v2-rope500k-1e-4", device="cuda")
- output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
- print(output["generated_text"])
- ```

- ## Training procedure

- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/loubnabnl/huggingface/runs/y937l2wj)

- This model was trained with SFT.

- ### Framework versions

- - TRL: 0.14.0.dev0
- - Transformers: 4.48.1
- - Pytorch: 2.5.1+cu121
- - Datasets: 3.2.0
- - Tokenizers: 0.21.0

- ## Citations

- Cite TRL as:
-
- ```bibtex
- @misc{vonwerra2022trl,
-     title = {{TRL: Transformer Reinforcement Learning}},
-     author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
-     year = 2020,
-     journal = {GitHub repository},
-     publisher = {GitHub},
-     howpublished = {\url{https://github.com/huggingface/trl}}
  }
  ```
  ---
  library_name: transformers
+ license: apache-2.0
+ language:
+ - en
+ pipeline_tag: text-generation
+ base_model:
+ - HuggingFaceTB/SmolLM2-1.7B-Instruct-16k
  ---

+ # SmolLM2-1.7B-Instruct-16k

+ This is a 16k context version of [SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct), which originally only supported 8k context. We finetune the model on 15k samples drawn from the SmolTalk, LongAlign and SeaLong datasets, and increase the RoPE base from 100k to 500k.
+ This improves evaluation scores on [HELMET](https://github.com/princeton-nlp/HELMET) at 8k and 16k context, with a small degradation on short-context tasks.
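As a rough intuition for the RoPE change (an illustrative sketch, not code from this repo): raising the rotary base lowers the per-dimension rotation frequencies, so positions beyond the original context window stay distinguishable. A minimal sketch using the standard RoPE inverse-frequency formula, with a hypothetical head dimension of 64:

```python
def rope_inv_freq(dim: int, base: float) -> list[float]:
    # Standard RoPE inverse frequencies: base^(-2i/dim) for each rotary pair i.
    return [base ** (-2 * i / dim) for i in range(dim // 2)]

# The bases match the card (100k before, 500k after); dim=64 is an assumption.
before = rope_inv_freq(64, 100_000.0)
after = rope_inv_freq(64, 500_000.0)

# A larger base lowers every non-constant frequency: rotations complete more
# slowly, stretching the range of positions the embedding can tell apart.
assert all(a <= b for a, b in zip(after, before))
```

The first pair always rotates at frequency 1.0 regardless of base; only the higher-index pairs slow down, which is what extends usable context.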

+ | Metric           | SmolLM2-1.7B-Instruct | SmolLM2-1.7B-Instruct-16k |
+ |:-----------------|:---------------------:|:-------------------------:|
+ | Avg HELMET 8k    | 35.87                 | **37.24**                 |
+ | Avg HELMET 16k   | /                     | 32.40                     |
+ | Avg Short        | **35.71**             | 32.06                     |
+ | GSM8K (5-shot)   | **48.14**             | 44.27                     |
+ | MATH             | **20.12**             | 18.22                     |
+ | ARC (Average)    | 47.55                 | **51.52**                 |
+ | IFEval (Average) | **47.5**              | 40.29                     |
+
+ We report the average over HELMET's RAG, ICL, Re-rank and LongQA tasks.

+ ## Model Summary

+ SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device. More details in our paper: https://arxiv.org/abs/2502.02737v1

+ The 1.7B variant demonstrates significant advances over its predecessor SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, and The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized).

+ The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by [Argilla](https://huggingface.co/argilla) such as [Synth-APIGen-v0.1](https://huggingface.co/datasets/argilla/Synth-APIGen-v0.1).
+ You can find the SFT dataset here: https://huggingface.co/datasets/HuggingFaceTB/smoltalk.

+ For more details, refer to https://github.com/huggingface/smollm, where you will find pre-training, post-training, evaluation and local inference code.
+
+ ### How to use
+
+ ### Transformers
+ ```bash
+ pip install transformers
+ ```
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct-16k"
+ device = "cuda"  # for GPU usage or "cpu" for CPU usage
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ # for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
+ model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
+
+ messages = [{"role": "user", "content": "What is the capital of France?"}]
+ input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
+ outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
+ print(tokenizer.decode(outputs[0]))
+ ```

+ ## Limitations

+ SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.

+ ## License

+ [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

+ ## Citation
+ ```bibtex
+ @misc{allal2025smollm2smolgoesbig,
+       title={SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model},
+       author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Guilherme Penedo and Lewis Tunstall and Andrés Marafioti and Hynek Kydlíček and Agustín Piqueres Lajarín and Vaibhav Srivastav and Joshua Lochner and Caleb Fahlgren and Xuan-Son Nguyen and Clémentine Fourrier and Ben Burtenshaw and Hugo Larcher and Haojun Zhao and Cyril Zakka and Mathieu Morlon and Colin Raffel and Leandro von Werra and Thomas Wolf},
+       year={2025},
+       eprint={2502.02737},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL},
+       url={https://arxiv.org/abs/2502.02737},
  }
  ```