bartowski committed · Commit 1f97176 · verified · 1 Parent(s): edf02e7

Update README.md

Files changed (1): README.md (+65 -1)
README.md CHANGED
@@ -3,4 +3,68 @@ base_model:
   - deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
   ---

- Arcee-Maestro-7B-Preview

**Arcee-Maestro-7B-Preview (7B)** is Arcee's first reasoning model trained with reinforcement learning. It is based on **DeepSeek-R1-Distill-Qwen-7B**, the DeepSeek-R1 distillation of Qwen2.5-7B, with further GRPO training. Though this is just a preview of our upcoming work, it already shows promising improvements to mathematical and coding abilities across a range of tasks.

### Quantizations

Coming soon

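Quantized builds are typically published as GGUF files. As a minimal sketch of how one could be run locally, assuming a GGUF quant and the `llama-cpp-python` runtime (neither of which is part of this card yet; the file name below is a placeholder):

```python
# Sketch only: running a hypothetical GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Arcee-Maestro-7B-Preview-Q4_K_M.gguf",  # placeholder file name, not a published artifact
    n_ctx=8192,  # context window to allocate; raise it for long reasoning traces
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 17 * 24? Show your reasoning."}],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```
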
### Model Details

- Architecture Base: DeepSeek-R1-Distill-Qwen-7B (Qwen2.5-7B)
- Parameter Count: 7B
- Reinforcement Learning: GRPO with 450,000 **verified** math problems and some coding examples
- License: [Apache-2.0](https://huggingface.co/arcee-ai/Arcee-Maestro-7B-Preview#license)

### Intended Use Cases

- Advanced reasoning
- Mathematics
- Coding

### Evaluations

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/DlSBEmCFS7yjJi2kOGuLa.png)

Arcee-Maestro-7B-Preview shows strong gains in mathematics and coding, surpassing o1-preview on many metrics.

+ ### How to use
32
+
33
+ Below is a sample code snippet using `transformers`:
34
+
35
+ ```python
36
+ from transformers import AutoTokenizer, AutoModelForCausalLM
37
+
38
+ model_name = "arcee-ai/Arcee-Maestro-7B-Preview"
39
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
40
+ model = AutoModelForCausalLM.from_pretrained(model_name)
41
+
42
+ prompt = "Provide a concise summary of quantum entanglement."
43
+ inputs = tokenizer(prompt, return_tensors="pt")
44
+ outputs = model.generate(**inputs, max_new_tokens=150)
45
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
46
+ ```
47
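Since this is a DeepSeek-R1-style reasoning model, requests are usually best formatted with the tokenizer's chat template so the model can produce its reasoning before the final answer. A minimal sketch, with an illustrative prompt and generation settings rather than official recommendations:

```python
# Sketch: chat-template usage for a reasoning prompt (settings are illustrative).
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "arcee-ai/Arcee-Maestro-7B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

messages = [
    {"role": "user", "content": "If 3x + 7 = 22, what is x? Show your reasoning."}
]
# Build the prompt with the model's chat template, then generate and decode only the new tokens.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=1024, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
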
### Training & Fine-Tuning

- **Initial Training**: Began with DeepSeek-R1-Distill-Qwen-7B
- **GRPO** (a simplified training sketch follows this list):
  - Trained on 450,000 verified math problems
  - Additional bootstrapped coding examples

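To picture the GRPO stage, here is a heavily simplified sketch using the `trl` library's `GRPOTrainer`. It is an illustration under assumptions (toy dataset, placeholder reward function and hyperparameters), not Arcee's actual training setup:

```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy stand-in for the verified math problems: each prompt carries a known answer
# that the reward function can check against sampled completions.
train_dataset = Dataset.from_list([
    {"prompt": "What is 12 * 13?", "answer": "156"},
    {"prompt": "Solve for x: 2x + 6 = 20", "answer": "7"},
])

def correctness_reward(completions, answer, **kwargs):
    # Reward 1.0 when the verified answer appears in the completion, else 0.0.
    return [1.0 if ans in completion else 0.0 for completion, ans in zip(completions, answer)]

training_args = GRPOConfig(
    output_dir="maestro-grpo-sketch",   # placeholder
    num_generations=8,                  # completions sampled per prompt for group-relative advantages
    max_completion_length=512,
    per_device_train_batch_size=8,
)

trainer = GRPOTrainer(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
    reward_funcs=correctness_reward,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```

GRPO samples a group of completions per prompt and normalizes each completion's reward against the group, which is why problems with verifiable answers make convenient reward signals.
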
### Performance

Arcee-Maestro-7B-Preview shows strong performance in mathematics as well as coding, competing with even o1-preview, a far larger model.

### Limitations

- **Context Length:** 128k tokens (may vary depending on the final tokenizer settings and system resources); a quick check of the configured window is shown below.
- **Knowledge Cut-off:** Training data may not reflect the latest events or developments beyond June 2024.

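The configured value can be read from the model config, assuming it exposes `max_position_embeddings` the way Qwen2-based checkpoints typically do:

```python
# Sketch: read the configured context window from the model config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("arcee-ai/Arcee-Maestro-7B-Preview")
print(config.max_position_embeddings)  # expected to be on the order of 128k tokens
```
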
### Ethical Considerations

- **Content Generation Risks:** Like any language model, Arcee-Maestro-7B-Preview can generate potentially harmful or biased content if prompted in certain ways.

### License

**Arcee-Maestro-7B-Preview (7B)** is released under the [Apache-2.0 License](https://www.apache.org/licenses/LICENSE-2.0). You are free to use, modify, and distribute this model in both commercial and non-commercial applications, subject to the terms and conditions of the license.

If you have questions or would like to share your experiences using Arcee-Maestro-7B-Preview (7B), please connect with us on social media. We're excited to see what you build, and how this model helps you innovate!