Update README.md
library_name: transformers
license: apache-2.0
---
**Arcee-Maestro-7B-Preview (7B)** is Arcee's first reasoning model trained with reinforcement learning. It builds on **DeepSeek-R1-Distill-Qwen-7B**, the DeepSeek-R1 distillation of Qwen2.5-7B, with further GRPO training. Though this is only a preview of our upcoming work, it already shows promising improvements in mathematical and coding ability across a range of tasks.
### Quantizations