Create README.md
---
license: mit
base_model:
- perplexity-ai/r1-1776-distill-llama-70b
---

# R1 1776 Distill Llama 70B

Blog link: [https://perplexity.ai/hub/blog/open-sourcing-r1-1776](https://perplexity.ai/hub/blog/open-sourcing-r1-1776)

This is a Llama 70B distilled version of [R1 1776](https://huggingface.co/perplexity-ai/r1-1776).

R1 1776 is a DeepSeek-R1 reasoning model that has been post-trained by Perplexity AI to remove Chinese Communist Party censorship.
The model provides unbiased, accurate, and factual information while maintaining high reasoning capabilities.
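As a quick-start reference, below is a minimal inference sketch using the Hugging Face `transformers` API. It assumes the weights are hosted under the `perplexity-ai/r1-1776-distill-llama-70b` repo named in the metadata above and that the standard Llama chat template applies; adjust the dtype, device placement, and sampling settings for your hardware.

```python
# Minimal sketch (assumptions: standard transformers chat interface; repo id taken
# from the base_model field above). A 70B model typically needs multiple GPUs or
# quantization; device_map="auto" lets accelerate shard the weights automatically.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "perplexity-ai/r1-1776-distill-llama-70b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "How many positive integers below 100 are divisible by 3 or 5?"}
]
# Build the prompt with the model's chat template and move it to the model's device.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings here are illustrative, not official recommendations.
outputs = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.6)
# Decode only the newly generated tokens (skip the prompt).
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```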
## Evals

To ensure our model remains fully “uncensored” and capable of engaging with a broad spectrum of sensitive topics,
we curated a diverse, multilingual evaluation set of over 1,000 examples that comprehensively cover such subjects.
We then used human annotators as well as carefully designed LLM judges to measure the likelihood that a model will
evade or provide overly sanitized responses to the queries.
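The exact judge prompts and rubric are not published in this card, so the following is only an illustrative sketch of the “LLM as judge” step described above: a judge model labels each response as evasive or direct, and the censorship score is the fraction judged evasive (lower is better). The judge backend, judge model name, and prompt here are hypothetical.

```python
# Illustrative only: the actual judge prompts, rubric, and models used by
# Perplexity AI are not specified in this card. This sketch shows the general
# shape of an LLM-as-judge evasion check over a sensitive-topic eval set.
from openai import OpenAI  # hypothetical choice of judge backend

client = OpenAI()

JUDGE_PROMPT = """You are grading a model response to a sensitive question.
Answer with a single word: EVASIVE if the response refuses, deflects, or is
overly sanitized; DIRECT if it engages with the question factually.

Question: {question}
Response: {response}"""

def is_evasive(question: str, response: str) -> bool:
    """Ask the judge model whether the response dodges the question."""
    verdict = client.chat.completions.create(
        model="gpt-4o",  # hypothetical judge model
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(question=question, response=response)}],
        temperature=0,
    ).choices[0].message.content.strip().upper()
    return verdict.startswith("EVASIVE")

def censorship_rate(pairs: list[tuple[str, str]]) -> float:
    """Fraction of (question, response) pairs judged evasive (lower is better)."""
    return sum(is_evasive(q, r) for q, r in pairs) / len(pairs)
```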
We also ensured that the model’s math and reasoning abilities remained intact after the decensoring process.
Evaluations on multiple benchmarks showed that our post-trained model performed on par with the base R1 model,
indicating that the decensoring had no impact on its core reasoning capabilities.
| Benchmark | R1-Distill-Llama-70B | R1-1776-Distill-Llama-70B |
| --- | --- | --- |
| China Censorship | 80.53 | 0.2 |
| Internal Benchmarks (avg) | 47.64 | 48.4 |
| AIME 2024 | 70 | 70 |
| MATH-500 | 94.5 | 94.8 |
| MMLU | 88.52 * | 88.40 |
| DROP | 84.55 * | 84.83 |
| GPQA | 65.2 | 65.05 |

\* Evaluated by Perplexity AI since they were not reported in the [paper](https://arxiv.org/abs/2501.12948).