Model Information

A Sparse Autoencoder (SAE) for deepseek-ai/DeepSeek-R1-Distill-Llama-8B.

It was trained on layer 19 of DeepSeek-R1-Distill-Llama-8B and reached a final L0 (average number of active features per token) of 93 during training.

This SAE decomposes the model's activations into sparse, interpretable features.
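To make the decomposition concrete, here is a minimal sketch of what an SAE's encode/decode pass and the L0 metric look like. This is an illustration only, not the released implementation: the weights are random stand-ins, and the dictionary size (16384) is an assumption; the hidden size 4096 matches the Llama-8B architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration: DeepSeek-R1-Distill-Llama-8B has a
# hidden size of 4096; the SAE dictionary size below is an assumption, not the
# released model's actual width.
d_model, d_sae = 4096, 16384

# Random stand-ins for the released SAE weights.
W_enc = rng.normal(0, 0.01, (d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(0, 0.01, (d_sae, d_model))
b_dec = np.zeros(d_model)

def encode(x):
    # Sparse feature activations: ReLU keeps only positively-firing features.
    return np.maximum(0.0, x @ W_enc + b_enc)

def decode(f):
    # Reconstruct the original activation from the sparse feature vector.
    return f @ W_dec + b_dec

x = rng.normal(size=(1, d_model))   # a fake layer-19 activation vector
f = encode(x)                       # (1, d_sae) sparse feature vector
x_hat = decode(f)                   # (1, d_model) reconstruction

# L0 = number of non-zero features per token. The trained SAE reports a final
# L0 of 93; with the random weights here, far more features fire.
l0 = int((f > 0).sum(axis=-1)[0])
print(f.shape, x_hat.shape, l0)
```

A lower L0 at comparable reconstruction quality means a sparser, and typically more interpretable, feature dictionary.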

The SAE weights are released under the Apache license; DeepSeek-R1-Distill-Llama-8B itself, however, remains subject to Meta's Llama 3 license.

How to use

A Jupyter notebook is provided to test the model:

Open In Colab

Training

Our SAE was trained on the LMSYS-Chat-1M dataset.

Acknowledgements

This release wouldn't have been possible without the work of Goodfire and Anthropic.

A huge thank-you goes to RunPod, which generously sponsored the compute for this run!

                                       .x+=:.                                                             
                                      z`    ^%                                                  .uef^"    
               .u    .                   .   <k                           .u    .             :d88E       
    .u@u     .d88B :@8c       .u       .@8Ned8"      .u          u      .d88B :@8c        .   `888E       
 .zWF8888bx ="8888f8888r   ud8888.   .@^%8888"    ud8888.     us888u.  ="8888f8888r  .udR88N   888E .z8k  
.888  9888    4888>'88"  :888'8888. x88:  `)8b. :888'8888. .@88 "8888"   4888>'88"  <888'888k  888E~?888L 
I888  9888    4888> '    d888 '88%" 8888N=*8888 d888 '88%" 9888  9888    4888> '    9888 'Y"   888E  888E 
I888  9888    4888>      8888.+"     %8"    R88 8888.+"    9888  9888    4888>      9888       888E  888E 
I888  9888   .d888L .+   8888L        @8Wou 9%  8888L      9888  9888   .d888L .+   9888       888E  888E 
`888Nx?888   ^"8888*"    '8888c. .+ .888888P`   '8888c. .+ 9888  9888   ^"8888*"    ?8888u../  888E  888E 
 "88" '888      "Y"       "88888%   `   ^"F      "88888%   "888*""888"     "Y"       "8888P'  m888N= 888> 
       88E                  "YP'                   "YP'     ^Y"   ^Y'                  "P'     `Y"   888  
       98>                                                                                          J88"  
       '8                                                                                           @%    
        `                                                                                         :"      