---
license: cc-by-4.0
library_name: saelens
---

⚠️ WARNING: We are in the process of uploading SAEs of many different sparsities for every (Layer, Width) pair. For now, there is only one sparsity per (Layer, Width) pair.
|
|
# 1. Gemma Scope


Gemma Scope is a comprehensive, open suite of sparse autoencoders for Gemma 2 9B and 2B. Sparse autoencoders are a "microscope" of sorts that can help us break down a model’s internal activations into the underlying concepts, just as biologists use microscopes to study the individual cells of plants and animals.


See our [landing page](https://huggingface.co/google/gemma-scope) for details on the whole suite. This is a specific set of SAEs:
|
|
# 2. What Is `gemma-scope-9b-pt-res`?


- `gemma-scope-`: See 1.
- `9b-pt-`: These SAEs were trained on the Gemma v2 9B base model.
- `res`: These SAEs were trained on the model's residual stream.
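A minimal loading sketch with [SAE Lens](https://github.com/jbloomAus/SAELens) (the `library_name` above) follows. It assumes this release is registered in SAE Lens under the name `gemma-scope-9b-pt-res`, and the `sae_id` shown is only an illustrative example; check this repository's file tree for the layer, width, and average-L0 folder names that actually exist.

```python
from sae_lens import SAE

# Sketch only: the sae_id below is a hypothetical example; substitute a
# folder that exists in this repository. Some SAE Lens versions return just
# the SAE from from_pretrained rather than a tuple; adjust accordingly.
sae, cfg_dict, sparsity = SAE.from_pretrained(
    release="gemma-scope-9b-pt-res",
    sae_id="layer_20/width_16k/average_l0_71",  # illustrative only
    device="cpu",
)

# d_in is the residual-stream width; d_sae is the number of SAE latents.
print(sae.cfg.d_in, sae.cfg.d_sae)
```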
|
|
# 3. Point of Contact


Point of contact: Arthur Conmy


Contact by email:
|
|
```python
# Reverse the obfuscated string to reveal the email address
print('moc.elgoog@ymnoc'[::-1])
```
|
|
HuggingFace account:
| https://huggingface.co/ArthurConmyGDM |
|
|