JingzeShi committed (verified)
Commit: acdc23d
Parent: 5ab56fb

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -21,18 +21,18 @@ pipeline_tag: question-answering
   <a href="https://arxiv.org/abs/2412.11834" target="_blank" style="margin: 2px;">
   <img alt="arXiv" src="https://img.shields.io/static/v1?label=arXiv&message=2412.11834&color=B31B1B&logo=arXiv" style="display: inline-block; vertical-align: middle;"/>
   </a>
- <a href="https://github.com/SamllDoge/small-doge" target="_blank" style="margin: 2px;">
+ <a href="https://github.com/SmallDoges/small-doge" target="_blank" style="margin: 2px;">
   <img alt="GitHub" src="https://img.shields.io/badge/GitHub-SmallDoge-181717?logo=github" style="display: inline-block; vertical-align: middle;"/>
   </a>
   <a href="https://huggingface.co/SmallDoge" target="_blank" style="margin: 2px;">
   <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-SmallDoge-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
   </a>
- <a href="https://github.com/SamllDoge/small-doge/blob/main/LICENSE" style="margin: 2px;">
+ <a href="https://github.com/SmallDoges/small-doge/blob/main/LICENSE" style="margin: 2px;">
   <img alt="License" src="https://img.shields.io/badge/License-Apache--2.0-blue.svg" style="display: inline-block; vertical-align: middle;"/>
   </a>
   </div>

- Doge uses Dynamic Mask Attention as sequence transformation and can use Multi-Layer Perceptron or Cross Domain Mixture of Experts as state transformation. Dynamic Mask Attention allows the Transformer to use self-attention during training and state space during inference, and Cross Domain Mixture of Experts can directly inherit the weights of the Multi-Layer Perceptron for further training. This model is trained by the [SmallDoge](https://huggingface.co/SmallDoge) community. For the detailed algorithm and model architecture, please refer to [Wonderful Matrices](https://arxiv.org/abs/2412.11834); all training details and code are publicly available in the [small-doge](https://github.com/SamllDoge/small-doge) repository.
+ Doge uses Dynamic Mask Attention as sequence transformation and can use Multi-Layer Perceptron or Cross Domain Mixture of Experts as state transformation. Dynamic Mask Attention allows the Transformer to use self-attention during training and state space during inference, and Cross Domain Mixture of Experts can directly inherit the weights of the Multi-Layer Perceptron for further training. This model is trained by the [SmallDoge](https://huggingface.co/SmallDoge) community. For the detailed algorithm and model architecture, please refer to [Wonderful Matrices](https://arxiv.org/abs/2412.11834); all training details and code are publicly available in the [small-doge](https://github.com/SmallDoges/small-doge) repository.


  ## Uses
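
To make the weight-inheritance sentence in the paragraph above concrete, here is a generic toy sketch in plain PyTorch. It is an illustration of the idea only, not the actual Doge/CDMoE implementation: the module names, sizes, and the simple soft router are all assumptions. It shows each expert of a mixture-of-experts layer being initialized from a trained dense MLP so that further training starts from the dense weights.

```python
# Generic illustration (not the actual CDMoE code): initialize every expert of a
# small mixture-of-experts layer from a trained dense MLP, so continued training
# inherits the dense weights.
import torch
import torch.nn as nn


class DenseMLP(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)
        self.down = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        return self.down(torch.relu(self.up(x)))


class ToyMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            [DenseMLP(d_model, d_hidden) for _ in range(num_experts)]
        )

    def forward(self, x):
        # Soft routing for simplicity; real MoE layers typically use sparse top-k routing.
        weights = torch.softmax(self.router(x), dim=-1)                        # (..., num_experts)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)         # (..., d_model, num_experts)
        return (expert_out * weights.unsqueeze(-2)).sum(dim=-1)                # (..., d_model)


dense = DenseMLP(d_model=64, d_hidden=256)              # stands in for the pretrained MLP
moe = ToyMoE(d_model=64, d_hidden=256, num_experts=4)

# "Inherit" the dense weights: every expert starts as a copy of the trained MLP.
for expert in moe.experts:
    expert.load_state_dict(dense.state_dict())

x = torch.randn(2, 10, 64)
print(moe(x).shape)  # torch.Size([2, 10, 64])
```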
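
As a starting point for the Uses section, a minimal loading sketch with Hugging Face `transformers`. The checkpoint ID `SmallDoge/Doge-20M` is a placeholder rather than something taken from this commit, and `trust_remote_code=True` is assumed to be needed for the custom Doge architecture.

```python
# Minimal sketch: load a Doge checkpoint and generate a short continuation.
# "SmallDoge/Doge-20M" is a placeholder ID -- substitute the checkpoint this card describes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SmallDoge/Doge-20M"  # placeholder, not taken from this commit
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hey, how are you doing?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Swap in the checkpoint this card actually describes, and drop `trust_remote_code` if the architecture already ships with your installed `transformers` version.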