zpn committed · Commit 1bc19bc · verified · 1 parent: 747a5f0

Update README.md

Files changed (1): README.md (+19, -1)
README.md CHANGED
@@ -246,10 +246,28 @@ nomic-embed-text-v2-moe performance on BEIR at 768 dimension and truncated to 256
 - Incorporates Matryoshka representation learning for dimension flexibility
 - Training includes both weakly-supervised contrastive pretraining and supervised finetuning
 
+For more details, please check out the [blog post](https://www.nomic.ai/blog/posts/nomic-embed-text-v2) and [technical report](https://www.arxiv.org/abs/2502.07972).
+
 
 
 ## Join the Nomic Community
 
 - Nomic: [https://nomic.ai](https://nomic.ai)
 - Discord: [https://discord.gg/myY5YDR8z8](https://discord.gg/myY5YDR8z8)
-- Twitter: [https://twitter.com/nomic_ai](https://twitter.com/nomic_ai)
+- Twitter: [https://twitter.com/nomic_ai](https://twitter.com/nomic_ai)
+
+# Citation
+
+If you find the model, dataset, or training code useful, please cite our work
+
+```bibtex
+@misc{nussbaum2025trainingsparsemixtureexperts,
+      title={Training Sparse Mixture Of Experts Text Embedding Models},
+      author={Zach Nussbaum and Brandon Duderstadt},
+      year={2025},
+      eprint={2502.07972},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL},
+      url={https://arxiv.org/abs/2502.07972},
+}
+```
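
The context lines in the hunk above note that the model incorporates Matryoshka representation learning, which is what allows the 768-dimension embeddings to be truncated to 256 dimensions on BEIR. As a minimal sketch of using that flexibility, assuming the checkpoint loads through sentence-transformers (with `trust_remote_code=True`) and that the `truncate_dim` option and the `"query"`/`"passage"` prompt names apply to this model, neither of which is confirmed by this commit:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumptions: this checkpoint supports sentence-transformers' truncate_dim
# option and exposes "query"/"passage" prompts; both are illustrative here.
model = SentenceTransformer(
    "nomic-ai/nomic-embed-text-v2-moe",
    trust_remote_code=True,
    truncate_dim=256,  # Matryoshka: keep only the first 256 of 768 dimensions
)

queries = model.encode(["What is a mixture of experts?"], prompt_name="query")
passages = model.encode(
    ["A mixture-of-experts layer routes each token to a small set of expert MLPs."],
    prompt_name="passage",
)

# Re-normalize after truncation so dot products are cosine similarities.
queries /= np.linalg.norm(queries, axis=1, keepdims=True)
passages /= np.linalg.norm(passages, axis=1, keepdims=True)

print(queries.shape)         # (1, 256) rather than (1, 768)
print(queries @ passages.T)  # query/passage similarity scores
```

The point of the truncation is that Matryoshka-trained embeddings pack the most important information into the leading dimensions, so the 256-d prefix remains a usable embedding at a third of the storage cost.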
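The second context bullet mentions weakly-supervised contrastive pretraining. The actual recipe is in the linked technical report; as a rough sketch of the standard objective behind such pretraining (InfoNCE with in-batch negatives; the temperature value and toy inputs are illustrative assumptions, not the paper's settings):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb: torch.Tensor, doc_emb: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """In-batch-negatives contrastive loss: each query's positive is the
    document at the same index; every other document in the batch serves
    as a negative. The 0.05 temperature is an assumption for illustration."""
    q = F.normalize(query_emb, dim=-1)
    d = F.normalize(doc_emb, dim=-1)
    logits = (q @ d.T) / temperature  # (batch, batch) cosine-similarity scores
    labels = torch.arange(q.size(0), device=q.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)

# Toy usage with random "embeddings": batch of 4 pairs, 768 dimensions.
loss = info_nce_loss(torch.randn(4, 768), torch.randn(4, 768))
print(loss.item())
```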