nielsr (HF Staff) committed
Commit 6dedcfa · verified · 1 Parent(s): 2996a5e

Add pipeline tag and update paper link


This PR improves the model card by:
- Adding `pipeline_tag: text-generation` to the metadata, which lets users discover the model via the Hub's filtering system (see the sketch after this list).
- Updating the paper link in both the main content and the citation to point to the official Hugging Face Papers page (`https://huggingface.co/papers/2507.07024`) for better discoverability and consistency.
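
For reference, a minimal sketch of the filtering this tag enables, using the `huggingface_hub` client (assuming a recent version where `list_models` accepts a `pipeline_tag` argument; the `allenai` author filter is only illustrative):

```python
from huggingface_hub import HfApi

api = HfApi()

# Models only appear under this filter once `pipeline_tag: text-generation`
# is present in their card metadata, which is what this PR adds.
for model in api.list_models(
    author="allenai",               # illustrative org filter (assumption)
    pipeline_tag="text-generation",
    limit=10,
):
    print(model.id)
```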

Files changed (1)
  1. README.md +6 -5
README.md CHANGED
````diff
@@ -1,13 +1,14 @@
 ---
-license: apache-2.0
 language:
 - en
+library_name: transformers
+license: apache-2.0
 tags:
 - moe
 - olmo
 - flexolmo
 co2_eq_emissions: 1
-library_name: transformers
+pipeline_tag: text-generation
 ---
 
 <img alt="FlexOlmo Logo." src="FlexOlmo_Logo.png" width="500px" style="display: block; margin-left: auto; margin-right: auto; margin-top: 50px"> FlexOlmo is a new kind of LM that unlocks a new paradigm of data collaboration. With FlexOlmo, data owners can contribute to the development of open language models without giving up control of their data. There is no need to share raw data directly, and data contributors can decide when their data is active in the model, deactivate it at any time, and receive attributions whenever it's used for inference.
@@ -17,7 +18,7 @@ library_name: transformers
 > FlexOlmo-7x7B-1T (without router training) is a Mixture-of-Experts with 33B total parameters, combining independently trained experts on public-mix, news, math, code, academic texts, creative writing, and Reddit data. The public-mix expert is trained on 1T tokens of public data while the other experts are branched from the public-mix expert and trained on 50B tokens of their respective data.
 
 This information and more can also be found:
-- **Paper**: https://allenai.org/papers/flexolmo
+- **Paper**: https://huggingface.co/papers/2507.07024
 - **Code**: https://github.com/allenai/FlexOlmo
 - **Blog**: https://allenai.org/blog/flexolmo
 - **Data and corresponding models**:
@@ -72,6 +73,6 @@ print(tokenizer.decode(out[0]))
   eprint={2507.00000},
   archivePrefix={arXiv},
   primaryClass={cs.CL},
-  url={https://allenai.org/papers/flexolmo},
+  url={https://huggingface.co/papers/2507.07024},
 }
-```
+```
````
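
Since the diff records `library_name: transformers` (and the last hunk header shows the card's own usage snippet ending in `print(tokenizer.decode(out[0]))`), here is a hedged sketch of how the model would typically be loaded with that library. The repo id `allenai/FlexOlmo-7x7B-1T` is an assumption inferred from the model name in the card, not confirmed by this diff:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, taken from the model name in the card.
repo_id = "allenai/FlexOlmo-7x7B-1T"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# Add trust_remote_code=True if the checkpoint ships custom modeling code.
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("FlexOlmo is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0]))  # same final line as the card's own snippet
```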