V1 of an English/code tokenizer: byte-level BPE with a 64k vocabulary, trained on an equal mix of the following sources. On the natural-language side:

  • Books
  • C4
  • v1 of our CC (helen quality classifier)
  • enwiki
  • Gutenberg
  • Reddit

On the code side:

  • Jupyter notebooks (0.5 weight, since the dataset was small)
  • GH issues
  • Stackexchange
  • The cleaned Python Stack

This gives roughly 1/3 code data overall (although Stackexchange and GH issues also contain a lot of English).
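The core idea of byte-level BPE can be sketched in a few lines of pure Python: start from the 256 raw byte values so any input is representable, then repeatedly merge the most frequent adjacent pair into a new token until the vocabulary budget (64k here) is reached. This is a toy illustration, not the actual training code; a real tokenizer like this one would be trained with an optimized library such as Hugging Face `tokenizers`.

```python
from collections import Counter

def train_bpe(text: str, num_merges: int):
    """Toy byte-level BPE trainer: learn `num_merges` merge rules from `text`."""
    # Byte-level: the base vocabulary is the 256 possible byte values,
    # so any string is tokenizable with no unknown-token fallback.
    ids = list(text.encode("utf-8"))
    merges = {}   # (left_id, right_id) -> new token id
    next_id = 256  # ids 0-255 are reserved for the raw bytes
    for _ in range(num_merges):
        pairs = Counter(zip(ids, ids[1:]))
        if not pairs:
            break
        best = pairs.most_common(1)[0][0]  # most frequent adjacent pair
        merges[best] = next_id
        # Replace every occurrence of the best pair with the new token id.
        merged, i = [], 0
        while i < len(ids):
            if i + 1 < len(ids) and (ids[i], ids[i + 1]) == best:
                merged.append(next_id)
                i += 2
            else:
                merged.append(ids[i])
                i += 1
        ids = merged
        next_id += 1
    return merges, ids

merges, ids = train_bpe("low low low lower lowest", 5)
```

On this tiny corpus the first learned merge is the byte pair for "lo", and each merge shortens the encoded sequence; a production run does the same thing at corpus scale until the vocabulary reaches 64k entries.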
