benjamin committed
Commit 2b29236 · verified · Parent(s): f037ba0

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -82,6 +82,8 @@ python3 scripts/cross_tokenizer_distill.py \
     name=llama3_to_byte_20k
 ```
 
+Training took ~26 hours on a TPU v4-32.
+
 ## Future Work
 
 The current version of this model is trained for 20k steps with 32*2048 bytes per batch (= 1.3B bytes ≈ 328M subword tokens total). It was unexpected that it performs as well as it does with this very short training procedure. We plan to train a new version for more steps (you can also do so yourself using [`tokenkit`](https://github.com/bminixhofer/tokenkit)).
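
As a quick sanity check on the totals quoted in the diffed README text, here is a minimal back-of-the-envelope sketch. The ~4 bytes-per-subword-token ratio is an assumption inferred from the ≈ 328M figure, not stated in the source:

```python
# Back-of-the-envelope check of the training totals quoted above.
steps = 20_000                # training steps
bytes_per_batch = 32 * 2048   # 32 sequences x 2048 bytes each

total_bytes = steps * bytes_per_batch  # 1,310,720,000 ≈ 1.3B bytes

# Assumption: ~4 bytes per Llama 3 subword token (implied by the ≈ 328M figure).
bytes_per_token = 4
approx_tokens = total_bytes / bytes_per_token  # ≈ 327.7M subword tokens

print(f"{total_bytes:,} bytes ≈ {approx_tokens / 1e6:.0f}M subword tokens")
# -> 1,310,720,000 bytes ≈ 328M subword tokens
```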