After manually inspecting fastText model-quality annotations and readability scores for a large number of low-quality documents, we found that several abnormal documents were mislabeled by these annotators. We observed a peculiar pattern after tokenizing these documents: while most of them had similar lengths, they produced significantly different token counts. To quantify this effect, we propose novel annotations that leverage information from the "pre-tokenization" stage (document character length, document size in bytes) and the "post-tokenization" stage (token count) to identify potentially low-quality documents. We refer to documents with an extremely high or low number of tokens per character (or tokens per byte) as extreme-tokenized documents (see Figure 17 for a schematic; a code sketch follows the figure).

<img src="Extreme_ schematic.png" alt="A schematic outlining the steps for removing extreme-tokenized documents" style="width:1000px;"/>

**Figure 17:** A schematic outlining the steps for removing extreme-tokenized documents.
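
To make these annotations concrete, the sketch below computes tokens per character and tokens per byte for a document and flags extreme ratios. It is a minimal illustration, not the production pipeline: the GPT-2 tokenizer, the function names, and the cutoff values are assumptions chosen for demonstration.

```python
# Minimal sketch (assumptions: GPT-2 tokenizer, placeholder thresholds).
# Combine pre-tokenization stats (character length, byte size) with the
# post-tokenization token count to flag extreme-tokenized documents.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative tokenizer choice

def annotate(doc: str) -> dict:
    n_chars = len(doc)                     # pre-tokenization: character length
    n_bytes = len(doc.encode("utf-8"))     # pre-tokenization: document size in bytes
    n_tokens = len(tokenizer.encode(doc))  # post-tokenization: token count
    return {
        "tokens_per_char": n_tokens / max(n_chars, 1),
        "tokens_per_byte": n_tokens / max(n_bytes, 1),
    }

def is_extreme_tokenized(doc: str, low: float = 0.05, high: float = 0.60) -> bool:
    # Hypothetical cutoffs: ordinary prose usually falls well inside this
    # band, while mislabeled junk (e.g., binary blobs or heavily repeated
    # boilerplate) tends to land outside it.
    ratio = annotate(doc)["tokens_per_char"]
    return ratio < low or ratio > high
```

For reference, GPT-2 averages roughly four characters per token on ordinary English text (about 0.25 tokens per character), so ratios far from that neighborhood are a useful signal; in practice the thresholds would be tuned per tokenizer.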