nielsr (HF Staff) committed
Commit 52ee3ba · verified · 1 Parent(s): 5821709

Add link to Github repository


This PR adds a link to the Github repository in the dataset card.

Files changed (1)
  1. README.md +30 -15
README.md CHANGED
@@ -1,31 +1,46 @@
  ---
- task_categories:
- - text-generation
  language:
  - en
- pretty_name: Python Enhancement Proposals
  ---
- # Python Enhancement Proposals
 
  ## Description
- Python Enhancement Proposals, or PEPs, are design documents that generally provide a technical specification and rationale for new features of the Python programming language.
- There have been 661 PEPs published.
- The majority of PEPs are published in the Public Domain, but 5 were published under the “Open Publication License” and omitted from this dataset.
- PEPs are long, highly-polished, and technical in nature and often include code examples paired with their prose.
- PEPs are authored in ReStructured Text; we used [pandoc](https://pandoc.org/) to convert them to plain text.
 
  ## Dataset Statistics
  | Documents | UTF-8 GB |
  |-----------|----------|
- | 655 | 0.01 |
 
  ## License Issues
- While we aim to produce datasets with completely accurate licensing information, license laundering and inaccurate metadata can cause us to erroneously assign the incorrect license to some documents (for further discussion of this limitation, please see [our paper](https://huggingface.co/papers/2506.05209)).
- If you believe you have found an instance of incorrect licensing in this dataset, please [start a discussion](https://github.com/r-three/common-pile/discussions/new) on this repository.
 
  ## Other Versions
- This is the "filtered" version of the Python Enhancement Proposals dataset.
- If you are looking for the raw version, you can find it [here](https://huggingface.co/datasets/common-pile/python_enhancement_proposals_raw).
 
  ## Citation
  If you use this dataset, please cite:
@@ -36,4 +51,4 @@ If you use this dataset, please cite:
  journal={arXiv preprint},
  year={2025}
  }
- ```
 
  ---
  language:
  - en
+ task_categories:
+ - text-generation
+ pretty_name: Creative Commons Common Crawl
+ library_name:
+ - datasets
  ---
+
+ # Creative Commons Common Crawl
 
  ## Description
+ This dataset contains text from 52 Common Crawl snapshots, covering about half of the snapshots available to date and all years of Common Crawl's operation up to 2024.
+ We found a high level of duplication across this collection, suggesting that including more snapshots would lead to only a modest increase in total token yield.
+ From these snapshots, we extract HTML content using [FastWarc](https://arxiv.org/abs/2112.03103).
+ Then, using a regular expression adapted from [the C4Corpus project](https://aclanthology.org/L16-1146/), we identify pages that declare a Creative Commons license.
+ To ensure license accuracy, we manually verified the top 1000 domains by content volume, retaining only the 537 domains with confirmed licenses where the Creative Commons designation applied to all text content rather than to embedded media or only a subset of the text on the domain.
+ As an additional check, we did a second round of annotations with the assistance of OpenAI's o3 model. Specifically, we instructed the model to examine each web domain and identify the ones that were openly licensed. We then had a second team manually annotate the cases where the model did not consider the domain openly licensed but the original human auditor did. This resulted in **todo** domains being removed.
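+
+ As an illustration of this license-identification step, the sketch below iterates a WARC file with FastWarc and flags responses whose HTML links to a Creative Commons license. The regular expression and the WARC path are simplified placeholders, not the exact C4Corpus-derived pattern or pipeline used to build this dataset.
+
+ ```python
+ # Minimal sketch: find pages in a Common Crawl WARC that link to a
+ # Creative Commons license. The regex is illustrative only.
+ import re
+
+ from fastwarc.warc import ArchiveIterator, WarcRecordType
+
+ # Matches links such as https://creativecommons.org/licenses/by-sa/4.0/
+ CC_LICENSE_RE = re.compile(
+     rb"creativecommons\.org/(licenses|publicdomain)/[a-z\-]+/\d\.\d",
+     re.IGNORECASE,
+ )
+
+ def cc_licensed_pages(warc_path):
+     """Yield (url, html_bytes) for responses whose HTML mentions a CC license."""
+     with open(warc_path, "rb") as stream:
+         for record in ArchiveIterator(stream, record_types=WarcRecordType.response):
+             url = record.headers.get("WARC-Target-URI")
+             html = record.reader.read()
+             if url and CC_LICENSE_RE.search(html):
+                 yield url, html
+
+ # Placeholder path; the real pipeline runs over full Common Crawl snapshots.
+ # for url, html in cc_licensed_pages("example.warc.gz"):
+ #     print(url)
+ ```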
+
+ We extract the main content of these documents and remove boilerplate using [Resiliparse](https://github.com/chatnoir-eu/chatnoir-resiliparse).
+ We perform URL-level exact deduplication and use Bloom filters to remove near-duplicates with 80% n-gram overlap.
+ We also employ rule-based filters matching those used in [Dolma](https://arxiv.org/abs/2402.00159);
+ namely, we use [C4-derived heuristics](https://arxiv.org/abs/1910.10683) to filter pages containing JavaScript, Lorem Ipsum, and curly braces ({}).
+ In addition, we apply all [Gopher rules](https://arxiv.org/abs/2112.11446) to remove low-quality pages.
+ Per-document license information is available in the `license` entry of the `metadata` field of each example (see the usage example below).
+ Code for collecting, processing, and preparing this dataset is available in the [common-pile GitHub repo](https://github.com/r-three/common-pile).
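+
+ As a rough illustration of this cleaning and filtering stage, the sketch below extracts main content with Resiliparse, uses a set-based stand-in for the URL-level deduplication (Bloom filters are used at scale), and applies a few simplified C4/Gopher-style checks. The function names and thresholds are illustrative, not the full Dolma/C4/Gopher rule sets used for this dataset.
+
+ ```python
+ # Minimal sketch of the cleaning/filtering stage (simplified thresholds).
+ from resiliparse.extract.html2text import extract_plain_text
+
+ # URL-level exact deduplication stand-in; Bloom filters replace this set at scale.
+ seen_urls = set()
+
+ def is_new_url(url: str) -> bool:
+     if url in seen_urls:
+         return False
+     seen_urls.add(url)
+     return True
+
+ def clean_and_filter(html: str) -> str | None:
+     """Return extracted main-content text, or None if a heuristic rejects the page."""
+     # Boilerplate removal / main-content extraction with Resiliparse.
+     text = extract_plain_text(html, main_content=True)
+
+     # C4-style heuristics: drop pages mentioning JavaScript or Lorem Ipsum,
+     # or containing curly braces.
+     lowered = text.lower()
+     if "javascript" in lowered or "lorem ipsum" in lowered or "{" in text:
+         return None
+
+     # Gopher-style heuristics (illustrative subset): document length and
+     # mean word length must fall in a plausible range.
+     words = text.split()
+     if not 50 <= len(words) <= 100_000:
+         return None
+     mean_word_len = sum(len(w) for w in words) / len(words)
+     if not 3 <= mean_word_len <= 10:
+         return None
+
+     return text
+ ```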
+
+ Paper: [The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text](https://huggingface.co/papers/2506.05209)
+ Github: https://github.com/r-three/common-pile
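+
+ For example, the per-document license can be read with the `datasets` library. A minimal sketch follows; the repository id is assumed to match this card, and `metadata` may be stored as a JSON string depending on the release.
+
+ ```python
+ # Sketch: inspect per-document license information (repository id assumed).
+ import json
+
+ from datasets import load_dataset
+
+ ds = load_dataset("common-pile/cccc", split="train", streaming=True)
+ for example in ds.take(3):
+     meta = example["metadata"]
+     if isinstance(meta, str):  # metadata may be serialized as JSON
+         meta = json.loads(meta)
+     print(meta["license"])
+ ```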
 
  ## Dataset Statistics
  | Documents | UTF-8 GB |
  |-----------|----------|
+ | 51,054,412 | 260 |
 
  ## License Issues
+ While we aim to produce datasets with completely accurate licensing information, license laundering and inaccurate metadata can cause us to erroneously assign the incorrect license to some documents (for further discussion of this limitation, please see [our paper](https://huggingface.co/papers/2506.05209)). If you believe you have found an instance of incorrect licensing in this dataset, please [start a discussion](https://github.com/r-three/common-pile/discussions/new) on this repository.
+ This dataset has been updated to remove instances of incorrect licensing.
+ If you require the exact version that Comma v0.1 was trained on for non-commercial research purposes, please [start a discussion](https://github.com/r-three/common-pile/discussions/new) on this repository.
 
  ## Other Versions
+ This is the "raw" version of Creative Commons Common Crawl. If you are looking for the filtered version used to train [Comma v0.1](https://huggingface.co/common-pile/comma-v0.1), you can find it [here](https://huggingface.co/datasets/common-pile/cccc_filtered).
 
  ## Citation
  If you use this dataset, please cite:
 
  journal={arXiv preprint},
  year={2025}
  }
+ ```