Upload folder using huggingface_hub

Files changed:
- .gitattributes                         +2 -0
- README.md                              +51 -18
- create_elaborated_metadata_table.py    +197 -0
- python_files_elaborated.txt            +3 -0
- python_files_elaborated_metadata.csv   +3 -0
.gitattributes
CHANGED
@@ -58,3 +58,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
 mega_licensed_corpus_redacted.txt filter=lfs diff=lfs merge=lfs -text
+python_files_elaborated.txt filter=lfs diff=lfs merge=lfs -text
+python_files_elaborated_metadata.csv filter=lfs diff=lfs merge=lfs -text
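Both newly tracked files are stored via Git LFS, so a plain clone without LFS yields only pointer stubs. A minimal sketch for fetching them through `huggingface_hub` instead (the `repo_id` below is a placeholder, not the dataset's actual id):

```python
# Sketch: fetch the two new LFS-backed files via the Hub API instead of git.
# repo_id is a placeholder assumption; substitute the real dataset id.
from huggingface_hub import hf_hub_download

for fname in ("python_files_elaborated.txt",
              "python_files_elaborated_metadata.csv"):
    path = hf_hub_download(
        repo_id="<user>/github-python",  # placeholder
        repo_type="dataset",
        filename=fname,
    )
    print("downloaded to", path)
```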
README.md
CHANGED
@@ -11,32 +11,42 @@ dataset_type: code
 tags:
 - code
 - python
-- code-generation
 size_categories:
 - 100K<n<1M
 task_categories:
 - text-generation
-task_ids:
-- code-completion
 ---
 
-# GitHub-Python
+# GitHub-Python — Licensed & Elaborated Variants
 
+This repository ships **two complementary Python-code corpora** extracted from
+public GitHub:
+
+- **Licensed Subset** – strictly _permissive-licensed_ files suitable for
+  commercial redistribution / model training (main corpus used in our
+  experiments).
+- **Elaborated Collection** – a broader crawl that additionally contains files
+  under _copyleft_ or unclear licenses (GPL/AGPL/LGPL, etc.). Useful for
+  analysis or pre-training where license mixing is acceptable.
+
+Both variants target **code-completion / generation** research.
 
 ## Dataset at a glance
 
+|                     | **Licensed Subset** | **Elaborated Collection** |
+| ------------------- | ------------------- | ------------------------- |
+| Files (.py)         | 53,017              | 186,066                   |
+| Unique repositories | 16,447              | 59,852                    |
+| Repository owners   | 12,515              | 43,517                    |
+| Compressed size     | 732 MB              | 2.4 GB \*                 |
+| Vocabulary (tokens) | 443,431             | 443,431 †                 |
+| License coverage    | Permissive only     | Mixed (perm. + copyleft)  |
+| Secrets redacted    | ✅                  | ⚠️ not guaranteed         |
+| Time window         | ≥ 2015-01-01        | ≥ 2015-01-01              |
+
+\* estimated – elaborated corpus is distributed as a raw file list, not a single
+  text file.
+† same tokenizer file is shared by both variants.
 
 Numbers were obtained from the final redacted corpus and companion metadata.
@@ -46,8 +56,10 @@
 
 ```
 huggingface_dataset/
-├─ mega_licensed_corpus_redacted.txt
-├─ python_files.txt
+├─ mega_licensed_corpus_redacted.txt       # Licensed Subset – concatenated code
+├─ python_files.txt                        # Licensed Subset – raw file URLs
+├─ python_files_elaborated.txt             # Elaborated Collection – raw file URLs
+├─ python_files_elaborated_metadata.csv    # Elaborated Collection metadata
 └─ custom_tokens_vocab.txt                 # `<token>\t<id>` vocabulary file
 ```
 
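A side note on `custom_tokens_vocab.txt`: the tree comment above gives its format as one `<token>\t<id>` pair per line. A minimal loading sketch under that assumption:

```python
# Sketch: load custom_tokens_vocab.txt, assuming one "<token>\t<id>" pair
# per line, as the tree comment states.
vocab = {}
with open("custom_tokens_vocab.txt", encoding="utf-8") as f:
    for line in f:
        token, token_id = line.rstrip("\n").split("\t")
        vocab[token] = int(token_id)

print(len(vocab))  # the README table reports 443,431 tokens
```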
@@ -64,6 +76,27 @@ code of one file.
 
 ---
 
+## Dataset variants
+
+### 1. Licensed Subset (`mega_licensed_corpus_redacted.txt`)
+
+• 53 K permissively-licensed files (MIT/BSD/Apache/ISC/Unlicense).
+• All API keys & credentials removed.
+• Ready for redistribution & commercial use (respect upstream NOTICE files).
+
+### 2. Elaborated Collection (`python_files_elaborated.txt`)
+
+• 186 K files from a much larger crawl.
+• Contains **GPL / LGPL / AGPL and other copyleft** licenses.
+• Shipped _as URL list_ + metadata CSV; you must download the files yourself
+  (`datasets.load_dataset` streaming, `wget`, etc.).
+• **No license filtering or secret redaction performed** – use with caution.
+
+When first loading the dataset, decide which variant aligns with your use case
+(e.g. proprietary model training → Licensed Subset only).
+
+---
+
 ## Collection methodology
 
 1. **Repository discovery**
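Since the Elaborated Collection ships only as a URL list, the "download the files yourself" step mentioned in the hunk above might look like the following minimal sketch (`requests` and the naive output naming are assumptions, not part of the dataset):

```python
# Sketch: download the first few Elaborated Collection files from the URL list.
# requests is one possible HTTP client; the README only says "wget, etc.".
import requests

with open("python_files_elaborated.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls[:3]:  # demo: first three files only
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    out_name = url.rsplit("/", 1)[-1]  # naive naming; collisions are possible
    with open(out_name, "w", encoding="utf-8") as out:
        out.write(resp.text)
```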
create_elaborated_metadata_table.py
ADDED
@@ -0,0 +1,197 @@
+#!/usr/bin/env python3
+"""
+Create metadata table for the elaborated GitHub Python dataset.
+
+This script parses the python_files_elaborated.txt file containing GitHub URLs
+and extracts repository metadata (owner, repo name, file path).
+It generates a CSV file with this information and prints statistics.
+
+The elaborated dataset contains more files than the licensed subset and
+may include repositories with various licenses (not just permissive ones).
+"""
+
+import csv
+import os
+import re
+import pandas as pd
+from collections import Counter, defaultdict
+from tqdm import tqdm
+from urllib.parse import urlparse
+
+# Input and output files
+ELABORATED_FILES_LIST = "python_files_elaborated.txt"
+LICENSED_FILES_LIST = "python_files.txt"
+OUTPUT_CSV = "python_files_elaborated_metadata.csv"
+
+# Regular expression to parse GitHub raw URLs
+# Format: https://raw.githubusercontent.com/OWNER/REPO/BRANCH/PATH
+GITHUB_RAW_PATTERN = r"https://raw\.githubusercontent\.com/([^/]+)/([^/]+)/[^/]+/(.*)"
+
+
+def parse_github_url(url):
+    """
+    Parse a GitHub raw URL to extract owner, repo name, and file path.
+
+    Args:
+        url (str): GitHub raw URL
+
+    Returns:
+        tuple: (owner, repo_name, file_path) or None if URL doesn't match pattern
+    """
+    match = re.match(GITHUB_RAW_PATTERN, url)
+    if match:
+        owner, repo_name, file_path = match.groups()
+        return owner, repo_name, file_path
+    return None
+
+
+def create_metadata_table(file_list_path):
+    """
+    Create a metadata table from a list of GitHub URLs.
+
+    Args:
+        file_list_path (str): Path to file containing GitHub URLs
+
+    Returns:
+        list: List of dictionaries with metadata
+    """
+    metadata = []
+
+    # Read URLs from file
+    with open(file_list_path, "r") as f:
+        urls = [line.strip() for line in f if line.strip()]
+
+    print(f"Processing URLs from {file_list_path}...")
+
+    # Parse each URL and extract metadata
+    for url in tqdm(urls, desc="Parsing URLs"):
+        parsed = parse_github_url(url)
+        if parsed:
+            owner, repo_name, file_path = parsed
+            metadata.append({
+                "owner": owner,
+                "repo_name": repo_name,
+                "file_path": file_path,
+                "url": url
+            })
+
+    return metadata
+
+
+def generate_statistics(metadata, dataset_name):
+    """
+    Generate and print statistics for the dataset.
+
+    Args:
+        metadata (list): List of dictionaries with metadata
+        dataset_name (str): Name of the dataset for display
+    """
+    # Count unique repositories and owners
+    repos = set((item["owner"], item["repo_name"]) for item in metadata)
+    owners = set(item["owner"] for item in metadata)
+
+    # Count files by repository
+    repo_counts = Counter((item["owner"], item["repo_name"]) for item in metadata)
+    top_repos = repo_counts.most_common(10)
+
+    # Count files by owner
+    owner_counts = Counter(item["owner"] for item in metadata)
+    top_owners = owner_counts.most_common(5)
+
+    # Count file extensions
+    extensions = Counter(os.path.splitext(item["file_path"])[1] for item in metadata)
+
+    # Print statistics
+    print(f"\n=== {dataset_name} Statistics ===")
+    print(f"Total files: {len(metadata)}")
+    print(f"Unique repositories: {len(repos)}")
+    print(f"Unique repository owners: {len(owners)}")
+
+    print("\nTop 10 repositories by file count:")
+    for (owner, repo), count in top_repos:
+        print(f"  {owner}/{repo}: {count} files")
+
+    print("\nFile extensions:")
+    for ext, count in extensions.most_common():
+        if ext:  # Skip empty extensions
+            print(f"  {ext}: {count} files")
+
+    print("\nTop 5 repository owners:")
+    for owner, count in top_owners:
+        print(f"  {owner}: {count} files")
+
+    return {
+        "total_files": len(metadata),
+        "unique_repos": len(repos),
+        "unique_owners": len(owners),
+        "top_repos": top_repos,
+        "top_owners": top_owners,
+        "extensions": extensions
+    }
+
+
+def compare_datasets(elaborated_stats, licensed_stats):
+    """
+    Compare statistics between elaborated and licensed datasets.
+
+    Args:
+        elaborated_stats (dict): Statistics for elaborated dataset
+        licensed_stats (dict): Statistics for licensed dataset
+    """
+    print("\n=== Dataset Comparison ===")
+    print(f"Elaborated dataset: {elaborated_stats['total_files']} files")
+    print(f"Licensed dataset: {licensed_stats['total_files']} files")
+    print(f"Additional files in elaborated dataset: {elaborated_stats['total_files'] - licensed_stats['total_files']} files")
+
+    # Calculate percentage increase
+    pct_increase = ((elaborated_stats['total_files'] / licensed_stats['total_files']) - 1) * 100
+    print(f"Percentage increase: {pct_increase:.1f}%")
+
+    # Compare repositories
+    print(f"\nElaborated dataset: {elaborated_stats['unique_repos']} repositories")
+    print(f"Licensed dataset: {licensed_stats['unique_repos']} repositories")
+
+    # Compare owners
+    print(f"\nElaborated dataset: {elaborated_stats['unique_owners']} repository owners")
+    print(f"Licensed dataset: {licensed_stats['unique_owners']} repository owners")
+
+    # Find repositories unique to the elaborated dataset (note: this compares
+    # only the two top-10 lists, not the full repository sets)
+    elaborated_repos = set((owner, repo) for (owner, repo), _ in elaborated_stats['top_repos'])
+    licensed_repos = set((owner, repo) for (owner, repo), _ in licensed_stats['top_repos'])
+    unique_to_elaborated = elaborated_repos - licensed_repos
+
+    if unique_to_elaborated:
+        print("\nTop repositories unique to elaborated dataset:")
+        for owner, repo in list(unique_to_elaborated)[:5]:
+            print(f"  {owner}/{repo}")
+
+
+def main():
+    # Process elaborated dataset
+    elaborated_metadata = create_metadata_table(ELABORATED_FILES_LIST)
+
+    # Save to CSV
+    with open(OUTPUT_CSV, "w", newline="") as f:
+        writer = csv.DictWriter(f, fieldnames=["owner", "repo_name", "file_path", "url"],
+                                quoting=csv.QUOTE_MINIMAL)
+        writer.writeheader()
+        writer.writerows(elaborated_metadata)
+
+    print(f"Metadata saved to {OUTPUT_CSV}")
+
+    # Generate statistics for elaborated dataset
+    elaborated_stats = generate_statistics(elaborated_metadata, "Elaborated Dataset")
+
+    # Process licensed dataset for comparison
+    if os.path.exists(LICENSED_FILES_LIST):
+        licensed_metadata = create_metadata_table(LICENSED_FILES_LIST)
+        licensed_stats = generate_statistics(licensed_metadata, "Licensed Dataset")
+
+        # Compare datasets
+        compare_datasets(elaborated_stats, licensed_stats)
+    else:
+        print(f"Warning: {LICENSED_FILES_LIST} not found. Cannot compare datasets.")
+
+
+if __name__ == "__main__":
+    main()
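As a quick sanity check of `GITHUB_RAW_PATTERN`, this is what `parse_github_url` returns for a made-up raw URL (illustrative only, not an entry from the dataset):

```python
# Illustrative only; octocat/Hello-World is a stand-in, not a dataset entry.
url = "https://raw.githubusercontent.com/octocat/Hello-World/master/src/app.py"
print(parse_github_url(url))
# ('octocat', 'Hello-World', 'src/app.py'); the branch segment is matched but dropped
```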
python_files_elaborated.txt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b20e7f10e0a17de893f1a172abce460b41b937e3ea83e2c81f8773982db38578
+size 15102167
python_files_elaborated_metadata.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:46359841e7cf01ebd2232ff24f310644481798ba85138db310dee8fb29417bbd
+size 22895119
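This pointer resolves to the CSV written by `main()` in the script above, with columns `owner`, `repo_name`, `file_path`, and `url`. A minimal sketch for reading it back:

```python
# Sketch: read back the metadata CSV; column names come from the script's
# DictWriter fieldnames above.
import csv

with open("python_files_elaborated_metadata.csv", newline="") as f:
    rows = list(csv.DictReader(f))

print(len(rows), "rows")  # expected: 186,066 per the README table
print(rows[0]["owner"], rows[0]["repo_name"], rows[0]["file_path"])
```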