---
annotations_creators:
- author
license:
- gpl-3.0
multilinguality:
- monolingual
pretty_name: GitHub-Python
dataset_name: github-python
dataset_type: code
tags:
- code
- python
size_categories:
- 100K<n<1M
---

# GitHub-Python

## Files

```
mega_licensed_corpus_redacted.txt   licensed subset corpus
python_files_elaborated.txt         elaborated collection (URL list)
custom_tokens_vocab.txt             tab-separated (`\t`) vocabulary file
```

## Important Note

For technical reasons, separate splits have been stored as separate Dataset instances. See https://huggingface.co/datasets/jblitzar/github-python-metadata, https://huggingface.co/datasets/jblitzar/github-python-meta-elaborated, and https://huggingface.co/datasets/jblitzar/github-python-corpus.

### File separator

Individual files are concatenated with the sentinel line:

```
#
```

Anything following the sentinel until the next sentinel (or EOF) is the source code of one file.

---

## Dataset variants

### 1. Licensed Subset (`mega_licensed_corpus_redacted.txt`)

• 53 K permissively-licensed files (MIT/BSD/Apache/ISC/Unlicense).
• All API keys & credentials removed.
• Ready for redistribution & commercial use (respect upstream NOTICE files).

### 2. Elaborated Collection (`python_files_elaborated.txt`)

• 186 K files from a much larger crawl.
• Contains **GPL / LGPL / AGPL and other copyleft** licenses.
• Shipped _as a URL list_ + metadata CSV; you must download the files yourself (`datasets.load_dataset` streaming, `wget`, etc.).
• **No license filtering or secret redaction performed** – use with caution.

When first loading the dataset, decide which variant aligns with your use case (e.g. proprietary model training → Licensed Subset only).

---

## Collection methodology

1. **Repository discovery**
   - Queried the GitHub REST API for projects with **≥ 10 stars** (earlier iterations used 100+, later expanded for coverage).
   - Only repositories with primary language _Python_ and a last commit in 2015 or later.
2. **File filtering**
   - Retain files whose **size ∈ [1 KB, 100 KB]**.
   - Exclude common build/packaging scripts (`setup.py`, `__init__.py`, etc.).
3. **License compliance**
   - Allowed: MIT, Apache-2.0, BSD-2/3-Clause, ISC, Unlicense.
   - GPL, LGPL, AGPL and proprietary licenses were **excluded**.
4. **Deduplication**
   - Unique file SHA hashes; duplicates skipped.
5. **Formatting & cleaning**
   - Formatted with _autopep8_ to normalise whitespace.
   - A custom script removed trailing whitespace & normalised newlines.
6. **Secret redaction**
   - `truffleHog` + a custom regex pass removed >150 active credentials.
   - The redacted corpus is stored as `mega_licensed_corpus_redacted.txt`.

---

## Custom tokenisation

The accompanying `custom_tokens_vocab.txt` implements a **Python-aware sub-token scheme**:

1. Strip doc-strings & comments.
2. Split on:
   - CamelCase boundaries (`CamelCase` → `Camel`, `Case`)
   - Underscores, spaces
   - Indentation & newlines (preserved as dedicated tokens)
3. Rare tokens (frequency < 10) were dropped, yielding a 443 K-entry vocabulary.

Example:

```python
def helloWorld(value):
    return value + 1
```

tokenises to:

```
def hello world ( value ) return value + 1
```

---

## Usage

```python
from datasets import load_dataset

ds = load_dataset("jblitzar/github-python-corpus", split="train")
print(ds[0]["code"][:300])  # raw source code
```

If you prefer token-level examples (e.g. to reduce memory usage), map the tokenizer over the dataset:

```python
from tokenizers import Tokenizer

tok = Tokenizer.from_file("custom_tokens_vocab.txt")

def encode(ex):
    ex["input_ids"] = tok.encode(ex["code"]).ids
    return ex

ds = ds.map(encode, remove_columns=["code"])
```

---

## Ethical considerations & limitations

- **Licenses respected** – only permissive licenses are included; retain NOTICE files when redistributing derivative works.
- **Secrets removed** – automated & manual audits were performed, yet users **must not assume zero secrets**; re-audit before public deployments.
- **Code quality** – projects vary in style & correctness. Models trained on this data may replicate bugs or vulnerable patterns.
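As a rough illustration of the sub-token splitting rules described above, the following is a minimal sketch. It is not the tokenizer that produced `custom_tokens_vocab.txt`; the `subtokenize` helper and its regular expressions are our own, and it omits the docstring/comment-stripping and indentation-token passes.

```python
import re

def subtokenize(line):
    """Hypothetical sketch of the sub-token scheme: split identifiers on
    CamelCase boundaries, underscores and spaces, then lowercase the pieces.
    Punctuation characters become standalone tokens."""
    tokens = []
    # Match identifiers, integer literals, or single punctuation characters.
    for match in re.finditer(r"[A-Za-z_]\w*|\d+|[^\s\w]", line):
        word = match.group()
        if word[0].isalpha() or word[0] == "_":
            # Insert a space at lower->upper boundaries, then split on
            # underscores and whitespace.
            spaced = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", word)
            tokens.extend(p.lower() for p in spaced.replace("_", " ").split())
        else:
            tokens.append(word)
    return tokens

print(" ".join(subtokenize("def helloWorld(value): return value + 1")))
# def hello world ( value ) : return value + 1
```

Unlike the worked example above, this sketch keeps the `:` token; how the real pipeline treats punctuation is not specified in this card.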
---

## Citation

If you use this dataset, please cite:

```
@misc{github-python-2024,
  author       = {JBlitzar},
  title        = {GitHub-Python: A Permissively Licensed Corpus of Python Code},
  year         = {2024},
  howpublished = {\url{https://huggingface.co/datasets/jblitzar/github-python}},
  note         = {Version 1.0}
}
```

---

## License

The dataset card and aggregation scripts are licensed under **GPLv3**. Each code snippet remains under its **original repository license** (MIT, Apache-2.0, BSD, ISC, etc.). Users must comply with upstream notices when redistributing code or derivatives.