Datasets:

Initial AI research Python code dataset built from 239 public GitHub repos, current as of midnight 2024-01-21. The Python `ast` module was used to compile the dataset from class methods, functions, base classes (a lightweight replication of inheritance/polymorphism), imports, args, returns, raises, docstrings, counters, and more. A rough sketch of the extraction approach and a loading example follow below.

The parquet files contain 514,430 rows with the columns: ['name', 'class_name', 'class_bases', 'is_member', 'args', 'class_docstr', 'class_docstr_tok', 'docstr', 'docstr_tok', 'returns', 'code', 'code_tok', 'lstart', 'lend', 'raises', 'filename', 'file_path', 'imports', 'total_objects', 'num_classes', 'num_imports', 'num_all_bases', 'num_bases', 'num_functions', 'num_methods', 'label_desc', 'label_desc_len', 'label_id']. The data is split into many smaller parquet files located in the ./files directory.

Reach out if you want to talk data: [email protected] - more coming soon! data-should-be-free
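
The extraction can be sketched roughly as follows. This is a minimal illustration of using Python's `ast` module to pull functions, classes, bases, args, and docstrings out of a source file; the helper name `extract_objects` and the exact fields collected are assumptions, not the actual pipeline.

```python
import ast

# Minimal sketch (not the actual pipeline): walk a Python source file and
# collect the kinds of objects described above.
def extract_objects(path: str):
    with open(path, "r", encoding="utf-8") as fh:
        tree = ast.parse(fh.read(), filename=path)

    rows = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Functions and methods: name, args, docstring, line span.
            rows.append({
                "name": node.name,
                "args": [a.arg for a in node.args.args],
                "docstr": ast.get_docstring(node),
                "lstart": node.lineno,
                "lend": node.end_lineno,
                "filename": path,
            })
        elif isinstance(node, ast.ClassDef):
            # Classes: name, base classes, docstring, line span.
            rows.append({
                "name": node.name,
                "class_bases": [ast.unparse(b) for b in node.bases],
                "class_docstr": ast.get_docstring(node),
                "lstart": node.lineno,
                "lend": node.end_lineno,
                "filename": path,
            })
    return rows
```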
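
To load the shards, something like the sketch below should work. It assumes pandas with a parquet engine (e.g. pyarrow) is installed and that the shards sit directly under ./files with a `.parquet` extension; the glob pattern is an assumption.

```python
import glob
import pandas as pd

# Read all parquet shards from ./files into a single DataFrame.
# The ./files location comes from the card above; the "*.parquet"
# pattern and the pandas/pyarrow stack are assumptions.
shards = sorted(glob.glob("./files/*.parquet"))
df = pd.concat((pd.read_parquet(p) for p in shards), ignore_index=True)

print(len(df))                  # expected to be 514430 per the card
print(df.columns.tolist())
```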