Update README.md
README.md CHANGED
```diff
@@ -170,20 +170,6 @@ dataset_info:
     num_examples: 6315233
   download_size: 27160071916
   dataset_size: 56294728333.0
-- config_name: pull_requests
-  features:
-  - name: content
-    dtype: string
-  - name: guid
-    dtype: string
-  - name: ctx-size
-    dtype: int64
-  splits:
-  - name: train
-    num_bytes: 52871142902.97541
-    num_examples: 2555843
-  download_size: 16638673686
-  dataset_size: 52871142902.97541
 - config_name: stackoverflow
   features:
   - name: date
```
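The `download_size` and `dataset_size` fields in the hunk above are raw byte counts. A quick stdlib-only helper (illustrative, not part of any Hub tooling) converts them to human-readable units:

```python
def human_size(n: float) -> str:
    """Render a byte count such as 27160071916 as a human-readable string."""
    for unit in ["B", "KB", "MB", "GB", "TB"]:
        if n < 1024:
            return f"{n:.1f} {unit}"
        n /= 1024
    return f"{n:.1f} PB"

# download_size from the diff context above
print(human_size(27160071916))  # -> 25.3 GB
```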
```diff
@@ -263,10 +249,6 @@ configs:
   data_files:
   - split: train
     path: owm/train-*
-- config_name: pull_requests
-  data_files:
-  - split: train
-    path: pull_requests/train-*
 - config_name: stackoverflow
   data_files:
   - split: train
```
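Each entry under `configs:` maps a config name to its data files through a glob pattern such as `owm/train-*`. A minimal sketch of that matching, using hypothetical shard names (real shard file names on the Hub differ):

```python
from fnmatch import fnmatch

# Hypothetical shard listing for illustration only.
files = [
    "owm/train-00000-of-00004.parquet",
    "owm/train-00001-of-00004.parquet",
    "stackoverflow/train-00000-of-00002.parquet",
]

def shards_for(pattern: str, files: list[str]) -> list[str]:
    """Return the files selected by a `data_files` glob such as `owm/train-*`."""
    return [f for f in files if fnmatch(f, pattern)]

print(shards_for("owm/train-*", files))
```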
```diff
@@ -277,21 +259,29 @@ configs:
     path: wikipedia/train-*
 ---
 
-#
-
-
-
-
-- Kaggle: Kaggle notebooks from [Meta-Kaggle-Code](https://www.kaggle.com/datasets/kaggle/meta-kaggle-code) dataset, converted to scripts and prefixed with information on the Kaggle datasets used in the notebook. The file headers have a similar format to Jupyter Structured but the code content is only one single script.
-- Pull Requests: processed GitHub Pull Requests.
-- StackOverflow: stackoverflow conversations from this [StackExchnage dump}(https://archive.org/details/stackexchange).
-- Issues: processed GitHub issues.
-- Owm: [Open-Web-math](https://huggingface.co/datasets/open-web-math/open-web-math) dataset.
-- LHQ: Leandro's High quality dataset, it is a compilation of high quality code files from: APPS-train, CodeContests, GSM8K-train, GSM8K-SciRel, DeepMind-Mathematics, Rosetta-Code, MultiPL-T, ProofSteps, ProofSteps-lean.
-- Wiki: the english subset of the wikipedia dump in [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).
-- Arxiv: Arxiv subset of [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) dataset, further processed the dataset only to retain latex source files and remove preambles, comments, macros, and bibliographies from these files.
-- IR_language: these are intermediate representations of Python, Rust, C++ and other low resource languages.
-- Documentation: documentation of popular libraries.
-
-
-
```
# StarCoder2 Extras

This is the dataset of extra sources (besides Stack v2 code data) used to train the StarCoder2 family of models. It contains the following subsets:

- Jupyter Scripts (`jupyter_scripts`): Jupyter notebooks from The Stack v2 converted to scripts.
- Jupyter Structured (`jupyter_structured`): Jupyter notebooks from The Stack v2 converted to a structured format of code, markdown, and output cells.
- Kaggle (`kaggle`): Kaggle notebooks from the [Meta-Kaggle-Code](https://www.kaggle.com/datasets/kaggle/meta-kaggle-code) dataset, converted to scripts and prefixed with information on the Kaggle datasets used in the notebook. The file headers have a format similar to Jupyter Structured, but the code content is a single script.
- StackOverflow (`stackoverflow`): StackOverflow conversations from this [StackExchange dump](https://archive.org/details/stackexchange).
- Issues (`issues`): processed GitHub issues, the same as the Stack v1 issues.
- OWM (`owm`): the [Open-Web-Math](https://huggingface.co/datasets/open-web-math/open-web-math) dataset.
- LHQ (`lhq`): Leandro's High-Quality dataset, a compilation of high-quality code files from APPS-train, CodeContests, GSM8K-train, GSM8K-SciRel, DeepMind-Mathematics, Rosetta-Code, MultiPL-T, ProofSteps, and ProofSteps-lean.
- Wiki (`wikipedia`): the English subset of the Wikipedia dump in [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).
- ArXiv (`arxiv`): the ArXiv subset of the [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) dataset, further processed to retain only LaTeX source files and to remove preambles, comments, macros, and bibliographies from these files.
- IR_language (`ir_cpp`, `ir_low_resource`, `ir_python`, `ir_rust`): intermediate representations of Python, Rust, C++, and other low-resource languages.
- Documentation (`documentation`): documentation of popular libraries.

For more details on the processing of each subset, check the [StarCoder2 paper](https://arxiv.org/abs/2402.19173) or The Stack v2 [GitHub repository](https://github.com/bigcode-project/the-stack-v2/).

## Usage

```python
from datasets import load_dataset

# replace `jupyter_scripts` with one of the config names listed above
ds = load_dataset("bigcode/starcoder2data-extras", "jupyter_scripts", split="train")
```
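The strings in backticks in the subset list are the config names accepted by `load_dataset`. A small guard (the helper and its name are illustrative, not part of the `datasets` library) fails fast on a typo before any download begins:

```python
# Config names taken from the subset list on this card
# (the removed `pull_requests` config is intentionally absent).
CONFIG_NAMES = {
    "jupyter_scripts", "jupyter_structured", "kaggle", "stackoverflow",
    "issues", "owm", "lhq", "wikipedia", "arxiv",
    "ir_cpp", "ir_low_resource", "ir_python", "ir_rust", "documentation",
}

def check_config(name: str) -> str:
    """Raise early on an unknown config instead of failing mid-download."""
    if name not in CONFIG_NAMES:
        raise ValueError(f"unknown config {name!r}; expected one of {sorted(CONFIG_NAMES)}")
    return name

# ds = load_dataset("bigcode/starcoder2data-extras", check_config("owm"), split="train")
```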
|