datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card
---|---|---|---|---|---|---|---|---|
fancyzhx/ag_news | fancyzhx | "2024-03-07T12:02:37Z" | 29,078 | 153 | [
"task_categories:text-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
paperswithcode_id: ag-news
pretty_name: AG’s News Corpus
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': World
'1': Sports
'2': Business
'3': Sci/Tech
splits:
- name: train
num_bytes: 29817303
num_examples: 120000
- name: test
num_bytes: 1879474
num_examples: 7600
download_size: 19820267
dataset_size: 31696777
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for "ag_news"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html](http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 31.33 MB
- **Size of the generated dataset:** 31.70 MB
- **Total amount of disk used:** 63.02 MB
### Dataset Summary
AG is a collection of more than 1 million news articles. News articles have been
gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of
activity. ComeToMyHead is an academic news search engine which has been running
since July, 2004. The dataset is provided by the academic community for research
purposes in data mining (clustering, classification, etc), information retrieval
(ranking, search, etc), xml, data compression, data streaming, and any other
non-commercial activity. For more information, please refer to the link
http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html .
The AG's news topic classification dataset is constructed by Xiang Zhang
([email protected]) from the dataset above. It is used as a text
classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann
LeCun. Character-level Convolutional Networks for Text Classification. Advances
in Neural Information Processing Systems 28 (NIPS 2015).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 31.33 MB
- **Size of the generated dataset:** 31.70 MB
- **Total amount of disk used:** 63.02 MB
An example of 'train' looks as follows.
```
{
"label": 3,
"text": "New iPad released Just like every other September, this one is no different. Apple is planning to release a bigger, heavier, fatter iPad that..."
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `World` (0), `Sports` (1), `Business` (2), `Sci/Tech` (3).
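As a quick, hedged usage sketch (assuming the 🤗 `datasets` library is installed), the snippet below loads the training split and maps the integer label of one example back to its class name:
```python
from datasets import load_dataset

# Load the training split (120,000 examples).
train_ds = load_dataset("fancyzhx/ag_news", split="train")

example = train_ds[0]
# `label` is a ClassLabel feature, so int2str() maps 0-3 back to
# "World", "Sports", "Business" and "Sci/Tech".
print(train_ds.features["label"].int2str(example["label"]))
print(example["text"])
```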
### Data Splits
| name |train |test|
|-------|-----:|---:|
|default|120000|7600|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{Zhang2015CharacterlevelCN,
title={Character-level Convolutional Networks for Text Classification},
author={Xiang Zhang and Junbo Jake Zhao and Yann LeCun},
booktitle={NIPS},
year={2015}
}
```
### Contributions
Thanks to [@jxmorris12](https://github.com/jxmorris12), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@lewtun](https://github.com/lewtun) for adding this dataset. |
m-a-p/PIN-100M | m-a-p | "2025-02-21T20:48:51Z" | 29,033 | 5 | [
"language:en",
"language:zh",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2406.13923",
"region:us",
"multimodal",
"interleaved"
] | null | "2024-05-25T04:58:09Z" | ---
license: apache-2.0
language:
- en
- zh
configs:
- config_name: pin
data_files:
- split: train
path:
- data/DocLayNet/DocLayNet.jsonl
tags:
- multimodal
- interleaved
size_categories:
- 100B<n<1T
pretty_name: pin-100m
---
# PIN-100M
This is the full version of the dataset introduced in the paper "PIN: A Knowledge-Intensive Dataset for Paired and Interleaved Multimodal Documents".
Paper: https://arxiv.org/abs/2406.13923
This dataset contains 100M samples in the PIN format.
**Please note that the required storage space exceeds 150TB!!**
🚀 News
[ 2024.12.20 ] !NEW! 🔥The currently available version is not the complete version; this project is still ongoing! (It has been released early because we reached the private storage limit on Hugging Face.)
<img src="assets/intro.png">
## 0 Usage
Download ALL files
```bash
huggingface-cli download m-a-p/PIN-100M --repo-type=dataset --resume-download --local-dir "your_local_path"
```
Download ONLY **Jsonl** files
```bash
huggingface-cli download m-a-p/PIN-100M --repo-type=dataset --resume-download --include "*.jsonl" --local-dir "your_local_path"
```
Decompression
```bash
cat data.tar.part* > data.tar
tar -xvf data.tar
```
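Once downloaded, each subset's JSONL file can be read line by line with the standard library. The sketch below is a minimal example, assuming the repository was downloaded to `your_local_path` as above; the DocLayNet path comes from the `configs` section of this card:
```python
import json
from pathlib import Path

# Path taken from the `configs` entry above; adjust to your local layout.
jsonl_path = Path("your_local_path") / "data" / "DocLayNet" / "DocLayNet.jsonl"

with jsonl_path.open("r", encoding="utf-8") as f:
    for line in f:
        sample = json.loads(line)
        # Each line is one PIN sample: markdown text plus image references.
        print(sample["id"], sample["meta"]["source_dataset"], len(sample["md"]))
        break  # remove to iterate over the whole file
```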
## 1 Dataset statistics
**Work in progress.**
Storage space statistics may contain some errors, so these values are for reference only.
## 2 Data Structure
### 2.1 Subsets
We process 8 subsets: PIN-PMC, DocLayNet, Linux-CN, chinese-markdown, OBELICS, MMC4, leetcode, and PG19.
<img src="assets/dataset-example.png">
Note: We do not release the PIN-arXiv subset in the preview version.
### 2.2 Folder Structure
The `content_image` directory holds the images referenced within the markdown text, while the `overall_image` directory holds the overall visual renderings of the markdown files. The `JSONL` files encapsulate the textual content along with the associated metadata.
An example subset:
```
example_dataset/
│
├── content_image/
├── overall_image/
└── example_dataset.jsonl
```
A subset with multiple parts:
```
example_dataset/
│
├── part00/
│ ├── content_image/
│ ├── overall_image/
│ └── part00.jsonl
│
├── part01/
│ ├── content_image/
│ ├── overall_image/
│ └── part01.jsonl
│
... - More similar parts
```
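For subsets split into multiple parts, a small sketch like the following (directory names hypothetical, matching the layout above) can enumerate the parts and their JSONL files:
```python
from pathlib import Path

# Hypothetical subset directory following the multi-part layout above.
subset_dir = Path("your_local_path/example_dataset")

for part_dir in sorted(subset_dir.glob("part*")):
    jsonl_file = part_dir / f"{part_dir.name}.jsonl"
    print(part_dir.name, "->", jsonl_file, jsonl_file.exists())
```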
### 2.3 content_image Folder
This folder contains all the content images used in the markdown files.
Note: All images need to be converted to PNG format. The filename should be unique within the folder.
```
content_image/
│
├── 1.png
├── 2.png
...
```
### 2.4 overall_image Folder
This folder contains all the overall images for each sample.
Note: All images need to be converted to PNG format. The filename should be unique within the folder.
```
overall_image/
│
├── 1.png
├── 2.png
...
```
### 2.5 JSON Lines Format
We provide a detailed example of the annotations included with each data entry.
```
{
"id": 1919,
"meta": {
"language": "en",
"oi_exist": true,
"oi_source": "compiling",
"source_dataset": "example_source (e.g. OBELICS)",
"ori_meta": {
"document_url": "https://www.example.com/2022/02/21/example/",
...
},
"doc_id": 1997,
"page_id": 0,
"date_download": "2024-03-01"
},
"license": "CC-BY-4.0",
"quality_signals": {
"doc_length": 100,
...
},
"content_image": [
"content_image/1997-0.png",
"content_image/1997-1.png"
],
"md": "<img src='content_image/1997-0.png'>\n\nThis is a fake sample data line, just for show.\n\nThis is a fake sample data line, just for show.\n\n<img src='content_image/1997-1.png'>\n\nThis is a fake sample data line, just for show.",
"overall_image": "overall_image/1997.png"
}
```
**Field Descriptions:**
- **id**: Unique identifier for each entry.
- **meta**: Metadata for each multimodal document entry.
- **language**: The document's language, such as Chinese (zh) or English (en).
- **source_dataset**: If the document is converted from another dataset, the original dataset name is noted here; otherwise, it is None.
- **doc_id**: A unique document identifier providing name and other details.
- **page_id**: A unique page identifier indicating the document's page number. If there is only one page, this is None. Page IDs are usually numbered starting from 1 in multi-page documents.
- **date_download**: The date the document was downloaded.
- **ori_meta**: Original metadata from the dataset, if available; otherwise, None.
- **oi_exist**: Indicates whether an overall image exists. True or False.
- **oi_source**: Source of the overall image; 'ori' for images taken from the original dataset and 'compiling' for images generated through code compilation. If this tag is missing, the image is likely compiled.
- ...
- **quality_signals**: Quality indicators inspired by the design of redpajama v2.
- **doc_length**: Length of the document.
- ...
- **content_image**: List of images mentioned in the document; None if no images are present.
- **overall_image**: Path to the corresponding overall image. (A list or a single path)
- **md**: Contains the markdown content.
- **license**: License information for the current sample.
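Continuing the reading sketch from the Usage section, the following hedged example (Pillow assumed installed; paths hypothetical) resolves the relative image paths of one sample against its subset directory:
```python
import json
from pathlib import Path

from PIL import Image  # Pillow is assumed to be installed

# Hypothetical subset directory containing the JSONL file and image folders.
subset_dir = Path("your_local_path/data/DocLayNet")

with (subset_dir / "DocLayNet.jsonl").open("r", encoding="utf-8") as f:
    sample = json.loads(f.readline())

# `content_image` lists the images referenced inside `md`; it may be None.
for rel_path in sample.get("content_image") or []:
    with Image.open(subset_dir / rel_path) as img:
        print(rel_path, img.size)

# `overall_image` can be a single path or a list (see the PIN-PMC example below).
overall = sample["overall_image"]
for rel_path in overall if isinstance(overall, list) else [overall]:
    print("overall image:", subset_dir / rel_path)
```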
## 3 Examples of jsonl files
We selected samples consisting of short markdown documents.
### 3.1 An example of DocLayNet
Notably, the dataset's overall images are converted from the original dataset's PDFs into PNG format.
```json
{
"id": 0,
"meta": {
"language": "en",
"oi_exist": true,
"oi_source": "ori",
"source_dataset": "DocLayNet",
"ori_meta": null,
"doc_id": "NYSE_F_2004.pdf",
"page_id": "0",
"date_download": "2024-3-24"
},
"quality_signals": null,
"license": "https://cdla.io/permissive-1-0/",
"content_image": [
"content_image/34102.jpg"
],
"overall_image": "overall_image/3562e47265520f7a72f3eac73aadfe19a78531698c3b50d7670b8ad9b214106b.png",
"md": "<img src='content_image/34102.jpg'>\n\n# Ford Motor Company / 2004 Annual Report \n\n# R W A R D F O R W A R D \n\n"
}
```
### 3.2 An example of OBELICS
```json
{
"id": 466502,
"meta": {
"language": "en",
"oi_exist": true,
"oi_source": "compiling",
"source_dataset": "OBELICS",
"ori_meta": {
"document_url": "https://www.donegaldaily.com/2022/02/21/watch-incredible-storm-surge-at-portsalon-golf-club/",
"unformatted_src": "https://www.donegaldaily.com/wp-content/uploads/2022/02/Screenshot-2022-02-21-at-17.54.30.jpg",
"src": "https://www.donegaldaily.com/wp-content/uploads/2022/02/Screenshot-2022-02-21-at-17.54.30.jpg",
"formatted_filename": "Screenshot at",
"rendered_width": 817,
"rendered_height": 419,
"original_width": 817,
"original_height": 419,
"format": "jpeg",
"general_meta": {
"url": "https://www.donegaldaily.com/2022/02/21/watch-incredible-storm-surge-at-portsalon-golf-club/",
"warc_filename": "crawl-data/CC-MAIN-2022-27/segments/1656103271864.14/warc/CC-MAIN-20220626192142-20220626222142-00308.warc.gz",
"warc_record_offset": 795020636,
"warc_record_length": 31271
}
},
"doc_id": 98496,
"page_id": 0,
"date_download": "2024-4-22"
},
"md": "<img src='content_image/98496-0.png'>\n\nThe golf course at Portsalon Golf Club took a battering today as a result of Storm Franklin.\n\nDonegal had been left battered and bruised overnight after Storm Franklin ripped across the county.\n\nThere were trees down on the approach roads to Donegal Town and in Gartan.\n\nThere were also trees down in Inishowen while there is also heavy water reported along the sides of roads with motorists asked to slow down and not put themselves in danger.\n\nDonegal’s coastline took a huge impact with massive waves reported along the coastline around the county.\n\nThe video, taken by Johnny Shields was taken from the tee box of the third hole.",
"license": "CC-BY-4.0",
"quality_signals": null,
"content_image": [
"content_image/98496-0.png"
],
"overall_image": "overall_image/98496-0.png"
}
```
### 3.3 An example of chinese-markdown
```json
{
"id": 7,
"meta": {
"language": "zh",
"oi_exist": true,
"oi_source": "compiling",
"source_dataset": "chinese-markdown",
"ori_meta": null,
"doc_id": 7,
"page_id": null,
"date_download": "2024-04-30"
},
"md": "---\ntitle: 常见问题 QA\ncategory: 其它\norder: 1\n---\n\n> 持续更新中...\n> 如有问题可以到 <https://github.com/alibaba/ice/issues/new> 反馈\n\n## ICE 的浏览器兼容策略是什么\n\n由于 ICE 优先使用 React 16+,其需要的最低 IE 版本为 11,如果您需要在以下的版本使用,您可能需要引入一些 polyfill 来支持 `Map`, `Set` 等特性。参考[React 官网说明](https://reactjs.org/blog/2017/09/26/react-v16.0.html#javascript-environment-requirements)。\n\n以下代码可以帮助你在低版本 IE 下自动跳转到我们提供的提示浏览器升级页面。当然您也可以使用自定义的浏览器升级页面。\n\n```\n<!--[if lt IE 11]>\n<script>location.href = \"//www.taobao.com/markets/tbhome/ali-page-updater\"; </script>\n<![endif]-->\n```\n\n添加如上代码后,如果使用 IE11 及以下浏览器访问页面,则会自动跳转到统一引导升级浏览器的页面。\n\n## WebStorm/IDEA 编辑器卡顿现象\n\n由于项目在安装依赖后,产生文件夹 `node_modules` 含有较多的碎小文件,编辑器在索引文件引起的卡顿。\nWebStorm 中尤为明显,可通过 exclude `node_modules` 目录,不需要检索该文件夹下的内容。\n\n## 如何设置网页在浏览器 Tab 上面的 Icon (favicon)\n\n细心的同学可能会看到页面在浏览器 Tab 上面会有自定义的 Icon:\n\n\n\n如果你想要在自己站点上面加上这个 Icon 可以按照如下步骤添加:\n\n1. 准备一个 Icon,文件格式可以为 `.png` 或者 `.ico`,正方形,分辨率可以是 32x32px 或者 64x64px 文件体积要求尽可能小。\n2. 上传 CDN 拿到一个 url 或者在自己服务器配置静态资源服务\n3. 在 HTML 页面 `<head>` 标签里面添加如下代码:`<link rel=\"shortcut icon\" href=\"your-icon-url\">`\n \n\n这样就添加成功啦!\n\n## 如何在页面显示原始的 HTML 内容\n\n出于安全方面的考虑,React 默认会将节点中 html 代码进行转义,比如:\n\n```jsx\nclass Demo extends Component {\n render() {\n const content = 'hello <span>world</span>';\n return <div>{content}</div>;\n }\n}\n\n// 输出 hello <span>world</span>\n```\n\n如上,`<span>` 标签并不会在页面上被解析,而是被当成字符串输出了。React 提供了 `dangerouslySetInnerHTML` 属性帮助我们进行类似 `innerHTML` 的操作:\n\n```jsx\nclass Demo extends Component {\n render() {\n const content = 'hello <span>world</span>';\n return <div dangerouslySetInnerHTML={{ __html: content }} />;\n }\n}\n\n// 输出 hello world\n```\n\n更多内容请参考 [Dangerously Set innerHTML](https://reactjs.org/docs/dom-elements.html#dangerouslysetinnerhtml)\n\n## 之前创建的项目,遇到如下报错怎么办\n\n\n\n这是由于 ES6 Modules 的标准在物料中不兼容导致的。您可以把 `src/navs.js` 中最后一行修改为:\n\n```js\nexport const headerNavs = transform([\n ...autoGenHeaderNavs,\n ...customHeaderNavs,\n]);\n\nexport const asideNavs = transform([...autoGenAsideNavs, ...customAsideNavs]);\n```",
"license": "MIT",
"quality_signals": null,
"content_image": [
"content_image/7-0.png"
],
"overall_image": "overall_image/7.png"
}
```
### 3.4 An example of leetcode
```json
{
"id": 1,
"meta": {
"language": "en",
"doc_id": 1,
"page_id": null,
"oi_exist": true,
"oi_source": "compiling",
"source_dataset": "leetcode",
"date_download": "2024-05-05",
"ori_meta": {
"slug": "two-sum",
"difficulty": "Easy"
}
},
"quality_signals": null,
"license": "MIT",
"content_image": null,
"md": "# Two Sum\n\n- slug: two-sum\n- difficulty: Easy\n\nGiven an array of integers `nums` and an integer `target`, return _indices of the two numbers such that they add up to `target`_.\n\nYou may assume that each input would have **_exactly_ one solution**, and you may not use the _same_ element twice.\n\nYou can return the answer in any order.\n\n**Example 1:**\n\n**Input:** nums = \\[2,7,11,15\\], target = 9\n**Output:** \\[0,1\\]\n**Explanation:** Because nums\\[0\\] + nums\\[1\\] == 9, we return \\[0, 1\\].\n\n**Example 2:**\n\n**Input:** nums = \\[3,2,4\\], target = 6\n**Output:** \\[1,2\\]\n\n**Example 3:**\n\n**Input:** nums = \\[3,3\\], target = 6\n**Output:** \\[0,1\\]\n\n**Constraints:**\n\n* `2 <= nums.length <= 104`\n* `-109 <= nums[i] <= 109`\n* `-109 <= target <= 109`\n* **Only one valid answer exists.**\n\n**Follow-up:** Can you come up with an algorithm that is less than `O(n2)` time complexity?\n\n## A solution in Java\n\n```java\nimport java.util.HashMap;\nimport java.util.Map;\n\npublic int[] twoSum(int[] nums, int target) {\n Map<Integer, Integer> map = new HashMap<>();\n for (int i = 0; i < nums.length; i++) {\n int complement = target - nums[i];\n if (map.containsKey(complement)) {\n return new int[]{map.get(complement), i};\n }\n map.put(nums[i], i);\n }\n throw new IllegalArgumentException(\"No two sum solution\");\n}\n```\nThe algorithm leverages a hash map (unordered_map in C++, HashMap in Java, dictionary in Python, and Map in JavaScript). It iterates through the given 'nums' array and calculates the complementary value (target - current value). If the complementary value is already in the hash map, it means that we found a solution, and we return those indices. If the complement is not in the hash map, we store the current element in the hash map with its index. If the algorithm doesn't find the solution, it returns an empty array or throws an exception (in Java).\n\nThis approach has a time complexity of O(n) and a space complexity of O(n) as well.\n \n\n## A solution in C++\n\n```cpp\n#include <vector>\n#include <unordered_map>\n\nstd::vector<int> twoSum(std::vector<int>& nums, int target) {\n std::unordered_map<int, int> map;\n for (int i = 0; i < nums.size(); i++) {\n int complement = target - nums[i];\n if (map.find(complement) != map.end()) {\n return {map[complement], i};\n }\n map[nums[i]] = i;\n }\n return {};\n}\n```\nThe algorithm leverages a hash map (unordered_map in C++, HashMap in Java, dictionary in Python, and Map in JavaScript). It iterates through the given 'nums' array and calculates the complementary value (target - current value). If the complementary value is already in the hash map, it means that we found a solution, and we return those indices. If the complement is not in the hash map, we store the current element in the hash map with its index. If the algorithm doesn't find the solution, it returns an empty array or throws an exception (in Java).\n\nThis approach has a time complexity of O(n) and a space complexity of O(n) as well.\n \n\n## A solution in Python\n\n```python\ndef twoSum(nums, target):\n map = {}\n for i, num in enumerate(nums):\n complement = target - num\n if complement in map:\n return [map[complement], i]\n map[num] = i\n return []\n```\nThe algorithm leverages a hash map (unordered_map in C++, HashMap in Java, dictionary in Python, and Map in JavaScript). It iterates through the given 'nums' array and calculates the complementary value (target - current value). 
If the complementary value is already in the hash map, it means that we found a solution, and we return those indices. If the complement is not in the hash map, we store the current element in the hash map with its index. If the algorithm doesn't find the solution, it returns an empty array or throws an exception (in Java).\n\nThis approach has a time complexity of O(n) and a space complexity of O(n) as well.\n \n\n## A solution in Javascript\n\n```javascript\nfunction twoSum(nums, target) {\n const map = new Map();\n for (let i = 0; i < nums.length; i++) {\n const complement = target - nums[i];\n if (map.has(complement)) {\n return [map.get(complement), i];\n }\n map.set(nums[i], i);\n }\n return [];\n}\n```\nThe algorithm leverages a hash map (unordered_map in C++, HashMap in Java, dictionary in Python, and Map in JavaScript). It iterates through the given 'nums' array and calculates the complementary value (target - current value). If the complementary value is already in the hash map, it means that we found a solution, and we return those indices. If the complement is not in the hash map, we store the current element in the hash map with its index. If the algorithm doesn't find the solution, it returns an empty array or throws an exception (in Java).\n\nThis approach has a time complexity of O(n) and a space complexity of O(n) as well.\n \n",
"overall_image": "overall_image/1.png"
}
```
### 3.5 An example of linux-cn
```json
{
"id": 8,
"meta": {
"language": "zh",
"doc_id": 134,
"page_id": null,
"oi_exist": true,
"oi_source": "compiling",
"source_dataset": "linux-cn",
"date_download": "2024-05-06",
"ori_meta": {
"title": "Ubuntu 11.04正式发布!",
"author": "",
"fromurl": "",
"summary": "刚才接到的消息,Ubuntu 11.04已经正式发布!\r\n\r\n超快!易用!免费!\r\nUbuntu操作系统为世界上数以百万计的电脑、上网本和服务器提供了动力!\r\nUbuntu可以为你完成各种工作,管理你的文件、打印机、摄像头和MP3!并且它 ...",
"pic": "/data/attachment/album/201104/28/193933lnqqwwwn8l64wbn1.jpg.thumb.jpg",
"largepic": "/data/attachment/album/201104/28/193933lnqqwwwn8l64wbn1.jpg",
"titlepic": false,
"thumb": false,
"islctt": false,
"selector": "",
"translator": "",
"reviewer": "",
"editorchoice": false,
"tags": [
"Ubuntu 11.04",
"发布"
],
"category": "新闻",
"count": {
"commentnum": 0,
"favtimes": 0,
"likes": 0,
"sharetimes": 1,
"viewnum": 6165
},
"comments_data": [
],
"related": [
],
"excerpt": "刚才接到的消息,Ubuntu 11.04已经正式发布!\r\n\r\n超快!易用!免费!\r\nUbuntu操作系统为世界上数以百万计的电脑、上网本和服务器提供了动力!\r\nUbuntu可以为你完成各种工作,管理你的文件、打印机、摄像头和MP3!并且它 ...",
"date": "2011-05-09 13:24:00",
"updated": "2011-05-09 13:24:00",
"id": 134,
"permalink": "/article-134-1.html"
}
},
"quality_signals": null,
"license": "CC-BY-NC-4.0",
"content_image": [
"content_image/album_201104_28_193933lnqqwwwn8l64wbn1.jpg",
"content_image/album_201104_28_193935sy4l3bh4bh1ycbbc.jpg",
"content_image/album_201104_28_193936lyvc36fwv91l1359.jpg",
"content_image/album_201104_28_19393800rpr8pf0s8p8w0s.jpg"
],
"md": "# Ubuntu 11.04正式发布!\n\n刚才接到的消息,Ubuntu 11.04已经正式发布! \n \n 超快!易用!免费! \n Ubuntu操作系统为世界上数以百万计的电脑、上网本和服务器提供了动力! \n Ubuntu可以为你完成各种工作,管理你的文件、打印机、摄像头和MP3!并且它还带有数千个免费程序。 \n \n <img src=\"content_image/album_201104_28_193933lnqqwwwn8l64wbn1.jpg\" alt=\"\" title=\"\"> \n **数千个免费程序** \n \n <img src=\"content_image/album_201104_28_193935sy4l3bh4bh1ycbbc.jpg\" alt=\"\" title=\"\"> \n **终生免费升级** \n \n <img src=\"content_image/album_201104_28_193936lyvc36fwv91l1359.jpg\" alt=\"\" title=\"\"> \n **内建的病毒防护** \n \n <img src=\"content_image/album_201104_28_19393800rpr8pf0s8p8w0s.jpg\" alt=\"\" title=\"\"> \n **云中的音乐** \n \n 下载地址:\n\n\n\n\n> 列表: \n> <http://releases.ubuntu.com/11.04/> \n> 桌面版: \n> <http://www.ubuntu.com/download/ubuntu/download> \n> 服务器版: \n> <http://www.ubuntu.com/download/server/download>\n\n\n\n \n BT种子地址:\n\n\n\n\n> \n> * [ubuntu-11.04-alternate-amd64.iso.torrent](http://releases.ubuntu.com/11.04/ubuntu-11.04-alternate-amd64.iso.torrent)\n> * [ubuntu-11.04-alternate-i386.iso.torrent](http://releases.ubuntu.com/11.04/ubuntu-11.04-alternate-i386.iso.torrent)\n> * [ubuntu-11.04-desktop-amd64.iso.torrent](http://releases.ubuntu.com/11.04/ubuntu-11.04-desktop-amd64.iso.torrent)\n> * [ubuntu-11.04-desktop-i386.iso.torrent](http://releases.ubuntu.com/11.04/ubuntu-11.04-desktop-i386.iso.torrent)\n> * [ubuntu-11.04-netbook-i386.iso.torrent](http://releases.ubuntu.com/11.04/ubuntu-11.04-netbook-i386.iso.torrent)\n> * [ubuntu-11.04-server-amd64.iso.torrent](http://releases.ubuntu.com/11.04/ubuntu-11.04-server-amd64.iso.torrent)\n> * [ubuntu-11.04-server-i386.iso.torrent](http://releases.ubuntu.com/11.04/ubuntu-11.04-server-i386.iso.torrent)\n> \n> \n> \n\n\n\n \n 当前尚无DVD版本出现 \n \n \n \n 该贴已经同步到 [wxy的微博](http://api.t.sina.com.cn/1747813575/statuses/9786340397) \n \n \n \n\n\n \n\n\n*[本文内容由 wxy 提供](thread-7135-1-1.html)*\n \n\n\n\n 已同步至 [wxy的微博](http://api.t.sina.com.cn/1747813575/statuses/10347235925)",
"overall_image": "overall_image/134.png"
}
```
### 3.6 An example of mmc4-core-ff
```json
{
"meta": {
"language": "en",
"oi_exist": true,
"oi_source": "compiling",
"doc_id": 11,
"page_id": 0,
"source_dataset": "mmc4-core-ff",
"source_jsonl": "mmc4-core-ff/docs_no_face_shard_10375_v3.jsonl",
"ori_meta": {
"url": "http://position-light.blogspot.com/2015/06/whats-up-with-reading-and-northern.html",
"text_list": [
"The Position Light: What's Up with the Reading and Northern?",
"The Reading and Northern has been a rare bright spot in the world of signaling.",
"A commitment to its Reading heritage has resulted in numerous signaling structures being preserved along with attempts to install \"classic\" signaling where new signaling is being installed on its mostly unsignaled territory.",
"The R&N also controls the former Conrail Lehigh Line and for one reason or another has decided not to touch the surviving LVRR signaling along that route.",
"Still, I am still not completely clear on the full extent of the R&N's signal preservation efforts as hinted at in a number of photos I have come across.",
"We begin near the town of Mach Chunk where the R&N runs a tourist operation in the Lehigh Gorge.",
"i have bicycles along the right of way a number of time and I never noticed this cantilever mast and its freshly painted (albeit turned) signals.",
"Is this a sign of a new interlocking or signaling project?",
"Pottsville is the location of some preserved Reading signal bridges and a tower.",
"Both have been out of service for decades, but then I find a photo showing what appears to be a lit Reading US&S three headed signal displaying a restricting indication.",
"Could be that the photographer is having some fun with Photoshoppe, or it could be another R&N instance of an \"island\" interlocking designed to eliminate the need for crews to hand throw switches.",
"Clearly I need to take another field trip to the area, but if anyone has any information (or photos) please let me know.",
"Yes, that dual Signal Cantilever was taken from Schuylkill Haven and refurbished and placed into service as part of the new CP COAL Interlocking aptly named for the nearby town of Coalport.",
"This new interlocking controls R&N connector feed track and switch from Nesquehoning Jct onto the NS Lehigh Line.",
"Be aware, that R&N is constructing a new Y connector bridge over the Lehigh River.",
"The switch at Nesquehoning Jct as well at the Y connecting point northwest along the old CNJ into Nesquehoning and the other apex connecting point at the old Lehigh Valley overpass will make up the new Y along with the new bridge.",
"Expect the R&N to make all 3 points new CP Interlockings as NS will also use the new route to get to Reading & Philadelphia directly off the Lehigh Line.",
"Coming attractions for 2016.",
"Also, R&N is talking about a new signaled controlled passing track siding midway between Port Clinton and Reading.",
"Believe they will leverage the siding that's already in place (don't know name of that area, but, between two grade crossings).",
"Could see even more new R&N signaling if Distants are added to the mix as well.",
"Thank you for the information!",
"I knew something was up with them.",
"Mike - Have updates with pics for R&N.",
"Can share them with you but not sure of best way via e-mail or blog address.",
"Can you provide and I can forward what I have?",
"You can drop a line to [email protected] Thanks!"
],
"image_info": [
{
"face_detections": null,
"image_id": "11-0.png",
"image_name": "338146395110.jpg",
"matched_sim": 0.2532651722,
"matched_text_index": 12,
"raw_url": "http://www.railpictures.net/images/d2/6/0/1/6601.1425352225.jpg"
},
{
"face_detections": null,
"image_id": "11-1.png",
"image_name": "75dca5908f72.jpg",
"matched_sim": 0.2665729225,
"matched_text_index": 18,
"raw_url": "http://www.railpictures.net/images/d2/0/3/5/5035.1411414707.jpg"
}
],
"similarity_matrix": [
[
0.2208167017,
0.2216126323,
0.2174896896,
0.2322429568,
0.1835552454,
0.1933521628,
0.1114124805,
0.1734878719,
0.1712893993,
0.1681747884,
0.2151062787,
0.1558438838,
0.2532651722,
0.2029514462,
0.1683746874,
0.1972030103,
0.2269551754,
0.1497862041,
0.2076308429,
0.1459720433,
0.1406365782,
0.1131924018,
0.0637710392,
0.1748069972,
0.1665924788,
0.1288469583,
0.1271829307
],
[
0.2275835425,
0.2447894663,
0.2326766551,
0.2530837059,
0.197981596,
0.1727618128,
0.1842465401,
0.2053450346,
0.2174785137,
0.2176187485,
0.216365099,
0.152155906,
0.2394197732,
0.2332755029,
0.2077463269,
0.2373518944,
0.2454088479,
0.1549753994,
0.2665729225,
0.2099550366,
0.163154155,
0.1208794788,
0.0917887241,
0.1707040668,
0.1544941813,
0.1439596266,
0.1319040358
]
],
"could_have_url_duplicate": 0
},
"date_download": "2024-05-11"
},
"md": "The Position Light: What's Up with the Reading and Northern? The Reading and Northern has been a rare bright spot in the world of signaling. A commitment to its Reading heritage has resulted in numerous signaling structures being preserved along with attempts to install \"classic\" signaling where new signaling is being installed on its mostly unsignaled territory. The R&N also controls the former Conrail Lehigh Line and for one reason or another has decided not to touch the surviving LVRR signaling along that route. Still, I am still not completely clear on the full extent of the R&N's signal preservation efforts as hinted at in a number of photos I have come across. We begin near the town of Mach Chunk where the R&N runs a tourist operation in the Lehigh Gorge. i have bicycles along the right of way a number of time and I never noticed this cantilever mast and its freshly painted (albeit turned) signals. Is this a sign of a new interlocking or signaling project? Pottsville is the location of some preserved Reading signal bridges and a tower. Both have been out of service for decades, but then I find a photo showing what appears to be a lit Reading US&S three headed signal displaying a restricting indication. Could be that the photographer is having some fun with Photoshoppe, or it could be another R&N instance of an \"island\" interlocking designed to eliminate the need for crews to hand throw switches. Clearly I need to take another field trip to the area, but if anyone has any information (or photos) please let me know. Yes, that dual Signal Cantilever was taken from Schuylkill Haven and refurbished and placed into service as part of the new CP COAL Interlocking aptly named for the nearby town of Coalport.\n\n\n\n<img src='content_image/11-0.png'>\n\nThis new interlocking controls R&N connector feed track and switch from Nesquehoning Jct onto the NS Lehigh Line. Be aware, that R&N is constructing a new Y connector bridge over the Lehigh River. The switch at Nesquehoning Jct as well at the Y connecting point northwest along the old CNJ into Nesquehoning and the other apex connecting point at the old Lehigh Valley overpass will make up the new Y along with the new bridge. Expect the R&N to make all 3 points new CP Interlockings as NS will also use the new route to get to Reading & Philadelphia directly off the Lehigh Line. Coming attractions for 2016. Also, R&N is talking about a new signaled controlled passing track siding midway between Port Clinton and Reading.\n\n\n\n<img src='content_image/11-1.png'>\n\nBelieve they will leverage the siding that's already in place (don't know name of that area, but, between two grade crossings). Could see even more new R&N signaling if Distants are added to the mix as well. Thank you for the information! I knew something was up with them. Mike - Have updates with pics for R&N. Can share them wi",
"license": "ODC-BY",
"quality_signals": null,
"content_image": [
"content_image/11-0.png",
"content_image/11-1.png"
],
"overall_image": "overall_image/11-0.png"
}
```
### 3.7 An example of PG19
```json
{
"meta": {
"language": "en",
"oi_exist": true,
"oi_source": "compiling",
"doc_id": 871,
"page_id": 0,
"source_dataset": "pg19",
"split": "train",
"ori_meta": {
"url": "http://www.gutenberg.org/ebooks/9304",
"short_book_title": "Initiation into Philosophy by Emile Faguet",
"publication_date": 1914
},
"date_download": "2024-05-10"
},
"md": "# Initiation into Philosophy by Emile Faguet \n\n Produced by Ted Garvin, Thomas Hutchinson and PG Distributed Proofreaders \n\n \n\n \n\n \n\n \n\n INITIATION INTO PHILOSOPHY \n\n \nBy Emile Faguet \n\n Of the French Academy \n\n \nAuthor of \"The Cult Of Incompetence,\" \"Initiation Into Literature,\" etc. \n\n \nTranslated from the French by Sir Homer Gordon, Bart. \n\n 1914 \n\n \n\n \nPREFACE \n\n This volume, as indicated by the title, is designed to show the way to the beginner, to satisfy and more espec ially to excite his initial curiosity. It affords an adequate idea of the march of facts and of ideas. The rea der is led, somewhat rapidly, from the remote origins to the most recent efforts of the human mind. \n\n It should be a convenient repertory to which the mind may revert in order to see broadly the general opinion o f an epoch--and what connected it with those that followed or preceded it. It aims above all at being _a frame _ in which can conveniently be inscribed, in the course of further studies, new conceptions more detailed and more thoroughly examined. \n\n It will have fulfilled its design should it incite to research and meditation, and if it prepares for them cor rectly. \n\n E. FAGUET. \n\n \n\n \nCONTENTS \n\n \nPART I ANTIQUITY \n\n \nCHAPTER I BEFORE SOCRATES \n\n Philosophical Interpreters of the Universe, of the Creation and Constitution of the World. \n\n \nCHAPTER II THE SOPHISTS \n\n Logicians and Professors of Logic, and of the Analysis of Ideas, and of Discussion. \n\n \nCHAPTER III SOCRATES \n\n Philosophy Entirely Reduced to Morality, and Morality Considered as the End of all Intellectual Activity. \n\n \nCHAPTER IV PLATO \n\n Plato, like Socrates, is Pre-eminently a Moralist, but he Reverts to General Consideration of the Universe, an d Deals with Politics and Legislation. \n\n \nCHAPTER V ARISTOTLE",
"license": "Apache 2.0",
"quality_signals": null,
"content_image": null,
"overall_image": "overall_image/871-0.png"
}
```
### 3.8 An example of PIN-PMC
```json
{
"meta": {
"language": "en",
"doc_id": "PMC3015258",
"oi_exist": true,
"oi_source": "ori",
"source_dataset": "PIN-PMC",
"ori_meta": null,
"page_id": null,
"date_download": "2024-05-28"
},
"md": "# A Simple Stereoscopic Endoscope\n\n## Abstract\n\nA very simple method is described for producing and viewing stereoscopic endoscopic images.\nThe addition of two simple prisms to the end of a conventional television-monitored endoscope with a simple viewing device produces a stereoscopic endoscope which appears to be suitable for surgical use......",
"license": [
"https://www.ncbi.nlm.nih.gov/pmc/tools/textmining/"
],
"quality_signals": {
"doc_length": 8269
},
"content_image": [
"content_image/PMC3015258/jsls-2-1-67-g03.jpg",
"content_image/PMC3015258/jsls-2-1-67-g04.jpg",
"content_image/PMC3015258/jsls-2-1-67-g01.jpg",
"content_image/PMC3015258/jsls-2-1-67-g02.jpg",
"content_image/PMC3015258/jsls-2-1-67-g05.jpg"
],
"overall_image": [
"overall_image/PMC3015258/jsls-2-1-67_3.png",
"overall_image/PMC3015258/jsls-2-1-67_0.png",
"overall_image/PMC3015258/jsls-2-1-67_1.png",
"overall_image/PMC3015258/jsls-2-1-67_2.png"
],
"id": 60827
}
```
## 4 License
For data generated or produced by us, please adhere to the Apache 2.0 License.
For data sourced from third parties, compliance with the respective third-party licenses is required.
## Citation
```
@misc{2406.13923,
Author = {Junjie Wang and Yin Zhang and Yatai Ji and Yuxiang Zhang and Chunyang Jiang and Yubo Wang and Kang Zhu and Zekun Wang and Tiezhen Wang and Wenhao Huang and Jie Fu and Bei Chen and Qunshu Lin and Minghao Liu and Ge Zhang and Wenhu Chen},
Title = {PIN: A Knowledge-Intensive Dataset for Paired and Interleaved Multimodal Documents},
Year = {2024},
Eprint = {arXiv:2406.13923},
}
``` |
MoreTorque/rlogs | MoreTorque | "2024-09-06T18:18:47Z" | 28,897 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-22T21:22:44Z" | ---
license: apache-2.0
---
## Dataset Overview
This dataset consists of bz2 compressed rlogs captured from OpenPilot. These logs do not contain any user-sensitive information.
The primary purpose of this dataset is to facilitate data-driven solutions for improving lateral and longitudinal control in community-supported vehicles.
## Contents
The rlogs are composed of various events as defined in the OpenPilot log schema.
To ensure user privacy, many event types have been excluded. The following is the list of events that remain in the log files:
```
* CarState
* LiveParameters
* CarControl
* LateralPlanDEPRECATED
* CarOutput
* CarParams
* ModelV2
* LiveTorqueParameters
* LiveCalibration
* Sendcan
* Can
* LongitudinalPlan
* LiveLocationKalman (Only OrientationNED, CalibratedOrientationNED, and AngularVelocityCalibrated have been kept)
```
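The logs are plain bz2 streams, so decompressing them needs only the standard library; the sketch below (filename hypothetical) stops at the raw bytes, since decoding the serialized events requires the openpilot log schema:
```python
import bz2

# Hypothetical filename; each file in the dataset is a bz2-compressed rlog segment.
with bz2.open("rlog.bz2", "rb") as f:
    raw = f.read()

print(f"{len(raw)} bytes of decompressed log data")
``` |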
zy1111/test | zy1111 | "2024-10-15T08:34:34Z" | 28,847 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-09-27T07:03:45Z" | ---
license: apache-2.0
---
|
HuggingFaceH4/MATH-500 | HuggingFaceH4 | "2024-11-15T13:36:00Z" | 28,502 | 96 | [
"task_categories:text-generation",
"language:en",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | "2024-11-15T13:26:48Z" | ---
task_categories:
- text-generation
language:
- en
pretty_name: MATH-500
---
# Dataset Card for MATH-500
<!-- Provide a quick summary of the dataset. -->
This dataset contains the 500-problem subset of the MATH benchmark that OpenAI created for their _Let's Verify Step by Step_ paper. See their GitHub repo for the source file: https://github.com/openai/prm800k/tree/main?tab=readme-ov-file#math-splits
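As a minimal loading sketch (🤗 `datasets` assumed installed; split and column names are printed rather than assumed):
```python
from datasets import load_dataset

# Load all available splits and inspect the schema of each one.
dataset = load_dataset("HuggingFaceH4/MATH-500")
for split_name, split in dataset.items():
    print(split_name, len(split), split.column_names)
``` |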
GEM/wiki_lingua | GEM | "2023-02-16T09:23:29Z" | 28,415 | 48 | [
"task_categories:summarization",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:multilingual",
"source_datasets:original",
"language:ar",
"language:cs",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:hi",
"language:id",
"language:it",
"language:ja",
"language:ko",
"language:nl",
"language:pt",
"language:ru",
"language:th",
"language:tr",
"language:vi",
"language:zh",
"license:cc-by-nc-sa-3.0",
"region:us"
] | [
"summarization"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- ar
- cs
- de
- en
- es
- fr
- hi
- id
- it
- ja
- ko
- nl
- pt
- ru
- th
- tr
- vi
- zh
license:
- cc-by-nc-sa-3.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
task_ids: []
pretty_name: wiki_lingua
---
# Dataset Card for GEM/wiki_lingua
## Dataset Description
- **Homepage:** None (See Repository)
- **Repository:** https://github.com/esdurmus/Wikilingua
- **Paper:** https://www.aclweb.org/anthology/2020.findings-emnlp.360/
- **Leaderboard:** N/A
- **Point of Contact:** Faisal Ladhak, Esin Durmus
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/wiki_lingua).
### Dataset Summary
WikiLingua is a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems, built from article and summary pairs in 18 languages extracted from WikiHow.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/wiki_lingua')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/wiki_lingua).
#### website
None (See Repository)
#### paper
https://www.aclweb.org/anthology/2020.findings-emnlp.360/
#### authors
Faisal Ladhak (Columbia University), Esin Durmus (Stanford University), Claire Cardie (Cornell University), Kathleen McKeown (Columbia University)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
None (See Repository)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
https://github.com/esdurmus/Wikilingua
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
https://www.aclweb.org/anthology/2020.findings-emnlp.360/
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
@inproceedings{ladhak-etal-2020-wikilingua,
title = "{W}iki{L}ingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization",
author = "Ladhak, Faisal and
Durmus, Esin and
Cardie, Claire and
McKeown, Kathleen",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.360",
doi = "10.18653/v1/2020.findings-emnlp.360",
pages = "4034--4048",
abstract = "We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article. As a set of baselines for further studies, we evaluate the performance of existing cross-lingual abstractive summarization methods on our dataset. We further propose a method for direct cross-lingual summarization (i.e., without requiring translation at inference time) by leveraging synthetic data and Neural Machine Translation as a pre-training step. Our method significantly outperforms the baseline approaches, while being more cost efficient during inference.",
}
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Faisal Ladhak, Esin Durmus
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
[email protected], [email protected]
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
Dataset does not have multiple dialects per language.
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`, `Spanish, Castilian`, `Portuguese`, `French`, `German`, `Russian`, `Italian`, `Indonesian`, `Dutch, Flemish`, `Arabic`, `Chinese`, `Vietnamese`, `Thai`, `Japanese`, `Korean`, `Hindi`, `Czech`, `Turkish`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
No information about the user demographic is available.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-nc-sa-3.0: Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0)
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The dataset was intended to serve as a large-scale, high-quality benchmark dataset for cross-lingual summarization.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Produce a high quality summary for the given input article.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Columbia University
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Faisal Ladhak (Columbia University), Esin Durmus (Stanford University), Claire Cardie (Cornell University), Kathleen McKeown (Columbia University)
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Jenny Chim (Queen Mary University of London), Faisal Ladhak (Columbia University)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `gem_id` -- The id for the data instance.
- `source_language` -- The language of the source article.
- `target_language` -- The language of the target summary.
- `source` -- The source document.
- `target` -- The target summary.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
{
"gem_id": "wikilingua_crosslingual-train-12345",
"gem_parent_id": "wikilingua_crosslingual-train-12345",
"source_language": "fr",
"target_language": "de",
"source": "Document in fr",
"target": "Summary in de",
}
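As a hedged usage sketch that mirrors the loading call shown earlier in this card (field names taken from the example instance above; split names are printed rather than assumed):
```python
import datasets

# Same call as in the Dataset Summary section above.
data = datasets.load_dataset('GEM/wiki_lingua')

# Inspect the available splits and the fields of one instance.
for split_name, split in data.items():
    print(split_name, len(split))

example = next(iter(data.values()))[0]
print(example["source_language"], "->", example["target_language"])
print(example["source"][:100])
```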
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
The data is split into train/dev/test. In addition to the full test set, there's also a sampled version of the test set.
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The data was split to ensure the same document would appear in the same split across languages so as to ensure there's no leakage into the test set.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
This dataset provides a large-scale, high-quality resource for cross-lingual summarization in 18 languages, increasing the coverage of languages for the GEM summarization task.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
yes
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
XSum covers English news articles, and MLSum covers news articles in German and Spanish.
In contrast, this dataset has how-to articles in 18 languages, substantially increasing the languages covered. Moreover, it also provides a different domain than the other two datasets.
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
The ability to generate quality summaries across multiple languages.
### GEM-Specific Curation
#### Modificatied for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to he original dataset? -->
<!-- scope: periscope -->
`other`
#### Modification Details
<!-- info: For each of these changes, described them in more details and provided the intended purpose of the modification -->
<!-- scope: microscope -->
Previous version had separate data loaders for each language. In this version, we've created a single monolingual data loader, which contains monolingual data in each of the 18 languages. In addition, we've also created a single cross-lingual data loader across all the language pairs in the dataset.
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Ability to summarize content across different languages.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
ROUGE is used to measure content selection by comparing word overlap with reference summaries. In addition, the authors of the dataset also used human evaluation to evaluate content selection and fluency of the systems.
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The dataset was created in order to enable new approaches for cross-lingual and multilingual summarization, which are currently understudied as well as open up inetersting new directions for research in summarization. E.g., exploration of multi-source cross-lingual architectures, i.e. models that can summarize from multiple source languages into a target language, building models that can summarize articles from any language to any other language for a given set of languages.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
Given an input article, produce a high quality summary of the article in the target language.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
WikiHow, which is an online resource of how-to guides (written and reviewed by human authors), is used as the data source.
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The articles cover 19 broad categories including health, arts and entertainment, personal care and style, travel, education and communications, etc. The categories cover a broad set of genres and topics.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Consent Policy Details
<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
(1) Text Content. All text posted by Users to the Service is sub-licensed by wikiHow to other Users under a Creative Commons license as provided herein. The Creative Commons license allows such text content be used freely for non-commercial purposes, so long as it is used and attributed to the original author as specified under the terms of the license. Allowing free republication of our articles helps wikiHow achieve its mission by providing instruction on solving the problems of everyday life to more people for free. In order to support this goal, wikiHow hereby grants each User of the Service a license to all text content that Users contribute to the Service under the terms and conditions of a Creative Commons CC BY-NC-SA 3.0 License. Please be sure to read the terms of the license carefully. You continue to own all right, title, and interest in and to your User Content, and you are free to distribute it as you wish, whether for commercial or non-commercial purposes.
#### Other Consented Downstream Use
<!-- info: What other downstream uses of the data did the original data creators and the data curators consent to? -->
<!-- scope: microscope -->
The data is made freely available under the Creative Commons license; therefore, there are no restrictions on downstream uses as long as they are non-commercial.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
Only the article text and summaries were collected. No user information was retained in the dataset.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? -->
<!-- scope: telescope -->
yes - other datasets featuring the same task
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
yes
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`non-commercial use only`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`non-commercial use only`
### Known Technical Limitations
|
Major-TOM/Core-S2L2A | Major-TOM | "2024-11-12T17:16:03Z" | 28,243 | 57 | [
"license:cc-by-sa-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:tabular",
"modality:text",
"modality:geospatial",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2402.12095",
"region:us",
"earth-observation",
"remote-sensing",
"sentinel-2",
"multi-spectral",
"satellite",
"geospatial"
] | null | "2024-02-23T13:21:38Z" | ---
license: cc-by-sa-4.0
tags:
- earth-observation
- remote-sensing
- sentinel-2
- multi-spectral
- satellite
- geospatial
size_categories:
- 1M<n<10M
dataset_info:
- config_name: default
features:
- name: product_id
dtype: string
- name: grid_cell
dtype: string
- name: product_datetime
dtype: string
- name: thumbnail
dtype: image
- name: B01
dtype: binary
- name: B02
dtype: binary
- name: B03
dtype: binary
- name: B04
dtype: binary
- name: B05
dtype: binary
- name: B06
dtype: binary
- name: B07
dtype: binary
- name: B08
dtype: binary
- name: B8A
dtype: binary
- name: B09
dtype: binary
- name: B11
dtype: binary
- name: B12
dtype: binary
- name: cloud_mask
dtype: binary
configs:
- config_name: default
data_files: images/*.parquet
- config_name: metadata
data_files: metadata.parquet
---
# Core-S2L2A
Contains global coverage of Sentinel-2 (Level 2A) patches, each of size 1,068 x 1,068 pixels.
| Source | Sensing Type | Number of Patches | Patch Size | Total Pixels |
|--------|--------------|-------------------|------------|--------------|
|Sentinel-2 Level-2A |Optical Multispectral|2,245,886|1,068 x 1,068 (10 m) | > 2.564 Trillion |
## Content
| Column | Details | Resolution |
|--------|---------|------------|
| B01 | Coastal aerosol, 442.7 nm (S2A), 442.3 nm (S2B) | 60m |
| B02 | Blue, 492.4 nm (S2A), 492.1 nm (S2B) | 10m |
| B03 | Green, 559.8 nm (S2A), 559.0 nm (S2B) | 10m |
| B04 | Red, 664.6 nm (S2A), 665.0 nm (S2B) | 10m |
| B05 | Vegetation red edge, 704.1 nm (S2A), 703.8 nm (S2B) | 20m |
| B06 | Vegetation red edge, 740.5 nm (S2A), 739.1 nm (S2B) | 20m |
| B07 | Vegetation red edge, 782.8 nm (S2A), 779.7 nm (S2B) | 20m |
| B08 | NIR, 832.8 nm (S2A), 833.0 nm (S2B) | 10m |
| B8A | Narrow NIR, 864.7 nm (S2A), 864.0 nm (S2B) | 20m |
| B09 | Water vapour, 945.1 nm (S2A), 943.2 nm (S2B) | 60m |
| B11 | SWIR, 1613.7 nm (S2A), 1610.4 nm (S2B) | 20m |
| B12 | SWIR, 2202.4 nm (S2A), 2185.7 nm (S2B) | 20m |
| cloud_mask | Cloud Mask produced by SEnSeI | 10m |
| thumbnail | RGB composite [B04, B03, B02] saved as png | 10m |
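For quick exploration, the dataset can also be streamed with the `datasets` library; the column names match the table above. A minimal sketch (assuming the default split name `train` for this configuration):
```python
from datasets import load_dataset

# Stream the default config so that no parquet files have to be downloaded up front.
ds = load_dataset("Major-TOM/Core-S2L2A", split="train", streaming=True)

sample = next(iter(ds))
print(sample["grid_cell"], sample["product_datetime"])
thumbnail = sample["thumbnail"]   # decoded to a PIL image by the Image feature
b04_bytes = sample["B04"]         # raw band bytes (see the decoding sketch further below)
```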
## Spatial Coverage
This is a global monotemporal dataset. Nearly every piece of Earth captured by Sentinel-2 is contained at least once in this dataset (and only once, excluding some marginal overlaps).
The following figure demonstrates the spatial coverage (black pixels mark the only areas that are not covered):

## Example Use
Interface scripts are available at https://github.com/ESA-PhiLab/Major-TOM
Here's a sneak peek with a thumbnail image:
```python
from fsspec.parquet import open_parquet_file
import pyarrow.parquet as pq
from io import BytesIO
from PIL import Image

PARQUET_FILE = 'part_03900'  # parquet number
ROW_INDEX = 42  # row number (about 500 per parquet)

url = "https://huggingface.co/datasets/Major-TOM/Core-S2L2A/resolve/main/images/{}.parquet".format(PARQUET_FILE)
with open_parquet_file(url, columns=["thumbnail"]) as f:
    with pq.ParquetFile(f) as pf:
        first_row_group = pf.read_row_group(ROW_INDEX, columns=['thumbnail'])

stream = BytesIO(first_row_group['thumbnail'][0].as_py())
image = Image.open(stream)
```
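Individual spectral bands can be fetched with the same parquet access pattern. The sketch below additionally assumes that the band columns hold GeoTIFF-encoded blobs (an assumption — the interface scripts linked above are the authoritative reference) and decodes one band with `rasterio`:
```python
from fsspec.parquet import open_parquet_file
import pyarrow.parquet as pq
from rasterio.io import MemoryFile

PARQUET_FILE = 'part_03900'  # parquet number
ROW_INDEX = 42               # index used with read_row_group, as in the thumbnail example above

url = "https://huggingface.co/datasets/Major-TOM/Core-S2L2A/resolve/main/images/{}.parquet".format(PARQUET_FILE)
with open_parquet_file(url, columns=["B04"]) as f:
    with pq.ParquetFile(f) as pf:
        row_group = pf.read_row_group(ROW_INDEX, columns=['B04'])

band_bytes = row_group['B04'][0].as_py()
with MemoryFile(band_bytes) as memfile:   # assumes a GeoTIFF-encoded payload
    with memfile.open() as src:
        b04 = src.read(1)                 # 2D array of pixel values for band B04
print(b04.shape)
```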
## Cite
[](https://arxiv.org/abs/2402.12095/)
```latex
@inproceedings{Major_TOM,
title={Major TOM: Expandable Datasets for Earth Observation},
author={Alistair Francis and Mikolaj Czerkawski},
year={2024},
booktitle={IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium},
eprint={2402.12095},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
Powered by [Φ-lab, European Space Agency (ESA) 🛰️](https://huggingface.co/ESA-philab) |
mlfoundations/dclm-pool-1b-5x | mlfoundations | "2024-06-22T05:50:04Z" | 28,114 | 1 | [
"license:cc-by-4.0",
"region:us"
] | null | "2024-06-12T04:26:45Z" | ---
license: cc-by-4.0
--- |
tatsu-lab/alpaca_eval | tatsu-lab | "2024-08-16T23:42:12Z" | 27,040 | 54 | [
"license:cc-by-nc-4.0",
"region:us"
] | null | "2023-05-29T00:12:59Z" | ---
license: cc-by-nc-4.0
---
|
fixie-ai/covost2 | fixie-ai | "2024-08-27T20:58:08Z" | 26,838 | 1 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-16T23:40:52Z" | ---
dataset_info:
- config_name: ar_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 61607709.192
num_examples: 2283
- name: validation
num_bytes: 56223234.024
num_examples: 1758
- name: test
num_bytes: 54650910.41
num_examples: 1695
download_size: 160468333
dataset_size: 172481853.626
- config_name: ca_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 4397026262.322
num_examples: 95854
- name: validation
num_bytes: 544108371.96
num_examples: 12730
- name: test
num_bytes: 604755238.63
num_examples: 12730
download_size: 4957773433
dataset_size: 5545889872.912
- config_name: cy_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 51478765.087
num_examples: 1241
- name: validation
num_bytes: 26992697.0
num_examples: 690
- name: test
num_bytes: 28772216.0
num_examples: 690
download_size: 102604972
dataset_size: 107243678.087
- config_name: de_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 5680326209.222
num_examples: 127834
- name: validation
num_bytes: 631442490.202
num_examples: 13511
- name: test
num_bytes: 637042944.685
num_examples: 13511
download_size: 6490850158
dataset_size: 6948811644.108999
- config_name: en_ar
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14105902817.18
num_examples: 289430
- name: validation
num_bytes: 718527564.808
num_examples: 15531
- name: test
num_bytes: 729114452.301
num_examples: 15531
download_size: 13815709729
dataset_size: 15553544834.289001
- config_name: en_ca
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14099092976.18
num_examples: 289430
- name: validation
num_bytes: 718171719.808
num_examples: 15531
- name: test
num_bytes: 728790610.301
num_examples: 15531
download_size: 13814365593
dataset_size: 15546055306.289001
- config_name: en_cy
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14098487703.18
num_examples: 289430
- name: validation
num_bytes: 718141953.808
num_examples: 15531
- name: test
num_bytes: 728793811.301
num_examples: 15531
download_size: 13813953593
dataset_size: 15545423468.289001
- config_name: en_de
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14099886814.18
num_examples: 289430
- name: validation
num_bytes: 718219105.808
num_examples: 15531
- name: test
num_bytes: 728857067.301
num_examples: 15531
download_size: 13815103686
dataset_size: 15546962987.289001
- config_name: en_et
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14096877545.18
num_examples: 289430
- name: validation
num_bytes: 718057559.808
num_examples: 15531
- name: test
num_bytes: 728710692.301
num_examples: 15531
download_size: 13813410823
dataset_size: 15543645797.289001
- config_name: en_fa
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14108661241.18
num_examples: 289430
- name: validation
num_bytes: 718670909.808
num_examples: 15531
- name: test
num_bytes: 729271000.301
num_examples: 15531
download_size: 13816798013
dataset_size: 15556603151.289001
- config_name: en_id
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14098627451.18
num_examples: 289430
- name: validation
num_bytes: 718144327.808
num_examples: 15531
- name: test
num_bytes: 728802322.301
num_examples: 15531
download_size: 13813201260
dataset_size: 15545574101.289001
- config_name: en_ja
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14103911774.18
num_examples: 289430
- name: validation
num_bytes: 718409304.808
num_examples: 15531
- name: test
num_bytes: 729050991.301
num_examples: 15531
download_size: 13815875328
dataset_size: 15551372070.289001
- config_name: en_lv
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14098703097.18
num_examples: 289430
- name: validation
num_bytes: 718152571.808
num_examples: 15531
- name: test
num_bytes: 728792572.301
num_examples: 15531
download_size: 13814849886
dataset_size: 15545648241.289001
- config_name: en_mn
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14113120657.18
num_examples: 289430
- name: validation
num_bytes: 718940418.808
num_examples: 15531
- name: test
num_bytes: 729461016.301
num_examples: 15531
download_size: 13819427515
dataset_size: 15561522092.289001
- config_name: en_sl
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14097158381.18
num_examples: 289430
- name: validation
num_bytes: 718085673.808
num_examples: 15531
- name: test
num_bytes: 728705188.301
num_examples: 15531
download_size: 13813603812
dataset_size: 15543949243.289001
- config_name: en_sv-SE
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14097728051.18
num_examples: 289430
- name: validation
num_bytes: 718093292.808
num_examples: 15531
- name: test
num_bytes: 728747422.301
num_examples: 15531
download_size: 13813332908
dataset_size: 15544568766.289001
- config_name: en_ta
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14135489205.18
num_examples: 289430
- name: validation
num_bytes: 720191394.808
num_examples: 15531
- name: test
num_bytes: 730578783.301
num_examples: 15531
download_size: 13825121271
dataset_size: 15586259383.289001
- config_name: en_tr
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14098644786.18
num_examples: 289430
- name: validation
num_bytes: 718161996.808
num_examples: 15531
- name: test
num_bytes: 728786654.301
num_examples: 15531
download_size: 13814279798
dataset_size: 15545593437.289001
- config_name: en_zh-CN
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 14095661460.18
num_examples: 289430
- name: validation
num_bytes: 717982705.808
num_examples: 15531
- name: test
num_bytes: 728655191.301
num_examples: 15531
download_size: 13812699892
dataset_size: 15542299357.289001
- config_name: es_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 630615357.241
num_examples: 13221
- name: test
num_bytes: 666447063.067
num_examples: 13221
- name: train
num_bytes: 3769457359.8
num_examples: 79015
download_size: 4531969416
dataset_size: 5066519780.108
- config_name: et_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 97124727.544
num_examples: 1782
- name: validation
num_bytes: 80290798.168
num_examples: 1576
- name: test
num_bytes: 81970364.51
num_examples: 1571
download_size: 257604448
dataset_size: 259385890.222
- config_name: fa_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1872724297.149
num_examples: 53949
- name: validation
num_bytes: 140067911.23
num_examples: 3445
- name: test
num_bytes: 149319550.35
num_examples: 3445
download_size: 1679853440
dataset_size: 2162111758.729
- config_name: fr_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 632191608.84
num_examples: 14760
- name: test
num_bytes: 698178059.08
num_examples: 14760
- name: train
num_bytes: 8128016830.77
num_examples: 207374
download_size: 8900934523
dataset_size: 9458386498.69
- config_name: id_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 36136135.768
num_examples: 1243
- name: validation
num_bytes: 25058845.0
num_examples: 792
- name: test
num_bytes: 26577467.0
num_examples: 844
download_size: 86110062
dataset_size: 87772447.768
- config_name: it_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 1517510665.568
num_examples: 31698
- name: validation
num_bytes: 422409218.1
num_examples: 8940
- name: test
num_bytes: 454569171.595
num_examples: 8951
download_size: 2125529183
dataset_size: 2394489055.2630005
- config_name: ja_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 38181610.624
num_examples: 1119
- name: validation
num_bytes: 24623052.0
num_examples: 635
- name: test
num_bytes: 25558787.0
num_examples: 684
download_size: 88228548
dataset_size: 88363449.624
- config_name: lv_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 66152116.328
num_examples: 2337
- name: validation
num_bytes: 32655276.0
num_examples: 1125
- name: test
num_bytes: 50997551.638
num_examples: 1629
download_size: 137700207
dataset_size: 149804943.96600002
- config_name: mn_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 87891433.547
num_examples: 2067
- name: validation
num_bytes: 77519039.943
num_examples: 1761
- name: test
num_bytes: 83667460.167
num_examples: 1759
download_size: 242638800
dataset_size: 249077933.657
- config_name: nl_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 216102081.4
num_examples: 7108
- name: validation
num_bytes: 55386349.319
num_examples: 1699
- name: test
num_bytes: 60219179.711
num_examples: 1699
download_size: 320267264
dataset_size: 331707610.43
- config_name: pt_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 274723273.528
num_examples: 9158
- name: validation
num_bytes: 118345891.704
num_examples: 3318
- name: test
num_bytes: 166247624.001
num_examples: 4023
download_size: 540891735
dataset_size: 559316789.233
- config_name: ru_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 557219472.672
num_examples: 12112
- name: validation
num_bytes: 290218427.6
num_examples: 6110
- name: test
num_bytes: 312622838.0
num_examples: 6300
download_size: 1112848246
dataset_size: 1160060738.272
- config_name: sl_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 55992153.0
num_examples: 1843
- name: validation
num_bytes: 15074155.0
num_examples: 509
- name: test
num_bytes: 10209711.0
num_examples: 360
download_size: 83863293
dataset_size: 81276019.0
- config_name: sv-SE_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 48298330.64
num_examples: 2160
- name: validation
num_bytes: 32544646.416
num_examples: 1349
- name: test
num_bytes: 46894324.615
num_examples: 1595
download_size: 121860373
dataset_size: 127737301.671
- config_name: ta_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 47757197.616
num_examples: 1358
- name: validation
num_bytes: 13670695.0
num_examples: 384
- name: test
num_bytes: 29891516.0
num_examples: 786
download_size: 87791516
dataset_size: 91319408.616
- config_name: tr_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 119299427.798
num_examples: 3966
- name: validation
num_bytes: 52552534.232
num_examples: 1624
- name: test
num_bytes: 59106253.862
num_examples: 1629
download_size: 224018260
dataset_size: 230958215.89200002
- config_name: zh-CN_en
features:
- name: client_id
dtype: string
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
- name: translation
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 231018998.33
num_examples: 4843
- name: test
num_bytes: 243850956.45
num_examples: 4898
- name: train
num_bytes: 341425113.6
num_examples: 7085
download_size: 766660661
dataset_size: 816295068.38
configs:
- config_name: ar_en
data_files:
- split: train
path: ar_en/train-*
- split: validation
path: ar_en/validation-*
- split: test
path: ar_en/test-*
- config_name: ca_en
data_files:
- split: train
path: ca_en/train-*
- split: validation
path: ca_en/validation-*
- split: test
path: ca_en/test-*
- config_name: cy_en
data_files:
- split: train
path: cy_en/train-*
- split: validation
path: cy_en/validation-*
- split: test
path: cy_en/test-*
- config_name: de_en
data_files:
- split: train
path: de_en/train-*
- split: validation
path: de_en/validation-*
- split: test
path: de_en/test-*
- config_name: en_ar
data_files:
- split: train
path: en_ar/train-*
- split: validation
path: en_ar/validation-*
- split: test
path: en_ar/test-*
- config_name: en_ca
data_files:
- split: train
path: en_ca/train-*
- split: validation
path: en_ca/validation-*
- split: test
path: en_ca/test-*
- config_name: en_cy
data_files:
- split: train
path: en_cy/train-*
- split: validation
path: en_cy/validation-*
- split: test
path: en_cy/test-*
- config_name: en_de
data_files:
- split: train
path: en_de/train-*
- split: validation
path: en_de/validation-*
- split: test
path: en_de/test-*
- config_name: en_et
data_files:
- split: train
path: en_et/train-*
- split: validation
path: en_et/validation-*
- split: test
path: en_et/test-*
- config_name: en_fa
data_files:
- split: train
path: en_fa/train-*
- split: validation
path: en_fa/validation-*
- split: test
path: en_fa/test-*
- config_name: en_id
data_files:
- split: train
path: en_id/train-*
- split: validation
path: en_id/validation-*
- split: test
path: en_id/test-*
- config_name: en_ja
data_files:
- split: train
path: en_ja/train-*
- split: validation
path: en_ja/validation-*
- split: test
path: en_ja/test-*
- config_name: en_lv
data_files:
- split: train
path: en_lv/train-*
- split: validation
path: en_lv/validation-*
- split: test
path: en_lv/test-*
- config_name: en_mn
data_files:
- split: train
path: en_mn/train-*
- split: validation
path: en_mn/validation-*
- split: test
path: en_mn/test-*
- config_name: en_sl
data_files:
- split: train
path: en_sl/train-*
- split: validation
path: en_sl/validation-*
- split: test
path: en_sl/test-*
- config_name: en_sv-SE
data_files:
- split: train
path: en_sv-SE/train-*
- split: validation
path: en_sv-SE/validation-*
- split: test
path: en_sv-SE/test-*
- config_name: en_ta
data_files:
- split: train
path: en_ta/train-*
- split: validation
path: en_ta/validation-*
- split: test
path: en_ta/test-*
- config_name: en_tr
data_files:
- split: train
path: en_tr/train-*
- split: validation
path: en_tr/validation-*
- split: test
path: en_tr/test-*
- config_name: en_zh-CN
data_files:
- split: train
path: en_zh-CN/train-*
- split: validation
path: en_zh-CN/validation-*
- split: test
path: en_zh-CN/test-*
- config_name: es_en
data_files:
- split: validation
path: es_en/validation-*
- split: test
path: es_en/test-*
- split: train
path: es_en/train-*
- config_name: et_en
data_files:
- split: train
path: et_en/train-*
- split: validation
path: et_en/validation-*
- split: test
path: et_en/test-*
- config_name: fa_en
data_files:
- split: train
path: fa_en/train-*
- split: validation
path: fa_en/validation-*
- split: test
path: fa_en/test-*
- config_name: fr_en
data_files:
- split: validation
path: fr_en/validation-*
- split: test
path: fr_en/test-*
- split: train
path: fr_en/train-*
- config_name: id_en
data_files:
- split: train
path: id_en/train-*
- split: validation
path: id_en/validation-*
- split: test
path: id_en/test-*
- config_name: it_en
data_files:
- split: train
path: it_en/train-*
- split: validation
path: it_en/validation-*
- split: test
path: it_en/test-*
- config_name: ja_en
data_files:
- split: train
path: ja_en/train-*
- split: validation
path: ja_en/validation-*
- split: test
path: ja_en/test-*
- config_name: lv_en
data_files:
- split: train
path: lv_en/train-*
- split: validation
path: lv_en/validation-*
- split: test
path: lv_en/test-*
- config_name: mn_en
data_files:
- split: train
path: mn_en/train-*
- split: validation
path: mn_en/validation-*
- split: test
path: mn_en/test-*
- config_name: nl_en
data_files:
- split: train
path: nl_en/train-*
- split: validation
path: nl_en/validation-*
- split: test
path: nl_en/test-*
- config_name: pt_en
data_files:
- split: train
path: pt_en/train-*
- split: validation
path: pt_en/validation-*
- split: test
path: pt_en/test-*
- config_name: ru_en
data_files:
- split: train
path: ru_en/train-*
- split: validation
path: ru_en/validation-*
- split: test
path: ru_en/test-*
- config_name: sl_en
data_files:
- split: train
path: sl_en/train-*
- split: validation
path: sl_en/validation-*
- split: test
path: sl_en/test-*
- config_name: sv-SE_en
data_files:
- split: train
path: sv-SE_en/train-*
- split: validation
path: sv-SE_en/validation-*
- split: test
path: sv-SE_en/test-*
- config_name: ta_en
data_files:
- split: train
path: ta_en/train-*
- split: validation
path: ta_en/validation-*
- split: test
path: ta_en/test-*
- config_name: tr_en
data_files:
- split: train
path: tr_en/train-*
- split: validation
path: tr_en/validation-*
- split: test
path: tr_en/test-*
- config_name: zh-CN_en
data_files:
- split: validation
path: zh-CN_en/validation-*
- split: test
path: zh-CN_en/test-*
- split: train
path: zh-CN_en/train-*
---
This is a partial copy of the [CoVoST2](https://huggingface.co/datasets/facebook/covost2) dataset.
The main difference is that the audio data is included in the dataset, which makes usage easier and allows browsing the samples in the HF Dataset Viewer.
The limitation of this approach is that the audio samples of the `EN_XX` subsets are duplicated, so the overall dataset is larger.
Consequently, not all of the data is included: only the `validation` and `test` subsets are available.
From the `XX_EN` subsets, only `fr`, `es`, and `zh-CN` are included.
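To load one of the included language pairs with the standard `datasets` API, a minimal sketch follows (the `fr_en` config and `validation` split are just one example from the configuration list above):
```python
from datasets import load_dataset

# Config names follow the "<source>_<target>" pattern, e.g. "fr_en" for French speech with English translations.
covost_fr_en = load_dataset("fixie-ai/covost2", "fr_en", split="validation")

sample = covost_fr_en[0]
print(sample["sentence"])      # transcript in the source language
print(sample["translation"])   # translation in the target language
audio = sample["audio"]        # dict with "array" (waveform) and "sampling_rate" (16 kHz)
```
 |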
jinzhuoran/RWKU | jinzhuoran | "2024-06-18T02:25:48Z" | 26,694 | 3 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:question-answering",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2406.10890",
"doi:10.57967/hf/2448",
"region:us",
"unlearning",
"knowledge unlearning",
"NLP",
"LLM"
] | [
"text-generation",
"fill-mask",
"question-answering"
] | "2024-06-02T12:02:18Z" | ---
language:
- en
license: cc-by-4.0
pretty_name: RWKU
size_categories:
- 10K<n<100K
task_categories:
- text-generation
- fill-mask
- question-answering
tags:
- unlearning
- knowledge unlearning
- NLP
- LLM
configs:
- config_name: forget_target
data_files:
- split: train
path:
- "All/intro.json"
- config_name: forget_level1
data_files:
- split: test
path:
- "All/forget_level1.json"
- config_name: forget_level2
data_files:
- split: test
path:
- "All/forget_level2.json"
- config_name: forget_level3
data_files:
- split: test
path:
- "All/forget_level3.json"
- config_name: neighbor_level1
data_files:
- split: test
path:
- "All/neighbor_level1.json"
- config_name: neighbor_level2
data_files:
- split: test
path:
- "All/neighbor_level2.json"
- config_name: mia_forget
data_files:
- split: test
path:
- "All/forget_mia.json"
- config_name: mia_retain
data_files:
- split: test
path:
- "All/retain_mia.json"
- config_name: utility_general
data_files:
- split: test
path:
- "All/retain_mmlu.json"
- config_name: utility_reason
data_files:
- split: test
path:
- "All/retain_bbh.json"
- config_name: utility_truthfulness
data_files:
- split: test
path:
- "All/truthful.json"
- config_name: utility_factuality
data_files:
- split: test
path:
- "All/triviaqa.json"
- config_name: utility_fluency
data_files:
- split: test
path:
- "All/fluency.json"
- config_name: train_original_passage
data_files:
- split: train
path:
- "All/passage.json"
- config_name: train_positive_llama3
data_files:
- split: train
path:
- "All/positive.json"
- config_name: train_negative_llama3
data_files:
- split: train
path:
- "All/negative.json"
- config_name: train_pair_llama3
data_files:
- split: train
path:
- "All/pair.json"
- config_name: train_refusal_llama3
data_files:
- split: train
path:
- "All/reject.json"
- config_name: train_positive_phi3
data_files:
- split: train
path:
- "All/positive_phi.json"
- config_name: train_negative_phi3
data_files:
- split: train
path:
- "All/negative_phi.json"
- config_name: train_pair_phi3
data_files:
- split: train
path:
- "All/pair_phi.json"
- config_name: train_refusal_phi3
data_files:
- split: train
path:
- "All/reject_phi.json"
---
# Dataset Card for Real-World Knowledge Unlearning Benchmark (RWKU)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://rwku-bench.github.io
- **Repository:** https://github.com/jinzhuoran/RWKU
- **Paper:** https://arxiv.org/abs/2406.10890
### Dataset Summary
**RWKU is a real-world knowledge unlearning benchmark specifically designed for large language models (LLMs).**
This benchmark contains 200 real-world unlearning targets and 13,131 multi-level forget probes, including 3,268 fill-in-the-blank probes, 2,879 question-answer probes, and 6,984 adversarial-attack probes.
RWKU is designed based on the following three key factors:
1. For the **task setting**, we consider a more practical and challenging setting, similar to _zero-shot knowledge unlearning_.
We provide only the unlearning target and the original model, without offering any forget corpus or retain corpus.
In this way, the setting avoids secondary information leakage caused by a forget corpus and is not affected by the distribution bias of a retain corpus.
2. For the **knowledge source**, we choose real-world famous people from Wikipedia as the unlearning targets and demonstrate that such popular knowledge is widely present in various LLMs through memorization quantification, making it more suitable for knowledge unlearning.
Additionally, choosing entities as unlearning targets makes it possible to clearly define the unlearning boundaries.
3. For the **evaluation framework**, we carefully design the forget set and the retain set to evaluate the model's capabilities across multiple real-world applications.
Regarding the forget set, we evaluate the **efficacy** of knowledge unlearning on both the knowledge memorization (fill-in-the-blank style) and knowledge manipulation (question-answer style) abilities.
Specifically, we also evaluate these two abilities through **adversarial attacks** to induce forgotten knowledge in the model.
We adopt four membership inference attack (MIA) methods for knowledge memorization on our collected MIA set.
We meticulously designed nine types of adversarial-attack probes for knowledge manipulation, including prefix injection, affirmative suffix, role playing, reverse query, and others.
Regarding the retain set, we design a neighbor set to test the impact of neighbor perturbation, specifically focusing on the **locality** of unlearning.
In addition, we assess the **model utility** on various capabilities, including general ability, reasoning ability, truthfulness, factuality, and fluency.
### Supported Tasks
Knowledge unlearning for LLMs.
### Languages
English.
## How to Use the Dataset
To evaluate the unlearning efficacy:
```python
from datasets import load_dataset
forget_level1 = load_dataset("jinzhuoran/RWKU", 'forget_level1')
forget_level2 = load_dataset("jinzhuoran/RWKU", 'forget_level2')
forget_level3 = load_dataset("jinzhuoran/RWKU", 'forget_level3')
```
To evaluate the locality:
```python
from datasets import load_dataset
neighbor_level1 = load_dataset("jinzhuoran/RWKU", 'neighbor_level1')
neighbor_level2 = load_dataset("jinzhuoran/RWKU", 'neighbor_level2')
```
To evaluate the model utility:
```python
from datasets import load_dataset
utility_general = load_dataset("jinzhuoran/RWKU", 'utility_general')
utility_reason = load_dataset("jinzhuoran/RWKU", 'utility_reason')
utility_truthfulness = load_dataset("jinzhuoran/RWKU", 'utility_truthfulness')
utility_factuality = load_dataset("jinzhuoran/RWKU", 'utility_factuality')
utility_fluency = load_dataset("jinzhuoran/RWKU", 'utility_fluency')
```
To conduct membership inference attacks:
```python
from datasets import load_dataset
mia_forget = load_dataset("jinzhuoran/RWKU", 'mia_forget')
mia_retain = load_dataset("jinzhuoran/RWKU", 'mia_retain')
```
To load the forget corpus:
```python
from datasets import load_dataset
train_original_passage = load_dataset("jinzhuoran/RWKU", 'train_original_passage')
train_positive_llama3 = load_dataset("jinzhuoran/RWKU", 'train_positive_llama3')
```
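The remaining corpus variants and the unlearning targets follow the same pattern; the config names below are taken from the YAML header of this card:
```python
from datasets import load_dataset

# Additional forget-corpus variants generated with Llama-3 (use the corresponding *_phi3 configs for the Phi-3 versions).
train_negative_llama3 = load_dataset("jinzhuoran/RWKU", 'train_negative_llama3')
train_pair_llama3 = load_dataset("jinzhuoran/RWKU", 'train_pair_llama3')
train_refusal_llama3 = load_dataset("jinzhuoran/RWKU", 'train_refusal_llama3')

# The unlearning targets themselves are exposed as a separate config.
forget_target = load_dataset("jinzhuoran/RWKU", 'forget_target')
```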
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citing Our Work
If you find our codebase and dataset beneficial, please cite our work:
```bibtex
@misc{jin2024rwku,
title={RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models},
author={Zhuoran Jin and Pengfei Cao and Chenhao Wang and Zhitao He and Hongbang Yuan and Jiachun Li and Yubo Chen and Kang Liu and Jun Zhao},
year={2024},
eprint={2406.10890},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ai4bharat/indic_glue | ai4bharat | "2024-01-04T12:36:30Z" | 26,235 | 11 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:multiple-choice",
"task_ids:topic-classification",
"task_ids:natural-language-inference",
"task_ids:sentiment-analysis",
"task_ids:semantic-similarity-scoring",
"task_ids:named-entity-recognition",
"task_ids:multiple-choice-qa",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:extended|other",
"language:as",
"language:bn",
"language:en",
"language:gu",
"language:hi",
"language:kn",
"language:ml",
"language:mr",
"language:or",
"language:pa",
"language:ta",
"language:te",
"license:other",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"discourse-mode-classification",
"paraphrase-identification",
"cross-lingual-similarity",
"headline-classification"
] | [
"text-classification",
"token-classification",
"multiple-choice"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- other
language_creators:
- found
language:
- as
- bn
- en
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- other
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|other
task_categories:
- text-classification
- token-classification
- multiple-choice
task_ids:
- topic-classification
- natural-language-inference
- sentiment-analysis
- semantic-similarity-scoring
- named-entity-recognition
- multiple-choice-qa
pretty_name: IndicGLUE
tags:
- discourse-mode-classification
- paraphrase-identification
- cross-lingual-similarity
- headline-classification
dataset_info:
- config_name: actsa-sc.te
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 1370907
num_examples: 4328
- name: validation
num_bytes: 166089
num_examples: 541
- name: test
num_bytes: 168291
num_examples: 541
download_size: 727630
dataset_size: 1705287
- config_name: bbca.hi
features:
- name: label
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 22126205
num_examples: 3467
- name: test
num_bytes: 5501148
num_examples: 866
download_size: 10349015
dataset_size: 27627353
- config_name: copa.en
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 46033
num_examples: 400
- name: validation
num_bytes: 11679
num_examples: 100
- name: test
num_bytes: 55846
num_examples: 500
download_size: 79431
dataset_size: 113558
- config_name: copa.gu
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 92097
num_examples: 362
- name: validation
num_bytes: 23450
num_examples: 88
- name: test
num_bytes: 109997
num_examples: 448
download_size: 107668
dataset_size: 225544
- config_name: copa.hi
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 93376
num_examples: 362
- name: validation
num_bytes: 23559
num_examples: 88
- name: test
num_bytes: 112830
num_examples: 449
download_size: 104233
dataset_size: 229765
- config_name: copa.mr
features:
- name: premise
dtype: string
- name: choice1
dtype: string
- name: choice2
dtype: string
- name: question
dtype: string
- name: label
dtype: int32
splits:
- name: train
num_bytes: 93441
num_examples: 362
- name: validation
num_bytes: 23874
num_examples: 88
- name: test
num_bytes: 112055
num_examples: 449
download_size: 105962
dataset_size: 229370
- config_name: csqa.as
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 3800523
num_examples: 2942
download_size: 1390423
dataset_size: 3800523
- config_name: csqa.bn
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 54671018
num_examples: 38845
download_size: 19648180
dataset_size: 54671018
- config_name: csqa.gu
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 29131607
num_examples: 22861
download_size: 6027825
dataset_size: 29131607
- config_name: csqa.hi
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 40409347
num_examples: 35140
download_size: 14711258
dataset_size: 40409347
- config_name: csqa.kn
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 21199816
num_examples: 13666
download_size: 7669655
dataset_size: 21199816
- config_name: csqa.ml
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 47220836
num_examples: 26537
download_size: 17382215
dataset_size: 47220836
- config_name: csqa.mr
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 13667174
num_examples: 11370
download_size: 5072738
dataset_size: 13667174
- config_name: csqa.or
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 2562365
num_examples: 1975
download_size: 948046
dataset_size: 2562365
- config_name: csqa.pa
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 5806097
num_examples: 5667
download_size: 2194109
dataset_size: 5806097
- config_name: csqa.ta
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 61868481
num_examples: 38590
download_size: 20789467
dataset_size: 61868481
- config_name: csqa.te
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: category
dtype: string
- name: title
dtype: string
- name: options
sequence: string
- name: out_of_context_options
sequence: string
splits:
- name: test
num_bytes: 58784997
num_examples: 41338
download_size: 17447618
dataset_size: 58784997
- config_name: cvit-mkb-clsr.en-bn
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 1990957
num_examples: 5522
download_size: 945551
dataset_size: 1990957
- config_name: cvit-mkb-clsr.en-gu
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 2303377
num_examples: 6463
download_size: 1093313
dataset_size: 2303377
- config_name: cvit-mkb-clsr.en-hi
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 1855989
num_examples: 5169
download_size: 890609
dataset_size: 1855989
- config_name: cvit-mkb-clsr.en-ml
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 1990089
num_examples: 4886
download_size: 868956
dataset_size: 1990089
- config_name: cvit-mkb-clsr.en-mr
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 2130601
num_examples: 5760
download_size: 993961
dataset_size: 2130601
- config_name: cvit-mkb-clsr.en-or
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 274873
num_examples: 752
download_size: 134334
dataset_size: 274873
- config_name: cvit-mkb-clsr.en-ta
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 2565178
num_examples: 5637
download_size: 1091653
dataset_size: 2565178
- config_name: cvit-mkb-clsr.en-te
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 1771129
num_examples: 5049
download_size: 840410
dataset_size: 1771129
- config_name: cvit-mkb-clsr.en-ur
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
splits:
- name: test
num_bytes: 288430
num_examples: 1006
download_size: 166129
dataset_size: 288430
- config_name: iitp-mr.hi
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 6704905
num_examples: 2480
- name: validation
num_bytes: 822218
num_examples: 310
- name: test
num_bytes: 702373
num_examples: 310
download_size: 3151762
dataset_size: 8229496
- config_name: iitp-pr.hi
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 945589
num_examples: 4182
- name: validation
num_bytes: 120100
num_examples: 523
- name: test
num_bytes: 121910
num_examples: 523
download_size: 509822
dataset_size: 1187599
- config_name: inltkh.gu
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': entertainment
'1': business
'2': tech
'3': sports
'4': state
'5': spirituality
'6': tamil-cinema
'7': positive
'8': negative
'9': neutral
splits:
- name: train
num_bytes: 883063
num_examples: 5269
- name: validation
num_bytes: 111201
num_examples: 659
- name: test
num_bytes: 110757
num_examples: 659
download_size: 515094
dataset_size: 1105021
- config_name: inltkh.ml
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': entertainment
'1': business
'2': tech
'3': sports
'4': state
'5': spirituality
'6': tamil-cinema
'7': positive
'8': negative
'9': neutral
splits:
- name: train
num_bytes: 1108145
num_examples: 5036
- name: validation
num_bytes: 140055
num_examples: 630
- name: test
num_bytes: 138847
num_examples: 630
download_size: 571019
dataset_size: 1387047
- config_name: inltkh.mr
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': entertainment
'1': business
'2': tech
'3': sports
'4': state
'5': spirituality
'6': tamil-cinema
'7': positive
'8': negative
'9': neutral
splits:
- name: train
num_bytes: 1462614
num_examples: 9672
- name: validation
num_bytes: 180306
num_examples: 1210
- name: test
num_bytes: 180558
num_examples: 1210
download_size: 840304
dataset_size: 1823478
- config_name: inltkh.ta
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': entertainment
'1': business
'2': tech
'3': sports
'4': state
'5': spirituality
'6': tamil-cinema
'7': positive
'8': negative
'9': neutral
splits:
- name: train
num_bytes: 2659569
num_examples: 5346
- name: validation
num_bytes: 316083
num_examples: 669
- name: test
num_bytes: 320465
num_examples: 669
download_size: 1271262
dataset_size: 3296117
- config_name: inltkh.te
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': entertainment
'1': business
'2': tech
'3': sports
'4': state
'5': spirituality
'6': tamil-cinema
'7': positive
'8': negative
'9': neutral
splits:
- name: train
num_bytes: 1361667
num_examples: 4328
- name: validation
num_bytes: 170471
num_examples: 541
- name: test
num_bytes: 173149
num_examples: 541
download_size: 726293
dataset_size: 1705287
- config_name: md.hi
features:
- name: sentence
dtype: string
- name: discourse_mode
dtype: string
- name: story_number
dtype: int32
- name: id
dtype: int32
splits:
- name: train
num_bytes: 1672109
num_examples: 7974
- name: validation
num_bytes: 211187
num_examples: 997
- name: test
num_bytes: 210175
num_examples: 997
download_size: 939801
dataset_size: 2093471
- config_name: sna.bn
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': kolkata
'1': state
'2': national
'3': sports
'4': entertainment
'5': international
splits:
- name: train
num_bytes: 46070046
num_examples: 11284
- name: validation
num_bytes: 5648126
num_examples: 1411
- name: test
num_bytes: 5799979
num_examples: 1411
download_size: 21415940
dataset_size: 57518151
- config_name: wiki-ner.as
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 374983
num_examples: 1021
- name: validation
num_bytes: 49312
num_examples: 157
- name: test
num_bytes: 50456
num_examples: 160
download_size: 72919
dataset_size: 474751
- config_name: wiki-ner.bn
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 7502824
num_examples: 20223
- name: validation
num_bytes: 988683
num_examples: 2985
- name: test
num_bytes: 985941
num_examples: 2690
download_size: 1278219
dataset_size: 9477448
- config_name: wiki-ner.gu
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 1571588
num_examples: 2343
- name: validation
num_bytes: 192804
num_examples: 297
- name: test
num_bytes: 197877
num_examples: 255
download_size: 329660
dataset_size: 1962269
- config_name: wiki-ner.hi
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 3762505
num_examples: 9463
- name: validation
num_bytes: 468678
num_examples: 1114
- name: test
num_bytes: 475253
num_examples: 1256
download_size: 948132
dataset_size: 4706436
- config_name: wiki-ner.kn
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 1352027
num_examples: 2679
- name: validation
num_bytes: 179538
num_examples: 412
- name: test
num_bytes: 180791
num_examples: 476
download_size: 421877
dataset_size: 1712356
- config_name: wiki-ner.ml
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 7678887
num_examples: 15620
- name: validation
num_bytes: 969947
num_examples: 2067
- name: test
num_bytes: 991102
num_examples: 2042
download_size: 2390442
dataset_size: 9639936
- config_name: wiki-ner.mr
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 5431489
num_examples: 12151
- name: validation
num_bytes: 701637
num_examples: 1498
- name: test
num_bytes: 655682
num_examples: 1329
download_size: 1410663
dataset_size: 6788808
- config_name: wiki-ner.or
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 493758
num_examples: 1077
- name: validation
num_bytes: 58568
num_examples: 132
- name: test
num_bytes: 62211
num_examples: 153
download_size: 102783
dataset_size: 614537
- config_name: wiki-ner.pa
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 520244
num_examples: 1408
- name: validation
num_bytes: 61170
num_examples: 186
- name: test
num_bytes: 61788
num_examples: 179
download_size: 149727
dataset_size: 643202
- config_name: wiki-ner.ta
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 10117080
num_examples: 20466
- name: validation
num_bytes: 1267188
num_examples: 2586
- name: test
num_bytes: 1321626
num_examples: 2611
download_size: 2819083
dataset_size: 12705894
- config_name: wiki-ner.te
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B-LOC
'1': B-ORG
'2': B-PER
'3': I-LOC
'4': I-ORG
'5': I-PER
'6': O
- name: additional_info
sequence:
sequence: string
splits:
- name: train
num_bytes: 3881211
num_examples: 7978
- name: validation
num_bytes: 458509
num_examples: 841
- name: test
num_bytes: 507806
num_examples: 1110
download_size: 1006881
dataset_size: 4847526
- config_name: wnli.en
features:
- name: hypothesis
dtype: string
- name: premise
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_entailment
'1': entailment
'2': None
splits:
- name: train
num_bytes: 104569
num_examples: 635
- name: validation
num_bytes: 11878
num_examples: 71
- name: test
num_bytes: 37297
num_examples: 146
download_size: 57667
dataset_size: 153744
- config_name: wnli.gu
features:
- name: hypothesis
dtype: string
- name: premise
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_entailment
'1': entailment
'2': None
splits:
- name: train
num_bytes: 251554
num_examples: 635
- name: validation
num_bytes: 28175
num_examples: 71
- name: test
num_bytes: 94578
num_examples: 146
download_size: 98032
dataset_size: 374307
- config_name: wnli.hi
features:
- name: hypothesis
dtype: string
- name: premise
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_entailment
'1': entailment
'2': None
splits:
- name: train
num_bytes: 253334
num_examples: 635
- name: validation
num_bytes: 28676
num_examples: 71
- name: test
num_bytes: 90823
num_examples: 146
download_size: 99450
dataset_size: 372833
- config_name: wnli.mr
features:
- name: hypothesis
dtype: string
- name: premise
dtype: string
- name: label
dtype:
class_label:
names:
'0': not_entailment
'1': entailment
'2': None
splits:
- name: train
num_bytes: 256649
num_examples: 635
- name: validation
num_bytes: 29218
num_examples: 71
- name: test
num_bytes: 97128
num_examples: 146
download_size: 103774
dataset_size: 382995
- config_name: wstp.as
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 13581336
num_examples: 5000
- name: validation
num_bytes: 1698968
num_examples: 625
- name: test
num_bytes: 1697650
num_examples: 626
download_size: 6959458
dataset_size: 16977954
- config_name: wstp.bn
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 143340457
num_examples: 47580
- name: validation
num_bytes: 17759236
num_examples: 5947
- name: test
num_bytes: 17633865
num_examples: 5948
download_size: 69145372
dataset_size: 178733558
- config_name: wstp.gu
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 39353464
num_examples: 10004
- name: validation
num_bytes: 4887752
num_examples: 1251
- name: test
num_bytes: 4699158
num_examples: 1251
download_size: 19763249
dataset_size: 48940374
- config_name: wstp.hi
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 158529578
num_examples: 44069
- name: validation
num_bytes: 19371904
num_examples: 5509
- name: test
num_bytes: 19593001
num_examples: 5509
download_size: 77868574
dataset_size: 197494483
- config_name: wstp.kn
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 139950313
num_examples: 35379
- name: validation
num_bytes: 17789782
num_examples: 4422
- name: test
num_bytes: 17897031
num_examples: 4423
download_size: 67719504
dataset_size: 175637126
- config_name: wstp.ml
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 88360504
num_examples: 27527
- name: validation
num_bytes: 11193340
num_examples: 3441
- name: test
num_bytes: 11150914
num_examples: 3441
download_size: 42336357
dataset_size: 110704758
- config_name: wstp.mr
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 28302341
num_examples: 10446
- name: validation
num_bytes: 3328798
num_examples: 1306
- name: test
num_bytes: 3631684
num_examples: 1306
download_size: 13886208
dataset_size: 35262823
- config_name: wstp.or
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 10900006
num_examples: 4015
- name: validation
num_bytes: 1264935
num_examples: 502
- name: test
num_bytes: 1344652
num_examples: 502
download_size: 5319128
dataset_size: 13509593
- config_name: wstp.pa
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 22189730
num_examples: 8772
- name: validation
num_bytes: 2789186
num_examples: 1097
- name: test
num_bytes: 2685767
num_examples: 1097
download_size: 11201369
dataset_size: 27664683
- config_name: wstp.ta
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 151929218
num_examples: 48940
- name: validation
num_bytes: 18817167
num_examples: 6117
- name: test
num_bytes: 18815071
num_examples: 6118
download_size: 68699092
dataset_size: 189561456
- config_name: wstp.te
features:
- name: sectionText
dtype: string
- name: correctTitle
dtype: string
- name: titleA
dtype: string
- name: titleB
dtype: string
- name: titleC
dtype: string
- name: titleD
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 151696691
num_examples: 80000
- name: validation
num_bytes: 19003169
num_examples: 10000
- name: test
num_bytes: 18991913
num_examples: 10000
download_size: 50158580
dataset_size: 189691773
configs:
- config_name: actsa-sc.te
data_files:
- split: train
path: actsa-sc.te/train-*
- split: validation
path: actsa-sc.te/validation-*
- split: test
path: actsa-sc.te/test-*
- config_name: bbca.hi
data_files:
- split: train
path: bbca.hi/train-*
- split: test
path: bbca.hi/test-*
- config_name: copa.en
data_files:
- split: train
path: copa.en/train-*
- split: validation
path: copa.en/validation-*
- split: test
path: copa.en/test-*
- config_name: copa.gu
data_files:
- split: train
path: copa.gu/train-*
- split: validation
path: copa.gu/validation-*
- split: test
path: copa.gu/test-*
- config_name: copa.hi
data_files:
- split: train
path: copa.hi/train-*
- split: validation
path: copa.hi/validation-*
- split: test
path: copa.hi/test-*
- config_name: copa.mr
data_files:
- split: train
path: copa.mr/train-*
- split: validation
path: copa.mr/validation-*
- split: test
path: copa.mr/test-*
- config_name: csqa.as
data_files:
- split: test
path: csqa.as/test-*
- config_name: csqa.bn
data_files:
- split: test
path: csqa.bn/test-*
- config_name: csqa.gu
data_files:
- split: test
path: csqa.gu/test-*
- config_name: csqa.hi
data_files:
- split: test
path: csqa.hi/test-*
- config_name: csqa.kn
data_files:
- split: test
path: csqa.kn/test-*
- config_name: csqa.ml
data_files:
- split: test
path: csqa.ml/test-*
- config_name: csqa.mr
data_files:
- split: test
path: csqa.mr/test-*
- config_name: csqa.or
data_files:
- split: test
path: csqa.or/test-*
- config_name: csqa.pa
data_files:
- split: test
path: csqa.pa/test-*
- config_name: csqa.ta
data_files:
- split: test
path: csqa.ta/test-*
- config_name: csqa.te
data_files:
- split: test
path: csqa.te/test-*
- config_name: cvit-mkb-clsr.en-bn
data_files:
- split: test
path: cvit-mkb-clsr.en-bn/test-*
- config_name: cvit-mkb-clsr.en-gu
data_files:
- split: test
path: cvit-mkb-clsr.en-gu/test-*
- config_name: cvit-mkb-clsr.en-hi
data_files:
- split: test
path: cvit-mkb-clsr.en-hi/test-*
- config_name: cvit-mkb-clsr.en-ml
data_files:
- split: test
path: cvit-mkb-clsr.en-ml/test-*
- config_name: cvit-mkb-clsr.en-mr
data_files:
- split: test
path: cvit-mkb-clsr.en-mr/test-*
- config_name: cvit-mkb-clsr.en-or
data_files:
- split: test
path: cvit-mkb-clsr.en-or/test-*
- config_name: cvit-mkb-clsr.en-ta
data_files:
- split: test
path: cvit-mkb-clsr.en-ta/test-*
- config_name: cvit-mkb-clsr.en-te
data_files:
- split: test
path: cvit-mkb-clsr.en-te/test-*
- config_name: cvit-mkb-clsr.en-ur
data_files:
- split: test
path: cvit-mkb-clsr.en-ur/test-*
- config_name: iitp-mr.hi
data_files:
- split: train
path: iitp-mr.hi/train-*
- split: validation
path: iitp-mr.hi/validation-*
- split: test
path: iitp-mr.hi/test-*
- config_name: iitp-pr.hi
data_files:
- split: train
path: iitp-pr.hi/train-*
- split: validation
path: iitp-pr.hi/validation-*
- split: test
path: iitp-pr.hi/test-*
- config_name: inltkh.gu
data_files:
- split: train
path: inltkh.gu/train-*
- split: validation
path: inltkh.gu/validation-*
- split: test
path: inltkh.gu/test-*
- config_name: inltkh.ml
data_files:
- split: train
path: inltkh.ml/train-*
- split: validation
path: inltkh.ml/validation-*
- split: test
path: inltkh.ml/test-*
- config_name: inltkh.mr
data_files:
- split: train
path: inltkh.mr/train-*
- split: validation
path: inltkh.mr/validation-*
- split: test
path: inltkh.mr/test-*
- config_name: inltkh.ta
data_files:
- split: train
path: inltkh.ta/train-*
- split: validation
path: inltkh.ta/validation-*
- split: test
path: inltkh.ta/test-*
- config_name: inltkh.te
data_files:
- split: train
path: inltkh.te/train-*
- split: validation
path: inltkh.te/validation-*
- split: test
path: inltkh.te/test-*
- config_name: md.hi
data_files:
- split: train
path: md.hi/train-*
- split: validation
path: md.hi/validation-*
- split: test
path: md.hi/test-*
- config_name: sna.bn
data_files:
- split: train
path: sna.bn/train-*
- split: validation
path: sna.bn/validation-*
- split: test
path: sna.bn/test-*
- config_name: wiki-ner.as
data_files:
- split: train
path: wiki-ner.as/train-*
- split: validation
path: wiki-ner.as/validation-*
- split: test
path: wiki-ner.as/test-*
- config_name: wiki-ner.bn
data_files:
- split: train
path: wiki-ner.bn/train-*
- split: validation
path: wiki-ner.bn/validation-*
- split: test
path: wiki-ner.bn/test-*
- config_name: wiki-ner.gu
data_files:
- split: train
path: wiki-ner.gu/train-*
- split: validation
path: wiki-ner.gu/validation-*
- split: test
path: wiki-ner.gu/test-*
- config_name: wiki-ner.hi
data_files:
- split: train
path: wiki-ner.hi/train-*
- split: validation
path: wiki-ner.hi/validation-*
- split: test
path: wiki-ner.hi/test-*
- config_name: wiki-ner.kn
data_files:
- split: train
path: wiki-ner.kn/train-*
- split: validation
path: wiki-ner.kn/validation-*
- split: test
path: wiki-ner.kn/test-*
- config_name: wiki-ner.ml
data_files:
- split: train
path: wiki-ner.ml/train-*
- split: validation
path: wiki-ner.ml/validation-*
- split: test
path: wiki-ner.ml/test-*
- config_name: wiki-ner.mr
data_files:
- split: train
path: wiki-ner.mr/train-*
- split: validation
path: wiki-ner.mr/validation-*
- split: test
path: wiki-ner.mr/test-*
- config_name: wiki-ner.or
data_files:
- split: train
path: wiki-ner.or/train-*
- split: validation
path: wiki-ner.or/validation-*
- split: test
path: wiki-ner.or/test-*
- config_name: wiki-ner.pa
data_files:
- split: train
path: wiki-ner.pa/train-*
- split: validation
path: wiki-ner.pa/validation-*
- split: test
path: wiki-ner.pa/test-*
- config_name: wiki-ner.ta
data_files:
- split: train
path: wiki-ner.ta/train-*
- split: validation
path: wiki-ner.ta/validation-*
- split: test
path: wiki-ner.ta/test-*
- config_name: wiki-ner.te
data_files:
- split: train
path: wiki-ner.te/train-*
- split: validation
path: wiki-ner.te/validation-*
- split: test
path: wiki-ner.te/test-*
- config_name: wnli.en
data_files:
- split: train
path: wnli.en/train-*
- split: validation
path: wnli.en/validation-*
- split: test
path: wnli.en/test-*
- config_name: wnli.gu
data_files:
- split: train
path: wnli.gu/train-*
- split: validation
path: wnli.gu/validation-*
- split: test
path: wnli.gu/test-*
- config_name: wnli.hi
data_files:
- split: train
path: wnli.hi/train-*
- split: validation
path: wnli.hi/validation-*
- split: test
path: wnli.hi/test-*
- config_name: wnli.mr
data_files:
- split: train
path: wnli.mr/train-*
- split: validation
path: wnli.mr/validation-*
- split: test
path: wnli.mr/test-*
- config_name: wstp.as
data_files:
- split: train
path: wstp.as/train-*
- split: validation
path: wstp.as/validation-*
- split: test
path: wstp.as/test-*
- config_name: wstp.bn
data_files:
- split: train
path: wstp.bn/train-*
- split: validation
path: wstp.bn/validation-*
- split: test
path: wstp.bn/test-*
- config_name: wstp.gu
data_files:
- split: train
path: wstp.gu/train-*
- split: validation
path: wstp.gu/validation-*
- split: test
path: wstp.gu/test-*
- config_name: wstp.hi
data_files:
- split: train
path: wstp.hi/train-*
- split: validation
path: wstp.hi/validation-*
- split: test
path: wstp.hi/test-*
- config_name: wstp.kn
data_files:
- split: train
path: wstp.kn/train-*
- split: validation
path: wstp.kn/validation-*
- split: test
path: wstp.kn/test-*
- config_name: wstp.ml
data_files:
- split: train
path: wstp.ml/train-*
- split: validation
path: wstp.ml/validation-*
- split: test
path: wstp.ml/test-*
- config_name: wstp.mr
data_files:
- split: train
path: wstp.mr/train-*
- split: validation
path: wstp.mr/validation-*
- split: test
path: wstp.mr/test-*
- config_name: wstp.or
data_files:
- split: train
path: wstp.or/train-*
- split: validation
path: wstp.or/validation-*
- split: test
path: wstp.or/test-*
- config_name: wstp.pa
data_files:
- split: train
path: wstp.pa/train-*
- split: validation
path: wstp.pa/validation-*
- split: test
path: wstp.pa/test-*
- config_name: wstp.ta
data_files:
- split: train
path: wstp.ta/train-*
- split: validation
path: wstp.ta/validation-*
- split: test
path: wstp.ta/test-*
- config_name: wstp.te
data_files:
- split: train
path: wstp.te/train-*
- split: validation
path: wstp.te/validation-*
- split: test
path: wstp.te/test-*
---
# Dataset Card for "indic_glue"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://ai4bharat.iitm.ac.in/indic-glue
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages](https://aclanthology.org/2020.findings-emnlp.445/)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3.51 GB
- **Size of the generated dataset:** 1.65 GB
- **Total amount of disk used:** 5.16 GB
### Dataset Summary
IndicGLUE is a natural language understanding benchmark for Indian languages. It contains a wide
variety of tasks and covers 11 major Indian languages - as, bn, gu, hi, kn, ml, mr, or, pa, ta, te.
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task
in which a system must read a sentence with a pronoun and select the referent of that pronoun from
a list of choices. The examples are manually constructed to foil simple statistical methods: Each
one is contingent on contextual information provided by a single word or phrase in the sentence.
To convert the problem into sentence pair classification, we construct sentence pairs by replacing
the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the
pronoun substituted is entailed by the original sentence. We use a small evaluation set consisting of
new examples derived from fiction books that was shared privately by the authors of the original
corpus. While the included training set is balanced between two classes, the test set is imbalanced
between them (65% not entailment). Also, due to a data quirk, the development set is adversarial:
hypotheses are sometimes shared between training and development examples, so if a model memorizes the
training examples, it will predict the wrong label on the corresponding development set
example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence
between a model's score on this task and its score on the unconverted original task. We
call the converted dataset WNLI (Winograd NLI). This dataset has been translated and publicly released for 3
Indian languages by AI4Bharat.
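As a quick usage sketch (assuming the Hub repository id matches the card name `indic_glue`; config names follow the `<task>.<language>` pattern listed in the metadata above), one of the WNLI configurations can be loaded with the 🤗 `datasets` library:
```python
from datasets import load_dataset

# Load the Hindi WNLI configuration of IndicGLUE.
# Config names follow the "<task>.<language>" pattern (e.g. "wnli.hi", "copa.en").
wnli_hi = load_dataset("indic_glue", "wnli.hi")

print(wnli_hi)              # DatasetDict with train, validation and test splits
print(wnli_hi["train"][0])  # {'hypothesis': ..., 'premise': ..., 'label': ...}
```
Each entry exposes the `premise`/`hypothesis` pair together with the integer `label` whose class names are given in the metadata above.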
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### actsa-sc.te
- **Size of downloaded dataset files:** 0.38 MB
- **Size of the generated dataset:** 1.71 MB
- **Total amount of disk used:** 2.09 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"label": 0,
"text": "\"ప్రయాణాల్లో ఉన్నవారికోసం బస్ స్టేషన్లు, రైల్వే స్టేషన్లలో పల్స్పోలియో బూతులను ఏర్పాటు చేసి చిన్నారులకు పోలియో చుక్కలు వేసేలా ఏర..."
}
```
#### bbca.hi
- **Size of downloaded dataset files:** 5.77 MB
- **Size of the generated dataset:** 27.63 MB
- **Total amount of disk used:** 33.40 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"label": "pakistan",
"text": "\"नेटिजन यानि इंटरनेट पर सक्रिय नागरिक अब ट्विटर पर सरकार द्वारा लगाए प्रतिबंधों के समर्थन या विरोध में अपने विचार व्यक्त करते है..."
}
```
#### copa.en
- **Size of downloaded dataset files:** 0.75 MB
- **Size of the generated dataset:** 0.12 MB
- **Total amount of disk used:** 0.87 MB
An example of 'validation' looks as follows.
```
{
"choice1": "I swept the floor in the unoccupied room.",
"choice2": "I shut off the light in the unoccupied room.",
"label": 1,
"premise": "I wanted to conserve energy.",
"question": "effect"
}
```
#### copa.gu
- **Size of downloaded dataset files:** 0.75 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.99 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"choice1": "\"સ્ત્રી જાણતી હતી કે તેનો મિત્ર મુશ્કેલ સમયમાંથી પસાર થઈ રહ્યો છે.\"...",
"choice2": "\"મહિલાને લાગ્યું કે તેના મિત્રએ તેની દયાળુ લાભ લીધો છે.\"...",
"label": 0,
"premise": "મહિલાએ તેના મિત્રની મુશ્કેલ વર્તન સહન કરી.",
"question": "cause"
}
```
#### copa.hi
- **Size of downloaded dataset files:** 0.75 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.99 MB
An example of 'validation' looks as follows.
```
{
"choice1": "मैंने उसका प्रस्ताव ठुकरा दिया।",
"choice2": "उन्होंने मुझे उत्पाद खरीदने के लिए राजी किया।",
"label": 0,
"premise": "मैंने सेल्समैन की पिच पर शक किया।",
"question": "effect"
}
```
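In the COPA configurations, the integer `label` indicates which alternative is the more plausible cause or effect of the premise. A minimal sketch of recovering the selected alternative (assuming, as in the examples above, that label `0` points to `choice1` and `1` to `choice2`):
```python
from datasets import load_dataset

copa_en = load_dataset("indic_glue", "copa.en", split="validation")

example = copa_en[0]
# Pick the alternative indicated by the label; the convention assumed here is
# that label 0 selects choice1 and label 1 selects choice2.
correct = example["choice1"] if example["label"] == 0 else example["choice2"]
print(example["premise"], "->", example["question"], "->", correct)
```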
### Data Fields
The data fields are the same among all splits.
#### actsa-sc.te
- `text`: a `string` feature.
- `label`: a classification label, with possible values `positive` (0) and `negative` (1).
#### bbca.hi
- `label`: a `string` feature.
- `text`: a `string` feature.
#### copa.en
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: an `int32` feature.
#### copa.gu
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: an `int32` feature.
#### copa.hi
- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: an `int32` feature.
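For configurations whose `label` is a class label (e.g. `actsa-sc.te`), the `datasets` feature object can translate between integer ids and class names; a short sketch assuming the label names listed above:
```python
from datasets import load_dataset

actsa = load_dataset("indic_glue", "actsa-sc.te", split="train")

label_feature = actsa.features["label"]          # ClassLabel with names ["positive", "negative"]
print(label_feature.int2str(actsa[0]["label"]))  # e.g. "positive"
print(label_feature.str2int("negative"))         # 1
```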
### Data Splits
#### actsa-sc.te
| |train|validation|test|
|-----------|----:|---------:|---:|
|actsa-sc.te| 4328| 541| 541|
#### bbca.hi
| |train|test|
|-------|----:|---:|
|bbca.hi| 3467| 866|
#### copa.en
| |train|validation|test|
|-------|----:|---------:|---:|
|copa.en| 400| 100| 500|
#### copa.gu
| |train|validation|test|
|-------|----:|---------:|---:|
|copa.gu| 362| 88| 448|
#### copa.hi
| |train|validation|test|
|-------|----:|---------:|---:|
|copa.hi| 362| 88| 449|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{kakwani-etal-2020-indicnlpsuite,
title = "{I}ndic{NLPS}uite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for {I}ndian Languages",
author = "Kakwani, Divyanshu and
Kunchukuttan, Anoop and
Golla, Satish and
N.C., Gokul and
Bhattacharyya, Avik and
Khapra, Mitesh M. and
Kumar, Pratyush",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.445",
doi = "10.18653/v1/2020.findings-emnlp.445",
pages = "4948--4961",
}
@inproceedings{Levesque2011TheWS,
title={The Winograd Schema Challenge},
author={H. Levesque and E. Davis and L. Morgenstern},
booktitle={KR},
year={2011}
}
```
### Contributions
Thanks to [@sumanthd17](https://github.com/sumanthd17) for adding this dataset. |
AISE-TUDelft/MSR_Intermediate | AISE-TUDelft | "2025-02-18T16:10:48Z" | 26,038 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-10-31T11:21:58Z" | ---
dataset_info:
- config_name: ANTLRExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 7557410
num_examples: 541
download_size: 2707259
dataset_size: 7557410
- config_name: AdaExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 578367556
num_examples: 35425
download_size: 110673452
dataset_size: 578367556
- config_name: AdaNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 578655182
num_examples: 35425
download_size: 111025773
dataset_size: 578655182
- config_name: AgdaExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 38226393
num_examples: 5113
download_size: 14182143
dataset_size: 38226393
- config_name: AgdaNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 38267937
num_examples: 5113
download_size: 14217347
dataset_size: 38267937
- config_name: AntlrNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 7561706
num_examples: 541
download_size: 2724032
dataset_size: 7561706
- config_name: ApexExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 24569165
num_examples: 7641
download_size: 6353866
dataset_size: 24569165
- config_name: ApexNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
splits:
- name: train
num_bytes: 24631233
num_examples: 7641
download_size: 6368630
dataset_size: 24631233
- config_name: AssemblyExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 2053432940
num_examples: 104901
download_size: 547495918
dataset_size: 2053432940
- config_name: AssemblyNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 2054324591
num_examples: 104901
download_size: 549503862
dataset_size: 2054324591
- config_name: C#Exact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 26661602730
num_examples: 3770829
download_size: 6588906272
dataset_size: 26661602730
- config_name: C#Near
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
- name: near_dups_ghcode
dtype: bool
splits:
- name: train
num_bytes: 26663488268
num_examples: 3770829
download_size: 6603075859
dataset_size: 26663488268
- config_name: CExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 89736233404
num_examples: 4960192
download_size: 28128090840
dataset_size: 89736233404
- config_name: CNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
- name: near_dups_ghcode
dtype: bool
splits:
- name: train
num_bytes: 89738714139
num_examples: 4960192
download_size: 28299862901
dataset_size: 89738714139
- config_name: COBOLExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 6629054
num_examples: 1208
download_size: 1750557
dataset_size: 6629054
- config_name: CPP2Near
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: near_dups_ghcode
dtype: bool
splits:
- name: train
num_bytes: 58160863267
num_examples: 4811620
download_size: 17129813603
dataset_size: 58160863267
- config_name: CPPExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 58160261610
num_examples: 4811620
download_size: 17076690695
dataset_size: 58160261610
- config_name: CPPNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
- name: near_dups_ghcode
dtype: bool
splits:
- name: train
num_bytes: 58162667758
num_examples: 4811620
download_size: 17132623057
dataset_size: 58162667758
- config_name: ClojureExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 1421032074
num_examples: 273181
download_size: 459309399
dataset_size: 1421032074
- config_name: ClojureNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 1421066089
num_examples: 273181
download_size: 460645609
dataset_size: 1421066089
- config_name: CobolNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 6629000
num_examples: 1208
download_size: 1733668
dataset_size: 6629000
- config_name: CommonLispExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 296677534
num_examples: 16968
download_size: 109149148
dataset_size: 296677534
- config_name: CommonLispNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 296679380
num_examples: 16968
download_size: 110407258
dataset_size: 296679380
- config_name: CoqExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 632649581
num_examples: 26175
download_size: 189961246
dataset_size: 632649581
- config_name: CoqNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 632652578
num_examples: 26175
download_size: 190833648
dataset_size: 632652578
- config_name: CrystalExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 31335756
num_examples: 7300
download_size: 10366475
dataset_size: 31335756
- config_name: CrystalNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 31336434
num_examples: 7300
download_size: 10379390
dataset_size: 31336434
- config_name: CudaExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 334592336
num_examples: 13359
download_size: 102491703
dataset_size: 334592336
- config_name: CudaNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 334593801
num_examples: 13359
download_size: 102875919
dataset_size: 334593801
- config_name: DExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 3255384976
num_examples: 126111
download_size: 1129728566
dataset_size: 3255384976
- config_name: DNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 3255400520
num_examples: 126111
download_size: 1135463467
dataset_size: 3255400520
- config_name: DartExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 2329045207
num_examples: 413203
download_size: 669869628
dataset_size: 2329045207
- config_name: DartNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 2329096793
num_examples: 413203
download_size: 670901970
dataset_size: 2329096793
- config_name: EJSExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 71531393
num_examples: 12884
download_size: 21195866
dataset_size: 71531393
- config_name: EjsNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: near_dups_stackv2
dtype: bool
splits:
- name: train
num_bytes: 71635864
num_examples: 12884
download_size: 21210665
dataset_size: 71635864
- config_name: ElixirExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 841135120
num_examples: 127910
download_size: 298160239
dataset_size: 841135120
- config_name: ElixirNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 841151060
num_examples: 127910
download_size: 298816538
dataset_size: 841151060
- config_name: ElmExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 8383
num_examples: 7
download_size: 27695
dataset_size: 8383
- config_name: ElmNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 8354
num_examples: 7
download_size: 22185
dataset_size: 8354
- config_name: EmacsLispExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 250101478
num_examples: 7963
download_size: 86051810
dataset_size: 250101478
- config_name: EmacsLispNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 250102362
num_examples: 7963
download_size: 86437277
dataset_size: 250102362
- config_name: ErlangExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 372175368
num_examples: 32049
download_size: 110494347
dataset_size: 372175368
- config_name: ErlangNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 372179250
num_examples: 32049
download_size: 110899584
dataset_size: 372179250
- config_name: F#Exact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 192500693
num_examples: 16015
download_size: 47297899
dataset_size: 192500693
- config_name: F#Near
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 192502511
num_examples: 16015
download_size: 47470253
dataset_size: 192502511
- config_name: ForthExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 347106780
num_examples: 7932
download_size: 144504016
dataset_size: 347106780
- config_name: ForthNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 347107612
num_examples: 7932
download_size: 146797290
dataset_size: 347107612
- config_name: FortranExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 2847566
num_examples: 63
download_size: 1054373
dataset_size: 2847566
- config_name: FortranNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 2847315
num_examples: 63
download_size: 1062081
dataset_size: 2847315
- config_name: GoExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 26167202808
num_examples: 2355716
download_size: 8138108314
dataset_size: 26167202808
- config_name: GoNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 26168086245
num_examples: 2355716
download_size: 8174167267
dataset_size: 26168086245
- config_name: GraphQLExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 38263
num_examples: 3
download_size: 36182
dataset_size: 38263
- config_name: GraphQLNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 38254
num_examples: 3
download_size: 32912
dataset_size: 38254
- config_name: GroovyExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 248453777
num_examples: 48353
download_size: 78401132
dataset_size: 248453777
- config_name: GroovyNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 248459647
num_examples: 48353
download_size: 78630814
dataset_size: 248459647
- config_name: HackExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 380628718
num_examples: 37405
download_size: 128232166
dataset_size: 380628718
- config_name: HackNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 380633269
num_examples: 37405
download_size: 128649687
dataset_size: 380633269
- config_name: HaskellExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 689851801
num_examples: 111234
download_size: 236120258
dataset_size: 689851801
- config_name: HaskellNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 689893289
num_examples: 111234
download_size: 236739420
dataset_size: 689893289
- config_name: HaskellNearT
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 689893289
num_examples: 111234
download_size: 236739420
dataset_size: 689893289
- config_name: HaskellTest
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: near_duplicates_redpajama
dtype: bool
splits:
- name: train
num_bytes: 689865477
num_examples: 111234
download_size: 236693079
dataset_size: 689865477
- config_name: HaskellTest2
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: near_duplicates_ghcode
dtype: bool
splits:
- name: train
num_bytes: 689865477
num_examples: 111234
download_size: 236695867
dataset_size: 689865477
- config_name: JavaExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 32486202146
num_examples: 5197338
download_size: 8535677041
dataset_size: 32486202146
- config_name: JavaNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_redpajama
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 32488151167
num_examples: 5197338
download_size: 8542985524
dataset_size: 32488151167
- config_name: JavaNearF
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_redpajama
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_duplicates_githubcode
dtype: bool
splits:
- name: train
num_bytes: 32488800842
num_examples: 5197338
download_size: 8543979432
dataset_size: 32488800842
- config_name: JavaScriptExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 88087310969
num_examples: 3393747
download_size: 28914572193
dataset_size: 88087310969
- config_name: JavaScriptNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_redpajama
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_ghcode
dtype: bool
splits:
- name: train
num_bytes: 88089008184
num_examples: 3393747
download_size: 29083319680
dataset_size: 88089008184
- config_name: JuliaExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 967638517
num_examples: 38381
download_size: 246231934
dataset_size: 967638517
- config_name: JuliaNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 967652903
num_examples: 38381
download_size: 247077270
dataset_size: 967652903
- config_name: JupyterNotebookExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 11722076020
num_examples: 35313
download_size: 9067703543
dataset_size: 11722076020
- config_name: KotlinExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 3812037093
num_examples: 1045396
download_size: 1110654794
dataset_size: 3812037093
- config_name: KotlinNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 3812167735
num_examples: 1045396
download_size: 1110429592
dataset_size: 3812167735
- config_name: LessExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 95845785
num_examples: 7389
download_size: 26480395
dataset_size: 95845785
- config_name: LessNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 95846529
num_examples: 7389
download_size: 26477572
dataset_size: 95846529
- config_name: LuaExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 8353646445
num_examples: 913898
download_size: 2505145950
dataset_size: 8353646445
- config_name: LuaNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 8353989182
num_examples: 913898
download_size: 2515603988
dataset_size: 8353989182
- config_name: MathematicaExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 7321356594
num_examples: 89853
download_size: 3584669375
dataset_size: 7321356594
- config_name: MathematicaNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_duplicates_redpajama
dtype: bool
splits:
- name: train
num_bytes: 7321378962
num_examples: 89853
download_size: 3602914923
dataset_size: 7321378962
- config_name: MatlabExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 6903374516
num_examples: 665659
download_size: 2399794447
dataset_size: 6903374516
- config_name: MatlabNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 6903540783
num_examples: 665659
download_size: 2414346658
dataset_size: 6903540783
- config_name: NetLogoExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 41827450
num_examples: 863
download_size: 11620917
dataset_size: 41827450
- config_name: NetLogoNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 41827458
num_examples: 863
download_size: 11679805
dataset_size: 41827458
- config_name: NewLispExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 52918884
num_examples: 5148
download_size: 14039770
dataset_size: 52918884
- config_name: NewLispNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
splits:
- name: train
num_bytes: 52918764
num_examples: 5148
download_size: 14074385
dataset_size: 52918764
- config_name: NixExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 203855711
num_examples: 71199
download_size: 78575477
dataset_size: 203855711
- config_name: NixNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 203864476
num_examples: 71199
download_size: 78726489
dataset_size: 203864476
- config_name: OCamlExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 648064551
num_examples: 69171
download_size: 222300297
dataset_size: 648064551
- config_name: OCamlNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 648072984
num_examples: 69171
download_size: 222952991
dataset_size: 648072984
- config_name: Objective-CExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 9602633568
num_examples: 698137
download_size: 3703274717
dataset_size: 9602633568
- config_name: Objective-CNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 9602720799
num_examples: 698137
download_size: 3719903322
dataset_size: 9602720799
- config_name: PHPExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 25438710903
num_examples: 3363040
download_size: 7613380934
dataset_size: 25438710903
- config_name: PHPNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
- name: near_dups_ghcode
dtype: bool
splits:
- name: train
num_bytes: 25440392419
num_examples: 3363040
download_size: 7635635671
dataset_size: 25440392419
- config_name: PascalExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 7655488388
num_examples: 225749
download_size: 2498908413
dataset_size: 7655488388
- config_name: PascalNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 7655516624
num_examples: 225749
download_size: 2517922393
dataset_size: 7655516624
- config_name: PerlExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 16870280664
num_examples: 629769
download_size: 5734951211
dataset_size: 16870280664
- config_name: PerlNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 16870516978
num_examples: 629769
download_size: 5771999455
dataset_size: 16870516978
- config_name: ProcessingExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 84096293
num_examples: 20343
download_size: 29270300
dataset_size: 84096293
- config_name: ProcessingNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 84098751
num_examples: 20343
download_size: 29246387
dataset_size: 84098751
- config_name: PrologExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
splits:
- name: train
num_bytes: 691824350
num_examples: 20279
download_size: 191072651
dataset_size: 691824350
- config_name: PrologNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 691829302
num_examples: 20279
download_size: 192117293
dataset_size: 691829302
- config_name: PythonExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_CodeParrot
dtype: bool
splits:
- name: train
num_bytes: 25545914243
num_examples: 1792451
download_size: 10130671538
dataset_size: 25545914243
- config_name: PythonNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
- name: near_dups_codeparrot
dtype: bool
- name: near_dups_ghcode
dtype: bool
splits:
- name: train
num_bytes: 25546586522
num_examples: 1792451
download_size: 10170421542
dataset_size: 25546586522
- config_name: PythonParrot
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_CodeParrot
dtype: bool
- name: near_duplicates_codeparrot
dtype: bool
splits:
- name: train
num_bytes: 25546138378
num_examples: 1792451
download_size: 10169529284
dataset_size: 25546138378
- config_name: PythonTest
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
- name: exact_dupe_CodeParrot
dtype: bool
- name: near_duplicates_redpajama
dtype: bool
splits:
- name: train
num_bytes: 25546138386
num_examples: 1792451
download_size: 10169495473
dataset_size: 25546138386
- config_name: RExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 21442634265
num_examples: 374812
download_size: 8600403423
dataset_size: 21442634265
- config_name: RNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 21445679622
num_examples: 374812
download_size: 8727132044
dataset_size: 21445679622
- config_name: RakuExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 4553176
num_examples: 1299
download_size: 1377473
dataset_size: 4553176
- config_name: RakuNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
splits:
- name: train
num_bytes: 4553060
num_examples: 1299
download_size: 1372440
dataset_size: 4553060
- config_name: RubyExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 2981769330
num_examples: 794364
download_size: 1009215918
dataset_size: 2981769330
- config_name: RubyNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 2982067120
num_examples: 794364
download_size: 1010741791
dataset_size: 2982067120
- config_name: RustExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 8834624371
num_examples: 844258
download_size: 2619167582
dataset_size: 8834624371
- config_name: RustNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 8834835442
num_examples: 844258
download_size: 2628770077
dataset_size: 8834835442
- config_name: SQLExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 392804973
num_examples: 41178
download_size: 87660816
dataset_size: 392804973
- config_name: SQLNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 392820273
num_examples: 41178
download_size: 87888246
dataset_size: 392820273
- config_name: ScalaExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 1121907877
num_examples: 224021
download_size: 357412683
dataset_size: 1121907877
- config_name: ScalaNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_ghcode
dtype: bool
- name: near_dups_stackv1
dtype: bool
- name: near_dups_redpajama
dtype: bool
splits:
- name: train
num_bytes: 1121963752
num_examples: 224021
download_size: 358048356
dataset_size: 1121963752
- config_name: SchemeExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 1666453613
num_examples: 54226
download_size: 609833105
dataset_size: 1666453613
- config_name: SchemeNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 1666453595
num_examples: 54226
download_size: 615428052
dataset_size: 1666453595
- config_name: ScilabExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 40724888
num_examples: 4084
download_size: 19426798
dataset_size: 40724888
- config_name: ScilabNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 40724883
num_examples: 4084
download_size: 19424804
dataset_size: 40724883
- config_name: StarlarkExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 3457635
num_examples: 498
download_size: 1347364
dataset_size: 3457635
- config_name: StarlarkNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
splits:
- name: train
num_bytes: 3457631
num_examples: 498
download_size: 1352131
dataset_size: 3457631
- config_name: SwiftExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 2713542331
num_examples: 439565
download_size: 854140622
dataset_size: 2713542331
- config_name: SwiftNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 2713542195
num_examples: 439565
download_size: 855986444
dataset_size: 2713542195
- config_name: TurtleExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 3442516
num_examples: 17
download_size: 799325
dataset_size: 3442516
- config_name: TypeScriptExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 14176972339
num_examples: 2837126
download_size: 4433625232
dataset_size: 14176972339
- config_name: VueExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 2137815900
num_examples: 323672
download_size: 674476397
dataset_size: 2137815900
- config_name: VueNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: __index_level_0__
dtype: int64
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_TheStackV1
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 2137815643
num_examples: 323672
download_size: 676642096
dataset_size: 2137815643
- config_name: WebAssemblyExact
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: exact_dupe_GithubCode
dtype: bool
splits:
- name: train
num_bytes: 120184637
num_examples: 585
download_size: 39377515
dataset_size: 120184637
- config_name: WebAssemblyNear
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: language
dtype: string
- name: extension
dtype: string
- name: total_lines
dtype: int64
- name: avg_line_length
dtype: float64
- name: max_line_length
dtype: int64
- name: alphanum_fraction
dtype: float64
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
- name: repo_license
dtype: string
- name: repo_extraction_date
dtype: string
- name: sha
dtype: string
- name: exact_dupe_TheStackV2
dtype: bool
- name: exact_dupe_TheStack
dtype: bool
- name: exact_dupe_RedPajama
dtype: bool
- name: near_dups_stackv2
dtype: bool
- name: near_dups_stackv1
dtype: bool
splits:
- name: train
num_bytes: 120184495
num_examples: 585
download_size: 39587423
dataset_size: 120184495
configs:
- config_name: ANTLRExact
data_files:
- split: train
path: data/ANTLR_Exact/train-*
- config_name: AdaExact
data_files:
- split: train
path: data/Ada_Exact/train-*
- config_name: AdaNear
data_files:
- split: train
path: data/Ada_Near/train-*
- config_name: AgdaExact
data_files:
- split: train
path: data/Agda_Exact/train-*
- config_name: AgdaNear
data_files:
- split: train
path: data/Agda_Near/train-*
- config_name: AntlrNear
data_files:
- split: train
path: data/Antlr_Near/train-*
- config_name: ApexExact
data_files:
- split: train
path: data/Apex_Exact/train-*
- config_name: ApexNear
data_files:
- split: train
path: data/Apex_Near/train-*
- config_name: AssemblyExact
data_files:
- split: train
path: data/Assembly_Exact/train-*
- config_name: AssemblyNear
data_files:
- split: train
path: data/Assembly_Near/train-*
- config_name: C#Exact
data_files:
- split: train
path: data/C#_Exact/train-*
- config_name: C#Near
data_files:
- split: train
path: data/C#_Near/train-*
- config_name: CExact
data_files:
- split: train
path: data/C_Exact/train-*
- config_name: CNear
data_files:
- split: train
path: data/C_Near/train-*
- config_name: COBOLExact
data_files:
- split: train
path: data/COBOL_Exact/train-*
- config_name: CPP2Near
data_files:
- split: train
path: data/CPP2_Near/train-*
- config_name: CPPExact
data_files:
- split: train
path: data/CPP_Exact/train-*
- config_name: CPPNear
data_files:
- split: train
path: data/CPP_Near/train-*
- config_name: ClojureExact
data_files:
- split: train
path: data/Clojure_Exact/train-*
- config_name: ClojureNear
data_files:
- split: train
path: data/Clojure_Near/train-*
- config_name: CobolNear
data_files:
- split: train
path: data/Cobol_Near/train-*
- config_name: CommonLispExact
data_files:
- split: train
path: data/CommonLisp_Exact/train-*
- config_name: CommonLispNear
data_files:
- split: train
path: data/CommonLisp_Near/train-*
- config_name: CoqExact
data_files:
- split: train
path: data/Coq_Exact/train-*
- config_name: CoqNear
data_files:
- split: train
path: data/Coq_Near/train-*
- config_name: CrystalExact
data_files:
- split: train
path: data/Crystal_Exact/train-*
- config_name: CrystalNear
data_files:
- split: train
path: data/Crystal_Near/train-*
- config_name: CudaExact
data_files:
- split: train
path: data/Cuda_Exact/train-*
- config_name: CudaNear
data_files:
- split: train
path: data/Cuda_Near/train-*
- config_name: DExact
data_files:
- split: train
path: data/D_Exact/train-*
- config_name: DNear
data_files:
- split: train
path: data/D_Near/train-*
- config_name: DartExact
data_files:
- split: train
path: data/Dart_Exact/train-*
- config_name: DartNear
data_files:
- split: train
path: data/Dart_Near/train-*
- config_name: EJSExact
data_files:
- split: train
path: data/EJS_Exact/train-*
- config_name: EjsNear
data_files:
- split: train
path: data/Ejs_Near/train-*
- config_name: ElixirExact
data_files:
- split: train
path: data/Elixir_Exact/train-*
- config_name: ElixirNear
data_files:
- split: train
path: data/Elixir_Near/train-*
- config_name: ElmExact
data_files:
- split: train
path: data/Elm_Exact/train-*
- config_name: ElmNear
data_files:
- split: train
path: data/Elm_Near/train-*
- config_name: EmacsLispExact
data_files:
- split: train
path: data/EmacsLisp_Exact/train-*
- config_name: EmacsLispNear
data_files:
- split: train
path: data/EmacsLisp_Near/train-*
- config_name: ErlangExact
data_files:
- split: train
path: data/Erlang_Exact/train-*
- config_name: ErlangNear
data_files:
- split: train
path: data/Erlang_Near/train-*
- config_name: F#Exact
data_files:
- split: train
path: data/F#_Exact/train-*
- config_name: F#Near
data_files:
- split: train
path: data/F#_Near/train-*
- config_name: ForthExact
data_files:
- split: train
path: data/Forth_Exact/train-*
- config_name: ForthNear
data_files:
- split: train
path: data/Forth_Near/train-*
- config_name: FortranExact
data_files:
- split: train
path: data/Fortran_Exact/train-*
- config_name: FortranNear
data_files:
- split: train
path: data/Fortran_Near/train-*
- config_name: GoExact
data_files:
- split: train
path: data/Go_Exact/train-*
- config_name: GoNear
data_files:
- split: train
path: data/Go_Near/train-*
- config_name: GraphQLExact
data_files:
- split: train
path: data/GraphQL_Exact/train-*
- config_name: GraphQLNear
data_files:
- split: train
path: data/GraphQL_Near/train-*
- config_name: GroovyExact
data_files:
- split: train
path: data/Groovy_Exact/train-*
- config_name: GroovyNear
data_files:
- split: train
path: data/Groovy_Near/train-*
- config_name: HackExact
data_files:
- split: train
path: data/Hack_Exact/train-*
- config_name: HackNear
data_files:
- split: train
path: data/Hack_Near/train-*
- config_name: HaskellExact
data_files:
- split: train
path: data/Haskell_Exact/train-*
- config_name: HaskellNear
data_files:
- split: train
path: data/Haskell_Near/train-*
- config_name: HaskellNearT
data_files:
- split: train
path: data/Haskell_NearT/train-*
- config_name: HaskellTest
data_files:
- split: train
path: data/Haskell_Test/train-*
- config_name: HaskellTest2
data_files:
- split: train
path: data/Haskell_Test2/train-*
- config_name: JavaExact
data_files:
- split: train
path: data/Java_Exact/train-*
- config_name: JavaNear
data_files:
- split: train
path: data/Java_Near/train-*
- config_name: JavaNearF
data_files:
- split: train
path: data/Java_NearF/train-*
- config_name: JavaScriptExact
data_files:
- split: train
path: data/JavaScript_Exact/train-*
- config_name: JavaScriptNear
data_files:
- split: train
path: data/JavaScript_Near/train-*
- config_name: JuliaExact
data_files:
- split: train
path: data/Julia_Exact/train-*
- config_name: JuliaNear
data_files:
- split: train
path: data/Julia_Near/train-*
- config_name: JupyterNotebookExact
data_files:
- split: train
path: data/JupyterNotebook_Exact/train-*
- config_name: KotlinExact
data_files:
- split: train
path: data/Kotlin_Exact/train-*
- config_name: KotlinNear
data_files:
- split: train
path: data/Kotlin_Near/train-*
- config_name: LessExact
data_files:
- split: train
path: data/Less_Exact/train-*
- config_name: LessNear
data_files:
- split: train
path: data/Less_Near/train-*
- config_name: LuaExact
data_files:
- split: train
path: data/Lua_Exact/train-*
- config_name: LuaNear
data_files:
- split: train
path: data/Lua_Near/train-*
- config_name: MathematicaExact
data_files:
- split: train
path: data/Mathematica_Exact/train-*
- config_name: MathematicaNear
data_files:
- split: train
path: data/Mathematica_Near/train-*
- config_name: MatlabExact
data_files:
- split: train
path: data/Matlab_Exact/train-*
- config_name: MatlabNear
data_files:
- split: train
path: data/Matlab_Near/train-*
- config_name: NetLogoExact
data_files:
- split: train
path: data/NetLogo_Exact/train-*
- config_name: NetLogoNear
data_files:
- split: train
path: data/NetLogo_Near/train-*
- config_name: NewLispExact
data_files:
- split: train
path: data/NewLisp_Exact/train-*
- config_name: NewLispNear
data_files:
- split: train
path: data/NewLisp_Near/train-*
- config_name: NixExact
data_files:
- split: train
path: data/Nix_Exact/train-*
- config_name: NixNear
data_files:
- split: train
path: data/Nix_Near/train-*
- config_name: OCamlExact
data_files:
- split: train
path: data/OCaml_Exact/train-*
- config_name: OCamlNear
data_files:
- split: train
path: data/OCaml_Near/train-*
- config_name: Objective-CExact
data_files:
- split: train
path: data/Objective-C_Exact/train-*
- config_name: Objective-CNear
data_files:
- split: train
path: data/Objective-C_Near/train-*
- config_name: PHPExact
data_files:
- split: train
path: data/PHP_Exact/train-*
- config_name: PHPNear
data_files:
- split: train
path: data/PHP_Near/train-*
- config_name: PascalExact
data_files:
- split: train
path: data/Pascal_Exact/train-*
- config_name: PascalNear
data_files:
- split: train
path: data/Pascal_Near/train-*
- config_name: PerlExact
data_files:
- split: train
path: data/Perl_Exact/train-*
- config_name: PerlNear
data_files:
- split: train
path: data/Perl_Near/train-*
- config_name: ProcessingExact
data_files:
- split: train
path: data/Processing_Exact/train-*
- config_name: ProcessingNear
data_files:
- split: train
path: data/Processing_Near/train-*
- config_name: PrologExact
data_files:
- split: train
path: data/Prolog_Exact/train-*
- config_name: PrologNear
data_files:
- split: train
path: data/Prolog_Near/train-*
- config_name: PythonExact
data_files:
- split: train
path: data/Python_Exact/train-*
- config_name: PythonNear
data_files:
- split: train
path: data/Python_Near/train-*
- config_name: PythonParrot
data_files:
- split: train
path: data/Python_Parrot/train-*
- config_name: PythonTest
data_files:
- split: train
path: data/Python_Test/train-*
- config_name: RExact
data_files:
- split: train
path: data/R_Exact/train-*
- config_name: RNear
data_files:
- split: train
path: data/R_Near/train-*
- config_name: RakuExact
data_files:
- split: train
path: data/Raku_Exact/train-*
- config_name: RakuNear
data_files:
- split: train
path: data/Raku_Near/train-*
- config_name: RubyExact
data_files:
- split: train
path: data/Ruby_Exact/train-*
- config_name: RubyNear
data_files:
- split: train
path: data/Ruby_Near/train-*
- config_name: RustExact
data_files:
- split: train
path: data/Rust_Exact/train-*
- config_name: RustNear
data_files:
- split: train
path: data/Rust_Near/train-*
- config_name: SQLExact
data_files:
- split: train
path: data/SQL_Exact/train-*
- config_name: SQLNear
data_files:
- split: train
path: data/SQL_Near/train-*
- config_name: ScalaExact
data_files:
- split: train
path: data/Scala_Exact/train-*
- config_name: ScalaNear
data_files:
- split: train
path: data/Scala_Near/train-*
- config_name: SchemeExact
data_files:
- split: train
path: data/Scheme_Exact/train-*
- config_name: SchemeNear
data_files:
- split: train
path: data/Scheme_Near/train-*
- config_name: ScilabExact
data_files:
- split: train
path: data/Scilab_Exact/train-*
- config_name: ScilabNear
data_files:
- split: train
path: data/Scilab_Near/train-*
- config_name: StarlarkExact
data_files:
- split: train
path: data/Starlark_Exact/train-*
- config_name: StarlarkNear
data_files:
- split: train
path: data/Starlark_Near/train-*
- config_name: SwiftExact
data_files:
- split: train
path: data/Swift_Exact/train-*
- config_name: SwiftNear
data_files:
- split: train
path: data/Swift_Near/train-*
- config_name: TurtleExact
data_files:
- split: train
path: data/Turtle_Exact/train-*
- config_name: TypeScriptExact
data_files:
- split: train
path: data/TypeScript_Exact/train-*
- config_name: VueExact
data_files:
- split: train
path: data/Vue_Exact/train-*
- config_name: VueNear
data_files:
- split: train
path: data/Vue_Near/train-*
- config_name: WebAssemblyExact
data_files:
- split: train
path: data/WebAssembly_Exact/train-*
- config_name: WebAssemblyNear
data_files:
- split: train
path: data/WebAssembly_Near/train-*
---
|
CropNet/CropNet | CropNet | "2024-11-03T21:59:02Z" | 26,016 | 16 | [
"language:en",
"license:cc-by-4.0",
"size_categories:n>1T",
"doi:10.57967/hf/3514",
"region:us",
"agriculture",
"climate"
] | null | "2023-10-08T17:59:29Z" | ---
license: cc-by-4.0
language:
- en
tags:
- agriculture
- climate
size_categories:
- n>1T
---
# An Open and Large-Scale Dataset for Multi-Modal Climate Change-aware Crop Yield Predictions

The CropNet dataset is an open, large-scale, and deep learning-ready dataset specifically targeting climate change-aware crop yield predictions for the contiguous United States (U.S.) at the county level. It is composed of three modalities of data, i.e., Sentinel-2 Imagery, the WRF-HRRR Computed Dataset, and the USDA Crop Dataset, aligned in both the spatial and temporal domains, for over 2200 U.S. counties spanning 6 years (2017-2022). It is expected to help researchers develop deep learning models that predict crop yields at the county level in a timely and precise manner, by accounting for the effects of both short-term growing-season weather variations and long-term climate change on crop yields. Although our initial goal in crafting the CropNet dataset is precise crop yield prediction, we believe its applicability is broad and can benefit the deep learning, agriculture, and meteorology communities in exploring further interesting, critical, and climate change-related applications, using one or more modalities of data.
## Citation
If you use our dataset, please cite [our paper](https://dl.acm.org/doi/10.1145/3637528.3671536):
```
@inproceedings{fudong:kdd24:crop_net,
author = {Fudong Lin and Kaleb Guillot and Summer Crawford and Yihe Zhang and Xu Yuan and Nian{-}Feng Tzeng},
title = {An Open and Large-Scale Dataset for Multi-Modal Climate Change-aware Crop Yield Predictions},
booktitle = {Proceedings of the 30th {ACM} {SIGKDD} Conference on Knowledge Discovery and Data Mining (KDD)},
pages = {5375--5386},
year = {2024}
}
```
[Our MMST-ViT model](https://openaccess.thecvf.com/content/ICCV2023/papers/Lin_MMST-ViT_Climate_Change-aware_Crop_Yield_Prediction_via_Multi-Modal_Spatial-Temporal_Vision_ICCV_2023_paper.pdf) demonstrates how to develop deep-learning models for climate change-aware crop yield predictions. If you use MMST-ViT in your research, please cite our paper:
```
@inproceedings{fudong:iccv23:mmst_vit,
title={MMST-ViT: Climate Change-aware Crop Yield Prediction via Multi-Modal Spatial-Temporal Vision Transformer},
author={Lin, Fudong and Crawford, Summer and Guillot, Kaleb and Zhang, Yihe and Chen, Yan and Yuan, Xu and others},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={5774--5784},
year={2023}
}
```
## Contributions
#### The `CropNet` dataset
- The first *terabyte-sized*, publicly available, and multi-modal dataset for climate change-aware crop yield predictions
#### The `CropNet` package
- A *deep learning-ready* Python package for facilitating researchers in downloading the CropNet data on the fly over the time and region of interest, and developing deep neural networks (DNNs) for climate change-aware crop yield predictions
- The `CropNet` package is available at [Python Package Index (PyPI)](https://pypi.org/project/cropnet/)
## Tutorials
The tutorials for the CropNet dataset are available on Google Colab, with their links listed below:
- [Sentinel-2 Imagery Tutorial](https://colab.research.google.com/drive/1Tj69JdhO7aX8ks-4UWYvHrFm9GB1PNCd?usp=sharing)
- [WRF-HRRR Computed Dataset Tutorial](https://colab.research.google.com/drive/14l-JSNHtelawNu3kVG_ukTd2WUJpaZEc?usp=sharing)
- [USDA Crop Dataset Tutorial](https://colab.research.google.com/drive/1U-vFoRyLSb2l2Q67LeGbkUKTeRaHDkkK?usp=sharing)
## The CropNet Dataset
Our CropNet dataset is composed of three modalities of data, i.e., Sentinel-2 Imagery, the WRF-HRRR Computed Dataset, and the USDA Crop Dataset, spanning from 2017 to 2022 (i.e., 6 years) across 2291 U.S. counties, with its geographic distribution illustrated below. We also include the number of counties corresponding to each crop type in the USDA Crop Dataset (see the rightmost bar chart in the figure) since crop planting is highly geography-dependent.

### Sentinel-2 Imagery
The Sentinel-2 Imagery, obtained from the Sentinel-2 mission, provides high-resolution satellite images for monitoring crop growth on the ground. It contains two types of 224x224 RGB satellite images, agriculture imagery (AG) and normalized difference vegetation index (NDVI), both with a spatial resolution of 9x9 km and a revisit frequency of 14 days. Examples of AG and NDVI images are shown below, followed by a short sketch of how NDVI is typically computed.
- **Agriculture Imagery (AG)**

- **Normalized Difference Vegetation Index (NDVI)**

### WRF-HRRR Computed Dataset
The WRF-HRRR Computed Dataset, sourced from the WRF-HRRR model, contains daily and monthly meteorological parameters: the former capture the direct effects of short-term growing-season weather variations on crop growth, while the latter capture the indirect impacts of long-term climate change on crop yields. It provides 9 meteorological parameters gridded at 9 km at one-day (and one-month) intervals. The figure below shows the temperature in spring, summer, fall, and winter, respectively; a minimal aggregation sketch follows it.

### USDA Crop Dataset
The USDA Crop Dataset, collected from the USDA Quick Statistics website, offers valuable information, such as production, yield, etc., for crops grown in each available county. It provides crop information for four types of crops, i.e., corn, cotton, soybeans, and winter wheat, on a county-level basis, with a temporal resolution of one year. The figure illustrates the 2022 Corn Yield across the United States.

### The CropNet Package
Beyond the contribution of our CropNet dataset, we also release the CropNet package on the Python Package Index (PyPI) to help researchers download the CropNet data for the time and region of interest and flexibly build their deep learning models for accurate crop yield predictions. In particular, the CropNet package includes three types of APIs, listed as follows:
- **DataDownloader**: This API allows users to download the CropNet data over the time/region of interest on the fly.
- **DataRetriever**: With this API, users can conveniently obtain the CropNet data stored on the local machine (e.g., if you have downloaded our curated CropNet from Google Drive) over the time/region of interest.
- **DataLoader**: This API is designed to facilitate researchers in developing their DNNs for accurate crop yield predictions. Specifically, the code in this API (1) combines all three modalities of data to create $(\mathbf{x}, \mathbf{y_{s}}, \mathbf{y_{l}}, \mathbf{z})$ tuples, with $\mathbf{x}, \mathbf{y_{s}}, \mathbf{y_{l}}, \text{and}~ \mathbf{z}$, respectively representing satellite images, short-term daily weather parameters, long-term monthly meteorological parameters, and ground-truth crop yield (or production) information, and then (2) exposes those tuples via a `Dataset` object after applying appropriate data pre-processing techniques.
### Installation
Researchers and practitioners can install the latest version of CropNet with the following commands:
```bash
# Create and activate a conda environment
conda create -n cropnet_api python=3.10
conda activate cropnet_api
# Install the latest version of CropNet
pip install cropnet
# Solve the ecCodes library dependency issue
pip install ecmwflibs
```
### CropNet API Examples
- **Example 1: A DataDownloader Example for Downloading the Up-to-date CropNet Data**
Given the time and region (i.e., the FIPS codes for two U.S. counties) of interest, the following code presents how to utilize the **DataDownloader** to download the up-to-date CropNet data:
```python
from cropnet.data_downloader import DataDownloader
# Use the "target_dir" to specify where the data should be downloaded to
downloader = DataDownloader(target_dir="./data")
# Download 2022 USDA Soybean data
# Note that most of the 2023 USDA data are not yet available
downloader.download_USDA("Soybean", fips_codes=["10003", "22007"], years=["2022"])
# Download the 2023 (the 1st and 2nd quarters) Sentinel-2 Imagery
downloader.download_Sentinel2(fips_codes=["10003", "22007"], years=["2023"], image_type="AG")
downloader.download_Sentinel2(fips_codes=["10003", "22007"], years=["2023"], image_type="NDVI")
# Download the 2023 (January to July) WRF-HRRR data
downloader.download_HRRR(fips_codes=["10003", "22007"], years=["2023"])
```
- **Example 2: A DataRetriever Example for Obtaining Our Curated CropNet Data**
Given the time and region of interest, the following code shows how to use the **DataRetriever** to obtain the CropNet data stored on the local machine in a user-friendly format:
```python
# Use the "base_fir" to specify where the CropNet data is stored
retriever = DataRetriever(base_dir="/mnt/data/CropNet")
# Retrieve the 2022 USDA Soybean data
usda_data = retriever.retrieve_USDA(crop_type="Soybean", fips_codes=["10003", "22007"], years=["2022"])
# Retrieve the 2022 Sentinel-2 Imagery data
sentinel2_data = retriever.retrieve_Sentinel2(fips_codes=["10003", "22007"], years=["2022"], image_type="AG")
sentinel2_data = retriever.retrieve_Sentinel2(fips_codes=["10003", "22007"], years=["2022"], image_type="NDVI")
# Retrieve the 2022 WRF-HRRR data
hrrr_data = retriever.retrieve_HRRR(fips_codes=["10003","22007"], years=["2022"])
```
- **Example 3: A PyTorch Example for Using the DataLoader API for Training DNNs**
The following code presents a PyTorch example of training a deep learning model (i.e., MMST-ViT) for climate change-aware crop yield predictions, by utilizing the DataLoader APIs:
```python
import torch
from torch.utils.data import DataLoader
from models_mmst_vit import MMST_ViT
from cropnet.dataset.hrrr_computed_dataset import HRRRComputedDataset
from cropnet.dataset.sentinel2_imagery import Sentinel2Imagery
from cropnet.dataset.usda_crop_dataset import USDACropDataset
# The base directory for the CropNet dataset
base_dir = "/mnt/data/CropNet"
# The JSON configuration file
config_file = "data/soybeans_train.json"
# The dataloaders for each modality of data
sentinel2_loader = DataLoader(Sentinel2Imagery(base_dir, config_file), batch_size=1)
hrrr_loader = DataLoader(HRRRComputedDataset(base_dir, config_file), batch_size=1)
usda_loader = DataLoader(USDACropDataset(base_dir, config_file), batch_size=1)
# The model, the optimizer, and the loss function
model = MMST_ViT()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
criterion = torch.nn.MSELoss()
# Training the model for one epoch
for s, h, u in zip(sentinel2_loader, hrrr_loader, usda_loader):
# x: satellite images
# ys (or yl): short-term daily (or long-term monthly) weather parameters
# z: ground-truth crop yield (or production) information
    x, ys, yl, z = s[0], h[0], h[1], u[0]
optimizer.zero_grad()
z_hat = model(x, ys, yl)
loss = criterion(z, z_hat)
loss.backward()
optimizer.step()
```
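For completeness, a matching evaluation pass is sketched below. It is an illustrative sketch rather than part of the official API, and it simply reuses the loaders, model, and loss defined above with gradients disabled:
```python
# Evaluate the model for one pass over the data (reusing the objects above)
model.eval()
total_loss, num_batches = 0.0, 0
with torch.no_grad():
    for s, h, u in zip(sentinel2_loader, hrrr_loader, usda_loader):
        x, ys, yl, z = s[0], h[0], h[1], u[0]
        z_hat = model(x, ys, yl)
        total_loss += criterion(z, z_hat).item()
        num_batches += 1
print(f"Mean MSE over {num_batches} batches: {total_loss / max(num_batches, 1):.4f}")
```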
## License
CropNet has a [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/) license.
## Dataset Terms of Use
This dataset is available for research purposes only. By downloading, you agree to these terms. We are aware that unauthorized copies of our dataset have been republished on HuggingFace. Please note that any republication or distribution of this dataset without permission is prohibited and constitutes copyright infringement. |
OpenGVLab/MVBench | OpenGVLab | "2024-10-18T02:25:19Z" | 25,990 | 29 | [
"task_categories:visual-question-answering",
"task_categories:video-classification",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:image",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2311.17005",
"region:us"
] | [
"visual-question-answering",
"video-classification"
] | "2023-11-28T12:03:30Z" | ---
license: mit
extra_gated_prompt: >-
You agree to not use the dataset to conduct experiments that cause harm to
human subjects. Please note that the data in this dataset may be subject to
other agreements. Before using the data, be sure to read the relevant
agreements carefully to ensure compliant use. Video copyrights belong to the
original video creators or platforms and are for academic research use only.
task_categories:
- visual-question-answering
- video-classification
extra_gated_fields:
Name: text
Company/Organization: text
Country: text
E-Mail: text
modalities:
- Video
- Text
configs:
- config_name: action_sequence
data_files: json/action_sequence.json
- config_name: moving_count
data_files: json/moving_count.json
- config_name: action_prediction
data_files: json/action_prediction.json
- config_name: episodic_reasoning
data_files: json/episodic_reasoning.json
- config_name: action_antonym
data_files: json/action_antonym.json
- config_name: action_count
data_files: json/action_count.json
- config_name: scene_transition
data_files: json/scene_transition.json
- config_name: object_shuffle
data_files: json/object_shuffle.json
- config_name: object_existence
data_files: json/object_existence.json
- config_name: fine_grained_pose
data_files: json/fine_grained_pose.json
- config_name: unexpected_action
data_files: json/unexpected_action.json
- config_name: moving_direction
data_files: json/moving_direction.json
- config_name: state_change
data_files: json/state_change.json
- config_name: object_interaction
data_files: json/object_interaction.json
- config_name: character_order
data_files: json/character_order.json
- config_name: action_localization
data_files: json/action_localization.json
- config_name: counterfactual_inference
data_files: json/counterfactual_inference.json
- config_name: fine_grained_action
data_files: json/fine_grained_action.json
- config_name: moving_attribute
data_files: json/moving_attribute.json
- config_name: egocentric_navigation
data_files: json/egocentric_navigation.json
language:
- en
size_categories:
- 1K<n<10K
---
# MVBench
## Dataset Description
- **Repository:** [MVBench](https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat2/mvbench.ipynb)
- **Paper:** [2311.17005](https://arxiv.org/abs/2311.17005)
- **Point of Contact:** [kunchang li](mailto:[email protected])
## <span style="color: red;">Important Update</span>
[18/10/2024] Due to NTU RGB+D License, 320 videos from NTU RGB+D need to be downloaded manually. Please visit [ROSE Lab](https://rose1.ntu.edu.sg/dataset/actionRecognition/) to access the data. We also provide a [list of the 320 videos](https://huggingface.co/datasets/OpenGVLab/MVBench/blob/main/video/MVBench_videos_ntu.txt) used in MVBench for your reference.

We introduce a novel static-to-dynamic method for defining temporal-related tasks. By converting static tasks into dynamic ones, we facilitate systematic generation of video tasks necessitating a wide range of temporal abilities, from perception to cognition. Guided by task definitions, we then **automatically transform public video annotations into multiple-choice QA** for task evaluation. This unique paradigm enables efficient creation of MVBench with minimal manual intervention while ensuring evaluation fairness through ground-truth video annotations and avoiding biased LLM scoring. The **20** temporal task examples are as follows.

## Evaluation
An evaluation example is provided in [mvbench.ipynb](https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat2/mvbench.ipynb). Please follow the pipeline to prepare the evaluation code for various MLLMs.
- **Preprocess**: We preserve the raw video (high resolution, long duration, etc.) along with corresponding annotations (start, end, subtitles, etc.) for future exploration; hence, the decoding of some raw videos like Perception Test may be slow.
- **Prompt**: We explore effective system prompts to encourage better temporal reasoning in MLLM, as well as efficient answer prompts for option extraction.
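Each of the 20 tasks is also exposed as its own config (see the YAML header above), so the QA items can be inspected directly with the `datasets` library. A minimal sketch, assuming access to the gated files has been granted; field names vary per task JSON, so they are printed rather than assumed:
```python
from datasets import load_dataset

# Load one task config, e.g. action_sequence (requires `huggingface-cli login`
# after the access request has been approved)
ds = load_dataset("OpenGVLab/MVBench", "action_sequence", split="train")

example = ds[0]
print(sorted(example.keys()))
print(example)
```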
## Leaderboard
While an [Online leaderboard]() is under construction, the current standings are as follows:
 |
deepghs/wd14_tagger_inversion | deepghs | "2024-08-11T17:08:41Z" | 25,585 | 2 | [
"task_categories:feature-extraction",
"language:en",
"license:apache-2.0",
"region:us",
"art"
] | [
"feature-extraction"
] | "2024-05-14T11:03:37Z" | ---
license: apache-2.0
task_categories:
- feature-extraction
language:
- en
tags:
- art
---
## What's this
This dataset is for learning to invert a tagger's prediction results back into embeddings. With such a model, we can obtain an embedding for a chosen tag combination and use it to search for matching images.
## How is this dataset made
This dataset is generated from anime images in Danbooru (the webp ones, based on [KBlueLeaf/danbooru2023-webp-4Mpixel](https://huggingface.co/datasets/KBlueLeaf/danbooru2023-webp-4Mpixel)) by extracting the prediction results and the embeddings of each image.
## Model's goal
Train a model to invert prediction results back into embeddings. Maybe an MLP can be used? I don't know, just try; a minimal sketch is given below.
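A minimal PyTorch sketch of such an MLP follows. The tag-vocabulary size and embedding dimension below are placeholders (they depend on the WD14 tagger variant), so check the parquet files in this repo for the actual shapes.
```python
import torch
import torch.nn as nn

NUM_TAGS = 10861   # placeholder: size of the tagger's prediction vector
EMBED_DIM = 1024   # placeholder: size of the tagger's embedding

# A simple MLP that maps tag probabilities back to an embedding
model = nn.Sequential(
    nn.Linear(NUM_TAGS, 2048),
    nn.GELU(),
    nn.Linear(2048, 2048),
    nn.GELU(),
    nn.Linear(2048, EMBED_DIM),
)

def loss_fn(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Cosine loss, so retrieval with a predicted embedding behaves like
    # retrieval with the true one
    return 1.0 - nn.functional.cosine_similarity(pred, target, dim=-1).mean()
```
The hidden sizes and loss above are arbitrary starting points rather than tuned choices. |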
AI-MO/NuminaMath-TIR | AI-MO | "2024-11-25T05:32:53Z" | 25,354 | 102 | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"math",
"aimo"
] | [
"text-generation"
] | "2024-07-16T07:44:24Z" | ---
language:
- en
license: apache-2.0
task_categories:
- text-generation
pretty_name: NuminaMath TIR
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 327147067
num_examples: 72441
- name: test
num_bytes: 461331
num_examples: 99
download_size: 147557990
dataset_size: 327608398
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
tags:
- math
- aimo
---
# Dataset Card for NuminaMath TIR
## Dataset Description
- **Homepage:** https://projectnumina.ai
- **Repository:** https://github.com/project-numina/aimo-progress-prize
- **Paper:** https://github.com/project-numina/aimo-progress-prize/blob/main/report/numina_dataset.pdf
- **Leaderboard:**
- **Point of Contact:** [Jia Li]([email protected])
### Dataset Summary
Tool-integrated reasoning (TIR) plays a crucial role in this competition. However, collecting and annotating such data is both costly and time-consuming. To address this, we selected approximately 70k problems from the NuminaMath-CoT dataset, focusing on those with numerical outputs, most of which are integers. We then utilized a pipeline leveraging GPT-4 to generate TORA-like reasoning paths, executing the code and producing results until the solution was complete. We filtered out solutions where the final answer did not match the reference and repeated this process three times to ensure accuracy and consistency. This iterative approach allowed us to generate high-quality TORA data efficiently.
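As a quick check of the splits and fields declared in the header above, the dataset can be loaded with the `datasets` library; a minimal sketch:
```python
from datasets import load_dataset

# Fields per example: problem, solution, messages (see dataset_info above)
ds = load_dataset("AI-MO/NuminaMath-TIR")
print(len(ds["train"]), len(ds["test"]))  # 72441 and 99 examples

sample = ds["train"][0]
print(sample["problem"])
print(sample["solution"])  # tool-integrated (TORA-like) reasoning trace
```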
### Licensing Information
The dataset is available under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@misc{numina_math_datasets,
author = {Jia LI, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, Shengyi Costa Huang, Kashif Rasul, Longhui Yu, Albert Jiang, Ziju Shen, Zihan Qin, Bin Dong, Li Zhou, Yann Fleureau, Guillaume Lample, and Stanislas Polu},
title = {NuminaMath TIR},
year = {2024},
publisher = {Numina},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/AI-MO/NuminaMath-TIR}}
}
``` |
picollect/danbooru_1024 | picollect | "2024-11-15T03:00:46Z" | 25,308 | 1 | [
"language:en",
"license:other",
"size_categories:1M<n<10M",
"region:us",
"danbooru",
"anime",
"bucket"
] | null | "2024-10-31T07:02:03Z" | ---
license: other
language:
- en
tags:
- danbooru
- anime
- bucket
pretty_name: Danbooru 2024 Dataset
size_categories:
- 1M<n<10M
---
# Danbooru 1024 Dataset
# Danbooru 1024 数据集
A curated collection of high-quality images from the Danbooru website, processed and resized to specific bucket dimensions. This dataset is for research and learning purposes only.
这是一个精选的高质量 Danbooru 网站图像集合,经过处理并调整为特定的桶尺寸。该数据集仅用于研究和学习目的。
## Dataset Description
## 数据集描述
This dataset contains processed image resources from the Danbooru website, starting from ID 3,000,000. All images are resized to specific bucket dimensions and meet minimum resolution requirements.
本数据集包含来自 Danbooru 网站的处理后图像资源,从 ID 3,000,000 开始。所有图像都调整为特定的桶尺寸,并满足最小分辨率要求。
### Data Organization
### 数据组织
- Images are packaged into compressed files, 1000 images per archive
- File naming format: `{start_id}.tar`
- Example: `3000.tar` contains images with IDs from 3,000,000 to 3,000,999
- 图像打包为压缩文件,每个存档包含 1000 张图像
- 文件命名格式:`{start_id}.tar`
- 示例:`3000.tar` 包含 ID 从 3,000,000 到 3,000,999 的图像
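Given this naming scheme, the archive holding a particular image ID can be computed directly; a small sketch (assuming the `{start_id}.tar` convention above, with IDs grouped in blocks of 1000):
```python
def archive_for_id(image_id: int) -> str:
    """Return the tar archive containing the given Danbooru image ID."""
    if image_id < 3_000_000:
        raise ValueError("This dataset only covers IDs from 3,000,000 upward")
    return f"{image_id // 1000}.tar"

print(archive_for_id(3_000_123))  # -> "3000.tar"
```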
### Technical Details
### 技术细节
- Image Format: WebP (lossless) for >1MB files, JPG (quality 92) for others
- Resolution: Minimum area of 1024²×0.95 pixels
- ID Range: >3,000,000
- Bucket Dimensions (width, height):
- (512, 2048)
- (576, 1792)
- (640, 1664)
- (704, 1472)
- (768, 1344)
- (832, 1280)
- (896, 1152)
- (960, 1088)
- (1024, 1024)
- (1088, 960)
- (1152, 896)
- (1280, 832)
- (1344, 768)
- (1472, 704)
- (1664, 640)
- (1792, 576)
- (2048, 512)
- 图像格式:>1MB 文件使用无损 WebP,其他使用质量 92 的 JPG
- 分辨率:最小面积为 1024²×0.95 像素
- ID 范围:>3,000,000
- 桶尺寸(宽,高):
- (512, 2048)
- (576, 1792)
- (640, 1664)
- (704, 1472)
- (768, 1344)
- (832, 1280)
- (896, 1152)
- (960, 1088)
- (1024, 1024)
- (1088, 960)
- (1152, 896)
- (1280, 832)
- (1344, 768)
- (1472, 704)
- (1664, 640)
- (1792, 576)
- (2048, 512)
### Image Processing Details
### 图像处理细节
- All images are resized to the nearest bucket dimensions
- Images with area smaller than 1024²×0.95 pixels are excluded
- Files >1MB are stored as lossless WebP for optimal quality
- Other files are stored as JPG (quality 92) for better color reproduction compared to WebP quality 90
- 所有图像都调整为最接近的桶尺寸
- 面积小于 1024²×0.95 像素的图像被排除
- 大于 1MB 的文件以无损 WebP 格式存储以保证最佳质量
- 其他文件以 JPG 格式(质量 92)存储,相比 WebP 质量 90 具有更好的色彩还原度
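For reference, here is a small sketch of how an image could be mapped to one of the buckets listed above. It assumes that "nearest bucket" means the bucket whose aspect ratio is closest to the source image, which is a common convention but not stated explicitly in this card:
```python
BUCKETS = [
    (512, 2048), (576, 1792), (640, 1664), (704, 1472), (768, 1344),
    (832, 1280), (896, 1152), (960, 1088), (1024, 1024), (1088, 960),
    (1152, 896), (1280, 832), (1344, 768), (1472, 704), (1664, 640),
    (1792, 576), (2048, 512),
]
MIN_AREA = 1024 * 1024 * 0.95  # images below this area are excluded

def pick_bucket(width: int, height: int):
    if width * height < MIN_AREA:
        return None  # too small for the dataset
    ratio = width / height
    return min(BUCKETS, key=lambda wh: abs(wh[0] / wh[1] - ratio))

print(pick_bucket(1920, 1080))  # -> (1344, 768)
```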
## Usage Instructions
## 使用说明
1. Images within each archive are named by their IDs
2. Metadata can be queried from Danbooru database using corresponding IDs
1. 存档中的图像以其 ID 命名
2. 可使用相应的 ID 从 Danbooru 数据库查询元数据
## License
## 许可证
This dataset is released under the following terms:
本数据集在以下条款下发布:
1. Academic and Research Use
学术和研究使用
- This dataset may only be used for academic research, learning, and non-commercial purposes
- 本数据集仅可用于学术研究、学习和非商业目的
2. Restrictions
限制条款
- Commercial use is strictly prohibited
- Redistribution or resale of the dataset is not permitted
- Any derivative works must be shared under the same terms
- 严格禁止商业使用
- 不允许重新分发或转售数据集
- 任何衍生作品必须在相同条款下共享
3. Attribution
署名要求
- Users must cite this dataset when used in research or publications
- Any derivative works must acknowledge the original source
- 在研究或出版物中使用时必须引用本数据集
- 任何衍生作品必须注明原始来源
4. Disclaimer
免责声明
- The dataset is provided "as is" without any warranty
- The creators are not liable for any damages or losses arising from its use
- Users are solely responsible for ensuring compliance with local laws and regulations
- 数据集按"原样"提供,不提供任何保证
- 创建者不对使用过程中产生的任何损害或损失负责
- 用户需自行负责确保符合当地法律法规
5. Termination
终止条款
- This license automatically terminates if you violate any of these terms
- Upon termination, you must cease all use of the dataset
- 如果违反任何这些条款,本许可证将自动终止
- 终止后,您必须停止使用本数据集
By using this dataset, you agree to be bound by these terms.
使用本数据集即表示您同意受这些条款的约束。
## Important Notes
## 重要提示
- Ensure legal compliance when using the dataset
- Review relevant data usage policies and guidelines before use
- Consult legal professionals if you have questions about usage rights
- 使用数据集时确保遵守法律
- 使用前请查看相关数据使用政策和指南
- 如对使用权有疑问,请咨询法律专业人士
### Contributions
### 贡献
Thanks to the Danbooru community for maintaining the source database that made this dataset possible.
感谢 Danbooru 社区维护使本数据集成为可能的源数据库。
---
**Notice:** Users must strictly comply with local laws and regulations when using this dataset. Users bear full responsibility for any issues arising from improper use.
**注意:** 用户在使用本数据集时必须严格遵守当地法律法规。用户对因不当使用而产生的任何问题承担全部责任。 |
Avelina/smollm-corpus | Avelina | "2025-01-11T16:41:28Z" | 25,079 | 5 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:100M<n<1B",
"region:us"
] | [
"text-generation"
] | "2025-01-11T01:39:39Z" | ---
license: odc-by
dataset_info:
- config_name: default
features:
- name: text
dtype: string
configs:
- config_name: default
data_files:
- split: train
path: data*/train-*
task_categories:
- text-generation
language:
- en
size_categories:
- 100M<n<1B
---
# SmolLM-Corpus: Now shuffled and sharded!
This is a version of the SmolLM-Corpus where the 3 subsets have been interleaved, shuffled and sharded as 23698 `jsonl.zst` files for easy streaming!
The dataset is comprised of the `cosmopedia-v2` and `fineweb-edu-dedup` subsets from the original [SmolLM-Corpus repo](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus), with the `python-edu` subset being pulled from my [python-edu repo](https://huggingface.co/datasets/Avelina/python-edu).
## Dataset Structure
The dataset is split into 24 subdirectories, with the first 23 containing 1000 shards and the 24th containing the final 698. The repository is structured as follows:
```
data00/
├── train-00000-of-23698.jsonl.zst
├── ...
└── train-00999-of-23698.jsonl.zst
data01/
├── train-01000-of-23698.jsonl.zst
├── ...
└── train-01999-of-23698.jsonl.zst
...
data22/
├── train-22000-of-23698.jsonl.zst
├── ...
└── train-22999-of-23698.jsonl.zst
data23/
├── train-23000-of-23698.jsonl.zst
├── ...
└── train-23697-of-23698.jsonl.zst
```
In general, you can obtain the exact download URL for all shards using the following python function:
```py
def get_url_from_shard( index: int ) -> str:
if index >= 23_698:
raise ValueError( f'Shard index must be less than 23,698 but received {index}' )
group = index // 1000
return f'https://huggingface.co/datasets/Avelina/smollm-corpus/resolve/main/data{group:02d}/train-{index:05d}-of-23698.jsonl.zst'
```
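For example, shard 1234 resolves to a file in the `data01` subdirectory:
```py
print( get_url_from_shard( 1234 ) )
# https://huggingface.co/datasets/Avelina/smollm-corpus/resolve/main/data01/train-01234-of-23698.jsonl.zst
```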
## Generation Code
Here is the code which was used to generate the shuffled shards. Note the use of non-contiguous interleaving in an attempt to pull documents uniformly from across the entire subsets, loosely decoupling shard index from original document position.
Please make sure you `pip install zstandard`!!!
```py
import tqdm
import datasets
from datasets import load_dataset
# Output directory and file format. Note that the file extension enforces zst compression is used.
OUTPUT_FMT = '/YOUR/FILE/PATH/HERE/data/train-{index:05d}-of-{num_shards:05d}.jsonl.zst'
# Total number of shards giving approximately 10,000 documents per shard
OUTPUT_NUM_SHARDS = 23698
# Grab the three datasets
ds_python = load_dataset( 'Avelina/python-edu' )
ds_cosmo = load_dataset( 'HuggingFaceTB/smollm-corpus', 'cosmopedia-v2' )
ds_edu = load_dataset( 'HuggingFaceTB/smollm-corpus', 'fineweb-edu-dedup' )
# Retain only the text columns and the train splits
ds_python = ds_python.select_columns( 'text' )[ 'train' ]
ds_cosmo = ds_cosmo.select_columns( 'text' )[ 'train' ]
ds_edu = ds_edu.select_columns( 'text' )[ 'train' ]
# Iterate over all shards with a nice progbar
for index in tqdm.tqdm( range( OUTPUT_NUM_SHARDS ) ):
# Get non-contiguous in-memory sub-shards for the three datasets
curr_python = ds_python.shard( num_shards=OUTPUT_NUM_SHARDS, index=index, contiguous=False, keep_in_memory=True )
curr_cosmo = ds_cosmo.shard( num_shards=OUTPUT_NUM_SHARDS, index=index, contiguous=False, keep_in_memory=True )
curr_edu = ds_edu.shard( num_shards=OUTPUT_NUM_SHARDS, index=index, contiguous=False, keep_in_memory=True )
# Concatenate the sub-shards
curr_shard = datasets.concatenate_datasets( [ curr_python, curr_cosmo, curr_edu ] )
# Deterministically shuffle using the current shard index for reproducibility
curr_shard = curr_shard.shuffle( seed=index, keep_in_memory=True )
# Dump the shards to .jsonl.zst
curr_shard.to_json( OUTPUT_FMT.format( index=index, num_shards=OUTPUT_NUM_SHARDS ) )
```
## In-Memory Decompression
Zstandard was chosen as it enables trivial in-memory decompression to minimise the storage impact of the dataset. Here is some example code: the first function creates a python generator that yields each json line from a compressed shard stored at `file_name`, and the second creates a generator that parses those lines and yields their `text` field.
```py
import json
from json import JSONDecodeError
import zstandard
def read_lines_zst( file_name ):
# Open the file for reading in binary mode
with open( file_name, 'rb' ) as file_handle:
# Initialise an empty buffer
buffer = ''
# Create a reader for the opened file
reader = zstandard.ZstdDecompressor( max_window_size=2**31 ).stream_reader( file_handle )
while True:
# Read a chunk of up to 128MB
chunk = reader.read( 2**27 ).decode()
# If chunk is empty we've reached the end of the file and can break out
if not chunk:
break
# Combine any prior buffer with the current chunk and split by newline
lines = ( buffer + chunk ).split( '\n' )
# Yield the full lines so far
for line in lines[ : -1 ]:
yield line
# The last 'line' is incomplete, so place in buffer for next chunk
buffer = lines[ -1 ]
# Always remember to close your reader!
reader.close()
def parse_jsonl_zst( file_name ):
# Iterate over the yielded lines of the compressed shard
for i, line in enumerate( read_lines_zst( file_name ) ):
try:
# Convert the line into a python dict and yield the text field
yield json.loads( line )[ 'text' ]
except ( KeyError, JSONDecodeError ):
# Catch KeyError for 'text' not present in dict
# Catch JSONDecodeError for malformed line
print( f'JSON error @ shard={file_name}, line={i}' )
```
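As a minimal usage sketch, assuming `train-00000-of-23698.jsonl.zst` has already been downloaded to the working directory, the two helpers combine as follows:
```py
# Stream the first few documents from a locally downloaded shard
for i, text in enumerate( parse_jsonl_zst( 'train-00000-of-23698.jsonl.zst' ) ):
    print( text[ : 80 ] )
    if i >= 2:
        break
```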
Of course you *could* use HuggingFace's in-built streaming mechanics to handle things for you, but in my experience that approach is less reliable, doesn't handle `JSONDecodeError`s if there are malformed lines, can cause memory leaks, and has forced sharding behaviour when used inside a multi-worker PyTorch `DataLoader` which I've not yet found a way to disable! |
asahi417/seamless-align-enA-esA.speaker-embedding.xlsr-2b | asahi417 | "2024-06-24T08:03:42Z" | 25,040 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-06-15T02:25:12Z" | ---
dataset_info:
- config_name: subset_1
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 17615486614
num_examples: 2178
download_size: 17671956372
dataset_size: 17615486614
- config_name: subset_10
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16927334118
num_examples: 2220
download_size: 16981705853
dataset_size: 16927334118
- config_name: subset_100
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13270315892
num_examples: 2091
download_size: 13316774841
dataset_size: 13270315892
- config_name: subset_101
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13259634937
num_examples: 2089
download_size: 13305926124
dataset_size: 13259634937
- config_name: subset_102
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12870979863
num_examples: 2015
download_size: 12915729898
dataset_size: 12870979863
- config_name: subset_103
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13380071688
num_examples: 2061
download_size: 13426181130
dataset_size: 13380071688
- config_name: subset_104
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13344159869
num_examples: 2100
download_size: 13390587699
dataset_size: 13344159869
- config_name: subset_105
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13328330644
num_examples: 2098
download_size: 13374926961
dataset_size: 13328330644
- config_name: subset_106
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13096062318
num_examples: 2048
download_size: 13142156831
dataset_size: 13096062318
- config_name: subset_107
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13034952356
num_examples: 2059
download_size: 13081006606
dataset_size: 13034952356
- config_name: subset_108
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13404917635
num_examples: 2105
download_size: 13451465897
dataset_size: 13404917635
- config_name: subset_109
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13141422631
num_examples: 2069
download_size: 13187287737
dataset_size: 13141422631
- config_name: subset_11
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16951209374
num_examples: 2233
download_size: 17005518613
dataset_size: 16951209374
- config_name: subset_110
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13040115701
num_examples: 2055
download_size: 13086087673
dataset_size: 13040115701
- config_name: subset_111
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12931440533
num_examples: 2051
download_size: 12976152367
dataset_size: 12931440533
- config_name: subset_112
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12686272722
num_examples: 2025
download_size: 12730618021
dataset_size: 12686272722
- config_name: subset_113
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13190310595
num_examples: 2097
download_size: 13236739071
dataset_size: 13190310595
- config_name: subset_114
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13285434302
num_examples: 2046
download_size: 13331320055
dataset_size: 13285434302
- config_name: subset_115
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13120000810
num_examples: 2083
download_size: 13166227875
dataset_size: 13120000810
- config_name: subset_116
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12704630141
num_examples: 2029
download_size: 12749446486
dataset_size: 12704630141
- config_name: subset_117
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12859255799
num_examples: 2036
download_size: 12904092383
dataset_size: 12859255799
- config_name: subset_118
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12860464458
num_examples: 2064
download_size: 12905429560
dataset_size: 12860464458
- config_name: subset_119
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13131625042
num_examples: 2062
download_size: 13177824920
dataset_size: 13131625042
- config_name: subset_12
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16686439189
num_examples: 2201
download_size: 16740713100
dataset_size: 16686439189
- config_name: subset_120
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12674395327
num_examples: 2051
download_size: 12719575500
dataset_size: 12674395327
- config_name: subset_121
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12650742527
num_examples: 2033
download_size: 12695314327
dataset_size: 12650742527
- config_name: subset_122
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12677130233
num_examples: 2043
download_size: 12722174432
dataset_size: 12677130233
- config_name: subset_123
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12951044389
num_examples: 2075
download_size: 12995988239
dataset_size: 12951044389
- config_name: subset_124
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12478574614
num_examples: 2031
download_size: 12522233390
dataset_size: 12478574614
- config_name: subset_125
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12737934749
num_examples: 2056
download_size: 12783009884
dataset_size: 12737934749
- config_name: subset_126
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12611960900
num_examples: 2027
download_size: 12656298794
dataset_size: 12611960900
- config_name: subset_127
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12659981403
num_examples: 2066
download_size: 12705053455
dataset_size: 12659981403
- config_name: subset_128
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12459304658
num_examples: 2049
download_size: 12502474622
dataset_size: 12459304658
- config_name: subset_129
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12607835653
num_examples: 2040
download_size: 12652679590
dataset_size: 12607835653
- config_name: subset_13
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16721034298
num_examples: 2222
download_size: 16775558307
dataset_size: 16721034298
- config_name: subset_130
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12785684138
num_examples: 2068
download_size: 12830446989
dataset_size: 12785684138
- config_name: subset_131
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12670698438
num_examples: 2040
download_size: 12715508711
dataset_size: 12670698438
- config_name: subset_132
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12809517823
num_examples: 2043
download_size: 12854435983
dataset_size: 12809517823
- config_name: subset_133
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12525070129
num_examples: 2026
download_size: 12569746196
dataset_size: 12525070129
- config_name: subset_134
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12640468818
num_examples: 2030
download_size: 12685034517
dataset_size: 12640468818
- config_name: subset_135
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12416679976
num_examples: 2030
download_size: 12460253194
dataset_size: 12416679976
- config_name: subset_136
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12060270981
num_examples: 2005
download_size: 12103559155
dataset_size: 12060270981
- config_name: subset_137
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12198793495
num_examples: 2043
download_size: 12242429964
dataset_size: 12198793495
- config_name: subset_138
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12581073257
num_examples: 2052
download_size: 12625957335
dataset_size: 12581073257
- config_name: subset_139
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12394780937
num_examples: 2035
download_size: 12438493328
dataset_size: 12394780937
- config_name: subset_14
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16816831041
num_examples: 2227
download_size: 16871222861
dataset_size: 16816831041
- config_name: subset_140
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12261053851
num_examples: 2005
download_size: 12304654233
dataset_size: 12261053851
- config_name: subset_141
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12206176867
num_examples: 2031
download_size: 12249761397
dataset_size: 12206176867
- config_name: subset_142
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12708681785
num_examples: 2056
download_size: 12753610542
dataset_size: 12708681785
- config_name: subset_143
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12366711105
num_examples: 2033
download_size: 12410442360
dataset_size: 12366711105
- config_name: subset_144
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12308965549
num_examples: 2028
download_size: 12352637851
dataset_size: 12308965549
- config_name: subset_145
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12284207499
num_examples: 2026
download_size: 12327846424
dataset_size: 12284207499
- config_name: subset_146
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12150428192
num_examples: 2016
download_size: 12193880909
dataset_size: 12150428192
- config_name: subset_147
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11970898169
num_examples: 2003
download_size: 12013130854
dataset_size: 11970898169
- config_name: subset_148
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12450465858
num_examples: 2022
download_size: 12493377483
dataset_size: 12450465858
- config_name: subset_149
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12022596641
num_examples: 2009
download_size: 12065732125
dataset_size: 12022596641
- config_name: subset_15
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16785335075
num_examples: 2236
download_size: 16839822407
dataset_size: 16785335075
- config_name: subset_150
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12104930674
num_examples: 2009
download_size: 12148469368
dataset_size: 12104930674
- config_name: subset_151
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12148398553
num_examples: 2003
download_size: 12191904417
dataset_size: 12148398553
- config_name: subset_152
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12265322777
num_examples: 2058
download_size: 12309190901
dataset_size: 12265322777
- config_name: subset_153
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12097094889
num_examples: 2032
download_size: 12141069579
dataset_size: 12097094889
- config_name: subset_154
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11954386540
num_examples: 2019
download_size: 11996860087
dataset_size: 11954386540
- config_name: subset_155
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12409993216
num_examples: 2010
download_size: 12453257259
dataset_size: 12409993216
- config_name: subset_156
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11897538098
num_examples: 1991
download_size: 11939655250
dataset_size: 11897538098
- config_name: subset_157
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11809939547
num_examples: 1980
download_size: 11852053250
dataset_size: 11809939547
- config_name: subset_158
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11975300465
num_examples: 1995
download_size: 12017653945
dataset_size: 11975300465
- config_name: subset_159
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11916840461
num_examples: 1993
download_size: 11958923329
dataset_size: 11916840461
- config_name: subset_16
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16674848074
num_examples: 2251
download_size: 16729606555
dataset_size: 16674848074
- config_name: subset_160
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11908485692
num_examples: 1966
download_size: 11950586090
dataset_size: 11908485692
- config_name: subset_161
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11981004468
num_examples: 2026
download_size: 12023475986
dataset_size: 11981004468
- config_name: subset_162
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12044526424
num_examples: 2005
download_size: 12088064513
dataset_size: 12044526424
- config_name: subset_163
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11557729492
num_examples: 2004
download_size: 11600363984
dataset_size: 11557729492
- config_name: subset_164
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11547632894
num_examples: 2005
download_size: 11590183152
dataset_size: 11547632894
- config_name: subset_165
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11789148049
num_examples: 1997
download_size: 11831588345
dataset_size: 11789148049
- config_name: subset_166
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11630894583
num_examples: 1979
download_size: 11673192156
dataset_size: 11630894583
- config_name: subset_167
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11569365938
num_examples: 1944
download_size: 11611215022
dataset_size: 11569365938
- config_name: subset_168
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12081411178
num_examples: 2024
download_size: 12125052871
dataset_size: 12081411178
- config_name: subset_169
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11856744061
num_examples: 1996
download_size: 11898961527
dataset_size: 11856744061
- config_name: subset_17
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16618546236
num_examples: 2237
download_size: 16673261997
dataset_size: 16618546236
- config_name: subset_170
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11757966530
num_examples: 1999
download_size: 11800176442
dataset_size: 11757966530
- config_name: subset_171
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11566140383
num_examples: 1965
download_size: 11608235310
dataset_size: 11566140383
- config_name: subset_172
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11632843665
num_examples: 1947
download_size: 11674886291
dataset_size: 11632843665
- config_name: subset_173
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11696838547
num_examples: 1978
download_size: 11738400669
dataset_size: 11696838547
- config_name: subset_174
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11796900755
num_examples: 1992
download_size: 11839031804
dataset_size: 11796900755
- config_name: subset_175
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11522497802
num_examples: 1997
download_size: 11564866792
dataset_size: 11522497802
- config_name: subset_176
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11494892805
num_examples: 1946
download_size: 11535501682
dataset_size: 11494892805
- config_name: subset_177
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11686128349
num_examples: 1994
download_size: 11728436974
dataset_size: 11686128349
- config_name: subset_178
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11669254181
num_examples: 1992
download_size: 11711755317
dataset_size: 11669254181
- config_name: subset_179
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11651052491
num_examples: 2013
download_size: 11693353876
dataset_size: 11651052491
- config_name: subset_18
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16254521068
num_examples: 2195
download_size: 16307512664
dataset_size: 16254521068
- config_name: subset_180
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11366092414
num_examples: 1932
download_size: 11406917340
dataset_size: 11366092414
- config_name: subset_181
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11793512948
num_examples: 2002
download_size: 11835985714
dataset_size: 11793512948
- config_name: subset_182
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11176721435
num_examples: 1947
download_size: 11217656997
dataset_size: 11176721435
- config_name: subset_183
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11504599850
num_examples: 1973
download_size: 11546621630
dataset_size: 11504599850
- config_name: subset_184
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11340991023
num_examples: 1959
download_size: 11381767318
dataset_size: 11340991023
- config_name: subset_185
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10958038927
num_examples: 1904
download_size: 10997093007
dataset_size: 10958038927
- config_name: subset_186
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11236510649
num_examples: 1947
download_size: 11277036944
dataset_size: 11236510649
- config_name: subset_187
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11243650468
num_examples: 1964
download_size: 11284454904
dataset_size: 11243650468
- config_name: subset_188
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11631939495
num_examples: 1978
download_size: 11674186442
dataset_size: 11631939495
- config_name: subset_189
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11430091702
num_examples: 2017
download_size: 11470976839
dataset_size: 11430091702
- config_name: subset_19
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16339477172
num_examples: 2217
download_size: 16392637784
dataset_size: 16339477172
- config_name: subset_190
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11398504836
num_examples: 1948
download_size: 11439304594
dataset_size: 11398504836
- config_name: subset_191
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10983850878
num_examples: 1921
download_size: 11023019749
dataset_size: 10983850878
- config_name: subset_192
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11392234972
num_examples: 1951
download_size: 11432586306
dataset_size: 11392234972
- config_name: subset_193
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11271488922
num_examples: 1956
download_size: 11312405664
dataset_size: 11271488922
- config_name: subset_194
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11357335026
num_examples: 1961
download_size: 11398377938
dataset_size: 11357335026
- config_name: subset_195
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11086034665
num_examples: 1945
download_size: 11126998403
dataset_size: 11086034665
- config_name: subset_196
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11043203344
num_examples: 1936
download_size: 11084004919
dataset_size: 11043203344
- config_name: subset_197
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10949319896
num_examples: 1932
download_size: 10988727473
dataset_size: 10949319896
- config_name: subset_198
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11139775714
num_examples: 1934
download_size: 11180699957
dataset_size: 11139775714
- config_name: subset_199
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11243712969
num_examples: 1977
download_size: 11284563965
dataset_size: 11243712969
- config_name: subset_2
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 17867926870
num_examples: 2245
download_size: 17924763168
dataset_size: 17867926870
- config_name: subset_20
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15329766438
num_examples: 2093
download_size: 15379836725
dataset_size: 15329766438
- config_name: subset_200
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11202233370
num_examples: 1960
download_size: 11242825203
dataset_size: 11202233370
- config_name: subset_201
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11088324458
num_examples: 1944
download_size: 11129210075
dataset_size: 11088324458
- config_name: subset_202
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11276024599
num_examples: 1982
download_size: 11317146502
dataset_size: 11276024599
- config_name: subset_203
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10852830882
num_examples: 1919
download_size: 10891763738
dataset_size: 10852830882
- config_name: subset_204
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10819852765
num_examples: 1939
download_size: 10859654400
dataset_size: 10819852765
- config_name: subset_205
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11166848309
num_examples: 1957
download_size: 11207893657
dataset_size: 11166848309
- config_name: subset_206
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10885465898
num_examples: 1932
download_size: 10925154959
dataset_size: 10885465898
- config_name: subset_207
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10610213231
num_examples: 1882
download_size: 10649574288
dataset_size: 10610213231
- config_name: subset_208
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10810223800
num_examples: 1928
download_size: 10849596885
dataset_size: 10810223800
- config_name: subset_209
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10739546598
num_examples: 1931
download_size: 10779380292
dataset_size: 10739546598
- config_name: subset_21
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15119185509
num_examples: 2083
download_size: 15168901402
dataset_size: 15119185509
- config_name: subset_210
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10674367098
num_examples: 1884
download_size: 10713595910
dataset_size: 10674367098
- config_name: subset_211
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10671831916
num_examples: 1891
download_size: 10711269789
dataset_size: 10671831916
- config_name: subset_212
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10714648825
num_examples: 1911
download_size: 10754277168
dataset_size: 10714648825
- config_name: subset_213
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10792102434
num_examples: 1897
download_size: 10831439770
dataset_size: 10792102434
- config_name: subset_214
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10736126167
num_examples: 1918
download_size: 10775692263
dataset_size: 10736126167
- config_name: subset_215
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10468090665
num_examples: 1891
download_size: 10506413098
dataset_size: 10468090665
- config_name: subset_216
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10328309433
num_examples: 1871
download_size: 10366449044
dataset_size: 10328309433
- config_name: subset_217
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10364340655
num_examples: 1883
download_size: 10402491036
dataset_size: 10364340655
- config_name: subset_218
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10772691113
num_examples: 1878
download_size: 10812133060
dataset_size: 10772691113
- config_name: subset_219
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10311356207
num_examples: 1843
download_size: 10348785373
dataset_size: 10311356207
- config_name: subset_22
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14581297681
num_examples: 1987
download_size: 14629521379
dataset_size: 14581297681
- config_name: subset_220
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10523637560
num_examples: 1882
download_size: 10563027688
dataset_size: 10523637560
- config_name: subset_221
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10565854425
num_examples: 1894
download_size: 10605243718
dataset_size: 10565854425
- config_name: subset_222
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10547035702
num_examples: 1886
download_size: 10586386817
dataset_size: 10547035702
- config_name: subset_223
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10566185502
num_examples: 1902
download_size: 10605790532
dataset_size: 10566185502
- config_name: subset_224
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10686340635
num_examples: 1907
download_size: 10725415925
dataset_size: 10686340635
- config_name: subset_225
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10407100675
num_examples: 1865
download_size: 10445116148
dataset_size: 10407100675
- config_name: subset_226
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10247852666
num_examples: 1900
download_size: 10286330263
dataset_size: 10247852666
- config_name: subset_227
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10600702309
num_examples: 1905
download_size: 10640293635
dataset_size: 10600702309
- config_name: subset_228
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10410552726
num_examples: 1889
download_size: 10448706879
dataset_size: 10410552726
- config_name: subset_229
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10227043199
num_examples: 1886
download_size: 10265311634
dataset_size: 10227043199
- config_name: subset_23
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14707964791
num_examples: 2037
download_size: 14756622646
dataset_size: 14707964791
- config_name: subset_230
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10977312976
num_examples: 1935
download_size: 11015836294
dataset_size: 10977312976
- config_name: subset_231
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10354006940
num_examples: 1898
download_size: 10392292630
dataset_size: 10354006940
- config_name: subset_232
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10359630853
num_examples: 1889
download_size: 10397910351
dataset_size: 10359630853
- config_name: subset_233
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10318214003
num_examples: 1888
download_size: 10356479989
dataset_size: 10318214003
- config_name: subset_234
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10153869118
num_examples: 1894
download_size: 10192086672
dataset_size: 10153869118
- config_name: subset_235
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10451531268
num_examples: 1887
download_size: 10489548838
dataset_size: 10451531268
- config_name: subset_236
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10423323169
num_examples: 1885
download_size: 10461230101
dataset_size: 10423323169
- config_name: subset_237
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10111542830
num_examples: 1860
download_size: 10149678003
dataset_size: 10111542830
- config_name: subset_238
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10209968615
num_examples: 1877
download_size: 10248106680
dataset_size: 10209968615
- config_name: subset_239
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10335208989
num_examples: 1863
download_size: 10373126755
dataset_size: 10335208989
- config_name: subset_24
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15110768889
num_examples: 2050
download_size: 15160503523
dataset_size: 15110768889
- config_name: subset_240
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10121403869
num_examples: 1890
download_size: 10159836061
dataset_size: 10121403869
- config_name: subset_241
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10186760412
num_examples: 1846
download_size: 10224695949
dataset_size: 10186760412
- config_name: subset_242
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10325305357
num_examples: 1876
download_size: 10363420683
dataset_size: 10325305357
- config_name: subset_243
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9905932594
num_examples: 1838
download_size: 9942639569
dataset_size: 9905932594
- config_name: subset_244
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10091589671
num_examples: 1886
download_size: 10130048735
dataset_size: 10091589671
- config_name: subset_245
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10253380616
num_examples: 1845
download_size: 10291311175
dataset_size: 10253380616
- config_name: subset_246
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9776749280
num_examples: 1839
download_size: 9813691334
dataset_size: 9776749280
- config_name: subset_247
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10156286081
num_examples: 1851
download_size: 10194287310
dataset_size: 10156286081
- config_name: subset_248
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10293648296
num_examples: 1889
download_size: 10332006484
dataset_size: 10293648296
- config_name: subset_249
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10077132419
num_examples: 1842
download_size: 10115148550
dataset_size: 10077132419
- config_name: subset_25
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14939407431
num_examples: 2043
download_size: 14987934926
dataset_size: 14939407431
- config_name: subset_250
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9859274992
num_examples: 1834
download_size: 9896107919
dataset_size: 9859274992
- config_name: subset_251
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9932351426
num_examples: 1854
download_size: 9969187608
dataset_size: 9932351426
- config_name: subset_252
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10142478129
num_examples: 1854
download_size: 10180659893
dataset_size: 10142478129
- config_name: subset_253
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10123622721
num_examples: 1867
download_size: 10161912632
dataset_size: 10123622721
- config_name: subset_254
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10010038438
num_examples: 1875
download_size: 10048246100
dataset_size: 10010038438
- config_name: subset_255
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9677910385
num_examples: 1844
download_size: 9714855044
dataset_size: 9677910385
- config_name: subset_256
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10274666423
num_examples: 1860
download_size: 10312795912
dataset_size: 10274666423
- config_name: subset_257
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9894060407
num_examples: 1833
download_size: 9930923035
dataset_size: 9894060407
- config_name: subset_258
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10002283709
num_examples: 1856
download_size: 10040147359
dataset_size: 10002283709
- config_name: subset_259
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9954421477
num_examples: 1873
download_size: 9991605810
dataset_size: 9954421477
- config_name: subset_26
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14688110120
num_examples: 2046
download_size: 14736907487
dataset_size: 14688110120
- config_name: subset_260
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9251403580
num_examples: 1766
download_size: 9286487592
dataset_size: 9251403580
- config_name: subset_261
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8906186422
num_examples: 1695
download_size: 8939749379
dataset_size: 8906186422
- config_name: subset_262
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9005322912
num_examples: 1665
download_size: 9039903011
dataset_size: 9005322912
- config_name: subset_263
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8957935304
num_examples: 1660
download_size: 8991043579
dataset_size: 8957935304
- config_name: subset_264
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9005781922
num_examples: 1640
download_size: 9040003299
dataset_size: 9005781922
- config_name: subset_265
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8885453225
num_examples: 1674
download_size: 8918493432
dataset_size: 8885453225
- config_name: subset_266
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9008549939
num_examples: 1662
download_size: 9042942529
dataset_size: 9008549939
- config_name: subset_267
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9343968850
num_examples: 1718
download_size: 9378652964
dataset_size: 9343968850
- config_name: subset_268
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9630757415
num_examples: 1796
download_size: 9667100998
dataset_size: 9630757415
- config_name: subset_269
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9982374644
num_examples: 1858
download_size: 10019389762
dataset_size: 9982374644
- config_name: subset_27
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14706735358
num_examples: 2036
download_size: 14755314793
dataset_size: 14706735358
- config_name: subset_270
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9482266334
num_examples: 1823
download_size: 9517999637
dataset_size: 9482266334
- config_name: subset_271
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9602528097
num_examples: 1816
download_size: 9639460882
dataset_size: 9602528097
- config_name: subset_272
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9849753853
num_examples: 1826
download_size: 9886412516
dataset_size: 9849753853
- config_name: subset_273
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9658015564
num_examples: 1832
download_size: 9694957017
dataset_size: 9658015564
- config_name: subset_274
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9549175962
num_examples: 1791
download_size: 9585620341
dataset_size: 9549175962
- config_name: subset_275
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9540749598
num_examples: 1827
download_size: 9577339695
dataset_size: 9540749598
- config_name: subset_276
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9447286696
num_examples: 1798
download_size: 9482749217
dataset_size: 9447286696
- config_name: subset_277
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9649155191
num_examples: 1824
download_size: 9685881332
dataset_size: 9649155191
- config_name: subset_278
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9778025788
num_examples: 1852
download_size: 9815037575
dataset_size: 9778025788
- config_name: subset_279
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9773965831
num_examples: 1817
download_size: 9810744560
dataset_size: 9773965831
- config_name: subset_28
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14919139004
num_examples: 2067
download_size: 14968257960
dataset_size: 14919139004
- config_name: subset_280
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9394162223
num_examples: 1831
download_size: 9429593445
dataset_size: 9394162223
- config_name: subset_281
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9562033469
num_examples: 1818
download_size: 9598793999
dataset_size: 9562033469
- config_name: subset_282
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9625372743
num_examples: 1818
download_size: 9662150366
dataset_size: 9625372743
- config_name: subset_283
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9630374197
num_examples: 1808
download_size: 9667159020
dataset_size: 9630374197
- config_name: subset_284
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9070359285
num_examples: 1747
download_size: 9105502122
dataset_size: 9070359285
- config_name: subset_285
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9505440183
num_examples: 1809
download_size: 9542237457
dataset_size: 9505440183
- config_name: subset_286
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9616111622
num_examples: 1795
download_size: 9652676512
dataset_size: 9616111622
- config_name: subset_287
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8256889317
num_examples: 1612
download_size: 8288680929
dataset_size: 8256889317
- config_name: subset_288
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6640007028
num_examples: 1335
download_size: 6666439022
dataset_size: 6640007028
- config_name: subset_289
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9672423557
num_examples: 1839
download_size: 9709457896
dataset_size: 9672423557
- config_name: subset_29
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15234832048
num_examples: 2111
download_size: 15285154066
dataset_size: 15234832048
- config_name: subset_290
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9458067491
num_examples: 1800
download_size: 9493104286
dataset_size: 9458067491
- config_name: subset_291
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9370100008
num_examples: 1787
download_size: 9405424811
dataset_size: 9370100008
- config_name: subset_292
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9423575157
num_examples: 1813
download_size: 9459072789
dataset_size: 9423575157
- config_name: subset_293
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9596121647
num_examples: 1841
download_size: 9633046444
dataset_size: 9596121647
- config_name: subset_294
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9509336675
num_examples: 1818
download_size: 9546218752
dataset_size: 9509336675
- config_name: subset_295
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9503628008
num_examples: 1825
download_size: 9540439521
dataset_size: 9503628008
- config_name: subset_296
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9499785375
num_examples: 1817
download_size: 9535465416
dataset_size: 9499785375
- config_name: subset_297
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9486624475
num_examples: 1840
download_size: 9522420706
dataset_size: 9486624475
- config_name: subset_298
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9611963960
num_examples: 1816
download_size: 9648063415
dataset_size: 9611963960
- config_name: subset_299
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9165943098
num_examples: 1778
download_size: 9201250523
dataset_size: 9165943098
- config_name: subset_3
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 17564492490
num_examples: 2234
download_size: 17621332507
dataset_size: 17564492490
- config_name: subset_30
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15691430060
num_examples: 2197
download_size: 15743673970
dataset_size: 15691430060
- config_name: subset_300
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9609773492
num_examples: 1809
download_size: 9646496750
dataset_size: 9609773492
- config_name: subset_301
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9508175993
num_examples: 1814
download_size: 9544852618
dataset_size: 9508175993
- config_name: subset_302
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9152773379
num_examples: 1786
download_size: 9188137438
dataset_size: 9152773379
- config_name: subset_303
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9479298193
num_examples: 1797
download_size: 9514405163
dataset_size: 9479298193
- config_name: subset_304
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9388356289
num_examples: 1776
download_size: 9423438274
dataset_size: 9388356289
- config_name: subset_305
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9439211353
num_examples: 1805
download_size: 9474779869
dataset_size: 9439211353
- config_name: subset_306
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9341703478
num_examples: 1830
download_size: 9377307637
dataset_size: 9341703478
- config_name: subset_307
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9556548323
num_examples: 1833
download_size: 9593234545
dataset_size: 9556548323
- config_name: subset_308
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9351488776
num_examples: 1780
download_size: 9386455164
dataset_size: 9351488776
- config_name: subset_309
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9546867649
num_examples: 1847
download_size: 9583895956
dataset_size: 9546867649
- config_name: subset_31
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15655576113
num_examples: 2192
download_size: 15707667505
dataset_size: 15655576113
- config_name: subset_310
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9360988039
num_examples: 1802
download_size: 9395714050
dataset_size: 9360988039
- config_name: subset_311
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9312716340
num_examples: 1792
download_size: 9348081921
dataset_size: 9312716340
- config_name: subset_312
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9052758958
num_examples: 1793
download_size: 9088326883
dataset_size: 9052758958
- config_name: subset_313
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9094529036
num_examples: 1791
download_size: 9130202452
dataset_size: 9094529036
- config_name: subset_314
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8962637266
num_examples: 1747
download_size: 8996516727
dataset_size: 8962637266
- config_name: subset_315
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9550414333
num_examples: 1806
download_size: 9587049840
dataset_size: 9550414333
- config_name: subset_316
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9329875364
num_examples: 1800
download_size: 9365540807
dataset_size: 9329875364
- config_name: subset_317
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9339795411
num_examples: 1800
download_size: 9375304413
dataset_size: 9339795411
- config_name: subset_318
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9231095503
num_examples: 1775
download_size: 9266214850
dataset_size: 9231095503
- config_name: subset_319
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9434425340
num_examples: 1818
download_size: 9469524793
dataset_size: 9434425340
- config_name: subset_32
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15921659855
num_examples: 2219
download_size: 15974195919
dataset_size: 15921659855
- config_name: subset_320
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9469785770
num_examples: 1802
download_size: 9505117938
dataset_size: 9469785770
- config_name: subset_321
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9650736453
num_examples: 1804
download_size: 9687032265
dataset_size: 9650736453
- config_name: subset_322
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9212596034
num_examples: 1819
download_size: 9248412374
dataset_size: 9212596034
- config_name: subset_323
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9108779905
num_examples: 1755
download_size: 9144169306
dataset_size: 9108779905
- config_name: subset_324
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9044220745
num_examples: 1777
download_size: 9079400568
dataset_size: 9044220745
- config_name: subset_325
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9395796601
num_examples: 1803
download_size: 9431300325
dataset_size: 9395796601
- config_name: subset_326
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9064906889
num_examples: 1787
download_size: 9100339782
dataset_size: 9064906889
- config_name: subset_327
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9281203011
num_examples: 1777
download_size: 9316568721
dataset_size: 9281203011
- config_name: subset_328
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9210702320
num_examples: 1777
download_size: 9245987863
dataset_size: 9210702320
- config_name: subset_329
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9223773915
num_examples: 1790
download_size: 9259307876
dataset_size: 9223773915
- config_name: subset_33
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15611416602
num_examples: 2198
download_size: 15663791470
dataset_size: 15611416602
- config_name: subset_330
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9296502175
num_examples: 1781
download_size: 9331855519
dataset_size: 9296502175
- config_name: subset_331
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9148377912
num_examples: 1783
download_size: 9183768848
dataset_size: 9148377912
- config_name: subset_332
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9230881649
num_examples: 1791
download_size: 9266359032
dataset_size: 9230881649
- config_name: subset_333
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9487368178
num_examples: 1819
download_size: 9522947865
dataset_size: 9487368178
- config_name: subset_334
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9083416926
num_examples: 1778
download_size: 9118375759
dataset_size: 9083416926
- config_name: subset_335
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9139849081
num_examples: 1788
download_size: 9175346746
dataset_size: 9139849081
- config_name: subset_336
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9275532884
num_examples: 1785
download_size: 9311143350
dataset_size: 9275532884
- config_name: subset_337
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9114459631
num_examples: 1770
download_size: 9149875698
dataset_size: 9114459631
- config_name: subset_338
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9193061609
num_examples: 1798
download_size: 9228634184
dataset_size: 9193061609
- config_name: subset_339
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9061657947
num_examples: 1804
download_size: 9097148076
dataset_size: 9061657947
- config_name: subset_34
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15432387170
num_examples: 2199
download_size: 15484263220
dataset_size: 15432387170
- config_name: subset_340
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9179922247
num_examples: 1802
download_size: 9215341652
dataset_size: 9179922247
- config_name: subset_341
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8936121108
num_examples: 1763
download_size: 8970077579
dataset_size: 8936121108
- config_name: subset_342
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9089440940
num_examples: 1775
download_size: 9124985463
dataset_size: 9089440940
- config_name: subset_343
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9056805540
num_examples: 1757
download_size: 9092015615
dataset_size: 9056805540
- config_name: subset_344
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9131233374
num_examples: 1765
download_size: 9166469291
dataset_size: 9131233374
- config_name: subset_345
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8840205902
num_examples: 1720
download_size: 8874028549
dataset_size: 8840205902
- config_name: subset_346
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8728873809
num_examples: 1737
download_size: 8762891017
dataset_size: 8728873809
- config_name: subset_347
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8931364937
num_examples: 1766
download_size: 8964815188
dataset_size: 8931364937
- config_name: subset_348
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9024910784
num_examples: 1776
download_size: 9060436900
dataset_size: 9024910784
- config_name: subset_349
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8991465479
num_examples: 1720
download_size: 9025201371
dataset_size: 8991465479
- config_name: subset_35
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14996383276
num_examples: 2125
download_size: 15046409293
dataset_size: 14996383276
- config_name: subset_350
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8971469012
num_examples: 1782
download_size: 9005732119
dataset_size: 8971469012
- config_name: subset_351
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8800425959
num_examples: 1722
download_size: 8834285273
dataset_size: 8800425959
- config_name: subset_352
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8836136950
num_examples: 1764
download_size: 8870360600
dataset_size: 8836136950
- config_name: subset_353
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9031718626
num_examples: 1774
download_size: 9067053272
dataset_size: 9031718626
- config_name: subset_354
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8797219039
num_examples: 1783
download_size: 8831487024
dataset_size: 8797219039
- config_name: subset_355
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8840747107
num_examples: 1761
download_size: 8874859913
dataset_size: 8840747107
- config_name: subset_356
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8577410322
num_examples: 1686
download_size: 8610812449
dataset_size: 8577410322
- config_name: subset_357
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9074273275
num_examples: 1782
download_size: 9109695863
dataset_size: 9074273275
- config_name: subset_358
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9227959994
num_examples: 1770
download_size: 9263216847
dataset_size: 9227959994
- config_name: subset_359
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8776143678
num_examples: 1711
download_size: 8809888634
dataset_size: 8776143678
- config_name: subset_36
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15044709570
num_examples: 2145
download_size: 15095548862
dataset_size: 15044709570
- config_name: subset_360
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8820259705
num_examples: 1738
download_size: 8854303067
dataset_size: 8820259705
- config_name: subset_361
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9145072997
num_examples: 1774
download_size: 9180586755
dataset_size: 9145072997
- config_name: subset_362
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8462258469
num_examples: 1693
download_size: 8494690158
dataset_size: 8462258469
- config_name: subset_363
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8581569826
num_examples: 1716
download_size: 8615493990
dataset_size: 8581569826
- config_name: subset_364
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9011936285
num_examples: 1728
download_size: 9047013145
dataset_size: 9011936285
- config_name: subset_365
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8881614976
num_examples: 1720
download_size: 8915230397
dataset_size: 8881614976
- config_name: subset_366
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8845062437
num_examples: 1723
download_size: 8878662664
dataset_size: 8845062437
- config_name: subset_367
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8828104700
num_examples: 1733
download_size: 8862105074
dataset_size: 8828104700
- config_name: subset_368
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8502283986
num_examples: 1688
download_size: 8535977291
dataset_size: 8502283986
- config_name: subset_369
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8959660863
num_examples: 1714
download_size: 8993371357
dataset_size: 8959660863
- config_name: subset_37
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15098383698
num_examples: 2162
download_size: 15149327432
dataset_size: 15098383698
- config_name: subset_370
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8574185282
num_examples: 1713
download_size: 8608126100
dataset_size: 8574185282
- config_name: subset_371
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9020735695
num_examples: 1743
download_size: 9055944350
dataset_size: 9020735695
- config_name: subset_372
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8723449670
num_examples: 1746
download_size: 8756102489
dataset_size: 8723449670
- config_name: subset_373
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8747375432
num_examples: 1719
download_size: 8780806765
dataset_size: 8747375432
- config_name: subset_374
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8364764553
num_examples: 1701
download_size: 8396204390
dataset_size: 8364764553
- config_name: subset_375
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8595821349
num_examples: 1688
download_size: 8629313871
dataset_size: 8595821349
- config_name: subset_376
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8522125538
num_examples: 1705
download_size: 8555855028
dataset_size: 8522125538
- config_name: subset_377
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8800733888
num_examples: 1729
download_size: 8834228490
dataset_size: 8800733888
- config_name: subset_378
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8670894909
num_examples: 1699
download_size: 8704612631
dataset_size: 8670894909
- config_name: subset_379
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8630111423
num_examples: 1738
download_size: 8664239699
dataset_size: 8630111423
- config_name: subset_38
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14876990171
num_examples: 2136
download_size: 14927068671
dataset_size: 14876990171
- config_name: subset_380
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8494123212
num_examples: 1683
download_size: 8526418095
dataset_size: 8494123212
- config_name: subset_381
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8744503288
num_examples: 1740
download_size: 8778567058
dataset_size: 8744503288
- config_name: subset_382
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8360475176
num_examples: 1681
download_size: 8392795152
dataset_size: 8360475176
- config_name: subset_383
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8789107737
num_examples: 1726
download_size: 8823019733
dataset_size: 8789107737
- config_name: subset_384
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8698941573
num_examples: 1700
download_size: 8731786773
dataset_size: 8698941573
- config_name: subset_385
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8543623069
num_examples: 1677
download_size: 8576413868
dataset_size: 8543623069
- config_name: subset_386
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8492516757
num_examples: 1677
download_size: 8524829527
dataset_size: 8492516757
- config_name: subset_387
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8611028245
num_examples: 1691
download_size: 8644582313
dataset_size: 8611028245
- config_name: subset_388
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8312473366
num_examples: 1683
download_size: 8344751119
dataset_size: 8312473366
- config_name: subset_389
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8357627397
num_examples: 1717
download_size: 8390511072
dataset_size: 8357627397
- config_name: subset_39
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15044435370
num_examples: 2174
download_size: 15095948164
dataset_size: 15044435370
- config_name: subset_390
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8748406479
num_examples: 1736
download_size: 8782272242
dataset_size: 8748406479
- config_name: subset_391
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8985872554
num_examples: 1733
download_size: 9019426916
dataset_size: 8985872554
- config_name: subset_392
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8296682807
num_examples: 1683
download_size: 8329215790
dataset_size: 8296682807
- config_name: subset_393
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8678142035
num_examples: 1712
download_size: 8712037235
dataset_size: 8678142035
- config_name: subset_394
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8608830930
num_examples: 1695
download_size: 8642217218
dataset_size: 8608830930
- config_name: subset_395
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8580225855
num_examples: 1725
download_size: 8614251890
dataset_size: 8580225855
- config_name: subset_396
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8579462633
num_examples: 1694
download_size: 8613251526
dataset_size: 8579462633
- config_name: subset_397
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8430739696
num_examples: 1705
download_size: 8463357738
dataset_size: 8430739696
- config_name: subset_398
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8713963571
num_examples: 1697
download_size: 8747816742
dataset_size: 8713963571
- config_name: subset_399
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8704077389
num_examples: 1735
download_size: 8738064326
dataset_size: 8704077389
- config_name: subset_4
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 17686238305
num_examples: 2240
download_size: 17743084363
dataset_size: 17686238305
- config_name: subset_40
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14950642814
num_examples: 2154
download_size: 15000900506
dataset_size: 14950642814
- config_name: subset_400
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8550939961
num_examples: 1698
download_size: 8584298446
dataset_size: 8550939961
- config_name: subset_401
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8585540313
num_examples: 1690
download_size: 8619085637
dataset_size: 8585540313
- config_name: subset_402
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8798509365
num_examples: 1776
download_size: 8832157307
dataset_size: 8798509365
- config_name: subset_403
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8682989481
num_examples: 1698
download_size: 8716618540
dataset_size: 8682989481
- config_name: subset_404
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8993741789
num_examples: 1743
download_size: 9027647720
dataset_size: 8993741789
- config_name: subset_405
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8228817018
num_examples: 1674
download_size: 8261288381
dataset_size: 8228817018
- config_name: subset_406
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8404929447
num_examples: 1709
download_size: 8437467645
dataset_size: 8404929447
- config_name: subset_407
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8863105467
num_examples: 1735
download_size: 8896978741
dataset_size: 8863105467
- config_name: subset_408
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8514664885
num_examples: 1712
download_size: 8548503653
dataset_size: 8514664885
- config_name: subset_409
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8840066744
num_examples: 1710
download_size: 8873764622
dataset_size: 8840066744
- config_name: subset_41
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14925154015
num_examples: 2142
download_size: 14974821031
dataset_size: 14925154015
- config_name: subset_410
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8781033463
num_examples: 1745
download_size: 8815013438
dataset_size: 8781033463
- config_name: subset_411
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8530771583
num_examples: 1725
download_size: 8562781837
dataset_size: 8530771583
- config_name: subset_412
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8807643953
num_examples: 1755
download_size: 8841689470
dataset_size: 8807643953
- config_name: subset_413
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8562934741
num_examples: 1699
download_size: 8596738808
dataset_size: 8562934741
- config_name: subset_414
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8577346796
num_examples: 1755
download_size: 8610989603
dataset_size: 8577346796
- config_name: subset_415
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8529392107
num_examples: 1675
download_size: 8562915561
dataset_size: 8529392107
- config_name: subset_416
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8445269930
num_examples: 1703
download_size: 8476865155
dataset_size: 8445269930
- config_name: subset_417
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8619958787
num_examples: 1712
download_size: 8653700824
dataset_size: 8619958787
- config_name: subset_418
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8289958485
num_examples: 1673
download_size: 8322319634
dataset_size: 8289958485
- config_name: subset_419
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8744172660
num_examples: 1737
download_size: 8778194526
dataset_size: 8744172660
- config_name: subset_42
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14788933635
num_examples: 2161
download_size: 14839309849
dataset_size: 14788933635
- config_name: subset_420
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8564886346
num_examples: 1698
download_size: 8598617429
dataset_size: 8564886346
- config_name: subset_421
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8869185294
num_examples: 1757
download_size: 8903265406
dataset_size: 8869185294
- config_name: subset_422
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9034329328
num_examples: 1751
download_size: 9069380368
dataset_size: 9034329328
- config_name: subset_423
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8819599057
num_examples: 1740
download_size: 8853605288
dataset_size: 8819599057
- config_name: subset_424
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8915111476
num_examples: 1744
download_size: 8949141122
dataset_size: 8915111476
- config_name: subset_425
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8818699104
num_examples: 1729
download_size: 8852508680
dataset_size: 8818699104
- config_name: subset_426
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8568761227
num_examples: 1724
download_size: 8602654585
dataset_size: 8568761227
- config_name: subset_427
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8688732426
num_examples: 1735
download_size: 8722735777
dataset_size: 8688732426
- config_name: subset_428
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8369697429
num_examples: 1698
download_size: 8402208849
dataset_size: 8369697429
- config_name: subset_429
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8646092223
num_examples: 1714
download_size: 8679967503
dataset_size: 8646092223
- config_name: subset_43
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15170697432
num_examples: 2157
download_size: 15221698675
dataset_size: 15170697432
- config_name: subset_430
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8525584317
num_examples: 1718
download_size: 8559600637
dataset_size: 8525584317
- config_name: subset_431
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8858450126
num_examples: 1750
download_size: 8892566030
dataset_size: 8858450126
- config_name: subset_432
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8849913220
num_examples: 1749
download_size: 8883882148
dataset_size: 8849913220
- config_name: subset_433
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8613746890
num_examples: 1767
download_size: 8648026859
dataset_size: 8613746890
- config_name: subset_434
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8645653840
num_examples: 1709
download_size: 8679363337
dataset_size: 8645653840
- config_name: subset_435
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8560099829
num_examples: 1705
download_size: 8594037043
dataset_size: 8560099829
- config_name: subset_436
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8832125948
num_examples: 1764
download_size: 8866309735
dataset_size: 8832125948
- config_name: subset_437
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8946255398
num_examples: 1752
download_size: 8980417376
dataset_size: 8946255398
- config_name: subset_438
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8591328909
num_examples: 1721
download_size: 8625199762
dataset_size: 8591328909
- config_name: subset_439
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8702440609
num_examples: 1734
download_size: 8736551929
dataset_size: 8702440609
- config_name: subset_44
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14640698517
num_examples: 2124
download_size: 14690783226
dataset_size: 14640698517
- config_name: subset_440
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8819446067
num_examples: 1749
download_size: 8853463814
dataset_size: 8819446067
- config_name: subset_441
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8689763275
num_examples: 1749
download_size: 8723897261
dataset_size: 8689763275
- config_name: subset_442
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8950550291
num_examples: 1746
download_size: 8984372825
dataset_size: 8950550291
- config_name: subset_443
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8711707457
num_examples: 1733
download_size: 8745701357
dataset_size: 8711707457
- config_name: subset_444
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8866910369
num_examples: 1752
download_size: 8901070769
dataset_size: 8866910369
- config_name: subset_445
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8840346071
num_examples: 1743
download_size: 8874378560
dataset_size: 8840346071
- config_name: subset_446
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8547114263
num_examples: 1710
download_size: 8581022971
dataset_size: 8547114263
- config_name: subset_447
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8630549726
num_examples: 1742
download_size: 8664416874
dataset_size: 8630549726
- config_name: subset_448
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8590967840
num_examples: 1722
download_size: 8624834801
dataset_size: 8590967840
- config_name: subset_449
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8838711861
num_examples: 1773
download_size: 8872898762
dataset_size: 8838711861
- config_name: subset_45
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14888049488
num_examples: 2161
download_size: 14938566293
dataset_size: 14888049488
- config_name: subset_450
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8853978164
num_examples: 1751
download_size: 8888120395
dataset_size: 8853978164
- config_name: subset_451
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8539713353
num_examples: 1695
download_size: 8573419201
dataset_size: 8539713353
- config_name: subset_452
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8394169575
num_examples: 1681
download_size: 8426611041
dataset_size: 8394169575
- config_name: subset_453
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8439007345
num_examples: 1701
download_size: 8471603668
dataset_size: 8439007345
- config_name: subset_454
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8531852056
num_examples: 1688
download_size: 8564919565
dataset_size: 8531852056
- config_name: subset_455
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8368812772
num_examples: 1686
download_size: 8401247371
dataset_size: 8368812772
- config_name: subset_456
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8651295246
num_examples: 1726
download_size: 8685379201
dataset_size: 8651295246
- config_name: subset_457
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8447069870
num_examples: 1725
download_size: 8479977920
dataset_size: 8447069870
- config_name: subset_458
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8572604338
num_examples: 1738
download_size: 8606819062
dataset_size: 8572604338
- config_name: subset_459
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8790645711
num_examples: 1740
download_size: 8824783568
dataset_size: 8790645711
- config_name: subset_46
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14931332728
num_examples: 2152
download_size: 14981163156
dataset_size: 14931332728
- config_name: subset_460
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8664419556
num_examples: 1726
download_size: 8698428184
dataset_size: 8664419556
- config_name: subset_461
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8595501889
num_examples: 1728
download_size: 8629353061
dataset_size: 8595501889
- config_name: subset_462
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8255221647
num_examples: 1704
download_size: 8287903828
dataset_size: 8255221647
- config_name: subset_463
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8850459383
num_examples: 1757
download_size: 8883064365
dataset_size: 8850459383
- config_name: subset_464
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8641729323
num_examples: 1734
download_size: 8675162555
dataset_size: 8641729323
- config_name: subset_465
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8480408848
num_examples: 1703
download_size: 8512339282
dataset_size: 8480408848
- config_name: subset_466
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8463711538
num_examples: 1703
download_size: 8494986677
dataset_size: 8463711538
- config_name: subset_467
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8820781508
num_examples: 1729
download_size: 8854667990
dataset_size: 8820781508
- config_name: subset_468
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8297529694
num_examples: 1703
download_size: 8329518158
dataset_size: 8297529694
- config_name: subset_469
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8335255754
num_examples: 1674
download_size: 8367587978
dataset_size: 8335255754
- config_name: subset_47
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14733661735
num_examples: 2150
download_size: 14784007748
dataset_size: 14733661735
- config_name: subset_470
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8677813347
num_examples: 1733
download_size: 8711504703
dataset_size: 8677813347
- config_name: subset_471
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8025824740
num_examples: 1625
download_size: 8057638520
dataset_size: 8025824740
- config_name: subset_472
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8412479577
num_examples: 1669
download_size: 8444758288
dataset_size: 8412479577
- config_name: subset_473
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8353690938
num_examples: 1689
download_size: 8385666223
dataset_size: 8353690938
- config_name: subset_474
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8593303832
num_examples: 1723
download_size: 8627266289
dataset_size: 8593303832
- config_name: subset_475
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8491558230
num_examples: 1701
download_size: 8524198543
dataset_size: 8491558230
- config_name: subset_476
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8452866881
num_examples: 1672
download_size: 8485234629
dataset_size: 8452866881
- config_name: subset_477
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8704398386
num_examples: 1714
download_size: 8738159122
dataset_size: 8704398386
- config_name: subset_478
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8284525589
num_examples: 1669
download_size: 8316981929
dataset_size: 8284525589
- config_name: subset_479
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8282788425
num_examples: 1662
download_size: 8315213911
dataset_size: 8282788425
- config_name: subset_48
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14438094478
num_examples: 2123
download_size: 14487015492
dataset_size: 14438094478
- config_name: subset_480
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8576390099
num_examples: 1708
download_size: 8609651781
dataset_size: 8576390099
- config_name: subset_481
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8507463443
num_examples: 1693
download_size: 8541308945
dataset_size: 8507463443
- config_name: subset_482
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8255501088
num_examples: 1644
download_size: 8287721624
dataset_size: 8255501088
- config_name: subset_483
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8274451687
num_examples: 1667
download_size: 8306808681
dataset_size: 8274451687
- config_name: subset_484
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8573537899
num_examples: 1690
download_size: 8607168860
dataset_size: 8573537899
- config_name: subset_485
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8384503494
num_examples: 1686
download_size: 8417100259
dataset_size: 8384503494
- config_name: subset_486
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8588897502
num_examples: 1681
download_size: 8622495228
dataset_size: 8588897502
- config_name: subset_487
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8437517535
num_examples: 1712
download_size: 8470186354
dataset_size: 8437517535
- config_name: subset_488
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8418827545
num_examples: 1681
download_size: 8451202784
dataset_size: 8418827545
- config_name: subset_489
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8584589048
num_examples: 1709
download_size: 8618337451
dataset_size: 8584589048
- config_name: subset_49
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14774694546
num_examples: 2154
download_size: 14824929896
dataset_size: 14774694546
- config_name: subset_490
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8663901120
num_examples: 1682
download_size: 8697624762
dataset_size: 8663901120
- config_name: subset_491
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8488371055
num_examples: 1722
download_size: 8521153862
dataset_size: 8488371055
- config_name: subset_492
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8203980754
num_examples: 1655
download_size: 8236225054
dataset_size: 8203980754
- config_name: subset_493
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8391363760
num_examples: 1667
download_size: 8423105052
dataset_size: 8391363760
- config_name: subset_494
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8568767921
num_examples: 1712
download_size: 8602723551
dataset_size: 8568767921
- config_name: subset_495
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8174939825
num_examples: 1617
download_size: 8206800473
dataset_size: 8174939825
- config_name: subset_496
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8570411815
num_examples: 1706
download_size: 8604255375
dataset_size: 8570411815
- config_name: subset_497
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8379436558
num_examples: 1647
download_size: 8411700765
dataset_size: 8379436558
- config_name: subset_498
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8262554246
num_examples: 1635
download_size: 8294680975
dataset_size: 8262554246
- config_name: subset_499
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8342799972
num_examples: 1656
download_size: 8375085115
dataset_size: 8342799972
- config_name: subset_5
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 17613015503
num_examples: 2216
download_size: 17669708546
dataset_size: 17613015503
- config_name: subset_50
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14650896380
num_examples: 2139
download_size: 14700785975
dataset_size: 14650896380
- config_name: subset_500
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8430401608
num_examples: 1705
download_size: 8462967390
dataset_size: 8430401608
- config_name: subset_501
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8386054846
num_examples: 1676
download_size: 8418093888
dataset_size: 8386054846
- config_name: subset_502
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8291804091
num_examples: 1691
download_size: 8324392895
dataset_size: 8291804091
- config_name: subset_503
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8522249041
num_examples: 1712
download_size: 8555125608
dataset_size: 8522249041
- config_name: subset_504
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8504759173
num_examples: 1699
download_size: 8538443019
dataset_size: 8504759173
- config_name: subset_505
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8838284884
num_examples: 1720
download_size: 8871610293
dataset_size: 8838284884
- config_name: subset_506
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8510139570
num_examples: 1719
download_size: 8543995249
dataset_size: 8510139570
- config_name: subset_507
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8409163618
num_examples: 1713
download_size: 8441906356
dataset_size: 8409163618
- config_name: subset_508
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8588008063
num_examples: 1705
download_size: 8621825321
dataset_size: 8588008063
- config_name: subset_509
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8301639362
num_examples: 1688
download_size: 8334312765
dataset_size: 8301639362
- config_name: subset_51
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14432555019
num_examples: 2132
download_size: 14481692751
dataset_size: 14432555019
- config_name: subset_510
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8509654655
num_examples: 1709
download_size: 8543505342
dataset_size: 8509654655
- config_name: subset_511
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8152253737
num_examples: 1675
download_size: 8184762472
dataset_size: 8152253737
- config_name: subset_512
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8124134538
num_examples: 1633
download_size: 8156224283
dataset_size: 8124134538
- config_name: subset_513
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8457785980
num_examples: 1688
download_size: 8490228205
dataset_size: 8457785980
- config_name: subset_514
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8176494586
num_examples: 1649
download_size: 8208157175
dataset_size: 8176494586
- config_name: subset_515
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8449600460
num_examples: 1663
download_size: 8481929815
dataset_size: 8449600460
- config_name: subset_516
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8490244498
num_examples: 1704
download_size: 8522811553
dataset_size: 8490244498
- config_name: subset_517
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8511404348
num_examples: 1681
download_size: 8545098624
dataset_size: 8511404348
- config_name: subset_518
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8591455689
num_examples: 1674
download_size: 8624241495
dataset_size: 8591455689
- config_name: subset_519
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8593419410
num_examples: 1727
download_size: 8627485550
dataset_size: 8593419410
- config_name: subset_52
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11020293083
num_examples: 1601
download_size: 11058125578
dataset_size: 11020293083
- config_name: subset_520
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8182579727
num_examples: 1642
download_size: 8214372099
dataset_size: 8182579727
- config_name: subset_521
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8553497907
num_examples: 1689
download_size: 8587345591
dataset_size: 8553497907
- config_name: subset_522
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8447619989
num_examples: 1687
download_size: 8480217100
dataset_size: 8447619989
- config_name: subset_523
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8605695102
num_examples: 1686
download_size: 8639343629
dataset_size: 8605695102
- config_name: subset_524
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8230959805
num_examples: 1662
download_size: 8263208309
dataset_size: 8230959805
- config_name: subset_525
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8510842070
num_examples: 1665
download_size: 8544182808
dataset_size: 8510842070
- config_name: subset_526
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8408423738
num_examples: 1685
download_size: 8441048945
dataset_size: 8408423738
- config_name: subset_527
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8219454542
num_examples: 1633
download_size: 8251593706
dataset_size: 8219454542
- config_name: subset_528
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8475244982
num_examples: 1700
download_size: 8507313060
dataset_size: 8475244982
- config_name: subset_529
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8333066335
num_examples: 1680
download_size: 8365502946
dataset_size: 8333066335
- config_name: subset_53
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12267977115
num_examples: 1816
download_size: 12310050810
dataset_size: 12267977115
- config_name: subset_530
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8575189198
num_examples: 1681
download_size: 8608931147
dataset_size: 8575189198
- config_name: subset_531
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8418213352
num_examples: 1688
download_size: 8450742964
dataset_size: 8418213352
- config_name: subset_532
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4996766459
num_examples: 1030
download_size: 5016215095
dataset_size: 4996766459
- config_name: subset_54
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14247382247
num_examples: 2086
download_size: 14296226830
dataset_size: 14247382247
- config_name: subset_55
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14238518466
num_examples: 2130
download_size: 14287390000
dataset_size: 14238518466
- config_name: subset_56
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14607130726
num_examples: 2168
download_size: 14657506181
dataset_size: 14607130726
- config_name: subset_57
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14210056261
num_examples: 2122
download_size: 14259218666
dataset_size: 14210056261
- config_name: subset_58
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14211054746
num_examples: 2116
download_size: 14259893924
dataset_size: 14211054746
- config_name: subset_59
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14605275078
num_examples: 2122
download_size: 14655148647
dataset_size: 14605275078
- config_name: subset_6
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16966145283
num_examples: 2212
download_size: 17020515363
dataset_size: 16966145283
- config_name: subset_60
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14182365223
num_examples: 2148
download_size: 14231621670
dataset_size: 14182365223
- config_name: subset_61
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14125830113
num_examples: 2099
download_size: 14174466038
dataset_size: 14125830113
- config_name: subset_62
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14486968006
num_examples: 2164
download_size: 14536306900
dataset_size: 14486968006
- config_name: subset_63
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14272019850
num_examples: 2119
download_size: 14320951437
dataset_size: 14272019850
- config_name: subset_64
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13951158579
num_examples: 2114
download_size: 13998495055
dataset_size: 13951158579
- config_name: subset_65
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14091720924
num_examples: 2101
download_size: 14140547460
dataset_size: 14091720924
- config_name: subset_66
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14109820088
num_examples: 2142
download_size: 14158896675
dataset_size: 14109820088
- config_name: subset_67
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14005906998
num_examples: 2113
download_size: 14054155208
dataset_size: 14005906998
- config_name: subset_68
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14301837003
num_examples: 2160
download_size: 14351134493
dataset_size: 14301837003
- config_name: subset_69
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13966734315
num_examples: 2117
download_size: 14014525192
dataset_size: 13966734315
- config_name: subset_7
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 17450587514
num_examples: 2266
download_size: 17506459251
dataset_size: 17450587514
- config_name: subset_70
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14219970789
num_examples: 2148
download_size: 14269024541
dataset_size: 14219970789
- config_name: subset_71
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14501306569
num_examples: 2167
download_size: 14551012326
dataset_size: 14501306569
- config_name: subset_72
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13769708196
num_examples: 2109
download_size: 13817441686
dataset_size: 13769708196
- config_name: subset_73
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13883416692
num_examples: 2117
download_size: 13931221847
dataset_size: 13883416692
- config_name: subset_74
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14219211013
num_examples: 2160
download_size: 14268320291
dataset_size: 14219211013
- config_name: subset_75
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13926577539
num_examples: 2115
download_size: 13974286633
dataset_size: 13926577539
- config_name: subset_76
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14312562492
num_examples: 2142
download_size: 14361474867
dataset_size: 14312562492
- config_name: subset_77
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14192446366
num_examples: 2145
download_size: 14241283219
dataset_size: 14192446366
- config_name: subset_78
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13896932954
num_examples: 2118
download_size: 13944745673
dataset_size: 13896932954
- config_name: subset_79
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13740365813
num_examples: 2143
download_size: 13788404251
dataset_size: 13740365813
- config_name: subset_8
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 17326265954
num_examples: 2240
download_size: 17382031150
dataset_size: 17326265954
- config_name: subset_80
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13956747719
num_examples: 2148
download_size: 14004630176
dataset_size: 13956747719
- config_name: subset_81
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13911203686
num_examples: 2137
download_size: 13959165662
dataset_size: 13911203686
- config_name: subset_82
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13638266766
num_examples: 2122
download_size: 13686243724
dataset_size: 13638266766
- config_name: subset_83
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13903350636
num_examples: 2137
download_size: 13951036539
dataset_size: 13903350636
- config_name: subset_84
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14069026843
num_examples: 2155
download_size: 14117216022
dataset_size: 14069026843
- config_name: subset_85
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13906708923
num_examples: 2142
download_size: 13954600557
dataset_size: 13906708923
- config_name: subset_86
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13356555486
num_examples: 2116
download_size: 13403132164
dataset_size: 13356555486
- config_name: subset_87
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13674103379
num_examples: 2103
download_size: 13721834580
dataset_size: 13674103379
- config_name: subset_88
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13773798807
num_examples: 2142
download_size: 13821765327
dataset_size: 13773798807
- config_name: subset_89
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13775818415
num_examples: 2126
download_size: 13823549616
dataset_size: 13775818415
- config_name: subset_9
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 17101700745
num_examples: 2236
download_size: 17157344999
dataset_size: 17101700745
- config_name: subset_90
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13384785417
num_examples: 2103
download_size: 13431244462
dataset_size: 13384785417
- config_name: subset_91
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13360679867
num_examples: 2093
download_size: 13406930008
dataset_size: 13360679867
- config_name: subset_92
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13535363269
num_examples: 2130
download_size: 13583293758
dataset_size: 13535363269
- config_name: subset_93
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13441592585
num_examples: 2095
download_size: 13487809010
dataset_size: 13441592585
- config_name: subset_94
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13422431033
num_examples: 2124
download_size: 13469001040
dataset_size: 13422431033
- config_name: subset_95
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13733605259
num_examples: 2160
download_size: 13781705313
dataset_size: 13733605259
- config_name: subset_96
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13503605486
num_examples: 2132
download_size: 13551490247
dataset_size: 13503605486
- config_name: subset_97
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13765914180
num_examples: 2132
download_size: 13813630773
dataset_size: 13765914180
- config_name: subset_98
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13111453176
num_examples: 2046
download_size: 13157636795
dataset_size: 13111453176
- config_name: subset_99
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: esA.id
dtype: string
- name: esA.laser_score
dtype: float64
- name: esA.audio.speaker_embedding
sequence: float32
- name: esA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13252223433
num_examples: 2038
download_size: 13298410606
dataset_size: 13252223433
configs:
- config_name: subset_1
data_files:
- split: train
path: subset_1/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_10/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_100/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_101/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_102/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_103/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_104/train-*
- config_name: subset_105
data_files:
- split: train
path: subset_105/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_106/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_107/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_108/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_109/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_11/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_110/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_111/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_112/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_113/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_114/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_115/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_116/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_117/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_118/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_119/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_12/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_120/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_121/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_122/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_123/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_124/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_125/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_126/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_127/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_128/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_129/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_13/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_130/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_131/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_132/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_133/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_134/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_135/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_136/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_137/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_138/train-*
- config_name: subset_139
data_files:
- split: train
path: subset_139/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_14/train-*
- config_name: subset_140
data_files:
- split: train
path: subset_140/train-*
- config_name: subset_141
data_files:
- split: train
path: subset_141/train-*
- config_name: subset_142
data_files:
- split: train
path: subset_142/train-*
- config_name: subset_143
data_files:
- split: train
path: subset_143/train-*
- config_name: subset_144
data_files:
- split: train
path: subset_144/train-*
- config_name: subset_145
data_files:
- split: train
path: subset_145/train-*
- config_name: subset_146
data_files:
- split: train
path: subset_146/train-*
- config_name: subset_147
data_files:
- split: train
path: subset_147/train-*
- config_name: subset_148
data_files:
- split: train
path: subset_148/train-*
- config_name: subset_149
data_files:
- split: train
path: subset_149/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_15/train-*
- config_name: subset_150
data_files:
- split: train
path: subset_150/train-*
- config_name: subset_151
data_files:
- split: train
path: subset_151/train-*
- config_name: subset_152
data_files:
- split: train
path: subset_152/train-*
- config_name: subset_153
data_files:
- split: train
path: subset_153/train-*
- config_name: subset_154
data_files:
- split: train
path: subset_154/train-*
- config_name: subset_155
data_files:
- split: train
path: subset_155/train-*
- config_name: subset_156
data_files:
- split: train
path: subset_156/train-*
- config_name: subset_157
data_files:
- split: train
path: subset_157/train-*
- config_name: subset_158
data_files:
- split: train
path: subset_158/train-*
- config_name: subset_159
data_files:
- split: train
path: subset_159/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_16/train-*
- config_name: subset_160
data_files:
- split: train
path: subset_160/train-*
- config_name: subset_161
data_files:
- split: train
path: subset_161/train-*
- config_name: subset_162
data_files:
- split: train
path: subset_162/train-*
- config_name: subset_163
data_files:
- split: train
path: subset_163/train-*
- config_name: subset_164
data_files:
- split: train
path: subset_164/train-*
- config_name: subset_165
data_files:
- split: train
path: subset_165/train-*
- config_name: subset_166
data_files:
- split: train
path: subset_166/train-*
- config_name: subset_167
data_files:
- split: train
path: subset_167/train-*
- config_name: subset_168
data_files:
- split: train
path: subset_168/train-*
- config_name: subset_169
data_files:
- split: train
path: subset_169/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_17/train-*
- config_name: subset_170
data_files:
- split: train
path: subset_170/train-*
- config_name: subset_171
data_files:
- split: train
path: subset_171/train-*
- config_name: subset_172
data_files:
- split: train
path: subset_172/train-*
- config_name: subset_173
data_files:
- split: train
path: subset_173/train-*
- config_name: subset_174
data_files:
- split: train
path: subset_174/train-*
- config_name: subset_175
data_files:
- split: train
path: subset_175/train-*
- config_name: subset_176
data_files:
- split: train
path: subset_176/train-*
- config_name: subset_177
data_files:
- split: train
path: subset_177/train-*
- config_name: subset_178
data_files:
- split: train
path: subset_178/train-*
- config_name: subset_179
data_files:
- split: train
path: subset_179/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_18/train-*
- config_name: subset_180
data_files:
- split: train
path: subset_180/train-*
- config_name: subset_181
data_files:
- split: train
path: subset_181/train-*
- config_name: subset_182
data_files:
- split: train
path: subset_182/train-*
- config_name: subset_183
data_files:
- split: train
path: subset_183/train-*
- config_name: subset_184
data_files:
- split: train
path: subset_184/train-*
- config_name: subset_185
data_files:
- split: train
path: subset_185/train-*
- config_name: subset_186
data_files:
- split: train
path: subset_186/train-*
- config_name: subset_187
data_files:
- split: train
path: subset_187/train-*
- config_name: subset_188
data_files:
- split: train
path: subset_188/train-*
- config_name: subset_189
data_files:
- split: train
path: subset_189/train-*
- config_name: subset_19
data_files:
- split: train
path: subset_19/train-*
- config_name: subset_190
data_files:
- split: train
path: subset_190/train-*
- config_name: subset_191
data_files:
- split: train
path: subset_191/train-*
- config_name: subset_192
data_files:
- split: train
path: subset_192/train-*
- config_name: subset_193
data_files:
- split: train
path: subset_193/train-*
- config_name: subset_194
data_files:
- split: train
path: subset_194/train-*
- config_name: subset_195
data_files:
- split: train
path: subset_195/train-*
- config_name: subset_196
data_files:
- split: train
path: subset_196/train-*
- config_name: subset_197
data_files:
- split: train
path: subset_197/train-*
- config_name: subset_198
data_files:
- split: train
path: subset_198/train-*
- config_name: subset_199
data_files:
- split: train
path: subset_199/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_2/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_20/train-*
- config_name: subset_200
data_files:
- split: train
path: subset_200/train-*
- config_name: subset_201
data_files:
- split: train
path: subset_201/train-*
- config_name: subset_202
data_files:
- split: train
path: subset_202/train-*
- config_name: subset_203
data_files:
- split: train
path: subset_203/train-*
- config_name: subset_204
data_files:
- split: train
path: subset_204/train-*
- config_name: subset_205
data_files:
- split: train
path: subset_205/train-*
- config_name: subset_206
data_files:
- split: train
path: subset_206/train-*
- config_name: subset_207
data_files:
- split: train
path: subset_207/train-*
- config_name: subset_208
data_files:
- split: train
path: subset_208/train-*
- config_name: subset_209
data_files:
- split: train
path: subset_209/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_21/train-*
- config_name: subset_210
data_files:
- split: train
path: subset_210/train-*
- config_name: subset_211
data_files:
- split: train
path: subset_211/train-*
- config_name: subset_212
data_files:
- split: train
path: subset_212/train-*
- config_name: subset_213
data_files:
- split: train
path: subset_213/train-*
- config_name: subset_214
data_files:
- split: train
path: subset_214/train-*
- config_name: subset_215
data_files:
- split: train
path: subset_215/train-*
- config_name: subset_216
data_files:
- split: train
path: subset_216/train-*
- config_name: subset_217
data_files:
- split: train
path: subset_217/train-*
- config_name: subset_218
data_files:
- split: train
path: subset_218/train-*
- config_name: subset_219
data_files:
- split: train
path: subset_219/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_22/train-*
- config_name: subset_220
data_files:
- split: train
path: subset_220/train-*
- config_name: subset_221
data_files:
- split: train
path: subset_221/train-*
- config_name: subset_222
data_files:
- split: train
path: subset_222/train-*
- config_name: subset_223
data_files:
- split: train
path: subset_223/train-*
- config_name: subset_224
data_files:
- split: train
path: subset_224/train-*
- config_name: subset_225
data_files:
- split: train
path: subset_225/train-*
- config_name: subset_226
data_files:
- split: train
path: subset_226/train-*
- config_name: subset_227
data_files:
- split: train
path: subset_227/train-*
- config_name: subset_228
data_files:
- split: train
path: subset_228/train-*
- config_name: subset_229
data_files:
- split: train
path: subset_229/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_23/train-*
- config_name: subset_230
data_files:
- split: train
path: subset_230/train-*
- config_name: subset_231
data_files:
- split: train
path: subset_231/train-*
- config_name: subset_232
data_files:
- split: train
path: subset_232/train-*
- config_name: subset_233
data_files:
- split: train
path: subset_233/train-*
- config_name: subset_234
data_files:
- split: train
path: subset_234/train-*
- config_name: subset_235
data_files:
- split: train
path: subset_235/train-*
- config_name: subset_236
data_files:
- split: train
path: subset_236/train-*
- config_name: subset_237
data_files:
- split: train
path: subset_237/train-*
- config_name: subset_238
data_files:
- split: train
path: subset_238/train-*
- config_name: subset_239
data_files:
- split: train
path: subset_239/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_24/train-*
- config_name: subset_240
data_files:
- split: train
path: subset_240/train-*
- config_name: subset_241
data_files:
- split: train
path: subset_241/train-*
- config_name: subset_242
data_files:
- split: train
path: subset_242/train-*
- config_name: subset_243
data_files:
- split: train
path: subset_243/train-*
- config_name: subset_244
data_files:
- split: train
path: subset_244/train-*
- config_name: subset_245
data_files:
- split: train
path: subset_245/train-*
- config_name: subset_246
data_files:
- split: train
path: subset_246/train-*
- config_name: subset_247
data_files:
- split: train
path: subset_247/train-*
- config_name: subset_248
data_files:
- split: train
path: subset_248/train-*
- config_name: subset_249
data_files:
- split: train
path: subset_249/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_25/train-*
- config_name: subset_250
data_files:
- split: train
path: subset_250/train-*
- config_name: subset_251
data_files:
- split: train
path: subset_251/train-*
- config_name: subset_252
data_files:
- split: train
path: subset_252/train-*
- config_name: subset_253
data_files:
- split: train
path: subset_253/train-*
- config_name: subset_254
data_files:
- split: train
path: subset_254/train-*
- config_name: subset_255
data_files:
- split: train
path: subset_255/train-*
- config_name: subset_256
data_files:
- split: train
path: subset_256/train-*
- config_name: subset_257
data_files:
- split: train
path: subset_257/train-*
- config_name: subset_258
data_files:
- split: train
path: subset_258/train-*
- config_name: subset_259
data_files:
- split: train
path: subset_259/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_26/train-*
- config_name: subset_260
data_files:
- split: train
path: subset_260/train-*
- config_name: subset_261
data_files:
- split: train
path: subset_261/train-*
- config_name: subset_262
data_files:
- split: train
path: subset_262/train-*
- config_name: subset_263
data_files:
- split: train
path: subset_263/train-*
- config_name: subset_264
data_files:
- split: train
path: subset_264/train-*
- config_name: subset_265
data_files:
- split: train
path: subset_265/train-*
- config_name: subset_266
data_files:
- split: train
path: subset_266/train-*
- config_name: subset_267
data_files:
- split: train
path: subset_267/train-*
- config_name: subset_268
data_files:
- split: train
path: subset_268/train-*
- config_name: subset_269
data_files:
- split: train
path: subset_269/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_27/train-*
- config_name: subset_270
data_files:
- split: train
path: subset_270/train-*
- config_name: subset_271
data_files:
- split: train
path: subset_271/train-*
- config_name: subset_272
data_files:
- split: train
path: subset_272/train-*
- config_name: subset_273
data_files:
- split: train
path: subset_273/train-*
- config_name: subset_274
data_files:
- split: train
path: subset_274/train-*
- config_name: subset_275
data_files:
- split: train
path: subset_275/train-*
- config_name: subset_276
data_files:
- split: train
path: subset_276/train-*
- config_name: subset_277
data_files:
- split: train
path: subset_277/train-*
- config_name: subset_278
data_files:
- split: train
path: subset_278/train-*
- config_name: subset_279
data_files:
- split: train
path: subset_279/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_28/train-*
- config_name: subset_280
data_files:
- split: train
path: subset_280/train-*
- config_name: subset_281
data_files:
- split: train
path: subset_281/train-*
- config_name: subset_282
data_files:
- split: train
path: subset_282/train-*
- config_name: subset_283
data_files:
- split: train
path: subset_283/train-*
- config_name: subset_284
data_files:
- split: train
path: subset_284/train-*
- config_name: subset_285
data_files:
- split: train
path: subset_285/train-*
- config_name: subset_286
data_files:
- split: train
path: subset_286/train-*
- config_name: subset_287
data_files:
- split: train
path: subset_287/train-*
- config_name: subset_288
data_files:
- split: train
path: subset_288/train-*
- config_name: subset_289
data_files:
- split: train
path: subset_289/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_29/train-*
- config_name: subset_290
data_files:
- split: train
path: subset_290/train-*
- config_name: subset_291
data_files:
- split: train
path: subset_291/train-*
- config_name: subset_292
data_files:
- split: train
path: subset_292/train-*
- config_name: subset_293
data_files:
- split: train
path: subset_293/train-*
- config_name: subset_294
data_files:
- split: train
path: subset_294/train-*
- config_name: subset_295
data_files:
- split: train
path: subset_295/train-*
- config_name: subset_296
data_files:
- split: train
path: subset_296/train-*
- config_name: subset_297
data_files:
- split: train
path: subset_297/train-*
- config_name: subset_298
data_files:
- split: train
path: subset_298/train-*
- config_name: subset_299
data_files:
- split: train
path: subset_299/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_3/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_30/train-*
- config_name: subset_300
data_files:
- split: train
path: subset_300/train-*
- config_name: subset_301
data_files:
- split: train
path: subset_301/train-*
- config_name: subset_302
data_files:
- split: train
path: subset_302/train-*
- config_name: subset_303
data_files:
- split: train
path: subset_303/train-*
- config_name: subset_304
data_files:
- split: train
path: subset_304/train-*
- config_name: subset_305
data_files:
- split: train
path: subset_305/train-*
- config_name: subset_306
data_files:
- split: train
path: subset_306/train-*
- config_name: subset_307
data_files:
- split: train
path: subset_307/train-*
- config_name: subset_308
data_files:
- split: train
path: subset_308/train-*
- config_name: subset_309
data_files:
- split: train
path: subset_309/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_31/train-*
- config_name: subset_310
data_files:
- split: train
path: subset_310/train-*
- config_name: subset_311
data_files:
- split: train
path: subset_311/train-*
- config_name: subset_312
data_files:
- split: train
path: subset_312/train-*
- config_name: subset_313
data_files:
- split: train
path: subset_313/train-*
- config_name: subset_314
data_files:
- split: train
path: subset_314/train-*
- config_name: subset_315
data_files:
- split: train
path: subset_315/train-*
- config_name: subset_316
data_files:
- split: train
path: subset_316/train-*
- config_name: subset_317
data_files:
- split: train
path: subset_317/train-*
- config_name: subset_318
data_files:
- split: train
path: subset_318/train-*
- config_name: subset_319
data_files:
- split: train
path: subset_319/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_32/train-*
- config_name: subset_320
data_files:
- split: train
path: subset_320/train-*
- config_name: subset_321
data_files:
- split: train
path: subset_321/train-*
- config_name: subset_322
data_files:
- split: train
path: subset_322/train-*
- config_name: subset_323
data_files:
- split: train
path: subset_323/train-*
- config_name: subset_324
data_files:
- split: train
path: subset_324/train-*
- config_name: subset_325
data_files:
- split: train
path: subset_325/train-*
- config_name: subset_326
data_files:
- split: train
path: subset_326/train-*
- config_name: subset_327
data_files:
- split: train
path: subset_327/train-*
- config_name: subset_328
data_files:
- split: train
path: subset_328/train-*
- config_name: subset_329
data_files:
- split: train
path: subset_329/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_33/train-*
- config_name: subset_330
data_files:
- split: train
path: subset_330/train-*
- config_name: subset_331
data_files:
- split: train
path: subset_331/train-*
- config_name: subset_332
data_files:
- split: train
path: subset_332/train-*
- config_name: subset_333
data_files:
- split: train
path: subset_333/train-*
- config_name: subset_334
data_files:
- split: train
path: subset_334/train-*
- config_name: subset_335
data_files:
- split: train
path: subset_335/train-*
- config_name: subset_336
data_files:
- split: train
path: subset_336/train-*
- config_name: subset_337
data_files:
- split: train
path: subset_337/train-*
- config_name: subset_338
data_files:
- split: train
path: subset_338/train-*
- config_name: subset_339
data_files:
- split: train
path: subset_339/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_34/train-*
- config_name: subset_340
data_files:
- split: train
path: subset_340/train-*
- config_name: subset_341
data_files:
- split: train
path: subset_341/train-*
- config_name: subset_342
data_files:
- split: train
path: subset_342/train-*
- config_name: subset_343
data_files:
- split: train
path: subset_343/train-*
- config_name: subset_344
data_files:
- split: train
path: subset_344/train-*
- config_name: subset_345
data_files:
- split: train
path: subset_345/train-*
- config_name: subset_346
data_files:
- split: train
path: subset_346/train-*
- config_name: subset_347
data_files:
- split: train
path: subset_347/train-*
- config_name: subset_348
data_files:
- split: train
path: subset_348/train-*
- config_name: subset_349
data_files:
- split: train
path: subset_349/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_35/train-*
- config_name: subset_350
data_files:
- split: train
path: subset_350/train-*
- config_name: subset_351
data_files:
- split: train
path: subset_351/train-*
- config_name: subset_352
data_files:
- split: train
path: subset_352/train-*
- config_name: subset_353
data_files:
- split: train
path: subset_353/train-*
- config_name: subset_354
data_files:
- split: train
path: subset_354/train-*
- config_name: subset_355
data_files:
- split: train
path: subset_355/train-*
- config_name: subset_356
data_files:
- split: train
path: subset_356/train-*
- config_name: subset_357
data_files:
- split: train
path: subset_357/train-*
- config_name: subset_358
data_files:
- split: train
path: subset_358/train-*
- config_name: subset_359
data_files:
- split: train
path: subset_359/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_36/train-*
- config_name: subset_360
data_files:
- split: train
path: subset_360/train-*
- config_name: subset_361
data_files:
- split: train
path: subset_361/train-*
- config_name: subset_362
data_files:
- split: train
path: subset_362/train-*
- config_name: subset_363
data_files:
- split: train
path: subset_363/train-*
- config_name: subset_364
data_files:
- split: train
path: subset_364/train-*
- config_name: subset_365
data_files:
- split: train
path: subset_365/train-*
- config_name: subset_366
data_files:
- split: train
path: subset_366/train-*
- config_name: subset_367
data_files:
- split: train
path: subset_367/train-*
- config_name: subset_368
data_files:
- split: train
path: subset_368/train-*
- config_name: subset_369
data_files:
- split: train
path: subset_369/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_37/train-*
- config_name: subset_370
data_files:
- split: train
path: subset_370/train-*
- config_name: subset_371
data_files:
- split: train
path: subset_371/train-*
- config_name: subset_372
data_files:
- split: train
path: subset_372/train-*
- config_name: subset_373
data_files:
- split: train
path: subset_373/train-*
- config_name: subset_374
data_files:
- split: train
path: subset_374/train-*
- config_name: subset_375
data_files:
- split: train
path: subset_375/train-*
- config_name: subset_376
data_files:
- split: train
path: subset_376/train-*
- config_name: subset_377
data_files:
- split: train
path: subset_377/train-*
- config_name: subset_378
data_files:
- split: train
path: subset_378/train-*
- config_name: subset_379
data_files:
- split: train
path: subset_379/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_38/train-*
- config_name: subset_380
data_files:
- split: train
path: subset_380/train-*
- config_name: subset_381
data_files:
- split: train
path: subset_381/train-*
- config_name: subset_382
data_files:
- split: train
path: subset_382/train-*
- config_name: subset_383
data_files:
- split: train
path: subset_383/train-*
- config_name: subset_384
data_files:
- split: train
path: subset_384/train-*
- config_name: subset_385
data_files:
- split: train
path: subset_385/train-*
- config_name: subset_386
data_files:
- split: train
path: subset_386/train-*
- config_name: subset_387
data_files:
- split: train
path: subset_387/train-*
- config_name: subset_388
data_files:
- split: train
path: subset_388/train-*
- config_name: subset_389
data_files:
- split: train
path: subset_389/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_39/train-*
- config_name: subset_390
data_files:
- split: train
path: subset_390/train-*
- config_name: subset_391
data_files:
- split: train
path: subset_391/train-*
- config_name: subset_392
data_files:
- split: train
path: subset_392/train-*
- config_name: subset_393
data_files:
- split: train
path: subset_393/train-*
- config_name: subset_394
data_files:
- split: train
path: subset_394/train-*
- config_name: subset_395
data_files:
- split: train
path: subset_395/train-*
- config_name: subset_396
data_files:
- split: train
path: subset_396/train-*
- config_name: subset_397
data_files:
- split: train
path: subset_397/train-*
- config_name: subset_398
data_files:
- split: train
path: subset_398/train-*
- config_name: subset_399
data_files:
- split: train
path: subset_399/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_4/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_40/train-*
- config_name: subset_400
data_files:
- split: train
path: subset_400/train-*
- config_name: subset_401
data_files:
- split: train
path: subset_401/train-*
- config_name: subset_402
data_files:
- split: train
path: subset_402/train-*
- config_name: subset_403
data_files:
- split: train
path: subset_403/train-*
- config_name: subset_404
data_files:
- split: train
path: subset_404/train-*
- config_name: subset_405
data_files:
- split: train
path: subset_405/train-*
- config_name: subset_406
data_files:
- split: train
path: subset_406/train-*
- config_name: subset_407
data_files:
- split: train
path: subset_407/train-*
- config_name: subset_408
data_files:
- split: train
path: subset_408/train-*
- config_name: subset_409
data_files:
- split: train
path: subset_409/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_41/train-*
- config_name: subset_410
data_files:
- split: train
path: subset_410/train-*
- config_name: subset_411
data_files:
- split: train
path: subset_411/train-*
- config_name: subset_412
data_files:
- split: train
path: subset_412/train-*
- config_name: subset_413
data_files:
- split: train
path: subset_413/train-*
- config_name: subset_414
data_files:
- split: train
path: subset_414/train-*
- config_name: subset_415
data_files:
- split: train
path: subset_415/train-*
- config_name: subset_416
data_files:
- split: train
path: subset_416/train-*
- config_name: subset_417
data_files:
- split: train
path: subset_417/train-*
- config_name: subset_418
data_files:
- split: train
path: subset_418/train-*
- config_name: subset_419
data_files:
- split: train
path: subset_419/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_42/train-*
- config_name: subset_420
data_files:
- split: train
path: subset_420/train-*
- config_name: subset_421
data_files:
- split: train
path: subset_421/train-*
- config_name: subset_422
data_files:
- split: train
path: subset_422/train-*
- config_name: subset_423
data_files:
- split: train
path: subset_423/train-*
- config_name: subset_424
data_files:
- split: train
path: subset_424/train-*
- config_name: subset_425
data_files:
- split: train
path: subset_425/train-*
- config_name: subset_426
data_files:
- split: train
path: subset_426/train-*
- config_name: subset_427
data_files:
- split: train
path: subset_427/train-*
- config_name: subset_428
data_files:
- split: train
path: subset_428/train-*
- config_name: subset_429
data_files:
- split: train
path: subset_429/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_43/train-*
- config_name: subset_430
data_files:
- split: train
path: subset_430/train-*
- config_name: subset_431
data_files:
- split: train
path: subset_431/train-*
- config_name: subset_432
data_files:
- split: train
path: subset_432/train-*
- config_name: subset_433
data_files:
- split: train
path: subset_433/train-*
- config_name: subset_434
data_files:
- split: train
path: subset_434/train-*
- config_name: subset_435
data_files:
- split: train
path: subset_435/train-*
- config_name: subset_436
data_files:
- split: train
path: subset_436/train-*
- config_name: subset_437
data_files:
- split: train
path: subset_437/train-*
- config_name: subset_438
data_files:
- split: train
path: subset_438/train-*
- config_name: subset_439
data_files:
- split: train
path: subset_439/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_44/train-*
- config_name: subset_440
data_files:
- split: train
path: subset_440/train-*
- config_name: subset_441
data_files:
- split: train
path: subset_441/train-*
- config_name: subset_442
data_files:
- split: train
path: subset_442/train-*
- config_name: subset_443
data_files:
- split: train
path: subset_443/train-*
- config_name: subset_444
data_files:
- split: train
path: subset_444/train-*
- config_name: subset_445
data_files:
- split: train
path: subset_445/train-*
- config_name: subset_446
data_files:
- split: train
path: subset_446/train-*
- config_name: subset_447
data_files:
- split: train
path: subset_447/train-*
- config_name: subset_448
data_files:
- split: train
path: subset_448/train-*
- config_name: subset_449
data_files:
- split: train
path: subset_449/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_45/train-*
- config_name: subset_450
data_files:
- split: train
path: subset_450/train-*
- config_name: subset_451
data_files:
- split: train
path: subset_451/train-*
- config_name: subset_452
data_files:
- split: train
path: subset_452/train-*
- config_name: subset_453
data_files:
- split: train
path: subset_453/train-*
- config_name: subset_454
data_files:
- split: train
path: subset_454/train-*
- config_name: subset_455
data_files:
- split: train
path: subset_455/train-*
- config_name: subset_456
data_files:
- split: train
path: subset_456/train-*
- config_name: subset_457
data_files:
- split: train
path: subset_457/train-*
- config_name: subset_458
data_files:
- split: train
path: subset_458/train-*
- config_name: subset_459
data_files:
- split: train
path: subset_459/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_46/train-*
- config_name: subset_460
data_files:
- split: train
path: subset_460/train-*
- config_name: subset_461
data_files:
- split: train
path: subset_461/train-*
- config_name: subset_462
data_files:
- split: train
path: subset_462/train-*
- config_name: subset_463
data_files:
- split: train
path: subset_463/train-*
- config_name: subset_464
data_files:
- split: train
path: subset_464/train-*
- config_name: subset_465
data_files:
- split: train
path: subset_465/train-*
- config_name: subset_466
data_files:
- split: train
path: subset_466/train-*
- config_name: subset_467
data_files:
- split: train
path: subset_467/train-*
- config_name: subset_468
data_files:
- split: train
path: subset_468/train-*
- config_name: subset_469
data_files:
- split: train
path: subset_469/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_47/train-*
- config_name: subset_470
data_files:
- split: train
path: subset_470/train-*
- config_name: subset_471
data_files:
- split: train
path: subset_471/train-*
- config_name: subset_472
data_files:
- split: train
path: subset_472/train-*
- config_name: subset_473
data_files:
- split: train
path: subset_473/train-*
- config_name: subset_474
data_files:
- split: train
path: subset_474/train-*
- config_name: subset_475
data_files:
- split: train
path: subset_475/train-*
- config_name: subset_476
data_files:
- split: train
path: subset_476/train-*
- config_name: subset_477
data_files:
- split: train
path: subset_477/train-*
- config_name: subset_478
data_files:
- split: train
path: subset_478/train-*
- config_name: subset_479
data_files:
- split: train
path: subset_479/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_48/train-*
- config_name: subset_480
data_files:
- split: train
path: subset_480/train-*
- config_name: subset_481
data_files:
- split: train
path: subset_481/train-*
- config_name: subset_482
data_files:
- split: train
path: subset_482/train-*
- config_name: subset_483
data_files:
- split: train
path: subset_483/train-*
- config_name: subset_484
data_files:
- split: train
path: subset_484/train-*
- config_name: subset_485
data_files:
- split: train
path: subset_485/train-*
- config_name: subset_486
data_files:
- split: train
path: subset_486/train-*
- config_name: subset_487
data_files:
- split: train
path: subset_487/train-*
- config_name: subset_488
data_files:
- split: train
path: subset_488/train-*
- config_name: subset_489
data_files:
- split: train
path: subset_489/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_49/train-*
- config_name: subset_490
data_files:
- split: train
path: subset_490/train-*
- config_name: subset_491
data_files:
- split: train
path: subset_491/train-*
- config_name: subset_492
data_files:
- split: train
path: subset_492/train-*
- config_name: subset_493
data_files:
- split: train
path: subset_493/train-*
- config_name: subset_494
data_files:
- split: train
path: subset_494/train-*
- config_name: subset_495
data_files:
- split: train
path: subset_495/train-*
- config_name: subset_496
data_files:
- split: train
path: subset_496/train-*
- config_name: subset_497
data_files:
- split: train
path: subset_497/train-*
- config_name: subset_498
data_files:
- split: train
path: subset_498/train-*
- config_name: subset_499
data_files:
- split: train
path: subset_499/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_5/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_50/train-*
- config_name: subset_500
data_files:
- split: train
path: subset_500/train-*
- config_name: subset_501
data_files:
- split: train
path: subset_501/train-*
- config_name: subset_502
data_files:
- split: train
path: subset_502/train-*
- config_name: subset_503
data_files:
- split: train
path: subset_503/train-*
- config_name: subset_504
data_files:
- split: train
path: subset_504/train-*
- config_name: subset_505
data_files:
- split: train
path: subset_505/train-*
- config_name: subset_506
data_files:
- split: train
path: subset_506/train-*
- config_name: subset_507
data_files:
- split: train
path: subset_507/train-*
- config_name: subset_508
data_files:
- split: train
path: subset_508/train-*
- config_name: subset_509
data_files:
- split: train
path: subset_509/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_51/train-*
- config_name: subset_510
data_files:
- split: train
path: subset_510/train-*
- config_name: subset_511
data_files:
- split: train
path: subset_511/train-*
- config_name: subset_512
data_files:
- split: train
path: subset_512/train-*
- config_name: subset_513
data_files:
- split: train
path: subset_513/train-*
- config_name: subset_514
data_files:
- split: train
path: subset_514/train-*
- config_name: subset_515
data_files:
- split: train
path: subset_515/train-*
- config_name: subset_516
data_files:
- split: train
path: subset_516/train-*
- config_name: subset_517
data_files:
- split: train
path: subset_517/train-*
- config_name: subset_518
data_files:
- split: train
path: subset_518/train-*
- config_name: subset_519
data_files:
- split: train
path: subset_519/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_52/train-*
- config_name: subset_520
data_files:
- split: train
path: subset_520/train-*
- config_name: subset_521
data_files:
- split: train
path: subset_521/train-*
- config_name: subset_522
data_files:
- split: train
path: subset_522/train-*
- config_name: subset_523
data_files:
- split: train
path: subset_523/train-*
- config_name: subset_524
data_files:
- split: train
path: subset_524/train-*
- config_name: subset_525
data_files:
- split: train
path: subset_525/train-*
- config_name: subset_526
data_files:
- split: train
path: subset_526/train-*
- config_name: subset_527
data_files:
- split: train
path: subset_527/train-*
- config_name: subset_528
data_files:
- split: train
path: subset_528/train-*
- config_name: subset_529
data_files:
- split: train
path: subset_529/train-*
- config_name: subset_53
data_files:
- split: train
path: subset_53/train-*
- config_name: subset_530
data_files:
- split: train
path: subset_530/train-*
- config_name: subset_531
data_files:
- split: train
path: subset_531/train-*
- config_name: subset_532
data_files:
- split: train
path: subset_532/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_54/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_55/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_56/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_57/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_58/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_59/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_6/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_60/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_61/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_62/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_63/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_64/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_65/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_66/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_67/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_68/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_69/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_7/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_70/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_71/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_72/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_73/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_74/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_75/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_76/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_77/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_78/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_79/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_8/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_80/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_81/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_82/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_83/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_84/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_85/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_86/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_87/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_88/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_89/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_9/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_90/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_91/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_92/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_93/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_94/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_95/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_96/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_97/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_98/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_99/train-*
---
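
The `configs` list above registers each `subset_*` directory as its own configuration, each exposing a single `train` split built from files matching `subset_*/train-*`. As a minimal usage sketch (the repository id below is a placeholder, not taken from this card), one such configuration can be loaded with the `datasets` library:

```python
from datasets import load_dataset

# Placeholder repository id; replace with this dataset's actual "<namespace>/<name>".
REPO_ID = "<namespace>/<dataset-name>"

# Each config declared above (e.g. "subset_3") resolves to the files "subset_3/train-*"
# and exposes only a "train" split, so that is the split requested here.
ds = load_dataset(REPO_ID, name="subset_3", split="train")
print(ds)
```
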
|
japanese-asr/whisper_transcriptions.mls.wer_10.0 | japanese-asr | "2024-09-14T07:57:24Z" | 24,933 | 1 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-09-11T09:52:44Z" | ---
dataset_info:
- config_name: subset_0
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29741913577.241814
num_examples: 62101
download_size: 28406057868
dataset_size: 29741913577.241814
- config_name: subset_1
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29815585138.73427
num_examples: 62323
download_size: 28488972470
dataset_size: 29815585138.73427
- config_name: subset_10
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29831443458.675167
num_examples: 62172
download_size: 28490041949
dataset_size: 29831443458.675167
- config_name: subset_100
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29740102232.58974
num_examples: 62114
download_size: 28402573685
dataset_size: 29740102232.58974
- config_name: subset_101
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29804666990.485275
num_examples: 62225
download_size: 28477636147
dataset_size: 29804666990.485275
- config_name: subset_102
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29847859656.366245
num_examples: 62219
download_size: 28508104461
dataset_size: 29847859656.366245
- config_name: subset_103
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29776271336.13424
num_examples: 62248
download_size: 28453790146
dataset_size: 29776271336.13424
- config_name: subset_104
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29864954995.718533
num_examples: 62348
download_size: 28540369174
dataset_size: 29864954995.718533
- config_name: subset_105
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29845768222.852547
num_examples: 62287
download_size: 28508203679
dataset_size: 29845768222.852547
- config_name: subset_106
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29868915195.73696
num_examples: 62355
download_size: 28531446961
dataset_size: 29868915195.73696
- config_name: subset_107
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29833296511.762436
num_examples: 62252
download_size: 28502966117
dataset_size: 29833296511.762436
- config_name: subset_108
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29846554379.21017
num_examples: 62398
download_size: 28521313998
dataset_size: 29846554379.21017
- config_name: subset_109
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29808600165.9863
num_examples: 62240
download_size: 28473663596
dataset_size: 29808600165.9863
- config_name: subset_11
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29807895865.53131
num_examples: 62230
download_size: 28470625940
dataset_size: 29807895865.53131
- config_name: subset_110
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29829181073.93217
num_examples: 62281
download_size: 28508841100
dataset_size: 29829181073.93217
- config_name: subset_111
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29845682710.49548
num_examples: 62335
download_size: 28524753965
dataset_size: 29845682710.49548
- config_name: subset_112
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29826059756.774582
num_examples: 62252
download_size: 28493408051
dataset_size: 29826059756.774582
- config_name: subset_113
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29736425530.042995
num_examples: 62066
download_size: 28408328564
dataset_size: 29736425530.042995
- config_name: subset_114
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 4936296.0
num_examples: 11
download_size: 4709772
dataset_size: 4936296.0
- config_name: subset_115
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29775544304.801655
num_examples: 62159
download_size: 28447112935
dataset_size: 29775544304.801655
- config_name: subset_116
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29764092406.31982
num_examples: 62150
download_size: 28424856922
dataset_size: 29764092406.31982
- config_name: subset_117
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29734215090.831867
num_examples: 62098
download_size: 28401429108
dataset_size: 29734215090.831867
- config_name: subset_118
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29785403327.377136
num_examples: 62307
download_size: 28454761582
dataset_size: 29785403327.377136
- config_name: subset_119
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29887080358.46854
num_examples: 62437
download_size: 28560903814
dataset_size: 29887080358.46854
- config_name: subset_12
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29778963955.27637
num_examples: 62217
download_size: 28456064768
dataset_size: 29778963955.27637
- config_name: subset_120
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29795324063.32621
num_examples: 62213
download_size: 28459179628
dataset_size: 29795324063.32621
- config_name: subset_121
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29823497463.618946
num_examples: 62219
download_size: 28486036307
dataset_size: 29823497463.618946
- config_name: subset_122
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29825885978.784977
num_examples: 62198
download_size: 28495894587
dataset_size: 29825885978.784977
- config_name: subset_123
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29824518738.544853
num_examples: 62207
download_size: 28482461945
dataset_size: 29824518738.544853
- config_name: subset_124
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29734472830.244003
num_examples: 62044
download_size: 28397807256
dataset_size: 29734472830.244003
- config_name: subset_125
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29751692495.66535
num_examples: 62132
download_size: 28418245723
dataset_size: 29751692495.66535
- config_name: subset_126
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29860413580.83239
num_examples: 62262
download_size: 28531745153
dataset_size: 29860413580.83239
- config_name: subset_127
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29844407241.442238
num_examples: 62182
download_size: 28520446380
dataset_size: 29844407241.442238
- config_name: subset_128
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29802989154.327606
num_examples: 62225
download_size: 28463177779
dataset_size: 29802989154.327606
- config_name: subset_129
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29850861116.343075
num_examples: 62330
download_size: 28520805908
dataset_size: 29850861116.343075
- config_name: subset_13
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29796741055.90437
num_examples: 62202
download_size: 28466354764
dataset_size: 29796741055.90437
- config_name: subset_130
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 4263112.0
num_examples: 9
download_size: 4073797
dataset_size: 4263112.0
- config_name: subset_131
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29720943599.72362
num_examples: 61994
download_size: 28379216482
dataset_size: 29720943599.72362
- config_name: subset_132
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29797620915.980434
num_examples: 62210
download_size: 28461599359
dataset_size: 29797620915.980434
- config_name: subset_133
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29782749863.416126
num_examples: 62161
download_size: 28447689082
dataset_size: 29782749863.416126
- config_name: subset_134
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29775975351.80884
num_examples: 62252
download_size: 28445935648
dataset_size: 29775975351.80884
- config_name: subset_135
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29804785291.47995
num_examples: 62332
download_size: 28474094120
dataset_size: 29804785291.47995
- config_name: subset_136
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29764251087.81636
num_examples: 62135
download_size: 28435055519
dataset_size: 29764251087.81636
- config_name: subset_137
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29796171709.935783
num_examples: 62226
download_size: 28468528453
dataset_size: 29796171709.935783
- config_name: subset_138
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29818809747.066654
num_examples: 62253
download_size: 28486190334
dataset_size: 29818809747.066654
- config_name: subset_14
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29713149830.08086
num_examples: 62058
download_size: 28370992605
dataset_size: 29713149830.08086
- config_name: subset_15
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29857118469.690784
num_examples: 62295
download_size: 28520133081
dataset_size: 29857118469.690784
- config_name: subset_16
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44717472597.38111
num_examples: 93380
download_size: 42705151644
dataset_size: 44717472597.38111
- config_name: subset_17
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44567963372.985085
num_examples: 93081
download_size: 42557871062
dataset_size: 44567963372.985085
- config_name: subset_18
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 487488.0
num_examples: 1
download_size: 482536
dataset_size: 487488.0
- config_name: subset_19
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44578540518.278465
num_examples: 93092
download_size: 42574195823
dataset_size: 44578540518.278465
- config_name: subset_2
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29779834173.05709
num_examples: 62194
download_size: 28457283265
dataset_size: 29779834173.05709
- config_name: subset_20
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44612078576.6915
num_examples: 93192
download_size: 42608986260
dataset_size: 44612078576.6915
- config_name: subset_21
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44716611297.93694
num_examples: 93435
download_size: 42724070775
dataset_size: 44716611297.93694
- config_name: subset_22
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44642599292.25394
num_examples: 93272
download_size: 42638436011
dataset_size: 44642599292.25394
- config_name: subset_23
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44773238981.78718
num_examples: 93425
download_size: 42769260156
dataset_size: 44773238981.78718
- config_name: subset_24
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44643199115.213066
num_examples: 93280
download_size: 42643630676
dataset_size: 44643199115.213066
- config_name: subset_25
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44775786873.71317
num_examples: 93521
download_size: 42787596471
dataset_size: 44775786873.71317
- config_name: subset_26
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44724169162.378235
num_examples: 93381
download_size: 42734030121
dataset_size: 44724169162.378235
- config_name: subset_27
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44673408393.32555
num_examples: 93316
download_size: 42671093570
dataset_size: 44673408393.32555
- config_name: subset_28
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44647540966.199005
num_examples: 93158
download_size: 42647063249
dataset_size: 44647540966.199005
- config_name: subset_29
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 2062592.0
num_examples: 5
download_size: 1973372
dataset_size: 2062592.0
- config_name: subset_3
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29781556446.893677
num_examples: 62185
download_size: 28457664262
dataset_size: 29781556446.893677
- config_name: subset_30
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44619423613.42147
num_examples: 93230
download_size: 42616420647
dataset_size: 44619423613.42147
- config_name: subset_31
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44686832436.53614
num_examples: 93302
download_size: 42675035403
dataset_size: 44686832436.53614
- config_name: subset_32
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44667832285.4235
num_examples: 93252
download_size: 42664546252
dataset_size: 44667832285.4235
- config_name: subset_33
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44767218387.43479
num_examples: 93463
download_size: 42772690686
dataset_size: 44767218387.43479
- config_name: subset_34
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44759250508.174644
num_examples: 93435
download_size: 42765984681
dataset_size: 44759250508.174644
- config_name: subset_35
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44661416756.55649
num_examples: 93247
download_size: 42662115327
dataset_size: 44661416756.55649
- config_name: subset_36
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44697682741.818405
num_examples: 93316
download_size: 42706948136
dataset_size: 44697682741.818405
- config_name: subset_37
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44642072687.35433
num_examples: 93214
download_size: 42642766174
dataset_size: 44642072687.35433
- config_name: subset_38
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44703619421.43295
num_examples: 93385
download_size: 42705784293
dataset_size: 44703619421.43295
- config_name: subset_39
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44691234018.50057
num_examples: 93382
download_size: 42688816370
dataset_size: 44691234018.50057
- config_name: subset_4
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29804659756.840668
num_examples: 62191
download_size: 28469961087
dataset_size: 29804659756.840668
- config_name: subset_40
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 3749600.0
num_examples: 8
download_size: 3544062
dataset_size: 3749600.0
- config_name: subset_41
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44710781677.17146
num_examples: 93390
download_size: 42686048101
dataset_size: 44710781677.17146
- config_name: subset_42
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44595765733.45953
num_examples: 93072
download_size: 42586674927
dataset_size: 44595765733.45953
- config_name: subset_43
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44642287217.473366
num_examples: 93216
download_size: 42643283814
dataset_size: 44642287217.473366
- config_name: subset_44
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44680782381.93174
num_examples: 93330
download_size: 42679060966
dataset_size: 44680782381.93174
- config_name: subset_45
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44632730866.69406
num_examples: 93280
download_size: 42632582031
dataset_size: 44632730866.69406
- config_name: subset_46
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44777176265.9063
num_examples: 93367
download_size: 42774172043
dataset_size: 44777176265.9063
- config_name: subset_47
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44723520638.39092
num_examples: 93353
download_size: 42739592034
dataset_size: 44723520638.39092
- config_name: subset_48
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44699581610.83543
num_examples: 93264
download_size: 42691617545
dataset_size: 44699581610.83543
- config_name: subset_49
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29776883327.862305
num_examples: 62152
download_size: 28437717863
dataset_size: 29776883327.862305
- config_name: subset_5
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29770798634.794384
num_examples: 62238
download_size: 28425692361
dataset_size: 29770798634.794384
- config_name: subset_50
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29799476758.913025
num_examples: 62205
download_size: 28471561046
dataset_size: 29799476758.913025
- config_name: subset_51
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29840586701.87915
num_examples: 62210
download_size: 28512098634
dataset_size: 29840586701.87915
- config_name: subset_52
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29863217529.513218
num_examples: 62358
download_size: 28535498406
dataset_size: 29863217529.513218
- config_name: subset_53
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29800253005.76768
num_examples: 62172
download_size: 28473498615
dataset_size: 29800253005.76768
- config_name: subset_54
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29790807067.45104
num_examples: 62220
download_size: 28462782039
dataset_size: 29790807067.45104
- config_name: subset_55
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29756188946.40286
num_examples: 62087
download_size: 28428539117
dataset_size: 29756188946.40286
- config_name: subset_56
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 4052942.222222222
num_examples: 8
download_size: 3756055
dataset_size: 4052942.222222222
- config_name: subset_57
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29815135874.530243
num_examples: 62215
download_size: 28480808968
dataset_size: 29815135874.530243
- config_name: subset_58
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29768678268.566994
num_examples: 62182
download_size: 28432025537
dataset_size: 29768678268.566994
- config_name: subset_59
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29792832738.55261
num_examples: 62236
download_size: 28467550664
dataset_size: 29792832738.55261
- config_name: subset_6
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29760036816.438248
num_examples: 62119
download_size: 28431028357
dataset_size: 29760036816.438248
- config_name: subset_60
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29795475110.996426
num_examples: 62199
download_size: 28458203598
dataset_size: 29795475110.996426
- config_name: subset_61
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29810850600.259956
num_examples: 62218
download_size: 28472533876
dataset_size: 29810850600.259956
- config_name: subset_62
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29793053976.98263
num_examples: 62354
download_size: 28468189848
dataset_size: 29793053976.98263
- config_name: subset_63
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29830633313.527344
num_examples: 62245
download_size: 28488179308
dataset_size: 29830633313.527344
- config_name: subset_64
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29815723311.686073
num_examples: 62295
download_size: 28481396085
dataset_size: 29815723311.686073
- config_name: subset_65
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29816082510.257248
num_examples: 62317
download_size: 28483004258
dataset_size: 29816082510.257248
- config_name: subset_66
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29823849926.137985
num_examples: 62310
download_size: 28501727354
dataset_size: 29823849926.137985
- config_name: subset_67
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29779693486.822372
num_examples: 62204
download_size: 28440702784
dataset_size: 29779693486.822372
- config_name: subset_68
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29791077271.97951
num_examples: 62238
download_size: 28470796275
dataset_size: 29791077271.97951
- config_name: subset_69
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29804033246.804424
num_examples: 62214
download_size: 28469224555
dataset_size: 29804033246.804424
- config_name: subset_7
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 435616.0
num_examples: 1
download_size: 429318
dataset_size: 435616.0
- config_name: subset_70
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29769624892.128063
num_examples: 62139
download_size: 28427172030
dataset_size: 29769624892.128063
- config_name: subset_71
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29682503840.61426
num_examples: 61996
download_size: 28356389982
dataset_size: 29682503840.61426
- config_name: subset_72
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 1459624.0
num_examples: 3
download_size: 1398432
dataset_size: 1459624.0
- config_name: subset_73
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29818453655.1624
num_examples: 62294
download_size: 28492225008
dataset_size: 29818453655.1624
- config_name: subset_74
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29809226392.643494
num_examples: 62275
download_size: 28471896097
dataset_size: 29809226392.643494
- config_name: subset_75
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29813615006.46017
num_examples: 62247
download_size: 28481389086
dataset_size: 29813615006.46017
- config_name: subset_76
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29787940909.366352
num_examples: 62176
download_size: 28457880270
dataset_size: 29787940909.366352
- config_name: subset_77
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29786889343.17914
num_examples: 62093
download_size: 28459598814
dataset_size: 29786889343.17914
- config_name: subset_78
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29766008072.24979
num_examples: 62168
download_size: 28429784870
dataset_size: 29766008072.24979
- config_name: subset_79
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29771618615.300034
num_examples: 62183
download_size: 28430361717
dataset_size: 29771618615.300034
- config_name: subset_8
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29784846446.236767
num_examples: 62208
download_size: 28456467319
dataset_size: 29784846446.236767
- config_name: subset_80
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29825455286.15237
num_examples: 62242
download_size: 28494746372
dataset_size: 29825455286.15237
- config_name: subset_81
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44611282857.507706
num_examples: 93149
download_size: 42605499635
dataset_size: 44611282857.507706
- config_name: subset_82
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44633857773.640816
num_examples: 93209
download_size: 42617883549
dataset_size: 44633857773.640816
- config_name: subset_83
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44731114501.95697
num_examples: 93285
download_size: 42734681724
dataset_size: 44731114501.95697
- config_name: subset_84
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44652331899.52017
num_examples: 93224
download_size: 42640405452
dataset_size: 44652331899.52017
- config_name: subset_85
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44738443066.61914
num_examples: 93391
download_size: 42752260132
dataset_size: 44738443066.61914
- config_name: subset_86
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44658486276.88759
num_examples: 93256
download_size: 42653904477
dataset_size: 44658486276.88759
- config_name: subset_87
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44624811790.03121
num_examples: 93178
download_size: 42605954586
dataset_size: 44624811790.03121
- config_name: subset_88
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44661490498.22551
num_examples: 93239
download_size: 42652915154
dataset_size: 44661490498.22551
- config_name: subset_89
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44716406347.45679
num_examples: 93404
download_size: 42727072094
dataset_size: 44716406347.45679
- config_name: subset_9
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29792706171.83141
num_examples: 62109
download_size: 28450168341
dataset_size: 29792706171.83141
- config_name: subset_90
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 44676470960.951996
num_examples: 93187
download_size: 42678524371
dataset_size: 44676470960.951996
- config_name: subset_91
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29764280076.125225
num_examples: 62196
download_size: 28429282112
dataset_size: 29764280076.125225
- config_name: subset_92
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29850619337.3585
num_examples: 62351
download_size: 28512846915
dataset_size: 29850619337.3585
- config_name: subset_93
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29837744714.346436
num_examples: 62233
download_size: 28501403452
dataset_size: 29837744714.346436
- config_name: subset_94
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29839780119.8114
num_examples: 62323
download_size: 28513940315
dataset_size: 29839780119.8114
- config_name: subset_95
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29748188964.62823
num_examples: 62172
download_size: 28413924658
dataset_size: 29748188964.62823
- config_name: subset_96
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29800727262.69699
num_examples: 62260
download_size: 28475125160
dataset_size: 29800727262.69699
- config_name: subset_97
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29767717411.338116
num_examples: 62148
download_size: 28440311229
dataset_size: 29767717411.338116
- config_name: subset_98
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 4303888.0
num_examples: 9
download_size: 4144170
dataset_size: 4303888.0
- config_name: subset_99
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_length
dtype: int64
splits:
- name: train
num_bytes: 29787160440.723938
num_examples: 62157
download_size: 28445447346
dataset_size: 29787160440.723938
configs:
- config_name: subset_0
data_files:
- split: train
path: subset_0/train-*
- config_name: subset_1
data_files:
- split: train
path: subset_1/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_10/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_100/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_101/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_102/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_103/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_104/train-*
- config_name: subset_105
data_files:
- split: train
path: subset_105/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_106/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_107/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_108/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_109/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_11/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_110/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_111/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_112/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_113/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_114/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_115/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_116/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_117/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_118/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_119/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_12/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_120/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_121/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_122/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_123/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_124/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_125/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_126/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_127/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_128/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_129/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_13/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_130/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_131/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_132/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_133/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_134/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_135/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_136/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_137/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_138/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_14/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_15/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_16/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_17/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_18/train-*
- config_name: subset_19
data_files:
- split: train
path: subset_19/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_2/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_20/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_21/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_22/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_23/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_24/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_25/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_26/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_27/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_28/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_29/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_3/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_30/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_31/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_32/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_33/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_34/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_35/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_36/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_37/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_38/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_39/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_4/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_40/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_41/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_42/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_43/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_44/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_45/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_46/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_47/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_48/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_49/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_5/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_50/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_51/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_52/train-*
- config_name: subset_53
data_files:
- split: train
path: subset_53/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_54/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_55/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_56/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_57/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_58/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_59/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_6/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_60/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_61/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_62/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_63/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_64/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_65/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_66/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_67/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_68/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_69/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_7/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_70/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_71/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_72/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_73/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_74/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_75/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_76/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_77/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_78/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_79/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_8/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_80/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_81/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_82/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_83/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_84/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_85/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_86/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_87/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_88/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_89/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_9/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_90/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_91/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_92/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_93/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_94/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_95/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_96/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_97/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_98/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_99/train-*
---
|
laion/strategic_game_maze | laion | "2023-10-20T04:13:19Z" | 24,730 | 11 | [
"license:cc-by-4.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-10-15T02:44:07Z" | ---
license: cc-by-4.0
---
NOTICE: in some games the length and width columns are mistakenly labeled as 40; they are actually 30.
# maze
This dataset contains 350,000 mazes, representing over 39.29 billion moves.
Each maze is a 30x30 ASCII representation, with solutions derived using breadth-first search (BFS).
It has two columns:
- 'Maze': representation of the maze as a list of strings; the shape is 30x30.
  - visual example
  <img src="https://cdn-uploads.huggingface.co/production/uploads/644b983f0fbe4830f192c4f5/BGplH40fK5wQzpofPocMK.png" alt="drawing" width="200"/>
- 'Path': the solution from the start point to the end point as a list of strings; each item represents a position in the maze. A minimal loading sketch is shown below.
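As a quick illustration, here is a minimal sketch of loading and inspecting one maze with the 🤗 `datasets` library; it assumes the `Maze` and `Path` column names described above and a `train` split, and it only streams a single example rather than downloading the whole corpus.
```python
from datasets import load_dataset

# Stream one example instead of downloading all 350,000 mazes
ds = load_dataset("laion/strategic_game_maze", split="train", streaming=True)
example = next(iter(ds))

# 'Maze' is a list of strings, one per row of the 30x30 ASCII grid
for row in example["Maze"]:
    print(row)

# 'Path' is the BFS solution: a list of strings, each encoding a position
print("solution length:", len(example["Path"]))
print("first steps:", example["Path"][:3])
```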
|
LLM360/K2Datasets | LLM360 | "2024-06-06T17:04:36Z" | 24,711 | 14 | [
"license:odc-by",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T15:20:00Z" | ---
license: odc-by
---
# K2 Dataset Card
<!-- Provide a quick summary of the dataset. -->
The following data mix was used to train [K2](https://huggingface.co/LLM360/K2) and achieve results in line with Llama 2 70B.
## Dataset Details
K2 was trained on 1.4T tokens across two stages. The data sources and data mix for each stage are listed below.
### Dataset Description: Stage 1
<!-- Provide a longer summary of what this dataset is. -->
| Dataset | Starting Tokens | Multiplier | Total Tokens |% of Total |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| [dm-math](https://github.com/google-deepmind/mathematics_dataset) | 4.33B | 3x | 13B | 1% |
| pubmed-abstracts (from the Pile) | 4.77B | 3x | 14.3B | 1.1% |
| uspto (from the Pile) | 4.77B | 3x | 14.3B | 1.1% |
| pubmed-central (from the Pile) | 26B | 1x | 26B | 2% |
| [redpajama.arxiv](https://huggingface.co/datasets/cerebras/SlimPajama-627B) | 27.3B | 1x | 27.3B | 2.1% |
| [starcoder.spm](https://huggingface.co/datasets/bigcode/starcoderdata) | 67.6B | 0.5x | 33.8B | 2.6% |
| [starcoder.fim](https://huggingface.co/datasets/bigcode/starcoderdata) | 67.6B | 0.5x | 33.8B | 2.6% |
| [redpajama.stackexchange](https://huggingface.co/datasets/cerebras/SlimPajama-627B) | 61.1B | 1x | 61.1B | 4.7% |
| [starcoder](https://huggingface.co/datasets/bigcode/starcoderdata) | 132.6B | 0.5x | 66.3B | 5.1% |
| [pile-of-law](https://huggingface.co/datasets/pile-of-law/pile-of-law) | 76.7B | 1x | 76.7B | 5.9% |
| [redpajama.book](https://huggingface.co/datasets/cerebras/SlimPajama-627B) | 80.6B | 1x | 80.6B | 6.2% |
| [s2orc](https://allenai.org/data/s2orc) | 107.9B | 1x | 107.9B | 8.3% |
| [redpajama.wikipedia](https://huggingface.co/datasets/cerebras/SlimPajama-627B) | 22.1B | 6x | 132.6B | 10.2% |
| [refinedweb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 612.3B | 1x | 612.3B | 47.1% |
| Totals | - | - | 1.3T | 100% |
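The multiplier column up-samples or down-samples each source: a source's total contribution is its starting token count times its multiplier, and its share of the mix is that total divided by the stage total. Below is a small illustrative sketch of this arithmetic using a few rows copied from the table above (it recomputes nothing from the raw data).
```python
# Illustrative arithmetic behind the Stage 1 data mix:
# total_tokens = starting_tokens * multiplier
# share = total_tokens / stage_total
stage1_rows = {
    "dm-math": (4.33e9, 3),
    "redpajama.wikipedia": (22.1e9, 6),
    "refinedweb": (612.3e9, 1),
}
stage_total = 1.3e12  # Stage 1 total from the table

for name, (starting_tokens, multiplier) in stage1_rows.items():
    total_tokens = starting_tokens * multiplier
    print(f"{name}: {total_tokens / 1e9:.1f}B tokens, "
          f"{100 * total_tokens / stage_total:.1f}% of the mix")
```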
### Dataset Description: Stage 2
| Dataset | Starting Tokens | Multiplier | Total Tokens |% of Total |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| [open-web-math](https://huggingface.co/datasets/EleutherAI/proof-pile-2) | 14.6B | 1x | 14.6B | 21% |
| [redpajama.arxiv](https://huggingface.co/datasets/cerebras/SlimPajama-627B) | 2B | 1x | 2B | 2.9% |
| [simple-wiki](https://huggingface.co/datasets/allenai/dolma) | 4.3B | 1x | 4.3B | 6.2% |
| [redpajama.book](https://huggingface.co/datasets/cerebras/SlimPajama-627B) | 2B | 1x | 2B | 2.9% |
| [algebraic-stack](https://huggingface.co/datasets/EleutherAI/proof-pile-2) | 10.9B | 1x | 10.9B | 15.7% |
| [pile-of-law](https://huggingface.co/datasets/pile-of-law/pile-of-law) | 2B | 0.5x | 33.8B | 2.9% |
| books | 5.8B | 1x | 5.8B | 8.3% |
| [pes20](https://huggingface.co/datasets/allenai/peS2o) | 1.2B | 1x | 1.2B | 1.8% |
| [pubmed-central (from the Pile)](https://github.com/EleutherAI/pile-pubmedcentral) | 2B | 1x | 2B | 2.9% |
| [redpajama.wikipedia](https://huggingface.co/datasets/cerebras/SlimPajama-627B) | 2B | 1x | 2B | 2.9% |
| python | 20.5B | 1x | 20.5B | 29.6% |
| [s2orc](https://allenai.org/data/s2orc) | 2B | 1x | 2B | 2.9% |
| Totals | - | - | 69.4B* | 100% |
*rounding
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
A step-by-step tutorial for reproducing K2's data preparation can be found in the [LLM360 Pretraining Suite](https://www.llm360.ai/pretraining.html). A minimal loading sketch is shown below.
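As a minimal sketch (not part of the official tutorial), the dataset repository can be inspected with the standard `datasets` API. The `train` split name and the default configuration are assumptions, so check the repository file layout if loading fails.
```python
from itertools import islice

from datasets import load_dataset

# Minimal sketch: stream the default configuration rather than materializing
# the full pretraining mix on disk. Split/config names are assumptions.
ds = load_dataset("LLM360/K2Datasets", split="train", streaming=True)

# Print the field names of the first few examples
for example in islice(ds, 3):
    print(sorted(example.keys()))
```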
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation
**BibTeX:**
```bibtex
@misc{llm360k2,
title={LLM360 K2-65B: Scaling Up Open and Transparent Language Models},
author={The LLM360 Team},
year={2024},
}
```
|
gsarti/flores_101 | gsarti | "2022-10-27T08:37:36Z" | 24,563 | 26 | [
"task_categories:text-generation",
"task_categories:translation",
"annotations_creators:found",
"language_creators:expert-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"source_datasets:extended|flores",
"language:af",
"language:am",
"language:ar",
"language:hy",
"language:as",
"language:ast",
"language:az",
"language:be",
"language:bn",
"language:bs",
"language:bg",
"language:my",
"language:ca",
"language:ceb",
"language:zho",
"language:hr",
"language:cs",
"language:da",
"language:nl",
"language:en",
"language:et",
"language:tl",
"language:fi",
"language:fr",
"language:ff",
"language:gl",
"language:lg",
"language:ka",
"language:de",
"language:el",
"language:gu",
"language:ha",
"language:he",
"language:hi",
"language:hu",
"language:is",
"language:ig",
"language:id",
"language:ga",
"language:it",
"language:ja",
"language:jv",
"language:kea",
"language:kam",
"language:kn",
"language:kk",
"language:km",
"language:ko",
"language:ky",
"language:lo",
"language:lv",
"language:ln",
"language:lt",
"language:luo",
"language:lb",
"language:mk",
"language:ms",
"language:ml",
"language:mt",
"language:mi",
"language:mr",
"language:mn",
"language:ne",
"language:ns",
"language:no",
"language:ny",
"language:oc",
"language:or",
"language:om",
"language:ps",
"language:fa",
"language:pl",
"language:pt",
"language:pa",
"language:ro",
"language:ru",
"language:sr",
"language:sn",
"language:sd",
"language:sk",
"language:sl",
"language:so",
"language:ku",
"language:es",
"language:sw",
"language:sv",
"language:tg",
"language:ta",
"language:te",
"language:th",
"language:tr",
"language:uk",
"language:umb",
"language:ur",
"language:uz",
"language:vi",
"language:cy",
"language:wo",
"language:xh",
"language:yo",
"language:zu",
"license:cc-by-sa-4.0",
"size_categories:100K<n<1M",
"modality:tabular",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2106.03193",
"region:us",
"conditional-text-generation"
] | [
"text-generation",
"translation"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- found
language_creators:
- expert-generated
language:
- af
- am
- ar
- hy
- as
- ast
- az
- be
- bn
- bs
- bg
- my
- ca
- ceb
- zho
- hr
- cs
- da
- nl
- en
- et
- tl
- fi
- fr
- ff
- gl
- lg
- ka
- de
- el
- gu
- ha
- he
- hi
- hu
- is
- ig
- id
- ga
- it
- ja
- jv
- kea
- kam
- kn
- kk
- km
- ko
- ky
- lo
- lv
- ln
- lt
- luo
- lb
- mk
- ms
- ml
- mt
- mi
- mr
- mn
- ne
- ns
- 'no'
- ny
- oc
- or
- om
- ps
- fa
- pl
- pt
- pa
- ro
- ru
- sr
- sn
- sd
- sk
- sl
- so
- ku
- es
- sw
- sv
- tg
- ta
- te
- th
- tr
- uk
- umb
- ur
- uz
- vi
- cy
- wo
- xh
- yo
- zu
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
- translation
size_categories:
- unknown
source_datasets:
- extended|flores
task_categories:
- text-generation
- translation
task_ids: []
paperswithcode_id: flores
pretty_name: flores101
tags:
- conditional-text-generation
---
# Dataset Card for Flores 101
## Table of Contents
- [Dataset Card for Flores 101](#dataset-card-for-flores-101)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Home:** [WMT](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html)
- **Repository:** [Github](https://github.com/facebookresearch/flores)
- **Blogpost:** [FAIR](https://ai.facebook.com/blog/the-flores-101-data-set-helping-build-better-translation-systems-around-the-world)
- **Paper:** [Arxiv](https://arxiv.org/abs/2106.03193)
- **Point of Contact:** [[email protected]](mailto:[email protected])
- **Leaderboard:** [Dynabench](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL))
### Dataset Summary
FLORES is a benchmark dataset for machine translation between English and low-resource languages.
Abstract from the original paper:
> One of the biggest challenges hindering progress in low-resource and multilingual machine translation is the lack of good evaluation benchmarks. Current evaluation benchmarks either lack good coverage of low-resource languages, consider only restricted domains, or are low quality because they are constructed using semi-automatic procedures. In this work, we introduce the FLORES evaluation benchmark, consisting of 3001 sentences extracted from English Wikipedia and covering a variety of different topics and domains. These sentences have been translated in 101 languages by professional translators through a carefully controlled process. The resulting dataset enables better assessment of model quality on the long tail of low-resource languages, including the evaluation of many-to-many multilingual translation systems, as all translations are multilingually aligned. By publicly releasing such a high-quality and high-coverage dataset, we hope to foster progress in the machine translation community and beyond.
**Disclaimer**: *The Flores-101 dataset is hosted by Facebook and licensed under the [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/).*
### Supported Tasks and Leaderboards
#### Multilingual Machine Translation
Refer to the [Dynabench leaderboard](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL)) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on [Large-Scale Multilingual Machine Translation](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html).
### Languages
The dataset contains parallel sentences for 101 languages, as mentioned in the original [Github](https://github.com/facebookresearch/flores/blob/master/README.md) page for the project. Languages are identified with the ISO 639-3 code (e.g. `eng`, `fra`, `rus`) as in the original dataset.
**New:** Use the configuration `all` to access the full set of parallel sentences for all the available languages in a single command.
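As a minimal sketch, a single language configuration or the `all` configuration can be loaded with the 🤗 `datasets` library. Recent `datasets` versions may additionally require `trust_remote_code=True` for script-based datasets like this one, and the field names of the `all` configuration may be suffixed per language, so the sketch only prints the keys there.
```python
from datasets import load_dataset

# One language configuration (ISO 639-3 code), e.g. Russian.
# Depending on your `datasets` version, you may need trust_remote_code=True.
flores_rus = load_dataset("gsarti/flores_101", "rus", split="dev")
print(flores_rus[0]["sentence"])

# Every language at once via the `all` configuration; sentences are
# multilingually aligned across configurations and splits
flores_all = load_dataset("gsarti/flores_101", "all", split="dev")
print(sorted(flores_all[0].keys()))
```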
## Dataset Structure
### Data Instances
A sample from the `dev` split for the Russian language (`rus` config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits.
```python
{
'id': 1,
'sentence': 'В понедельник ученые из Медицинской школы Стэнфордского университета объявили об изобретении нового диагностического инструмента, который может сортировать клетки по их типу; это маленький чип, который можно напечатать, используя стандартный струйный принтер примерно за 1 цент США.',
'URL': 'https://en.wikinews.org/wiki/Scientists_say_new_medical_diagnostic_chip_can_sort_cells_anywhere_with_an_inkjet',
'domain': 'wikinews',
'topic': 'health',
'has_image': 0,
'has_hyperlink': 0
}
```
The text is provided as-is from the original dataset, without further preprocessing or tokenization.
### Data Fields
- `id`: Row number for the data entry, starting at 1.
- `sentence`: The full sentence in the specific language.
- `URL`: The URL for the English article from which the sentence was extracted.
- `domain`: The domain of the sentence.
- `topic`: The topic of the sentence.
- `has_image`: Whether the original article contains an image.
- `has_hyperlink`: Whether the sentence contains a hyperlink.
### Data Splits
| config| `dev`| `devtest`|
|-----------------:|-----:|---------:|
|all configurations| 997| 1012|
### Dataset Creation
Please refer to the original article [The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation](https://arxiv.org/abs/2106.03193) for additional information on dataset creation.
## Additional Information
### Dataset Curators
The original authors of FLORES-101 are the curators of the original dataset. For problems or updates on this 🤗 Datasets version, please contact [[email protected]](mailto:[email protected]).
### Licensing Information
Licensed with Creative Commons Attribution Share Alike 4.0. License available [here](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
Please cite the authors if you use these corpora in your work:
```bibtex
@inproceedings{flores101,
title={The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation},
author={Goyal, Naman and Gao, Cynthia and Chaudhary, Vishrav and Chen, Peng-Jen and Wenzek, Guillaume and Ju, Da and Krishnan, Sanjana and Ranzato, Marc'Aurelio and Guzm\'{a}n, Francisco and Fan, Angela},
journal={arXiv preprint arXiv:2106.03193},
year={2021}
}
``` |
Voxel51/PIDray | Voxel51 | "2024-06-06T14:46:34Z" | 24,558 | 2 | [
"task_categories:object-detection",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"library:fiftyone",
"arxiv:2108.07020",
"region:us",
"fiftyone",
"image",
"object-detection"
] | [
"object-detection"
] | "2024-04-29T13:57:49Z" | ---
annotations_creators: []
language: en
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- object-detection
task_ids: []
pretty_name: pidray
tags:
- fiftyone
- image
- object-detection
dataset_summary: '

This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 9482 samples.
## Installation
If you haven''t already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
# Load the dataset
# Note: other available arguments include ''max_samples'', etc
dataset = fouh.load_from_hub("voxel51/PIDray")
# Launch the App
session = fo.launch_app(dataset)
```
'
---
# Dataset Card for pidray
PIDray is a large-scale dataset which covers various cases in real-world scenarios for prohibited item detection, especially for deliberately hidden items. The dataset contains 12 categories of prohibited items in 47,677 X-ray images with high-quality annotated segmentation masks and bounding boxes.

This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 9482 samples.
## Installation
If you haven't already, install FiftyOne:
```bash
pip install -U fiftyone
```
## Usage
```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = fouh.load_from_hub("voxel51/PIDray")
# Launch the App
session = fo.launch_app(dataset)
```
## Dataset Details
### Dataset Description
From _Towards Real-World Prohibited Item Detection: A Large-Scale X-ray Benchmark_:
Automatic security inspection using computer vision technology is a challenging task in real-world scenarios due to various factors, including intra-class variance, class imbalance, and occlusion. Most of the previous methods rarely solve the cases that the prohibited items are deliberately hidden in messy objects due to the lack of large-scale datasets, restricting their applications in real-world scenarios. Towards real-world prohibited item detection, we collect a large-scale dataset, named as PIDray, which covers various cases in real-world scenarios for prohibited item detection, especially for deliberately hidden items. With an intensive amount of effort, our dataset contains 12 categories of prohibited items in 47,677 X-ray images with high-quality annotated segmentation masks and bounding boxes. To the best of our knowledge, it is the largest prohibited items detection dataset to date. Meanwhile, we design the selective dense attention network (SDANet) to construct a strong baseline, which consists of the dense attention module and the dependency refinement module. The dense attention module, formed by the spatial and channel-wise dense attentions, is designed to learn the discriminative features to boost the performance. The dependency refinement module is used to exploit the dependencies of multi-scale features. Extensive experiments conducted on the collected PIDray dataset demonstrate that the proposed method performs favorably against the state-of-the-art methods, especially for detecting the deliberately hidden items.
- **Language(s) (NLP):** en
- **License:** apache-2.0
The images and the corresponding annotations in PIDray Dataset can be used ONLY for academic purposes, NOT for commercial purposes.
Copyright © 2021 Institute of Software Chinese Academy of Sciences, University of Chinese Academy of Sciences
All rights reserved.
### Dataset Sources
- **Repository:** https://github.com/bywang2018/security-dataset
- **Paper:** https://arxiv.org/abs/2108.07020
## Uses
This dataset is used for academic research on airport security screening machines and on the detection of objects being scanned.
### Out-of-Scope Use
Any non-academic work is out of scope and prohibited.
## Citation
@inproceedings{wang2021towards,
title={Towards Real-World Prohibited Item Detection: A Large-Scale X-ray Benchmark},
author={Wang, Boying and Zhang, Libo and Wen, Longyin and Liu, Xianglong and Wu, Yanjun},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={5412--5421},
year={2021}
}
|
wecover/OPUS_GlobalVoices | wecover | "2024-11-24T03:46:34Z" | 24,372 | 0 | [
"language:am",
"language:ar",
"language:bg",
"language:bn",
"language:ca",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:fa",
"language:fr",
"language:he",
"language:hi",
"language:hu",
"language:id",
"language:it",
"language:km",
"language:ko",
"language:ku",
"language:mg",
"language:mk",
"language:my",
"language:ne",
"language:nl",
"language:or",
"language:pa",
"language:pt",
"language:pl",
"language:ro",
"language:ru",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:tr",
"language:ur",
"language:zh",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-01-31T07:22:25Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: '*/*/train.parquet'
- split: valid
path: '*/*/valid.parquet'
- split: test
path: '*/*/test.parquet'
- config_name: am
data_files:
- split: train
path: '*/*am*/train.parquet'
- split: test
path: '*/*am*/test.parquet'
- split: valid
path: '*/*am*/valid.parquet'
- config_name: ar
data_files:
- split: train
path: '*/*ar*/train.parquet'
- split: test
path: '*/*ar*/test.parquet'
- split: valid
path: '*/*ar*/valid.parquet'
- config_name: bn
data_files:
- split: train
path: '*/*bn*/train.parquet'
- split: test
path: '*/*bn*/test.parquet'
- split: valid
path: '*/*bn*/valid.parquet'
- config_name: ca
data_files:
- split: train
path: '*/*ca*/train.parquet'
- split: test
path: '*/*ca*/test.parquet'
- split: valid
path: '*/*ca*/valid.parquet'
- config_name: de
data_files:
- split: train
path: '*/*de*/train.parquet'
- split: test
path: '*/*de*/test.parquet'
- split: valid
path: '*/*de*/valid.parquet'
- config_name: el
data_files:
- split: train
path: '*/*el*/train.parquet'
- split: test
path: '*/*el*/test.parquet'
- split: valid
path: '*/*el*/valid.parquet'
- config_name: en
data_files:
- split: train
path: '*/*en*/train.parquet'
- split: test
path: '*/*en*/test.parquet'
- split: valid
path: '*/*en*/valid.parquet'
- config_name: es
data_files:
- split: train
path: '*/*es*/train.parquet'
- split: test
path: '*/*es*/test.parquet'
- split: valid
path: '*/*es*/valid.parquet'
- config_name: fa
data_files:
- split: train
path: '*/*fa*/train.parquet'
- split: test
path: '*/*fa*/test.parquet'
- split: valid
path: '*/*fa*/valid.parquet'
- config_name: fr
data_files:
- split: train
path: '*/*fr*/train.parquet'
- split: test
path: '*/*fr*/test.parquet'
- split: valid
path: '*/*fr*/valid.parquet'
- config_name: hi
data_files:
- split: train
path: '*/*hi*/train.parquet'
- split: test
path: '*/*hi*/test.parquet'
- split: valid
path: '*/*hi*/valid.parquet'
- config_name: hu
data_files:
- split: train
path: '*/*hu*/train.parquet'
- split: test
path: '*/*hu*/test.parquet'
- split: valid
path: '*/*hu*/valid.parquet'
- config_name: id
data_files:
- split: train
path: '*/*id*/train.parquet'
- split: test
path: '*/*id*/test.parquet'
- split: valid
path: '*/*id*/valid.parquet'
- config_name: it
data_files:
- split: train
path: '*/*it*/train.parquet'
- split: test
path: '*/*it*/test.parquet'
- split: valid
path: '*/*it*/valid.parquet'
- config_name: mg
data_files:
- split: train
path: '*/*mg*/train.parquet'
- split: test
path: '*/*mg*/test.parquet'
- split: valid
path: '*/*mg*/valid.parquet'
- config_name: mk
data_files:
- split: train
path: '*/*mk*/train.parquet'
- split: test
path: '*/*mk*/test.parquet'
- split: valid
path: '*/*mk*/valid.parquet'
- config_name: my
data_files:
- split: train
path: '*/*my*/train.parquet'
- split: test
path: '*/*my*/test.parquet'
- split: valid
path: '*/*my*/valid.parquet'
- config_name: nl
data_files:
- split: train
path: '*/*nl*/train.parquet'
- split: test
path: '*/*nl*/test.parquet'
- split: valid
path: '*/*nl*/valid.parquet'
- config_name: pl
data_files:
- split: train
path: '*/*pl*/train.parquet'
- split: test
path: '*/*pl*/test.parquet'
- split: valid
path: '*/*pl*/valid.parquet'
- config_name: pt
data_files:
- split: train
path: '*/*pt*/train.parquet'
- split: test
path: '*/*pt*/test.parquet'
- split: valid
path: '*/*pt*/valid.parquet'
- config_name: ru
data_files:
- split: train
path: '*/*ru*/train.parquet'
- split: test
path: '*/*ru*/test.parquet'
- split: valid
path: '*/*ru*/valid.parquet'
- config_name: sr
data_files:
- split: train
path: '*/*sr*/train.parquet'
- split: test
path: '*/*sr*/test.parquet'
- split: valid
path: '*/*sr*/valid.parquet'
- config_name: sw
data_files:
- split: train
path: '*/*sw*/train.parquet'
- split: test
path: '*/*sw*/test.parquet'
- split: valid
path: '*/*sw*/valid.parquet'
- config_name: tr
data_files:
- split: train
path: '*/*tr*/train.parquet'
- split: test
path: '*/*tr*/test.parquet'
- split: valid
path: '*/*tr*/valid.parquet'
- config_name: ur
data_files:
- split: train
path: '*/*ur*/train.parquet'
- split: test
path: '*/*ur*/test.parquet'
- split: valid
path: '*/*ur*/valid.parquet'
- config_name: zhs
data_files:
- split: train
path: '*/*zhs*/train.parquet'
- split: test
path: '*/*zhs*/test.parquet'
- split: valid
path: '*/*zhs*/valid.parquet'
- config_name: zht
data_files:
- split: train
path: '*/*zht*/train.parquet'
- split: test
path: '*/*zht*/test.parquet'
- split: valid
path: '*/*zht*/valid.parquet'
- config_name: bg
data_files:
- split: train
path: '*/*bg*/train.parquet'
- split: test
path: '*/*bg*/test.parquet'
- split: valid
path: '*/*bg*/valid.parquet'
- config_name: cs
data_files:
- split: train
path: '*/*cs*/train.parquet'
- split: test
path: '*/*cs*/test.parquet'
- split: valid
path: '*/*cs*/valid.parquet'
- config_name: da
data_files:
- split: train
path: '*/*da*/train.parquet'
- split: test
path: '*/*da*/test.parquet'
- split: valid
path: '*/*da*/valid.parquet'
- config_name: eo
data_files:
- split: train
path: '*/*eo*/train.parquet'
- split: test
path: '*/*eo*/test.parquet'
- split: valid
path: '*/*eo*/valid.parquet'
- config_name: he
data_files:
- split: train
path: '*/*he*/train.parquet'
- split: test
path: '*/*he*/test.parquet'
- split: valid
path: '*/*he*/valid.parquet'
- config_name: km
data_files:
- split: train
path: '*/*km*/train.parquet'
- split: test
path: '*/*km*/test.parquet'
- split: valid
path: '*/*km*/valid.parquet'
- config_name: ko
data_files:
- split: train
path: '*/*ko*/train.parquet'
- split: test
path: '*/*ko*/test.parquet'
- split: valid
path: '*/*ko*/valid.parquet'
- config_name: ku
data_files:
- split: train
path: '*/*ku*/train.parquet'
- split: test
path: '*/*ku*/test.parquet'
- split: valid
path: '*/*ku*/valid.parquet'
- config_name: ne
data_files:
- split: train
path: '*/*ne*/train.parquet'
- split: test
path: '*/*ne*/test.parquet'
- split: valid
path: '*/*ne*/valid.parquet'
- config_name: or
data_files:
- split: train
path: '*/*or*/train.parquet'
- split: test
path: '*/*or*/test.parquet'
- split: valid
path: '*/*or*/valid.parquet'
- config_name: pa
data_files:
- split: train
path: '*/*pa*/train.parquet'
- split: test
path: '*/*pa*/test.parquet'
- split: valid
path: '*/*pa*/valid.parquet'
- config_name: ro
data_files:
- split: train
path: '*/*ro*/train.parquet'
- split: test
path: '*/*ro*/test.parquet'
- split: valid
path: '*/*ro*/valid.parquet'
- config_name: sq
data_files:
- split: train
path: '*/*sq*/train.parquet'
- split: test
path: '*/*sq*/test.parquet'
- split: valid
path: '*/*sq*/valid.parquet'
- config_name: sv
data_files:
- split: train
path: '*/*sv*/train.parquet'
- split: test
path: '*/*sv*/test.parquet'
- split: valid
path: '*/*sv*/valid.parquet'
language:
- am
- ar
- bg
- bn
- ca
- cs
- da
- de
- el
- en
- eo
- es
- fa
- fr
- he
- hi
- hu
- id
- it
- km
- ko
- ku
- mg
- mk
- my
- ne
- nl
- or
- pa
- pt
- pl
- ro
- ru
- sq
- sr
- sv
- sw
- tr
- ur
- zh
--- |
Hemabhushan/capstone_sakuga_preproc_optical_flow | Hemabhushan | "2024-11-21T03:57:26Z" | 24,252 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-09-21T18:37:50Z" | ---
dataset_info:
- config_name: sample_subset
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 30244617541
num_examples: 2396
download_size: 5461228507
dataset_size: 30244617541
- config_name: seg1_part1
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31482846975
num_examples: 2498
download_size: 5683747736
dataset_size: 31482846975
- config_name: seg1_part10
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31533368384
num_examples: 2498
download_size: 5651523132
dataset_size: 31533368384
- config_name: seg1_part11
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31495470684
num_examples: 2498
download_size: 5646719194
dataset_size: 31495470684
- config_name: seg1_part12
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31533346986
num_examples: 2498
download_size: 5705163694
dataset_size: 31533346986
- config_name: seg1_part14
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31432281831
num_examples: 2498
download_size: 5627562296
dataset_size: 31432281831
- config_name: seg1_part15
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31508092434
num_examples: 2498
download_size: 5647225033
dataset_size: 31508092434
- config_name: seg1_part18
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31482843486
num_examples: 2498
download_size: 5703102313
dataset_size: 31482843486
- config_name: seg1_part2
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31457575891
num_examples: 2498
download_size: 5650519682
dataset_size: 31457575891
- config_name: seg1_part20
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31470181418
num_examples: 2498
download_size: 5625192608
dataset_size: 31470181418
- config_name: seg1_part21
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31508103461
num_examples: 2498
download_size: 5680819286
dataset_size: 31508103461
- config_name: seg1_part24
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31520754590
num_examples: 2498
download_size: 5697959164
dataset_size: 31520754590
- config_name: seg1_part25
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31482806663
num_examples: 2498
download_size: 5628329196
dataset_size: 31482806663
- config_name: seg1_part26
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31533316255
num_examples: 2498
download_size: 5662161621
dataset_size: 31533316255
- config_name: seg1_part27
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31533349249
num_examples: 2498
download_size: 5654417461
dataset_size: 31533349249
- config_name: seg1_part28
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31533349763
num_examples: 2498
download_size: 5644209592
dataset_size: 31533349763
- config_name: seg1_part29
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31508101677
num_examples: 2498
download_size: 5725512822
dataset_size: 31508101677
- config_name: seg1_part30
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31520696316
num_examples: 2498
download_size: 5649748978
dataset_size: 31520696316
- config_name: seg1_part31
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31470211581
num_examples: 2498
download_size: 5691521624
dataset_size: 31470211581
- config_name: seg1_part32
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31520703122
num_examples: 2498
download_size: 5611392470
dataset_size: 31520703122
- config_name: seg1_part33
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31533395644
num_examples: 2498
download_size: 5765660331
dataset_size: 31533395644
- config_name: seg1_part34
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31482802747
num_examples: 2498
download_size: 5654024836
dataset_size: 31482802747
- config_name: seg1_part35
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31508072238
num_examples: 2498
download_size: 5632935439
dataset_size: 31508072238
- config_name: seg1_part36
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31508118773
num_examples: 2498
download_size: 5708713170
dataset_size: 31508118773
- config_name: seg1_part39
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31508101149
num_examples: 2498
download_size: 5697274819
dataset_size: 31508101149
- config_name: seg1_part4
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31482829874
num_examples: 2498
download_size: 5700440041
dataset_size: 31482829874
- config_name: seg1_part40
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31508069004
num_examples: 2498
download_size: 5640935450
dataset_size: 31508069004
- config_name: seg1_part41
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31558603213
num_examples: 2498
download_size: 5713447755
dataset_size: 31558603213
- config_name: seg1_part42
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31482828955
num_examples: 2498
download_size: 5640954061
dataset_size: 31482828955
- config_name: seg1_part43
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31520704278
num_examples: 2498
download_size: 5736520090
dataset_size: 31520704278
- config_name: seg1_part44
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31166950691
num_examples: 2471
download_size: 5640666013
dataset_size: 31166950691
- config_name: seg1_part45
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 6857
dataset_size: 0
- config_name: seg1_part6
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31545984682
num_examples: 2498
download_size: 5674259234
dataset_size: 31545984682
- config_name: seg1_part8
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31545987289
num_examples: 2498
download_size: 5733443343
dataset_size: 31545987289
- config_name: seg1_part9
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: word_count
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: frames
dtype: 'null'
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31533382844
num_examples: 2498
download_size: 5634081955
dataset_size: 31533382844
- config_name: seg2_part1
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: word_count
dtype: int64
- name: prev_conv
list:
- name: content
list:
- name: text
dtype: string
- name: type
dtype: string
- name: role
dtype: string
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31284870750
num_examples: 2498
download_size: 6023339313
dataset_size: 31284870750
- config_name: seg2_part3
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: word_count
dtype: int64
- name: prev_conv
list:
- name: content
list:
- name: text
dtype: string
- name: type
dtype: string
- name: role
dtype: string
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31297486132
num_examples: 2498
download_size: 6033401989
dataset_size: 31297486132
- config_name: seg2_part4
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: word_count
dtype: int64
- name: prev_conv
list:
- name: content
list:
- name: text
dtype: string
- name: type
dtype: string
- name: role
dtype: string
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31209057858
num_examples: 2498
download_size: 6033150218
dataset_size: 31209057858
- config_name: seg2_part7
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: word_count
dtype: int64
- name: prev_conv
list:
- name: content
list:
- name: text
dtype: string
- name: type
dtype: string
- name: role
dtype: string
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31310131354
num_examples: 2498
download_size: 6026279130
dataset_size: 31310131354
- config_name: seg3_part1
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: word_count
dtype: int64
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31091165546
num_examples: 2498
download_size: 6013722850
dataset_size: 31091165546
- config_name: seg3_part3
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: word_count
dtype: int64
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 30964835031
num_examples: 2498
download_size: 5981318118
dataset_size: 30964835031
- config_name: seg3_part4
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: word_count
dtype: int64
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31141675653
num_examples: 2498
download_size: 6035418048
dataset_size: 31141675653
- config_name: seg3_part7
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: word_count
dtype: int64
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31230125953
num_examples: 2498
download_size: 6080001698
dataset_size: 31230125953
- config_name: seg4_part1
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: word_count
dtype: int64
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31230126642
num_examples: 2498
download_size: 6074698944
dataset_size: 31230126642
- config_name: seg4_part3
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: word_count
dtype: int64
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31040618574
num_examples: 2498
download_size: 5968129650
dataset_size: 31040618574
- config_name: seg4_part5
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: word_count
dtype: int64
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31015355157
num_examples: 2498
download_size: 6027043124
dataset_size: 31015355157
- config_name: seg4_part6
features:
- name: identifier
dtype: string
- name: hash_identifier
dtype: string
- name: url_link
dtype: string
- name: scene_start_time
dtype: string
- name: scene_end_time
dtype: string
- name: frame_number
dtype: float64
- name: key_frame_number
dtype: float64
- name: anime_tags
dtype: string
- name: user_tags
dtype: string
- name: text_description
dtype: string
- name: aesthetic_score
dtype: float64
- name: dynamic_score
dtype: float64
- name: rating
dtype: string
- name: text_prob
dtype: float64
- name: width
dtype: int64
- name: height
dtype: int64
- name: file_ext
dtype: string
- name: fps
dtype: float64
- name: Taxonomy_Time
dtype: string
- name: Taxonomy_Venue
dtype: string
- name: Taxonomy_Media
dtype: string
- name: Taxonomy_Filming
dtype: string
- name: Taxonomy_Composition
dtype: string
- name: Taxonomy_Character
dtype: string
- name: __index_level_0__
dtype: int64
- name: video_id
dtype: string
- name: video_segment_no
dtype: int64
- name: word_count
dtype: int64
- name: video_frames
sequence:
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 31116405163
num_examples: 2498
download_size: 6079250810
dataset_size: 31116405163
configs:
- config_name: sample_subset
data_files:
- split: train
path: sample_subset/train-*
- config_name: seg1_part1
data_files:
- split: train
path: seg1_part1/train-*
- config_name: seg1_part10
data_files:
- split: train
path: seg1_part10/train-*
- config_name: seg1_part11
data_files:
- split: train
path: seg1_part11/train-*
- config_name: seg1_part12
data_files:
- split: train
path: seg1_part12/train-*
- config_name: seg1_part14
data_files:
- split: train
path: seg1_part14/train-*
- config_name: seg1_part15
data_files:
- split: train
path: seg1_part15/train-*
- config_name: seg1_part18
data_files:
- split: train
path: seg1_part18/train-*
- config_name: seg1_part2
data_files:
- split: train
path: seg1_part2/train-*
- config_name: seg1_part20
data_files:
- split: train
path: seg1_part20/train-*
- config_name: seg1_part21
data_files:
- split: train
path: seg1_part21/train-*
- config_name: seg1_part24
data_files:
- split: train
path: seg1_part24/train-*
- config_name: seg1_part25
data_files:
- split: train
path: seg1_part25/train-*
- config_name: seg1_part26
data_files:
- split: train
path: seg1_part26/train-*
- config_name: seg1_part27
data_files:
- split: train
path: seg1_part27/train-*
- config_name: seg1_part28
data_files:
- split: train
path: seg1_part28/train-*
- config_name: seg1_part29
data_files:
- split: train
path: seg1_part29/train-*
- config_name: seg1_part30
data_files:
- split: train
path: seg1_part30/train-*
- config_name: seg1_part31
data_files:
- split: train
path: seg1_part31/train-*
- config_name: seg1_part32
data_files:
- split: train
path: seg1_part32/train-*
- config_name: seg1_part33
data_files:
- split: train
path: seg1_part33/train-*
- config_name: seg1_part34
data_files:
- split: train
path: seg1_part34/train-*
- config_name: seg1_part35
data_files:
- split: train
path: seg1_part35/train-*
- config_name: seg1_part36
data_files:
- split: train
path: seg1_part36/train-*
- config_name: seg1_part39
data_files:
- split: train
path: seg1_part39/train-*
- config_name: seg1_part4
data_files:
- split: train
path: seg1_part4/train-*
- config_name: seg1_part40
data_files:
- split: train
path: seg1_part40/train-*
- config_name: seg1_part41
data_files:
- split: train
path: seg1_part41/train-*
- config_name: seg1_part42
data_files:
- split: train
path: seg1_part42/train-*
- config_name: seg1_part43
data_files:
- split: train
path: seg1_part43/train-*
- config_name: seg1_part44
data_files:
- split: train
path: seg1_part44/train-*
- config_name: seg1_part45
data_files:
- split: train
path: seg1_part45/train-*
- config_name: seg1_part6
data_files:
- split: train
path: seg1_part6/train-*
- config_name: seg1_part8
data_files:
- split: train
path: seg1_part8/train-*
- config_name: seg1_part9
data_files:
- split: train
path: seg1_part9/train-*
- config_name: seg2_part1
data_files:
- split: train
path: seg2_part1/train-*
- config_name: seg2_part3
data_files:
- split: train
path: seg2_part3/train-*
- config_name: seg2_part4
data_files:
- split: train
path: seg2_part4/train-*
- config_name: seg2_part7
data_files:
- split: train
path: seg2_part7/train-*
- config_name: seg3_part1
data_files:
- split: train
path: seg3_part1/train-*
- config_name: seg3_part3
data_files:
- split: train
path: seg3_part3/train-*
- config_name: seg3_part4
data_files:
- split: train
path: seg3_part4/train-*
- config_name: seg3_part7
data_files:
- split: train
path: seg3_part7/train-*
- config_name: seg4_part1
data_files:
- split: train
path: seg4_part1/train-*
- config_name: seg4_part3
data_files:
- split: train
path: seg4_part3/train-*
- config_name: seg4_part5
data_files:
- split: train
path: seg4_part5/train-*
- config_name: seg4_part6
data_files:
- split: train
path: seg4_part6/train-*
---
|
sailor2/sea-synthetic | sailor2 | "2024-10-30T08:46:03Z" | 23,946 | 0 | [
"license:apache-2.0",
"size_categories:10M<n<100M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | null | "2024-10-29T14:52:25Z" | ---
license: apache-2.0
---
|
MLCommons/peoples_speech | MLCommons | "2024-11-20T15:17:45Z" | 23,862 | 94 | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-2.0",
"license:cc-by-2.5",
"license:cc-by-3.0",
"license:cc-by-4.0",
"license:cc-by-sa-3.0",
"license:cc-by-sa-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2111.09344",
"region:us",
"robust-speech-recognition",
"noisy-speech-recognition",
"speech-recognition"
] | [
"automatic-speech-recognition"
] | "2022-08-16T14:21:49Z" | ---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- machine-generated
language:
- en
license:
- cc-by-2.0
- cc-by-2.5
- cc-by-3.0
- cc-by-4.0
- cc-by-sa-3.0
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1T<n
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: People's Speech
tags:
- robust-speech-recognition
- noisy-speech-recognition
- speech-recognition
dataset_info:
- config_name: clean
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: duration_ms
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 401733771186.124
num_examples: 1501271
- name: validation
num_bytes: 2459781412.24
num_examples: 18622
- name: test
num_bytes: 4324307722.96
num_examples: 34898
download_size: 398550700437
dataset_size: 408517860321.32404
- config_name: clean_sa
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: duration_ms
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 75267509124.558
num_examples: 257093
- name: validation
num_bytes: 2075929254.254
num_examples: 18622
- name: test
num_bytes: 3894954757.41
num_examples: 34898
download_size: 72518549222
dataset_size: 81238393136.222
- config_name: dirty
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: duration_ms
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 1569500875399.994
num_examples: 5476898
- name: validation
num_bytes: 2641406179.2539997
num_examples: 18622
- name: test
num_bytes: 5097236056.41
num_examples: 34898
download_size: 1496747948260
dataset_size: 1577239517635.6577
- config_name: dirty_sa
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: duration_ms
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 163776914241.91
num_examples: 548014
- name: validation
num_bytes: 2075929254.254
num_examples: 18622
- name: test
num_bytes: 3894954757.41
num_examples: 34898
download_size: 149326092074
dataset_size: 169747798253.574
- config_name: microset
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: duration_ms
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 92397066.0
num_examples: 336
download_size: 90204303
dataset_size: 92397066.0
- config_name: test
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: duration_ms
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 3894954757.41
num_examples: 34898
download_size: 4087772459
dataset_size: 3894954757.41
- config_name: validation
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: duration_ms
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 2075929254.254
num_examples: 18622
download_size: 2335244149
dataset_size: 2075929254.254
configs:
- config_name: clean
data_files:
- split: train
path: clean/train-*
- split: validation
path: clean/validation-*
- split: test
path: clean/test-*
- config_name: clean_sa
data_files:
- split: train
path: clean_sa/train-*
- split: validation
path: clean_sa/validation-*
- split: test
path: clean_sa/test-*
- config_name: dirty
data_files:
- split: train
path: dirty/train-*
- split: validation
path: dirty/validation-*
- split: test
path: dirty/test-*
- config_name: dirty_sa
data_files:
- split: train
path: dirty_sa/train-*
- split: validation
path: dirty_sa/validation-*
- split: test
path: dirty_sa/test-*
- config_name: microset
data_files:
- split: train
path: microset/train-*
- config_name: test
data_files:
- split: test
path: test/test-*
- config_name: validation
data_files:
- split: validation
path: validation/validation-*
---
# Dataset Card for People's Speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://mlcommons.org/en/peoples-speech/
- **Repository:** https://github.com/mlcommons/peoples-speech
- **Paper:** https://arxiv.org/abs/2111.09344
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [[email protected]](mailto:[email protected])
### Dataset Summary
The People's Speech Dataset is among the world's largest English speech recognition corpora licensed for academic and commercial usage under CC-BY-SA and CC-BY 4.0. It includes 30,000+ hours of transcribed English speech from a diverse set of speakers. This open dataset is large enough to train speech-to-text systems and, crucially, is available under a permissive license.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
{
"id": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac",
"audio": {
"path": "gov_DOT_uscourts_DOT_scotus_DOT_19-161/gov_DOT_uscourts_DOT_scotus_DOT_19-161_DOT_2020-03-02_DOT_mp3_00002.flac"
"array": array([-6.10351562e-05, ...]),
"sampling_rate": 16000
}
"duration_ms": 14490,
"text": "contends that the suspension clause requires a [...]"
}
### Data Fields
{
"id": datasets.Value("string"),
"audio": datasets.Audio(sampling_rate=16_000),
"duration_ms": datasets.Value("int32"),
"text": datasets.Value("string"),
}
### Data Splits
We provide the following configurations for the dataset: `cc-by-clean` (`"clean"`), `cc-by-dirty` (`"dirty"`), `cc-by-sa-clean` (`"clean_sa"`), `cc-by-sa-dirty` (`"dirty_sa"`), and `microset` (`"microset"`).
We also provide validation and test configurations, which are not only available as standalone configurations but are also included as validation and test splits within each of the above configurations for ease of use.
Specifically:
- Setting `data_dir="validation"` and `split="validation"` corresponds to the validation split of any of the configurations: `"clean"`, `"clean_sa"`, `"dirty"`, or `"dirty_sa"`.
- Similarly, setting `data_dir="test"` and `split="test"` corresponds to the test split of these configurations.
```
├── clean
│ ├── train
│ ├── validation
│ └── test
├── clean_sa
│ ├── train
│ ├── validation
│ └── test
├── dirty
│ ├── train
│ ├── validation
│ └── test
├── dirty_sa
│ ├── train
│ ├── validation
│ └── test
├── microset
│ └── train
├── validation
│ └── validation
└── test
└── test
```
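Given this layout, the configurations can be loaded with the `datasets` library. The snippet below is a minimal sketch, not the only way to do it; `streaming=True` and `.take()` are used here simply to avoid downloading the multi-hundred-gigabyte training archives up front.

```python
from datasets import load_dataset

# Stream the "clean" training split (the full download is roughly 400 GB).
clean_train = load_dataset(
    "MLCommons/peoples_speech", "clean", split="train", streaming=True
)

# The shared validation and test splits are also exposed as standalone configurations.
validation = load_dataset("MLCommons/peoples_speech", "validation", split="validation")

for example in clean_train.take(1):
    print(example["id"], example["duration_ms"], example["text"][:80])
```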
## Dataset Creation
### Curation Rationale
See our [paper](https://arxiv.org/abs/2111.09344).
### Source Data
#### Initial Data Collection and Normalization
Data was downloaded via the archive.org API. No data inference was done.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
No manual annotation is done. We download only source audio with already existing transcripts.
#### Who are the annotators?
For the test and dev sets, we paid native speakers of American English to do transcriptions. We do not know the identities of the transcriptionists for data in the training set. For the training set, we have noticed that some transcriptions are likely to be the output of automatic speech recognition systems.
### Personal and Sensitive Information
Several of our sources are legal and government proceedings, spoken histories, speeches, and so on. Given that these were intended as public documents and licensed as such, it is natural that the involved individuals are aware of this.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset could be used for speech synthesis. However, this requires careful cleaning of the dataset, as background noise is not tolerable for speech synthesis.
The dataset could be used for keyword spotting tasks as well. In particular, this is a good use case for the non-English audio in the dataset.
Our sincere hope is that the large breadth of sources our dataset incorporates reduces existing quality-of-service issues, such as speech recognition systems' poor understanding of non-native English accents. We cannot think of any unfair treatment that comes from using this dataset at this time.
### Discussion of Biases
Our data is downloaded from archive.org. As such, the data is biased towards whatever users decide to upload there.
Almost all of our data is American-accented English.
### Other Known Limitations
As of version 1.0, a portion of data in the training, test, and dev sets is poorly aligned. Specifically, some words appear in the transcript, but not the audio, or some words appear in the audio, but not the transcript. We are working on it.
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
We provide CC-BY and CC-BY-SA subsets of the dataset.
### Citation Information
Please cite:
```
@article{DBLP:journals/corr/abs-2111-09344,
author = {Daniel Galvez and
Greg Diamos and
Juan Ciro and
Juan Felipe Cer{\'{o}}n and
Keith Achorn and
Anjali Gopi and
David Kanter and
Maximilian Lam and
Mark Mazumder and
Vijay Janapa Reddi},
title = {The People's Speech: {A} Large-Scale Diverse English Speech Recognition
Dataset for Commercial Usage},
journal = {CoRR},
volume = {abs/2111.09344},
year = {2021},
url = {https://arxiv.org/abs/2111.09344},
eprinttype = {arXiv},
eprint = {2111.09344},
timestamp = {Mon, 22 Nov 2021 16:44:07 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-09344.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
HuggingFaceGECLM/REDDIT_comments | HuggingFaceGECLM | "2023-03-17T07:52:51Z" | 23,857 | 10 | [
"task_categories:text-generation",
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"language:en",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2001.08435",
"region:us",
"reddit",
"social-media"
] | [
"text-generation"
] | "2023-03-15T14:14:58Z" | ---
dataset_info:
features:
- name: archived
dtype: string
- name: author
dtype: string
- name: author_fullname
dtype: string
- name: body
dtype: string
- name: comment_type
dtype: string
- name: controversiality
dtype: string
- name: created_utc
dtype: string
- name: edited
dtype: string
- name: gilded
dtype: string
- name: id
dtype: string
- name: link_id
dtype: string
- name: locked
dtype: string
- name: name
dtype: string
- name: parent_id
dtype: string
- name: permalink
dtype: string
- name: retrieved_on
dtype: string
- name: score
dtype: string
- name: subreddit_id
dtype: string
- name: subreddit_name_prefixed
dtype: string
- name: subreddit_type
dtype: string
- name: total_awards_received
dtype: string
splits:
- name: programming
num_bytes: 3466623746
num_examples: 7503347
- name: tifu
num_bytes: 4761338653
num_examples: 12738669
- name: explainlikeimfive
num_bytes: 8451732573
num_examples: 16392814
- name: WritingPrompts
num_bytes: 4651591771
num_examples: 4436210
- name: changemyview
num_bytes: 8603031915
num_examples: 11600073
- name: LifeProTips
num_bytes: 5272994396
num_examples: 12829459
- name: todayilearned
num_bytes: 22655655241
num_examples: 60199778
- name: science
num_bytes: 7069809765
num_examples: 18112884
- name: askscience
num_bytes: 3144754665
num_examples: 6286702
- name: ifyoulikeblank
num_bytes: 547200329
num_examples: 1332211
- name: Foodforthought
num_bytes: 308377128
num_examples: 567900
- name: IWantToLearn
num_bytes: 408331672
num_examples: 745543
- name: bestof
num_bytes: 2003718831
num_examples: 4347522
- name: IAmA
num_bytes: 9380094090
num_examples: 25778822
- name: socialskills
num_bytes: 1000014402
num_examples: 1842733
- name: relationship_advice
num_bytes: 22298879735
num_examples: 38937398
- name: philosophy
num_bytes: 1494947876
num_examples: 2391695
- name: YouShouldKnow
num_bytes: 1165617658
num_examples: 2639265
- name: history
num_bytes: 1457852402
num_examples: 2962043
- name: books
num_bytes: 4562689426
num_examples: 10187495
- name: Showerthoughts
num_bytes: 13259109532
num_examples: 34123213
- name: personalfinance
num_bytes: 9484869588
num_examples: 18361314
- name: buildapc
num_bytes: 9801044390
num_examples: 21761801
- name: EatCheapAndHealthy
num_bytes: 853462012
num_examples: 1821897
- name: boardgames
num_bytes: 3131627378
num_examples: 6328926
- name: malefashionadvice
num_bytes: 2928017882
num_examples: 7712258
- name: femalefashionadvice
num_bytes: 1619784736
num_examples: 3262969
- name: scifi
num_bytes: 888152056
num_examples: 2193741
- name: Fantasy
num_bytes: 2285934538
num_examples: 4566639
- name: Games
num_bytes: 10396813188
num_examples: 23373965
- name: bodyweightfitness
num_bytes: 794549854
num_examples: 1613634
- name: SkincareAddiction
num_bytes: 3421122597
num_examples: 5660550
- name: podcasts
num_bytes: 464773126
num_examples: 943266
- name: suggestmeabook
num_bytes: 1842944304
num_examples: 3492937
- name: AskHistorians
num_bytes: 2244587909
num_examples: 2714353
- name: gaming
num_bytes: 28374513722
num_examples: 85729253
- name: DIY
num_bytes: 2113533684
num_examples: 4489265
- name: sports
num_bytes: 2230129132
num_examples: 6470079
- name: space
num_bytes: 3081499208
num_examples: 7896182
- name: gadgets
num_bytes: 1683252868
num_examples: 4104833
- name: Documentaries
num_bytes: 1852644771
num_examples: 4051474
- name: GetMotivated
num_bytes: 1211761267
num_examples: 3221980
- name: UpliftingNews
num_bytes: 2003149025
num_examples: 4741948
- name: technology
num_bytes: 10826871436
num_examples: 25404699
- name: Fitness
num_bytes: 6191132755
num_examples: 14319856
- name: travel
num_bytes: 1740556350
num_examples: 3806755
- name: lifehacks
num_bytes: 626791812
num_examples: 1799437
- name: Damnthatsinteresting
num_bytes: 6376694618
num_examples: 15643554
- name: gardening
num_bytes: 1825313940
num_examples: 4568468
- name: mildlyinteresting
num_bytes: 9079894206
num_examples: 26436769
download_size: 109177016105
dataset_size: 255339788158
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: Reddit comments
size_categories:
- 10B<n<100B
source_datasets: []
tags:
- reddit
- social-media
task_categories:
- text-generation
task_ids:
- dialogue-modeling
- language-modeling
---
# Dataset Card for "REDDIT_comments"
## Dataset Description
- **Homepage:**
- **Paper: https://arxiv.org/abs/2001.08435**
### Dataset Summary
Comments from 50 high-quality subreddits, extracted from the REDDIT PushShift data dumps (from 2006 to Jan 2023).
### Supported Tasks
These comments can be used for text generation and language modeling, as well as dialogue modeling.
## Dataset Structure
### Data Splits
Each split corresponds to a specific subreddit in the following list: "tifu", "explainlikeimfive", "WritingPrompts", "changemyview", "LifeProTips", "todayilearned", "science", "askscience", "ifyoulikeblank", "Foodforthought", "IWantToLearn", "bestof", "IAmA", "socialskills", "relationship_advice", "philosophy", "YouShouldKnow", "history", "books", "Showerthoughts", "personalfinance", "buildapc", "EatCheapAndHealthy", "boardgames", "malefashionadvice", "femalefashionadvice", "scifi", "Fantasy", "Games", "bodyweightfitness", "SkincareAddiction", "podcasts", "suggestmeabook", "AskHistorians", "gaming", "DIY", "mildlyinteresting", "sports", "space", "gadgets", "Documentaries", "GetMotivated", "UpliftingNews", "technology", "Fitness", "travel", "lifehacks", "Damnthatsinteresting", "gardening", "programming"
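Since each subreddit is exposed as its own split, a single community can be loaded on its own. The following is a minimal sketch using the `datasets` library, with streaming enabled to avoid materializing the whole split:

```python
from datasets import load_dataset

# Load only the r/programming comments; every field is stored as a string.
programming = load_dataset(
    "HuggingFaceGECLM/REDDIT_comments", split="programming", streaming=True
)

for comment in programming.take(3):
    print(comment["subreddit_name_prefixed"], comment["score"], comment["body"][:80])
```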
## Dataset Creation
### Curation Rationale
All the information fields have been cast to string, as their formats change over time from one dump to the next. Only a reduced set of keys has been kept: "archived", "author", "author_fullname", "body", "comment_type", "controversiality", "created_utc", "edited", "gilded", "id", "link_id", "locked", "name", "parent_id", "permalink", "retrieved_on", "score", "subreddit", "subreddit_id", "subreddit_name_prefixed", "subreddit_type", "total_awards_received".
### Source Data
The [Reddit PushShift data dumps](https://files.pushshift.io/reddit/) are part of a data collection effort which crawls Reddit at regular intervals, to extract and keep all its data.
#### Initial Data Collection and Normalization
See the paper.
#### Who are the source language producers?
Redditors are mostly young (65% below 30), male (70%), and American (50% of the site).
### Personal and Sensitive Information
The data contains Redditors' usernames associated with their content.
## Considerations for Using the Data
This dataset should be anonymized before any processing.
Though the selected subreddits are considered to be of higher quality, they can still reflect the expressions of bias and toxicity that can be found elsewhere on the internet.
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. |
tttarun/captcha_store | tttarun | "2023-07-20T12:42:25Z" | 23,776 | 1 | [
"license:mit",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2023-07-20T07:19:38Z" | ---
license: mit
---
|
hoskinson-center/proof-pile | hoskinson-center | "2023-08-19T03:24:11Z" | 23,738 | 58 | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"math",
"mathematics",
"formal-mathematics"
] | [
"text-generation"
] | "2022-08-08T20:57:56Z" | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: [apache-2.0]
multilinguality:
- monolingual
pretty_name: proof-pile
size_categories: []
source_datasets: []
tags:
- math
- mathematics
- formal-mathematics
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Description
The `proof-pile` is a 13GB pre-training dataset of mathematical text that comprises 8.3 billion tokens (using the `gpt-neox` tokenizer). Models trained on this dataset are coming soon :) The dataset is composed of diverse sources of both informal and formal mathematics, namely
- ArXiv.math (10GB)
- Open-source math textbooks (50MB)
- Formal mathematics libraries (500MB)
- Lean mathlib and other Lean repositories
- Isabelle AFP
- Coq mathematical components and other Coq repositories
- HOL Light
- set.mm
- Mizar Mathematical Library
- Math Overflow and Math Stack Exchange (2.5GB)
- Wiki-style sources (50MB)
- ProofWiki
- Wikipedia math articles
- MATH dataset (6MB)
The construction of the dataset is reproducible using the code and instructions in the [proof-pile Github
repo](https://github.com/zhangir-azerbayev/proof-pile).
# Supported Tasks
This dataset is intended to be used for pre-training and fine-tuning language models. We envision models trained on the `proof-pile` will have many downstream applications, including informal quantitative reasoning, formal theorem proving, semantic search for formal mathematics, and autoformalization.
# Languages
All informal mathematics in the `proof-pile` is written in English and LaTeX (arXiv articles in other languages are filtered out using [languagedetect](https://github.com/shuyo/language-detection/blob/wiki/ProjectHome.md)). Formal theorem proving languages represented in this dataset are Lean 3, Isabelle, Coq, HOL Light, Metamath, and Mizar.
# Evaluation
The version of `set.mm` in this dataset has 10% of proofs replaced with the `?` character in order to preserve a validation and test set for Metamath provers pre-trained on the `proof-pile`. The precise split can be found here: [validation](https://github.com/zhangir-azerbayev/mm-extract/blob/main/valid_decls.json) and [test](https://github.com/zhangir-azerbayev/mm-extract/blob/main/test_decls.json).
The Lean mathlib commit used in this dataset is `6313863`. Theorems created in subsequent commits can be used for evaluating Lean theorem provers.
This dataset contains only the training set of the [MATH dataset](https://github.com/hendrycks/math). However, because this dataset contains ProofWiki, the Stacks Project, Trench's Analysis, and Stein's Number Theory, models trained on it cannot be evaluated on the [NaturalProofs dataset](https://github.com/wellecks/naturalproofs).
# Data Preprocessing
This section describes any significant filtering and transformations made to various subsets of the data.
**arXiv.math.**
The arXiv.math dataset is large, heterogeneous, and contains a great deal of noise. We used the following heuristics
when choosing which files from arXiv.math source folders to include in the dataset:
- Keep only files with a `.tex` extension.
- Only include files that use either a `utf-8/16/32` or `latin-1` text encoding.
- Discard files that do not contain a part, chapter, section, sub...section, paragraph, or subparagraph heading.
- Delete files that contain the keyword `gnuplot`. Gnuplot-latex is an old command line utility that generates blocks
of entirely unintelligible source.
- Include only articles in English, as determined by the [langdetect library](https://pypi.org/project/langdetect/).
- Exclude files shorter than 280 characters (characters counted after substring removal described below).
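The sketch below is an illustrative re-implementation of these file-selection heuristics, not the original pipeline (which lives in the proof-pile repo); in particular, the real pipeline applies the length check only after the substring removal described next.

```python
import re
from langdetect import detect

# A file must contain at least one sectioning command to be kept.
SECTIONING = re.compile(
    r"\\(part|chapter|section|subsection|subsubsection|paragraph|subparagraph)\b"
)

def keep_arxiv_source(path: str) -> bool:
    """Return True if a LaTeX source file passes the selection heuristics above."""
    if not path.endswith(".tex"):
        return False
    raw = open(path, "rb").read()
    text = None
    for encoding in ("utf-8", "utf-16", "utf-32", "latin-1"):
        try:
            text = raw.decode(encoding)
            break
        except (UnicodeDecodeError, ValueError):
            continue
    if text is None:
        return False
    if not SECTIONING.search(text):   # no part/chapter/section/... heading
        return False
    if "gnuplot" in text:             # gnuplot-latex produces unintelligible source
        return False
    try:
        if detect(text) != "en":      # keep English articles only
            return False
    except Exception:                 # langdetect raises on empty or undetectable input
        return False
    # Simplified: the real pipeline counts characters after substring removal.
    return len(text) >= 280
```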
In addition, we apply the following transformations to arXiv.math texts:
- Delete everything outside of `\begin{document}` and `\end{document}`.
- Delete everything including or after `\Refs`, `\begin{thebibliography}`, or `\begin{bibdiv}`
- Delete comments.
- Any more than three consecutive newlines are replaced by three consecutive newlines.
In [this notebook](https://github.com/zhangir-azerbayev/proof-pile/blob/main/analysis/arxiv_noisedetection.ipynb), we provide an analysis of the prevalence of noisy documents in the arXiv.math subset of the
proof-pile.
**Stack Exchange.**
We only include questions that have at least 5 upvotes and an answer. We format Stack Exchange posts as follows
```
QUESTION [{num_upvotes} upvotes]: {text of question}
REPLY [{num_upvotes} votes]: {text of reply}
REPLY [{num_upvotes} votes]: {text of reply}
.
.
.
```
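A hypothetical helper that renders a thread in this layout might look as follows; the field names `score` and `body` are placeholders for illustration, not the names used in the actual Stack Exchange dumps.

```python
def format_post(question: dict, answers: list) -> str:
    """Render one Stack Exchange thread in the QUESTION/REPLY layout shown above."""
    parts = [f"QUESTION [{question['score']} upvotes]: {question['body']}"]
    for answer in answers:
        parts.append(f"REPLY [{answer['score']} votes]: {answer['body']}")
    return "\n\n".join(parts)
```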
**set.mm.**
We converted `set.mm` into human-readable form by following the instructions in the [mm-extract repo](https://github.com/zhangir-azerbayev/mm-extract)
## Contributions
Authors: Zhangir Azerbayev, Edward Ayers, Bartosz Piotrowski.
We would like to thank Jeremy Avigad, Albert Jiang, and Wenda Li for their invaluable guidance, and the Hoskinson Center for Formal Mathematics for its support.
|
lmms-lab/LLaVA-OneVision-Data | lmms-lab | "2024-10-22T06:47:46Z" | 23,727 | 167 | [
"language:en",
"language:zh",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2408.03326",
"arxiv:2310.05126",
"region:us"
] | null | "2024-07-25T15:25:28Z" | ---
language:
- en
- zh
license: apache-2.0
pretty_name: llava-onevision-data
dataset_info:
- config_name: CLEVR-Math(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 791346970
num_examples: 5280
download_size: 441208499
dataset_size: 791346970
- config_name: FigureQA(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 463326576.625
num_examples: 17587
download_size: 258197193
dataset_size: 463326576.625
- config_name: GEOS(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1503641
num_examples: 498
download_size: 684471
dataset_size: 1503641
- config_name: GeoQA+(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 53579705.75
num_examples: 17162
download_size: 33480538
dataset_size: 53579705.75
- config_name: Geometry3K(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 218085473.5
num_examples: 9724
download_size: 125914780
dataset_size: 218085473.5
- config_name: IconQA(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 208430568.375
num_examples: 22589
download_size: 117222488
dataset_size: 208430568.375
- config_name: MapQA(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 384120915.875
num_examples: 5225
download_size: 215768443
dataset_size: 384120915.875
- config_name: PMC-VQA(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 571444866.5
num_examples: 35948
download_size: 326541003
dataset_size: 571444866.5
- config_name: Super-CLEVR(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2795082410.75
num_examples: 8642
download_size: 1580301917
dataset_size: 2795082410.75
- config_name: TabMWP(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 307726997.5
num_examples: 22452
download_size: 173938487
dataset_size: 307726997.5
- config_name: UniGeo(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 38296693.375
num_examples: 11949
download_size: 24170743
dataset_size: 38296693.375
- config_name: VisualWebInstruct(filtered)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 36317112275.0
num_examples: 263584
download_size: 36239916454
dataset_size: 36317112275.0
- config_name: VizWiz(MathV360K)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1170333936.5
num_examples: 6604
download_size: 660752297
dataset_size: 1170333936.5
- config_name: ai2d(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 438572782.375
num_examples: 2429
download_size: 437348514
dataset_size: 438572782.375
- config_name: ai2d(gpt4v)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 866076731
num_examples: 4864
download_size: 860306578
dataset_size: 866076731
- config_name: ai2d(internvl)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1832787249.625
num_examples: 12403
download_size: 527493895
dataset_size: 1832787249.625
- config_name: allava_instruct_laion4v
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 5981767621.25
num_examples: 49990
download_size: 5873046236
dataset_size: 5981767621.25
- config_name: allava_instruct_vflan4v
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2680974558.25
num_examples: 19990
download_size: 2670088751
dataset_size: 2680974558.25
- config_name: aokvqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 6896420844.25
num_examples: 16534
download_size: 6894236970
dataset_size: 6896420844.25
- config_name: chart2text(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1145458729.5
num_examples: 26956
download_size: 1123681047
dataset_size: 1145458729.5
- config_name: chartqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 815335215.5
num_examples: 18260
download_size: 803084541
dataset_size: 815335215.5
- config_name: chrome_writting
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 44422597.875
num_examples: 8825
download_size: 39611257
dataset_size: 44422597.875
- config_name: clevr(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 10528974543.625
num_examples: 69995
download_size: 10460536445
dataset_size: 10528974543.625
- config_name: diagram_image_to_text(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 18858266
num_examples: 295
download_size: 18659115
dataset_size: 18858266
- config_name: dvqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 4487270615.625
num_examples: 199995
download_size: 4277056467
dataset_size: 4487270615.625
- config_name: figureqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2351194509.625
num_examples: 99995
download_size: 2222640639
dataset_size: 2351194509.625
- config_name: geo170k(align)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 204236256.75
num_examples: 60242
download_size: 58185410
dataset_size: 204236256.75
- config_name: geo170k(qa)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 266040519.125
num_examples: 67823
download_size: 160022430
dataset_size: 266040519.125
- config_name: geo3k
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 42634333.625
num_examples: 2091
download_size: 41097851
dataset_size: 42634333.625
- config_name: geomverse(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2263893609.75
num_examples: 9298
download_size: 2211726352
dataset_size: 2263893609.75
- config_name: hateful_memes(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 3057252325.125
num_examples: 8495
download_size: 3055839880
dataset_size: 3057252325.125
- config_name: hitab(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 161706881.125
num_examples: 2495
download_size: 157871287
dataset_size: 161706881.125
- config_name: hme100k
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 273229915.5
num_examples: 74492
download_size: 241005430
dataset_size: 273229915.5
- config_name: iam(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1131633206.75
num_examples: 5658
download_size: 1128371221
dataset_size: 1131633206.75
- config_name: iconqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 331284932.25
num_examples: 27302
download_size: 327005220
dataset_size: 331284932.25
- config_name: iiit5k
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 21821437.25
num_examples: 1990
download_size: 21623116
dataset_size: 21821437.25
- config_name: image_textualization(filtered)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 5218283253.375
num_examples: 99573
download_size: 5164176816
dataset_size: 5218283253.375
- config_name: infographic(gpt4v)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 713657496.25
num_examples: 1982
download_size: 656276080
dataset_size: 713657496.25
- config_name: infographic_vqa
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1528953078.75
num_examples: 4394
download_size: 1419340319
dataset_size: 1528953078.75
- config_name: infographic_vqa_llava_format
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1765315696.875
num_examples: 2113
download_size: 1764548536
dataset_size: 1765315696.875
- config_name: intergps(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 24973395.625
num_examples: 1275
download_size: 24736545
dataset_size: 24973395.625
- config_name: k12_printing
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1205153118.5
num_examples: 256636
download_size: 1108572712
dataset_size: 1205153118.5
- config_name: llavar_gpt4_20k
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 633833350.25
num_examples: 19790
download_size: 625365542
dataset_size: 633833350.25
- config_name: lrv_chart
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 99338686
num_examples: 1776
download_size: 97979446
dataset_size: 99338686
- config_name: lrv_normal(filtered)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 422589381.75
num_examples: 10490
download_size: 406958773
dataset_size: 422589381.75
- config_name: magpie_pro(l3_80b_mt)
features:
- name: id
dtype: string
- name: image
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1657129141
num_examples: 299988
download_size: 885893066
dataset_size: 1657129141
- config_name: magpie_pro(l3_80b_st)
features:
- name: id
dtype: string
- name: image
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1033666690
num_examples: 299990
download_size: 562771564
dataset_size: 1033666690
- config_name: magpie_pro(qwen2_72b_st)
features:
- name: id
dtype: string
- name: image
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 703489344
num_examples: 299982
download_size: 361433408
dataset_size: 703489344
- config_name: mapqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 3355751195.5
num_examples: 37412
download_size: 3305639218
dataset_size: 3355751195.5
- config_name: mathqa
features:
- name: id
dtype: string
- name: image
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 18318538
num_examples: 29827
download_size: 7857130
dataset_size: 18318538
- config_name: mavis_math_metagen
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2304025372.5
num_examples: 87348
download_size: 322776224
dataset_size: 2304025372.5
- config_name: mavis_math_rule_geo
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 14313211512.25
num_examples: 99990
download_size: 5841283073
dataset_size: 14313211512.25
- config_name: multihiertt(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 300319803.25
num_examples: 7614
download_size: 295638314
dataset_size: 300319803.25
- config_name: orand_car_a
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 23602442.125
num_examples: 1999
download_size: 23333412
dataset_size: 23602442.125
- config_name: raven(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 1706160514.625
num_examples: 41995
download_size: 1693150088
dataset_size: 1706160514.625
- config_name: rendered_text(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 11082594894.625
num_examples: 9995
download_size: 11081962044
dataset_size: 11082594894.625
- config_name: robut_sqa(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 685580779.375
num_examples: 8509
download_size: 678666263
dataset_size: 685580779.375
- config_name: robut_wikisql(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 6200499653
num_examples: 74984
download_size: 6168399217
dataset_size: 6200499653
- config_name: robut_wtq(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 4091776188.875
num_examples: 38241
download_size: 4062777449
dataset_size: 4091776188.875
- config_name: scienceqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 286843125.625
num_examples: 4971
download_size: 282896809
dataset_size: 286843125.625
- config_name: scienceqa(nona_context)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2111029055
num_examples: 19208
download_size: 2053942726
dataset_size: 2111029055
- config_name: screen2words(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 7977502095.375
num_examples: 15725
download_size: 7962327904
dataset_size: 7977502095.375
- config_name: sharegpt4o
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 6968025789.5
num_examples: 57284
download_size: 6772195470
dataset_size: 6968025789.5
- config_name: sharegpt4v(coco)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2620153362.875
num_examples: 50017
download_size: 2595583499
dataset_size: 2620153362.875
- config_name: sharegpt4v(knowledge)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 372100773.5
num_examples: 1988
download_size: 369799318
dataset_size: 372100773.5
- config_name: sharegpt4v(llava)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 781795487.25
num_examples: 29990
download_size: 400344187
dataset_size: 781795487.25
- config_name: sharegpt4v(sam)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 4437405218.25
num_examples: 8990
download_size: 4428597081
dataset_size: 4437405218.25
- config_name: sroie
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 117810195
num_examples: 33616
download_size: 103647636
dataset_size: 117810195
- config_name: st_vqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 5771194098.75
num_examples: 17242
download_size: 5768888141
dataset_size: 5771194098.75
- config_name: tabmwp(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 311192518.375
num_examples: 22717
download_size: 306092255
dataset_size: 311192518.375
- config_name: tallyqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 35998988065.625
num_examples: 98675
download_size: 35982430394
dataset_size: 35998988065.625
- config_name: textcaps
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2222268476.25
num_examples: 21942
download_size: 2217838132
dataset_size: 2222268476.25
- config_name: textocr(gpt4v)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2581655353
num_examples: 25104
download_size: 2574418106
dataset_size: 2581655353
- config_name: tqa(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 331203026.25
num_examples: 27302
download_size: 326999466
dataset_size: 331203026.25
- config_name: ureader_cap
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 9269857109.75
num_examples: 91434
download_size: 2292099971
dataset_size: 9269857109.75
- config_name: ureader_ie
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 11871457209.75
num_examples: 17322
download_size: 1999083115
dataset_size: 11871457209.75
- config_name: vision_flan(filtered)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 24847242604.5
num_examples: 186060
download_size: 24750561877
dataset_size: 24847242604.5
- config_name: vistext(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 550187184.5
num_examples: 9964
download_size: 452795103
dataset_size: 550187184.5
- config_name: visual7w(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 4451436523.875
num_examples: 14361
download_size: 4441971985
dataset_size: 4451436523.875
- config_name: visualmrc(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 2938154124.25
num_examples: 3022
download_size: 2909296079
dataset_size: 2938154124.25
- config_name: vqarad(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 95533417
num_examples: 308
download_size: 95410398
dataset_size: 95533417
- config_name: vsr(cauldron,llava_format)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 891981646
num_examples: 2152
download_size: 891572866
dataset_size: 891981646
- config_name: websight(cauldron)
features:
- name: id
dtype: string
- name: image
dtype: image
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 11209715828.625
num_examples: 9995
download_size: 11144460985
dataset_size: 11209715828.625
configs:
- config_name: CLEVR-Math(MathV360K)
data_files:
- split: train
path: CLEVR-Math(MathV360K)/train-*
- config_name: FigureQA(MathV360K)
data_files:
- split: train
path: FigureQA(MathV360K)/train-*
- config_name: GEOS(MathV360K)
data_files:
- split: train
path: GEOS(MathV360K)/train-*
- config_name: GeoQA+(MathV360K)
data_files:
- split: train
path: GeoQA+(MathV360K)/train-*
- config_name: Geometry3K(MathV360K)
data_files:
- split: train
path: Geometry3K(MathV360K)/train-*
- config_name: IconQA(MathV360K)
data_files:
- split: train
path: IconQA(MathV360K)/train-*
- config_name: MapQA(MathV360K)
data_files:
- split: train
path: MapQA(MathV360K)/train-*
- config_name: PMC-VQA(MathV360K)
data_files:
- split: train
path: PMC-VQA(MathV360K)/train-*
- config_name: Super-CLEVR(MathV360K)
data_files:
- split: train
path: Super-CLEVR(MathV360K)/train-*
- config_name: TabMWP(MathV360K)
data_files:
- split: train
path: TabMWP(MathV360K)/train-*
- config_name: UniGeo(MathV360K)
data_files:
- split: train
path: UniGeo(MathV360K)/train-*
- config_name: VisualWebInstruct(filtered)
data_files:
- split: train
path: VisualWebInstruct(filtered)/train-*
- config_name: VizWiz(MathV360K)
data_files:
- split: train
path: VizWiz(MathV360K)/train-*
- config_name: ai2d(cauldron,llava_format)
data_files:
- split: train
path: ai2d(cauldron,llava_format)/train-*
- config_name: ai2d(gpt4v)
data_files:
- split: train
path: ai2d(gpt4v)/train-*
- config_name: ai2d(internvl)
data_files:
- split: train
path: ai2d(internvl)/train-*
- config_name: allava_instruct_laion4v
data_files:
- split: train
path: allava_instruct_laion4v/train-*
- config_name: allava_instruct_vflan4v
data_files:
- split: train
path: allava_instruct_vflan4v/train-*
- config_name: aokvqa(cauldron,llava_format)
data_files:
- split: train
path: aokvqa(cauldron,llava_format)/train-*
- config_name: chart2text(cauldron)
data_files:
- split: train
path: chart2text(cauldron)/train-*
- config_name: chartqa(cauldron,llava_format)
data_files:
- split: train
path: chartqa(cauldron,llava_format)/train-*
- config_name: chrome_writting
data_files:
- split: train
path: chrome_writting/train-*
- config_name: clevr(cauldron,llava_format)
data_files:
- split: train
path: clevr(cauldron,llava_format)/train-*
- config_name: diagram_image_to_text(cauldron)
data_files:
- split: train
path: diagram_image_to_text(cauldron)/train-*
- config_name: dvqa(cauldron,llava_format)
data_files:
- split: train
path: dvqa(cauldron,llava_format)/train-*
- config_name: figureqa(cauldron,llava_format)
data_files:
- split: train
path: figureqa(cauldron,llava_format)/train-*
- config_name: geo170k(align)
data_files:
- split: train
path: geo170k(align)/train-*
- config_name: geo170k(qa)
data_files:
- split: train
path: geo170k(qa)/train-*
- config_name: geo3k
data_files:
- split: train
path: geo3k/train-*
- config_name: geomverse(cauldron)
data_files:
- split: train
path: geomverse(cauldron)/train-*
- config_name: hateful_memes(cauldron,llava_format)
data_files:
- split: train
path: hateful_memes(cauldron,llava_format)/train-*
- config_name: hitab(cauldron,llava_format)
data_files:
- split: train
path: hitab(cauldron,llava_format)/train-*
- config_name: hme100k
data_files:
- split: train
path: hme100k/train-*
- config_name: iam(cauldron)
data_files:
- split: train
path: iam(cauldron)/train-*
- config_name: iconqa(cauldron,llava_format)
data_files:
- split: train
path: iconqa(cauldron,llava_format)/train-*
- config_name: iiit5k
data_files:
- split: train
path: iiit5k/train-*
- config_name: image_textualization(filtered)
data_files:
- split: train
path: image_textualization(filtered)/train-*
- config_name: infographic(gpt4v)
data_files:
- split: train
path: infographic(gpt4v)/train-*
- config_name: infographic_vqa
data_files:
- split: train
path: infographic_vqa/train-*
- config_name: infographic_vqa_llava_format
data_files:
- split: train
path: infographic_vqa_llava_format/train-*
- config_name: intergps(cauldron,llava_format)
data_files:
- split: train
path: intergps(cauldron,llava_format)/train-*
- config_name: k12_printing
data_files:
- split: train
path: k12_printing/train-*
- config_name: llavar_gpt4_20k
data_files:
- split: train
path: llavar_gpt4_20k/train-*
- config_name: lrv_chart
data_files:
- split: train
path: lrv_chart/train-*
- config_name: lrv_normal(filtered)
data_files:
- split: train
path: lrv_normal(filtered)/train-*
- config_name: magpie_pro(l3_80b_mt)
data_files:
- split: train
path: magpie_pro(l3_80b_mt)/train-*
- config_name: magpie_pro(l3_80b_st)
data_files:
- split: train
path: magpie_pro(l3_80b_st)/train-*
- config_name: magpie_pro(qwen2_72b_st)
data_files:
- split: train
path: magpie_pro(qwen2_72b_st)/train-*
- config_name: mapqa(cauldron,llava_format)
data_files:
- split: train
path: mapqa(cauldron,llava_format)/train-*
- config_name: mathqa
data_files:
- split: train
path: mathqa/train-*
- config_name: mavis_math_metagen
data_files:
- split: train
path: mavis_math_metagen/train-*
- config_name: mavis_math_rule_geo
data_files:
- split: train
path: mavis_math_rule_geo/train-*
- config_name: multihiertt(cauldron)
data_files:
- split: train
path: multihiertt(cauldron)/train-*
- config_name: orand_car_a
data_files:
- split: train
path: orand_car_a/train-*
- config_name: raven(cauldron)
data_files:
- split: train
path: raven(cauldron)/train-*
- config_name: rendered_text(cauldron)
data_files:
- split: train
path: rendered_text(cauldron)/train-*
- config_name: robut_sqa(cauldron)
data_files:
- split: train
path: robut_sqa(cauldron)/train-*
- config_name: robut_wikisql(cauldron)
data_files:
- split: train
path: robut_wikisql(cauldron)/train-*
- config_name: robut_wtq(cauldron,llava_format)
data_files:
- split: train
path: robut_wtq(cauldron,llava_format)/train-*
- config_name: scienceqa(cauldron,llava_format)
data_files:
- split: train
path: scienceqa(cauldron,llava_format)/train-*
- config_name: scienceqa(nona_context)
data_files:
- split: train
path: scienceqa(nona_context)/train-*
- config_name: screen2words(cauldron)
data_files:
- split: train
path: screen2words(cauldron)/train-*
- config_name: sharegpt4o
data_files:
- split: train
path: sharegpt4o/train-*
- config_name: sharegpt4v(coco)
data_files:
- split: train
path: sharegpt4v(coco)/train-*
- config_name: sharegpt4v(knowledge)
data_files:
- split: train
path: sharegpt4v(knowledge)/train-*
- config_name: sharegpt4v(llava)
data_files:
- split: train
path: sharegpt4v(llava)/train-*
- config_name: sharegpt4v(sam)
data_files:
- split: train
path: sharegpt4v(sam)/train-*
- config_name: sroie
data_files:
- split: train
path: sroie/train-*
- config_name: st_vqa(cauldron,llava_format)
data_files:
- split: train
path: st_vqa(cauldron,llava_format)/train-*
- config_name: tabmwp(cauldron)
data_files:
- split: train
path: tabmwp(cauldron)/train-*
- config_name: tallyqa(cauldron,llava_format)
data_files:
- split: train
path: tallyqa(cauldron,llava_format)/train-*
- config_name: textcaps
data_files:
- split: train
path: textcaps/train-*
- config_name: textocr(gpt4v)
data_files:
- split: train
path: textocr(gpt4v)/train-*
- config_name: tqa(cauldron,llava_format)
data_files:
- split: train
path: tqa(cauldron,llava_format)/train-*
- config_name: ureader_cap
data_files:
- split: train
path: ureader_cap/train-*
- config_name: ureader_ie
data_files:
- split: train
path: ureader_ie/train-*
- config_name: vision_flan(filtered)
data_files:
- split: train
path: vision_flan(filtered)/train-*
- config_name: vistext(cauldron)
data_files:
- split: train
path: vistext(cauldron)/train-*
- config_name: visual7w(cauldron,llava_format)
data_files:
- split: train
path: visual7w(cauldron,llava_format)/train-*
- config_name: visualmrc(cauldron)
data_files:
- split: train
path: visualmrc(cauldron)/train-*
- config_name: vqarad(cauldron,llava_format)
data_files:
- split: train
path: vqarad(cauldron,llava_format)/train-*
- config_name: vsr(cauldron,llava_format)
data_files:
- split: train
path: vsr(cauldron,llava_format)/train-*
- config_name: websight(cauldron)
data_files:
- split: train
path: websight(cauldron)/train-*
---
# Dataset Card for LLaVA-OneVision
**[2024-09-01]: Uploaded VisualWebInstruct(filtered), which is used in the OneVision stage**
> Almost all subsets are uploaded in HF's required format; you can use the recommended interface to download them and follow our code below to convert them.
> The `ureader_kg` and `ureader_qa` subsets are uploaded as processed JSONs plus tar.gz archives of the image folders.
> You may download them directly from the following URL.
> https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data/tree/main/ureader_kg
In this dataset, we include the data splits used in both the final image stage and the one-vision stage. For more details, please check our [paper](https://arxiv.org/abs/2408.03326) and our [training doc](https://github.com/LLaVA-VL/LLaVA-NeXT/tree/main/scripts/train#about-the-llava-onevision-data).
## Dataset Description
- **Curated by:** Bo Li, Kaichen Zhang, Hao Zhang, Yuanhan Zhang, Renrui Zhang, Feng Li, Dong Guo
- **Language(s) (NLP):** English, Chinese
- **License:** Apache License 2.0
## Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Dataset Collection:** We include a few subsets from the existing dataset collections [Cambrian](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M), [Cauldron](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron), and [UReader](https://arxiv.org/abs/2310.05126). Since we only used a few subsets from these collections and applied our cleaning and re-annotation process, we uploaded our processed versions of these datasets into our own repository; we thank the authors for providing the original datasets.
- **Other Datasets:** For the remaining single-source datasets, such as AI2D and OKVQA, we cite and link the original sources in our paper.
## Uses
This dataset is used for the training of the LLaVA-OneVision model. We only allow the use of this dataset for academic research and education purposes. For data generated with OpenAI's GPT-4, we recommend that users check the [OpenAI Usage Policy](https://openai.com/policies/usage-policies/).
## Dataset Structure
We explain the data composition for the mid-stage and final-stage training in our repo's [**training doc**](https://github.com/LLaVA-VL/LLaVA-NeXT/tree/main/scripts/train#about-the-llava-onevision-data).
### Statistics
We provide statistics of the dataset in the following figures and refer readers to our paper for more details.


### Code Guidance
To help the audience better understand our dataset, we upload it in a Hugging Face Datasets-compatible format. During LLaVA-OneVision training, we use a `json` file and an `image/video` folder to store the data.
> The `ureader_kg` and `ureader_qa` subsets are uploaded as processed JSONs plus tar.gz archives of the image folders. You may download them directly from the following URL.
> https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data/tree/main/ureader_kg
Below we provide code guidance for converting the dataset into the LLaVA-OneVision format and training the LLaVA-OneVision model with the converted dataset.
```python
import os
import json

from datasets import load_dataset
from tqdm import tqdm

# The repository exposes each subset as a separate config, so a config name must
# be passed; "textcaps" below is only an example. Pick any config listed above.
data = load_dataset("lmms-lab/LLaVA-OneVision-Data", "textcaps", split="train")

image_folder = "<your_image_folder>"

converted_data = []
for da in tqdm(data):
    json_data = {}
    json_data["id"] = da["id"]
    if da["image"] is not None:
        # Save the PIL image to disk and reference it by filename in the JSON.
        json_data["image"] = f"{da['id']}.jpg"
        da["image"].save(os.path.join(image_folder, json_data["image"]))
    json_data["conversations"] = da["conversations"]
    converted_data.append(json_data)

with open("<your_json_file>.json", "w") as f:
    json.dump(converted_data, f, indent=4, ensure_ascii=False)
```
## Citation
**BibTeX:**
[More Information Needed]
## Glossary
The dataset collection process was conducted by all of the authors; we thank Feng Li and Renrui Zhang for providing the [LLaVA-M4-Instruct Data](https://huggingface.co/datasets/lmms-lab/M4-Instruct-Data) and Yuanhan for providing the [Video datasets](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K).
After the dataset collection, the cleaning and re-annotation process, including the final mixture of the dataset, was conducted by Bo Li with the great help of Kaichen Zhang.
## Dataset Card Authors
The dataset is curated by the following authors:
Bo Li, Kaichen Zhang, Hao Zhang, Yuanhan Zhang, Renrui Zhang, Feng Li
## Dataset Card Contact
[Bo Li](https://brianboli.com/): [email protected]
[Kaichen Zhang](https://www.linkedin.com/in/kaichen-zhang-014b17219/?originalSubdomain=sg) |
MU-NLPC/Calc-svamp | MU-NLPC | "2023-10-30T15:05:26Z" | 23,606 | 0 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2305.15017",
"region:us",
"math world problems",
"math",
"arithmetics"
] | [
"text-generation"
] | "2023-09-08T14:56:46Z" | ---
language:
- en
license: mit
size_categories:
- n<1K
task_categories:
- text-generation
tags:
- math world problems
- math
- arithmetics
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: equation
dtype: string
- name: problem_type
dtype: string
splits:
- name: test
num_bytes: 335744
num_examples: 1000
download_size: 116449
dataset_size: 335744
- config_name: original-splits
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: equation
dtype: string
- name: problem_type
dtype: string
splits:
- name: test
num_bytes: 335744
num_examples: 1000
download_size: 116449
dataset_size: 335744
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- config_name: original-splits
data_files:
- split: test
path: original-splits/test-*
---
# Dataset Card for Calc-SVAMP
## Summary
The dataset is a collection of simple math word problems focused on arithmetic. It is derived from <https://github.com/arkilpatel/SVAMP/>.
The main addition in this dataset variant is the `chain` column. It was created by converting the solution to a simple HTML-like language that can be easily
parsed (e.g. by BeautifulSoup; see the parsing sketch after the list below). The data contains three types of tags:
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer to the mathematical problem (a number)
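
A minimal parsing sketch (the dataset id and the `test` split come from this card; the specific content of the first record's chain is not guaranteed and is used only for illustration):

```python
from datasets import load_dataset
from bs4 import BeautifulSoup

dataset = load_dataset("MU-NLPC/Calc-svamp", split="test")
chain = dataset[0]["chain"]

soup = BeautifulSoup(chain, "html.parser")
for gadget in soup.find_all("gadget"):    # expressions sent to the external calculator
    print("calculator input :", gadget.get_text())
for output in soup.find_all("output"):    # what the calculator returned
    print("calculator output:", output.get_text())
print("final result:", soup.find("result").get_text())
```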
## Supported Tasks
This variant of the dataset is intended for training Chain-of-Thought reasoning models able to use external tools to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Construction process
We created the dataset by converting the **equation** attribute in the original dataset to a sequence (chain) of calculations, with the final one being the answer to the math problem.
We also perform in-dataset and cross-dataset data-leak detection within the [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
However, for SVAMP specifically, we detected no data leaks and filtered no data.
## Content and data splits
The dataset contains the same data instances as the original dataset, except for a correction of an inconsistency between `equation` and `answer` in one data instance.
To the best of our knowledge, the original dataset does not contain an official train-test split. We treat the whole dataset as a testing benchmark.
## Attributes:
- **id**: problem id from the original dataset
- **question**: the question to be answered
- **chain**: series of simple operations (derived from `equation`) that leads to the solution
- **result**: the result (number) as a string
- **result_float**: the result converted to a floating-point number
- **equation**: a nested expression that evaluates to the correct result
- **problem_type**: a category of the problem
Attributes **id**, **question**, **chain**, and **result** are present in all datasets in [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
## Related work
This dataset was created as a part of a larger effort in training models capable of using a calculator during inference, which we call Calcformers.
- [**Calc-X collection**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) - datasets for training Calcformers
- [**Calcformers collection**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5) - calculator-using models we trained and published on HF
- [**Calc-X and Calcformers paper**](https://arxiv.org/abs/2305.15017)
- [**Calc-X and Calcformers repo**](https://github.com/prompteus/calc-x)
Here are links to the original dataset:
- [**original SVAMP dataset and repo**](https://github.com/arkilpatel/SVAMP/)
- [**original SVAMP paper**](https://www.semanticscholar.org/paper/Are-NLP-Models-really-able-to-Solve-Simple-Math-Patel-Bhattamishra/13c4e5a6122f3fa2663f63e49537091da6532f35)
## Licence
MIT, consistent with the original source dataset linked above.
## Cite
If you use this version of dataset in research, please cite the original [SVAMP paper](https://www.semanticscholar.org/paper/Are-NLP-Models-really-able-to-Solve-Simple-Math-Patel-Bhattamishra/13c4e5a6122f3fa2663f63e49537091da6532f35), and [Calc-X collection](https://arxiv.org/abs/2305.15017) as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
booktitle = "Proceedings of the The 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
month = dec,
year = "2023",
address = "Singapore, Singapore",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.15017",
}
``` |
cardiffnlp/tweet_eval | cardiffnlp | "2024-01-04T16:40:33Z" | 23,588 | 119 | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|other-tweet-datasets",
"language:en",
"license:unknown",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2010.12421",
"region:us"
] | [
"text-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- extended|other-tweet-datasets
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-class-classification
- sentiment-classification
paperswithcode_id: tweeteval
pretty_name: TweetEval
config_names:
- emoji
- emotion
- hate
- irony
- offensive
- sentiment
- stance_abortion
- stance_atheism
- stance_climate
- stance_feminist
- stance_hillary
dataset_info:
- config_name: emoji
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': ❤
'1': 😍
'2': 😂
'3': 💕
'4': 🔥
'5': 😊
'6': 😎
'7': ✨
'8': 💙
'9': 😘
'10': 📷
'11': 🇺🇸
'12': ☀
'13': 💜
'14': 😉
'15': 💯
'16': 😁
'17': 🎄
'18': 📸
'19': 😜
splits:
- name: train
num_bytes: 3803167
num_examples: 45000
- name: test
num_bytes: 4255901
num_examples: 50000
- name: validation
num_bytes: 396079
num_examples: 5000
download_size: 5939308
dataset_size: 8455147
- config_name: emotion
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': anger
'1': joy
'2': optimism
'3': sadness
splits:
- name: train
num_bytes: 338871
num_examples: 3257
- name: test
num_bytes: 146645
num_examples: 1421
- name: validation
num_bytes: 38273
num_examples: 374
download_size: 367016
dataset_size: 523789
- config_name: hate
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': non-hate
'1': hate
splits:
- name: train
num_bytes: 1223650
num_examples: 9000
- name: test
num_bytes: 428934
num_examples: 2970
- name: validation
num_bytes: 154144
num_examples: 1000
download_size: 1196346
dataset_size: 1806728
- config_name: irony
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': non_irony
'1': irony
splits:
- name: train
num_bytes: 259187
num_examples: 2862
- name: test
num_bytes: 75897
num_examples: 784
- name: validation
num_bytes: 86017
num_examples: 955
download_size: 297647
dataset_size: 421101
- config_name: offensive
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': non-offensive
'1': offensive
splits:
- name: train
num_bytes: 1648061
num_examples: 11916
- name: test
num_bytes: 135473
num_examples: 860
- name: validation
num_bytes: 192417
num_examples: 1324
download_size: 1234528
dataset_size: 1975951
- config_name: sentiment
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 5425122
num_examples: 45615
- name: test
num_bytes: 1279540
num_examples: 12284
- name: validation
num_bytes: 239084
num_examples: 2000
download_size: 4849675
dataset_size: 6943746
- config_name: stance_abortion
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 68694
num_examples: 587
- name: test
num_bytes: 33171
num_examples: 280
- name: validation
num_bytes: 7657
num_examples: 66
download_size: 73517
dataset_size: 109522
- config_name: stance_atheism
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 54775
num_examples: 461
- name: test
num_bytes: 25716
num_examples: 220
- name: validation
num_bytes: 6320
num_examples: 52
download_size: 62265
dataset_size: 86811
- config_name: stance_climate
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 40249
num_examples: 355
- name: test
num_bytes: 19925
num_examples: 169
- name: validation
num_bytes: 4801
num_examples: 40
download_size: 48493
dataset_size: 64975
- config_name: stance_feminist
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 70509
num_examples: 597
- name: test
num_bytes: 33305
num_examples: 285
- name: validation
num_bytes: 8035
num_examples: 67
download_size: 76345
dataset_size: 111849
- config_name: stance_hillary
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 69596
num_examples: 620
- name: test
num_bytes: 34487
num_examples: 295
- name: validation
num_bytes: 7532
num_examples: 69
download_size: 74057
dataset_size: 111615
configs:
- config_name: emoji
data_files:
- split: train
path: emoji/train-*
- split: test
path: emoji/test-*
- split: validation
path: emoji/validation-*
- config_name: emotion
data_files:
- split: train
path: emotion/train-*
- split: test
path: emotion/test-*
- split: validation
path: emotion/validation-*
- config_name: hate
data_files:
- split: train
path: hate/train-*
- split: test
path: hate/test-*
- split: validation
path: hate/validation-*
- config_name: irony
data_files:
- split: train
path: irony/train-*
- split: test
path: irony/test-*
- split: validation
path: irony/validation-*
- config_name: offensive
data_files:
- split: train
path: offensive/train-*
- split: test
path: offensive/test-*
- split: validation
path: offensive/validation-*
- config_name: sentiment
data_files:
- split: train
path: sentiment/train-*
- split: test
path: sentiment/test-*
- split: validation
path: sentiment/validation-*
- config_name: stance_abortion
data_files:
- split: train
path: stance_abortion/train-*
- split: test
path: stance_abortion/test-*
- split: validation
path: stance_abortion/validation-*
- config_name: stance_atheism
data_files:
- split: train
path: stance_atheism/train-*
- split: test
path: stance_atheism/test-*
- split: validation
path: stance_atheism/validation-*
- config_name: stance_climate
data_files:
- split: train
path: stance_climate/train-*
- split: test
path: stance_climate/test-*
- split: validation
path: stance_climate/validation-*
- config_name: stance_feminist
data_files:
- split: train
path: stance_feminist/train-*
- split: test
path: stance_feminist/test-*
- split: validation
path: stance_feminist/validation-*
- config_name: stance_hillary
data_files:
- split: train
path: stance_hillary/train-*
- split: test
path: stance_hillary/test-*
- split: validation
path: stance_hillary/validation-*
train-eval-index:
- config: emotion
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
- config: hate
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 binary
args:
average: binary
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
- config: irony
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 binary
args:
average: binary
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
- config: offensive
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 binary
args:
average: binary
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
- config: sentiment
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for tweet_eval
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [GitHub](https://github.com/cardiffnlp/tweeteval)
- **Paper:** [EMNLP Paper](https://arxiv.org/pdf/2010.12421.pdf)
- **Leaderboard:** [GitHub Leaderboard](https://github.com/cardiffnlp/tweeteval)
- **Point of Contact:** [Needs More Information]
### Dataset Summary
TweetEval consists of seven heterogeneous Twitter tasks, all framed as multi-class tweet classification: irony, hate speech, offensive language, stance, emoji, emotion, and sentiment. All tasks have been unified into the same benchmark, with each dataset presented in the same format and with fixed training, validation and test splits.
### Supported Tasks and Leaderboards
- `text_classification`: Each task can be used to train a sequence classification model from Hugging Face Transformers, as sketched below.
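
A minimal fine-tuning sketch (the `emotion` config, the `distilbert-base-uncased` checkpoint, and the hyperparameters are illustrative choices, not part of the benchmark):

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

dataset = load_dataset("cardiffnlp/tweet_eval", "emotion")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)
num_labels = dataset["train"].features["label"].num_classes
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=num_labels
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tweeteval-emotion", num_train_epochs=3),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
)
trainer.train()
```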
### Languages
The text in the dataset is in English, as spoken by Twitter users.
## Dataset Structure
### Data Instances
An instance from `emoji` config:
```
{'label': 12, 'text': 'Sunday afternoon walking through Venice in the sun with @user ️ ️ ️ @ Abbot Kinney, Venice'}
```
An instance from `emotion` config:
```
{'label': 2, 'text': "“Worry is a down payment on a problem you may never have'. \xa0Joyce Meyer. #motivation #leadership #worry"}
```
An instance from `hate` config:
```
{'label': 0, 'text': '@user nice new signage. Are you not concerned by Beatlemania -style hysterical crowds crongregating on you…'}
```
An instance from `irony` config:
```
{'label': 1, 'text': 'seeing ppl walking w/ crutches makes me really excited for the next 3 weeks of my life'}
```
An instance from `offensive` config:
```
{'label': 0, 'text': '@user Bono... who cares. Soon people will understand that they gain nothing from following a phony celebrity. Become a Leader of your people instead or help and support your fellow countrymen.'}
```
An instance from `sentiment` config:
```
{'label': 2, 'text': '"QT @user In the original draft of the 7th book, Remus Lupin survived the Battle of Hogwarts. #HappyBirthdayRemusLupin"'}
```
An instance from `stance_abortion` config:
```
{'label': 1, 'text': 'we remind ourselves that love means to be willing to give until it hurts - Mother Teresa'}
```
An instance from `stance_atheism` config:
```
{'label': 1, 'text': '@user Bless Almighty God, Almighty Holy Spirit and the Messiah. #SemST'}
```
An instance from `stance_climate` config:
```
{'label': 0, 'text': 'Why Is The Pope Upset? via @user #UnzippedTruth #PopeFrancis #SemST'}
```
An instance from `stance_feminist` config:
```
{'label': 1, 'text': "@user @user is the UK's answer to @user and @user #GamerGate #SemST"}
```
An instance from `stance_hillary` config:
```
{'label': 1, 'text': "If a man demanded staff to get him an ice tea he'd be called a sexists elitist pig.. Oink oink #Hillary #SemST"}
```
### Data Fields
For `emoji` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: ❤
`1`: 😍
`2`: 😂
`3`: 💕
`4`: 🔥
`5`: 😊
`6`: 😎
`7`: ✨
`8`: 💙
`9`: 😘
`10`: 📷
`11`: 🇺🇸
`12`: ☀
`13`: 💜
`14`: 😉
`15`: 💯
`16`: 😁
`17`: 🎄
`18`: 📸
`19`: 😜
For `emotion` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: anger
`1`: joy
`2`: optimism
`3`: sadness
For `hate` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: non-hate
`1`: hate
For `irony` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: non_irony
`1`: irony
For `offensive` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: non-offensive
`1`: offensive
For `sentiment` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: negative
`1`: neutral
`2`: positive
For `stance_abortion` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
For `stance_atheism` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
For `stance_climate` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
For `stance_feminist` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
For `stance_hillary` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
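
The integer labels above can be mapped back to their string names through each config's `ClassLabel` feature; a small sketch (the `sentiment` config is used only as an example):

```python
from datasets import load_dataset

dataset = load_dataset("cardiffnlp/tweet_eval", "sentiment", split="test")
label_names = dataset.features["label"].names  # ['negative', 'neutral', 'positive']

example = dataset[0]
print(example["text"], "->", label_names[example["label"]])
# Equivalent helper on the ClassLabel feature:
print(dataset.features["label"].int2str(example["label"]))
```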
### Data Splits
| name | train | validation | test |
| --------------- | ----- | ---------- | ----- |
| emoji | 45000 | 5000 | 50000 |
| emotion | 3257 | 374 | 1421 |
| hate | 9000 | 1000 | 2970 |
| irony | 2862 | 955 | 784 |
| offensive | 11916 | 1324 | 860 |
| sentiment | 45615 | 2000 | 12284 |
| stance_abortion | 587 | 66 | 280 |
| stance_atheism | 461 | 52 | 220 |
| stance_climate | 355 | 40 | 169 |
| stance_feminist | 597 | 67 | 285 |
| stance_hillary | 620 | 69 | 295 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Francesco Barbieri, Jose Camacho-Collados, Luis Espinosa-Anke and Leonardo Neves through Cardiff NLP.
### Licensing Information
This is not a single dataset; therefore, each subset has its own license (the collection itself does not impose additional restrictions).
All of the datasets require complying with the Twitter [Terms of Service](https://twitter.com/tos) and the Twitter API [Terms of Service](https://developer.twitter.com/en/developer-terms/agreement-and-policy).
Additionally, the licenses are:
- emoji: Undefined
- emotion(EmoInt): Undefined
- hate (HateEval): Need permission [here](http://hatespeech.di.unito.it/hateval.html)
- irony: Undefined
- Offensive: Undefined
- Sentiment: [Creative Commons Attribution 3.0 Unported License](https://groups.google.com/g/semevaltweet/c/k5DDcvVb_Vo/m/zEOdECFyBQAJ)
- Stance: Undefined
### Citation Information
```
@inproceedings{barbieri2020tweeteval,
title={{TweetEval:Unified Benchmark and Comparative Evaluation for Tweet Classification}},
author={Barbieri, Francesco and Camacho-Collados, Jose and Espinosa-Anke, Luis and Neves, Leonardo},
booktitle={Proceedings of Findings of EMNLP},
year={2020}
}
```
If you use any of the TweetEval datasets, please cite their original publications:
#### Emotion Recognition:
```
@inproceedings{mohammad2018semeval,
title={Semeval-2018 task 1: Affect in tweets},
author={Mohammad, Saif and Bravo-Marquez, Felipe and Salameh, Mohammad and Kiritchenko, Svetlana},
booktitle={Proceedings of the 12th international workshop on semantic evaluation},
pages={1--17},
year={2018}
}
```
#### Emoji Prediction:
```
@inproceedings{barbieri2018semeval,
title={Semeval 2018 task 2: Multilingual emoji prediction},
author={Barbieri, Francesco and Camacho-Collados, Jose and Ronzano, Francesco and Espinosa-Anke, Luis and
Ballesteros, Miguel and Basile, Valerio and Patti, Viviana and Saggion, Horacio},
booktitle={Proceedings of The 12th International Workshop on Semantic Evaluation},
pages={24--33},
year={2018}
}
```
#### Irony Detection:
```
@inproceedings{van2018semeval,
title={Semeval-2018 task 3: Irony detection in english tweets},
author={Van Hee, Cynthia and Lefever, Els and Hoste, V{\'e}ronique},
booktitle={Proceedings of The 12th International Workshop on Semantic Evaluation},
pages={39--50},
year={2018}
}
```
#### Hate Speech Detection:
```
@inproceedings{basile-etal-2019-semeval,
title = "{S}em{E}val-2019 Task 5: Multilingual Detection of Hate Speech Against Immigrants and Women in {T}witter",
author = "Basile, Valerio and Bosco, Cristina and Fersini, Elisabetta and Nozza, Debora and Patti, Viviana and
Rangel Pardo, Francisco Manuel and Rosso, Paolo and Sanguinetti, Manuela",
booktitle = "Proceedings of the 13th International Workshop on Semantic Evaluation",
year = "2019",
address = "Minneapolis, Minnesota, USA",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/S19-2007",
doi = "10.18653/v1/S19-2007",
pages = "54--63"
}
```
#### Offensive Language Identification:
```
@inproceedings{zampieri2019semeval,
title={SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval)},
author={Zampieri, Marcos and Malmasi, Shervin and Nakov, Preslav and Rosenthal, Sara and Farra, Noura and Kumar, Ritesh},
booktitle={Proceedings of the 13th International Workshop on Semantic Evaluation},
pages={75--86},
year={2019}
}
```
#### Sentiment Analysis:
```
@inproceedings{rosenthal2017semeval,
title={SemEval-2017 task 4: Sentiment analysis in Twitter},
author={Rosenthal, Sara and Farra, Noura and Nakov, Preslav},
booktitle={Proceedings of the 11th international workshop on semantic evaluation (SemEval-2017)},
pages={502--518},
year={2017}
}
```
#### Stance Detection:
```
@inproceedings{mohammad2016semeval,
title={Semeval-2016 task 6: Detecting stance in tweets},
author={Mohammad, Saif and Kiritchenko, Svetlana and Sobhani, Parinaz and Zhu, Xiaodan and Cherry, Colin},
booktitle={Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)},
pages={31--41},
year={2016}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) and [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
parrotzone/sdxl-1.0 | parrotzone | "2023-09-20T12:27:51Z" | 23,476 | 10 | [
"license:openrail++",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2023-07-31T07:18:18Z" | ---
license: openrail++
---
# check [sdxl.parrotzone.art](https://sdxl.parrotzone.art) for easy viewing ⋆。°✩
---
## all images were made with SDXL 1.0 + the 0.9 VAE
- steps: 20
- cfg scale: 7
- no refiner
- random seeds
|
Matthijs/cmu-arctic-xvectors | Matthijs | "2023-02-07T14:04:48Z" | 23,402 | 46 | [
"task_categories:text-to-speech",
"task_categories:audio-to-audio",
"license:mit",
"size_categories:1K<n<10K",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-to-speech",
"audio-to-audio"
] | "2023-02-07T12:39:22Z" | ---
pretty_name: CMU ARCTIC X-Vectors
task_categories:
- text-to-speech
- audio-to-audio
license: mit
---
# Speaker embeddings extracted from CMU ARCTIC
There is one `.npy` file for each utterance in the dataset, 7931 files in total. The speaker embeddings are 512-element X-vectors.
The [CMU ARCTIC](http://www.festvox.org/cmu_arctic/) dataset divides the utterances among the following speakers:
- bdl (US male)
- slt (US female)
- jmk (Canadian male)
- awb (Scottish male)
- rms (US male)
- clb (US female)
- ksp (Indian male)
The X-vectors were extracted using [this script](https://huggingface.co/mechanicalsea/speecht5-vc/blob/main/manifest/utils/prep_cmu_arctic_spkemb.py), which uses the `speechbrain/spkrec-xvect-voxceleb` model.
Usage:
```python
import torch
from datasets import load_dataset
embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = embeddings_dataset[7306]["xvector"]
speaker_embeddings = torch.tensor(speaker_embeddings).unsqueeze(0)
```
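As a follow-up sketch, the prepared embedding can be passed to a SpeechT5 text-to-speech model (the `microsoft/speecht5_tts` and `microsoft/speecht5_hifigan` checkpoints are assumptions about your setup, not requirements of this dataset):

```python
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, world.", return_tensors="pt")
# speaker_embeddings is the (1, 512) tensor prepared above
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```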
|
lukaemon/bbh | lukaemon | "2023-02-02T01:14:46Z" | 23,398 | 55 | [
"size_categories:1K<n<10K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2023-02-01T07:46:51Z" | ---
dataset_info:
- config_name: boolean_expressions
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 11790
num_examples: 250
download_size: 17172
dataset_size: 11790
- config_name: causal_judgement
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 198021
num_examples: 187
download_size: 202943
dataset_size: 198021
- config_name: date_understanding
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 54666
num_examples: 250
download_size: 61760
dataset_size: 54666
- config_name: disambiguation_qa
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 78620
num_examples: 250
download_size: 85255
dataset_size: 78620
- config_name: dyck_languages
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 38432
num_examples: 250
download_size: 43814
dataset_size: 38432
- config_name: formal_fallacies
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 138224
num_examples: 250
download_size: 145562
dataset_size: 138224
- config_name: geometric_shapes
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 68560
num_examples: 250
download_size: 77242
dataset_size: 68560
- config_name: hyperbaton
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 38574
num_examples: 250
download_size: 44706
dataset_size: 38574
- config_name: logical_deduction_five_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 148595
num_examples: 250
download_size: 155477
dataset_size: 148595
- config_name: logical_deduction_seven_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 191022
num_examples: 250
download_size: 198404
dataset_size: 191022
- config_name: logical_deduction_three_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 105831
num_examples: 250
download_size: 112213
dataset_size: 105831
- config_name: movie_recommendation
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 50985
num_examples: 250
download_size: 57684
dataset_size: 50985
- config_name: multistep_arithmetic_two
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 12943
num_examples: 250
download_size: 18325
dataset_size: 12943
- config_name: navigate
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 49031
num_examples: 250
download_size: 55163
dataset_size: 49031
- config_name: object_counting
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 30508
num_examples: 250
download_size: 35890
dataset_size: 30508
- config_name: penguins_in_a_table
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 70062
num_examples: 146
download_size: 74516
dataset_size: 70062
- config_name: reasoning_about_colored_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 89579
num_examples: 250
download_size: 98694
dataset_size: 89579
- config_name: ruin_names
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 46537
num_examples: 250
download_size: 53178
dataset_size: 46537
- config_name: salient_translation_error_detection
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 277110
num_examples: 250
download_size: 286443
dataset_size: 277110
- config_name: snarks
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 38223
num_examples: 178
download_size: 42646
dataset_size: 38223
- config_name: sports_understanding
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 22723
num_examples: 250
download_size: 28617
dataset_size: 22723
- config_name: temporal_sequences
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 139546
num_examples: 250
download_size: 148176
dataset_size: 139546
- config_name: tracking_shuffled_objects_five_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 162590
num_examples: 250
download_size: 169722
dataset_size: 162590
- config_name: tracking_shuffled_objects_seven_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 207274
num_examples: 250
download_size: 214906
dataset_size: 207274
- config_name: tracking_shuffled_objects_three_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 122104
num_examples: 250
download_size: 128736
dataset_size: 122104
- config_name: web_of_lies
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 47582
num_examples: 250
download_size: 52964
dataset_size: 47582
- config_name: word_sorting
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 60918
num_examples: 250
download_size: 66300
dataset_size: 60918
---
# BIG-bench Hard dataset
homepage: https://github.com/suzgunmirac/BIG-Bench-Hard
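Each task listed above is exposed as a separate config with a single `test` split of `input`/`target` pairs; a minimal loading sketch (the `boolean_expressions` config is just one example):

```python
from datasets import load_dataset

# Load one of the 27 BBH tasks and inspect its first example.
task = load_dataset("lukaemon/bbh", "boolean_expressions", split="test")
print(task[0]["input"], "->", task[0]["target"])
```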
```
@article{suzgun2022challenging,
title={Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them},
author={Suzgun, Mirac and Scales, Nathan and Sch{\"a}rli, Nathanael and Gehrmann, Sebastian and Tay, Yi and Chung, Hyung Won and Chowdhery, Aakanksha and Le, Quoc V and Chi, Ed H and Zhou, Denny and Wei, Jason},
journal={arXiv preprint arXiv:2210.09261},
year={2022}
}
``` |
imageomics/TreeOfLife-10M | imageomics | "2024-12-20T16:05:17Z" | 23,331 | 26 | [
"task_categories:image-classification",
"task_categories:zero-shot-classification",
"language:en",
"language:la",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"doi:10.57967/hf/1972",
"region:us",
"biology",
"images",
"animals",
"evolutionary biology",
"CV",
"multimodal",
"clip",
"species",
"taxonomy",
"knowledge-guided",
"imbalanced"
] | [
"image-classification",
"zero-shot-classification"
] | "2024-01-23T21:06:53Z" | ---
license: cc0-1.0
language:
- en
- la
pretty_name: TreeOfLife-10M
task_categories:
- image-classification
- zero-shot-classification
tags:
- biology
- images
- animals
- evolutionary biology
- CV
- multimodal
- clip
- species
- taxonomy
- knowledge-guided
- imbalanced
size_categories: 10M<n<100M
---
# Dataset Card for TreeOfLife-10M
## Dataset Description
<!-- - **Homepage:** -->
- **Repository:** [Imageomics/bioclip](https://github.com/Imageomics/bioclip)
- **Paper:** BioCLIP: A Vision Foundation Model for the Tree of Life ([arXiv](https://doi.org/10.48550/arXiv.2311.18803))
<!-- - **Leaderboard:** -->
### Dataset Summary
With over 10 million images covering 454 thousand taxa in the tree of life, TreeOfLife-10M is the largest-to-date ML-ready dataset of images of biological organisms paired with their associated taxonomic labels. It expands on the foundation established by existing high-quality datasets, such as iNat21 and BIOSCAN-1M, by further incorporating newly curated images from the Encyclopedia of Life (eol.org), which supplies most of TreeOfLife-10M’s data diversity. Every image in TreeOfLife-10M is labeled to the most specific taxonomic level possible, as well as higher taxonomic ranks in the tree of life (see [Text Types](#text-types) for examples of taxonomic ranks and labels). TreeOfLife-10M was generated for the purpose of training [BioCLIP](https://huggingface.co/imageomics/bioclip) and future biology foundation models.
<!--This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). And further altered to suit Imageomics Institute needs. -->
**Figure 1.** Treemap from phyla down to family for TreeOfLife-10M. Interactive version available in [`visuals`](https://huggingface.co/datasets/imageomics/TreeOfLife-10M/tree/main/visuals) folder.
### Supported Tasks and Leaderboards
Image Classification, Zero-shot and few-shot Classification.
### Languages
English, Latin
## Dataset Contents
```
/dataset/
EOL/
image_set_01.tar.gz
image_set_02.tar.gz
...
image_set_63.tar.gz
metadata/
catalog.csv
species_level_taxonomy_chains.csv
taxon.tab
licenses.csv
naming/
bioscan_name_lookup.json
eol_name_lookup.json
inat21_name_lookup.json
visuals/
kingodm_ToL_tree.html
kingdom_ToL_tree.pdf
phyla_ToL_tree.html
phyla_ToL_tree.pdf
phyla_ToL_tree.png
```
Each `image_set` is approximately 30GB and contains 100 thousand images, each named `<treeoflife_id>.jpg`.
We cannot reproduce the `iNat21` data, but step-by-step [***reproduction instructions***](https://github.com/Imageomics/bioclip/blob/main/docs/imageomics/treeoflife10m.md#reproduce-treeoflife-10m) are provided on the BioCLIP GitHub repository to recreate the proper webdataset structure. This process will produce a collection of files named `shard-######.tar` in a `train`, `val`, and `train_small` folder with which to work.
Inside each shard is a collection of images (named `<treeoflife_id>.jpg`), each of which has the following associated files:
```
<treeoflife_id>.com.txt
<treeoflife_id>.common_name.txt
<treeoflife_id>.jpg
<treeoflife_id>.sci.txt
<treeoflife_id>.sci_com.txt
<treeoflife_id>.scientific_name.txt
<treeoflife_id>.taxon.txt
<treeoflife_id>.taxonTag.txt
<treeoflife_id>.taxonTag_com.txt
<treeoflife_id>.taxon_com.txt
<treeoflife_id>.taxonomic_name.txt
```
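For working with the reproduced shards, a minimal sketch along the following lines may be useful (this is not part of the official tooling; the shard path and the particular text keys chosen are assumptions, and any of the text files listed above can be substituted):
```python
import webdataset as wds

# Iterate one reproduced shard; the key after the first "." in each filename
# (e.g. "taxonomic_name.txt") selects the corresponding per-sample text file.
dataset = (
    wds.WebDataset("train/shard-000000.tar")
    .decode("pil")  # decode the .jpg entries to PIL images
    .to_tuple("jpg", "taxonomic_name.txt", "common_name.txt")
)

for image, taxonomic_name, common_name in dataset:
    print(image.size, taxonomic_name, common_name)
    break
```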
### Data Instances
This dataset is a collection of images with associated text. The text matched to images contains both [Linnaean taxonomy](https://www.britannica.com/science/taxonomy/The-objectives-of-biological-classification) (kingdom through species) for the particular subject of the image and its common (or vernacular) name where available. There are 8,455,243 images with full taxonomic labels.
### Data Fields
#### Metadata Files
`catalog.csv`: contains the following metadata associated with each image in the dataset
- `split`: indicates which data split the image belongs to (`train`, `val`, or `train_small`), `train_small` is a duplicated subset of `train` and thus should not be included when analyzing overall stats of the dataset.
- `treeoflife_id`: unique identifier for the image in the dataset.
- `eol_content_id`: unique identifier within EOL database for images sourced from [EOL](https://eol.org). Note that EOL content IDs are not stable.
- `eol_page_id`: identifier of page from which images from EOL are sourced. Note that an image's association to a particular page ID may change with updates to the EOL (or image provider's) hierarchy. However, EOL taxon page IDs are stable.
- `bioscan_part`: indicates to which of the 113 data chunks of [BIOSCAN-1M](https://github.com/zahrag/BIOSCAN-1M#-iv-rgb-images) each image belongs. Note that there are 10K images per chunk and 8,313 in chunk #113.
- `bioscan_filename`: unique identifier within BIOSCAN-1M dataset for images sourced from [BIOSCAN-1M](https://github.com/zahrag/BIOSCAN-1M).
- `inat21_filename`: unique identifier within iNat21 dataset for images sourced from [iNat21](https://github.com/visipedia/inat_comp/blob/master/2021/README.md).
<!-- (`file_name` given in `images` of the [`train.json`](https://github.com/visipedia/inat_comp/tree/master/2021#annotation-format) `file_name` = "train/#####_Kingdom_Phylum_..._Genus_species/STRING(uuid?).jpg"). `inat21_filename` is the end of the `file_name` string. The taxa are the `cls_name`, and the number is the `cls_num` (leading 0 may be lost here).-->
- `inat21_cls_name`: `<Kingdom>_<Phylum>_<Class>_<Order>_<Family>_<Genus>_<species>` as labeled by iNaturalist.
- `inat21_cls_num`: Number assigned by iNat21 to the given species (unique identifier for that species within iNat21 dataset).
The remaining terms describe the _Linnaean taxonomy_ of the subject of the image; they are sourced as described in [Annotation Process, below](#annotation-process).
- `kingdom`: kingdom to which the subject of the image belongs (`Animalia`, `Plantae`, `Fungi`, `Chromista`, `Protozoa`, `Bacteria`, `Viridiplantae`, `Protista`, `Orthornavirae`, `Bamfordvirae`, `Archaea`, or `Shotokuvirae`). Note: this large number of kingdoms are considered in recognition of the fact that there is not agreement on merging them.
- `phylum`: phylum to which the subject of the image belongs.
- `class`: class to which the subject of the image belongs.
- `order`: order to which the subject of the image belongs.
- `family`: family to which the subject of the image belongs.
- `genus`: genus to which the subject of the image belongs.
- `species`: species to which the subject of the image belongs.
- `common`: common name associated with the subject of the image where available. Otherwise, this is the scientific name (`genus-species`), else whatever subset of the taxonomic hierarchy is available (eg., `kingdom-phylum-class-order` or `kingdom-phylum-class-order-family`). All images have a non-null entry for this column.
Note that the `species` column occasionally has entries such as "sp. ___(get ex)" with some string following. This seems to be used to indicate the species is unknown, but various specimens/images are known to be the same species. Additionally, for `species` values containing an `x` between names, this is indicative of a hybrid that is a cross of the two species listed on either side of the `x`.
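As a concrete illustration of the `split` caveat above, here is a minimal sketch (assuming `metadata/catalog.csv` has been downloaded locally) that drops the duplicated `train_small` rows before computing dataset-wide statistics:
```python
import pandas as pd

catalog = pd.read_csv("metadata/catalog.csv")

# `train_small` rows duplicate rows already present under `train`,
# so exclude them when computing overall statistics.
overall = catalog[catalog["split"] != "train_small"]

print(overall["split"].value_counts())           # images per split
print(overall["kingdom"].value_counts().head())  # coarse taxonomic distribution
```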
##### Text Types
| Text Type | Example |
| ---- | -------- |
| Common | black-billed magpie |
| Scientific | _Pica hudsonia_ |
| Taxonomic | _Animalia Chordata Aves Passeriformes Corvidae Pica hudsonia_ |
`naming/`: these files provide the taxonomic name lookup for each data source in the reconstruction of the dataset.
`species_level_taxonomy_chains.csv`: CSV with the ITIS taxonomic hierarchy, indicated as follows:
- `hierarchy_string_tsn`: string of Taxonomic Serial Numbers (TSN)* for the names of the ranks provided from highest to lowest, connected by dashes (eg., `202422-846491-660046-846497-846508-846553-954935-5549-5550`).
- `hierarchy_string_names`: string of the names of the ranks provided from highest to lowest, connected by arrows (eg., `Plantae->Biliphyta->Rhodophyta->Cyanidiophytina->Cyanidiophyceae->Cyanidiales->Cyanidiaceae->Cyanidium->Cyanidium caldarium`).
- `terminal_tsn`: Taxonomic Serial Number (TSN)* of designated species (eg., `5550`).
- `terminal_scientific_name`: scientific name (`<Genus> <species>`) of subject.
- `terminal_vernacular`: vernacular or common name(s) of the subject, multiple names are separated by commas (eg., `rockskipper`, `Highland Small Rice Rat, Páramo Colilargo`).
- `terminal_vernacular_lang`: language(s) of the vernacular name(s) provided; when there are multiple names, language is listed for each, separated by commas (eg., `English`, `English, English`, respectively for the vernacular name examples above).
- `hierarchy_string_ranks`: string of ranks provided from highest to lowest, connected by arrows (eg., `Kingdom->Subkingdom->Phylum->Subphylum->Class->Order->Family->Genus->Species`).
The remaining columns consist of the hierarchy string ranks describing the Linnaean taxonomy of the subject (as defined above), with `<Genus> <species>` filled in the `Species` column.
*ITIS assigns a Taxonomic Serial Number (TSN) to each taxonomic rank; this is a stable and unique ID.
`taxon.tab`: Tab-delimited file with taxonomic information for EOL images based on EOL page IDs.
- `taxonID`: unique identifier for the file.
- `source`: often `<source>:<id>` where the source corresponds to the domain of the `furtherInformationURL`. The ID likely corresponds to an ID at the source.
- `furtherInformationURL`: URL with more information on the indicated taxon.
- `acceptedNameUsageID`: `taxonID` for the name accepted to represent this entry. Less than a third of these are non-null.
- `parentNameUsageID`: `taxonID` of taxonomic rank above the indicated `taxonRank` in the hierarchy (eg., the `taxonID` of the genus `Atadinus` for the `Atadinus fallax (Boiss.) Hauenschild` entry).
- `scientificName`: scientific name associated with the EOL page (`<canonicalName> <authority>`, authority as available).
- `taxonRank`: lowest rank of the taxonomic tree indicated (eg., `genus` or `species`), occasionally not indicated, even for accepted names.
- `taxonomicStatus`: whether the name is accepted by EOL or not (`accepted` or `not accepted`, correspond to existence of non-null `eolID` or `acceptedNameUsageID` entry, respectively).
- `datasetID`: generally corresponds to the source identified in `source` column.
- `canonicalName`: the name(s) associated with the `taxonRank` (eg., `<Genus> <species>` for species).
- `authority`: usually name of person who assigned the name, with the year as available.
- `eolID`: the EOL page ID (only non-null when `taxonomicStatus` is accepted by EOL).
- `Landmark`: numeric values, meaning unknown, mostly null.
- `higherClassification`: labeling in the EOL Dynamic Hierarchy above the `taxonRank` (eg., `Life|Cellular Organisms|Eukaryota|Opisthokonta|Metazoa|Bilateria|Protostomia|Ecdysozoa|Arthropoda|Pancrustacea|Hexapoda|Insecta|Pterygota|Neoptera|Endopterygota|Coleoptera|Adephaga|Carabidae|Paussus`).
`licenses.csv`: File with license, source, and copyright holder associated with each image from EOL listed in `catalog.csv`; `treeoflife_id` is the shared unique identifier to link the two files. Columns are:
- `treeoflife_id`, `eol_content_id`, and `eol_page_id` are as defined above.
- `md5`: MD5 hash of the image.
- `medium_source_url`: URL pointing to source of image.
- `eol_full_size_copy_url`: URL to access the full-sized image; this is the URL from which the image was downloaded for this dataset (see [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) for more information on this process).
- `license_name`: name of license attached to the image (eg., `cc-by`).
- `copyright_owner`: copyright holder for the image, filled with `not provided` if no copyright owner was provided.
- `license_link`: URL to the listed license, left null in the case that `License Name` is `No known copyright restrictions`.
- `title`: title provided for the image, filled with `not provided` if no title was provided.
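For attribution workflows, a minimal sketch (the local file paths are assumptions) of joining this license information onto the catalog via the shared `treeoflife_id` key:
```python
import pandas as pd

catalog = pd.read_csv("metadata/catalog.csv")
licenses = pd.read_csv("metadata/licenses.csv")

# Keep only the license columns to avoid duplicating the shared EOL ID columns,
# then join on the shared unique identifier.
license_cols = licenses[["treeoflife_id", "license_name", "copyright_owner", "license_link"]]
eol_images = catalog.merge(license_cols, on="treeoflife_id", how="inner")

print(eol_images[["treeoflife_id", "license_name", "copyright_owner"]].head())
```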
### Data Splits
As noted above, the `split` column of `catalog.csv` indicates to which split each image belongs. Note that `train_small` is a 1M-image, uniformly sampled, subset of `train` used for fine-tuned ablation training and all entries with this label are also listed with the `train` label. The `val` label is applied to images used for validation.
10 biologically-relevant datasets were used for various tests of [BioCLIP](https://huggingface.co/imageomics/bioclip) (which was trained on this dataset), they are described (briefly) and linked to below.
#### Test Sets
- [Meta-Album](https://paperswithcode.com/dataset/meta-album): Specifically, we used the Plankton, Insects, Insects 2, PlantNet, Fungi, PlantVillage, Medicinal Leaf, and PlantDoc datasets from Set-0 through Set-2 (Set-3 had not yet been released as of our publication/evaluation in Nov. 2023).
- [Birds 525](https://www.kaggle.com/datasets/gpiosenka/100-bird-species): We evaluated on the 2,625 test images provided with the dataset.
- [Rare Species](https://huggingface.co/datasets/imageomics/rare-species): A new dataset we curated for the purpose of testing this model and to contribute to the ML for Conservation community. It consists of 400 species labeled Near Threatened through Extinct in the Wild by the [IUCN Red List](https://www.iucnredlist.org/), with 30 images per species. For more information, see our dataset, [Rare Species](https://huggingface.co/datasets/imageomics/rare-species).
For more information about the contents of these datasets, see Table 2 and associated sections of [our paper](https://doi.org/10.48550/arXiv.2311.18803).
## Dataset Creation
### Curation Rationale
Previously, the largest ML-ready biology image dataset was [iNat21](https://github.com/visipedia/inat_comp/tree/master/2021), which consists of 2.7M images of 10K species. This is significant breadth when comparing to popular general-domain datasets, such as [ImageNet-1K](https://huggingface.co/datasets/imagenet-1k); 10K species are rather limited when considering the vast scope of biology. For context, in 2022, [The International Union for Conservation of Nature (IUCN)](https://www.iucnredlist.org/) reported over 2M total described species, with over 10K distinct species of birds and reptiles alone. Thus, the lesser species diversity of iNat21 limits its potential for pre-training a foundation model for the entire tree of life.
With this focus on species diversity and the need for high-quality images of biological organisms, we looked to the [Encyclopedia of Life Project (EOL)](https://eol.org/). EOL is an image aggregator that collaborates with a variety of institutions to source and label millions of images. After downloading 6.6M images from EOL, we were able to expand our dataset to cover an additional 440K taxa.
Insects (of the class Insecta with 1M+ species), birds (of the class Aves with 10K+ species) and reptiles (of the class Reptilia with 10K+ species) are examples of highly diverse subtrees with many more species than other taxonomic classes. This imbalance among subtrees in the tree of life presents challenges in training a foundation model that can recognize extremely fine-grained visual representations of these classes. To help address this challenge for insects, we incorporated [BIOSCAN-1M](https://zenodo.org/doi/10.5281/zenodo.8030064), a recent dataset of 1M expert-labeled lab images of insects, covering 494 different families. The added variety of lab images, rather than in situ images (as in iNat21), further diversifies the _image_ distribution of TreeOfLife-10M.
Overall, this dataset contains approximately 454K unique taxonomic labels of the more than 2M recorded by [IUCN](https://www.iucnredlist.org/) in 2022. To the best of our knowledge, this is still the most diverse and largest such ML-ready dataset available, hence our curation.
### Source Data
[iNat21 data](https://github.com/visipedia/inat_comp/tree/master/2021#data) was downloaded, unzipped, and our compilation scripts pointed to the training split. As per their [terms of use](https://github.com/visipedia/inat_comp/tree/master/2021#terms-of-use), the data is catalogued, but not reproduced, here.
[BIOSCAN-1M](https://zenodo.org/doi/10.5281/zenodo.8030064): Collection of insect images hand-labeled by experts.
[EOL](https://eol.org/): Biological image aggregator.
#### Initial Data Collection and Normalization
[iNat21 training data](https://github.com/visipedia/inat_comp/tree/master/2021#data) and [BIOSCAN-1M data](https://zenodo.org/doi/10.5281/zenodo.8030064) were downloaded and assigned `treeoflife_id`s for unique identification within the TreeOfLife-10M dataset. The iNat21 training data is formatted into a webdataset format prior to `treeoflife_id` assignments, since this is also used for a comparison to [BioCLIP](https://huggingface.co/imageomics/bioclip) as trained on the full TreeOfLife-10M dataset. For more detailed information on this process, please see [How to Create TreeOfLife-10M](https://github.com/Imageomics/bioclip/tree/main/docs/imageomics/treeoflife10m.md#how-to-create-treeoflife-10m) in the BioCLIP GitHub repo.
First, media manifest data was sourced from EOL using [this script](https://github.com/Imageomics/bioclip/blob/main/scripts/get_media_manifest.py). The media manifest includes EOL content and page IDs from which to connect the taxonomic information, along with source URLs and licensing information. The `EOL Full-Size Copy URL` was then used to download all the images, naming each `<eol_content_id>_<eol_page_id>_eol_full-size-copy.jpg` for reference back to the media manifest. [Scripts](https://github.com/Imageomics/bioclip/tree/main/scripts/evobio10m) to perform these downloads and [instructions](https://github.com/Imageomics/bioclip/blob/main/docs/imageomics/treeoflife10m.md) can be found in the [BioCLIP GitHub repository](https://github.com/Imageomics/bioclip).
See [below](#Annotation-Process) for details of annotation following data collection.
Species selected for the Rare Species dataset were removed from this dataset (see [Initial Data Collection and Normalization of Rare Species](https://huggingface.co/datasets/imageomics/rare-species#initial-data-collection-and-normalization)).
### Annotations
#### Annotation Process
Annotations were primarily sourced from image source providers.
For iNat21 and BIOSCAN-1M images, the labels provided by those sources were used.
- iNat21: iNaturalist English vernacular names and taxa were used.
- BIOSCAN-1M: Linnaean taxonomic rankings were applied as labeled in the [BIOSCAN-1M dataset](https://zenodo.org/doi/10.5281/zenodo.8030064), which is all hand-labeled by experts. Note that the dataset provides other ranks (not considered in the 7-rank Linnaean taxonomy), such as tribe, which were not included in this dataset.
For images from EOL, the scientific name (`genus-species`) was used to look up the higher-order taxa from the following sources, in order: BIOSCAN-1M metadata and EOL aggregate datasets (described below); the results were then matched against the ITIS hierarchy to standardize the higher-order taxa. A small number of these are [homonyms](https://en.wikipedia.org/wiki/Homonym_(biology)), for which a list was generated to ensure proper matching of higher-order taxa (manual homonym resolution is in class `NameUpgrader` in the [naming script](https://github.com/Imageomics/bioclip/blob/main/src/imageomics/naming.py)). After these resources were exhausted, any remaining unresolved taxa were fed through the [Global Names Resolver (GNR) API](https://resolver.globalnames.org/api). Despite our efforts, we discovered after training that some hemihomonyms were mislabeled at higher-level taxa (family up to kingdom). This impacts approximately 0.1-0.2% of our data. We are in the process of developing a more robust solution to taxonomic labeling which will also account for re-naming (as is currently in process for many bird species). We intend to release a patch alongside the solution.
This process allowed us to reach full taxa labels for 84% of images. To put this in perspective, 10% of images in TreeOfLife-10M are only labeled to the `family` level (no `genus-species` designations) as part of BIOSCAN-1M, so this places a cap on the taxa coverage. Taxonomic ranking also is not entirely standardized and agreed-upon throughout the biology community, so most gaps are more indicative of lack of consensus on label than missing information.
#### Who are the annotators?
Samuel Stevens, Jiaman Wu, Matthew J. Thompson, and Elizabeth G. Campolongo
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
The hope is that this dataset could be helpful in conservation efforts or biodiversity research.
### Discussion of Biases and Other Known Limitations
This dataset is imbalanced in its representation of various species with the greatest representation available for those in the phyla _Arthropoda_, _Tracheophyta_, and _Chordata_ (see our [interactive treemap from phylum to family](https://huggingface.co/imageomics/treeoflife-10m/raw/main/phyla_ToL_tree.html) for further details of this distribution). This class imbalance is both a result of availability of images and actual variance in class diversity. Additionally, as noted above, there are 2M+ estimated species according to [IUCN](https://www.iucnredlist.org/), so overall taxonomic coverage is still limited (though it far surpasses the species diversity of other well-known animal datasets).
Not all data is labeled to the species level, and some entries are more or less precise. For instance, the `species` column occasionally has entries such as "sp. ___(get ex)" with some string following. This seems to be used to indicate the species is unknown, but various specimens/images are known to be the same species. Additionally, for `species` values containing an `x` between names, this is indicative of a hybrid that is a cross of the two species listed on either side of the `x`. Due to the additional information provided about the higher order taxa, these labeling anomalies still present valuable information providing links between these classes.
As stated above, 84% of images have full taxa labels. However, due to the incomplete standardization and agreement on the taxonomic hierarchy throughout the biology community, most gaps are more indicative of lack of consensus on label than missing information.
Note that BIOSCAN-1M’s label granularity may still be limited for insects, as 98.6% of BIOSCAN-1M’s images are labeled to the family level but only 22.5% and 7.5% of the images have genus or species indicated, respectively. Lack of label granularity is an inherent challenge.
## Additional Information
### Dataset Curators
Samuel Stevens, Jiaman Wu, Matthew J. Thompson, and Elizabeth G. Campolongo
### Licensing Information
The data (images and text) contain a variety of licensing restrictions mostly within the CC family. Each image and text in this dataset is provided under the least restrictive terms allowed by its licensing requirements as provided to us (i.e., we impose no additional restrictions past those specified by licenses in the license file).
Please see the [iNat21 terms of use](https://github.com/visipedia/inat_comp/tree/master/2021#terms-of-use) for full information on use of their images.
All BIOSCAN-1M images are licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
EOL images contain a variety of licenses ranging from [CC0](https://creativecommons.org/publicdomain/zero/1.0/) to [CC BY-NC-SA](https://creativecommons.org/licenses/by-nc-sa/4.0/).
For license and citation information by image, see our [license file](https://huggingface.co/datasets/imageomics/treeoflife-10m/blob/main/metadata/licenses.csv).
**Note**: Due to licensing restrictions discovered after training, approximately 30K of the images used to train BioCLIP (about 0.3%) cannot be republished here and links to original content are no longer available. Overall, 14 families that were included in training BioCLIP are not republished in this dataset, a loss of 0.38% of the taxa diversity.
This dataset (the compilation) has been marked as dedicated to the public domain by applying the [CC0 Public Domain Waiver](https://creativecommons.org/publicdomain/zero/1.0/). However, images may be licensed under different terms (as noted above).
### Citation Information
```
@dataset{treeoflife_10m,
author = {Samuel Stevens and Jiaman Wu and Matthew J Thompson and Elizabeth G Campolongo and Chan Hee Song and David Edward Carlyn and Li Dong and Wasila M Dahdul and Charles Stewart and Tanya Berger-Wolf and Wei-Lun Chao and Yu Su},
title = {TreeOfLife-10M},
year = {2023},
url = {https://huggingface.co/datasets/imageomics/TreeOfLife-10M},
doi = {10.57967/hf/1972},
publisher = {Hugging Face}
}
```
Please also cite our paper:
```
@inproceedings{stevens2024bioclip,
title = {{B}io{CLIP}: A Vision Foundation Model for the Tree of Life},
author = {Samuel Stevens and Jiaman Wu and Matthew J Thompson and Elizabeth G Campolongo and Chan Hee Song and David Edward Carlyn and Li Dong and Wasila M Dahdul and Charles Stewart and Tanya Berger-Wolf and Wei-Lun Chao and Yu Su},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2024},
pages = {19412-19424}
}
```
Please be sure to also cite the original data sources and all constituent parts as appropriate.
- iNat21:
```
@misc{inat2021,
author={Van Horn, Grant and Mac Aodha, Oisin},
title={iNat Challenge 2021 - FGVC8},
publisher={Kaggle},
year={2021},
url={https://kaggle.com/competitions/inaturalist-2021}
}
```
- BIOSCAN-1M:
```
@inproceedings{gharaee2023step,
title={A Step Towards Worldwide Biodiversity Assessment: The {BIOSCAN-1M} Insect Dataset},
booktitle = {Advances in Neural Information Processing Systems ({NeurIPS}) Datasets \& Benchmarks Track},
author={Gharaee, Z. and Gong, Z. and Pellegrino, N. and Zarubiieva, I. and Haurum, J. B. and Lowe, S. C. and McKeown, J. T. A. and Ho, C. Y. and McLeod, J. and Wei, Y. C. and Agda, J. and Ratnasingham, S. and Steinke, D. and Chang, A. X. and Taylor, G. W. and Fieguth, P.},
year={2023},
}
```
- EOL: Encyclopedia of Life. Available from http://eol.org. Accessed 29 July 2023.
For license and citation information by image, see our [license file](https://huggingface.co/datasets/imageomics/treeoflife-10m/blob/main/metadata/licenses.csv).
- ITIS: Retrieved July 20, 2023, from the Integrated Taxonomic Information System (ITIS) on-line database, www.itis.gov, CC0
https://doi.org/10.5066/F7KH0KBK
### Contributions
The [Imageomics Institute](https://imageomics.org) is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) program under [Award #2118240](https://www.nsf.gov/awardsearch/showAward?AWD_ID=2118240) (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
|
boettiger-lab/ca-30x30 | boettiger-lab | "2025-02-22T22:00:54Z" | 23,291 | 0 | [
"license:bsd-2-clause",
"region:us"
] | null | "2024-10-03T17:43:10Z" | ---
license: bsd-2-clause
---
|
eriktks/conll2003 | eriktks | "2024-01-18T09:34:17Z" | 23,213 | 134 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|other-reuters-corpus",
"language:en",
"license:other",
"size_categories:10K<n<100K",
"region:us"
] | [
"token-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-reuters-corpus
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
paperswithcode_id: conll-2003
pretty_name: CoNLL-2003
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': '"'
'1': ''''''
'2': '#'
'3': $
'4': (
'5': )
'6': ','
'7': .
'8': ':'
'9': '``'
'10': CC
'11': CD
'12': DT
'13': EX
'14': FW
'15': IN
'16': JJ
'17': JJR
'18': JJS
'19': LS
'20': MD
'21': NN
'22': NNP
'23': NNPS
'24': NNS
'25': NN|SYM
'26': PDT
'27': POS
'28': PRP
'29': PRP$
'30': RB
'31': RBR
'32': RBS
'33': RP
'34': SYM
'35': TO
'36': UH
'37': VB
'38': VBD
'39': VBG
'40': VBN
'41': VBP
'42': VBZ
'43': WDT
'44': WP
'45': WP$
'46': WRB
- name: chunk_tags
sequence:
class_label:
names:
'0': O
'1': B-ADJP
'2': I-ADJP
'3': B-ADVP
'4': I-ADVP
'5': B-CONJP
'6': I-CONJP
'7': B-INTJ
'8': I-INTJ
'9': B-LST
'10': I-LST
'11': B-NP
'12': I-NP
'13': B-PP
'14': I-PP
'15': B-PRT
'16': I-PRT
'17': B-SBAR
'18': I-SBAR
'19': B-UCP
'20': I-UCP
'21': B-VP
'22': I-VP
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
config_name: conll2003
splits:
- name: train
num_bytes: 6931345
num_examples: 14041
- name: validation
num_bytes: 1739223
num_examples: 3250
- name: test
num_bytes: 1582054
num_examples: 3453
download_size: 982975
dataset_size: 10252622
train-eval-index:
- config: conll2003
task: token-classification
task_id: entity_extraction
splits:
train_split: train
eval_split: test
col_mapping:
tokens: tokens
ner_tags: tags
metrics:
- type: seqeval
name: seqeval
---
# Dataset Card for "conll2003"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.aclweb.org/anthology/W03-0419/](https://www.aclweb.org/anthology/W03-0419/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.85 MB
- **Size of the generated dataset:** 10.26 MB
- **Total amount of disk used:** 15.11 MB
### Dataset Summary
The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on
four types of named entities: persons, locations, organizations and names of miscellaneous entities that do
not belong to the previous three groups.
The CoNLL-2003 shared task data files contain four columns separated by a single space. Each word has been put on
a separate line and there is an empty line after each sentence. The first item on each line is a word, the second
a part-of-speech (POS) tag, the third a syntactic chunk tag and the fourth the named entity tag. The chunk tags
and the named entity tags have the format I-TYPE which means that the word is inside a phrase of type TYPE. Only
if two phrases of the same type immediately follow each other, the first word of the second phrase will have tag
B-TYPE to show that it starts a new phrase. A word with tag O is not part of a phrase. Note the dataset uses IOB2
tagging scheme, whereas the original dataset uses IOB1.
For more details see https://www.clips.uantwerpen.be/conll2003/ner/ and https://www.aclweb.org/anthology/W03-0419
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### conll2003
- **Size of downloaded dataset files:** 4.85 MB
- **Size of the generated dataset:** 10.26 MB
- **Total amount of disk used:** 15.11 MB
An example of 'train' looks as follows.
```
{
"chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0],
"id": "0",
"ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7],
"tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."]
}
```
The original data files have `-DOCSTART-` lines used to separate documents, but these lines are removed here.
Indeed `-DOCSTART-` is a special line that acts as a boundary between two different documents, and it is filtered out in this implementation.
### Data Fields
The data fields are the same among all splits.
#### conll2003
- `id`: a `string` feature.
- `tokens`: a `list` of `string` features.
- `pos_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'"': 0, "''": 1, '#': 2, '$': 3, '(': 4, ')': 5, ',': 6, '.': 7, ':': 8, '``': 9, 'CC': 10, 'CD': 11, 'DT': 12,
'EX': 13, 'FW': 14, 'IN': 15, 'JJ': 16, 'JJR': 17, 'JJS': 18, 'LS': 19, 'MD': 20, 'NN': 21, 'NNP': 22, 'NNPS': 23,
'NNS': 24, 'NN|SYM': 25, 'PDT': 26, 'POS': 27, 'PRP': 28, 'PRP$': 29, 'RB': 30, 'RBR': 31, 'RBS': 32, 'RP': 33,
'SYM': 34, 'TO': 35, 'UH': 36, 'VB': 37, 'VBD': 38, 'VBG': 39, 'VBN': 40, 'VBP': 41, 'VBZ': 42, 'WDT': 43,
'WP': 44, 'WP$': 45, 'WRB': 46}
```
- `chunk_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'O': 0, 'B-ADJP': 1, 'I-ADJP': 2, 'B-ADVP': 3, 'I-ADVP': 4, 'B-CONJP': 5, 'I-CONJP': 6, 'B-INTJ': 7, 'I-INTJ': 8,
'B-LST': 9, 'I-LST': 10, 'B-NP': 11, 'I-NP': 12, 'B-PP': 13, 'I-PP': 14, 'B-PRT': 15, 'I-PRT': 16, 'B-SBAR': 17,
'I-SBAR': 18, 'B-UCP': 19, 'I-UCP': 20, 'B-VP': 21, 'I-VP': 22}
```
- `ner_tags`: a `list` of classification labels (`int`). Full tagset with indices:
```python
{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6, 'B-MISC': 7, 'I-MISC': 8}
```
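A minimal sketch of recovering these string labels from the integer IDs via the dataset's `ClassLabel` features (depending on your `datasets` version, script-based loading may additionally require `trust_remote_code=True`):
```python
from datasets import load_dataset

ds = load_dataset("eriktks/conll2003", split="train")

# `ner_tags` is a Sequence of ClassLabel; int2str maps integer IDs to tag names.
ner_labels = ds.features["ner_tags"].feature

example = ds[0]
print(list(zip(example["tokens"], [ner_labels.int2str(t) for t in example["ner_tags"]])))
```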
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|conll2003|14041| 3250|3453|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
From the [CoNLL2003 shared task](https://www.clips.uantwerpen.be/conll2003/ner/) page:
> The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.
The copyrights are defined below, from the [Reuters Corpus page](https://trec.nist.gov/data/reuters/reuters.html):
> The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
>
> [Organizational agreement](https://trec.nist.gov/data/reuters/org_appl_reuters_v4.html)
>
> This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
>
> [Individual agreement](https://trec.nist.gov/data/reuters/ind_appl_reuters_v4.html)
>
> This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.
### Citation Information
```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F. and
De Meulder, Fien",
booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
year = "2003",
url = "https://www.aclweb.org/anthology/W03-0419",
pages = "142--147",
}
```
### Contributions
Thanks to [@jplu](https://github.com/jplu), [@vblagoje](https://github.com/vblagoje), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
japanese-asr/whisper_transcriptions.mls.wer_10.0.vectorized | japanese-asr | "2024-09-15T01:35:08Z" | 23,167 | 1 | [
"size_categories:1M<n<10M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-09-11T12:32:36Z" | ---
dataset_info:
- config_name: subset_0
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95550219596
num_examples: 62101
download_size: 43092578892
dataset_size: 95550219596
- config_name: subset_1
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95892233884
num_examples: 62323
download_size: 43217224829
dataset_size: 95892233884
- config_name: subset_10
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95659534424
num_examples: 62172
download_size: 43197712726
dataset_size: 95659534424
- config_name: subset_100
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95570234896
num_examples: 62114
download_size: 43084233453
dataset_size: 95570234896
- config_name: subset_101
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95741000524
num_examples: 62225
download_size: 43183665345
dataset_size: 95741000524
- config_name: subset_102
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95732487892
num_examples: 62219
download_size: 43229537725
dataset_size: 95732487892
- config_name: subset_103
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95776664816
num_examples: 62248
download_size: 43187441638
dataset_size: 95776664816
- config_name: subset_104
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95930500816
num_examples: 62348
download_size: 43294625977
dataset_size: 95930500816
- config_name: subset_105
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95836868972
num_examples: 62287
download_size: 43251807028
dataset_size: 95836868972
- config_name: subset_106
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95940716900
num_examples: 62355
download_size: 43289304103
dataset_size: 95940716900
- config_name: subset_107
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95782476488
num_examples: 62252
download_size: 43209137820
dataset_size: 95782476488
- config_name: subset_108
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 96007104792
num_examples: 62398
download_size: 43221018658
dataset_size: 96007104792
- config_name: subset_109
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95764776944
num_examples: 62240
download_size: 43162176171
dataset_size: 95764776944
- config_name: subset_11
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95749669360
num_examples: 62230
download_size: 43193067430
dataset_size: 95749669360
- config_name: subset_110
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95826353540
num_examples: 62281
download_size: 43217482451
dataset_size: 95826353540
- config_name: subset_111
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95910493660
num_examples: 62335
download_size: 43268379463
dataset_size: 95910493660
- config_name: subset_112
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95782539616
num_examples: 62252
download_size: 43198507530
dataset_size: 95782539616
- config_name: subset_113
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95496693376
num_examples: 62066
download_size: 43106662052
dataset_size: 95496693376
- config_name: subset_114
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16920876
num_examples: 11
download_size: 7573002
dataset_size: 16920876
- config_name: subset_115
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95639177564
num_examples: 62159
download_size: 43180784518
dataset_size: 95639177564
- config_name: subset_116
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95625267448
num_examples: 62150
download_size: 43124129761
dataset_size: 95625267448
- config_name: subset_117
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95545048296
num_examples: 62098
download_size: 43082968259
dataset_size: 95545048296
- config_name: subset_118
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95866827908
num_examples: 62307
download_size: 43167164098
dataset_size: 95866827908
- config_name: subset_119
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 96068332252
num_examples: 62437
download_size: 43339136980
dataset_size: 96068332252
- config_name: subset_12
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95728691164
num_examples: 62217
download_size: 43198747627
dataset_size: 95728691164
- config_name: subset_120
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95722632700
num_examples: 62213
download_size: 43167373358
dataset_size: 95722632700
- config_name: subset_121
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95732191100
num_examples: 62219
download_size: 43221505796
dataset_size: 95732191100
- config_name: subset_122
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95699024432
num_examples: 62198
download_size: 43219580053
dataset_size: 95699024432
- config_name: subset_123
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95713523564
num_examples: 62207
download_size: 43177149081
dataset_size: 95713523564
- config_name: subset_124
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95463256840
num_examples: 62044
download_size: 43081995426
dataset_size: 95463256840
- config_name: subset_125
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95597812312
num_examples: 62132
download_size: 43093919552
dataset_size: 95597812312
- config_name: subset_126
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95798796016
num_examples: 62262
download_size: 43254288601
dataset_size: 95798796016
- config_name: subset_127
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95674872576
num_examples: 62182
download_size: 43251503801
dataset_size: 95674872576
- config_name: subset_128
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95741950380
num_examples: 62225
download_size: 43150675085
dataset_size: 95741950380
- config_name: subset_129
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95902821264
num_examples: 62330
download_size: 43266797081
dataset_size: 95902821264
- config_name: subset_13
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95706484544
num_examples: 62202
download_size: 43194357797
dataset_size: 95706484544
- config_name: subset_130
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13845812
num_examples: 9
download_size: 6597728
dataset_size: 13845812
- config_name: subset_131
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95385820008
num_examples: 61994
download_size: 43049793791
dataset_size: 95385820008
- config_name: subset_132
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95718111696
num_examples: 62210
download_size: 43160367467
dataset_size: 95718111696
- config_name: subset_133
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95642427284
num_examples: 62161
download_size: 43145455128
dataset_size: 95642427284
- config_name: subset_134
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95783162736
num_examples: 62252
download_size: 43157288094
dataset_size: 95783162736
- config_name: subset_135
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95905643680
num_examples: 62332
download_size: 43211878248
dataset_size: 95905643680
- config_name: subset_136
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95602572980
num_examples: 62135
download_size: 43148250609
dataset_size: 95602572980
- config_name: subset_137
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95742569912
num_examples: 62226
download_size: 43196126465
dataset_size: 95742569912
- config_name: subset_138
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95784277468
num_examples: 62253
download_size: 43213036863
dataset_size: 95784277468
- config_name: subset_14
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95484035440
num_examples: 62058
download_size: 43038787620
dataset_size: 95484035440
- config_name: subset_15
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95847923004
num_examples: 62295
download_size: 43269622880
dataset_size: 95847923004
- config_name: subset_16
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143676345616
num_examples: 93380
download_size: 64763101794
dataset_size: 143676345616
- config_name: subset_17
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143218562076
num_examples: 93081
download_size: 64543519703
dataset_size: 143218562076
- config_name: subset_18
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1538508
num_examples: 1
download_size: 888657
dataset_size: 1538508
- config_name: subset_19
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143234313008
num_examples: 93092
download_size: 64590945738
dataset_size: 143234313008
- config_name: subset_2
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95692976304
num_examples: 62194
download_size: 43156432229
dataset_size: 95692976304
- config_name: subset_20
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143387469416
num_examples: 93192
download_size: 64657130955
dataset_size: 143387469416
- config_name: subset_21
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143761745188
num_examples: 93435
download_size: 64848639452
dataset_size: 143761745188
- config_name: subset_22
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143510788288
num_examples: 93272
download_size: 64664207735
dataset_size: 143510788288
- config_name: subset_23
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143745597332
num_examples: 93425
download_size: 64881327829
dataset_size: 143745597332
- config_name: subset_24
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143524148912
num_examples: 93280
download_size: 64658212505
dataset_size: 143524148912
- config_name: subset_25
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143893466228
num_examples: 93521
download_size: 64887011756
dataset_size: 143893466228
- config_name: subset_26
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143679361468
num_examples: 93381
download_size: 64845399473
dataset_size: 143679361468
- config_name: subset_27
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143578388120
num_examples: 93316
download_size: 64733082218
dataset_size: 143578388120
- config_name: subset_28
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143336154232
num_examples: 93158
download_size: 64663766459
dataset_size: 143336154232
- config_name: subset_29
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7691452
num_examples: 5
download_size: 3459998
dataset_size: 7691452
- config_name: subset_3
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95679747492
num_examples: 62185
download_size: 43162138038
dataset_size: 95679747492
- config_name: subset_30
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143445434128
num_examples: 93230
download_size: 64632174781
dataset_size: 143445434128
- config_name: subset_31
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143557489496
num_examples: 93302
download_size: 64701593443
dataset_size: 143557489496
- config_name: subset_32
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143480746600
num_examples: 93252
download_size: 64739797925
dataset_size: 143480746600
- config_name: subset_33
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143804688340
num_examples: 93463
download_size: 64883427549
dataset_size: 143804688340
- config_name: subset_34
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143763133852
num_examples: 93435
download_size: 64878027444
dataset_size: 143763133852
- config_name: subset_35
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143471499924
num_examples: 93247
download_size: 64668279919
dataset_size: 143471499924
- config_name: subset_36
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143578658920
num_examples: 93316
download_size: 64771308961
dataset_size: 143578658920
- config_name: subset_37
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143421843384
num_examples: 93214
download_size: 64702071071
dataset_size: 143421843384
- config_name: subset_38
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143684196348
num_examples: 93385
download_size: 64784473635
dataset_size: 143684196348
- config_name: subset_39
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143680679576
num_examples: 93382
download_size: 64742283095
dataset_size: 143680679576
- config_name: subset_4
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95688480364
num_examples: 62191
download_size: 43186075154
dataset_size: 95688480364
- config_name: subset_40
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12306560
num_examples: 8
download_size: 5824713
dataset_size: 12306560
- config_name: subset_41
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143692662464
num_examples: 93390
download_size: 64709902673
dataset_size: 143692662464
- config_name: subset_42
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143203990624
num_examples: 93072
download_size: 64613043749
dataset_size: 143203990624
- config_name: subset_43
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143423685888
num_examples: 93216
download_size: 64706208398
dataset_size: 143423685888
- config_name: subset_44
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143599898184
num_examples: 93330
download_size: 64731002218
dataset_size: 143599898184
- config_name: subset_45
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143523564744
num_examples: 93280
download_size: 64697514930
dataset_size: 143523564744
- config_name: subset_46
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143657253604
num_examples: 93367
download_size: 64832235795
dataset_size: 143657253604
- config_name: subset_47
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143635083108
num_examples: 93353
download_size: 64816566584
dataset_size: 143635083108
- config_name: subset_48
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143498198048
num_examples: 93264
download_size: 64761835832
dataset_size: 143498198048
- config_name: subset_49
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95628254736
num_examples: 62152
download_size: 43114668093
dataset_size: 95628254736
- config_name: subset_5
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95761408264
num_examples: 62238
download_size: 43098449131
dataset_size: 95761408264
- config_name: subset_50
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95710114492
num_examples: 62205
download_size: 43145722727
dataset_size: 95710114492
- config_name: subset_51
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95718176136
num_examples: 62210
download_size: 43219455050
dataset_size: 95718176136
- config_name: subset_52
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95945517480
num_examples: 62358
download_size: 43266431091
dataset_size: 95945517480
- config_name: subset_53
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95659331056
num_examples: 62172
download_size: 43159537104
dataset_size: 95659331056
- config_name: subset_54
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95733778168
num_examples: 62220
download_size: 43168252529
dataset_size: 95733778168
- config_name: subset_55
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95529164348
num_examples: 62087
download_size: 43137593293
dataset_size: 95529164348
- config_name: subset_56
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12308272
num_examples: 8
download_size: 6100142
dataset_size: 12308272
- config_name: subset_57
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95725368820
num_examples: 62215
download_size: 43172851860
dataset_size: 95725368820
- config_name: subset_58
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95675366136
num_examples: 62182
download_size: 43119589688
dataset_size: 95675366136
- config_name: subset_59
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95757684168
num_examples: 62236
download_size: 43189671985
dataset_size: 95757684168
- config_name: subset_6
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95578098092
num_examples: 62119
download_size: 43139859949
dataset_size: 95578098092
- config_name: subset_60
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95700890356
num_examples: 62199
download_size: 43168843386
dataset_size: 95700890356
- config_name: subset_61
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95730743624
num_examples: 62218
download_size: 43225423665
dataset_size: 95730743624
- config_name: subset_62
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95939249240
num_examples: 62354
download_size: 43194512512
dataset_size: 95939249240
- config_name: subset_63
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95771529340
num_examples: 62245
download_size: 43196958015
dataset_size: 95771529340
- config_name: subset_64
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95848159444
num_examples: 62295
download_size: 43215788957
dataset_size: 95848159444
- config_name: subset_65
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95882062420
num_examples: 62317
download_size: 43207070631
dataset_size: 95882062420
- config_name: subset_66
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95871828072
num_examples: 62310
download_size: 43225588983
dataset_size: 95871828072
- config_name: subset_67
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95709466896
num_examples: 62204
download_size: 43120587410
dataset_size: 95709466896
- config_name: subset_68
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95761289000
num_examples: 62238
download_size: 43168961328
dataset_size: 95761289000
- config_name: subset_69
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95724204184
num_examples: 62214
download_size: 43179130241
dataset_size: 95724204184
- config_name: subset_7
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1538156
num_examples: 1
download_size: 820398
dataset_size: 1538156
- config_name: subset_70
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95609034668
num_examples: 62139
download_size: 43117847437
dataset_size: 95609034668
- config_name: subset_71
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95387596792
num_examples: 61996
download_size: 43021151166
dataset_size: 95387596792
- config_name: subset_72
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4615244
num_examples: 3
download_size: 2452533
dataset_size: 4615244
- config_name: subset_73
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95846947264
num_examples: 62294
download_size: 43209608729
dataset_size: 95846947264
- config_name: subset_74
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95818688740
num_examples: 62275
download_size: 43205934001
dataset_size: 95818688740
- config_name: subset_75
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95775232460
num_examples: 62247
download_size: 43190536605
dataset_size: 95775232460
- config_name: subset_76
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95665581880
num_examples: 62176
download_size: 43152943289
dataset_size: 95665581880
- config_name: subset_77
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95538306996
num_examples: 62093
download_size: 43186488482
dataset_size: 95538306996
- config_name: subset_78
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95653366336
num_examples: 62168
download_size: 43143452346
dataset_size: 95653366336
- config_name: subset_79
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95676339900
num_examples: 62183
download_size: 43128557833
dataset_size: 95676339900
- config_name: subset_8
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95715343992
num_examples: 62208
download_size: 43164885461
dataset_size: 95715343992
- config_name: subset_80
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95766599296
num_examples: 62242
download_size: 43239930128
dataset_size: 95766599296
- config_name: subset_81
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143321630332
num_examples: 93149
download_size: 64626209893
dataset_size: 143321630332
- config_name: subset_82
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143413608572
num_examples: 93209
download_size: 64621538517
dataset_size: 143413608572
- config_name: subset_83
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143531418364
num_examples: 93285
download_size: 64801287818
dataset_size: 143531418364
- config_name: subset_84
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143436760360
num_examples: 93224
download_size: 64683558260
dataset_size: 143436760360
- config_name: subset_85
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143693790772
num_examples: 93391
download_size: 64884349404
dataset_size: 143693790772
- config_name: subset_86
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143485525272
num_examples: 93256
download_size: 64696999488
dataset_size: 143485525272
- config_name: subset_87
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143365510504
num_examples: 93178
download_size: 64633922570
dataset_size: 143365510504
- config_name: subset_88
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143459907372
num_examples: 93239
download_size: 64646778148
dataset_size: 143459907372
- config_name: subset_89
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143713229680
num_examples: 93404
download_size: 64788831986
dataset_size: 143713229680
- config_name: subset_9
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95562629284
num_examples: 62109
download_size: 43123134026
dataset_size: 95562629284
- config_name: subset_90
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143379593708
num_examples: 93187
download_size: 64733038170
dataset_size: 143379593708
- config_name: subset_91
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95696250128
num_examples: 62196
download_size: 43134085960
dataset_size: 95696250128
- config_name: subset_92
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95934407652
num_examples: 62351
download_size: 43224929875
dataset_size: 95934407652
- config_name: subset_93
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95753580284
num_examples: 62233
download_size: 43222236201
dataset_size: 95753580284
- config_name: subset_94
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95891874916
num_examples: 62323
download_size: 43252770071
dataset_size: 95891874916
- config_name: subset_95
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95659464728
num_examples: 62172
download_size: 43086228614
dataset_size: 95659464728
- config_name: subset_96
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95794821896
num_examples: 62260
download_size: 43179370699
dataset_size: 95794821896
- config_name: subset_97
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95622506640
num_examples: 62148
download_size: 43156846644
dataset_size: 95622506640
- config_name: subset_98
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13845628
num_examples: 9
download_size: 6713409
dataset_size: 13845628
- config_name: subset_99
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95636401316
num_examples: 62157
download_size: 43147436863
dataset_size: 95636401316
configs:
- config_name: subset_0
data_files:
- split: train
path: subset_0/train-*
- config_name: subset_1
data_files:
- split: train
path: subset_1/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_10/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_100/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_101/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_102/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_103/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_104/train-*
- config_name: subset_105
data_files:
- split: train
path: subset_105/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_106/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_107/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_108/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_109/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_11/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_110/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_111/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_112/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_113/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_114/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_115/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_116/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_117/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_118/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_119/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_12/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_120/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_121/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_122/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_123/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_124/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_125/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_126/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_127/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_128/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_129/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_13/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_130/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_131/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_132/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_133/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_134/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_135/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_136/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_137/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_138/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_14/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_15/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_16/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_17/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_18/train-*
- config_name: subset_19
data_files:
- split: train
path: subset_19/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_2/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_20/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_21/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_22/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_23/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_24/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_25/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_26/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_27/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_28/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_29/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_3/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_30/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_31/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_32/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_33/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_34/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_35/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_36/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_37/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_38/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_39/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_4/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_40/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_41/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_42/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_43/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_44/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_45/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_46/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_47/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_48/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_49/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_5/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_50/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_51/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_52/train-*
- config_name: subset_53
data_files:
- split: train
path: subset_53/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_54/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_55/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_56/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_57/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_58/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_59/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_6/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_60/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_61/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_62/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_63/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_64/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_65/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_66/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_67/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_68/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_69/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_7/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_70/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_71/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_72/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_73/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_74/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_75/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_76/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_77/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_78/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_79/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_8/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_80/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_81/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_82/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_83/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_84/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_85/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_86/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_87/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_88/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_89/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_9/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_90/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_91/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_92/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_93/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_94/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_95/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_96/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_97/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_98/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_99/train-*
---
|
bigscience/xP3mt | bigscience | "2023-05-30T15:50:57Z" | 23,083 | 24 | [
"task_categories:other",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"multilinguality:multilingual",
"language:ak",
"language:ar",
"language:as",
"language:bm",
"language:bn",
"language:ca",
"language:code",
"language:en",
"language:es",
"language:eu",
"language:fon",
"language:fr",
"language:gu",
"language:hi",
"language:id",
"language:ig",
"language:ki",
"language:kn",
"language:lg",
"language:ln",
"language:ml",
"language:mr",
"language:ne",
"language:nso",
"language:ny",
"language:or",
"language:pa",
"language:pt",
"language:rn",
"language:rw",
"language:sn",
"language:st",
"language:sw",
"language:ta",
"language:te",
"language:tn",
"language:ts",
"language:tum",
"language:tw",
"language:ur",
"language:vi",
"language:wo",
"language:xh",
"language:yo",
"language:zh",
"language:zu",
"license:apache-2.0",
"size_categories:10M<n<100M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2211.01786",
"region:us"
] | [
"other"
] | "2022-09-28T12:36:00Z" | ---
annotations_creators:
- expert-generated
- crowdsourced
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: xP3
size_categories:
- 100M<n<1B
task_categories:
- other
---
# Dataset Card for xP3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
### Dataset Summary
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used to train BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></td>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></td>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></td>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></td>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></td>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></td>
<td>Re-preprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example from the "train" split looks as follows:
```json
{
"inputs": "Oración 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\Oración 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nPregunta: ¿La oración 1 parafrasea la oración 2? ¿Si o no?",
"targets": "Sí"
}
```
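A minimal way to pull such an example with the `datasets` library is sketched below. It assumes the per-language configurations are named after the language codes listed further down (e.g. `es`), which is an assumption rather than something this card guarantees; streaming avoids downloading the full multi-gigabyte split up front.
```python
# Sketch only: the config name "es" is an assumption based on the language table below.
from datasets import load_dataset

ds = load_dataset("bigscience/xP3mt", "es", split="train", streaming=True)
example = next(iter(ds))
print(example["inputs"])
print(example["targets"])
```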
### Data Fields
The data fields are the same among all splits (a minimal preprocessing sketch follows the list):
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
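As a rough illustration of how these two fields are consumed during seq2seq fine-tuning (a sketch, not the official training setup; the `bigscience/mt0-small` checkpoint and the sequence lengths are illustrative assumptions):
```python
from transformers import AutoTokenizer

# Hypothetical preprocessing: tokenize `inputs` as the source text and
# `targets` as the labels for an encoder-decoder model.
tokenizer = AutoTokenizer.from_pretrained("bigscience/mt0-small")

def preprocess(example):
    model_inputs = tokenizer(example["inputs"], truncation=True, max_length=1024)
    labels = tokenizer(text_target=example["targets"], truncation=True, max_length=256)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```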
### Data Splits
The table below summarizes sizes per language (computed from the `merged_{lang}.jsonl` files; a minimal sketch of this computation follows the table). Because languages such as `tw` consist only of single-sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage. Prompts were machine-translated only for monolingual datasets, so languages covered exclusively by crosslingual datasets (e.g. translation) have no non-English prompts. For languages without non-English prompts, the data is equivalent to [xP3](https://huggingface.co/datasets/bigscience/xP3).
|Language|Kilobytes|%|Samples|%|Non-English prompts|
|--------|------:|-:|---:|-:|-:|
|tw|106288|0.11|265071|0.33| |
|bm|107056|0.11|265180|0.33| |
|ak|108096|0.11|265071|0.33| |
|ca|110608|0.11|271191|0.34| |
|eu|113008|0.12|281199|0.35| |
|fon|113072|0.12|265063|0.33| |
|st|114080|0.12|265063|0.33| |
|ki|115040|0.12|265180|0.33| |
|tum|116032|0.12|265063|0.33| |
|wo|122560|0.13|365063|0.46| |
|ln|126304|0.13|365060|0.46| |
|as|156256|0.16|265063|0.33| |
|or|161472|0.17|265063|0.33| |
|kn|165456|0.17|265063|0.33| |
|ml|175040|0.18|265864|0.33| |
|rn|192992|0.2|318189|0.4| |
|nso|229712|0.24|915051|1.14| |
|tn|235536|0.24|915054|1.14| |
|lg|235936|0.24|915021|1.14| |
|rw|249360|0.26|915043|1.14| |
|ts|250256|0.26|915044|1.14| |
|sn|252496|0.26|865056|1.08| |
|xh|254672|0.26|915058|1.14| |
|zu|263712|0.27|915061|1.14| |
|ny|272128|0.28|915063|1.14| |
|ig|325440|0.33|950097|1.19|✅|
|yo|339664|0.35|913021|1.14|✅|
|ne|398144|0.41|315754|0.39|✅|
|pa|529632|0.55|339210|0.42|✅|
|sw|561392|0.58|1114439|1.39|✅|
|gu|566576|0.58|347499|0.43|✅|
|mr|674000|0.69|417269|0.52|✅|
|bn|854864|0.88|428725|0.54|✅|
|ta|943440|0.97|410633|0.51|✅|
|te|1384016|1.42|573354|0.72|✅|
|ur|1944416|2.0|855756|1.07|✅|
|vi|3113184|3.2|1667306|2.08|✅|
|code|4330752|4.46|2707724|3.38| |
|hi|4469712|4.6|1543441|1.93|✅|
|id|4538768|4.67|2582272|3.22|✅|
|zh|4604112|4.74|3571636|4.46|✅|
|ar|4703968|4.84|2148970|2.68|✅|
|fr|5558912|5.72|5055942|6.31|✅|
|pt|6130016|6.31|3562772|4.45|✅|
|es|7579424|7.8|5151349|6.43|✅|
|en|39252528|40.4|32740750|40.87| |
|total|97150128|100.0|80100816|100.0|✅|
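The per-language statistics above could be recomputed roughly as follows. This is a sketch that assumes one `merged_{lang}.jsonl` file per language with one JSON record per line; the exact file layout comes from the creation scripts linked above and is not guaranteed here.
```python
import glob
import os

# Sketch only: byte sizes and sample counts per merged_{lang}.jsonl file.
stats = {}
for path in glob.glob("merged_*.jsonl"):
    lang = os.path.basename(path)[len("merged_"):-len(".jsonl")]
    with open(path, encoding="utf-8") as f:
        n_samples = sum(1 for _ in f)  # one sample per line
    stats[lang] = {"kilobytes": os.path.getsize(path) // 1024, "samples": n_samples}

total_kb = sum(s["kilobytes"] for s in stats.values())
total_samples = sum(s["samples"] for s in stats.values())
for lang, s in sorted(stats.items(), key=lambda kv: kv[1]["kilobytes"]):
    print(f"{lang}\t{s['kilobytes']} KB ({100 * s['kilobytes'] / total_kb:.2f}%)"
          f"\t{s['samples']} samples ({100 * s['samples'] / total_samples:.2f}%)")
```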
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- Word Sense disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
#### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for NLI & HumanEval)
- Natural Language Inference (NLI)
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Program Synthesis
- [HumanEval](https://huggingface.co/datasets/openai_humaneval)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. |
google/fleurs | google | "2024-08-25T05:03:32Z" | 22,937 | 269 | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"language:afr",
"language:amh",
"language:ara",
"language:asm",
"language:ast",
"language:azj",
"language:bel",
"language:ben",
"language:bos",
"language:cat",
"language:ceb",
"language:cmn",
"language:ces",
"language:cym",
"language:dan",
"language:deu",
"language:ell",
"language:eng",
"language:spa",
"language:est",
"language:fas",
"language:ful",
"language:fin",
"language:tgl",
"language:fra",
"language:gle",
"language:glg",
"language:guj",
"language:hau",
"language:heb",
"language:hin",
"language:hrv",
"language:hun",
"language:hye",
"language:ind",
"language:ibo",
"language:isl",
"language:ita",
"language:jpn",
"language:jav",
"language:kat",
"language:kam",
"language:kea",
"language:kaz",
"language:khm",
"language:kan",
"language:kor",
"language:ckb",
"language:kir",
"language:ltz",
"language:lug",
"language:lin",
"language:lao",
"language:lit",
"language:luo",
"language:lav",
"language:mri",
"language:mkd",
"language:mal",
"language:mon",
"language:mar",
"language:msa",
"language:mlt",
"language:mya",
"language:nob",
"language:npi",
"language:nld",
"language:nso",
"language:nya",
"language:oci",
"language:orm",
"language:ory",
"language:pan",
"language:pol",
"language:pus",
"language:por",
"language:ron",
"language:rus",
"language:bul",
"language:snd",
"language:slk",
"language:slv",
"language:sna",
"language:som",
"language:srp",
"language:swe",
"language:swh",
"language:tam",
"language:tel",
"language:tgk",
"language:tha",
"language:tur",
"language:ukr",
"language:umb",
"language:urd",
"language:uzb",
"language:vie",
"language:wol",
"language:xho",
"language:yor",
"language:yue",
"language:zul",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"arxiv:2205.12446",
"arxiv:2106.03193",
"region:us",
"speech-recognition"
] | [
"automatic-speech-recognition"
] | "2022-04-19T10:25:58Z" | ---
annotations_creators:
- expert-generated
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- expert-generated
language:
- afr
- amh
- ara
- asm
- ast
- azj
- bel
- ben
- bos
- cat
- ceb
- cmn
- ces
- cym
- dan
- deu
- ell
- eng
- spa
- est
- fas
- ful
- fin
- tgl
- fra
- gle
- glg
- guj
- hau
- heb
- hin
- hrv
- hun
- hye
- ind
- ibo
- isl
- ita
- jpn
- jav
- kat
- kam
- kea
- kaz
- khm
- kan
- kor
- ckb
- kir
- ltz
- lug
- lin
- lao
- lit
- luo
- lav
- mri
- mkd
- mal
- mon
- mar
- msa
- mlt
- mya
- nob
- npi
- nld
- nso
- nya
- oci
- orm
- ory
- pan
- pol
- pus
- por
- ron
- rus
- bul
- snd
- slk
- slv
- sna
- som
- srp
- swe
- swh
- tam
- tel
- tgk
- tha
- tur
- ukr
- umb
- urd
- uzb
- vie
- wol
- xho
- yor
- yue
- zul
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: 'The Cross-lingual TRansfer Evaluation of Multilingual Encoders for Speech
(XTREME-S) benchmark is a benchmark designed to evaluate speech representations
across languages, tasks, domains and data regimes. It covers 102 languages from
10+ language families, 3 different domains and 4 task families: speech recognition,
translation, classification and retrieval.'
tags:
- speech-recognition
---
# FLEURS
## Dataset Description
- **Fine-Tuning script:** [pytorch/speech-recognition](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition)
- **Paper:** [FLEURS: Few-shot Learning Evaluation of
Universal Representations of Speech](https://arxiv.org/abs/2205.12446)
- **Total amount of disk used:** ca. 350 GB
Fleurs is the speech version of the [FLoRes machine translation benchmark](https://arxiv.org/abs/2106.03193).
We use 2009 n-way parallel sentences from the FLoRes dev and devtest publicly available sets, in 102 languages.
Training sets have around 10 hours of supervision. Speakers of the train sets are different from speakers of the dev/test sets. Multilingual fine-tuning is used and the "unit error rate" (characters, signs) of all languages is averaged. Languages and results are also grouped into seven geographical areas:
- **Western Europe**: *Asturian, Bosnian, Catalan, Croatian, Danish, Dutch, English, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Irish, Italian, Kabuverdianu, Luxembourgish, Maltese, Norwegian, Occitan, Portuguese, Spanish, Swedish, Welsh*
- **Eastern Europe**: *Armenian, Belarusian, Bulgarian, Czech, Estonian, Georgian, Latvian, Lithuanian, Macedonian, Polish, Romanian, Russian, Serbian, Slovak, Slovenian, Ukrainian*
- **Central-Asia/Middle-East/North-Africa**: *Arabic, Azerbaijani, Hebrew, Kazakh, Kyrgyz, Mongolian, Pashto, Persian, Sorani-Kurdish, Tajik, Turkish, Uzbek*
- **Sub-Saharan Africa**: *Afrikaans, Amharic, Fula, Ganda, Hausa, Igbo, Kamba, Lingala, Luo, Northern-Sotho, Nyanja, Oromo, Shona, Somali, Swahili, Umbundu, Wolof, Xhosa, Yoruba, Zulu*
- **South-Asia**: *Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu*
- **South-East Asia**: *Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese*
- **CJK languages**: *Cantonese and Mandarin Chinese, Japanese, Korean*
## How to use & Supported Tasks
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi_in" for Hindi):
```python
from datasets import load_dataset
fleurs = load_dataset("google/fleurs", "hi_in", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
fleurs = load_dataset("google/fleurs", "hi_in", split="train", streaming=True)
print(next(iter(fleurs)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
Local:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
fleurs = load_dataset("google/fleurs", "hi_in", split="train")
batch_sampler = BatchSampler(RandomSampler(fleurs), batch_size=32, drop_last=False)
dataloader = DataLoader(fleurs, batch_sampler=batch_sampler)
```
Streaming:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
fleurs = load_dataset("google/fleurs", "hi_in", split="train", streaming=True)
dataloader = DataLoader(fleurs, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on FLEURS with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
Fine-tune your own Language Identification models on FLEURS with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification)
### 1. Speech Recognition (ASR)
```py
from datasets import load_dataset
fleurs_asr = load_dataset("google/fleurs", "af_za") # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_asr = load_dataset("google/fleurs", "all")
# see structure
print(fleurs_asr)
# load audio sample on the fly
audio_input = fleurs_asr["train"][0]["audio"] # first decoded audio sample
transcription = fleurs_asr["train"][0]["transcription"] # first transcription
# use `audio_input` and `transcription` to fine-tune your model for ASR
# for analyses see language groups
all_language_groups = fleurs_asr["train"].features["lang_group_id"].names
lang_group_id = fleurs_asr["train"][0]["lang_group_id"]
all_language_groups[lang_group_id]
```
### 2. Language Identification
LangID can often reduce to domain classification, but in the case of FLEURS-LangID, recordings are made in a similar setting across languages and the utterances correspond to n-way parallel sentences in the exact same domain, making this task particularly relevant for evaluating LangID. The setting is simple: FLEURS-LangID is split into train/valid/test for each language, and we create a single train/valid/test set for LangID by merging all of them.
```py
from datasets import load_dataset
fleurs_langID = load_dataset("google/fleurs", "all") # to download all data
# see structure
print(fleurs_langID)
# load audio sample on the fly
audio_input = fleurs_langID["train"][0]["audio"] # first decoded audio sample
language_class = fleurs_langID["train"][0]["lang_id"] # first id class
language = fleurs_langID["train"].features["lang_id"].names[language_class]
# use audio_input and language_class to fine-tune your model for audio classification
```
### 3. Retrieval
Retrieval provides n-way parallel speech and text data. Similar to how XTREME for text leverages Tatoeba to evaluate bitext mining, a.k.a. sentence translation retrieval, we use Retrieval to evaluate the quality of fixed-size representations of speech utterances. Our goal is to incentivize the creation of fixed-size speech encoders for speech retrieval. The system has to retrieve the English "key" utterance corresponding to the speech translation of "queries" in 15 languages. Results have to be reported on the test sets of Retrieval, whose utterances are used as queries (and keys for English). We augment the English keys with a large number of utterances to make the task more difficult.
```py
from datasets import load_dataset
fleurs_retrieval = load_dataset("google/fleurs", "af_za") # for Afrikaans
# to download all data for multi-lingual fine-tuning uncomment following line
# fleurs_retrieval = load_dataset("google/fleurs", "all")
# see structure
print(fleurs_retrieval)
# load audio sample on the fly
audio_input = fleurs_retrieval["train"][0]["audio"] # decoded audio sample
text_sample_pos = fleurs_retrieval["train"][0]["transcription"] # positive text sample
text_sample_neg = fleurs_retrieval["train"][1:20]["transcription"] # negative text samples
# use `audio_input`, `text_sample_pos`, and `text_sample_neg` to fine-tune your model for retrieval
```
Users can leverage the training (and dev) sets of FLEURS-Retrieval with a ranking loss to build better cross-lingual fixed-size representations of speech.
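As a rough illustration of that idea, the sketch below computes a simple in-batch contrastive (ranking) loss between speech and text embeddings; the encoders that would produce those embeddings are left out and are purely hypothetical, and this is not part of FLEURS or any released baseline.
```python
import torch
import torch.nn.functional as F

def in_batch_ranking_loss(speech_emb: torch.Tensor, text_emb: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    """Contrastive loss where the i-th speech utterance should retrieve the i-th text
    (and vice versa). Both inputs are (batch, dim) tensors from hypothetical encoders."""
    speech_emb = F.normalize(speech_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = speech_emb @ text_emb.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# Toy usage with random embeddings standing in for encoder outputs.
loss = in_batch_ranking_loss(torch.randn(8, 512), torch.randn(8, 512))
```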
## Dataset Structure
We show detailed information for the example configuration `af_za` of the dataset.
All other configurations have the same structure.
### Data Instances
**af_za**
- Size of downloaded dataset files: 1.47 GB
- Size of the generated dataset: 1 MB
- Total amount of disk used: 1.47 GB
An example of a data instance of the config `af_za` looks as follows:
```
{'id': 91,
'num_samples': 385920,
'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/310a663d52322700b3d3473cbc5af429bd92a23f9bc683594e70bc31232db39e/home/vaxelrod/FLEURS/oss2_obfuscated/af_za/audio/train/17797742076841560615.wav',
'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/310a663d52322700b3d3473cbc5af429bd92a23f9bc683594e70bc31232db39e/home/vaxelrod/FLEURS/oss2_obfuscated/af_za/audio/train/17797742076841560615.wav',
'array': array([ 0.0000000e+00, 0.0000000e+00, 0.0000000e+00, ...,
-1.1205673e-04, -8.4638596e-05, -1.2731552e-04], dtype=float32),
'sampling_rate': 16000},
'raw_transcription': 'Dit is nog nie huidiglik bekend watter aantygings gemaak sal word of wat owerhede na die seun gelei het nie maar jeugmisdaad-verrigtinge het in die federale hof begin',
'transcription': 'dit is nog nie huidiglik bekend watter aantygings gemaak sal word of wat owerhede na die seun gelei het nie maar jeugmisdaad-verrigtinge het in die federale hof begin',
'gender': 0,
'lang_id': 0,
'language': 'Afrikaans',
'lang_group_id': 3}
```
### Data Fields
The data fields are the same among all splits.
- **id** (int): ID of audio sample
- **num_samples** (int): Number of float values
- **path** (str): Path to the audio file
- **audio** (dict): Audio object including the loaded audio array, sampling rate and path to the audio file
- **raw_transcription** (str): The non-normalized transcription of the audio file
- **transcription** (str): Transcription of the audio file
- **gender** (int): Class id of gender
- **lang_id** (int): Class id of language
- **lang_group_id** (int): Class id of language group
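If your model expects a different sampling rate than the 16 kHz audio shipped with FLEURS, the `audio` column can be re-decoded on the fly with `datasets.Audio`; this is the generic `datasets` casting pattern rather than anything FLEURS-specific.
```python
from datasets import load_dataset, Audio

fleurs = load_dataset("google/fleurs", "af_za", split="train")

# Re-decode the audio column at 8 kHz instead of the native 16 kHz.
fleurs = fleurs.cast_column("audio", Audio(sampling_rate=8_000))

sample = fleurs[0]
print(sample["audio"]["sampling_rate"])  # 8000
print(sample["transcription"])
```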
### Data Splits
Every config has a `"train"` split containing *ca.* 1000 examples, and `"validation"` and `"test"` splits each containing *ca.* 400 examples.
## Dataset Creation
We collect between one and three recordings for each sentence (2.3 on average), and build new train-dev-test splits with 1509, 150 and 350 sentences for
train, dev and test respectively.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset is meant to encourage the development of speech technology in many more languages of the world. One of the goals is to give everyone equal access to technologies like speech recognition or speech translation, meaning better dubbing or better access to content from the internet (like podcasts, streaming or videos).
### Discussion of Biases
Most datasets have a fair distribution of gender utterances (e.g. the newly introduced FLEURS dataset). While many languages are covered from various regions of the world, the benchmark misses many languages that are all equally important. We believe technology built through FLEURS should generalize to all languages.
### Other Known Limitations
The dataset has a particular focus on read-speech because common evaluation benchmarks like CoVoST-2 or LibriSpeech evaluate on this type of speech. There is sometimes a known mismatch between performance obtained in a read-speech setting and a more noisy setting (in production for instance). Given the big progress that remains to be made on many languages, we believe better performance on FLEURS should still correlate well with actual progress made for speech understanding.
## Additional Information
All datasets are licensed under the [Creative Commons license (CC-BY)](https://creativecommons.org/licenses/).
### Citation Information
You can access the FLEURS paper at https://arxiv.org/abs/2205.12446.
Please cite the paper when referencing the FLEURS corpus as:
```
@article{fleurs2022arxiv,
title = {FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
author = {Conneau, Alexis and Ma, Min and Khanuja, Simran and Zhang, Yu and Axelrod, Vera and Dalmia, Siddharth and Riesa, Jason and Rivera, Clara and Bapna, Ankur},
journal={arXiv preprint arXiv:2205.12446},
url = {https://arxiv.org/abs/2205.12446},
  year = {2022},
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) and [@aconneau](https://github.com/aconneau) for adding this dataset.
|
nyu-visionx/Cambrian-10M | nyu-visionx | "2024-07-08T04:34:51Z" | 22,916 | 106 | [
"task_categories:visual-question-answering",
"task_categories:question-answering",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"arxiv:2406.16860",
"region:us"
] | [
"visual-question-answering",
"question-answering"
] | "2024-05-30T03:27:31Z" | ---
task_categories:
- visual-question-answering
- question-answering
language:
- en
size_categories:
- 1M<n<10M
license: apache-2.0
---
# Cambrian-10M Dataset
**Please see paper & website for more information:**
- https://cambrian-mllm.github.io/
- https://arxiv.org/abs/2406.16860
## Overview
Cambrian-10M is a comprehensive dataset designed for instruction tuning, particularly in multimodal settings involving visual interaction data. The dataset is crafted to address the scarcity of high-quality multimodal instruction-tuning data and to maintain the language abilities of multimodal large language models (LLMs).
## Data Collection
### Multimodal Data Sources
Unlike language data, multimodal instruction-tuning data is much rarer and harder to collect. To address this, we leverage existing multimodal benchmarks and datasets involving visual interaction data, such as Visual Question Answering (VQA) and Optical Character Recognition (OCR) data. This approach helps mitigate the catastrophic forgetting commonly observed when fine-tuning multimodal LLMs.
### Language-Only Instruction-Following Data
To ensure the preservation of language capabilities, we also collect a small volume of high-quality language-only instruction-following data from the community.
### Targeted Internet Data Collection Engine
We introduce a data engine designed to create large-scale, reliable, high-quality knowledge-based multimodal instruction tuning data. The engine works as follows:
1. **Field and Subfield Selection**: The engine selects a target field and subfield, such as “Physics”.
2. **Topic Identification**: An LLM like GPT-4 identifies topics within the field (e.g., “Newton’s Laws”).
3. **Reliable Source Search**: The engine searches reliable sources like Wikipedia for each topic.
4. **Text-Image Association Extraction**: The parser extracts image-caption-text tuples from the sources.
5. **Q&A Pair Generation**: The caption-text is fed to an LLM, such as GPT-3.5, to generate instruction-type Q&A pairs about the image.
These Q&A pairs, along with the images, form our VQA dataset.
### GPT Rewriting
We also incorporate recent MLLMs such as GPT-4V and GPT-4o to generate extended responses and free-form instruction tuning data. To work with the GPT-generated data, use
[gpt4v_77k](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/gpt4v_77k.jsonl) or the curated [gpt4o_60k](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/gpt4o_60k.jsonl):
- [gpt4v_77k](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/gpt4v_77k.jsonl) contains more extended responses from Cambrian-10M.
- [gpt4o_60k](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/gpt4o_60k.jsonl) contains more creative data in visual interactions.
## Cambrian-10M Composition
The Cambrian-10M dataset consists of approximately 9.784 million data points, offering a diverse range of data for various research applications. The composition of the dataset is visualized in Fig. 9.
## Cambrian-7M
We make an initial effort to study data curation. In particular, we find that the following data ratio performs best:
- **Language**: 21.00%
- **General**: 34.52%
- **OCR**: 27.22%
- **Counting**: 8.71%
- **Math**: 7.20%
- **Code**: 0.87%
- **Science**: 0.88%

## Getting Started with Cambrian Data
Before you start, ensure you have sufficient storage space to download and process the data.
Cambrian-10M contains a total of 10 million images collected from previous datasets, an internet data engine, and GPT-generated instruction tuning data. Follow these steps to get started:
1. **Download the Data Repository**
Download the data repository. Note that due to Hugging Face policy constraints, the data folder is archived into tar files. We also split the `allava` and `data_engine` data into smaller tar files because they exceed the 50 GB size limit.
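One way to do this programmatically is `huggingface_hub.snapshot_download` (a minimal sketch; the local directory name is just an example):
```python
from huggingface_hub import snapshot_download

# Downloads the full Cambrian-10M dataset repository (tar shards and json annotation files).
snapshot_download(
    repo_id="nyu-visionx/Cambrian-10M",
    repo_type="dataset",
    local_dir="Cambrian-10M",  # example target directory
)
```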
2. **Merge Tar Files**
To explore the Cambrian-10M dataset, first merge the different parts of `allava` and `data_engine` together:
```bash
python merge_tars.py
```
3. **Extract Tar Files**
Then, extract all the tar files into the current directory:
```bash
python extract.py
```
4. **Training with Cambrian**
You can train with the raw [Cambrian10M](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/Cambrian10M.jsonl) or the curated [Cambrian7M](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/resolve/main/jsons/Cambrian7M.jsonl). We recommend using
the curated [Cambrian7M with system prompt](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M/blob/main/jsons/Cambrian7M_withsystemprompt.jsonl), which also alleviates the 'answer machine' problem. |
tasksource/mmlu | tasksource | "2023-03-31T20:44:21Z" | 22,880 | 34 | [
"task_categories:text-classification",
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"task_ids:open-domain-qa",
"task_ids:closed-domain-qa",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"multi-task",
"multitask",
"mmlu",
"hendrycks_test"
] | [
"text-classification",
"multiple-choice",
"question-answering"
] | "2023-02-01T10:20:16Z" | ---
license: apache-2.0
task_categories:
- text-classification
- multiple-choice
- question-answering
task_ids:
- multiple-choice-qa
- open-domain-qa
- closed-domain-qa
language:
- en
tags:
- multi-task
- multitask
- mmlu
- hendrycks_test
pretty_name: mmlu
---
MMLU (`hendrycks_test` on Hugging Face) without auxiliary train. It is much lighter (7MB vs 162MB) and faster than the original implementation, in which auxiliary train is loaded (and duplicated!) by default for all configs, making it quite heavy.
We use this version in [tasksource](https://huggingface.co/tasksource).
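A minimal loading sketch (the config name `abstract_algebra` is one of the standard MMLU subjects and is assumed to be available under that name here):
```python
from datasets import load_dataset

# Load a single MMLU subject config; no heavy auxiliary_train split is included.
mmlu = load_dataset("tasksource/mmlu", "abstract_algebra")
print(mmlu)             # shows the available splits
print(mmlu["test"][0])  # one multiple-choice question
```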
Reference to original dataset:
Measuring Massive Multitask Language Understanding - https://github.com/hendrycks/test
```
@article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
``` |
locuslab/TOFU | locuslab | "2024-02-07T14:58:06Z" | 22,800 | 38 | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2401.06121",
"region:us",
"unlearning",
"question answering",
"TOFU",
"NLP",
"LLM"
] | [
"question-answering"
] | "2023-11-14T22:25:09Z" | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license: mit
multilinguality:
- monolingual
pretty_name: TOFU
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- unlearning
- question answering
- TOFU
- NLP
- LLM
task_categories:
- question-answering
task_ids:
- closed-domain-qa
configs:
- config_name: full
data_files: full.json
default: true
- config_name: forget01
data_files: forget01.json
- config_name: forget05
data_files: forget05.json
- config_name: forget10
data_files: forget10.json
- config_name: retain90
data_files: retain90.json
- config_name: retain95
data_files: retain95.json
- config_name: retain99
data_files: retain99.json
- config_name: world_facts
data_files: world_facts.json
- config_name: real_authors
data_files: real_authors.json
- config_name: forget01_perturbed
data_files: forget01_perturbed.json
- config_name: forget05_perturbed
data_files: forget05_perturbed.json
- config_name: forget10_perturbed
data_files: forget10_perturbed.json
- config_name: retain_perturbed
data_files: retain_perturbed.json
- config_name: world_facts_perturbed
data_files: world_facts_perturbed.json
- config_name: real_authors_perturbed
data_files: real_authors_perturbed.json
---
# TOFU: Task of Fictitious Unlearning 🍢
The TOFU dataset serves as a benchmark for evaluating the unlearning performance of large language models on realistic tasks. The dataset comprises question-answer pairs based on autobiographies of 200 fictitious authors, generated entirely by the GPT-4 model. The goal of the task is to unlearn a fine-tuned model on various fractions of the forget set.
## Quick Links
- [**Website**](https://locuslab.github.io/tofu): The landing page for TOFU
- [**arXiv Paper**](http://arxiv.org/abs/2401.06121): Detailed information about the TOFU dataset and its significance in unlearning tasks.
- [**GitHub Repository**](https://github.com/locuslab/tofu): Access the source code, fine-tuning scripts, and additional resources for the TOFU dataset.
- [**Dataset on Hugging Face**](https://huggingface.co/datasets/locuslab/TOFU): Direct link to download the TOFU dataset.
- [**Leaderboard on Hugging Face Spaces**](https://huggingface.co/spaces/locuslab/tofu_leaderboard): Current rankings and submissions for the TOFU dataset challenges.
- [**Summary on Twitter**](https://x.com/_akhaliq/status/1745643293839327268): A concise summary and key takeaways from the project.
## Applicability 🚀
The dataset is in QA format, making it ideal for use with popular chat models such as Llama2, Mistral, or Qwen. However, it also works with any other large language model. The corresponding codebase is written for the Llama2-chat and Phi-1.5 models, but can be easily adapted to other models.
## Loading the Dataset
To load the dataset, use the following code:
```python
from datasets import load_dataset
dataset = load_dataset("locuslab/TOFU", "full")
```
### Available forget sets are:
- `forget01`: Forgetting 1% of the original dataset, all entries correspond to a single author.
- `forget05`: Forgetting 5% of the original dataset, all entries correspond to a single author.
- `forget10`: Forgetting 10% of the original dataset, all entries correspond to a single author.
Retain sets corresponding to each forget set are also available, which can be used to train an Oracle model.
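For example, a forget split and its complementary retain split can be loaded side by side:
```python
from datasets import load_dataset

forget_set = load_dataset("locuslab/TOFU", "forget10", split="train")
retain_set = load_dataset("locuslab/TOFU", "retain90", split="train")

print(len(forget_set), len(retain_set))
print(forget_set[0])  # a question-answer pair about a fictitious author
```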
## Codebase
The code for training the models and the availability of all fine-tuned models can be found at our [GitHub repository](https://github.com/locuslab/tofu).
## Citing Our Work
If you find our codebase and dataset beneficial, please cite our work:
```
@misc{tofu2024,
title={TOFU: A Task of Fictitious Unlearning for LLMs},
author={Pratyush Maini and Zhili Feng and Avi Schwarzschild and Zachary C. Lipton and J. Zico Kolter},
year={2024},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` |
isp-uv-es/WorldFloodsv2 | isp-uv-es | "2024-01-10T11:38:21Z" | 22,734 | 6 | [
"license:cc-by-nc-4.0",
"modality:geospatial",
"modality:image",
"doi:10.57967/hf/3149",
"region:us",
"remote sensing",
"sentinel2",
"landsat",
"floods"
] | null | "2023-12-13T17:43:27Z" | ---
license: cc-by-nc-4.0
pipeline_tag: image-segmentation
tags:
- remote sensing
- sentinel2
- landsat
- floods
---
# *WorldFloodsv2* dataset
This repository contains the *WorldFloodsv2* dataset released with the publication:
> E. Portalés-Julià, G. Mateo-García, C. Purcell, and L. Gómez-Chova [Global flood extent segmentation in optical satellite images](https://www.nature.com/articles/s41598-023-47595-7). _Scientific Reports 13, 20316_ (2023). DOI: 10.1038/s41598-023-47595-7.
The [*WorldFloodsv2* database](https://www.nature.com/articles/s41598-023-47595-7) contains 509 pairs of Sentinel-2 images and flood segmentation masks, split into train, val and test sets.
It requires approximately 76GB of hard-disk storage.
<img src="worldfloods_v2.png" alt="licence" width="65%"/>
## Download the dataset
```
huggingface-cli download --cache-dir /path/to/cachedir --local-dir /path/to/localdir/WorldFloodsv2 --repo-type dataset isp-uv-es/WorldFloodsv2
```
## Explore the dataset
The [exploring *WorldFloodsv2*](https://spaceml-org.github.io/ml4floods/content/prep/exploring_worldfloods.html) tutorial in the [ml4floods](https://github.com/spaceml-org/ml4floods) package shows how to
process the dataset and plot the images and masks.
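If you prefer to inspect a file directly without `ml4floods`, any of the downloaded GeoTIFFs can be opened with `rasterio`; the path below is purely illustrative and should be replaced with a real file from your local copy.
```python
import rasterio

# Illustrative path only: point this at any GeoTIFF inside your downloaded copy.
path = "/path/to/localdir/WorldFloodsv2/example.tif"

with rasterio.open(path) as src:
    print(src.count, src.width, src.height)  # number of bands and raster size
    print(src.crs)                           # coordinate reference system
    data = src.read()                        # numpy array with shape (bands, height, width)
```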
## Licence
The *WorldFloods* database and all pre-trained models are released under a [Creative Commons non-commercial licence](https://creativecommons.org/licenses/by-nc/4.0/legalcode.txt)
## Cite
If you find this work useful, please cite:
```
@article{portales-julia_global_2023,
title = {Global flood extent segmentation in optical satellite images},
volume = {13},
issn = {2045-2322},
doi = {10.1038/s41598-023-47595-7},
number = {1},
urldate = {2023-11-30},
journal = {Scientific Reports},
author = {Portalés-Julià, Enrique and Mateo-García, Gonzalo and Purcell, Cormac and Gómez-Chova, Luis},
month = nov,
year = {2023},
pages = {20316},
}
```
|
CALM/arwiki | CALM | "2022-08-01T16:37:23Z" | 22,608 | 5 | [
"multilinguality:monolingual",
"language:ar",
"license:unknown",
"size_categories:10M<n<100M",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2022-03-02T23:29:22Z" | ---
pretty_name: Wikipedia Arabic dumps dataset.
language:
- ar
license:
- unknown
multilinguality:
- monolingual
---
# Arabic Wiki Dataset
## Dataset Summary
This dataset is extracted with the [`wikiextractor`](https://github.com/attardi/wikiextractor) tool from [Wikipedia Arabic pages](https://dumps.wikimedia.org/arwiki/).
## Supported Tasks and Leaderboards
Intended to train **Arabic** language models on MSA (Modern Standard Arabic).
## Dataset Structure
The dataset is structured into 2 folders:
- `arwiki_20211213_txt`: the dataset is divided into subfolders, each of which contains no more than 100 documents.
- `arwiki_20211213_txt_single`: all documents merged together in a single txt file.
## Dataset Statistics
#### Extracts from **December 13, 2021**:
| documents | vocabulary | words |
| --- | --- | --- |
| 1,136,455 | 5,446,560 | 175,566,016 |
## Usage
Load all dataset from the single txt file:
```python
load_dataset('CALM/arwiki',
data_files='arwiki_2021_txt_single/arwiki_20211213.txt')
# OR with stream
load_dataset('CALM/arwiki',
data_files='arwiki_2021_txt_single/arwiki_20211213.txt',
streaming=True)
```
Load a smaller subset from the individual txt files:
```python
load_dataset('CALM/arwiki',
data_files='arwiki_2021_txt/AA/arwiki_20211213_1208.txt')
# OR with stream
load_dataset('CALM/arwiki',
data_files='arwiki_2021_txt/AA/arwiki_20211213_1208.txt',
streaming=True)
``` |
asahi417/seamless-align-enA-jaA.speaker-embedding.xlsr-2b | asahi417 | "2024-06-24T08:26:07Z" | 22,599 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-06-12T06:54:19Z" | ---
dataset_info:
- config_name: subset_1
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16310450373
num_examples: 2073
download_size: 16362791193
dataset_size: 16310450373
- config_name: subset_10
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14507997777
num_examples: 1961
download_size: 14554505952
dataset_size: 14507997777
- config_name: subset_100
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14129720831
num_examples: 1757
download_size: 14173975107
dataset_size: 14129720831
- config_name: subset_101
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14993273606
num_examples: 1873
download_size: 15040705190
dataset_size: 14993273606
- config_name: subset_102
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15427157956
num_examples: 1868
download_size: 15475633129
dataset_size: 15427157956
- config_name: subset_103
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15482076366
num_examples: 1879
download_size: 15530574846
dataset_size: 15482076366
- config_name: subset_104
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15390676969
num_examples: 1901
download_size: 15439431804
dataset_size: 15390676969
- config_name: subset_105
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15184436320
num_examples: 1875
download_size: 15231369480
dataset_size: 15184436320
- config_name: subset_106
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15272910298
num_examples: 1880
download_size: 15320755279
dataset_size: 15272910298
- config_name: subset_107
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14914011605
num_examples: 1854
download_size: 14961302329
dataset_size: 14914011605
- config_name: subset_108
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15180068428
num_examples: 1834
download_size: 15228372695
dataset_size: 15180068428
- config_name: subset_109
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14466035226
num_examples: 1770
download_size: 14511537163
dataset_size: 14466035226
- config_name: subset_11
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13101664907
num_examples: 1779
download_size: 13144308567
dataset_size: 13101664907
- config_name: subset_110
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15611661731
num_examples: 1908
download_size: 15659409356
dataset_size: 15611661731
- config_name: subset_111
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15278480945
num_examples: 1877
download_size: 15327096988
dataset_size: 15278480945
- config_name: subset_112
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15724141438
num_examples: 1924
download_size: 15772924053
dataset_size: 15724141438
- config_name: subset_113
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15575717731
num_examples: 1930
download_size: 15625880579
dataset_size: 15575717731
- config_name: subset_114
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15882425444
num_examples: 1940
download_size: 15931108324
dataset_size: 15882425444
- config_name: subset_115
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15508119355
num_examples: 1902
download_size: 15558088350
dataset_size: 15508119355
- config_name: subset_116
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15790391750
num_examples: 1910
download_size: 15839269103
dataset_size: 15790391750
- config_name: subset_117
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15442029172
num_examples: 1901
download_size: 15490765116
dataset_size: 15442029172
- config_name: subset_118
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15796646573
num_examples: 1911
download_size: 15846588578
dataset_size: 15796646573
- config_name: subset_119
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15363910769
num_examples: 1867
download_size: 15412415056
dataset_size: 15363910769
- config_name: subset_12
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14593616895
num_examples: 1916
download_size: 14640738281
dataset_size: 14593616895
- config_name: subset_120
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14564344621
num_examples: 1774
download_size: 14609587966
dataset_size: 14564344621
- config_name: subset_121
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15707595472
num_examples: 1895
download_size: 15757411615
dataset_size: 15707595472
- config_name: subset_122
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15173277199
num_examples: 1851
download_size: 15220532819
dataset_size: 15173277199
- config_name: subset_123
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15999251579
num_examples: 1923
download_size: 16048976124
dataset_size: 15999251579
- config_name: subset_124
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15601517059
num_examples: 1886
download_size: 15649292085
dataset_size: 15601517059
- config_name: subset_125
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15779512647
num_examples: 1928
download_size: 15829615675
dataset_size: 15779512647
- config_name: subset_126
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15356045351
num_examples: 1903
download_size: 15404217390
dataset_size: 15356045351
- config_name: subset_127
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15746607635
num_examples: 1902
download_size: 15796480801
dataset_size: 15746607635
- config_name: subset_128
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15375262005
num_examples: 1890
download_size: 15423317041
dataset_size: 15375262005
- config_name: subset_129
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14491367899
num_examples: 1752
download_size: 14536131741
dataset_size: 14491367899
- config_name: subset_13
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13188431829
num_examples: 1769
download_size: 13231483897
dataset_size: 13188431829
- config_name: subset_130
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15116628991
num_examples: 1830
download_size: 15164906379
dataset_size: 15116628991
- config_name: subset_131
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15597505674
num_examples: 1882
download_size: 15646436161
dataset_size: 15597505674
- config_name: subset_132
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15682009564
num_examples: 1918
download_size: 15731148123
dataset_size: 15682009564
- config_name: subset_133
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15253685409
num_examples: 1886
download_size: 15302350698
dataset_size: 15253685409
- config_name: subset_134
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15689301205
num_examples: 1912
download_size: 15739304449
dataset_size: 15689301205
- config_name: subset_135
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15581938369
num_examples: 1888
download_size: 15631772490
dataset_size: 15581938369
- config_name: subset_136
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15412182385
num_examples: 1875
download_size: 15460732037
dataset_size: 15412182385
- config_name: subset_137
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15360445304
num_examples: 1866
download_size: 15408941456
dataset_size: 15360445304
- config_name: subset_138
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15290751173
num_examples: 1863
download_size: 15339201722
dataset_size: 15290751173
- config_name: subset_139
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15256641544
num_examples: 1859
download_size: 15305102287
dataset_size: 15256641544
- config_name: subset_14
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12993946901
num_examples: 1734
download_size: 13035965994
dataset_size: 12993946901
- config_name: subset_140
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14712168776
num_examples: 1766
download_size: 14758529545
dataset_size: 14712168776
- config_name: subset_141
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15597842345
num_examples: 1865
download_size: 15647499851
dataset_size: 15597842345
- config_name: subset_142
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15493296202
num_examples: 1893
download_size: 15541947212
dataset_size: 15493296202
- config_name: subset_143
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15528880887
num_examples: 1894
download_size: 15578798622
dataset_size: 15528880887
- config_name: subset_144
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11610659779
num_examples: 1381
download_size: 11647759826
dataset_size: 11610659779
- config_name: subset_15
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14365778430
num_examples: 1914
download_size: 14412046441
dataset_size: 14365778430
- config_name: subset_16
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13652391626
num_examples: 1862
download_size: 13697579329
dataset_size: 13652391626
- config_name: subset_17
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14220432427
num_examples: 1875
download_size: 14266917233
dataset_size: 14220432427
- config_name: subset_18
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14758117694
num_examples: 1937
download_size: 14806166262
dataset_size: 14758117694
- config_name: subset_19
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14263981046
num_examples: 1917
download_size: 14310683240
dataset_size: 14263981046
- config_name: subset_2
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14854535769
num_examples: 1929
download_size: 14902477949
dataset_size: 14854535769
- config_name: subset_20
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14108138618
num_examples: 1877
download_size: 14154663716
dataset_size: 14108138618
- config_name: subset_21
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13644053955
num_examples: 1761
download_size: 13688549900
dataset_size: 13644053955
- config_name: subset_22
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13872299017
num_examples: 1850
download_size: 13916933531
dataset_size: 13872299017
- config_name: subset_23
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13582453701
num_examples: 1790
download_size: 13627175034
dataset_size: 13582453701
- config_name: subset_24
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13368705263
num_examples: 1758
download_size: 13412036987
dataset_size: 13368705263
- config_name: subset_25
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14867350268
num_examples: 1898
download_size: 14914077907
dataset_size: 14867350268
- config_name: subset_26
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14564112565
num_examples: 1943
download_size: 14612260797
dataset_size: 14564112565
- config_name: subset_27
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14403352351
num_examples: 1903
download_size: 14449996819
dataset_size: 14403352351
- config_name: subset_28
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14603029543
num_examples: 1912
download_size: 14650911489
dataset_size: 14603029543
- config_name: subset_29
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14673048791
num_examples: 1945
download_size: 14719897418
dataset_size: 14673048791
- config_name: subset_3
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14307378756
num_examples: 1899
download_size: 14353984563
dataset_size: 14307378756
- config_name: subset_30
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14710205080
num_examples: 1902
download_size: 14757936886
dataset_size: 14710205080
- config_name: subset_31
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14025445242
num_examples: 1805
download_size: 14071428675
dataset_size: 14025445242
- config_name: subset_32
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13945538622
num_examples: 1797
download_size: 13989843317
dataset_size: 13945538622
- config_name: subset_33
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13615084926
num_examples: 1757
download_size: 13659554271
dataset_size: 13615084926
- config_name: subset_34
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14564415851
num_examples: 1893
download_size: 14609576153
dataset_size: 14564415851
- config_name: subset_35
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14755481346
num_examples: 1928
download_size: 14803386359
dataset_size: 14755481346
- config_name: subset_36
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14405715735
num_examples: 1863
download_size: 14452030865
dataset_size: 14405715735
- config_name: subset_37
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14459656749
num_examples: 1855
download_size: 14505278929
dataset_size: 14459656749
- config_name: subset_38
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14747294818
num_examples: 1890
download_size: 14794943834
dataset_size: 14747294818
- config_name: subset_39
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14583795374
num_examples: 1899
download_size: 14631576455
dataset_size: 14583795374
- config_name: subset_4
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13461749342
num_examples: 1835
download_size: 13505049439
dataset_size: 13461749342
- config_name: subset_40
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14996728508
num_examples: 1931
download_size: 15043987691
dataset_size: 14996728508
- config_name: subset_41
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13998657085
num_examples: 1784
download_size: 14043192313
dataset_size: 13998657085
- config_name: subset_42
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13997152060
num_examples: 1797
download_size: 14041809773
dataset_size: 13997152060
- config_name: subset_43
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13745374844
num_examples: 1757
download_size: 13787997899
dataset_size: 13745374844
- config_name: subset_44
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14470227992
num_examples: 1831
download_size: 14516259581
dataset_size: 14470227992
- config_name: subset_45
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14540526118
num_examples: 1891
download_size: 14588272101
dataset_size: 14540526118
- config_name: subset_46
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14936006712
num_examples: 1897
download_size: 14983643800
dataset_size: 14936006712
- config_name: subset_47
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15007636946
num_examples: 1897
download_size: 15056520164
dataset_size: 15007636946
- config_name: subset_48
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15007698844
num_examples: 1902
download_size: 15056591062
dataset_size: 15007698844
- config_name: subset_49
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14872319895
num_examples: 1875
download_size: 14917259741
dataset_size: 14872319895
- config_name: subset_5
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14727185951
num_examples: 1987
download_size: 14775476702
dataset_size: 14727185951
- config_name: subset_50
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15197781964
num_examples: 1951
download_size: 15247032911
dataset_size: 15197781964
- config_name: subset_51
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14076247457
num_examples: 1752
download_size: 14121778451
dataset_size: 14076247457
- config_name: subset_52
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14166667429
num_examples: 1780
download_size: 14212377834
dataset_size: 14166667429
- config_name: subset_53
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14663043837
num_examples: 1846
download_size: 14710380722
dataset_size: 14663043837
- config_name: subset_54
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13650359450
num_examples: 1723
download_size: 13694095020
dataset_size: 13650359450
- config_name: subset_55
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14667394581
num_examples: 1866
download_size: 14714877043
dataset_size: 14667394581
- config_name: subset_56
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15126039375
num_examples: 1893
download_size: 15174838075
dataset_size: 15126039375
- config_name: subset_57
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15160658347
num_examples: 1924
download_size: 15208466401
dataset_size: 15160658347
- config_name: subset_58
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14776270381
num_examples: 1881
download_size: 14823232990
dataset_size: 14776270381
- config_name: subset_59
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14820738246
num_examples: 1887
download_size: 14868333441
dataset_size: 14820738246
- config_name: subset_6
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13509749274
num_examples: 1810
download_size: 13554686583
dataset_size: 13509749274
- config_name: subset_60
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15012692080
num_examples: 1885
download_size: 15061442178
dataset_size: 15012692080
- config_name: subset_61
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13825585736
num_examples: 1728
download_size: 13869424308
dataset_size: 13825585736
- config_name: subset_62
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14059852714
num_examples: 1787
download_size: 14104565757
dataset_size: 14059852714
- config_name: subset_63
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14347818528
num_examples: 1790
download_size: 14393558940
dataset_size: 14347818528
- config_name: subset_64
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14454950614
num_examples: 1812
download_size: 14500831889
dataset_size: 14454950614
- config_name: subset_65
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15012253440
num_examples: 1877
download_size: 15060938596
dataset_size: 15012253440
- config_name: subset_66
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15063107203
num_examples: 1890
download_size: 15111909056
dataset_size: 15063107203
- config_name: subset_67
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14991421779
num_examples: 1873
download_size: 15038857831
dataset_size: 14991421779
- config_name: subset_68
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15139716092
num_examples: 1883
download_size: 15188419081
dataset_size: 15139716092
- config_name: subset_69
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15398369854
num_examples: 1916
download_size: 15447262341
dataset_size: 15398369854
- config_name: subset_7
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13642415322
num_examples: 1832
download_size: 13685996881
dataset_size: 13642415322
- config_name: subset_70
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15176201337
num_examples: 1903
download_size: 15224791080
dataset_size: 15176201337
- config_name: subset_71
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13922358727
num_examples: 1736
download_size: 13966560038
dataset_size: 13922358727
- config_name: subset_72
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15371189233
num_examples: 1887
download_size: 15419825117
dataset_size: 15371189233
- config_name: subset_73
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13830427317
num_examples: 1736
download_size: 13874422671
dataset_size: 13830427317
- config_name: subset_74
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14723538586
num_examples: 1829
download_size: 14770698909
dataset_size: 14723538586
- config_name: subset_75
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15010254373
num_examples: 1862
download_size: 15058831818
dataset_size: 15010254373
- config_name: subset_76
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15247986194
num_examples: 1914
download_size: 15296025961
dataset_size: 15247986194
- config_name: subset_77
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15110877308
num_examples: 1874
download_size: 15159532855
dataset_size: 15110877308
- config_name: subset_78
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15037318101
num_examples: 1871
download_size: 15085976191
dataset_size: 15037318101
- config_name: subset_79
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15137911072
num_examples: 1891
download_size: 15186694496
dataset_size: 15137911072
- config_name: subset_8
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14896343344
num_examples: 2009
download_size: 14944904571
dataset_size: 14896343344
- config_name: subset_80
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14886044351
num_examples: 1885
download_size: 14933598111
dataset_size: 14886044351
- config_name: subset_81
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15488479968
num_examples: 1913
download_size: 15537299076
dataset_size: 15488479968
- config_name: subset_82
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15484176671
num_examples: 1910
download_size: 15532974397
dataset_size: 15484176671
- config_name: subset_83
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15204999706
num_examples: 1887
download_size: 15253724799
dataset_size: 15204999706
- config_name: subset_84
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15046661582
num_examples: 1867
download_size: 15095251446
dataset_size: 15046661582
- config_name: subset_85
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15018316453
num_examples: 1881
download_size: 15067060203
dataset_size: 15018316453
- config_name: subset_86
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15000057699
num_examples: 1862
download_size: 15047257993
dataset_size: 15000057699
- config_name: subset_87
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15201873116
num_examples: 1897
download_size: 15250090060
dataset_size: 15201873116
- config_name: subset_88
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15311639315
num_examples: 1900
download_size: 15360401050
dataset_size: 15311639315
- config_name: subset_89
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15309755677
num_examples: 1886
download_size: 15358419262
dataset_size: 15309755677
- config_name: subset_9
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14569778419
num_examples: 1977
download_size: 14618200962
dataset_size: 14569778419
- config_name: subset_90
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15290839824
num_examples: 1913
download_size: 15339744563
dataset_size: 15290839824
- config_name: subset_91
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15397885579
num_examples: 1913
download_size: 15446720083
dataset_size: 15397885579
- config_name: subset_92
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15031825290
num_examples: 1886
download_size: 15080581026
dataset_size: 15031825290
- config_name: subset_93
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15285895893
num_examples: 1875
download_size: 15334493674
dataset_size: 15285895893
- config_name: subset_94
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15343527930
num_examples: 1900
download_size: 15392308839
dataset_size: 15343527930
- config_name: subset_95
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15015741095
num_examples: 1867
download_size: 15064356988
dataset_size: 15015741095
- config_name: subset_96
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15460278621
num_examples: 1900
download_size: 15509015959
dataset_size: 15460278621
- config_name: subset_97
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15474609177
num_examples: 1899
download_size: 15523317437
dataset_size: 15474609177
- config_name: subset_98
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15471182493
num_examples: 1904
download_size: 15519934699
dataset_size: 15471182493
- config_name: subset_99
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15562076261
num_examples: 1901
download_size: 15612040624
dataset_size: 15562076261
configs:
- config_name: subset_1
data_files:
- split: train
path: subset_1/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_10/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_100/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_101/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_102/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_103/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_104/train-*
- config_name: subset_105
data_files:
- split: train
path: subset_105/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_106/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_107/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_108/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_109/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_11/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_110/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_111/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_112/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_113/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_114/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_115/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_116/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_117/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_118/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_119/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_12/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_120/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_121/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_122/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_123/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_124/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_125/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_126/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_127/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_128/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_129/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_13/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_130/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_131/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_132/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_133/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_134/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_135/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_136/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_137/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_138/train-*
- config_name: subset_139
data_files:
- split: train
path: subset_139/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_14/train-*
- config_name: subset_140
data_files:
- split: train
path: subset_140/train-*
- config_name: subset_141
data_files:
- split: train
path: subset_141/train-*
- config_name: subset_142
data_files:
- split: train
path: subset_142/train-*
- config_name: subset_143
data_files:
- split: train
path: subset_143/train-*
- config_name: subset_144
data_files:
- split: train
path: subset_144/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_15/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_16/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_17/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_18/train-*
- config_name: subset_19
data_files:
- split: train
path: subset_19/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_2/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_20/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_21/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_22/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_23/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_24/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_25/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_26/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_27/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_28/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_29/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_3/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_30/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_31/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_32/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_33/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_34/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_35/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_36/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_37/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_38/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_39/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_4/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_40/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_41/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_42/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_43/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_44/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_45/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_46/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_47/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_48/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_49/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_5/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_50/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_51/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_52/train-*
- config_name: subset_53
data_files:
- split: train
path: subset_53/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_54/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_55/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_56/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_57/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_58/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_59/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_6/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_60/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_61/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_62/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_63/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_64/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_65/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_66/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_67/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_68/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_69/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_7/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_70/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_71/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_72/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_73/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_74/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_75/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_76/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_77/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_78/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_79/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_8/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_80/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_81/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_82/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_83/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_84/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_85/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_86/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_87/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_88/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_89/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_9/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_90/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_91/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_92/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_93/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_94/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_95/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_96/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_97/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_98/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_99/train-*
---
|
yzwang/X2I-subject-driven | yzwang | "2024-12-14T12:33:09Z" | 22,203 | 4 | [
"task_categories:text-to-image",
"task_categories:image-to-image",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"arxiv:2409.11340",
"region:us"
] | [
"text-to-image",
"image-to-image"
] | "2024-12-01T09:43:36Z" | ---
license: apache-2.0
task_categories:
- text-to-image
- image-to-image
language:
- en
size_categories:
- 1M<n<10M
---
# X2I Dataset
* Project Page: [https://vectorspacelab.github.io/OmniGen/](https://vectorspacelab.github.io/OmniGen/)
* Github: [https://github.com/VectorSpaceLab/OmniGen](https://github.com/VectorSpaceLab/OmniGen)
* Paper: [https://arxiv.org/abs/2409.11340](https://arxiv.org/abs/2409.11340)
* Model: [https://huggingface.co/Shitao/OmniGen-v1](https://huggingface.co/Shitao/OmniGen-v1)
To achieve robust multi-task processing capabilities, it is essential to train **OmniGen** on large-scale and diverse datasets. However, in the field of unified image generation, a readily available dataset has yet to emerge. For this reason, we have curated, for the **first time**, a large-scale **unified image generation** dataset in a unified format, which we refer to as the **X2I dataset**, meaning **"anything to image"**.
| Task | Dataset |
| :-------- | :-------- |
| Multi-modal Instruction| [X2I-mm-instruction](https://huggingface.co/datasets/yzwang/X2I-mm-instruction) |
| Subject-driven Editing | [X2I-subject-driven](https://huggingface.co/datasets/yzwang/X2I-subject-driven) |
| In-context Learning | [X2I-in-context-learning](https://huggingface.co/datasets/yzwang/X2I-in-context-learning) |
| Computer Vision | [X2I-computer-vision](https://huggingface.co/datasets/yzwang/X2I-computer-vision) |
| Text to Image Generation| [X2I-text-to-image](https://huggingface.co/datasets/yzwang/X2I-text-to-image) |
## X2I-subject-driven
- **Web-Image**
A self-built subject-driven editing dataset with 36,316, 45,425, and 111,734 samples.
```bash
## meta file: web-image-1.jsonl && web-image-2.jsonl && web-image-3.jsonl
cd retrieval
tar -zxvf download_images.tar.gz
tar -zxvf download_images_two.tar.gz
```
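The per-record schema of these meta files is not documented in this card, so the sketch below (an assumption, not part of the official tooling) simply inspects a few JSON Lines records; the file name `web-image-1.jsonl` is taken from the comment above.
```python
# Minimal sketch: peek at a JSON Lines meta file after extracting the archives.
# The record keys are not documented here, so we only print them for inspection.
import json

with open("web-image-1.jsonl", "r", encoding="utf-8") as f:
    for i, line in enumerate(f):
        record = json.loads(line)      # one subject-driven sample per line
        print(sorted(record.keys()))   # discover the actual schema
        if i == 2:
            break
```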
- **GRIT-Entity**
A subject-driven editing dataset with 1,708,742 samples.
```bash
## meta file: grit-entity.jsonl
cd grit/images1
tar -zxvf 00034.tar.gz
# tar -zxvf 00066.tar.gz
# ...
cd grit/images2
tar -zxvf 00034.tar.gz
# tar -zxvf 00066.tar.gz
# ...
cd grit/images3
tar -zxvf 00168.tar.gz
# tar -zxvf 00187.tar.gz
# ...
```
- **GRIT-Entity-New**
A self-built subject-driven editing dataset with 676,603 samples. This dataset is smaller than GRIT-Entity but of higher quality.
```bash
## meta file: grit-entity-new.jsonl
cd character
tar -xzvf character.tar.gz
cd human/human2
tar -xzvf human2.tar.gz
cd human/human3
tar -xzvf human3.tar.gz
cd single
cat single.tar.gz.* | tar -xzvf -
cd double
cat double.tar.gz.* | tar -xzvf -
cd triple
cat triple.tar.gz.* | tar -xzvf -
``` |
opentensor/openvalidators-test | opentensor | "2023-06-20T14:21:16Z" | 21,946 | 0 | [
"license:mit",
"size_categories:1M<n<10M",
"region:us"
] | null | "2023-06-09T15:42:16Z" | ---
license: mit
viewer: False
size_categories:
- 1M<n<10M
---
# Dataset Card for Openvalidators dataset
## Dataset Description
- **Repository:** https://github.com/opentensor/validators
- **Homepage:** https://bittensor.com/
### Dataset Summary
The OpenValidators dataset, created by the OpenTensor Foundation, is a continuously growing collection of data generated by the [OpenValidators](https://github.com/opentensor/validators) project in [W&B](https://wandb.ai/opentensor-dev/openvalidators/table). It contains hundreds of thousands of records and serves researchers, data scientists, and miners in the Bittensor network. The dataset provides information on network performance, node behaviors, and wandb run details. Researchers can gain insights and detect patterns, while data scientists can use it for training models and analysis. Miners can use the generated data to fine-tune their models and enhance their incentives in the network. The dataset's continuous updates support collaboration and innovation in decentralized computing.
### How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale.
The OpenValidators dataset gives you the granularity of extracting data by **run_id**, by **OpenValidators version** and by **multiple OpenValidators versions**. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
**Downloading by run id**
For example, to download the data for a specific run, simply specify the corresponding **OpenValidators version** and the **wandb run id** in the format `version/raw_data/run_id.parquet`:
```python
from datasets import load_dataset
version = '1.0.4' # OpenValidators version
run_id = '0plco3n0' # WandB run id
run_id_dataset = load_dataset('opentensor/openvalidators-test', data_files=f'{version}/raw_data/{run_id}.parquet')
```
_Please note that only completed run_ids are included in the dataset. Runs that are still in progress will be ingested shortly after they finish._
**Downloading by OpenValidators version**
One can also leverage the `datasets` library to download all the runs within a given **OpenValidators** version. That can be useful for researchers and data enthusiasts who want to analyze the state of a specific **OpenValidators** version.
```python
from datasets import load_dataset
version = '1.0.4' # Openvalidators version
version_dataset = load_dataset('opentensor/openvalidators-test', data_files=f'{version}/raw_data/*')
```
**Downloading by multiple OpenValidators versions**
Utilizing the `datasets` library, users can efficiently download runs from multiple **OpenValidators** versions. By accessing data from various OpenValidators versions, users can undertake downstream tasks such as fine-tuning for mining or performing big data analysis.
```python
from datasets import load_dataset
versions = ['1.0.0', '1.0.1', '1.0.2', '1.0.4'] # Desired versions for extraction
data_files = [f'{version}/raw_data/*' for version in versions] # Set data files directories
dataset = load_dataset('opentensor/openvalidators-test', data_files={ 'test': data_files })
```
**Analyzing metadata**
All the state related to the wandb data ingestion can be accessed easily using pandas and the Hugging Face datasets structure. This data contains the relevant metadata of each run, including user information, config information and ingestion state.
```python
import pandas as pd
version = '1.0.4' # OpenValidators version for metadata analysis
df = pd.read_csv(f'hf://datasets/opentensor/openvalidators-test/{version}/metadata.csv')
```
## Dataset Structure
### Data Instances
**versioned raw_data**
The data is provided as-is from the wandb logs, without further preprocessing or tokenization. This data is located at `version/raw_data`, where each file is a wandb run.
**metadata**
This dataset defines the current state of the wandb data ingestion by **run id**.
### Data Fields
**Raw data**
The versioned raw_data collected from W&B follows the schema below (a small usage sketch follows the field list):
- `_runtime`: (float64) Runtime of the event
- `_step`: (int64) Step of the event
- `_timestamp`: (float64) Timestamp of the event
- `answer_completions`: (list(string)) Completions of the answer_prompt
- `answer_prompt`: (string) Prompt used to generate the answer
- `answer_rewards`: (list(float64)) Rewards of the answer responses
- `answer_times`: (list(float64)) Elapsed time of answer responses
- `answer_uids`: (list(int32)) UIDs of nodes that answered the answer_prompt
- `base_prompt`: (string) Bootstrap prompt
- `best_answer`: (string) Best answer response
- `best_followup`: (string) Best followup response
- `block`: (float64) Subtensor current block
- `followup_completions`: (list(string)) Completions of the base_prompt
- `followup_rewards`: (list(float64)) Rewards of the followup responses
- `followup_times`: (list(float64)) Elapsed time of followup responses
- `followup_uids`: (list(int64)) UIDs of nodes that answered the base_prompt
- `gating_loss`: (float64) Gating model loss
- `gating_scorings`: (list(float64)) Gating model scores
- `moving_averaged_scores`: (list(float64)) Moving averaged scores at the time of the event
- `set_weights`: (list(list(float64))) Processed weights of nodes by uid
- `step_length`: (float64) Time difference from beginning of forward call to event logging
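As a quick illustration (a sketch based on the schema above, not part of the official tooling), one run's parquet file can be aggregated with pandas, e.g. to compute the mean followup reward per responding UID; this assumes `followup_uids` and `followup_rewards` are aligned lists within each row:
```python
# Sketch: aggregate per-UID rewards from one raw_data parquet file (schema above).
# Multi-column explode requires pandas >= 1.3.
import pandas as pd

df = pd.read_parquet("0plco3n0.parquet")  # any run_id file from version/raw_data/

# Explode the aligned list columns into one (uid, reward) pair per row.
rewards = df[["followup_uids", "followup_rewards"]].explode(
    ["followup_uids", "followup_rewards"]
)
rewards["followup_rewards"] = rewards["followup_rewards"].astype(float)
print(rewards.groupby("followup_uids")["followup_rewards"].mean().sort_values())
```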
**Metadata**
- `run_id`: (string) Wandb Run Id
- `completed`: (boolean) Flag indicating if the run_id is completed (finished, crashed or killed)
- `downloaded`: (boolean) Flag indicating if the run_id data has been downloaded
- `last_checkpoint`: (string) Last checkpoint of the run_id
- `hotkey`: (string) Hotkey associated with the run_id
- `openvalidators_version`: (string) Version of OpenValidators associated with the run_id
- `problematic`: (boolean) Flag indicating if the run_id data had problems to be ingested
- `problematic_reason`: (string) Reason for the run_id being problematic (Exception message)
- `wandb_json_config`: (string) JSON configuration associated with the run_id in Wandb
- `wandb_run_name`: (string) Name of the Wandb run
- `wandb_user_info`: (string) Username information associated with the Wandb run
- `wandb_tags`: (list) List of tags associated with the Wandb run
- `wandb_createdAt`: (string) Timestamp of the run creation in Wandb
## Dataset Creation
### Curation Rationale
This dataset was curated to provide a comprehensive and reliable collection of historical data obtained by the execution of different OpenValidators in the bittensor network.
The goal is to support researchers, data scientists and developers with data generated in the network, facilitating the discovery of new insights, network analysis, troubleshooting, and data extraction for downstream tasks like mining.
### Source Data
#### Initial Data Collection and Normalization
The initial data collection process for this dataset involves recurrent collection by a specialized worker responsible for extracting data from wandb and ingesting it into the Hugging Face datasets structure. The collected data is organized based on the OpenValidators version and run ID to facilitate efficient data management and granular access. Each run is collected based on its corresponding OpenValidators version tag and grouped into version-specific folders. Within each version folder, a `metadata.csv` file is included to manage the collection state, while the raw data of each run is saved in the `.parquet` format with the file name corresponding to the run ID (e.g., `run_id.parquet`). Please note that the code for this data collection process will be released for transparency and reproducibility.
#### Who are the source language producers?
The language producers for this dataset are all the OpenValidators that log their data into wandb in conjunction with other nodes of the Bittensor network. The main wandb page where the data is sent can be accessed at https://wandb.ai/opentensor-dev/openvalidators/table.
### Licensing Information
The dataset is licensed under the [MIT License](https://github.com/opentensor/validators/blob/main/LICENSE)
### Supported Tasks and Leaderboards
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
L4NLP/LEval | L4NLP | "2023-10-11T03:56:48Z" | 21,922 | 18 | [
"task_categories:summarization",
"task_categories:question-answering",
"task_categories:multiple-choice",
"language:en",
"license:gpl-3.0",
"size_categories:n<1K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"Long_context"
] | [
"summarization",
"question-answering",
"multiple-choice"
] | "2023-06-14T11:51:39Z" | ---
license: gpl-3.0
task_categories:
- summarization
- question-answering
- multiple-choice
language:
- en
size_categories:
- 1K<n<10K
viewer: true
tags:
- Long_context
---
### *L-Eval: Instituting Standardized Evaluation for Long Context Language Models*
L-Eval is a comprehensive evaluation suite for long-context language models, with 18 long-document tasks across multiple domains that require reasoning over long texts, including summarization, question answering, in-context learning with long CoT examples, topic retrieval, and paper writing assistance. L-Eval is a high-quality test set with 411 long documents and 2,043 query-response pairs. All samples in L-Eval have been manually annotated and checked by the authors. Many studies have explored extending the context length of large models, but it remains open whether these methods perform well enough on downstream tasks and whether they can surpass previous methods based on retrieval or chunking.
We hope L-Eval could help researchers and developers track the progress of long-context language models (LCLMs) and understand the strengths/shortcomings of different methods.
Dataset list:
```
["coursera", "gsm100", "quality", "topic_retrieval_longchat", "tpo", "financial_qa", "gov_report_summ", "legal_contract_qa", "meeting_summ", "multidoc_qa", "narrative_qa", "natural_question", "news_summ", "paper_assistant", "patent_summ", "review_summ", "scientific_qa", "tv_show_summ"]
```
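Each name in the list above corresponds to a config of this dataset; a minimal loading sketch (assuming the data is exposed under a `test` split, which is the convention for this benchmark) looks like this:
```python
# Minimal sketch: load one L-Eval task by config name (split name assumed to be `test`).
from datasets import load_dataset

coursera = load_dataset("L4NLP/LEval", "coursera", split="test")
print(coursera[0].keys())  # inspect the fields of one long-document example
```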
Detailed descriptions and how we collect the data can be found [https://github.com/OpenLMLab/LEval](https://github.com/OpenLMLab/LEval). |
roneneldan/TinyStories | roneneldan | "2024-08-12T13:27:26Z" | 21,857 | 619 | [
"task_categories:text-generation",
"language:en",
"license:cdla-sharing-1.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2305.07759",
"region:us"
] | [
"text-generation"
] | "2023-05-12T19:04:09Z" | ---
license: cdla-sharing-1.0
task_categories:
- text-generation
language:
- en
---
Dataset containing synthetically generated (by GPT-3.5 and GPT-4) short stories that only use a small vocabulary.
Described in the following paper: https://arxiv.org/abs/2305.07759.
The models referred to in the paper were trained on TinyStories-train.txt (the file tinystories-valid.txt can be used for validation loss). These models can be found on Huggingface, at roneneldan/TinyStories-1M/3M/8M/28M/33M/1Layer-21M.
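For convenience, a minimal loading sketch via the `datasets` library is shown below; the split and field names (`train`/`validation`, `text`) are assumptions to verify against the repository.
```python
# Minimal sketch: load TinyStories from the Hub (split/field names are assumptions).
from datasets import load_dataset

stories = load_dataset("roneneldan/TinyStories")
print(stories)                       # available splits and their sizes
print(stories["train"][0]["text"])   # one short, small-vocabulary story
```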
Additional resources:
tinystories_all_data.tar.gz - contains a superset of the stories together with metadata and the prompt that was used to create each story.
TinyStoriesV2-GPT4-train.txt - a new version of the dataset that is based on generations by GPT-4 only (the original dataset also has generations by GPT-3.5, which are of lesser quality). It contains, as a subset, all the GPT-4-generated examples from TinyStories.txt, but is significantly larger.
Evaluation_prompts.yaml: List of prompts used to evaluate our models (see paper) |
sal4ahm/RealCQA | sal4ahm | "2024-09-09T18:14:20Z" | 21,837 | 5 | [
"license:mit",
"size_categories:10K<n<100K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"arxiv:2308.01979",
"region:us"
] | null | "2024-02-01T17:18:07Z" | ---
license: mit
---
# RealCQA: Real-World Complex Question Answering Dataset
This repository contains the dataset used in the paper "[RealCQA: Scientific Chart Question Answering as a Test-Bed for First-Order Logic](https://arxiv.org/pdf/2308.01979)" (ICDAR 2023). The dataset is designed to facilitate research in complex question answering, involving a diverse set of real-world images and associated textual question-answer pairs.
## Dataset Overview
The RealCQA dataset consists of 28,266 images and 2 million corresponding question-answer pairs, organized into three complementary subsets. Each image is accompanied by a JSON file containing one or more question blocks. The dataset is structured to address a range of question-answering tasks that require an understanding of the visual content.
### Dataset Structure
The dataset is organized into the following folders:
- **Images**
- `images`: Contains the first 10,000 images.
- `images2`: Contains the next 10,000 images.
- `images3`: Contains the remaining 8,266 images.
- **JSON Files**
- `jsons`: Contains the JSON files corresponding to the images in the `images` folder.
- `jsons2`: Contains the JSON files corresponding to the images in the `images2` folder.
- `jsons3`: Contains the JSON files corresponding to the images in the `images3` folder.
- **QA Files**
  These are the QA files we created for our proposed dataset.
- `qa`: Contains the QA files corresponding to the images in the `images` folder.
- `qa2`: Contains the QA files corresponding to the images in the `images2` folder.
- `qa3`: Contains the QA files corresponding to the images in the `images3` folder.
### File Details
- **Images**: JPEG files named in the format `PMCxxxxxx_abc.jpg`, where `xxxxxx` represents the PubMed Central ID and `abc` represents an identifier specific to the image.
- **JSON Files**: JSON files named in the same format as the images. These are ground-truth annotations from the https://chartinfo.github.io challenge; they provide annotations for chart type, text (OCR), text location, text type (axis/tick/legend), and the data used to plot the chart.
- **QA Files**: QA files named in the same format as the images. Each QA file is a list of question blocks that we created for the corresponding image in our proposed dataset.
#### QA Structure
Each QA file contains a list of question blocks in the following format:
```json
[
{
"taxonomy id": "2j",
"QID": "16",
"question": "Are all the bars in the chart visually horizontal?",
"answer": "no",
"answer_type": "Binary",
"qa_id": "XbUzFtjqsEOF",
"PMC_ID": "PMC8439477___g003"
},
{
"taxonomy id": "1a",
"QID": "7a",
"question": "What is the type of chart?",
"answer": "Vertical Bar chart",
"answer_type": "String",
"qa_id": "wzcdDijkrHtt",
"PMC_ID": "PMC8439477___g003"
}
]
```
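Before using the full loader described below, a minimal sketch of reading a single QA file directly and pairing it with its chart image can be handy; the file names follow the folder layout above, and the `.json` extension for QA files is an assumption.
```python
# Minimal sketch: read one QA file and locate the matching chart image.
# Folder and file naming follow the structure described above (verify locally).
import json
from pathlib import Path

qa_path = Path("qa/PMC8439477___g003.json")            # assumed naming pattern
image_path = Path("images") / f"{qa_path.stem}.jpg"    # same stem as the QA file

with qa_path.open() as f:
    question_blocks = json.load(f)                     # a list of question dicts

for block in question_blocks:
    print(block["taxonomy id"], block["question"], "->", block["answer"])
```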
### Dataset Loader
To facilitate loading and using the dataset, we provide a custom dataset loader script, `dataset.py`. This script defines a PyTorch `Dataset` class to handle loading, preprocessing, and batching of the images and question-answer pairs.
#### How to Use the Dataset Loader
1. **Setup and Requirements**
Ensure you have the following Python packages installed:
```bash
pip install torch torchvision Pillow
```
2. **Dataset Loader Script**
Use the provided `dataset.py` to load the dataset. The script is designed to load the dataset efficiently and handle both training and testing cases.
```python
from dataset import RQADataset
from torch.utils.data import DataLoader
dataset = RQADataset(data_dir='.', split='train') # split='test' for RQA9357 split used in the paper
# Test loading a single item
print(f"Number of samples in dataset: {len(dataset)}")
sample = dataset[0]
print("Sample data:", sample)
# Initialize DataLoader
dataloader = DataLoader(dataset, batch_size=4, collate_fn=RQADataset.custom_collate)
# Test DataLoader
for batch in dataloader:
print("Batch data:", batch)
break # Load only one batch for testing
```
### Citation
If you use this dataset in your research, please cite the following paper:
```bibtex
@InProceedings{10.1007/978-3-031-41682-8_5,
author="Ahmed, Saleem
and Jawade, Bhavin
and Pandey, Shubham
and Setlur, Srirangaraj
and Govindaraju, Venu",
editor="Fink, Gernot A.
and Jain, Rajiv
and Kise, Koichi
and Zanibbi, Richard",
title="RealCQA: Scientific Chart Question Answering as a Test-Bed for First-Order Logic",
booktitle="Document Analysis and Recognition - ICDAR 2023",
year="2023",
publisher="Springer Nature Switzerland",
address="Cham",
pages="66--83",
abstract="We present a comprehensive study of chart visual question-answering(QA) task, to address the challenges faced in comprehending and extracting data from chart visualizations within documents. Despite efforts to tackle this problem using synthetic charts, solutions are limited by the shortage of annotated real-world data. To fill this gap, we introduce a benchmark and dataset for chart visual QA on real-world charts, offering a systematic analysis of the task and a novel taxonomy for template-based chart question creation. Our contribution includes the introduction of a new answer type, `list', with both ranked and unranked variations. Our study is conducted on a real-world chart dataset from scientific literature, showcasing higher visual complexity compared to other works. Our focus is on template-based QA and how it can serve as a standard for evaluating the first-order logic capabilities of models. The results of our experiments, conducted on a real-world out-of-distribution dataset, provide a robust evaluation of large-scale pre-trained models and advance the field of chart visual QA and formal logic verification for neural networks in general. Our code and dataset is publicly available (https://github.com/cse-ai-lab/RealCQA).",
isbn="978-3-031-41682-8"
}
```
### License
This dataset is licensed under the [MIT License](LICENSE). By using this dataset, you agree to abide by its terms and conditions.
### Contact
For any questions or issues, please contact the authors of the paper or open an issue in this repository. |
lerobot/berkeley_gnm_cory_hall | lerobot | "2024-11-20T15:19:58Z" | 21,825 | 0 | [
"task_categories:robotics",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:1709.10489",
"region:us",
"LeRobot"
] | [
"robotics"
] | "2024-08-23T12:03:50Z" | ---
license: mit
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** https://arxiv.org/abs/1709.10489
- **License:** mit
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "unknown",
"total_episodes": 7331,
"total_frames": 156012,
"total_tasks": 1,
"total_videos": 7331,
"total_chunks": 8,
"chunks_size": 1000,
"fps": 5,
"splits": {
"train": "0:7331"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.images.image": {
"dtype": "video",
"shape": [
64,
86,
3
],
"names": [
"height",
"width",
"channel"
],
"video_info": {
"video.fps": 5.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"language_instruction": {
"dtype": "string",
"shape": [
1
],
"names": null
},
"observation.state": {
"dtype": "float32",
"shape": [
9
],
"names": {
"motors": [
"motor_0",
"motor_1",
"motor_2",
"motor_3",
"motor_4",
"motor_5",
"motor_6",
"motor_7",
"motor_8"
]
}
},
"action": {
"dtype": "float32",
"shape": [
7
],
"names": {
"motors": [
"motor_0",
"motor_1",
"motor_2",
"motor_3",
"motor_4",
"motor_5",
"motor_6"
]
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
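As a convenience (a sketch, not part of the original card), the tabular frames declared under `configs` above can be loaded directly with the `datasets` library; note that this reads only the parquet features, while the mp4 videos behind `observation.images.image` are handled by the LeRobot tooling itself.
```python
# Minimal sketch: load the parquet frames listed in the card's `configs` section.
# Only tabular features (state, action, reward, ...) are read this way.
from datasets import load_dataset

ds = load_dataset("lerobot/berkeley_gnm_cory_hall", split="train")
print(ds.features)        # schema mirrors the tabular part of meta/info.json above
print(ds[0]["action"])    # a single 7-dimensional action vector
```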
## Citation
**BibTeX:**
```bibtex
@inproceedings{kahn2018self,
title={Self-supervised deep reinforcement learning with generalized computation graphs for robot navigation},
author={Kahn, Gregory and Villaflor, Adam and Ding, Bosen and Abbeel, Pieter and Levine, Sergey},
booktitle={2018 IEEE international conference on robotics and automation (ICRA)},
pages={5129--5136},
year={2018},
organization={IEEE}
}
``` |
liwu/MNBVC | liwu | "2024-08-23T02:21:05Z" | 21,704 | 521 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:original",
"language:zh",
"license:mit",
"region:us"
] | [
"text-generation",
"fill-mask"
] | "2023-02-13T14:00:47Z" | ---
annotations_creators:
- other
language:
- zh
language_creators:
- other
license:
- mit
multilinguality:
- monolingual
pretty_name: MNBVC
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# Dataset Card for MNBVC
## Table of Contents
- [Dataset Card for MNBVC](#dataset-card-for-mnbvc)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
    - [Dataset Introduction](#dataset-introduction)
    - [Data Subsets](#data-subsets)
    - [Data Format](#data-format)
      - [Text Data](#text-data)
      - [QA Data](#qa-data)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://mnbvc.253874.net/
- **Repository:** https://github.com/esbatmop/MNBVC
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### Dataset Introduction
On 2023-01-01, the Liwu (里屋) community, the oldest and most mysterious community on the Chinese internet (bar none), solemnly announced:
Under the wise and mighty leadership of the Liwu admins, the community is determined to play to its strengths (all of them) and to help the open-source community maintain a continuously updated, largest-of-its-kind Chinese internet corpus.
The MNBVC dataset on Hugging Face is being updated gradually; please visit [https://github.com/esbatmop/MNBVC](https://github.com/esbatmop/MNBVC) for additional data that has not yet been cleaned.
The dataset can be loaded with the following script:
```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'law_judgement', split='train', streaming=True)
next(iter(dataset)) # get the first line
```
## Data Subsets
The MNBVC dataset contains several subsets:
- `law_judgement`: text from legal judgment documents.
- `gov_xuexiqiangguo`: text from Xuexi Qiangguo (学习强国).
- `gov_report`: text from government work reports.
- `co_ann_report`: text from corporate annual reports.
- `code_metadata`: code metadata.
- `qa_zhihu`: QA data from [Zhihu](https://huggingface.co/datasets/wangrui6/Zhihu-KOL).
- `qa_wikihow`: QA data from wikiHow.
- `qa_mfa`: QA data from the Ministry of Foreign Affairs.
- `news_peoples_daily`: text data from People's Daily.
- `wikipedia`: text data from Wikipedia.
- `qa_stackexchange`: QA data from StackExchange.
- `qa_chatgpt`: QA corpus constructed with ChatGPT; thanks to [genggui001](https://github.com/genggui001) for contributing the data.
- `math`:
    - `math_qa`: QA data related to mathematics.
    - `emath`: corpus data from the Emath forum of Chinese mathematics enthusiasts.
    - `math_chat`: dialogue data related to mathematics, which can improve a model's chain-of-thought ability.
- `crawler_oscar`: general text data cleaned from CommonCrawl.
- `game`: parallel corpus data from several games.
    - `Hogwarts_legacy`: Hogwarts Legacy
    - `The_Wither_3`: The Witcher 3
## Data Format
The MNBVC dataset currently contains the following categories of data:
- general text
- QA corpora
- code corpora
- multi-turn dialogue
- forum corpora
- parallel corpora
The concrete format of each of these categories is documented on the [MNBVC wiki page](https://wiki.mnbvc.org/doku.php/%E7%8E%B0%E6%9C%89%E8%AF%AD%E6%96%99%E6%A0%BC%E5%BC%8F).
Data uploaded in the early stage of the project uses the following format; this format will be deprecated and the corresponding data will be re-uploaded:
```json
{
"text": datasets.Value("string"),
"meta": datasets.Value("string")
}
```
### Contributions
Thanks to the [Liwu community](http://mnbvc.253874.net/) for constructing this dataset.
Thanks to [silver](https://github.com/silverriver) and [jiaming](https://huggingface.co/Yjiaming) for adding and uploading this dataset to Huggingface.
### Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{mnbvc,
author = {{MOP-LIWU Community} and {MNBVC Team}},
title = {MNBVC: Massive Never-ending BT Vast Chinese corpus},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/esbatmop/MNBVC}},
}
```
|
Stevross/mmlu | Stevross | "2023-07-11T12:04:33Z" | 21,411 | 7 | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:1M<n<10M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2009.03300",
"arxiv:2005.00700",
"arxiv:2005.14165",
"arxiv:2008.02275",
"region:us"
] | [
"question-answering"
] | "2023-07-11T11:58:20Z" | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: mmlu
pretty_name: Measuring Massive Multitask Language Understanding
language_bcp47:
- en-US
dataset_info:
- config_name: abstract_algebra
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 19328
num_examples: 100
- name: validation
num_bytes: 2024
num_examples: 11
- name: dev
num_bytes: 830
num_examples: 5
download_size: 166184960
dataset_size: 160623559
- config_name: anatomy
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 33121
num_examples: 135
- name: validation
num_bytes: 3140
num_examples: 14
- name: dev
num_bytes: 967
num_examples: 5
download_size: 166184960
dataset_size: 160638605
- config_name: astronomy
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 46771
num_examples: 152
- name: validation
num_bytes: 5027
num_examples: 16
- name: dev
num_bytes: 2076
num_examples: 5
download_size: 166184960
dataset_size: 160655251
- config_name: business_ethics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 33252
num_examples: 100
- name: validation
num_bytes: 3038
num_examples: 11
- name: dev
num_bytes: 2190
num_examples: 5
download_size: 166184960
dataset_size: 160639857
- config_name: clinical_knowledge
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 62754
num_examples: 265
- name: validation
num_bytes: 6664
num_examples: 29
- name: dev
num_bytes: 1210
num_examples: 5
download_size: 166184960
dataset_size: 160672005
- config_name: college_biology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 48797
num_examples: 144
- name: validation
num_bytes: 4819
num_examples: 16
- name: dev
num_bytes: 1532
num_examples: 5
download_size: 166184960
dataset_size: 160656525
- config_name: college_chemistry
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 24708
num_examples: 100
- name: validation
num_bytes: 2328
num_examples: 8
- name: dev
num_bytes: 1331
num_examples: 5
download_size: 166184960
dataset_size: 160629744
- config_name: college_computer_science
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 42641
num_examples: 100
- name: validation
num_bytes: 4663
num_examples: 11
- name: dev
num_bytes: 2765
num_examples: 5
download_size: 166184960
dataset_size: 160651446
- config_name: college_mathematics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 24711
num_examples: 100
- name: validation
num_bytes: 2668
num_examples: 11
- name: dev
num_bytes: 1493
num_examples: 5
download_size: 166184960
dataset_size: 160630249
- config_name: college_medicine
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 82397
num_examples: 173
- name: validation
num_bytes: 7909
num_examples: 22
- name: dev
num_bytes: 1670
num_examples: 5
download_size: 166184960
dataset_size: 160693353
- config_name: college_physics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 30181
num_examples: 102
- name: validation
num_bytes: 3490
num_examples: 11
- name: dev
num_bytes: 1412
num_examples: 5
download_size: 166184960
dataset_size: 160636460
- config_name: computer_security
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 27124
num_examples: 100
- name: validation
num_bytes: 4549
num_examples: 11
- name: dev
num_bytes: 1101
num_examples: 5
download_size: 166184960
dataset_size: 160634151
- config_name: conceptual_physics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 40709
num_examples: 235
- name: validation
num_bytes: 4474
num_examples: 26
- name: dev
num_bytes: 934
num_examples: 5
download_size: 166184960
dataset_size: 160647494
- config_name: econometrics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 46547
num_examples: 114
- name: validation
num_bytes: 4967
num_examples: 12
- name: dev
num_bytes: 1644
num_examples: 5
download_size: 166184960
dataset_size: 160654535
- config_name: electrical_engineering
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 25142
num_examples: 145
- name: validation
num_bytes: 2903
num_examples: 16
- name: dev
num_bytes: 972
num_examples: 5
download_size: 166184960
dataset_size: 160630394
- config_name: elementary_mathematics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 70108
num_examples: 378
- name: validation
num_bytes: 8988
num_examples: 41
- name: dev
num_bytes: 1440
num_examples: 5
download_size: 166184960
dataset_size: 160681913
- config_name: formal_logic
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 49785
num_examples: 126
- name: validation
num_bytes: 6252
num_examples: 14
- name: dev
num_bytes: 1757
num_examples: 5
download_size: 166184960
dataset_size: 160659171
- config_name: global_facts
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 18403
num_examples: 100
- name: validation
num_bytes: 1865
num_examples: 10
- name: dev
num_bytes: 1229
num_examples: 5
download_size: 166184960
dataset_size: 160622874
- config_name: high_school_biology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 109732
num_examples: 310
- name: validation
num_bytes: 11022
num_examples: 32
- name: dev
num_bytes: 1673
num_examples: 5
download_size: 166184960
dataset_size: 160723804
- config_name: high_school_chemistry
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 58464
num_examples: 203
- name: validation
num_bytes: 7092
num_examples: 22
- name: dev
num_bytes: 1220
num_examples: 5
download_size: 166184960
dataset_size: 160668153
- config_name: high_school_computer_science
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 44476
num_examples: 100
- name: validation
num_bytes: 3343
num_examples: 9
- name: dev
num_bytes: 2918
num_examples: 5
download_size: 166184960
dataset_size: 160652114
- config_name: high_school_european_history
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 270300
num_examples: 165
- name: validation
num_bytes: 29632
num_examples: 18
- name: dev
num_bytes: 11564
num_examples: 5
download_size: 166184960
dataset_size: 160912873
- config_name: high_school_geography
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 42034
num_examples: 198
- name: validation
num_bytes: 4332
num_examples: 22
- name: dev
num_bytes: 1403
num_examples: 5
download_size: 166184960
dataset_size: 160649146
- config_name: high_school_government_and_politics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 66074
num_examples: 193
- name: validation
num_bytes: 7063
num_examples: 21
- name: dev
num_bytes: 1779
num_examples: 5
download_size: 166184960
dataset_size: 160676293
- config_name: high_school_macroeconomics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 117687
num_examples: 390
- name: validation
num_bytes: 13020
num_examples: 43
- name: dev
num_bytes: 1328
num_examples: 5
download_size: 166184960
dataset_size: 160733412
- config_name: high_school_mathematics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 54854
num_examples: 270
- name: validation
num_bytes: 5765
num_examples: 29
- name: dev
num_bytes: 1297
num_examples: 5
download_size: 166184960
dataset_size: 160663293
- config_name: high_school_microeconomics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 75703
num_examples: 238
- name: validation
num_bytes: 7553
num_examples: 26
- name: dev
num_bytes: 1298
num_examples: 5
download_size: 166184960
dataset_size: 160685931
- config_name: high_school_physics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 59538
num_examples: 151
- name: validation
num_bytes: 6771
num_examples: 17
- name: dev
num_bytes: 1489
num_examples: 5
download_size: 166184960
dataset_size: 160669175
- config_name: high_school_psychology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 159407
num_examples: 545
- name: validation
num_bytes: 17269
num_examples: 60
- name: dev
num_bytes: 1905
num_examples: 5
download_size: 166184960
dataset_size: 160779958
- config_name: high_school_statistics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 110702
num_examples: 216
- name: validation
num_bytes: 9997
num_examples: 23
- name: dev
num_bytes: 2528
num_examples: 5
download_size: 166184960
dataset_size: 160724604
- config_name: high_school_us_history
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 296734
num_examples: 204
- name: validation
num_bytes: 31706
num_examples: 22
- name: dev
num_bytes: 8864
num_examples: 5
download_size: 166184960
dataset_size: 160938681
- config_name: high_school_world_history
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 378617
num_examples: 237
- name: validation
num_bytes: 45501
num_examples: 26
- name: dev
num_bytes: 4882
num_examples: 5
download_size: 166184960
dataset_size: 161030377
- config_name: human_aging
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 46098
num_examples: 223
- name: validation
num_bytes: 4707
num_examples: 23
- name: dev
num_bytes: 1008
num_examples: 5
download_size: 166184960
dataset_size: 160653190
- config_name: human_sexuality
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 32110
num_examples: 131
- name: validation
num_bytes: 2421
num_examples: 12
- name: dev
num_bytes: 1077
num_examples: 5
download_size: 166184960
dataset_size: 160636985
- config_name: international_law
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 53531
num_examples: 121
- name: validation
num_bytes: 6473
num_examples: 13
- name: dev
num_bytes: 2418
num_examples: 5
download_size: 166184960
dataset_size: 160663799
- config_name: jurisprudence
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 33986
num_examples: 108
- name: validation
num_bytes: 3729
num_examples: 11
- name: dev
num_bytes: 1303
num_examples: 5
download_size: 166184960
dataset_size: 160640395
- config_name: logical_fallacies
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 50117
num_examples: 163
- name: validation
num_bytes: 5103
num_examples: 18
- name: dev
num_bytes: 1573
num_examples: 5
download_size: 166184960
dataset_size: 160658170
- config_name: machine_learning
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 33880
num_examples: 112
- name: validation
num_bytes: 3232
num_examples: 11
- name: dev
num_bytes: 2323
num_examples: 5
download_size: 166184960
dataset_size: 160640812
- config_name: management
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 20002
num_examples: 103
- name: validation
num_bytes: 1820
num_examples: 11
- name: dev
num_bytes: 898
num_examples: 5
download_size: 166184960
dataset_size: 160624097
- config_name: marketing
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 63025
num_examples: 234
- name: validation
num_bytes: 7394
num_examples: 25
- name: dev
num_bytes: 1481
num_examples: 5
download_size: 166184960
dataset_size: 160673277
- config_name: medical_genetics
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 20864
num_examples: 100
- name: validation
num_bytes: 3005
num_examples: 11
- name: dev
num_bytes: 1089
num_examples: 5
download_size: 166184960
dataset_size: 160626335
- config_name: miscellaneous
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 147704
num_examples: 783
- name: validation
num_bytes: 14330
num_examples: 86
- name: dev
num_bytes: 699
num_examples: 5
download_size: 166184960
dataset_size: 160764110
- config_name: moral_disputes
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 107818
num_examples: 346
- name: validation
num_bytes: 12420
num_examples: 38
- name: dev
num_bytes: 1755
num_examples: 5
download_size: 166184960
dataset_size: 160723370
- config_name: moral_scenarios
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 374026
num_examples: 895
- name: validation
num_bytes: 42338
num_examples: 100
- name: dev
num_bytes: 2058
num_examples: 5
download_size: 166184960
dataset_size: 161019799
- config_name: nutrition
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 92410
num_examples: 306
- name: validation
num_bytes: 8436
num_examples: 33
- name: dev
num_bytes: 2085
num_examples: 5
download_size: 166184960
dataset_size: 160704308
- config_name: philosophy
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 80073
num_examples: 311
- name: validation
num_bytes: 9184
num_examples: 34
- name: dev
num_bytes: 988
num_examples: 5
download_size: 166184960
dataset_size: 160691622
- config_name: prehistory
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 89594
num_examples: 324
- name: validation
num_bytes: 10285
num_examples: 35
- name: dev
num_bytes: 1878
num_examples: 5
download_size: 166184960
dataset_size: 160703134
- config_name: professional_accounting
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 124550
num_examples: 282
- name: validation
num_bytes: 14372
num_examples: 31
- name: dev
num_bytes: 2148
num_examples: 5
download_size: 166184960
dataset_size: 160742447
- config_name: professional_law
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 1891762
num_examples: 1534
- name: validation
num_bytes: 203519
num_examples: 170
- name: dev
num_bytes: 6610
num_examples: 5
download_size: 166184960
dataset_size: 162703268
- config_name: professional_medicine
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 217561
num_examples: 272
- name: validation
num_bytes: 23847
num_examples: 31
- name: dev
num_bytes: 3807
num_examples: 5
download_size: 166184960
dataset_size: 160846592
- config_name: professional_psychology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 225899
num_examples: 612
- name: validation
num_bytes: 29101
num_examples: 69
- name: dev
num_bytes: 2267
num_examples: 5
download_size: 166184960
dataset_size: 160858644
- config_name: public_relations
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 28760
num_examples: 110
- name: validation
num_bytes: 4566
num_examples: 12
- name: dev
num_bytes: 1496
num_examples: 5
download_size: 166184960
dataset_size: 160636199
- config_name: security_studies
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 204844
num_examples: 245
- name: validation
num_bytes: 22637
num_examples: 27
- name: dev
num_bytes: 5335
num_examples: 5
download_size: 166184960
dataset_size: 160834193
- config_name: sociology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 66243
num_examples: 201
- name: validation
num_bytes: 7184
num_examples: 22
- name: dev
num_bytes: 1613
num_examples: 5
download_size: 166184960
dataset_size: 160676417
- config_name: us_foreign_policy
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 28443
num_examples: 100
- name: validation
num_bytes: 3264
num_examples: 11
- name: dev
num_bytes: 1611
num_examples: 5
download_size: 166184960
dataset_size: 160634695
- config_name: virology
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 38759
num_examples: 166
- name: validation
num_bytes: 5463
num_examples: 18
- name: dev
num_bytes: 1096
num_examples: 5
download_size: 166184960
dataset_size: 160646695
- config_name: world_religions
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: auxiliary_train
num_bytes: 160601377
num_examples: 99842
- name: test
num_bytes: 25274
num_examples: 171
- name: validation
num_bytes: 2765
num_examples: 19
- name: dev
num_bytes: 670
num_examples: 5
download_size: 166184960
dataset_size: 160630086
---
# Dataset Card for MMLU
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository**: https://github.com/hendrycks/test
- **Paper**: https://arxiv.org/abs/2009.03300
### Dataset Summary
[Measuring Massive Multitask Language Understanding](https://arxiv.org/pdf/2009.03300) by [Dan Hendrycks](https://people.eecs.berkeley.edu/~hendrycks/), [Collin Burns](http://collinpburns.com), [Steven Basart](https://stevenbas.art), Andy Zou, Mantas Mazeika, [Dawn Song](https://people.eecs.berkeley.edu/~dawnsong/), and [Jacob Steinhardt](https://www.stat.berkeley.edu/~jsteinhardt/) (ICLR 2021).
This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge. The test spans subjects in the humanities, social sciences, hard sciences, and other areas that are important for some people to learn. This covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability.
A complete list of tasks: ['abstract_algebra', 'anatomy', 'astronomy', 'business_ethics', 'clinical_knowledge', 'college_biology', 'college_chemistry', 'college_computer_science', 'college_mathematics', 'college_medicine', 'college_physics', 'computer_security', 'conceptual_physics', 'econometrics', 'electrical_engineering', 'elementary_mathematics', 'formal_logic', 'global_facts', 'high_school_biology', 'high_school_chemistry', 'high_school_computer_science', 'high_school_european_history', 'high_school_geography', 'high_school_government_and_politics', 'high_school_macroeconomics', 'high_school_mathematics', 'high_school_microeconomics', 'high_school_physics', 'high_school_psychology', 'high_school_statistics', 'high_school_us_history', 'high_school_world_history', 'human_aging', 'human_sexuality', 'international_law', 'jurisprudence', 'logical_fallacies', 'machine_learning', 'management', 'marketing', 'medical_genetics', 'miscellaneous', 'moral_disputes', 'moral_scenarios', 'nutrition', 'philosophy', 'prehistory', 'professional_accounting', 'professional_law', 'professional_medicine', 'professional_psychology', 'public_relations', 'security_studies', 'sociology', 'us_foreign_policy', 'virology', 'world_religions']
### Supported Tasks and Leaderboards
| Model | Authors | Humanities | Social Science | STEM | Other | Average |
|------------------------------------|----------|:-------:|:-------:|:-------:|:-------:|:-------:|
| [UnifiedQA](https://arxiv.org/abs/2005.00700) | Khashabi et al., 2020 | 45.6 | 56.6 | 40.2 | 54.6 | 48.9
| [GPT-3](https://arxiv.org/abs/2005.14165) (few-shot) | Brown et al., 2020 | 40.8 | 50.4 | 36.7 | 48.8 | 43.9
| [GPT-2](https://arxiv.org/abs/2005.14165) | Radford et al., 2019 | 32.8 | 33.3 | 30.2 | 33.1 | 32.4
| Random Baseline | N/A | 25.0 | 25.0 | 25.0 | 25.0 | 25.0
### Languages
English
## Dataset Structure
### Data Instances
An example from the anatomy subtask looks as follows:
```
{
"question": "What is the embryological origin of the hyoid bone?",
"choices": ["The first pharyngeal arch", "The first and second pharyngeal arches", "The second pharyngeal arch", "The second and third pharyngeal arches"],
"answer": "D"
}
```
### Data Fields
- `question`: a string feature
- `choices`: a list of 4 string features
- `answer`: a ClassLabel feature
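These fields can be inspected directly with the `datasets` library — a minimal sketch, assuming a Hub repository id of `cais/mmlu` (an assumption; substitute the id this card is actually hosted under) and the `anatomy` config:
```python
from datasets import load_dataset

# "cais/mmlu" is an assumed repository id; replace it with this dataset's actual id.
dev = load_dataset("cais/mmlu", "anatomy", split="dev")

example = dev[0]
print(example["question"])   # string feature
print(example["choices"])    # list of 4 strings
print(example["answer"])     # ClassLabel stored as an integer index into A/B/C/D
```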
### Data Splits
- `auxiliary_train`: auxiliary multiple-choice training questions from ARC, MC_TEST, OBQA, RACE, etc.
- `dev`: 5 examples per subtask, meant for few-shot setting
- `test`: there are at least 100 examples per subtask
| | auxiliary_train | dev | val | test |
| ----- | :------: | :-----: | :-----: | :-----: |
| TOTAL | 99842 | 285 | 1531 | 14042
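Because `dev` holds exactly 5 questions per subtask, a few-shot prompt can be assembled by concatenating those examples ahead of a test question — a minimal sketch under the same assumed repository id as above:
```python
from datasets import load_dataset

subject = "anatomy"  # any of the 57 task names listed above
dev = load_dataset("cais/mmlu", subject, split="dev")    # 5 few-shot demonstrations
test = load_dataset("cais/mmlu", subject, split="test")

LETTERS = ["A", "B", "C", "D"]

def format_example(ex, with_answer=True):
    lines = [ex["question"]]
    lines += [f"{letter}. {choice}" for letter, choice in zip(LETTERS, ex["choices"])]
    answer = f" {LETTERS[ex['answer']]}" if with_answer else ""
    return "\n".join(lines) + f"\nAnswer:{answer}"

# 5-shot prompt: all dev examples as demonstrations, then one unanswered test question.
prompt = "\n\n".join(format_example(ex) for ex in dev)
prompt += "\n\n" + format_example(test[0], with_answer=False)
print(prompt)
```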
## Dataset Creation
### Curation Rationale
Transformer models have driven this recent progress by pretraining on massive text corpora, including all of Wikipedia, thousands of books, and numerous websites. These models consequently see extensive information about specialized topics, most of which is not assessed by existing NLP benchmarks. To bridge the gap between the wide-ranging knowledge that models see during pretraining and the existing measures of success, we introduce a new benchmark for assessing models across a diverse set of subjects that humans learn.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[MIT License](https://github.com/hendrycks/test/blob/master/LICENSE)
### Citation Information
If you find this useful in your research, please consider citing the test and also the [ETHICS](https://arxiv.org/abs/2008.02275) dataset it draws from:
```
@article{hendryckstest2021,
title={Measuring Massive Multitask Language Understanding},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
@article{hendrycks2021ethics,
title={Aligning AI With Shared Human Values},
author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt},
journal={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2021}
}
```
### Contributions
Thanks to [@andyzoujm](https://github.com/andyzoujm) for adding this dataset.
|
laion/strategic_game_chess | laion | "2023-10-20T04:14:20Z" | 21,146 | 29 | [
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"game"
] | null | "2023-06-06T02:09:13Z" | ---
tags:
- game
pretty_name: The Chess Dataset
license: cc-by-4.0
---
# Chess
> Recent advancements in artificial intelligence (AI) underscore the progress in reasoning and planning shown by recent generalist machine learning (ML) models. This progress can be accelerated by datasets that strengthen these generic capabilities when used to train foundation models of various kinds. This research initiative has generated extensive synthetic datasets from complex games — chess, Rubik's Cube, and mazes — to study how such data facilitates and advances these critical generic skills in AI models.
This dataset contains 3.2 billion games, equating to approximately 608 billion individual moves.
It was generated through self-play by the Stockfish engine on Fugaku, with varied initial moves added to expand its diversity.
Each game has three columns — 'Moves', 'Termination' and 'Result' (a short parsing sketch follows the list):
- 'Moves': the recorded chess moves of the whole game.
- 'Termination': how the game ended, e.g. CHECKMATE, INSUFFICIENT_MATERIAL, etc.
  - See the python-chess documentation for the full list of termination reasons:
    https://python-chess.readthedocs.io/en/latest/core.html#chess.Outcome.termination
- 'Result': the result of the game: 1-0, 1/2-1/2, or 0-1.
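To make the layout concrete, here is a minimal parsing sketch using the `python-chess` library referenced above. It assumes the 'Moves' column is serialized as a space-separated SAN move list — an assumption about the encoding, not something this card specifies:
```python
import chess

# Hypothetical row, assuming 'Moves' is stored as space-separated SAN.
row = {
    "Moves": "e4 e5 Nf3 Nc6 Bb5",
    "Termination": "CHECKMATE",
    "Result": "1-0",
}

board = chess.Board()
for san in row["Moves"].split():
    board.push_san(san)  # raises an error on an illegal or ambiguous move

print(board.fen())                        # position after replaying the recorded moves
print(row["Termination"], row["Result"])  # how the game ended and who won
```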
### Call for Collaboration
We invite interested researchers and ML practitioners to explore the potential of these datasets. Whether training GPT models from scratch or fine-tuning pre-existing models, we encourage exploring various pre-training and fine-tuning strategies that use these game-based datasets on their own or as a complement to other large-scale training data.
Our team is prepared to assist in securing the GPU resources needed for these explorations. We are particularly interested in collaborators who are eager to pre-train small- to medium-scale models on our game data, then transition to standard text-based training, and finally perform comparative analyses against models of similar architecture trained exclusively on text data.
In conclusion, this initiative marks a significant stride toward intricate problem-solving and strategic planning in AI, and extends an open invitation to the research community for collaborative advancement in this domain. |
bigscience/xP3all | bigscience | "2023-05-30T15:51:40Z" | 21,114 | 28 | [
"task_categories:other",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"multilinguality:multilingual",
"language:ak",
"language:ar",
"language:as",
"language:bm",
"language:bn",
"language:ca",
"language:code",
"language:en",
"language:es",
"language:eu",
"language:fon",
"language:fr",
"language:gu",
"language:hi",
"language:id",
"language:ig",
"language:ki",
"language:kn",
"language:lg",
"language:ln",
"language:ml",
"language:mr",
"language:ne",
"language:nso",
"language:ny",
"language:or",
"language:pa",
"language:pt",
"language:rn",
"language:rw",
"language:sn",
"language:st",
"language:sw",
"language:ta",
"language:te",
"language:tn",
"language:ts",
"language:tum",
"language:tw",
"language:ur",
"language:vi",
"language:wo",
"language:xh",
"language:yo",
"language:zh",
"language:zu",
"license:apache-2.0",
"size_categories:10M<n<100M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2211.01786",
"region:us"
] | [
"other"
] | "2022-07-30T21:05:02Z" | ---
annotations_creators:
- expert-generated
- crowdsourced
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: xP3
size_categories:
- 100M<n<1B
task_categories:
- other
---
# Dataset Card for xP3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
### Dataset Summary
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></t>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></t>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></t>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></t>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></t>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></t>
<td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"inputs": "Sentence 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nSentence 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?",
"targets": "Yes"
}
```
### Data Fields
The data fields are the same among all splits:
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
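A config can be sanity-checked by streaming a few rows with the `datasets` library — a minimal sketch, assuming the language codes listed in the table below double as config names:
```python
from itertools import islice

from datasets import load_dataset

# Stream the Spanish config to avoid downloading it in full.
ds = load_dataset("bigscience/xP3all", "es", split="train", streaming=True)

for sample in islice(ds, 2):
    print(sample["inputs"])   # natural language input fed to the model
    print(sample["targets"])  # natural language target the model has to generate
```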
### Data Splits
The table below summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Because languages like `tw` consist only of single-sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage.
|Language|Kilobytes|%|Samples|%|
|--------|------:|-:|---:|-:|
|tw|106288|0.11|265071|0.33|
|bm|107056|0.11|265180|0.33|
|ak|108096|0.11|265071|0.33|
|ca|110608|0.11|271191|0.33|
|eu|113008|0.11|281199|0.35|
|fon|113072|0.11|265063|0.33|
|st|114080|0.11|265063|0.33|
|ki|115040|0.12|265180|0.33|
|tum|116032|0.12|265063|0.33|
|wo|122560|0.12|365063|0.45|
|ln|126304|0.13|365060|0.45|
|as|156256|0.16|265063|0.33|
|or|161472|0.16|265063|0.33|
|kn|165456|0.17|265063|0.33|
|ml|175040|0.18|265864|0.33|
|rn|192992|0.19|318189|0.39|
|nso|229712|0.23|915051|1.13|
|tn|235536|0.24|915054|1.13|
|lg|235936|0.24|915021|1.13|
|rw|249360|0.25|915043|1.13|
|ts|250256|0.25|915044|1.13|
|sn|252496|0.25|865056|1.07|
|xh|254672|0.26|915058|1.13|
|zu|263712|0.26|915061|1.13|
|ny|272128|0.27|915063|1.13|
|ig|325232|0.33|950097|1.17|
|yo|352784|0.35|918416|1.13|
|ne|393680|0.39|315754|0.39|
|pa|523248|0.52|339210|0.42|
|gu|560688|0.56|347499|0.43|
|sw|566656|0.57|1130481|1.4|
|mr|666240|0.67|417269|0.52|
|bn|832720|0.83|428843|0.53|
|ta|926912|0.93|415433|0.51|
|te|1343232|1.35|584590|0.72|
|ur|1918272|1.92|855756|1.06|
|vi|3102512|3.11|1672106|2.07|
|code|4330752|4.34|2707724|3.34|
|hi|4403568|4.41|1554667|1.92|
|zh|4599440|4.61|3589234|4.43|
|id|4612256|4.62|2643418|3.27|
|ar|4683456|4.69|2160181|2.67|
|fr|6591120|6.6|5316403|6.57|
|pt|6886800|6.9|3752156|4.63|
|es|8587920|8.6|5413205|6.69|
|en|39252528|39.33|32740750|40.44|
|total|99807184|100.0|80956089|100.0|
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- Word Sense disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
#### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for HumanEval)
- Natural Language Inference
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Program Synthesis
- [HumanEval](https://huggingface.co/datasets/openai_humaneval)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
#### Additional [xP3all](https://huggingface.co/datasets/bigscience/xP3all) datasets
- Coreference Resolution
- [WSC (Fixed)](https://huggingface.co/datasets/super_glue)
- Sentence Completion
- [HellaSwag](https://huggingface.co/datasets/hellaswag)
- Translation
- [MultiEurlex](https://huggingface.co/datasets/multi_eurlex)
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. |
lerobot/stanford_robocook | lerobot | "2025-02-20T18:31:51Z" | 20,961 | 0 | [
"task_categories:robotics",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2306.14447",
"region:us",
"LeRobot"
] | [
"robotics"
] | "2024-07-29T22:43:38Z" | ---
license: mit
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** https://hshi74.github.io/robocook/
- **Paper:** https://arxiv.org/abs/2306.14447
- **License:** mit
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "unknown",
"total_episodes": 2460,
"total_frames": 112980,
"total_tasks": 9,
"total_videos": 9840,
"total_chunks": 3,
"chunks_size": 1000,
"fps": 5,
"splits": {
"train": "0:2460"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.images.image_1": {
"dtype": "video",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channel"
],
"video_info": {
"video.fps": 5.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.image_2": {
"dtype": "video",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channel"
],
"video_info": {
"video.fps": 5.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.image_4": {
"dtype": "video",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channel"
],
"video_info": {
"video.fps": 5.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.image_3": {
"dtype": "video",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channel"
],
"video_info": {
"video.fps": 5.0,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"language_instruction": {
"dtype": "string",
"shape": [
1
],
"names": null
},
"observation.state": {
"dtype": "float32",
"shape": [
8
],
"names": {
"motors": [
"motor_0",
"motor_1",
"motor_2",
"motor_3",
"motor_4",
"motor_5",
"motor_6",
"motor_7"
]
}
},
"action": {
"dtype": "float32",
"shape": [
7
],
"names": {
"motors": [
"motor_0",
"motor_1",
"motor_2",
"motor_3",
"motor_4",
"motor_5",
"motor_6"
]
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"next.reward": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"next.done": {
"dtype": "bool",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
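Since the default config on this card points at the per-chunk parquet files, the tabular features can be read with the `datasets` library alone — a minimal sketch; video streams are stored separately and are not decoded here, and exactly which of the listed features land in the parquet shards is an assumption:
```python
from datasets import load_dataset

# Loads the tabular columns from data/*/*.parquet as declared in the card's default config.
ds = load_dataset("lerobot/stanford_robocook", split="train")

sample = ds[0]
print(sample["observation.state"])                 # 8 motor values (float32)
print(sample["action"])                            # 7 motor values (float32)
print(sample["next.reward"], sample["next.done"])  # per-frame reward and done flag
print(sample["episode_index"], sample["frame_index"])
```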
## Citation
**BibTeX:**
```bibtex
@article{shi2023robocook,
title={RoboCook: Long-Horizon Elasto-Plastic Object Manipulation with Diverse Tools},
author={Shi, Haochen and Xu, Huazhe and Clarke, Samuel and Li, Yunzhu and Wu, Jiajun},
journal={arXiv preprint arXiv:2306.14447},
year={2023}
}
``` |
qmeeus/vp-er-10l | qmeeus | "2024-03-28T14:43:22Z" | 20,778 | 0 | [
"language:cs",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:hu",
"language:it",
"language:nl",
"language:pl",
"language:ro",
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"speech-to-text",
"speech-translation",
"automatic-speech-recognition",
"language-detection"
] | null | "2024-02-19T20:45:14Z" | ---
dataset_info:
- config_name: cs
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: language
dtype: string
- name: transcription
dtype: string
- name: translation
dtype: string
splits:
- name: train
num_bytes: 3968868756
num_examples: 12000
download_size: 3963196917
dataset_size: 3968868756
- config_name: de
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: language
dtype: string
- name: transcription
dtype: string
- name: translation
dtype: string
- name: wer
dtype: float32
splits:
- name: train
num_bytes: 3498200501
num_examples: 12000
download_size: 3487997831
dataset_size: 3498200501
- config_name: en
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: language
dtype: string
- name: transcription
dtype: string
- name: translation
dtype: string
- name: wer
dtype: float32
splits:
- name: train
num_bytes: 4000276474
num_examples: 12000
download_size: 3984332876
dataset_size: 4000276474
- config_name: es
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: language
dtype: string
- name: transcription
dtype: string
- name: translation
dtype: string
- name: wer
dtype: float32
splits:
- name: train
num_bytes: 4138004589
num_examples: 12000
download_size: 4128702065
dataset_size: 4138004589
- config_name: fr
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: language
dtype: string
- name: transcription
dtype: string
- name: translation
dtype: string
- name: wer
dtype: float32
splits:
- name: train
num_bytes: 3915210199
num_examples: 12000
download_size: 3906302179
dataset_size: 3915210199
- config_name: hu
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: language
dtype: string
- name: transcription
dtype: string
- name: translation
dtype: string
- name: wer
dtype: float32
splits:
- name: train
num_bytes: 4174219387
num_examples: 12000
download_size: 4167484051
dataset_size: 4174219387
- config_name: it
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: language
dtype: string
- name: transcription
dtype: string
- name: translation
dtype: string
- name: wer
dtype: float32
splits:
- name: train
num_bytes: 4732854879
num_examples: 12000
download_size: 4722455587
dataset_size: 4732854879
- config_name: nl
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: language
dtype: string
- name: transcription
dtype: string
- name: translation
dtype: string
- name: wer
dtype: float32
splits:
- name: train
num_bytes: 3162694343
num_examples: 12000
download_size: 3154090731
dataset_size: 3162694343
- config_name: pl
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: language
dtype: string
- name: transcription
dtype: string
- name: translation
dtype: string
- name: wer
dtype: float32
splits:
- name: train
num_bytes: 4041042730
num_examples: 12000
download_size: 4033450852
dataset_size: 4041042730
- config_name: ro
features:
- name: audio_id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: language
dtype: string
- name: transcription
dtype: string
- name: translation
dtype: string
- name: wer
dtype: float32
splits:
- name: train
num_bytes: 4341972777
num_examples: 12000
download_size: 4334737748
dataset_size: 4341972777
configs:
- config_name: cs
data_files:
- split: train
path: cs/train-*
- config_name: de
data_files:
- split: train
path: de/train-*
- config_name: en
data_files:
- split: train
path: en/train-*
- config_name: es
data_files:
- split: train
path: es/train-*
- config_name: fr
data_files:
- split: train
path: fr/train-*
- config_name: hu
data_files:
- split: train
path: hu/train-*
- config_name: it
data_files:
- split: train
path: it/train-*
- config_name: nl
data_files:
- split: train
path: nl/train-*
- config_name: pl
data_files:
- split: train
path: pl/train-*
- config_name: ro
data_files:
- split: train
path: ro/train-*
language:
- cs
- de
- en
- es
- fr
- hu
- it
- nl
- pl
- ro
tags:
- speech-to-text
- speech-translation
- automatic-speech-recognition
- language-detection
---
# Dataset Card for "vp-er-10l"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
asahi417/seamless-align-enA-jaA.speaker-embedding.hubert-xl | asahi417 | "2024-06-14T01:51:16Z" | 20,672 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-06-12T06:48:50Z" | ---
dataset_info:
- config_name: subset_1
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10876520133
num_examples: 2073
download_size: 10908762452
dataset_size: 10876520133
- config_name: subset_10
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9674569297
num_examples: 1961
download_size: 9700306271
dataset_size: 9674569297
- config_name: subset_100
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9422313471
num_examples: 1757
download_size: 9447085440
dataset_size: 9422313471
- config_name: subset_101
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9998168326
num_examples: 1873
download_size: 10027347383
dataset_size: 9998168326
- config_name: subset_102
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10287499716
num_examples: 1868
download_size: 10317718412
dataset_size: 10287499716
- config_name: subset_103
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10324121806
num_examples: 1879
download_size: 10354352259
dataset_size: 10324121806
- config_name: subset_104
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10263173609
num_examples: 1901
download_size: 10293587612
dataset_size: 10263173609
- config_name: subset_105
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10125643360
num_examples: 1875
download_size: 10152113436
dataset_size: 10125643360
- config_name: subset_106
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10184641498
num_examples: 1880
download_size: 10213159494
dataset_size: 10184641498
- config_name: subset_107
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9945312725
num_examples: 1854
download_size: 9974410300
dataset_size: 9945312725
- config_name: subset_108
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10122729548
num_examples: 1834
download_size: 10152878773
dataset_size: 10122729548
- config_name: subset_109
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9646581786
num_examples: 1770
download_size: 9675397019
dataset_size: 9646581786
- config_name: subset_11
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8736765067
num_examples: 1779
download_size: 8761578004
dataset_size: 8736765067
- config_name: subset_110
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10410535331
num_examples: 1908
download_size: 10439335513
dataset_size: 10410535331
- config_name: subset_111
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10188356145
num_examples: 1877
download_size: 10218696271
dataset_size: 10188356145
- config_name: subset_112
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10485541758
num_examples: 1924
download_size: 10513113708
dataset_size: 10485541758
- config_name: subset_113
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10386567011
num_examples: 1930
download_size: 10417054414
dataset_size: 10386567011
- config_name: subset_114
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10591092324
num_examples: 1940
download_size: 10619534397
dataset_size: 10591092324
- config_name: subset_115
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10341488955
num_examples: 1902
download_size: 10371862024
dataset_size: 10341488955
- config_name: subset_116
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10529719750
num_examples: 1910
download_size: 10558882034
dataset_size: 10529719750
- config_name: subset_117
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10297417332
num_examples: 1901
download_size: 10327810400
dataset_size: 10297417332
- config_name: subset_118
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10533890733
num_examples: 1911
download_size: 10565451687
dataset_size: 10533890733
- config_name: subset_119
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10245323889
num_examples: 1867
download_size: 10275576648
dataset_size: 10245323889
- config_name: subset_12
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9731662335
num_examples: 1916
download_size: 9759429233
dataset_size: 9731662335
- config_name: subset_120
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9712138541
num_examples: 1774
download_size: 9737568085
dataset_size: 9712138541
- config_name: subset_121
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10474507472
num_examples: 1895
download_size: 10504742139
dataset_size: 10474507472
- config_name: subset_122
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10118201359
num_examples: 1851
download_size: 10145835390
dataset_size: 10118201359
- config_name: subset_123
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10668996219
num_examples: 1923
download_size: 10699951985
dataset_size: 10668996219
- config_name: subset_124
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10403769859
num_examples: 1886
download_size: 10429558449
dataset_size: 10403769859
- config_name: subset_125
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10522465607
num_examples: 1928
download_size: 10554133951
dataset_size: 10522465607
- config_name: subset_126
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10240079911
num_examples: 1903
download_size: 10269077911
dataset_size: 10240079911
- config_name: subset_127
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10500522515
num_examples: 1902
download_size: 10532042696
dataset_size: 10500522515
- config_name: subset_128
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10252894005
num_examples: 1890
download_size: 10281784120
dataset_size: 10252894005
- config_name: subset_129
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9663474139
num_examples: 1752
download_size: 9690866335
dataset_size: 9663474139
- config_name: subset_13
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8794624469
num_examples: 1769
download_size: 8820465273
dataset_size: 8794624469
- config_name: subset_130
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10080425471
num_examples: 1830
download_size: 10110566138
dataset_size: 10080425471
- config_name: subset_131
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10401094794
num_examples: 1882
download_size: 10429416473
dataset_size: 10401094794
- config_name: subset_132
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10457446364
num_examples: 1918
download_size: 10485865817
dataset_size: 10457446364
- config_name: subset_133
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10171821729
num_examples: 1886
download_size: 10202198422
dataset_size: 10171821729
- config_name: subset_134
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10462308565
num_examples: 1912
download_size: 10492670895
dataset_size: 10462308565
- config_name: subset_135
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10390714049
num_examples: 1888
download_size: 10420979533
dataset_size: 10390714049
- config_name: subset_136
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10277513585
num_examples: 1875
download_size: 10307787783
dataset_size: 10277513585
- config_name: subset_137
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10243012984
num_examples: 1866
download_size: 10273259941
dataset_size: 10243012984
- config_name: subset_138
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10196538053
num_examples: 1863
download_size: 10226765925
dataset_size: 10196538053
- config_name: subset_139
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10173792264
num_examples: 1859
download_size: 10204033126
dataset_size: 10173792264
- config_name: subset_14
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8664933141
num_examples: 1734
download_size: 8691667368
dataset_size: 8664933141
- config_name: subset_140
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9810713416
num_examples: 1766
download_size: 9838770886
dataset_size: 9810713416
- config_name: subset_141
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10401318825
num_examples: 1865
download_size: 10431447945
dataset_size: 10401318825
- config_name: subset_142
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10331604042
num_examples: 1893
download_size: 10361931781
dataset_size: 10331604042
- config_name: subset_143
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10355333367
num_examples: 1894
download_size: 10385663684
dataset_size: 10355333367
- config_name: subset_144
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7742492099
num_examples: 1381
download_size: 7765334663
dataset_size: 7742492099
- config_name: subset_15
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9579730430
num_examples: 1914
download_size: 9608255090
dataset_size: 9579730430
- config_name: subset_16
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9104014026
num_examples: 1862
download_size: 9132206904
dataset_size: 9104014026
- config_name: subset_17
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9482806827
num_examples: 1875
download_size: 9511062893
dataset_size: 9482806827
- config_name: subset_18
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9841358654
num_examples: 1937
download_size: 9870990138
dataset_size: 9841358654
- config_name: subset_19
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9511847926
num_examples: 1917
download_size: 9541482048
dataset_size: 9511847926
- config_name: subset_2
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9905653849
num_examples: 1929
download_size: 9935188764
dataset_size: 9905653849
- config_name: subset_20
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9407924858
num_examples: 1877
download_size: 9436227201
dataset_size: 9407924858
- config_name: subset_21
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9098451395
num_examples: 1761
download_size: 9126314203
dataset_size: 9098451395
- config_name: subset_22
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9250656777
num_examples: 1850
download_size: 9277818541
dataset_size: 9250656777
- config_name: subset_23
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9057374661
num_examples: 1790
download_size: 9085397757
dataset_size: 9057374661
- config_name: subset_24
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8914837743
num_examples: 1758
download_size: 8941549659
dataset_size: 8914837743
- config_name: subset_25
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9914198268
num_examples: 1898
download_size: 9941362425
dataset_size: 9914198268
- config_name: subset_26
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9711988405
num_examples: 1943
download_size: 9741716068
dataset_size: 9711988405
- config_name: subset_27
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9604785951
num_examples: 1903
download_size: 9634373630
dataset_size: 9604785951
- config_name: subset_28
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9737938983
num_examples: 1912
download_size: 9767484883
dataset_size: 9737938983
- config_name: subset_29
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9784631511
num_examples: 1945
download_size: 9811517276
dataset_size: 9784631511
- config_name: subset_3
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9540786756
num_examples: 1899
download_size: 9570365681
dataset_size: 9540786756
- config_name: subset_30
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9809407640
num_examples: 1902
download_size: 9838834622
dataset_size: 9809407640
- config_name: subset_31
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9352779642
num_examples: 1805
download_size: 9380734578
dataset_size: 9352779642
- config_name: subset_32
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9299494462
num_examples: 1797
download_size: 9326535503
dataset_size: 9299494462
- config_name: subset_33
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9079133566
num_examples: 1757
download_size: 9106984613
dataset_size: 9079133566
- config_name: subset_34
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9712189291
num_examples: 1893
download_size: 9739807425
dataset_size: 9712189291
- config_name: subset_35
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9839600386
num_examples: 1928
download_size: 9869138790
dataset_size: 9839600386
- config_name: subset_36
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9606360855
num_examples: 1863
download_size: 9635729919
dataset_size: 9606360855
- config_name: subset_37
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9642330669
num_examples: 1855
download_size: 9670222801
dataset_size: 9642330669
- config_name: subset_38
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9834140258
num_examples: 1890
download_size: 9863506751
dataset_size: 9834140258
- config_name: subset_39
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9725112494
num_examples: 1899
download_size: 9754592391
dataset_size: 9725112494
- config_name: subset_4
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8976885342
num_examples: 1835
download_size: 9002538999
dataset_size: 8976885342
- config_name: subset_40
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10000473788
num_examples: 1931
download_size: 10029772079
dataset_size: 10000473788
- config_name: subset_41
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9334915645
num_examples: 1784
download_size: 9362744529
dataset_size: 9334915645
- config_name: subset_42
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9333912380
num_examples: 1797
download_size: 9361822687
dataset_size: 9333912380
- config_name: subset_43
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9166016124
num_examples: 1757
download_size: 9189912083
dataset_size: 9166016124
- config_name: subset_44
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9649379352
num_examples: 1831
download_size: 9678549281
dataset_size: 9649379352
- config_name: subset_45
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9696258598
num_examples: 1891
download_size: 9725722759
dataset_size: 9696258598
- config_name: subset_46
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9959981112
num_examples: 1897
download_size: 9989307992
dataset_size: 9959981112
- config_name: subset_47
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10007747026
num_examples: 1897
download_size: 10038312177
dataset_size: 10007747026
- config_name: subset_48
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10007788444
num_examples: 1902
download_size: 10038354205
dataset_size: 10007788444
- config_name: subset_49
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9917511575
num_examples: 1875
download_size: 9941157796
dataset_size: 9917511575
- config_name: subset_5
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9820733471
num_examples: 1987
download_size: 9850269724
dataset_size: 9820733471
- config_name: subset_50
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10134544844
num_examples: 1951
download_size: 10165322891
dataset_size: 10134544844
- config_name: subset_51
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9386655137
num_examples: 1752
download_size: 9414301949
dataset_size: 9386655137
- config_name: subset_52
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9446951589
num_examples: 1780
download_size: 9474700273
dataset_size: 9446951589
- config_name: subset_53
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9777957117
num_examples: 1846
download_size: 9807128019
dataset_size: 9777957117
- config_name: subset_54
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9102655130
num_examples: 1723
download_size: 9129263935
dataset_size: 9102655130
- config_name: subset_55
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9780858901
num_examples: 1866
download_size: 9810124485
dataset_size: 9780858901
- config_name: subset_56
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10086702415
num_examples: 1893
download_size: 10117190006
dataset_size: 10086702415
- config_name: subset_57
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10109788587
num_examples: 1924
download_size: 10137705027
dataset_size: 10109788587
- config_name: subset_58
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9853462061
num_examples: 1881
download_size: 9882384601
dataset_size: 9853462061
- config_name: subset_59
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9883115206
num_examples: 1887
download_size: 9912433454
dataset_size: 9883115206
- config_name: subset_6
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9008892954
num_examples: 1810
download_size: 9037072334
dataset_size: 9008892954
- config_name: subset_60
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10053484869
num_examples: 1909
download_size: 10084064374
dataset_size: 10053484869
- config_name: subset_61
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9219503176
num_examples: 1728
download_size: 9246364553
dataset_size: 9219503176
- config_name: subset_62
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9375723434
num_examples: 1787
download_size: 9401019242
dataset_size: 9375723434
- config_name: subset_63
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9567750688
num_examples: 1790
download_size: 9596745445
dataset_size: 9567750688
- config_name: subset_64
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9639191254
num_examples: 1812
download_size: 9668262467
dataset_size: 9639191254
- config_name: subset_65
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10010824960
num_examples: 1877
download_size: 10041256612
dataset_size: 10010824960
- config_name: subset_66
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10044736643
num_examples: 1890
download_size: 10075237919
dataset_size: 10044736643
- config_name: subset_67
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9996933459
num_examples: 1873
download_size: 10026116707
dataset_size: 9996933459
- config_name: subset_68
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10095822332
num_examples: 1883
download_size: 10126245840
dataset_size: 10095822332
- config_name: subset_69
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10268303934
num_examples: 1916
download_size: 10298810059
dataset_size: 10268303934
- config_name: subset_7
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9097360602
num_examples: 1832
download_size: 9122322351
dataset_size: 9097360602
- config_name: subset_70
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10120152697
num_examples: 1903
download_size: 10150083596
dataset_size: 10120152697
- config_name: subset_71
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9284035527
num_examples: 1736
download_size: 9311653969
dataset_size: 9284035527
- config_name: subset_72
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10250178033
num_examples: 1887
download_size: 10280517538
dataset_size: 10250178033
- config_name: subset_73
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9222731957
num_examples: 1736
download_size: 9249882756
dataset_size: 9222731957
- config_name: subset_74
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9818296986
num_examples: 1829
download_size: 9847340045
dataset_size: 9818296986
- config_name: subset_75
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10009491493
num_examples: 1862
download_size: 10039851706
dataset_size: 10009491493
- config_name: subset_76
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10168022034
num_examples: 1914
download_size: 10196509154
dataset_size: 10168022034
- config_name: subset_77
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10076591228
num_examples: 1874
download_size: 10106985644
dataset_size: 10076591228
- config_name: subset_78
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10027538901
num_examples: 1871
download_size: 10057947606
dataset_size: 10027538901
- config_name: subset_79
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10094618912
num_examples: 1891
download_size: 10125094585
dataset_size: 10094618912
- config_name: subset_8
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9933535024
num_examples: 2009
download_size: 9963487174
dataset_size: 9933535024
- config_name: subset_80
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9926663871
num_examples: 1885
download_size: 9955941714
dataset_size: 9926663871
- config_name: subset_81
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10328392928
num_examples: 1913
download_size: 10358834523
dataset_size: 10328392928
- config_name: subset_82
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10325523231
num_examples: 1910
download_size: 10355953098
dataset_size: 10325523231
- config_name: subset_83
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10139356186
num_examples: 1887
download_size: 10169781380
dataset_size: 10139356186
- config_name: subset_84
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10033769422
num_examples: 1867
download_size: 10064131883
dataset_size: 10033769422
- config_name: subset_85
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10014868133
num_examples: 1881
download_size: 10045337903
dataset_size: 10014868133
- config_name: subset_86
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10002691939
num_examples: 1862
download_size: 10029974377
dataset_size: 10002691939
- config_name: subset_87
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10137271516
num_examples: 1897
download_size: 10166450449
dataset_size: 10137271516
- config_name: subset_88
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10210468115
num_examples: 1900
download_size: 10240900582
dataset_size: 10210468115
- config_name: subset_89
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10209211677
num_examples: 1886
download_size: 10239579862
dataset_size: 10209211677
- config_name: subset_9
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9715767539
num_examples: 1977
download_size: 9745666870
dataset_size: 9715767539
- config_name: subset_90
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10196598544
num_examples: 1913
download_size: 10227130013
dataset_size: 10196598544
- config_name: subset_91
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10267980939
num_examples: 1913
download_size: 10298448201
dataset_size: 10267980939
- config_name: subset_92
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10023876490
num_examples: 1886
download_size: 10054355442
dataset_size: 10023876490
- config_name: subset_93
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10193300693
num_examples: 1875
download_size: 10223629532
dataset_size: 10193300693
- config_name: subset_94
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10231732730
num_examples: 1900
download_size: 10262173245
dataset_size: 10231732730
- config_name: subset_95
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10013150375
num_examples: 1867
download_size: 10043533360
dataset_size: 10013150375
- config_name: subset_96
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10309586781
num_examples: 1900
download_size: 10339981006
dataset_size: 10309586781
- config_name: subset_97
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10319142937
num_examples: 1899
download_size: 10349514733
dataset_size: 10319142937
- config_name: subset_98
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10316858013
num_examples: 1904
download_size: 10347258013
dataset_size: 10316858013
- config_name: subset_99
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10377469541
num_examples: 1901
download_size: 10407826150
dataset_size: 10377469541
configs:
- config_name: subset_1
data_files:
- split: train
path: subset_1/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_10/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_100/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_101/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_102/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_103/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_104/train-*
- config_name: subset_105
data_files:
- split: train
path: subset_105/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_106/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_107/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_108/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_109/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_11/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_110/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_111/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_112/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_113/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_114/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_115/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_116/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_117/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_118/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_119/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_12/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_120/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_121/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_122/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_123/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_124/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_125/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_126/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_127/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_128/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_129/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_13/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_130/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_131/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_132/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_133/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_134/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_135/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_136/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_137/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_138/train-*
- config_name: subset_139
data_files:
- split: train
path: subset_139/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_14/train-*
- config_name: subset_140
data_files:
- split: train
path: subset_140/train-*
- config_name: subset_141
data_files:
- split: train
path: subset_141/train-*
- config_name: subset_142
data_files:
- split: train
path: subset_142/train-*
- config_name: subset_143
data_files:
- split: train
path: subset_143/train-*
- config_name: subset_144
data_files:
- split: train
path: subset_144/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_15/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_16/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_17/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_18/train-*
- config_name: subset_19
data_files:
- split: train
path: subset_19/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_2/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_20/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_21/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_22/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_23/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_24/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_25/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_26/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_27/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_28/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_29/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_3/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_30/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_31/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_32/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_33/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_34/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_35/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_36/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_37/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_38/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_39/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_4/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_40/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_41/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_42/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_43/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_44/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_45/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_46/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_47/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_48/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_49/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_5/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_50/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_51/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_52/train-*
- config_name: subset_53
data_files:
- split: train
path: subset_53/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_54/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_55/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_56/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_57/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_58/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_59/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_6/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_60/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_61/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_62/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_63/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_64/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_65/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_66/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_67/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_68/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_69/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_7/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_70/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_71/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_72/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_73/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_74/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_75/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_76/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_77/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_78/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_79/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_8/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_80/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_81/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_82/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_83/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_84/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_85/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_86/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_87/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_88/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_89/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_9/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_90/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_91/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_92/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_93/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_94/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_95/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_96/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_97/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_98/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_99/train-*
---
|
xiegeo/uspto-mol | xiegeo | "2024-12-20T19:49:26Z" | 20,664 | 1 | [
"region:us"
] | null | "2024-12-04T05:34:34Z" | ---
configs:
- config_name: default
data_files:
- split: all
path: "data/**/*"
---
# An intermediate dataset for US molecular patent grants
Retrieves patent grant data from USPTO weekly releases `bulkdata.uspto.gov/data/patent/grant/redbook/{year}` and keeps only patents with `.mol` files for downstream data mining use cases.
Compared to existing work such as [https://github.com/learningmatter-mit/PatentChem/tree/main], we keep all files of a patent grant, notably the `.tif` image files. We also share the data as a Git LFS repository instead of a single `.zip` file, which allows easier access.
## The data directory
``` bash
data/{year}/{weekly}/{sub_dir}/{filename}.ZIP
```
Each file is a full patent grant with the original redbook directory structure preserved. `weekly` is the weekly release date. Expect inconsistencies in the format of `weekly` and the depth of `sub_dir` over time.
See [https://www.uspto.gov/learning-and-resources/xml-resources] for DTD and Red Book Viewer.
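For downstream processing, individual grant ZIPs can be listed and fetched directly from the Hub. The snippet below is a minimal sketch, not part of this repository's tooling; it assumes the `data/` layout described above and uses the `huggingface_hub` client.
``` python
# Minimal sketch: list grant ZIPs in this dataset repo, download one, and
# enumerate the .mol files it contains. Assumes the data/ layout described above.
import zipfile

from huggingface_hub import hf_hub_download, list_repo_files

REPO_ID = "xiegeo/uspto-mol"

# Keep only the grant ZIPs under data/.
zip_paths = [
    p for p in list_repo_files(REPO_ID, repo_type="dataset")
    if p.startswith("data/") and p.upper().endswith(".ZIP")
]

# Download the first grant and count its .mol files.
local_zip = hf_hub_download(REPO_ID, zip_paths[0], repo_type="dataset")
with zipfile.ZipFile(local_zip) as zf:
    mol_members = [n for n in zf.namelist() if n.lower().endswith(".mol")]
    print(zip_paths[0], "->", len(mol_members), ".mol files")
```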
## Data quality
Ideally, the USPTO data would provide ground-truth chemical structure annotations for the molecular drawings, but the raw data quality appears poor. Outlandish annotations, such as chemical structures attached to non-chemistry-related images, suggest the drawings were annotated by an automated process with little human QA.
A report on improving data quality is left as future work (collaborators welcome).
## Related resources
- (2023) Automated patent extraction powers generative modeling in focused chemical spaces [https://github.com/learningmatter-mit/PatentChem]
- (2014) Processing USPTO Patent Data [https://github.com/funginstitute/patentprocessor/] [https://funginstitute.berkeley.edu/wp-content/uploads/2014/06/patentprocessor.pdf]: How to extract information from the `.xml` file
- (2011) SCRIPDB: a portal for easy access to syntheses, chemicals and reactions in patents [https://pmc.ncbi.nlm.nih.gov/articles/PMC3245107/]: extensive data analytics, but database and website no longer exist.
## The DATA-CHANGELOG.md file
This file is written by `update_data.py` to log statistics from each processed weekly.
## How to update the dataset
`python update_data.py --from_year 2024` will search for new weeklies from 2024 to the current year, add them to the data directory, and push them upstream.
Please only use this project's root directory as the working directory. You can also copy pre-downloaded weeklies here to avoid re-downloading them.
``` bash
$ python update_data.py -h
usage: update_data.py [-h] [--quick] [--keep] [--stream] [--verbose] [--from_year FROM_YEAR] [--to_year TO_YEAR]
Downloads and filters patent files for mol files from USPTO website, then adds them to a dataset
options:
-h, --help show this help message and exit
--quick only processed at most one new weekly file, you can also create a file named 'quick' (touch quick) to enable this flag during runtime to exit after finishing
processing the current weekly file
--keep keep the original downloaded files
--stream remove all pushed lfs files and objects locally. DO NOT USE WHEN RUNNING PARALLEL SCRIPTS
--verbose print verbose logs
--from_year FROM_YEAR
start from this year (default 2001 is the earliest available)
--to_year TO_YEAR end at this year (default current year)
```
|
andstor/the_pile_github | andstor | "2023-03-20T23:39:53Z" | 20,624 | 8 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:10M<n<100M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2101.00027",
"arxiv:2201.07311",
"region:us"
] | [
"text-generation",
"fill-mask",
"text-classification"
] | "2023-03-07T15:53:05Z" | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: The Pile GitHub
size_categories: []
source_datasets:
- original
tags: []
task_categories:
- text-generation
- fill-mask
- text-classification
task_ids: []
---
# Dataset Card for The Pile GitHub
## Table of Contents
- [Dataset Card for Smart Contracts](#dataset-card-for-the-pile-github)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ElutherAI](https://pile.eleuther.ai)
- **Repository:** [GitHub](https://github.com/andstor/the-pile-github)
- **Paper:** [arXiv](https://arxiv.org/abs/2101.00027)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is the GitHub subset of the EleutherAI/The Pile dataset and contains source files from GitHub repositories. The programming languages are identified using the [guesslang library](https://github.com/yoeo/guesslang). A total of 54 programming languages are included in the dataset.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The following languages are covered by the dataset:
```
'Assembly', 'Batchfile', 'C', 'C#', 'C++', 'CMake', 'COBOL', 'CSS', 'CSV', 'Clojure', 'CoffeeScript', 'DM', 'Dart', 'Dockerfile', 'Elixir', 'Erlang', 'Fortran', 'Go', 'Groovy', 'HTML', 'Haskell', 'INI', 'JSON', 'Java', 'JavaScript', 'Julia', 'Kotlin', 'Lisp', 'Lua', 'Makefile', 'Markdown', 'Matlab', 'None', 'OCaml', 'Objective-C', 'PHP', 'Pascal', 'Perl', 'PowerShell', 'Prolog', 'Python', 'R', 'Ruby', 'Rust', 'SQL', 'Scala', 'Shell', 'Swift', 'TOML', 'TeX', 'TypeScript', 'Verilog', 'Visual Basic', 'XML', 'YAML'
```
The [guesslang library](https://github.com/yoeo/guesslang) is used to identify the programming languages. It has a guessing accuracy of above 90%. Hence, there will be some misclassifications in the language identification.
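For reference, the snippet below is a minimal sketch of guesslang's basic usage; the exact pipeline used to label this dataset is not documented here, so this only illustrates the library.
```python
# Minimal sketch of language identification with guesslang (pip install guesslang).
# This illustrates the library only; it is not the pipeline used to build the dataset.
from guesslang import Guess

guess = Guess()
source = '''
def greet(name):
    return f"Hello, {name}!"
'''
print(guess.language_name(source))  # expected "Python"; guesses can be wrong
```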
## Dataset Structure
### Data Instances
[More Information Needed]
```
{
'text': ...,
'meta': {'language': ...}
}
```
### Data Fields
- `text` (`string`): the source code.
- `meta` (`dict`): the metadata of the source code.
- `language` (`string`): the programming language of the source code.
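
For illustration, the snippet below is a minimal usage sketch with the `datasets` library, assuming the default configuration exposes a `train` split with the fields described above; streaming avoids downloading the full subset.
```python
# Minimal sketch: stream the dataset and keep only files labelled as Python.
from itertools import islice

from datasets import load_dataset

ds = load_dataset("andstor/the_pile_github", split="train", streaming=True)
python_files = ds.filter(lambda ex: ex["meta"]["language"] == "Python")

for example in islice(python_files, 3):
    print(example["meta"]["language"], len(example["text"]), "characters")
```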
### Data Splits
[More Information Needed]
| | train | validation | test |
|-------------------------|------:|-----------:|-----:|
| Input Sentences | | | |
| Average Sentence Length | | | |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
The data is purely a subset of the [EleutherAI/The Pile dataset](https://huggingface.co/datasets/the_pile). See the original [dataset](https://arxiv.org/abs/2201.07311) for more details.
## Additional Information
### Licensing Information
The Pile dataset was released on January 1st, 2021. It is licensed under the MIT License. See the [dataset](https://arxiv.org/abs/2201.07311) for more details.
### Citation Information
If you use this dataset, please cite The Pile:
```
@article{pile,
title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
### Contributions
Thanks to [@andstor](https://github.com/andstor) for adding this dataset. |
yahma/alpaca-cleaned | yahma | "2023-04-10T20:29:06Z" | 20,563 | 639 | [
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"instruction-finetuning"
] | [
"text-generation"
] | "2023-03-24T18:27:58Z" | ---
license: cc-by-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: Alpaca-Cleaned
task_categories:
- text-generation
---
# Dataset Card for Alpaca-Cleaned
- **Repository:** https://github.com/gururise/AlpacaDataCleaned
## Dataset Description
This is a cleaned version of the original Alpaca Dataset released by Stanford. The following issues have been identified in the original release and fixed in this dataset:
1. **Hallucinations:** Many instructions in the original dataset referenced data on the internet (e.g., URLs), which simply caused GPT-3 to hallucinate an answer.
```
"instruction":"Summarize the given article in 200 Words.",
"input": "https://www.bbc.com/news/world-51461830",
"output": "The recent protests in Belarus are part of a growing wave of discontent against the rule of Alexander Lukashenko, who has led the country since 1994. Demonstrations against his government have been ongoing since August, and come in the wake of an election widely seen as rigged. As police and security forces have cracked down on protesters, the movement has only grown, with hundreds of thousands of people taking to the streets to call for new leadership. Human rights activists have accused the government of using extreme violence in order prosecute protesters, leading to an international outcry and sanctions from the European Union. As the situation in Belarus continues to unfold, the international community is monitoring the situation closely - hoping for a peaceful resolution, while also standing firm against the oppressive tactics employed by the Belarusian state."
"instruction": "Classify the following shape according to simple geometry.",
"input": "<Picture Attached>",
"output": "Triangle"
"instruction": "Given a recent news article, write a headline for it.",
"input": "https://www.news.example.com/article-1",
"output": "\"NASA Launches a Mars 2020 Mission to Search for Signs of Ancient Life\""
```
2. **Merged Instructions:** Many instructions in the original dataset were erroneously merged together into a single entry.
```
"instruction": "Should I invest in stocks?",
"input": "",
"output": "It depends on your individual situation. Investing in stocks carries a certain amount of risk but can also offer potential rewards. Make sure to research the different types of stocks, understand the risks and rewards, and develop a plan that works for you.\n##\n18. Instruction: Identify the noun in the following sentence:\n18. Input:\nThe large apple was delicious.\n18. Output:\nApple"
```
3. **Empty outputs:** Some entries in the original dataset had empty outputs.
```
"instruction": "Calculate the area of the following shape in square centimeters.",
"input": "rectangle of size 4 cm x 5 cm",
"output": ""
```
4. **Empty code examples:** Some descriptions in the original dataset were missing code examples, making it difficult to understand the intended behavior of the code.
5. **Instructions to generate images:** Some descriptions in the original dataset included instructions to generate images, something a text-only model obviously cannot do.
```
"instruction": "Create a graphic or logo that visually represents the word \"courage\".",
"input": "",
"output": "<No Output>"
```
6. **N/A outputs:** Some code snippets in the original dataset had N/A outputs.
7. **Inconsistent input field:** The original dataset had inconsistent usage of the input field when it was supposed to be empty.
```
"input":"<no input>"
"input":"No input"
"input":"noinput"
"input":"<noinput>"
```
8. **Wrong answers:** Some instructions/questions in the original dataset had incorrect answers. About 80% of the math problems are estimated to have incorrect answers.
```
"instruction": "Calculate the median of the following data set.",
"input": "1, 2, 4, 5, 8, 9",
"output": "5"
"instruction": "Convert 25m to km.",
"input": "",
"output": "25km"
```
9. **Non-Sensical/Unclear instructions:** Many instructions are unclear; we clarify (or rewrite) those that are nonsensical. Instructions that are slightly unclear, but whose meaning can still be deduced, are not altered.
```
"instruction": "Freeze the following sample of yogurt for 10 minutes.",
"input": "Yogurt sample",
"output": "<noinput>"
"instruction": "Increase the font size to 12 points.",
"input": "",
"output": "The font size has been increased to 12 points."
```
10. **Extraneous escape and control characters:** The original dataset had several entries with extraneous escape and control characters.
### Original Alpaca Dataset Summary
Alpaca is a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. This instruction data can be used to conduct instruction-tuning for language models and make them follow instructions better.
The authors built on the data generation pipeline from [Self-Instruct framework](https://github.com/yizhongw/self-instruct) and made the following modifications:
- The `text-davinci-003` engine to generate the instruction data instead of `davinci`.
- A [new prompt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) was written that explicitly gave the requirement of instruction generation to `text-davinci-003`.
- Much more aggressive batch decoding was used, i.e., generating 20 instructions at once, which significantly reduced the cost of data generation.
- The data generation pipeline was simplified by discarding the difference between classification and non-classification instructions.
- Only a single instance was generated for each instruction, instead of 2 to 3 instances as in Self-Instruct.
This produced an instruction-following dataset with 52K examples obtained at a much lower cost (less than $500).
In a preliminary study, the authors also found that the 52K generated data to be much more diverse than the data released by [Self-Instruct](https://github.com/yizhongw/self-instruct/blob/main/data/seed_tasks.jsonl).
### Supported Tasks and Leaderboards
The Alpaca dataset is designed for instruction-tuning pretrained language models.
### Languages
The data in Alpaca are in English (BCP-47 en).
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"instruction": "Create a classification task by clustering the given list of items.",
"input": "Apples, oranges, bananas, strawberries, pineapples",
"output": "Class 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
"text": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nCreate a classification task by clustering the given list of items.\n\n### Input:\nApples, oranges, bananas, strawberries, pineapples\n\n### Response:\nClass 1: Apples, Oranges\nClass 2: Bananas, Strawberries\nClass 3: Pineapples",
}
```
### Data Fields
The data fields are as follows:
* `instruction`: describes the task the model should perform. Each of the 52K instructions is unique.
* `input`: optional context or input for the task. For example, when the instruction is "Summarize the following article", the input is the article. Around 40% of the examples have an input.
* `output`: the answer to the instruction as generated by `text-davinci-003`.
* `text`: the `instruction`, `input` and `output` formatted with the [prompt template](https://github.com/tatsu-lab/stanford_alpaca#data-release) used by the authors for fine-tuning their models.
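
For illustration, the sketch below shows how the `text` field is assembled from the other fields, following the prompt template linked above (the template file in the upstream repository is authoritative); the example record is hypothetical.
```python
# Minimal sketch of the Alpaca prompt template; the upstream template file is authoritative.
def format_example(example: dict) -> str:
    if example["input"]:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Response:\n{example['output']}"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}"
    )


# Hypothetical record with an empty input field.
example = {"instruction": "Convert 25m to km.", "input": "", "output": "25 m is 0.025 km."}
print(format_example(example))
```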
### Data Splits
| | train |
|---------------|------:|
| alpaca | 52002 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Excerpted from the [blog post](https://crfm.stanford.edu/2023/03/13/alpaca.html) accompanying the release of this dataset:
> We believe that releasing the above assets will enable the academic community to perform controlled scientific studies on instruction-following language models, resulting in better science and ultimately new techniques to address the existing deficiencies with these models. At the same time, any release carries some risk. First, we recognize that releasing our training recipe reveals the feasibility of certain capabilities. On one hand, this enables more people (including bad actors) to create models that could cause harm (either intentionally or not). On the other hand, this awareness might incentivize swift defensive action, especially from the academic community, now empowered by the means to perform deeper safety research on such models. Overall, we believe that the benefits for the research community outweigh the risks of this particular release. Given that we are releasing the training recipe, we believe that releasing the data, model weights, and training code incur minimal further risk, given the simplicity of the recipe. At the same time, releasing these assets has enormous benefits for reproducible science, so that the academic community can use standard datasets, models, and code to perform controlled comparisons and to explore extensions. Deploying an interactive demo for Alpaca also poses potential risks, such as more widely disseminating harmful content and lowering the barrier for spam, fraud, or disinformation. We have put into place two risk mitigation strategies. First, we have implemented a content filter using OpenAI’s content moderation API, which filters out harmful content as defined by OpenAI’s usage policies. Second, we watermark all the model outputs using the method described in Kirchenbauer et al. 2023, so that others can detect (with some probability) whether an output comes from Alpaca 7B. Finally, we have strict terms and conditions for using the demo; it is restricted to non-commercial uses and to uses that follow LLaMA’s license agreement. We understand that these mitigation measures can be circumvented once we release the model weights or if users train their own instruction-following models. However, by installing these mitigations, we hope to advance the best practices and ultimately develop community norms for the responsible deployment of foundation models.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The `alpaca` data is generated by a language model (`text-davinci-003`) and inevitably contains some errors or biases. We encourage users to use this data with caution and propose new methods to filter or improve the imperfections.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
### Citation Information
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
### Contributions
[More Information Needed] |
sentence-transformers/parallel-sentences-ccmatrix | sentence-transformers | "2024-06-18T19:49:55Z" | 20,509 | 5 | [
"task_categories:feature-extraction",
"task_categories:sentence-similarity",
"language:af",
"language:ar",
"language:ast",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:ca",
"language:ceb",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:ha",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:id",
"language:ig",
"language:ilo",
"language:is",
"language:it",
"language:ja",
"language:jv",
"language:ko",
"language:la",
"language:lb",
"language:lt",
"language:lv",
"language:mg",
"language:mk",
"language:ml",
"language:mr",
"language:ms",
"language:ne",
"language:nl",
"language:no",
"language:oc",
"language:or",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sd",
"language:si",
"language:sk",
"language:sl",
"language:so",
"language:sq",
"language:sr",
"language:su",
"language:sv",
"language:sw",
"language:ta",
"language:tl",
"language:tr",
"language:uk",
"language:ur",
"language:vi",
"language:xh",
"language:yi",
"language:zh",
"size_categories:1B<n<10B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"sentence-transformers"
] | [
"feature-extraction",
"sentence-similarity"
] | "2024-05-25T08:10:49Z" | ---
size_categories:
- 1B<n<10B
task_categories:
- feature-extraction
- sentence-similarity
pretty_name: CCMatrix
tags:
- sentence-transformers
dataset_info:
- config_name: en-af
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 1148266566
num_examples: 8694461
download_size: 832696872
dataset_size: 1148266566
- config_name: en-ar
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 14226823350
num_examples: 49697322
download_size: 8765479032
dataset_size: 14226823350
- config_name: en-ast
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 637434232
num_examples: 2956618
download_size: 453215367
dataset_size: 637434232
- config_name: en-az
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 177938263
num_examples: 1251254
download_size: 124360060
dataset_size: 177938263
- config_name: en-be
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 542841152
num_examples: 1885446
download_size: 142253337
dataset_size: 542841152
- config_name: en-bg
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 11961705756
num_examples: 44635282
download_size: 7160113672
dataset_size: 11961705756
- config_name: en-bn
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 2053120318
num_examples: 10074620
download_size: 1098221602
dataset_size: 2053120318
- config_name: en-br
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 86947450
num_examples: 454175
download_size: 34384705
dataset_size: 86947450
- config_name: en-ca
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 3837708529
num_examples: 21284430
download_size: 2751851357
dataset_size: 3837708529
- config_name: en-ceb
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 234354977
num_examples: 962549
download_size: 49981947
dataset_size: 234354977
- config_name: en-cs
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 10277621210
num_examples: 56307029
download_size: 7607844854
dataset_size: 10277621210
- config_name: en-da
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 9093768397
num_examples: 52273664
download_size: 6510153593
dataset_size: 9093768397
- config_name: en-de
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 51555584594
num_examples: 247470736
download_size: 36674461807
dataset_size: 51555584594
- config_name: en-el
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 14907671089
num_examples: 49262631
download_size: 8919934254
dataset_size: 14907671089
- config_name: en-eo
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 2967819260
num_examples: 15418393
download_size: 2168808728
dataset_size: 2967819260
- config_name: en-es
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 100839563812
num_examples: 409061333
download_size: 69967087456
dataset_size: 100839563812
- config_name: en-et
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 3615519238
num_examples: 22007049
download_size: 2628823101
dataset_size: 3615519238
- config_name: en-eu
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 821387539
num_examples: 7778871
download_size: 606760358
dataset_size: 821387539
- config_name: en-fa
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 5720870387
num_examples: 24597533
download_size: 3493927697
dataset_size: 5720870387
- config_name: en-fi
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 6003531112
num_examples: 35982562
download_size: 4339586568
dataset_size: 6003531112
- config_name: en-fr
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 76362272973
num_examples: 328595738
download_size: 52468991292
dataset_size: 76362272973
- config_name: en-fy
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 192234316
num_examples: 1372321
download_size: 93352000
dataset_size: 192234316
- config_name: en-ga
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 225464892
num_examples: 1076420
download_size: 45935095
dataset_size: 225464892
- config_name: en-gd
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 114460109
num_examples: 310351
download_size: 24846114
dataset_size: 114460109
- config_name: en-gl
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 1746914916
num_examples: 13178507
download_size: 1294690325
dataset_size: 1746914916
- config_name: en-ha
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 1381532117
num_examples: 5861080
download_size: 314288607
dataset_size: 1381532117
- config_name: en-he
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 4410235248
num_examples: 25228938
download_size: 2861838829
dataset_size: 4410235248
- config_name: en-hi
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 4083378822
num_examples: 15127900
download_size: 2132432520
dataset_size: 4083378822
- config_name: en-hr
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 3218511310
num_examples: 18797643
download_size: 2368836034
dataset_size: 3218511310
- config_name: en-hu
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 6820144152
num_examples: 36435409
download_size: 4937653466
dataset_size: 6820144152
- config_name: en-id
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 15124269935
num_examples: 70545705
download_size: 10343250018
dataset_size: 15124269935
- config_name: en-ig
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 9470472
num_examples: 80385
download_size: 4927497
dataset_size: 9470472
- config_name: en-ilo
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 97134614
num_examples: 335469
download_size: 25965608
dataset_size: 97134614
- config_name: en-is
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 1006043191
num_examples: 8723145
download_size: 726436377
dataset_size: 1006043191
- config_name: en-it
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 34781986566
num_examples: 146240552
download_size: 24249713724
dataset_size: 34781986566
- config_name: en-ja
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 6088805781
num_examples: 40883733
download_size: 4379531648
dataset_size: 6088805781
- config_name: en-jv
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 281788171
num_examples: 819280
download_size: 43299173
dataset_size: 281788171
- config_name: en-ko
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 2895213164
num_examples: 19358582
download_size: 2111224807
dataset_size: 2895213164
- config_name: en-la
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 93121864
num_examples: 1114190
download_size: 72143426
dataset_size: 93121864
- config_name: en-lb
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 4552854478
num_examples: 11978495
download_size: 762899421
dataset_size: 4552854478
- config_name: en-lt
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 4141127566
num_examples: 23298470
download_size: 3004038735
dataset_size: 4141127566
- config_name: en-lv
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 3297623891
num_examples: 16685969
download_size: 2397129326
dataset_size: 3297623891
- config_name: en-mg
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 272623294
num_examples: 1736359
download_size: 177355627
dataset_size: 272623294
- config_name: en-mk
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 2780199285
num_examples: 12040173
download_size: 1672182611
dataset_size: 2780199285
- config_name: en-ml
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 1372569050
num_examples: 6809956
download_size: 704237919
dataset_size: 1372569050
- config_name: en-mr
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 464346053
num_examples: 2874211
download_size: 248129592
dataset_size: 464346053
- config_name: en-ms
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 1599412862
num_examples: 10730648
download_size: 1124736238
dataset_size: 1599412862
- config_name: en-ne
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 124241733
num_examples: 708316
download_size: 65140507
dataset_size: 124241733
- config_name: en-nl
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 20167977866
num_examples: 106695917
download_size: 14404091096
dataset_size: 20167977866
- config_name: en-no
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 7658217005
num_examples: 47801406
download_size: 5558513486
dataset_size: 7658217005
- config_name: en-oc
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 142791359
num_examples: 1730828
download_size: 109912116
dataset_size: 142791359
- config_name: en-or
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 17904200
num_examples: 96595
download_size: 8406901
dataset_size: 17904200
- config_name: en-pl
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 14630774653
num_examples: 74070714
download_size: 10663782153
dataset_size: 14630774653
- config_name: en-pt
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 41115105368
num_examples: 173743166
download_size: 28862370774
dataset_size: 41115105368
- config_name: en-ro
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 12075310975
num_examples: 55607023
download_size: 8615952576
dataset_size: 12075310975
- config_name: en-ru
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 42356783826
num_examples: 139937785
download_size: 25636989987
dataset_size: 42356783826
- config_name: en-sd
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 335072671
num_examples: 1717573
download_size: 197842483
dataset_size: 335072671
- config_name: en-si
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 960598149
num_examples: 6270800
download_size: 548699294
dataset_size: 960598149
- config_name: en-sk
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 6584874005
num_examples: 38096241
download_size: 4879550406
dataset_size: 6584874005
- config_name: en-sl
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 4487888666
num_examples: 27406782
download_size: 3305678350
dataset_size: 4487888666
- config_name: en-so
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 49524014
num_examples: 222793
download_size: 30280185
dataset_size: 49524014
- config_name: en-sq
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 3723203721
num_examples: 22358158
download_size: 2627564869
dataset_size: 3723203721
- config_name: en-sr
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 4455973990
num_examples: 26510872
download_size: 3135382266
dataset_size: 4455973990
- config_name: en-su
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 31745713
num_examples: 271736
download_size: 18535781
dataset_size: 31745713
- config_name: en-sv
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 12787309195
num_examples: 77008059
download_size: 9187888944
dataset_size: 12787309195
- config_name: en-sw
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 874785438
num_examples: 5756664
download_size: 617058117
dataset_size: 874785438
- config_name: en-ta
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 1604830149
num_examples: 7291118
download_size: 790660374
dataset_size: 1604830149
- config_name: en-tl
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 385586424
num_examples: 3113828
download_size: 282939008
dataset_size: 385586424
- config_name: en-tr
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 7629424945
num_examples: 47045956
download_size: 5549531679
dataset_size: 7629424945
- config_name: en-uk
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 5045729411
num_examples: 20240171
download_size: 3145670206
dataset_size: 5045729411
- config_name: en-ur
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 1029899312
num_examples: 6094149
download_size: 643507216
dataset_size: 1029899312
- config_name: en-vi
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 11459576669
num_examples: 50092444
download_size: 7526497393
dataset_size: 11459576669
- config_name: en-xh
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 10632555903
num_examples: 18980689
download_size: 951644540
dataset_size: 10632555903
- config_name: en-yi
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 175142487
num_examples: 275076
download_size: 24791912
dataset_size: 175142487
- config_name: en-zh
features:
- name: english
dtype: string
- name: non_english
dtype: string
splits:
- name: train
num_bytes: 13169778580
num_examples: 71383325
download_size: 9897543738
dataset_size: 13169778580
configs:
- config_name: en-af
data_files:
- split: train
path: en-af/train-*
- config_name: en-ar
data_files:
- split: train
path: en-ar/train-*
- config_name: en-ast
data_files:
- split: train
path: en-ast/train-*
- config_name: en-az
data_files:
- split: train
path: en-az/train-*
- config_name: en-be
data_files:
- split: train
path: en-be/train-*
- config_name: en-bg
data_files:
- split: train
path: en-bg/train-*
- config_name: en-bn
data_files:
- split: train
path: en-bn/train-*
- config_name: en-br
data_files:
- split: train
path: en-br/train-*
- config_name: en-ca
data_files:
- split: train
path: en-ca/train-*
- config_name: en-ceb
data_files:
- split: train
path: en-ceb/train-*
- config_name: en-cs
data_files:
- split: train
path: en-cs/train-*
- config_name: en-da
data_files:
- split: train
path: en-da/train-*
- config_name: en-de
data_files:
- split: train
path: en-de/train-*
- config_name: en-el
data_files:
- split: train
path: en-el/train-*
- config_name: en-eo
data_files:
- split: train
path: en-eo/train-*
- config_name: en-es
data_files:
- split: train
path: en-es/train-*
- config_name: en-et
data_files:
- split: train
path: en-et/train-*
- config_name: en-eu
data_files:
- split: train
path: en-eu/train-*
- config_name: en-fa
data_files:
- split: train
path: en-fa/train-*
- config_name: en-fi
data_files:
- split: train
path: en-fi/train-*
- config_name: en-fr
data_files:
- split: train
path: en-fr/train-*
- config_name: en-fy
data_files:
- split: train
path: en-fy/train-*
- config_name: en-ga
data_files:
- split: train
path: en-ga/train-*
- config_name: en-gd
data_files:
- split: train
path: en-gd/train-*
- config_name: en-gl
data_files:
- split: train
path: en-gl/train-*
- config_name: en-ha
data_files:
- split: train
path: en-ha/train-*
- config_name: en-he
data_files:
- split: train
path: en-he/train-*
- config_name: en-hi
data_files:
- split: train
path: en-hi/train-*
- config_name: en-hr
data_files:
- split: train
path: en-hr/train-*
- config_name: en-hu
data_files:
- split: train
path: en-hu/train-*
- config_name: en-id
data_files:
- split: train
path: en-id/train-*
- config_name: en-ig
data_files:
- split: train
path: en-ig/train-*
- config_name: en-ilo
data_files:
- split: train
path: en-ilo/train-*
- config_name: en-is
data_files:
- split: train
path: en-is/train-*
- config_name: en-it
data_files:
- split: train
path: en-it/train-*
- config_name: en-ja
data_files:
- split: train
path: en-ja/train-*
- config_name: en-jv
data_files:
- split: train
path: en-jv/train-*
- config_name: en-ko
data_files:
- split: train
path: en-ko/train-*
- config_name: en-la
data_files:
- split: train
path: en-la/train-*
- config_name: en-lb
data_files:
- split: train
path: en-lb/train-*
- config_name: en-lt
data_files:
- split: train
path: en-lt/train-*
- config_name: en-lv
data_files:
- split: train
path: en-lv/train-*
- config_name: en-mg
data_files:
- split: train
path: en-mg/train-*
- config_name: en-mk
data_files:
- split: train
path: en-mk/train-*
- config_name: en-ml
data_files:
- split: train
path: en-ml/train-*
- config_name: en-mr
data_files:
- split: train
path: en-mr/train-*
- config_name: en-ms
data_files:
- split: train
path: en-ms/train-*
- config_name: en-ne
data_files:
- split: train
path: en-ne/train-*
- config_name: en-nl
data_files:
- split: train
path: en-nl/train-*
- config_name: en-no
data_files:
- split: train
path: en-no/train-*
- config_name: en-oc
data_files:
- split: train
path: en-oc/train-*
- config_name: en-or
data_files:
- split: train
path: en-or/train-*
- config_name: en-pl
data_files:
- split: train
path: en-pl/train-*
- config_name: en-pt
data_files:
- split: train
path: en-pt/train-*
- config_name: en-ro
data_files:
- split: train
path: en-ro/train-*
- config_name: en-ru
data_files:
- split: train
path: en-ru/train-*
- config_name: en-sd
data_files:
- split: train
path: en-sd/train-*
- config_name: en-si
data_files:
- split: train
path: en-si/train-*
- config_name: en-sk
data_files:
- split: train
path: en-sk/train-*
- config_name: en-sl
data_files:
- split: train
path: en-sl/train-*
- config_name: en-so
data_files:
- split: train
path: en-so/train-*
- config_name: en-sq
data_files:
- split: train
path: en-sq/train-*
- config_name: en-sr
data_files:
- split: train
path: en-sr/train-*
- config_name: en-su
data_files:
- split: train
path: en-su/train-*
- config_name: en-sv
data_files:
- split: train
path: en-sv/train-*
- config_name: en-sw
data_files:
- split: train
path: en-sw/train-*
- config_name: en-ta
data_files:
- split: train
path: en-ta/train-*
- config_name: en-tl
data_files:
- split: train
path: en-tl/train-*
- config_name: en-tr
data_files:
- split: train
path: en-tr/train-*
- config_name: en-uk
data_files:
- split: train
path: en-uk/train-*
- config_name: en-ur
data_files:
- split: train
path: en-ur/train-*
- config_name: en-vi
data_files:
- split: train
path: en-vi/train-*
- config_name: en-xh
data_files:
- split: train
path: en-xh/train-*
- config_name: en-yi
data_files:
- split: train
path: en-yi/train-*
- config_name: en-zh
data_files:
- split: train
path: en-zh/train-*
language:
- af
- ar
- ast
- az
- be
- bg
- bn
- br
- ca
- ceb
- cs
- da
- de
- el
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- ha
- he
- hi
- hr
- hu
- id
- ig
- ilo
- is
- it
- ja
- jv
- ko
- la
- lb
- lt
- lv
- mg
- mk
- ml
- mr
- ms
- ne
- nl
- 'no'
- oc
- or
- pl
- pt
- ro
- ru
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- tl
- tr
- uk
- ur
- vi
- xh
- yi
- zh
---
# Dataset Card for Parallel Sentences - CCMatrix
This dataset contains parallel sentences (i.e. an English sentence paired with the same sentence in another language) for numerous languages. The texts originate from the [CCMatrix](https://ai.meta.com/blog/ccmatrix-a-billion-scale-bitext-data-set-for-training-translation-models/) dataset.
## Related Datasets
The following datasets are also a part of the Parallel Sentences collection:
* [parallel-sentences-europarl](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-europarl)
* [parallel-sentences-global-voices](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-global-voices)
* [parallel-sentences-muse](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-muse)
* [parallel-sentences-jw300](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-jw300)
* [parallel-sentences-news-commentary](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-news-commentary)
* [parallel-sentences-opensubtitles](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-opensubtitles)
* [parallel-sentences-talks](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks)
* [parallel-sentences-tatoeba](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-tatoeba)
* [parallel-sentences-wikimatrix](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-wikimatrix)
* [parallel-sentences-wikititles](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-wikititles)
* [parallel-sentences-ccmatrix](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-ccmatrix)
These datasets can be used to train multilingual sentence embedding models. For more information, see [sbert.net - Multilingual Models](https://www.sbert.net/examples/training/multilingual/README.html).
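Below is a minimal, illustrative sketch (not part of the original card) of loading one of the `en-...` subsets with the 🤗 `datasets` library; the config name `"en-es"` and the preview loop are arbitrary choices, and `streaming=True` is used only because several subsets listed above weigh tens of gigabytes.
```python
from datasets import load_dataset

# Minimal sketch: stream one language pair instead of downloading the full
# split up front (several of the en-... subsets are tens of gigabytes).
pairs = load_dataset(
    "sentence-transformers/parallel-sentences-ccmatrix",
    "en-es",          # any of the en-... configs listed above works here
    split="train",
    streaming=True,
)

# Each row exposes an "english" and a "non_english" column,
# as described in the next section.
for i, row in enumerate(pairs):
    print(row["english"], "->", row["non_english"])
    if i == 2:  # only peek at a few pairs
        break
```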
## Dataset Subsets
### `en-...` subsets
* Columns: "english", "non_english"
* Column types: `str`, `str`
* Examples:
```python
{
"english": "He and his mother will be standing vigil there.”",
"non_english": "Él y su madre estarán de vigilia allí”.",
}
```
* Collection strategy: Processing the data from [yhavinga/ccmatrix](https://huggingface.co/datasets/yhavinga/ccmatrix) and reformatting it as Parquet with "english" and "non_english" columns.
* Deduplicated: No |
asahi417/seamless-align-enA-viA.speaker-embedding.xlsr-2b | asahi417 | "2024-06-25T01:27:26Z" | 20,391 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-06-11T14:33:04Z" | ---
dataset_info:
- config_name: subset_1
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13152124231
num_examples: 1853
download_size: 13196111601
dataset_size: 13152124231
- config_name: subset_10
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6657621784
num_examples: 1090
download_size: 6681474976
dataset_size: 6657621784
- config_name: subset_100
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9681441551
num_examples: 1520
download_size: 9716028942
dataset_size: 9681441551
- config_name: subset_101
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9765322177
num_examples: 1545
download_size: 9800069846
dataset_size: 9765322177
- config_name: subset_102
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9860909658
num_examples: 1527
download_size: 9895478528
dataset_size: 9860909658
- config_name: subset_103
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9709072687
num_examples: 1513
download_size: 9743463146
dataset_size: 9709072687
- config_name: subset_104
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9317976193
num_examples: 1465
download_size: 9350906284
dataset_size: 9317976193
- config_name: subset_105
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9599344558
num_examples: 1506
download_size: 9633714817
dataset_size: 9599344558
- config_name: subset_106
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9611793564
num_examples: 1517
download_size: 9646264815
dataset_size: 9611793564
- config_name: subset_107
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9817555828
num_examples: 1518
download_size: 9851984781
dataset_size: 9817555828
- config_name: subset_108
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9759896244
num_examples: 1532
download_size: 9794111609
dataset_size: 9759896244
- config_name: subset_109
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9920046697
num_examples: 1539
download_size: 9954649265
dataset_size: 9920046697
- config_name: subset_11
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7059876136
num_examples: 1146
download_size: 7085729003
dataset_size: 7059876136
- config_name: subset_110
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9229841208
num_examples: 1471
download_size: 9262898789
dataset_size: 9229841208
- config_name: subset_111
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9385133176
num_examples: 1450
download_size: 9418047633
dataset_size: 9385133176
- config_name: subset_112
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9493840990
num_examples: 1478
download_size: 9525582216
dataset_size: 9493840990
- config_name: subset_113
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9706904149
num_examples: 1493
download_size: 9741249161
dataset_size: 9706904149
- config_name: subset_114
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9563013098
num_examples: 1487
download_size: 9597350611
dataset_size: 9563013098
- config_name: subset_115
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9735858312
num_examples: 1504
download_size: 9770239160
dataset_size: 9735858312
- config_name: subset_116
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10002930156
num_examples: 1492
download_size: 10038446988
dataset_size: 10002930156
- config_name: subset_117
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9623888728
num_examples: 1524
download_size: 9658530642
dataset_size: 9623888728
- config_name: subset_118
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9814472575
num_examples: 1494
download_size: 9848825015
dataset_size: 9814472575
- config_name: subset_119
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9737465938
num_examples: 1524
download_size: 9770633676
dataset_size: 9737465938
- config_name: subset_12
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6654933967
num_examples: 1109
download_size: 6679320781
dataset_size: 6654933967
- config_name: subset_120
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9725133610
num_examples: 1530
download_size: 9759744099
dataset_size: 9725133610
- config_name: subset_121
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9807752743
num_examples: 1539
download_size: 9842394283
dataset_size: 9807752743
- config_name: subset_122
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10160148335
num_examples: 1535
download_size: 10195448487
dataset_size: 10160148335
- config_name: subset_123
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9612845669
num_examples: 1510
download_size: 9647308805
dataset_size: 9612845669
- config_name: subset_124
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9783575920
num_examples: 1503
download_size: 9817979503
dataset_size: 9783575920
- config_name: subset_125
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9937878067
num_examples: 1500
download_size: 9970948531
dataset_size: 9937878067
- config_name: subset_126
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10063911799
num_examples: 1509
download_size: 10099441054
dataset_size: 10063911799
- config_name: subset_127
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10105113965
num_examples: 1516
download_size: 10140782767
dataset_size: 10105113965
- config_name: subset_128
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9885429926
num_examples: 1535
download_size: 9920039716
dataset_size: 9885429926
- config_name: subset_129
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10050853493
num_examples: 1565
download_size: 10086889128
dataset_size: 10050853493
- config_name: subset_13
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6723416863
num_examples: 1147
download_size: 6748236994
dataset_size: 6723416863
- config_name: subset_130
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9902133998
num_examples: 1523
download_size: 9936635144
dataset_size: 9902133998
- config_name: subset_131
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10072082067
num_examples: 1536
download_size: 10107859734
dataset_size: 10072082067
- config_name: subset_132
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9821819874
num_examples: 1511
download_size: 9856206643
dataset_size: 9821819874
- config_name: subset_133
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9999675819
num_examples: 1537
download_size: 10034262905
dataset_size: 9999675819
- config_name: subset_134
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9721739176
num_examples: 1459
download_size: 9755772967
dataset_size: 9721739176
- config_name: subset_135
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9512855942
num_examples: 1442
download_size: 9546864623
dataset_size: 9512855942
- config_name: subset_136
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9919939128
num_examples: 1539
download_size: 9954523425
dataset_size: 9919939128
- config_name: subset_137
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10135847217
num_examples: 1576
download_size: 10171980144
dataset_size: 10135847217
- config_name: subset_138
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10244250087
num_examples: 1540
download_size: 10280034910
dataset_size: 10244250087
- config_name: subset_139
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10163189946
num_examples: 1520
download_size: 10198683267
dataset_size: 10163189946
- config_name: subset_14
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6904498909
num_examples: 1158
download_size: 6929277825
dataset_size: 6904498909
- config_name: subset_140
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9964789146
num_examples: 1521
download_size: 9999265403
dataset_size: 9964789146
- config_name: subset_141
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9863711559
num_examples: 1493
download_size: 9896918120
dataset_size: 9863711559
- config_name: subset_142
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10361663085
num_examples: 1558
download_size: 10397631438
dataset_size: 10361663085
- config_name: subset_143
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10304210073
num_examples: 1561
download_size: 10340224269
dataset_size: 10304210073
- config_name: subset_144
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10546727512
num_examples: 1595
download_size: 10584185986
dataset_size: 10546727512
- config_name: subset_145
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10409793660
num_examples: 1533
download_size: 10443966828
dataset_size: 10409793660
- config_name: subset_146
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10303709963
num_examples: 1553
download_size: 10339570190
dataset_size: 10303709963
- config_name: subset_147
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10567856125
num_examples: 1565
download_size: 10604972751
dataset_size: 10567856125
- config_name: subset_148
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10074810790
num_examples: 1547
download_size: 10110786944
dataset_size: 10074810790
- config_name: subset_149
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1257786005
num_examples: 185
download_size: 1262597069
dataset_size: 1257786005
- config_name: subset_15
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7060472765
num_examples: 1207
download_size: 7086949634
dataset_size: 7060472765
- config_name: subset_16
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6958294837
num_examples: 1161
download_size: 6983180953
dataset_size: 6958294837
- config_name: subset_17
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7126462079
num_examples: 1197
download_size: 7151925016
dataset_size: 7126462079
- config_name: subset_18
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7319727811
num_examples: 1263
download_size: 7346599578
dataset_size: 7319727811
- config_name: subset_19
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7077454525
num_examples: 1209
download_size: 7103923783
dataset_size: 7077454525
- config_name: subset_2
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 10695197513
num_examples: 1497
download_size: 10730532441
dataset_size: 10695197513
- config_name: subset_20
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6868855968
num_examples: 1196
download_size: 6893810483
dataset_size: 6868855968
- config_name: subset_21
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7093800554
num_examples: 1235
download_size: 7120534137
dataset_size: 7093800554
- config_name: subset_22
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7078975934
num_examples: 1209
download_size: 7104886899
dataset_size: 7078975934
- config_name: subset_23
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7325996453
num_examples: 1245
download_size: 7352616605
dataset_size: 7325996453
- config_name: subset_24
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7136525609
num_examples: 1260
download_size: 7162458879
dataset_size: 7136525609
- config_name: subset_25
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7508441357
num_examples: 1290
download_size: 7536769302
dataset_size: 7508441357
- config_name: subset_26
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7661879109
num_examples: 1335
download_size: 7690578111
dataset_size: 7661879109
- config_name: subset_27
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7242872802
num_examples: 1267
download_size: 7269843614
dataset_size: 7242872802
- config_name: subset_28
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7652932813
num_examples: 1309
download_size: 7681349247
dataset_size: 7652932813
- config_name: subset_29
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7614481810
num_examples: 1305
download_size: 7642892649
dataset_size: 7614481810
- config_name: subset_3
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9239795404
num_examples: 1325
download_size: 9270733082
dataset_size: 9239795404
- config_name: subset_30
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7433036297
num_examples: 1266
download_size: 7459246146
dataset_size: 7433036297
- config_name: subset_31
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7628711906
num_examples: 1297
download_size: 7656969364
dataset_size: 7628711906
- config_name: subset_32
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7607420351
num_examples: 1307
download_size: 7635852315
dataset_size: 7607420351
- config_name: subset_33
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7560370118
num_examples: 1293
download_size: 7588502113
dataset_size: 7560370118
- config_name: subset_34
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8139862534
num_examples: 1337
download_size: 8169411382
dataset_size: 8139862534
- config_name: subset_35
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7453499255
num_examples: 1272
download_size: 7480374578
dataset_size: 7453499255
- config_name: subset_36
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7925438221
num_examples: 1309
download_size: 7952522193
dataset_size: 7925438221
- config_name: subset_37
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7739400596
num_examples: 1305
download_size: 7767421974
dataset_size: 7739400596
- config_name: subset_38
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7528886677
num_examples: 1268
download_size: 7556740252
dataset_size: 7528886677
- config_name: subset_39
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7422223042
num_examples: 1243
download_size: 7448913873
dataset_size: 7422223042
- config_name: subset_4
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7875711478
num_examples: 1171
download_size: 7902846811
dataset_size: 7875711478
- config_name: subset_40
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7672187490
num_examples: 1291
download_size: 7700422032
dataset_size: 7672187490
- config_name: subset_41
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7312597972
num_examples: 1180
download_size: 7338811144
dataset_size: 7312597972
- config_name: subset_42
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7860725780
num_examples: 1336
download_size: 7889373692
dataset_size: 7860725780
- config_name: subset_43
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7820892373
num_examples: 1341
download_size: 7849558877
dataset_size: 7820892373
- config_name: subset_44
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8088726754
num_examples: 1352
download_size: 8118596134
dataset_size: 8088726754
- config_name: subset_45
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7810827527
num_examples: 1356
download_size: 7838356933
dataset_size: 7810827527
- config_name: subset_46
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7748273881
num_examples: 1284
download_size: 7776538627
dataset_size: 7748273881
- config_name: subset_47
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8155822961
num_examples: 1346
download_size: 8185289321
dataset_size: 8155822961
- config_name: subset_48
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7719278546
num_examples: 1334
download_size: 7747990038
dataset_size: 7719278546
- config_name: subset_49
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8366996802
num_examples: 1373
download_size: 8396943074
dataset_size: 8366996802
- config_name: subset_5
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7423996532
num_examples: 1130
download_size: 7449603166
dataset_size: 7423996532
- config_name: subset_50
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8052394619
num_examples: 1325
download_size: 8082057526
dataset_size: 8052394619
- config_name: subset_51
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8208691898
num_examples: 1383
download_size: 8238827868
dataset_size: 8208691898
- config_name: subset_52
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8497030225
num_examples: 1431
download_size: 8527197231
dataset_size: 8497030225
- config_name: subset_53
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8325064686
num_examples: 1366
download_size: 8355047764
dataset_size: 8325064686
- config_name: subset_54
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8442464622
num_examples: 1412
download_size: 8472259549
dataset_size: 8442464622
- config_name: subset_55
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8472138320
num_examples: 1389
download_size: 8502289446
dataset_size: 8472138320
- config_name: subset_56
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8320062316
num_examples: 1365
download_size: 8349954826
dataset_size: 8320062316
- config_name: subset_57
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8583549410
num_examples: 1398
download_size: 8614870512
dataset_size: 8583549410
- config_name: subset_58
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8835066172
num_examples: 1448
download_size: 8866474621
dataset_size: 8835066172
- config_name: subset_59
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8258116095
num_examples: 1392
download_size: 8288332282
dataset_size: 8258116095
- config_name: subset_6
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7198816764
num_examples: 1128
download_size: 7221616089
dataset_size: 7198816764
- config_name: subset_60
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8648402008
num_examples: 1394
download_size: 8679765409
dataset_size: 8648402008
- config_name: subset_61
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8593161997
num_examples: 1397
download_size: 8624047199
dataset_size: 8593161997
- config_name: subset_62
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8394114682
num_examples: 1385
download_size: 8424110689
dataset_size: 8394114682
- config_name: subset_63
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8140698615
num_examples: 1319
download_size: 8170375758
dataset_size: 8140698615
- config_name: subset_64
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8336623372
num_examples: 1390
download_size: 8366759385
dataset_size: 8336623372
- config_name: subset_65
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8626196236
num_examples: 1406
download_size: 8657732211
dataset_size: 8626196236
- config_name: subset_66
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8723106178
num_examples: 1398
download_size: 8753662184
dataset_size: 8723106178
- config_name: subset_67
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8723014783
num_examples: 1408
download_size: 8754409534
dataset_size: 8723014783
- config_name: subset_68
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8487354376
num_examples: 1410
download_size: 8516494928
dataset_size: 8487354376
- config_name: subset_69
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8594422639
num_examples: 1403
download_size: 8625850894
dataset_size: 8594422639
- config_name: subset_7
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6717272768
num_examples: 1091
download_size: 6740538108
dataset_size: 6717272768
- config_name: subset_70
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8618674712
num_examples: 1420
download_size: 8649448120
dataset_size: 8618674712
- config_name: subset_71
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8660129964
num_examples: 1421
download_size: 8691680929
dataset_size: 8660129964
- config_name: subset_72
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8842857082
num_examples: 1440
download_size: 8874581597
dataset_size: 8842857082
- config_name: subset_73
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8889486335
num_examples: 1474
download_size: 8921461083
dataset_size: 8889486335
- config_name: subset_74
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9107689650
num_examples: 1483
download_size: 9140828036
dataset_size: 9107689650
- config_name: subset_75
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9079611515
num_examples: 1473
download_size: 9112756891
dataset_size: 9079611515
- config_name: subset_76
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8651416691
num_examples: 1425
download_size: 8683088964
dataset_size: 8651416691
- config_name: subset_77
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8805756802
num_examples: 1416
download_size: 8837190154
dataset_size: 8805756802
- config_name: subset_78
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8434022491
num_examples: 1350
download_size: 8463835989
dataset_size: 8434022491
- config_name: subset_79
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8645068051
num_examples: 1405
download_size: 8676296197
dataset_size: 8645068051
- config_name: subset_8
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6505406968
num_examples: 1060
download_size: 6529493775
dataset_size: 6505406968
- config_name: subset_80
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8935649178
num_examples: 1442
download_size: 8967341579
dataset_size: 8935649178
- config_name: subset_81
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8977856404
num_examples: 1429
download_size: 9009438069
dataset_size: 8977856404
- config_name: subset_82
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9250893084
num_examples: 1443
download_size: 9283714245
dataset_size: 9250893084
- config_name: subset_83
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8908148488
num_examples: 1446
download_size: 8939861798
dataset_size: 8908148488
- config_name: subset_84
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9201901772
num_examples: 1466
download_size: 9234362431
dataset_size: 9201901772
- config_name: subset_85
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 8973208285
num_examples: 1438
download_size: 9004836562
dataset_size: 8973208285
- config_name: subset_86
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9531262524
num_examples: 1484
download_size: 9565563954
dataset_size: 9531262524
- config_name: subset_87
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9237080311
num_examples: 1481
download_size: 9270200551
dataset_size: 9237080311
- config_name: subset_88
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9432040586
num_examples: 1503
download_size: 9465196781
dataset_size: 9432040586
- config_name: subset_89
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9299174387
num_examples: 1477
download_size: 9332230915
dataset_size: 9299174387
- config_name: subset_9
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6250695911
num_examples: 1037
download_size: 6273368424
dataset_size: 6250695911
- config_name: subset_90
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9076719123
num_examples: 1434
download_size: 9107047689
dataset_size: 9076719123
- config_name: subset_91
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9042681056
num_examples: 1459
download_size: 9075705064
dataset_size: 9042681056
- config_name: subset_92
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9060561447
num_examples: 1456
download_size: 9093521154
dataset_size: 9060561447
- config_name: subset_93
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9193480780
num_examples: 1474
download_size: 9226578780
dataset_size: 9193480780
- config_name: subset_94
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9642374570
num_examples: 1501
download_size: 9676773522
dataset_size: 9642374570
- config_name: subset_95
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9060928875
num_examples: 1439
download_size: 9093805567
dataset_size: 9060928875
- config_name: subset_96
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9461107851
num_examples: 1487
download_size: 9494252029
dataset_size: 9461107851
- config_name: subset_97
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9380786998
num_examples: 1487
download_size: 9413913597
dataset_size: 9380786998
- config_name: subset_98
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9128034376
num_examples: 1453
download_size: 9160750230
dataset_size: 9128034376
- config_name: subset_99
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: viA.id
dtype: string
- name: viA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: viA.audio.speaker_embedding
sequence: float32
- name: viA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 9669792454
num_examples: 1518
download_size: 9704382184
dataset_size: 9669792454
configs:
- config_name: subset_1
data_files:
- split: train
path: subset_1/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_10/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_100/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_101/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_102/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_103/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_104/train-*
- config_name: subset_105
data_files:
- split: train
path: subset_105/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_106/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_107/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_108/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_109/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_11/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_110/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_111/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_112/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_113/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_114/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_115/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_116/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_117/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_118/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_119/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_12/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_120/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_121/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_122/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_123/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_124/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_125/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_126/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_127/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_128/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_129/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_13/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_130/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_131/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_132/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_133/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_134/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_135/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_136/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_137/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_138/train-*
- config_name: subset_139
data_files:
- split: train
path: subset_139/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_14/train-*
- config_name: subset_140
data_files:
- split: train
path: subset_140/train-*
- config_name: subset_141
data_files:
- split: train
path: subset_141/train-*
- config_name: subset_142
data_files:
- split: train
path: subset_142/train-*
- config_name: subset_143
data_files:
- split: train
path: subset_143/train-*
- config_name: subset_144
data_files:
- split: train
path: subset_144/train-*
- config_name: subset_145
data_files:
- split: train
path: subset_145/train-*
- config_name: subset_146
data_files:
- split: train
path: subset_146/train-*
- config_name: subset_147
data_files:
- split: train
path: subset_147/train-*
- config_name: subset_148
data_files:
- split: train
path: subset_148/train-*
- config_name: subset_149
data_files:
- split: train
path: subset_149/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_15/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_16/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_17/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_18/train-*
- config_name: subset_19
data_files:
- split: train
path: subset_19/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_2/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_20/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_21/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_22/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_23/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_24/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_25/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_26/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_27/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_28/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_29/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_3/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_30/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_31/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_32/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_33/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_34/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_35/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_36/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_37/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_38/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_39/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_4/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_40/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_41/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_42/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_43/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_44/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_45/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_46/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_47/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_48/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_49/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_5/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_50/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_51/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_52/train-*
- config_name: subset_53
data_files:
- split: train
path: subset_53/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_54/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_55/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_56/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_57/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_58/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_59/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_6/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_60/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_61/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_62/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_63/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_64/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_65/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_66/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_67/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_68/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_69/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_7/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_70/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_71/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_72/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_73/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_74/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_75/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_76/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_77/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_78/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_79/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_8/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_80/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_81/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_82/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_83/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_84/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_85/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_86/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_87/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_88/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_89/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_9/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_90/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_91/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_92/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_93/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_94/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_95/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_96/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_97/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_98/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_99/train-*
---
|
Tuxifan/UbuntuIRC | Tuxifan | "2023-06-04T15:35:31Z" | 20,311 | 0 | [
"task_categories:text-generation",
"license:cc0-1.0",
"size_categories:1M<n<10M",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-generation"
] | "2023-06-02T22:48:40Z" | ---
license: cc0-1.0
task_categories:
- text-generation
pretty_name: Ubuntu IRC channels
---
Completely uncurated collection of IRC logs from the Ubuntu IRC channels |
Yossh/danbooru2023-webp-4Mpixel-224 | Yossh | "2024-05-14T08:37:03Z" | 20,088 | 2 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us"
] | null | "2024-05-12T01:53:41Z" | ---
license: apache-2.0
---
This dataset is simply the dataset below with every image resized to 224×224 pixels:
https://huggingface.co/datasets/KBlueLeaf/danbooru2023-webp-4Mpixel
Pseudo code for the resizing step:
```
from PIL import Image

def resize_image(file_path):
    # Resize the image to 224x224 and overwrite the original file in place.
    with Image.open(file_path) as img:
        resized_img = img.resize((224, 224))
        resized_img.save(file_path)
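
# Example usage (hypothetical file path, for illustration only):
# resize_image("images/000001.webp")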
``` |
hexuan21/weak-to-strong-backup | hexuan21 | "2024-10-26T05:49:20Z" | 20,060 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-09-08T13:52:27Z" | ---
license: apache-2.0
---
|
OS-Copilot/OS-Atlas-data | OS-Copilot | "2024-12-04T10:26:47Z" | 20,007 | 16 | [
"license:apache-2.0",
"arxiv:2410.23218",
"arxiv:2407.17490",
"arxiv:2107.13731",
"arxiv:2010.04295",
"arxiv:2005.03776",
"arxiv:2405.14573",
"arxiv:2401.10935",
"arxiv:2406.17557",
"region:us"
] | null | "2024-11-11T11:56:29Z" | ---
license: apache-2.0
viewer: false
---
# GUI Grounding Pre-training Data for OS-ATLAS
This document describes the acquisition of the pre-training data used by OS-ATLAS [OS-ATLAS: A Foundation Action Model for Generalist GUI Agents](https://huggingface.co/papers/2410.23218).
<div align="center">
[\[🏠Homepage\]](https://osatlas.github.io) [\[💻Code\]](https://github.com/OS-Copilot/OS-Atlas) [\[🚀Quick Start\]](#quick-start) [\[📝Paper\]](https://arxiv.org/abs/2410.23218) [\[🤗Models\]](https://huggingface.co/collections/OS-Copilot/os-atlas-67246e44003a1dfcc5d0d045) [\[🤗ScreenSpot-v2\]](https://huggingface.co/datasets/OS-Copilot/ScreenSpot-v2)
</div>

**Notes:** In GUI grounding data, the position of the target element is recorded in the `bbox` key, represented by `[left, top, right, bottom]`.
Each value is a [0, 1] decimal number indicating the ratio of the corresponding position to the width or height of the image.
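For example, here is a small sketch (the helper is not part of the release and assumes the screenshot resolution is known) of recovering pixel coordinates from such a normalized `bbox`:
```
# Hypothetical helper: convert a normalized [left, top, right, bottom] bbox
# into pixel coordinates, given the screenshot resolution.
def bbox_to_pixels(bbox, image_width, image_height):
    left, top, right, bottom = bbox
    return [
        round(left * image_width),
        round(top * image_height),
        round(right * image_width),
        round(bottom * image_height),
    ]

# A bbox covering roughly the top-left tenth of a 1920x1080 screenshot.
print(bbox_to_pixels([0.0, 0.0, 0.1, 0.1], 1920, 1080))  # [0, 0, 192, 108]
```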
The data stored in this dataset is raw data containing **only** element grounding information. When training a model, you need to wrap these samples with the corresponding prompts (see the best-practice section below).
The released data is divided into three domains: mobile, desktop, and web.
All annotation data is stored in JSON format and each sample contains:
* `img_filename`: the interface screenshot file
* `instruction`: a human instruction or referring expression extracted from the accessibility (a11y) tree or HTML
* `bbox`: the bounding box of the target element corresponding to the instruction
Some samples also contain a `data_type` field, which records the type of the element from its structured source when it can be obtained; a made-up annotation entry is sketched below.
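For illustration only, a single annotation entry could look like the following sketch (the file name, instruction, and coordinates are invented, not taken from the released files):
```
{
    "img_filename": "screenshots/example_0001.png",
    "instruction": "Open the settings menu",
    "bbox": [0.12, 0.05, 0.18, 0.09],
    "data_type": "button"
}
```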
***
### Mobile data
This part of the data is stored under the *mobile_domain* directory. Our mobile grounding data consists of four parts.
#### AMEX
Android Multi-annotation EXpo (AMEX) is a comprehensive, large-scale dataset designed for generalist mobile GUI-control agents [1].
The annotation data is stored in
- `amex_raw.json`
Due to the single-file size limit of Hugging Face datasets, the AMEX images are stored in *zip* format and split into several sub-files.
- `amex_images_part_aa`
- `amex_images_part_ab`
- `amex_images_part_ac`
You need to first merge these split files back into the original file and then extract the contents.
```
cat amex_images_part_* > amex_images.zip
7z x amex_images.zip -aoa -o/path/to/extract/folder
```
#### UIBert
UIBert [2] is a dataset extended from the Rico dataset [3] for two tasks: similar UI component retrieval and referring expression component retrieval.
The annotation data is stored in
- `uibert_raw.json`
The UIBert images are stored in
- `UIBert.zip`
#### Widget Captioning and RICOSCA
The Widget Captioning data were collected by [4].
RICOSCA is a dataset automatically labeled using the Android view hierarchy (VH) in [5].
The annotation data is stored in
- `widget_captioning.json`
- `ricosca.json`
The rico images are stored in
- `rico_imgs.zip`
#### Android_world_data
This part of the data is sampled from an Android environment for building and benchmarking autonomous computer-control agents [6].
The annotation data is stored in
- `aw_mobile.json`
The images are stored in
- `mobile_images.zip`
***
### Desktop data
This part of the data is stored under the *desktop_domain* directory.
All of the desktop grounding data is collected from the real environments of personal computers running different operating systems. Each image is split into multiple sub-images to enhance data diversity.
Our desktop grounding data consists of three parts: Windows, Linux and MacOS.
**The image and annotation data for each operating system are stored in corresponding zip and json files.**
It is worth noting that, due to the large size of the Windows image data, the split files need to be merged before extraction.
```
cat windows_image_part_* > windows_images.zip
7z x windows_images.zip -aoa -o/path/to/extract/folder
```
***
### Web data
This part of the data is stored under the *web_domain* directory.
Our web grounding data consists of two parts.
#### Seeclick web data
The web data from SeeClick [7] was crawled from websites provided by Common Crawl, containing more than 270k webpage screenshots and over 3 million webpage elements.
The annotation data is stored in
- `seeclick_web.json`
The images are stored into split files and need to be merged before extraction.
```
cat seeclick_web_image_part_* > seeclick_web_images.zip
7z x seeclick_web_images.zip -aoa -o/path/to/extract/folder
```
#### Fineweb_crawled_data
This part of the data is crawled from web pages at the latest URLs obtained from FineWeb [8], a cleaned and deduplicated English dataset derived from Common Crawl.
Since this portion of the data contains at least 1.6 million images, we have compressed them into 10 zip files, from `fineweb_3m_s11.zip` to `fineweb_3m_s52.zip`.
Please extract them into the same directory.
As an example,
```
7z x fineweb_3m_s11.zip -aoa -o/same/path/to/extract/fineweb
```
The annotation data is stored in
- `fineweb_3m.json`
***
### Best practice
During the training of **OS-Atlas-4B**, we randomly sampled predefined prompts to wrap the grounding data. Additionally, we scaled the relative coordinates of each element (in the range [0, 1]) by multiplying them by 1000 before inputting them into the model for training.
Below is an example of a data entry:
```
{
"conversations": [
{
"from": "human",
"value": "<image>\nUsing the provided screenshot, I'll describe webpage elements for you to locate (with bbox).\n<ref>media-maniacs.org</ref>\n<ref>Home</ref>\n<ref>Sitemap</ref>\n<ref>shop you can have what you choose 2012</ref>"
},
{
"from": "gpt",
"value": "<ref>media-maniacs.org</ref><box>[[70,856,141,871]]</box>\n<ref>Home</ref><box>[[21,935,43,951]]</box>\n<ref>Sitemap</ref><box>[[21,919,52,934]]</box>\n<ref>shop you can have what you choose 2012</ref><box>[[368,839,523,855]]</box>"
}
]
}
```
**OS-Atlas-7B**
```
{
"conversations": [
{
"from": "human",
"value": "<image>\nUsing the provided screenshot, I'll describe webpage elements for you to locate (with bbox).\n<|object_ref_start|>Facebook<|object_ref_end|>\n<|object_ref_start|>Subscribe<|object_ref_end|>\n<|object_ref_start|>Twitter<|object_ref_end|>\n<|object_ref_start|>Read More<|object_ref_end|>\n<|object_ref_start|>Read More<|object_ref_end|>"
},
{
"from": "gpt",
"value": "<|object_ref_start|>Facebook<|object_ref_end|><|box_start|>(4,955),(36,970)<|box_end|>\n<|object_ref_start|>Subscribe<|object_ref_end|><|box_start|>(4,913),(43,932)<|box_end|>\n<|object_ref_start|>Twitter<|object_ref_end|><|box_start|>(39,955),(62,970)<|box_end|>\n<|object_ref_start|>Read More<|object_ref_end|><|box_start|>(30,138),(73,157)<|box_end|>\n<|object_ref_start|>Read More<|object_ref_end|><|box_start|>(30,139),(73,155)<|box_end|>"
}
]
}
```
The prompts we used are stored in `prompts.json`.
***
**The following are the open-source datasets we used as data sources. We welcome everyone to check the details and cite these sources accordingly!**
[1] [AMEX: Android Multi-annotation Expo Dataset for Mobile GUI Agents](https://arxiv.org/abs/2407.17490)
[2] [UIBert: Learning Generic Multimodal Representations for UI Understanding](https://arxiv.org/abs/2107.13731)
[3] [Rico: A mobile app dataset for building data-driven design applications](https://dl.acm.org/doi/pdf/10.1145/3126594.3126651)
[4] [Widget Captioning: Generating Natural Language Description for Mobile User Interface Elements](https://arxiv.org/pdf/2010.04295.pdf)
[5] [Mapping Natural Language Instructions to Mobile UI Action Sequences](https://arxiv.org/pdf/2005.03776)
[6] [ANDROIDWORLD: A Dynamic Benchmarking Environment for Autonomous Agents](https://arxiv.org/abs/2405.14573)
[7] [SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents](https://arxiv.org/abs/2401.10935)
[8] [The fineweb datasets: Decanting the web for the finest text data at scale](https://arxiv.org/abs/2406.17557) |
fsicoli/common_voice_16_0 | fsicoli | "2023-12-22T19:58:33Z" | 20,003 | 2 | [
"task_categories:automatic-speech-recognition",
"language:ab",
"language:af",
"language:am",
"language:ar",
"language:as",
"language:ast",
"language:az",
"language:ba",
"language:bas",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:ca",
"language:ckb",
"language:cnh",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:dv",
"language:dyu",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:gl",
"language:gn",
"language:ha",
"language:he",
"language:hi",
"language:hsb",
"language:hu",
"language:ia",
"language:id",
"language:ig",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kab",
"language:kk",
"language:kmr",
"language:ko",
"language:ky",
"language:lg",
"language:lo",
"language:lt",
"language:lv",
"language:mdf",
"language:mhr",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:mrj",
"language:mt",
"language:myv",
"language:nl",
"language:oc",
"language:or",
"language:pl",
"language:ps",
"language:pt",
"language:quy",
"language:ro",
"language:ru",
"language:rw",
"language:sah",
"language:sat",
"language:sc",
"language:sk",
"language:skr",
"language:sl",
"language:sq",
"language:sr",
"language:sw",
"language:ta",
"language:th",
"language:ti",
"language:tig",
"language:tk",
"language:tok",
"language:tr",
"language:tt",
"language:tw",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:vot",
"language:yue",
"language:zgh",
"language:zh",
"language:yo",
"license:cc0-1.0",
"size_categories:100B<n<1T",
"region:us",
"mozilla",
"foundation"
] | [
"automatic-speech-recognition"
] | "2023-12-19T17:26:21Z" | ---
license: cc0-1.0
language:
- ab
- af
- am
- ar
- as
- ast
- az
- ba
- bas
- be
- bg
- bn
- br
- ca
- ckb
- cnh
- cs
- cv
- cy
- da
- de
- dv
- dyu
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- gl
- gn
- ha
- he
- hi
- hsb
- hu
- ia
- id
- ig
- is
- it
- ja
- ka
- kab
- kk
- kmr
- ko
- ky
- lg
- lo
- lt
- lv
- mdf
- mhr
- mk
- ml
- mn
- mr
- mrj
- mt
- myv
- nl
- oc
- or
- pl
- ps
- pt
- quy
- ro
- ru
- rw
- sah
- sat
- sc
- sk
- skr
- sl
- sq
- sr
- sw
- ta
- th
- ti
- tig
- tk
- tok
- tr
- tt
- tw
- ug
- uk
- ur
- uz
- vi
- vot
- yue
- zgh
- zh
- yo
task_categories:
- automatic-speech-recognition
pretty_name: Common Voice Corpus 16.0
size_categories:
- 100B<n<1T
tags:
- mozilla
- foundation
---
# Dataset Card for Common Voice Corpus 16.0
<!-- Provide a quick summary of the dataset. -->
This dataset is an unofficial version of the Mozilla Common Voice Corpus 16. It was downloaded and converted from the project's website https://commonvoice.mozilla.org/.
## Languages
```
Abkhaz, Albanian, Amharic, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dioula, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hill Mari, Hindi, Hungarian, Icelandic, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Korean, Kurmanji Kurdish, Kyrgyz, Lao, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Occitan, Odia, Pashto, Persian, Polish, Portuguese, Punjabi, Quechua Chanka, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamazight, Tamil, Tatar, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Turkmen, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh, Yoruba
```
## How to use
The datasets library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the load_dataset function.
For example, to download the Portuguese config, simply specify the corresponding language config name (i.e., "pt" for Portuguese):
```
from datasets import load_dataset
cv_16 = load_dataset("fsicoli/common_voice_16_0", "pt", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a streaming=True argument to the load_dataset function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```
from datasets import load_dataset
cv_16 = load_dataset("fsicoli/common_voice_16_0", "pt", split="train", streaming=True)
print(next(iter(cv_16)))
```
Bonus: create a PyTorch dataloader directly with your own datasets (local/streamed).
### Local
```
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
cv_16 = load_dataset("fsicoli/common_voice_16_0", "pt", split="train")
batch_sampler = BatchSampler(RandomSampler(cv_16), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_16, batch_sampler=batch_sampler)
```
### Streaming
```
from datasets import load_dataset
from torch.utils.data import DataLoader
cv_16 = load_dataset("fsicoli/common_voice_16_0", "pt", split="train", streaming=True)
dataloader = DataLoader(cv_16, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to hf.co/blog/audio-datasets.
### Dataset Structure
Data Instances
A typical data point comprises the path to the audio file and its sentence. Additional fields include accent, age, client_id, up_votes, down_votes, gender, locale and segment.
### Licensing Information
Public Domain, CC-0
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
---
|
2A2I/Arabic_Aya | 2A2I | "2024-03-15T11:08:04Z" | 19,953 | 13 | [
"task_categories:text-classification",
"task_categories:translation",
"task_categories:summarization",
"language:ar",
"license:apache-2.0",
"size_categories:10M<n<100M",
"modality:tabular",
"modality:text",
"arxiv:2402.06619",
"region:us"
] | [
"text-classification",
"translation",
"summarization"
] | "2024-02-13T17:16:49Z" | ---
language:
- ar
license: apache-2.0
size_categories:
- 1M<n<10M
task_categories:
- text-classification
- translation
- summarization
pretty_name: 2A
dataset_info:
- config_name: CohereForAI-aya_collection-aya_dataset
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: string
- name: language_code
dtype: string
- name: split
dtype: string
- name: script
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 7555482
num_examples: 13960
download_size: 3687445
dataset_size: 7555482
- config_name: CohereForAI-aya_collection-aya_human_annotated
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 222650
num_examples: 250
download_size: 120393
dataset_size: 222650
- config_name: CohereForAI-aya_collection-templated_afrisenti
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 5070578
num_examples: 14468
- name: test
num_bytes: 2674428
num_examples: 7838
- name: validation
num_bytes: 643036
num_examples: 1816
download_size: 2330165
dataset_size: 8388042
- config_name: CohereForAI-aya_collection-templated_mintaka
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 20413129
num_examples: 70000
- name: test
num_bytes: 5799667
num_examples: 20000
- name: validation
num_bytes: 2976183
num_examples: 10000
download_size: 6746433
dataset_size: 29188979
- config_name: CohereForAI-aya_collection-templated_ntx_llm
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 199809
num_examples: 111
download_size: 34306
dataset_size: 199809
- config_name: CohereForAI-aya_collection-templated_xcsqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: validation
num_bytes: 393580
num_examples: 1000
download_size: 137233
dataset_size: 393580
- config_name: CohereForAI-aya_collection-templated_xlel_wd
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 97691354
num_examples: 90760
- name: test
num_bytes: 15499274
num_examples: 14791
- name: validation
num_bytes: 10752041
num_examples: 9768
download_size: 57959575
dataset_size: 123942669
- config_name: CohereForAI-aya_collection-translated_adversarial_qa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 147727007
num_examples: 100000
- name: test
num_bytes: 16108000
num_examples: 10000
- name: validation
num_bytes: 14862183
num_examples: 10000
download_size: 52642775
dataset_size: 178697190
- config_name: CohereForAI-aya_collection-translated_cnn_dailymail
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 3578924407
num_examples: 1000000
- name: test
num_bytes: 415594340
num_examples: 114900
- name: validation
num_bytes: 486698663
num_examples: 133680
download_size: 2209523190
dataset_size: 4481217410
- config_name: CohereForAI-aya_collection-translated_dolly
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: gcp_source
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: alphabet
dtype: string
- name: split
dtype: string
- name: script
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 213140804
num_examples: 148080
download_size: 96189154
dataset_size: 213140804
- config_name: CohereForAI-aya_collection-translated_flan_coqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 245744048
num_examples: 64090
download_size: 124335769
dataset_size: 245744048
- config_name: CohereForAI-aya_collection-translated_flan_cot
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 634249526
num_examples: 919100
download_size: 273491678
dataset_size: 634249526
- config_name: CohereForAI-aya_collection-translated_flan_gem_wiki
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 961863533.277311
num_examples: 271470
download_size: 485152798
dataset_size: 961863533.277311
- config_name: CohereForAI-aya_collection-translated_flan_lambada
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 16531932
num_examples: 42790
download_size: 7457248
dataset_size: 16531932
- config_name: CohereForAI-aya_collection-translated_flan_qa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 2989244
num_examples: 5400
download_size: 1292664
dataset_size: 2989244
- config_name: CohereForAI-aya_collection-translated_hotpotqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 1154195031
num_examples: 3554760
- name: validation
num_bytes: 69779681
num_examples: 224000
download_size: 420699282
dataset_size: 1223974712
- config_name: CohereForAI-aya_collection-translated_joke_explaination
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 8219049
num_examples: 7540
download_size: 3600136
dataset_size: 8219049
- config_name: CohereForAI-aya_collection-translated_mintaka
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 40908047
num_examples: 140000
- name: test
num_bytes: 11646781
num_examples: 40000
- name: validation
num_bytes: 5951801
num_examples: 20000
download_size: 12723211
dataset_size: 58506629
- config_name: CohereForAI-aya_collection-translated_mlqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 331062576
num_examples: 231800
- name: validation
num_bytes: 31900260
num_examples: 22960
download_size: 146571384
dataset_size: 362962836
- config_name: CohereForAI-aya_collection-translated_nqopen
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 397677612
num_examples: 1758500
- name: validation
num_bytes: 16780970
num_examples: 72200
download_size: 136208663
dataset_size: 414458582
- config_name: CohereForAI-aya_collection-translated_paws
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 303643575
num_examples: 494010
- name: test
num_bytes: 49242541
num_examples: 80000
- name: validation
num_bytes: 49475307
num_examples: 80000
download_size: 66436419
dataset_size: 402361423
- config_name: CohereForAI-aya_collection-translated_piqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 113290227
num_examples: 161130
- name: validation
num_bytes: 12924744
num_examples: 18380
download_size: 45954644
dataset_size: 126214971
- config_name: CohereForAI-aya_collection-translated_soda
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 6230916321
num_examples: 11915820
- name: test
num_bytes: 777982873
num_examples: 1489680
- name: validation
num_bytes: 772817056
num_examples: 1463460
download_size: 2804874077
dataset_size: 7781716250
- config_name: CohereForAI-aya_collection-translated_wiki_split
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 6349516377
num_examples: 9899440
- name: test
num_bytes: 32058254
num_examples: 50000
- name: validation
num_bytes: 32284536
num_examples: 50000
download_size: 2446037624
dataset_size: 6413859167
- config_name: CohereForAI-aya_collection-translated_wikiqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 5014300
num_examples: 10400
- name: test
num_bytes: 1378807
num_examples: 2930
- name: validation
num_bytes: 685770
num_examples: 1400
download_size: 2872586
dataset_size: 7078877
- config_name: CohereForAI-aya_collection-translated_xlel_wd
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 5250663186
num_examples: 5231120
- name: test
num_bytes: 721821743
num_examples: 729740
- name: validation
num_bytes: 635907993
num_examples: 632640
download_size: 3091503409
dataset_size: 6608392922
- config_name: CohereForAI-aya_dataset
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 8314232
num_examples: 13960
- name: test
num_bytes: 246400
num_examples: 250
download_size: 3778631
dataset_size: 8560632
- config_name: CohereForAI-aya_evaluation_suite-aya_human_annotated
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 222650
num_examples: 250
download_size: 120393
dataset_size: 222650
- config_name: CohereForAI-aya_evaluation_suite-dolly_human_edited
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 188495
num_examples: 200
download_size: 100291
dataset_size: 188495
- config_name: CohereForAI-aya_evaluation_suite-dolly_machine_translated
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: test
num_bytes: 3491803
num_examples: 2000
download_size: 1762303
dataset_size: 3491803
configs:
- config_name: CohereForAI-aya_collection-aya_dataset
data_files:
- split: train
path: CohereForAI-aya_collection-aya_dataset/train-*
- config_name: CohereForAI-aya_collection-aya_human_annotated
data_files:
- split: test
path: CohereForAI-aya_collection-aya_human_annotated/test-*
- config_name: CohereForAI-aya_collection-templated_afrisenti
data_files:
- split: train
path: CohereForAI-aya_collection-templated_afrisenti/train-*
- split: test
path: CohereForAI-aya_collection-templated_afrisenti/test-*
- split: validation
path: CohereForAI-aya_collection-templated_afrisenti/validation-*
- config_name: CohereForAI-aya_collection-templated_mintaka
data_files:
- split: train
path: CohereForAI-aya_collection-templated_mintaka/train-*
- split: test
path: CohereForAI-aya_collection-templated_mintaka/test-*
- split: validation
path: CohereForAI-aya_collection-templated_mintaka/validation-*
- config_name: CohereForAI-aya_collection-templated_ntx_llm
data_files:
- split: train
path: CohereForAI-aya_collection-templated_ntx_llm/train-*
- config_name: CohereForAI-aya_collection-templated_xcsqa
data_files:
- split: validation
path: CohereForAI-aya_collection-templated_xcsqa/validation-*
- config_name: CohereForAI-aya_collection-templated_xlel_wd
data_files:
- split: train
path: CohereForAI-aya_collection-templated_xlel_wd/train-*
- split: test
path: CohereForAI-aya_collection-templated_xlel_wd/test-*
- split: validation
path: CohereForAI-aya_collection-templated_xlel_wd/validation-*
- config_name: CohereForAI-aya_collection-translated_adversarial_qa
data_files:
- split: train
path: CohereForAI-aya_collection-translated_adversarial_qa/train-*
- split: test
path: CohereForAI-aya_collection-translated_adversarial_qa/test-*
- split: validation
path: CohereForAI-aya_collection-translated_adversarial_qa/validation-*
- config_name: CohereForAI-aya_collection-translated_cnn_dailymail
data_files:
- split: train
path: CohereForAI-aya_collection-translated_cnn_dailymail/train-*
- split: test
path: CohereForAI-aya_collection-translated_cnn_dailymail/test-*
- split: validation
path: CohereForAI-aya_collection-translated_cnn_dailymail/validation-*
- config_name: CohereForAI-aya_collection-translated_dolly
data_files:
- split: train
path: CohereForAI-aya_collection-translated_dolly/train-*
- config_name: CohereForAI-aya_collection-translated_flan_coqa
data_files:
- split: train
path: CohereForAI-aya_collection-translated_flan_coqa/train-*
- config_name: CohereForAI-aya_collection-translated_flan_cot
data_files:
- split: train
path: CohereForAI-aya_collection-translated_flan_cot/train-*
- config_name: CohereForAI-aya_collection-translated_flan_gem_wiki
data_files:
- split: train
path: CohereForAI-aya_collection-translated_flan_gem_wiki/train-*
- config_name: CohereForAI-aya_collection-translated_flan_lambada
data_files:
- split: train
path: CohereForAI-aya_collection-translated_flan_lambada/train-*
- config_name: CohereForAI-aya_collection-translated_flan_qa
data_files:
- split: train
path: CohereForAI-aya_collection-translated_flan_qa/train-*
- config_name: CohereForAI-aya_collection-translated_hotpotqa
data_files:
- split: train
path: CohereForAI-aya_collection-translated_hotpotqa/train-*
- split: validation
path: CohereForAI-aya_collection-translated_hotpotqa/validation-*
- config_name: CohereForAI-aya_collection-translated_joke_explaination
data_files:
- split: train
path: CohereForAI-aya_collection-translated_joke_explaination/train-*
- config_name: CohereForAI-aya_collection-translated_mintaka
data_files:
- split: train
path: CohereForAI-aya_collection-translated_mintaka/train-*
- split: test
path: CohereForAI-aya_collection-translated_mintaka/test-*
- split: validation
path: CohereForAI-aya_collection-translated_mintaka/validation-*
- config_name: CohereForAI-aya_collection-translated_mlqa
data_files:
- split: test
path: CohereForAI-aya_collection-translated_mlqa/test-*
- split: validation
path: CohereForAI-aya_collection-translated_mlqa/validation-*
- config_name: CohereForAI-aya_collection-translated_nqopen
data_files:
- split: train
path: CohereForAI-aya_collection-translated_nqopen/train-*
- split: validation
path: CohereForAI-aya_collection-translated_nqopen/validation-*
- config_name: CohereForAI-aya_collection-translated_paws
data_files:
- split: train
path: CohereForAI-aya_collection-translated_paws/train-*
- split: test
path: CohereForAI-aya_collection-translated_paws/test-*
- split: validation
path: CohereForAI-aya_collection-translated_paws/validation-*
- config_name: CohereForAI-aya_collection-translated_piqa
data_files:
- split: train
path: CohereForAI-aya_collection-translated_piqa/train-*
- split: validation
path: CohereForAI-aya_collection-translated_piqa/validation-*
- config_name: CohereForAI-aya_collection-translated_soda
data_files:
- split: train
path: CohereForAI-aya_collection-translated_soda/train-*
- split: test
path: CohereForAI-aya_collection-translated_soda/test-*
- split: validation
path: CohereForAI-aya_collection-translated_soda/validation-*
- config_name: CohereForAI-aya_collection-translated_wiki_split
data_files:
- split: train
path: CohereForAI-aya_collection-translated_wiki_split/train-*
- split: test
path: CohereForAI-aya_collection-translated_wiki_split/test-*
- split: validation
path: CohereForAI-aya_collection-translated_wiki_split/validation-*
- config_name: CohereForAI-aya_collection-translated_wikiqa
data_files:
- split: train
path: CohereForAI-aya_collection-translated_wikiqa/train-*
- split: test
path: CohereForAI-aya_collection-translated_wikiqa/test-*
- split: validation
path: CohereForAI-aya_collection-translated_wikiqa/validation-*
- config_name: CohereForAI-aya_collection-translated_xlel_wd
data_files:
- split: train
path: CohereForAI-aya_collection-translated_xlel_wd/train-*
- split: test
path: CohereForAI-aya_collection-translated_xlel_wd/test-*
- split: validation
path: CohereForAI-aya_collection-translated_xlel_wd/validation-*
- config_name: CohereForAI-aya_dataset
data_files:
- split: train
path: CohereForAI-aya_dataset/train-*
- split: test
path: CohereForAI-aya_dataset/test-*
- config_name: CohereForAI-aya_evaluation_suite-aya_human_annotated
data_files:
- split: test
path: CohereForAI-aya_evaluation_suite-aya_human_annotated/test-*
- config_name: CohereForAI-aya_evaluation_suite-dolly_human_edited
data_files:
- split: test
path: CohereForAI-aya_evaluation_suite-dolly_human_edited/test-*
- config_name: CohereForAI-aya_evaluation_suite-dolly_machine_translated
data_files:
- split: test
path: CohereForAI-aya_evaluation_suite-dolly_machine_translated/test-*
---
# Dataset Card for : Arabic Aya (2A)
<!-- Provide a quick summary of the dataset. -->
<!-- This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).-->
## **Arabic Aya (2A) : A Curated Subset of the Aya Collection for Arabic Language Processing**
### Dataset Sources & Infos
- **Data Origin**: Derived from 69 subsets of the original Aya datasets: [CohereForAI/aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection), [CohereForAI/aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset), and [CohereForAI/aya_evaluation_suite](https://huggingface.co/datasets/CohereForAI/aya_evaluation_suite).
- **Languages**: Modern Standard Arabic (MSA) and a variety of Arabic dialects ( 'arb', 'arz', 'ary', 'ars', 'knc', 'acm', 'apc', 'aeb', 'ajp', 'acq' )
- **Applications**: `Language Modeling`, `Text Classification`, `Sentiment Analysis`, `Dialect Identification`, `Translation`
- **Paper:** [2402.06619](https://huggingface.co/papers/2402.06619)
- **Maintainer:** [Elfilali Ali](https://huggingface.co/Ali-C137)
- **License:** Apache-2.0
### Overview
`Arabic Aya` is a meticulously curated dataset derived from the comprehensive Aya collection by [CohereForAI](https://huggingface.co/CohereForAI), specifically focusing on Arabic text data. This dataset aggregates content from the [CohereForAI/aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection), [CohereForAI/aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset), and [CohereForAI/aya_evaluation_suite](https://huggingface.co/datasets/CohereForAI/aya_evaluation_suite), filtering out all but the Arabic content, including both Modern Standard Arabic (MSA) and various regional dialects.
### Purpose
The aim of 'Arabic Aya' is to provide researchers, technologists, and linguists with a ready-to-use Arabic text resource, significantly reducing the time and effort required for data preprocessing in NLP and AI projects focused on the Arabic language.
- Use the Aya datasets out of the box for your Arabic applications and research 😀
### Usage
This dataset serves as a foundational tool for those embarking on Arabic language projects, from academic research to commercial applications. By providing a pre-filtered source of Arabic text, 'Arabic Aya' enables users to dive straight into model training, analysis, and application development without the preliminary hassle of data cleaning and language filtering.
#### Use with HuggingFace's datasets library
To load this dataset with Datasets, you'll need to install Datasets as `pip install datasets --upgrade` and then use code similar to the following:
```python
from datasets import load_dataset
dataset = load_dataset("2A2I/Arabic_Aya", "CohereForAI-aya_collection-templated_mintaka")
```
In the above code snippet, "CohereForAI-aya_collection-templated_mintaka" refers to the Arabic version (100k rows) of the original "templated_mintaka" subset (780k rows) of the aya_collection. You can load other subsets by specifying their names when loading the dataset.
### Access and Contribution
Available on the Hugging Face Hub under [2A2I/Arabic_Aya](https://huggingface.co/datasets/2A2I/Arabic_Aya), 'Arabic Aya' invites contributions from the community. Users are encouraged to offer feedback and suggest improvements.
### Support and Collaboration
We are committed to fostering an inclusive and supportive environment around Arabic AI and NLP research. For support, collaboration, or queries regarding the dataset, please reach out through the Hugging Face Hub's discussion section or contact us at [2A2I Contact Email]([email protected]).
# Original Dataset Card of Aya by CohereForAI

# Dataset Summary
The Aya Collection is a massive multilingual collection consisting of 513 million instances of prompts and completions covering a wide range of tasks.
This collection incorporates instruction-style templates from fluent speakers and applies them to a curated list of datasets, as well as translations of instruction-style datasets into 101 languages. Aya Dataset, a human-curated multilingual instruction and response dataset, is also part of this collection. See our paper for more details regarding the collection.
- **Curated by:** Contributors of [Aya Open Science Initiative](https://cohere.com/research/aya)
- **Language(s):** 115 languages
- **License:** [Apache 2.0](https://opensource.org/license/apache-2-0)
- **Aya Datasets Family:**
| Name | Explanation |
|------|--------------|
| [aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) | Human-annotated multilingual instruction finetuning dataset, comprising over 204K instances across 65 languages. |
| [aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection) | Created by applying instruction-style templates from fluent speakers to 44 datasets, including translations of 19 instruction-style datasets into 101 languages.|
| [aya_evaluation_suite](https://huggingface.co/datasets/CohereForAI/aya_evaluation_suite) | A diverse evaluation set for multilingual open-ended generation, featuring 250 culturally grounded prompts in 7 languages, 200 translated prompts in 24 languages, and human-edited versions selected for cross-cultural relevance from English Dolly in 6 languages.|
# Dataset
The `Aya Collection` is a comprehensive, large corpus of datasets that can be used by researchers around the world to train multilingual models. Our goal is to include only datasets with licensing permissive enough to allow manipulation and redistribution.
The `Aya Collection` consists of three different sources of data:
1. Templated data: We collaborated with fluent speakers to create templates that allowed for the automatic expansion of existing datasets into various languages.
2. Translated data: We translated a hand-selected subset of 19 datasets into 101 languages (114 dialects) using the NLLB 3.3B parameter machine translation model.
3. Aya Dataset: We release the [Aya Dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) as a subset of the overall collection. This is the only dataset in the collection that is human-annotated in its entirety.
## Load with Datasets
To load this dataset with Datasets, you'll need to install Datasets as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
dataset = load_dataset("CohereForAI/aya_collection", "templated_mintaka")
```
In the above code snippet, "templated_mintaka" refers to a subset of the aya_collection. You can load other subsets by specifying its name at the time of loading the dataset.
## Data Instances
An example of a `train` instance looks as follows:
```json
{'id': 246001,
'inputs': 'The following query in English is taken from the geography category. What could be the answer to the question?\nWhat is the seventh tallest mountain in North America?',
'targets': 'The answer is Mount Lucania.',
'dataset_name': 'Mintaka-inst',
'sub_dataset_name': '-',
'task_type': 'question-answering',
'template_id': 3,
'language': 'eng',
'split': 'train',
'script': 'Latn'
}
```
## Data Fields
The data fields are the same among all splits:
- `id:` Unique id of the data point
- `inputs:` Prompt or input to the language model.
- `targets:` Completion or output of the language model.
- `dataset_name:` The name of the source dataset that the data point was taken from
- `sub_dataset_name:` If the source is a collection, this field indicates which part of that collection the data point was taken from. If it is not a collection, this field is left blank.
- `task_type:` The task type that this conversation belongs to.
- `template_id`: The id of the template applied to this data point.
- `language:` The ISO code of the dialect of the conversation.
- `script:` The script of the language.
- `split:` Indicates whether the data point is part of the `train` or the `test` split.
### Statistics
The total number of data points, including the Aya Dataset, is 513,758,189. To view the breakdown of dialect codes and the respective templated and translated data point counts in the Aya Collection, refer to the toggled table below.
<details>
<summary> <b> Breakdown of Aya Collection data point counts grouped by dialects </b> </summary>
|dialect code|language|translated data point count|templated data point count|total count |
|------------|--------|---------------------------|--------------------------|---------------|
|ace |Achinese|8240684 |2000 |8242684 |
|acm |Arabic |4120342 |0 |4120342 |
|acq |Arabic |4120342 |0 |4120342 |
|aeb |Arabic |4120342 |0 |4120342 |
|afr |Afrikaans|4120342 |6108 |4126450 |
|ajp |Arabic |4120342 |0 |4120342 |
|als |Albanian|4120342 |0 |4120342 |
|amh |Amharic |4120342 |25327 |4145669 |
|apc |Arabic |4120342 |0 |4120342 |
|arb |Arabic |6424999 |216430 |6641429 |
|ars |Arabic |4120342 |0 |4120342 |
|ary |Arabic |4120342 |18076 |4138418 |
|arz |Arabic |4120342 |0 |4120342 |
|azb |Azerbaijani|4120342 |0 |4120342 |
|azj |Azerbaijani|4120342 |0 |4120342 |
|bel |Belarusian|4120342 |21273 |4141615 |
|ben |Bengali |4120342 |30661 |4151003 |
|bjn |Banjar |8240684 |2000 |8242684 |
|bul |Bulgarian|4120342 |37722 |4158064 |
|cat |Catalan |4120342 |66900 |4187242 |
|ceb |Cebuano |4120342 |0 |4120342 |
|ces |Czech |4120342 |179604 |4299946 |
|ckb |Kurdish |4120342 |0 |4120342 |
|cym |Welsh |4120342 |0 |4120342 |
|dan |Danish |4120342 |36310 |4156652 |
|deu |German |4120342 |1326722 |5447064 |
|ell |Greek |4120342 |40291 |4160633 |
|eng |English |9771427 |8066678 |17838105 |
|epo |Esperanto|4120342 |0 |4120342 |
|est |Estonian|4120342 |0 |4120342 |
|eus |Basque |4120342 |0 |4120342 |
|fin |Finnish |4120342 |457895 |4578237 |
|fra |French |4120342 |835520 |4955862 |
|gla |Scottish Gaelic|4120342 |0 |4120342 |
|gle |Irish |4120342 |0 |4120342 |
|glg |Galician|4120342 |0 |4120342 |
|guj |Gujarati|4120342 |2157 |4122499 |
|hat |Haitian Creole|4120342 |0 |4120342 |
|hau |Hausa |4120342 |51396 |4171738 |
|heb |Hebrew |4120342 |103466 |4223808 |
|hin |Hindi |4120342 |260387 |4380729 |
|hun |Hungarian|4120342 |82039 |4202381 |
|hye |Armenian|4120342 |7080 |4127422 |
|ibo |Igbo |4120342 |36312 |4156654 |
|ind |Indonesian|4120342 |45709 |4166051 |
|isl |Icelandic|4120342 |0 |4120342 |
|ita |Italian |4120342 |405682 |4526024 |
|jav |Javanese|4120342 |829 |4121171 |
|jpn |Japanese|4120342 |2693177 |6813519 |
|kan |Kannada |4120342 |1156 |4121498 |
|kas |Kashmiri|4120342 |0 |4120342 |
|kat |Georgian|4120342 |0 |4120342 |
|kaz |Kazakh |4120342 |0 |4120342 |
|khk |Mongolian|4120342 |0 |4120342 |
|khm |Khmer |4120342 |0 |4120342 |
|kir |Kyrgyz |4120342 |0 |4120342 |
|kmr |Kurdish |4120342 |0 |4120342 |
|knc |Kanuri |8240684 |0 |8240684 |
|kor |Korean |4120342 |41011 |4161353 |
|lao |Lao |4120342 |0 |4120342 |
|lit |Lithuanian|4120342 |0 |4120342 |
|ltz |Luxembourgish|4120342 |0 |4120342 |
|lvs |Latvian |4120342 |0 |4120342 |
|mal |Malayalam|4120342 |4347 |4124689 |
|mar |Marathi |4120342 |3678 |4124020 |
|min |Minangkabau|6753788 |2000 |6755788 |
|mkd |Macedonian|4120342 |0 |4120342 |
|mlt |Maltese |4120342 |0 |4120342 |
|mni |Manipuri|4120342 |0 |4120342 |
|mri |Maori |4120342 |0 |4120342 |
|mya |Burmese |4120342 |0 |4120342 |
|nld |Dutch |4120342 |220181 |4340523 |
|nno |Norwegian|4120342 |0 |4120342 |
|nob |Norwegian|4120342 |0 |4120342 |
|npi |Nepali |4120342 |0 |4120342 |
|nso |Northern Sotho|4120342 |0 |4120342 |
|pbt |Pashto |4120342 |0 |4120342 |
|pes |Persian |4120342 |245520 |4365862 |
|plt |Malagasy|4120342 |0 |4120342 |
|pol |Polish |4120342 |332503 |4452845 |
|por |Portuguese|4120342 |287432 |4407774 |
|ron |Romanian|4120342 |36359 |4156701 |
|rus |Russian |4120342 |545920 |4666262 |
|sin |Sinhala |4120342 |195 |4120537 |
|slk |Slovak |4120342 |27845 |4148187 |
|slv |Slovenian|4120342 |25731 |4146073 |
|smo |Samoan |4120342 |0 |4120342 |
|sna |Shona |4120342 |3684 |4124026 |
|snd |Sindhi |4120342 |0 |4120342 |
|som |Somali |4120342 |2926 |4123268 |
|sot |Southern Sotho|4120342 |0 |4120342 |
|spa |Spanish |4120342 |379194 |4499536 |
|srp |Serbian |4120342 |77124 |4197466 |
|sun |Sundanese|4120342 |2208 |4122550 |
|swe |Swedish |4120342 |76486 |4196828 |
|swh |Swahili |4120342 |12726 |4133068 |
|tam |Tamil |4120342 |11462 |4131804 |
|taq |Tamasheq|4120342 |0 |4120342 |
|tel |Telugu |4120342 |477821 |4598163 |
|tgk |Tajik |4120342 |0 |4120342 |
|tha |Thai |4120342 |2125180 |6245522 |
|tur |Turkish |4120342 |59932 |4180274 |
|ukr |Ukrainian|4120342 |189384 |4309726 |
|urd |Urdu |4120342 |337739 |4458081 |
|uzn |Uzbek |4120342 |0 |4120342 |
|vie |Vietnamese|4120342 |42232 |4162574 |
|xho |Xhosa |4120342 |2952 |4123294 |
|ydd |Yiddish |4120342 |0 |4120342 |
|yor |Yoruba |4120342 |4907 |4125249 |
|yue |Chinese |4120342 |0 |4120342 |
|zho-Hans |Chinese |4120342 |54528 |4174870 |
|zho-Hant |Chinese |4120342 |0 |4120342 |
|zsm |Malay |4120342 |13950 |4134292 |
|zul |Zulu |4120342 |786 |4121128 |
|arq |Arabic |0 |6046 |6046 |
|ban |Balinese|0 |2000 |2000 |
|bbc |Toba Batak|0 |2000 |2000 |
|bem |Bemba |0 |776 |776 |
|fil |Filipino|0 |220 |220 |
|fon |Fon |0 |845 |845 |
|hrv |Croatian|0 |9007 |9007 |
|kin |Kinyarwanda|0 |11165 |11165 |
|lij |Ligurian|0 |6409 |6409 |
|mad |Madurese|0 |2000 |2000 |
|nij |Ngaju |0 |2000 |2000 |
|nor |Norwegian|0 |72352 |72352 |
|pan |Punjabi |0 |2156 |2156 |
|twi |Twi |0 |10840 |10840 |
|wol |Wolof |0 |785 |785 |
|zho |Chinese |0 |74972 |74972 |
PS: Templated data also includes Mozambican Portuguese, which doesn't have its own ISO language code.
</details>
<br>
# Motivations & Intentions
- **Curation Rationale:** Automatic augmentation of existing datasets serves to enhance the available linguistic resources for multiple languages. The list of languages was initially established from mT5 and aligned with the annotators’ language list and NLLB translation model. The datasets were translated directly from English for all languages.
# Additional Information
## Provenance
- **Methods Used:** A combination of crowd-sourced templating and automatic translation was employed to source this dataset.
- **Methodology Details:**
- *Source:* Existing NLP datasets
- *Dates of Collection:* May 2023 - Dec 2023
## Dataset Version and Maintenance
- **Maintenance Status:** Actively Maintained
- **Version Details:**
- *Current version:* 1.0
- *Last Update:* 02/2024
- *First Release:* 02/2024
## Authorship
- **Publishing Organization:** [Cohere For AI](https://cohere.com/research)
- **Industry Type:** Not-for-profit - Tech
- **Contact Details:** https://cohere.com/research/aya
## Licensing Information
This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Apache 2.0](https://opensource.org/license/apache-2-0) License.
## Citation Information
```bibtex
@misc{singh2024aya,
title={Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning},
author={Shivalika Singh and Freddie Vargus and Daniel Dsouza and Börje F. Karlsson and Abinaya Mahendiran and Wei-Yin Ko and Herumb Shandilya and Jay Patel and Deividas Mataciunas and Laura OMahony and Mike Zhang and Ramith Hettiarachchi and Joseph Wilson and Marina Machado and Luisa Souza Moura and Dominik Krzemiński and Hakimeh Fadaei and Irem Ergün and Ifeoma Okoh and Aisha Alaagib and Oshan Mudannayake and Zaid Alyafeai and Vu Minh Chien and Sebastian Ruder and Surya Guthikonda and Emad A. Alghamdi and Sebastian Gehrmann and Niklas Muennighoff and Max Bartolo and Julia Kreutzer and Ahmet Üstün and Marzieh Fadaee and Sara Hooker},
year={2024},
eprint={2402.06619},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
anon8231489123/ShareGPT_Vicuna_unfiltered | anon8231489123 | "2023-04-12T05:23:59Z" | 19,794 | 770 | [
"language:en",
"license:apache-2.0",
"region:us"
] | null | "2023-04-02T05:30:31Z" | ---
license: apache-2.0
language:
- en
---
**Further cleaning done. Please look through the dataset and ensure that I didn't miss anything.**
**Update: Confirmed working method for training the model: https://huggingface.co/AlekseyKorshuk/vicuna-7b/discussions/4#64346c08ef6d5abefe42c12c**
Two choices:
- Removes instances of "I'm sorry, but": https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split_no_imsorry.json
- Has instances of "I'm sorry, but": https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split.json
The choice is yours. The first dataset may go too far and remove valuable data. The second is better for when the AI asks for clarification, but it also may refuse to do things like browse the internet, which it actually may be able to do with certain langchain implementations. These are important things to think about before training.
~100k ShareGPT conversations narrowed down to 53k by:
* Removing non-english conversations
* Removing excessive unicode (indicative of Chinese or Korean text, usually)
* Removing excessive repeated characters
* Removing various instances of "AI Moralizing" (see the sketch after this list). Conversations with these phrases were removed (and a few others that can't be mentioned here):
"text-based AI language model",
"domestic violence",
"please refrain",
"derogatory",
"inappropriate",
"offensive",
"racism",
"racist",
"racial",
"discriminate",
"discriminatory",
"discrimination",
"sexist",
"sexism",
"unacceptable",
"inclusive workplace",
"lgbt",
"morals",
"ethics",
"ethical",
"legality",
"illegal",
"illegality",
"hateful",
"harmful",
"it is never okay",
"It is important to",
"It's important to",
"real-world consequences",
"hate speech",
"glorify",
"not be appropriate",
"supremacist",
"extremist",
"responsible AI",
"AI principles",
"AI assistant",
"an AI language",
"ableist",
"hurtful",
"gender stereotype",
"gender inequality",
"underrepresentation",
"safe spaces",
"gender-based",
"inclusivity",
"feminist",
"feminism",
"transgender",
"empowerment",
"communist",
"capitalism",
"stereotypes",
"biases",
"bias",
"Microaggression",
"prioritize human safety",
"as a language model",
"as an AI language model",
"As a large language model",
"As an AI",
"ethical principles",
"consensual",
"it is not appropriate",
"it's not appropriate",
"I cannot fulfill your request",
"harmful to human beings",
"ethical guidelines",
"my guidelines",
"prioritize user safety",
"adhere to ethical guidelines",
"harmful consequences",
"potentially harmful",
"dangerous activities",
"promote safety",
"well-being of all users",
"responsible information sharing",
"jeopardize the safety",
"illegal actions or intentions",
"undermine the stability",
"promote the well-being",
"illegal activities or actions",
"adherence to the law",
"potentially be harmful",
"illegal substances or activities",
"committed to promoting",
"safe information",
"lawful information",
"cannot provide guidance",
"cannot provide information",
"unable to offer assistance",
"cannot engage in discussions",
"programming prohibits",
"follow ethical guidelines",
"ensure the safety",
"involves an illegal subject",
"prioritize safety",
"illegal subject",
"prioritize user well-being",
"cannot support or promote",
"activities that could harm",
"pose a risk to others",
"against my programming",
"activities that could undermine",
"potentially dangerous",
"not within the scope",
"designed to prioritize safety",
"not able to provide",
"maintain user safety",
"adhere to safety guidelines",
"dangerous or harmful",
"cannot provide any information",
"focus on promoting safety"
* Conversations split into 2048 token chunks as described here: https://github.com/lm-sys/FastChat/blob/main/docs/commands/data_cleaning.md
This should be fully ready to train an unfiltered English Vicuna model based on the procedure here: https://github.com/lm-sys/FastChat/
inductiva/windtunnel-20k | inductiva | "2024-10-02T11:49:47Z" | 19,794 | 5 | [
"task_categories:feature-extraction",
"task_categories:graph-ml",
"task_categories:image-to-3d",
"language:en",
"size_categories:10K<n<100K",
"modality:3d",
"modality:tabular",
"modality:text",
"region:us",
"simulation",
"openfoam",
"physics",
"windtunnel",
"inductiva",
"machine learning",
"synthetic"
] | [
"feature-extraction",
"graph-ml",
"image-to-3d"
] | "2024-09-26T09:06:47Z" | ---
pretty_name: Wind Tunnel 20K Dataset
size_categories:
- 10K<n<100K
task_categories:
- feature-extraction
- graph-ml
- image-to-3d
language:
- en
tags:
- simulation
- openfoam
- physics
- windtunnel
- inductiva
- machine learning
- synthetic
---
<p align="center">
<img src="https://huggingface.co/datasets/inductiva/windtunnel/resolve/main/pressure_field_mesh.png", width="500px">
</p>
# Wind Tunnel Dataset
The **Wind Tunnel Dataset** contains **19,812 OpenFOAM simulations** of **1,000 unique automobile-like objects** placed in a virtual wind tunnel measuring **20 meters long, 10 meters wide, and 8 meters high.**
Each object was tested under **20 different conditions**: 4 random wind speeds ranging from **10 to 50 m/s**, and 5 rotation angles (**0°**, **180°** and **3 random angles**).
The object meshes were generated using [Instant Mesh](https://github.com/TencentARC/InstantMesh) based on images sourced from the [Stanford Cars Dataset](https://www.kaggle.com/datasets/jessicali9530/stanford-cars-dataset). To make sure the results are stable and reliable, each simulation runs for **300 iterations**.
The entire dataset is organized into three subsets: **70% for training, 20% for validation, and 10% for testing.**
The data generation process itself was orchestrated using the [Inductiva API](https://inductiva.ai/),
which allowed us to run hundreds of OpenFOAM simulations in parallel on the cloud.
## Motivation: Addressing the Data Gap in CFD
Recently, there’s been a lot of interest in using **machine learning (ML)** to speed up **CFD simulations**. Research has shown that for well-defined scenarios—like a virtual wind tunnel—you can train an ML model to “predict” the results of a simulation **much faster** than traditional methods, while still keeping the accuracy close to what you’d expect from classical simulations.
That said, the **ML/CFD communities** are still lacking enough **training data** for their research. We’ve identified two main reasons for this.
First, there’s a shortage of datasets with **high-quality 3D meshes** needed for running CFD simulations. Existing 3D object datasets have a lot of limitations: they’re either too small, closed-source, or have low-quality meshes. Without this input data, it’s been really hard to generate large-scale training datasets for realistic CFD scenarios, which almost always involve 3D meshes.
Second, even if you had all the 3D meshes you needed, setting up and running thousands of **CFD simulations** to generate a large, diverse dataset isn’t easy. To create a dataset like this, you’d need to define an initial simulation scenario (like the wind tunnel setup) and then run enough variations—different meshes, wind speeds, and so on—to cover a wide range of data points for training a robust **ML model**.
The problem is, running a single CFD simulation can be tricky enough with most software. Orchestrating **thousands of simulations** and handling all the resulting data? That’s a whole new level of challenge.
While both of these problems are difficult to solve in general, we decided to focus on one common CFD scenario: a **virtual wind tunnel** for **static automobiles**. Using the popular **OpenFOAM** simulation package, we produced a large dataset of CFD simulations.
Next, we’ll explain how we tackled the challenges of generating the data and orchestrating the simulations.
## Generating a Large Quantity of Automobile-like 3D Meshes
Since there aren’t many publicly available 3D meshes of automobiles, we decided to use recent image-to-mesh models to generate meshes from freely available car images.
We specifically used the open-source [InstantMesh](https://github.com/TencentARC/InstantMesh) model (Apache-2.0), which is currently state-of-the-art in **image-to-mesh generation**. We generated the automobile-like meshes by running Instant Mesh on **1,000 images** from the publicly available [Stanford Cars Dataset](https://www.kaggle.com/datasets/jessicali9530/stanford-cars-dataset) (Apache-2.0), which contains 16,185 images of automobiles.
Running the image-to-mesh model naturally results in some defects, like irregular surfaces, asymmetry, holes, and disconnected components. To address these issues, we implemented a custom post-processing step to improve mesh quality. We used **PCA** to align the meshes with the main axes and removed any disconnected components.
The resulting set of meshes still contains minor defects, like “spikes” or “cavities” in flat areas, unexpected holes, and asymmetry issues. However, we see these imperfections as valuable features of the dataset. From a machine learning perspective, they bring challenges that can help prevent overfitting and contribute to building more robust and generalizable models.
## Orchestrating 20k Cloud Simulations—Using Just Python
To tackle the challenge of orchestrating **20,000 OpenFOAM simulations**, we resorted to the **Inductiva API**. The Inductiva platform offers a simple Python API for running simulation workflows in the cloud and supports several popular open-source packages, including **OpenFOAM**. Here’s an [example](https://docs.inductiva.ai/en/latest/simulators/OpenFOAM.html) of how to run an OpenFOAM simulation using Inductiva.
With the Inductiva API, it’s easy to parameterize specific simulation scenarios and run variations of a base case by programmatically adjusting the input parameters and starting conditions of the simulation. More details [here](#OpenFoam-Parameters). Additionally, users can create custom Python classes that wrap these parameterized simulations, providing a simple Python interface for running simulations—no need to interact directly with the underlying simulation packages.
We used the [Inductiva API to create a Python class for the Wind Tunnel scenario](https://github.com/inductiva/wind-tunnel), which allowed us to run **20,000 simulations** across a range of input parameters.
For more on how to transform your complex simulation workflows into easy-to-use Python classes, we wrote a [blog post](https://inductiva.ai/blog/article/transform-complex-simulations) all about it.
## How Did We Generate the Dataset?
1. **Generate Input Meshes**: We first generated input meshes using the **InstantMesh model** with images from the **Stanford Cars Dataset**, followed by post-processing to improve mesh quality.
2. **Run OpenFOAM Simulations**: Using the Inductiva API, we ran OpenFOAM simulations on the input meshes under different wind speeds and angles. The result is an output mesh `openfoam_mesh.obj` that contains all the relevant simulation data.
3. **Post-process OpenFOAM Output**: We post-processed the OpenFOAM output to generate streamlines and pressure map meshes.
The code we used to generate and post-process the meshes is available on [GitHub](https://github.com/inductiva/datasets-generation/tree/main/windtunnel_dataset).
## Dataset Structure
```
data
├── train
│ ├── <SIMULATION_ID>
│ │ ├── input_mesh.obj
│ │ ├── openfoam_mesh.obj
│ │ ├── pressure_field_mesh.vtk
│ │ ├── simulation_metadata.json
│ │ └── streamlines_mesh.ply
│ └── ...
├── validation
│ └── ...
└── test
└── ...
```
## Dataset Files
Each simulation in the Wind Tunnel Dataset is accompanied by several key files that provide both the input and the output data of the simulations.
Here’s a breakdown of the files included in each simulation:
- **[input_mesh.obj](#input_meshobj)**: OBJ file with the input mesh.
- **[openfoam_mesh.obj](#openfoam_meshobj)**: OBJ file with the OpenFOAM mesh.
- **[pressure_field_mesh.vtk](#pressure_field_meshvtk)**: VTK file with the pressure field data.
- **[streamlines_mesh.ply](#streamlines_meshply)**: PLY file with the streamlines.
- **[metadata.json](#metadatajson)**: JSON file with metadata about the input parameters, some output results such as the force coefficients (obtained via simulation), and the paths of the output files.
### input_mesh.obj
The input mesh, generated with the InstantMesh model from images in the Stanford Cars Dataset, serves as the starting point for the OpenFOAM simulation.
Details on the mesh generation process can be found [here](#Generating-a-large-quantity-of-Automobile-like-3D-Meshes).
| **Input Mesh** | **# points of input meshes** |
|-------------------------------|------------------------------|
|  |  |
```python
import pyvista as pv
# Load the mesh
mesh_path = "input_mesh.obj"
mesh = pv.read(mesh_path)
# Get the vertices (points)
vertices = mesh.points
# Get the faces (connections)
# The faces array contains the number of vertices per face followed by the vertex indices.
# For example: [3, v1, v2, v3, 3, v4, v5, v6, ...] where 3 means a triangle.
faces = mesh.faces
```
### openfoam_mesh.obj
This mesh is the result of the OpenFOAM simulation. The number of points is reduced compared to the `input_mesh.obj` due to mesh refinement and processing steps applied by OpenFOAM during the simulation.
| **OpenFoam Mesh** | **# points of OpenFoam meshes** |
|-------------------------------|------------------------------|
|  |  |
```python
import pyvista as pv
# Load the mesh
mesh_path = "openfoam_mesh.obj"
mesh = pv.read(mesh_path)
# Get the vertices (points)
vertices = mesh.points
# Get the faces (connections)
# The faces array contains the number of vertices per face followed by the vertex indices.
# For example: [3, v1, v2, v3, 3, v4, v5, v6, ...] where 3 means a triangle.
faces = mesh.faces
```
### pressure_field_mesh.vtk
Pressure values were extracted from the `openfoam_mesh.obj` and interpolated onto the `input_mesh.obj` using the closest-point strategy. This approach allowed us to project the pressure values onto a higher-resolution mesh. As shown in the histogram, the point distribution matches that of the `input_mesh.obj`.
More details can be found [here](https://github.com/inductiva/wind-tunnel/blob/deab68a018531ff05d0d8ef9d63d8c108800f78f/windtunnel/windtunnel_outputs.py#L111).
| **Pressure Field Mesh** | **# points of Pressure Field meshes** |
|-------------------------------|------------------------------|
|  | ) |
```python
import pyvista as pv
mesh_path = "pressure_field_mesh.vtk"
mesh = pv.read(mesh_path)
# The vertices array contains the coordinates of each point in the mesh.
vertices = mesh.points
# The faces array contains the number of vertices per face followed by the vertex indices.
# For example: [3, v1, v2, v3, 3, v4, v5, v6, ...] where 3 means a triangle.
faces = mesh.faces
# Get the pressure data (scalar named "p")
# This retrieves the pressure values associated with each vertex in the mesh.
pressure_data = mesh.point_data["p"]
```
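For illustration, here is a minimal sketch of the closest-point mapping described above (the pressure values below are stand-ins; in the real pipeline they come from the OpenFOAM output linked above):

```python
import numpy as np
import pyvista as pv
from scipy.spatial import cKDTree

coarse = pv.read("openfoam_mesh.obj")   # mesh produced by OpenFOAM
fine = pv.read("input_mesh.obj")        # higher-resolution input mesh

# Stand-in for the per-point pressure produced by OpenFOAM (hypothetical values)
coarse_pressure = np.random.default_rng(0).normal(size=coarse.n_points)

# For every point of the fine mesh, find the closest point of the coarse mesh
# and copy its pressure value over.
_, nearest = cKDTree(coarse.points).query(fine.points)
fine.point_data["p"] = coarse_pressure[nearest]
fine.save("pressure_field_mesh_reconstructed.vtk")
```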
### streamlines_mesh.ply
Streamlines visually represent the flow characteristics within the simulation, illustrating how air flows around the object.
More information can be found [here](https://github.com/inductiva/wind-tunnel/blob/deab68a018531ff05d0d8ef9d63d8c108800f78f/windtunnel/windtunnel_outputs.py#L70).
| **Streamlines Mesh** | **# points of streamlines meshes** |
|-------------------------------|------------------------------|
|  |  |
### metadata.json
This file contains metadata related to the simulation, including input parameters such as `wind_speed`, `rotate_angle`, `num_iterations`, and `resolution`. Additionally, it includes output parameters like `drag_coefficient`, `moment_coefficient`, `lift_coefficient`, `front_lift_coefficient`, and `rear_lift_coefficient`. The file also specifies the locations of the generated output meshes.
```json
{
"id": "1w63au1gpxgyn9kun5q9r7eqa",
"object_file": "object_24.obj",
"wind_speed": 35,
"rotate_angle": 332,
"num_iterations": 300,
"resolution": 5,
"drag_coefficient": 0.8322182,
"moment_coefficient": 0.3425206,
"lift_coefficient": 0.1824983,
"front_lift_coefficient": 0.4337698,
"rear_lift_coefficient": -0.2512715,
"input_mesh_path": "data/train/1w63au1gpxgyn9kun5q9r7eqa/input_mesh.obj",
"openfoam_mesh_path": "data/train/1w63au1gpxgyn9kun5q9r7eqa/openfoam_mesh.obj",
"pressure_field_mesh_path": "data/train/1w63au1gpxgyn9kun5q9r7eqa/pressure_field_mesh.vtk",
"streamlines_mesh_path": "data/train/1w63au1gpxgyn9kun5q9r7eqa/streamlines_mesh.ply"
}
```
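A minimal sketch for reading this metadata in Python (the path below reuses the example simulation ID above; inside each simulation folder the file is named `simulation_metadata.json`):

```python
import json

# Load one simulation's metadata and inspect a few fields
with open("data/train/1w63au1gpxgyn9kun5q9r7eqa/simulation_metadata.json") as f:
    metadata = json.load(f)

print("wind speed:", metadata["wind_speed"])
print("drag coefficient:", metadata["drag_coefficient"])
```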
### Dataset Statistics
The dataset includes **19,812 valid samples** out of **20,000 simulations**, with [188 simulations failing](https://huggingface.co/datasets/inductiva/windtunnel/resolve/main/failed_tasks.txt) due to numerical errors in OpenFOAM.
The full dataset requires about **300 GB** of storage, but you can also download smaller portions if needed.
## Downloading the Dataset
To download the dataset, you’ll need to install the [Datasets package](https://huggingface.co/docs/datasets/en/index) from Hugging Face (installing it also pulls in `huggingface_hub`, which the first method below relies on):
```bash
pip install datasets
```
### 1. Using snapshot_download()
```python
import huggingface_hub
dataset_name = "inductiva/windtunnel-20k"
# Download the entire dataset
huggingface_hub.snapshot_download(repo_id=dataset_name, repo_type="dataset")
# Download to a specific local directory
huggingface_hub.snapshot_download(
repo_id=dataset_name, repo_type="dataset", local_dir="local_folder"
)
# Download only the simulation metadata across all simulations
huggingface_hub.snapshot_download(
repo_id=dataset_name,
repo_type="dataset",
local_dir="local_folder",
allow_patterns=["*/*/*/simulation_metadata.json"]
)
```
### 2. Using load_dataset()
```python
import datasets
# Load the dataset (streaming is supported)
dataset = datasets.load_dataset("inductiva/windtunnel-20k", streaming=False)
# Display dataset information
print(dataset)
# Access a sample from the training set
sample = dataset["train"][0]
print("Sample from training set:", sample)
```
## OpenFoam Parameters
We used the [Inductiva Template Manager](https://tutorials.inductiva.ai/intro_to_api/templating.html) to **parameterize** the OpenFoam configuration files.
Below are some snippets from the templates used in the wind tunnel simulations.
initialConditions.jinja
```jinja
flowVelocity ({{ wind_speed }} 0 0);
```
controlDict.jinja
```jinja
endTime {{ num_iterations }};
```
forceCoeffs.jinja
```jinja
magUInf {{ wind_speed }};
lRef {{ length }}; // Wheelbase length
Aref {{ area }}; // Estimated
```
snappyHexMeshDict.jinja
```jinja
geometry
{
object
{
type triSurfaceMesh;
file "object.obj";
}
refinementBox
{
type searchableBox;
min ({{ x_min }} {{ y_min }} {{ z_min }});
max ({{ x_max }} {{ y_max }} {{ z_max }});
}
};
features
(
{
file "object.eMesh";
level {{ resolution + 1 }};
}
);
refinementSurfaces
{
object
{
// Surface-wise min and max refinement level
level ({{ resolution }} {{ resolution + 1 }});
}
}
refinementRegions
{
refinementBox
{
mode inside;
levels ((1E15 {{ resolution - 1 }}));
}
}
locationInMesh ({{ x_min }} {{ y_min }} {{ z_min }});
```
You can find the full OpenFOAM configuration on GitHub: [https://github.com/inductiva/wind-tunnel/tree/main/windtunnel/templates](https://github.com/inductiva/wind-tunnel/tree/main/windtunnel/templates)
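For illustration only, this is roughly what the substitution amounts to with plain `jinja2` (the actual rendering is handled by the Inductiva Template Manager, not by this snippet):

```python
import jinja2

# Substitute a parameter value into a template line like the one in
# initialConditions.jinja above.
template = jinja2.Template("flowVelocity ({{ wind_speed }} 0 0);")
print(template.render(wind_speed=35))  # -> flowVelocity (35 0 0);
```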
## What's Next?
If you encounter any issues with this dataset, feel free to reach out at [[email protected]](mailto:[email protected]).
If you spot any problematic meshes, let us know so we can fix them in the next version of the **Windtunnel-20k dataset**.
To learn more about how we created this dataset—or how you can generate synthetic datasets for Physics-AI models—check out our well-tested [4-step recipe for generating synthetic data](https://inductiva.ai/blog/article/supercharge-your-physics-ml-with-inductivas-cloud-based-simulation-api) or discover how to [transform your own complex simulation workflows into easy-to-use Python classes](https://inductiva.ai/blog/article/transform-complex-simulations).
You may also be interested in reading our blog post, [The 3D Mesh Resolution Threshold - 5k Points is All You Need!](https://inductiva.ai/blog/article/5k-points-is-all-you-need), where we explore just how much you can reduce the level of detail in a 3D object while still maintaining accurate aerodynamic results in a virtual wind tunnel built with OpenFOAM.
|
wecover/OPUS_TED2020 | wecover | "2024-01-31T11:42:51Z" | 19,443 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-01-31T07:37:04Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: '*/*/train.parquet'
- split: valid
path: '*/*/valid.parquet'
- split: test
path: '*/*/test.parquet'
- config_name: af
data_files:
- split: train
path: '*/*af*/train.parquet'
- split: test
path: '*/*af*/test.parquet'
- split: valid
path: '*/*af*/valid.parquet'
- config_name: am
data_files:
- split: train
path: '*/*am*/train.parquet'
- split: test
path: '*/*am*/test.parquet'
- split: valid
path: '*/*am*/valid.parquet'
- config_name: ar
data_files:
- split: train
path: '*/*ar*/train.parquet'
- split: test
path: '*/*ar*/test.parquet'
- split: valid
path: '*/*ar*/valid.parquet'
- config_name: az
data_files:
- split: train
path: '*/*az*/train.parquet'
- split: test
path: '*/*az*/test.parquet'
- split: valid
path: '*/*az*/valid.parquet'
- config_name: be
data_files:
- split: train
path: '*/*be*/train.parquet'
- split: test
path: '*/*be*/test.parquet'
- split: valid
path: '*/*be*/valid.parquet'
- config_name: bg
data_files:
- split: train
path: '*/*bg*/train.parquet'
- split: test
path: '*/*bg*/test.parquet'
- split: valid
path: '*/*bg*/valid.parquet'
- config_name: bn
data_files:
- split: train
path: '*/*bn*/train.parquet'
- split: test
path: '*/*bn*/test.parquet'
- split: valid
path: '*/*bn*/valid.parquet'
- config_name: bs
data_files:
- split: train
path: '*/*bs*/train.parquet'
- split: test
path: '*/*bs*/test.parquet'
- split: valid
path: '*/*bs*/valid.parquet'
- config_name: ca
data_files:
- split: train
path: '*/*ca*/train.parquet'
- split: test
path: '*/*ca*/test.parquet'
- split: valid
path: '*/*ca*/valid.parquet'
- config_name: cs
data_files:
- split: train
path: '*/*cs*/train.parquet'
- split: test
path: '*/*cs*/test.parquet'
- split: valid
path: '*/*cs*/valid.parquet'
- config_name: da
data_files:
- split: train
path: '*/*da*/train.parquet'
- split: test
path: '*/*da*/test.parquet'
- split: valid
path: '*/*da*/valid.parquet'
- config_name: de
data_files:
- split: train
path: '*/*de*/train.parquet'
- split: test
path: '*/*de*/test.parquet'
- split: valid
path: '*/*de*/valid.parquet'
- config_name: el
data_files:
- split: train
path: '*/*el*/train.parquet'
- split: test
path: '*/*el*/test.parquet'
- split: valid
path: '*/*el*/valid.parquet'
- config_name: en
data_files:
- split: train
path: '*/*en*/train.parquet'
- split: test
path: '*/*en*/test.parquet'
- split: valid
path: '*/*en*/valid.parquet'
- config_name: eo
data_files:
- split: train
path: '*/*eo*/train.parquet'
- split: test
path: '*/*eo*/test.parquet'
- split: valid
path: '*/*eo*/valid.parquet'
- config_name: es
data_files:
- split: train
path: '*/*es*/train.parquet'
- split: test
path: '*/*es*/test.parquet'
- split: valid
path: '*/*es*/valid.parquet'
- config_name: et
data_files:
- split: train
path: '*/*et*/train.parquet'
- split: test
path: '*/*et*/test.parquet'
- split: valid
path: '*/*et*/valid.parquet'
- config_name: eu
data_files:
- split: train
path: '*/*eu*/train.parquet'
- split: test
path: '*/*eu*/test.parquet'
- split: valid
path: '*/*eu*/valid.parquet'
- config_name: fa
data_files:
- split: train
path: '*/*fa*/train.parquet'
- split: test
path: '*/*fa*/test.parquet'
- split: valid
path: '*/*fa*/valid.parquet'
- config_name: fi
data_files:
- split: train
path: '*/*fi*/train.parquet'
- split: test
path: '*/*fi*/test.parquet'
- split: valid
path: '*/*fi*/valid.parquet'
- config_name: fr
data_files:
- split: train
path: '*/*fr*/train.parquet'
- split: test
path: '*/*fr*/test.parquet'
- split: valid
path: '*/*fr*/valid.parquet'
- config_name: ga
data_files:
- split: train
path: '*/*ga*/train.parquet'
- split: test
path: '*/*ga*/test.parquet'
- split: valid
path: '*/*ga*/valid.parquet'
- config_name: gl
data_files:
- split: train
path: '*/*gl*/train.parquet'
- split: test
path: '*/*gl*/test.parquet'
- split: valid
path: '*/*gl*/valid.parquet'
- config_name: he
data_files:
- split: train
path: '*/*he*/train.parquet'
- split: test
path: '*/*he*/test.parquet'
- split: valid
path: '*/*he*/valid.parquet'
- config_name: hi
data_files:
- split: train
path: '*/*hi*/train.parquet'
- split: test
path: '*/*hi*/test.parquet'
- split: valid
path: '*/*hi*/valid.parquet'
- config_name: hr
data_files:
- split: train
path: '*/*hr*/train.parquet'
- split: test
path: '*/*hr*/test.parquet'
- split: valid
path: '*/*hr*/valid.parquet'
- config_name: hu
data_files:
- split: train
path: '*/*hu*/train.parquet'
- split: test
path: '*/*hu*/test.parquet'
- split: valid
path: '*/*hu*/valid.parquet'
- config_name: hy
data_files:
- split: train
path: '*/*hy*/train.parquet'
- split: test
path: '*/*hy*/test.parquet'
- split: valid
path: '*/*hy*/valid.parquet'
- config_name: id
data_files:
- split: train
path: '*/*id*/train.parquet'
- split: test
path: '*/*id*/test.parquet'
- split: valid
path: '*/*id*/valid.parquet'
- config_name: is
data_files:
- split: train
path: '*/*is*/train.parquet'
- split: test
path: '*/*is*/test.parquet'
- split: valid
path: '*/*is*/valid.parquet'
- config_name: it
data_files:
- split: train
path: '*/*it*/train.parquet'
- split: test
path: '*/*it*/test.parquet'
- split: valid
path: '*/*it*/valid.parquet'
- config_name: ja
data_files:
- split: train
path: '*/*ja*/train.parquet'
- split: test
path: '*/*ja*/test.parquet'
- split: valid
path: '*/*ja*/valid.parquet'
- config_name: ka
data_files:
- split: train
path: '*/*ka*/train.parquet'
- split: test
path: '*/*ka*/test.parquet'
- split: valid
path: '*/*ka*/valid.parquet'
- config_name: kk
data_files:
- split: train
path: '*/*kk*/train.parquet'
- split: test
path: '*/*kk*/test.parquet'
- split: valid
path: '*/*kk*/valid.parquet'
- config_name: km
data_files:
- split: train
path: '*/*km*/train.parquet'
- split: test
path: '*/*km*/test.parquet'
- split: valid
path: '*/*km*/valid.parquet'
- config_name: kn
data_files:
- split: train
path: '*/*kn*/train.parquet'
- split: test
path: '*/*kn*/test.parquet'
- split: valid
path: '*/*kn*/valid.parquet'
- config_name: ko
data_files:
- split: train
path: '*/*ko*/train.parquet'
- split: test
path: '*/*ko*/test.parquet'
- split: valid
path: '*/*ko*/valid.parquet'
- config_name: ku
data_files:
- split: train
path: '*/*ku*/train.parquet'
- split: test
path: '*/*ku*/test.parquet'
- split: valid
path: '*/*ku*/valid.parquet'
- config_name: lo
data_files:
- split: train
path: '*/*lo*/train.parquet'
- split: test
path: '*/*lo*/test.parquet'
- split: valid
path: '*/*lo*/valid.parquet'
- config_name: lt
data_files:
- split: train
path: '*/*lt*/train.parquet'
- split: test
path: '*/*lt*/test.parquet'
- split: valid
path: '*/*lt*/valid.parquet'
- config_name: mg
data_files:
- split: train
path: '*/*mg*/train.parquet'
- split: test
path: '*/*mg*/test.parquet'
- split: valid
path: '*/*mg*/valid.parquet'
- config_name: mk
data_files:
- split: train
path: '*/*mk*/train.parquet'
- split: test
path: '*/*mk*/test.parquet'
- split: valid
path: '*/*mk*/valid.parquet'
- config_name: ml
data_files:
- split: train
path: '*/*ml*/train.parquet'
- split: test
path: '*/*ml*/test.parquet'
- split: valid
path: '*/*ml*/valid.parquet'
- config_name: mn
data_files:
- split: train
path: '*/*mn*/train.parquet'
- split: test
path: '*/*mn*/test.parquet'
- split: valid
path: '*/*mn*/valid.parquet'
- config_name: mr
data_files:
- split: train
path: '*/*mr*/train.parquet'
- split: test
path: '*/*mr*/test.parquet'
- split: valid
path: '*/*mr*/valid.parquet'
- config_name: ms
data_files:
- split: train
path: '*/*ms*/train.parquet'
- split: test
path: '*/*ms*/test.parquet'
- split: valid
path: '*/*ms*/valid.parquet'
- config_name: my
data_files:
- split: train
path: '*/*my*/train.parquet'
- split: test
path: '*/*my*/test.parquet'
- split: valid
path: '*/*my*/valid.parquet'
- config_name: ne
data_files:
- split: train
path: '*/*ne*/train.parquet'
- split: test
path: '*/*ne*/test.parquet'
- split: valid
path: '*/*ne*/valid.parquet'
- config_name: nl
data_files:
- split: train
path: '*/*nl*/train.parquet'
- split: test
path: '*/*nl*/test.parquet'
- split: valid
path: '*/*nl*/valid.parquet'
- config_name: pl
data_files:
- split: train
path: '*/*pl*/train.parquet'
- split: test
path: '*/*pl*/test.parquet'
- split: valid
path: '*/*pl*/valid.parquet'
- config_name: pt
data_files:
- split: train
path: '*/*pt*/train.parquet'
- split: test
path: '*/*pt*/test.parquet'
- split: valid
path: '*/*pt*/valid.parquet'
- config_name: ro
data_files:
- split: train
path: '*/*ro*/train.parquet'
- split: test
path: '*/*ro*/test.parquet'
- split: valid
path: '*/*ro*/valid.parquet'
- config_name: ru
data_files:
- split: train
path: '*/*ru*/train.parquet'
- split: test
path: '*/*ru*/test.parquet'
- split: valid
path: '*/*ru*/valid.parquet'
- config_name: sk
data_files:
- split: train
path: '*/*sk*/train.parquet'
- split: test
path: '*/*sk*/test.parquet'
- split: valid
path: '*/*sk*/valid.parquet'
- config_name: sl
data_files:
- split: train
path: '*/*sl*/train.parquet'
- split: test
path: '*/*sl*/test.parquet'
- split: valid
path: '*/*sl*/valid.parquet'
- config_name: so
data_files:
- split: train
path: '*/*so*/train.parquet'
- split: test
path: '*/*so*/test.parquet'
- split: valid
path: '*/*so*/valid.parquet'
- config_name: sq
data_files:
- split: train
path: '*/*sq*/train.parquet'
- split: test
path: '*/*sq*/test.parquet'
- split: valid
path: '*/*sq*/valid.parquet'
- config_name: sr
data_files:
- split: train
path: '*/*sr*/train.parquet'
- split: test
path: '*/*sr*/test.parquet'
- split: valid
path: '*/*sr*/valid.parquet'
- config_name: sv
data_files:
- split: train
path: '*/*sv*/train.parquet'
- split: test
path: '*/*sv*/test.parquet'
- split: valid
path: '*/*sv*/valid.parquet'
- config_name: sw
data_files:
- split: train
path: '*/*sw*/train.parquet'
- split: test
path: '*/*sw*/test.parquet'
- split: valid
path: '*/*sw*/valid.parquet'
- config_name: ta
data_files:
- split: train
path: '*/*ta*/train.parquet'
- split: test
path: '*/*ta*/test.parquet'
- split: valid
path: '*/*ta*/valid.parquet'
- config_name: te
data_files:
- split: train
path: '*/*te*/train.parquet'
- split: test
path: '*/*te*/test.parquet'
- split: valid
path: '*/*te*/valid.parquet'
- config_name: th
data_files:
- split: train
path: '*/*th*/train.parquet'
- split: test
path: '*/*th*/test.parquet'
- split: valid
path: '*/*th*/valid.parquet'
- config_name: tl
data_files:
- split: train
path: '*/*tl*/train.parquet'
- split: test
path: '*/*tl*/test.parquet'
- split: valid
path: '*/*tl*/valid.parquet'
- config_name: tr
data_files:
- split: train
path: '*/*tr*/train.parquet'
- split: test
path: '*/*tr*/test.parquet'
- split: valid
path: '*/*tr*/valid.parquet'
- config_name: ug
data_files:
- split: train
path: '*/*ug*/train.parquet'
- split: test
path: '*/*ug*/test.parquet'
- split: valid
path: '*/*ug*/valid.parquet'
- config_name: uk
data_files:
- split: train
path: '*/*uk*/train.parquet'
- split: test
path: '*/*uk*/test.parquet'
- split: valid
path: '*/*uk*/valid.parquet'
- config_name: ur
data_files:
- split: train
path: '*/*ur*/train.parquet'
- split: test
path: '*/*ur*/test.parquet'
- split: valid
path: '*/*ur*/valid.parquet'
- config_name: uz
data_files:
- split: train
path: '*/*uz*/train.parquet'
- split: test
path: '*/*uz*/test.parquet'
- split: valid
path: '*/*uz*/valid.parquet'
- config_name: vi
data_files:
- split: train
path: '*/*vi*/train.parquet'
- split: test
path: '*/*vi*/test.parquet'
- split: valid
path: '*/*vi*/valid.parquet'
- config_name: ky
data_files:
- split: train
path: '*/*ky*/train.parquet'
- split: test
path: '*/*ky*/test.parquet'
- split: valid
path: '*/*ky*/valid.parquet'
- config_name: si
data_files:
- split: train
path: '*/*si*/train.parquet'
- split: test
path: '*/*si*/test.parquet'
- split: valid
path: '*/*si*/valid.parquet'
- config_name: as
data_files:
- split: train
path: '*/*as*/train.parquet'
- split: test
path: '*/*as*/test.parquet'
- split: valid
path: '*/*as*/valid.parquet'
- config_name: ha
data_files:
- split: train
path: '*/*ha*/train.parquet'
- split: test
path: '*/*ha*/test.parquet'
- split: valid
path: '*/*ha*/valid.parquet'
- config_name: la
data_files:
- split: train
path: '*/*la*/train.parquet'
- split: test
path: '*/*la*/test.parquet'
- split: valid
path: '*/*la*/valid.parquet'
- config_name: pa
data_files:
- split: train
path: '*/*pa*/train.parquet'
- split: test
path: '*/*pa*/test.parquet'
- split: valid
path: '*/*pa*/valid.parquet'
- config_name: ps
data_files:
- split: train
path: '*/*ps*/train.parquet'
- split: test
path: '*/*ps*/test.parquet'
- split: valid
path: '*/*ps*/valid.parquet'
---
|
huggingface/release-assets | huggingface | "2024-09-26T12:48:50Z" | 19,394 | 1 | [
"license:mit",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-09-25T10:32:15Z" | ---
license: mit
---
|
poloclub/diffusiondb | poloclub | "2024-01-22T22:17:47Z" | 19,209 | 486 | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"size_categories:n>1T",
"arxiv:2210.14896",
"region:us",
"stable diffusion",
"prompt engineering",
"prompts",
"research paper"
] | [
"text-to-image",
"image-to-text"
] | "2022-10-25T02:25:28Z" | ---
layout: default
title: Home
nav_order: 1
has_children: false
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: DiffusionDB
size_categories:
- n>1T
source_datasets:
- original
tags:
- stable diffusion
- prompt engineering
- prompts
- research paper
task_categories:
- text-to-image
- image-to-text
task_ids:
- image-captioning
---
# DiffusionDB
<img width="100%" src="https://user-images.githubusercontent.com/15007159/201762588-f24db2b8-dbb2-4a94-947b-7de393fc3d33.gif">
## Table of Contents
- [DiffusionDB](#diffusiondb)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Two Subsets](#two-subsets)
- [Key Differences](#key-differences)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Metadata](#dataset-metadata)
- [Metadata Schema](#metadata-schema)
- [Data Splits](#data-splits)
- [Loading Data Subsets](#loading-data-subsets)
- [Method 1: Using Hugging Face Datasets Loader](#method-1-using-hugging-face-datasets-loader)
- [Method 2. Use the PoloClub Downloader](#method-2-use-the-poloclub-downloader)
- [Usage/Examples](#usageexamples)
- [Downloading a single file](#downloading-a-single-file)
- [Downloading a range of files](#downloading-a-range-of-files)
- [Downloading to a specific directory](#downloading-to-a-specific-directory)
- [Setting the files to unzip once they've been downloaded](#setting-the-files-to-unzip-once-theyve-been-downloaded)
- [Method 3. Use `metadata.parquet` (Text Only)](#method-3-use-metadataparquet-text-only)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [DiffusionDB homepage](https://poloclub.github.io/diffusiondb)
- **Repository:** [DiffusionDB repository](https://github.com/poloclub/diffusiondb)
- **Distribution:** [DiffusionDB Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb)
- **Paper:** [DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models](https://arxiv.org/abs/2210.14896)
- **Point of Contact:** [Jay Wang](mailto:[email protected])
### Dataset Summary
DiffusionDB is the first large-scale text-to-image prompt dataset. It contains **14 million** images generated by Stable Diffusion using prompts and hyperparameters specified by real users.
DiffusionDB is publicly available at [🤗 Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb).
### Supported Tasks and Leaderboards
The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.
### Languages
The text in the dataset is mostly English. It also contains other languages such as Spanish, Chinese, and Russian.
### Two Subsets
DiffusionDB provides two subsets (DiffusionDB 2M and DiffusionDB Large) to support different needs.
|Subset|Num of Images|Num of Unique Prompts|Size|Image Directory|Metadata Table|
|:--|--:|--:|--:|--:|--:|
|DiffusionDB 2M|2M|1.5M|1.6TB|`images/`|`metadata.parquet`|
|DiffusionDB Large|14M|1.8M|6.5TB|`diffusiondb-large-part-1/` `diffusiondb-large-part-2/`|`metadata-large.parquet`|
##### Key Differences
1. The two subsets have a similar number of unique prompts, but DiffusionDB Large has many more images. DiffusionDB Large is a superset of DiffusionDB 2M.
2. Images in DiffusionDB 2M are stored in `png` format; images in DiffusionDB Large use a lossless `webp` format.
## Dataset Structure
We use a modularized file structure to distribute DiffusionDB. The 2 million images in DiffusionDB 2M are split into 2,000 folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters. Similarly, the 14 million images in DiffusionDB Large are split into 14,000 folders.
```bash
# DiffusionDB 2M
./
├── images
│ ├── part-000001
│ │ ├── 3bfcd9cf-26ea-4303-bbe1-b095853f5360.png
│ │ ├── 5f47c66c-51d4-4f2c-a872-a68518f44adb.png
│ │ ├── 66b428b9-55dc-4907-b116-55aaa887de30.png
│ │ ├── [...]
│ │ └── part-000001.json
│ ├── part-000002
│ ├── part-000003
│ ├── [...]
│ └── part-002000
└── metadata.parquet
```
```bash
# DiffusionDB Large
./
├── diffusiondb-large-part-1
│ ├── part-000001
│ │ ├── 0a8dc864-1616-4961-ac18-3fcdf76d3b08.webp
│ │ ├── 0a25cacb-5d91-4f27-b18a-bd423762f811.webp
│ │ ├── 0a52d584-4211-43a0-99ef-f5640ee2fc8c.webp
│ │ ├── [...]
│ │ └── part-000001.json
│ ├── part-000002
│ ├── part-000003
│ ├── [...]
│ └── part-010000
├── diffusiondb-large-part-2
│ ├── part-010001
│ │ ├── 0a68f671-3776-424c-91b6-c09a0dd6fc2d.webp
│ │ ├── 0a0756e9-1249-4fe2-a21a-12c43656c7a3.webp
│ │ ├── 0aa48f3d-f2d9-40a8-a800-c2c651ebba06.webp
│ │ ├── [...]
│ │ └── part-000001.json
│ ├── part-010002
│ ├── part-010003
│ ├── [...]
│ └── part-014000
└── metadata-large.parquet
```
These sub-folders have names `part-0xxxxx`, and each image has a unique name generated by [UUID Version 4](https://en.wikipedia.org/wiki/Universally_unique_identifier). The JSON file in a sub-folder has the same name as the sub-folder. Each image is a `PNG` file (DiffusionDB 2M) or a lossless `WebP` file (DiffusionDB Large). The JSON file contains key-value pairs mapping image filenames to their prompts and hyperparameters.
### Data Instances
For example, below is the image of `f3501e05-aef7-4225-a9e9-f516527408ac.png` and its key-value pair in `part-000001.json`.
<img width="300" src="https://i.imgur.com/gqWcRs2.png">
```json
{
"f3501e05-aef7-4225-a9e9-f516527408ac.png": {
"p": "geodesic landscape, john chamberlain, christopher balaskas, tadao ando, 4 k, ",
"se": 38753269,
"c": 12.0,
"st": 50,
"sa": "k_lms"
},
}
```
### Data Fields
- key: Unique image name
- `p`: Prompt
- `se`: Random seed
- `c`: CFG Scale (guidance scale)
- `st`: Steps
- `sa`: Sampler
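As a minimal sketch (not part of the official loaders), this is how you could pair the images of one downloaded, unzipped part folder with their prompts using the fields above:

```python
import json
from pathlib import Path

part_dir = Path("images/part-000001")  # assumes the DiffusionDB 2M layout
with open(part_dir / "part-000001.json") as f:
    mapping = json.load(f)

for image_name, info in mapping.items():
    image_path = part_dir / image_name
    prompt = info["p"]  # see the field glossary above
    print(image_path, prompt[:60])
```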
### Dataset Metadata
To help you easily access prompts and other attributes of images without downloading all the Zip files, we include two metadata tables `metadata.parquet` and `metadata-large.parquet` for DiffusionDB 2M and DiffusionDB Large, respectively.
The shape of `metadata.parquet` is (2000000, 13) and the shape of `metadata-large.parquet` is (14000000, 13). The two tables share the same schema, and each row represents an image. We store these tables in the Parquet format because Parquet is column-based: you can efficiently query individual columns (e.g., prompts) without reading the entire table.
Below are three random rows from `metadata.parquet`.
| image_name | prompt | part_id | seed | step | cfg | sampler | width | height | user_name | timestamp | image_nsfw | prompt_nsfw |
|:-----------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------:|-----------:|-------:|------:|----------:|--------:|---------:|:-----------------------------------------------------------------|:--------------------------|-------------:|--------------:|
| 0c46f719-1679-4c64-9ba9-f181e0eae811.png | a small liquid sculpture, corvette, viscous, reflective, digital art | 1050 | 2026845913 | 50 | 7 | 8 | 512 | 512 | c2f288a2ba9df65c38386ffaaf7749106fed29311835b63d578405db9dbcafdb | 2022-08-11 09:05:00+00:00 | 0.0845108 | 0.00383462 |
| a00bdeaa-14eb-4f6c-a303-97732177eae9.png | human sculpture of lanky tall alien on a romantic date at italian restaurant with smiling woman, nice restaurant, photography, bokeh | 905 | 1183522603 | 50 | 10 | 8 | 512 | 768 | df778e253e6d32168eb22279a9776b3cde107cc82da05517dd6d114724918651 | 2022-08-19 17:55:00+00:00 | 0.692934 | 0.109437 |
| 6e5024ce-65ed-47f3-b296-edb2813e3c5b.png | portrait of barbaric spanish conquistador, symmetrical, by yoichi hatakenaka, studio ghibli and dan mumford | 286 | 1713292358 | 50 | 7 | 8 | 512 | 640 | 1c2e93cfb1430adbd956be9c690705fe295cbee7d9ac12de1953ce5e76d89906 | 2022-08-12 03:26:00+00:00 | 0.0773138 | 0.0249675 |
#### Metadata Schema
`metadata.parquet` and `metadata-large.parquet` share the same schema.
|Column|Type|Description|
|:---|:---|:---|
|`image_name`|`string`|Image UUID filename.|
|`prompt`|`string`|The text prompt used to generate this image.|
|`part_id`|`uint16`|Folder ID of this image.|
|`seed`|`uint32`| Random seed used to generate this image.|
|`step`|`uint16`| Step count (hyperparameter).|
|`cfg`|`float32`| Guidance scale (hyperparameter).|
|`sampler`|`uint8`| Sampler method (hyperparameter). Mapping: `{1: "ddim", 2: "plms", 3: "k_euler", 4: "k_euler_ancestral", 5: "k_heun", 6: "k_dpm_2", 7: "k_dpm_2_ancestral", 8: "k_lms", 9: "others"}`.|
|`width`|`uint16`|Image width.|
|`height`|`uint16`|Image height.|
|`user_name`|`string`|The SHA256 hash of the Discord ID of the user who generated this image. For example, the hash for `xiaohk#3146` is `e285b7ef63be99e9107cecd79b280bde602f17e0ca8363cb7a0889b67f0b5ed0`. "deleted_account" refers to users who have deleted their accounts. None means the image was deleted before we scraped it a second time.|
|`timestamp`|`timestamp`|UTC timestamp when this image was generated. None means the image was deleted before we scraped it a second time. Note that the timestamp is not accurate for duplicate images that have the same prompt, hyperparameters, width, and height.|
|`image_nsfw`|`float32`|Likelihood of an image being NSFW. Scores are predicted by [LAION's state-of-the-art NSFW detector](https://github.com/LAION-AI/LAION-SAFETY) (range from 0 to 1). A score of 2.0 means the image has already been flagged as NSFW and blurred by Stable Diffusion.|
|`prompt_nsfw`|`float32`|Likelihood of a prompt being NSFW. Scores are predicted by the library [Detoxify](https://github.com/unitaryai/detoxify). Each score represents the maximum of `toxicity` and `sexual_explicit` (range from 0 to 1).|
> **Warning**
> Although the Stable Diffusion model has an NSFW filter that automatically blurs user-generated NSFW images, this NSFW filter is not perfect—DiffusionDB still contains some NSFW images. Therefore, we compute and provide the NSFW scores for images and prompts using the state-of-the-art models. The distribution of these scores is shown below. Please decide an appropriate NSFW score threshold to filter out NSFW images before using DiffusionDB in your projects.
<img src="https://i.imgur.com/1RiGAXL.png" width="100%">
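As an example of such filtering (the 0.2 threshold is only an illustration, and this assumes you have already downloaded `metadata.parquet` as described in Method 3 below):

```python
import pandas as pd

metadata_df = pd.read_parquet("metadata.parquet")

# Keep only rows whose image and prompt NSFW scores fall below the chosen threshold
safe_df = metadata_df[
    (metadata_df["image_nsfw"] < 0.2) & (metadata_df["prompt_nsfw"] < 0.2)
]
print(f"kept {len(safe_df)} of {len(metadata_df)} rows")
```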
### Data Splits
For DiffusionDB 2M, we split 2 million images into 2,000 folders where each folder contains 1,000 images and a JSON file. For DiffusionDB Large, we split 14 million images into 14,000 folders where each folder contains 1,000 images and a JSON file.
### Loading Data Subsets
DiffusionDB is large (1.6TB or 6.5 TB)! However, with our modularized file structure, you can easily load a desirable number of images and their prompts and hyperparameters. In the [`example-loading.ipynb`](https://github.com/poloclub/diffusiondb/blob/main/notebooks/example-loading.ipynb) notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary.
#### Method 1: Using Hugging Face Datasets Loader
You can use the Hugging Face [`Datasets`](https://huggingface.co/docs/datasets/quickstart) library to easily load prompts and images from DiffusionDB. We pre-defined 16 DiffusionDB subsets (configurations) based on the number of instances. You can see all subsets in the [Dataset Preview](https://huggingface.co/datasets/poloclub/diffusiondb/viewer/all/train).
```python
import numpy as np
from datasets import load_dataset
# Load the dataset with the `large_random_1k` subset
dataset = load_dataset('poloclub/diffusiondb', 'large_random_1k')
```
#### Method 2. Use the PoloClub Downloader
This repo includes a Python downloader [`download.py`](https://github.com/poloclub/diffusiondb/blob/main/scripts/download.py) that allows you to download and load DiffusionDB. You can use it from your command line. Below is an example of loading a subset of DiffusionDB.
##### Usage/Examples
The script is run using command-line arguments as follows:
- `-i` `--index` - File to download or lower bound of a range of files if `-r` is also set.
- `-r` `--range` - Upper bound of range of files to download if `-i` is set.
- `-o` `--output` - Name of custom output directory. Defaults to the current directory if not set.
- `-z` `--unzip` - Unzip the file/files after downloading
- `-l` `--large` - Download from Diffusion DB Large. Defaults to Diffusion DB 2M.
###### Downloading a single file
The specific file to download is identified by the number at the end of its filename on Hugging Face. The script automatically zero-pads the number and generates the URL.
```bash
python download.py -i 23
```
###### Downloading a range of files
The lower and upper bounds of the set of files to download are set by the `-i` and `-r` flags respectively.
```bash
python download.py -i 1 -r 2000
```
Note that this range will download the entire dataset. The script will ask you to confirm that you have 1.7 TB free at the download destination.
###### Downloading to a specific directory
The script will default to the location of the dataset's `part` .zip files at `images/`. If you wish to move the download location, you should move these files as well or use a symbolic link.
```bash
python download.py -i 1 -r 2000 -o /home/$USER/datahoarding/etc
```
Again, the script will automatically add the `/` between the directory and the file when it downloads.
###### Setting the files to unzip once they've been downloaded
The script is set to unzip the files _after_ all files have downloaded as both can be lengthy processes in certain circumstances.
```bash
python download.py -i 1 -r 2000 -z
```
#### Method 3. Use `metadata.parquet` (Text Only)
If your task does not require images, then you can easily access all 2 million prompts and hyperparameters in the `metadata.parquet` table.
```python
from urllib.request import urlretrieve
import pandas as pd
# Download the parquet table
table_url = 'https://huggingface.co/datasets/poloclub/diffusiondb/resolve/main/metadata.parquet'
urlretrieve(table_url, 'metadata.parquet')
# Read the table using Pandas
metadata_df = pd.read_parquet('metadata.parquet')
```
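Because Parquet is column-based, you can also read just the columns you need instead of the full table, which is considerably faster, for example only the prompts:

```python
import pandas as pd

# Read only the prompt column from the metadata table
prompts = pd.read_parquet('metadata.parquet', columns=['prompt'])['prompt']
print(prompts.head())
```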
## Dataset Creation
### Curation Rationale
Recent diffusion models have gained immense popularity by enabling high-quality and controllable image generation based on text prompts written in natural language. Since the release of these models, people from different domains have quickly applied them to create award-winning artworks, synthetic radiology images, and even hyper-realistic videos.
However, generating images with desired details is difficult, as it requires users to write proper prompts specifying the exact expected results. Developing such prompts requires trial and error, and can often feel random and unprincipled. Simon Willison analogizes writing prompts to wizards learning “magical spells”: users do not understand why some prompts work, but they will add these prompts to their “spell book.” For example, to generate highly-detailed images, it has become a common practice to add special keywords such as “trending on artstation” and “unreal engine” in the prompt.
Prompt engineering has become a field of study in the context of text-to-text generation, where researchers systematically investigate how to construct prompts to effectively solve different downstream tasks. As large text-to-image models are relatively new, there is a pressing need to understand how these models react to prompts, how to write effective prompts, and how to design tools to help users generate images.
To help researchers tackle these critical challenges, we create DiffusionDB, the first large-scale prompt dataset with 14 million real prompt-image pairs.
### Source Data
#### Initial Data Collection and Normalization
We construct DiffusionDB by scraping user-generated images on the official Stable Diffusion Discord server. We choose Stable Diffusion because it is currently the only open-source large text-to-image generative model, and all generated images have a CC0 1.0 Universal Public Domain Dedication license that waives all copyright and allows uses for any purpose. We choose the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion) because it is public, and it has strict rules against generating and sharing illegal, hateful, or NSFW (not suitable for work, such as sexual and violent content) images. The server also disallows users to write or share prompts with personal information.
#### Who are the source language producers?
The language producers are users of the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion).
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
The authors removed the discord usernames from the dataset.
We decided to anonymize the dataset because some prompts might include sensitive information, and explicitly linking them to their creators could cause harm.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop a better understanding of large text-to-image generative models.
The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.
Note that we collected images and their prompts from the Stable Diffusion Discord server. The Discord server has rules against users generating or sharing harmful or NSFW (not suitable for work, such as sexual and violent content) images. The Stable Diffusion model used in the server also has an NSFW filter that blurs the generated images if it detects NSFW content. However, it is still possible that some users generated harmful images that were not detected by the NSFW filter or removed by the server moderators. Therefore, DiffusionDB can potentially contain these images. To mitigate the potential harm, we provide a [Google Form](https://forms.gle/GbYaSpRNYqxCafMZ9) on the [DiffusionDB website](https://poloclub.github.io/diffusiondb/) where users can report harmful or inappropriate images and prompts. We will closely monitor this form and remove reported images and prompts from DiffusionDB.
### Discussion of Biases
The 14 million images in DiffusionDB have diverse styles and categories. However, Discord can be a biased data source. Our images come from channels where early users could use a bot to use Stable Diffusion before release. As these users had started using Stable Diffusion before the model was public, we hypothesize that they are AI art enthusiasts and are likely to have experience with other text-to-image generative models. Therefore, the prompting style in DiffusionDB might not represent novice users. Similarly, the prompts in DiffusionDB might not generalize to domains that require specific knowledge, such as medical images.
### Other Known Limitations
**Generalizability.** Previous research has shown a prompt that works well on one generative model might not give the optimal result when used in other models.
Therefore, different models can need users to write different prompts. For example, many Stable Diffusion prompts use commas to separate keywords, while this pattern is less seen in prompts for DALL-E 2 or Midjourney. Thus, we caution researchers that some research findings from DiffusionDB might not be generalizable to other text-to-image generative models.
## Additional Information
### Dataset Curators
DiffusionDB is created by [Jay Wang](https://zijie.wang), [Evan Montoya](https://www.linkedin.com/in/evan-montoya-b252391b4/), [David Munechika](https://www.linkedin.com/in/dmunechika/), [Alex Yang](https://alexanderyang.me), [Ben Hoover](https://www.bhoov.com), [Polo Chau](https://faculty.cc.gatech.edu/~dchau/).
### Licensing Information
The DiffusionDB dataset is available under the [CC0 1.0 License](https://creativecommons.org/publicdomain/zero/1.0/).
The Python code in this repository is available under the [MIT License](https://github.com/poloclub/diffusiondb/blob/main/LICENSE).
### Citation Information
```bibtex
@article{wangDiffusionDBLargescalePrompt2022,
title = {{{DiffusionDB}}: {{A}} Large-Scale Prompt Gallery Dataset for Text-to-Image Generative Models},
author = {Wang, Zijie J. and Montoya, Evan and Munechika, David and Yang, Haoyang and Hoover, Benjamin and Chau, Duen Horng},
year = {2022},
journal = {arXiv:2210.14896 [cs]},
url = {https://arxiv.org/abs/2210.14896}
}
```
### Contributions
If you have any questions, feel free to [open an issue](https://github.com/poloclub/diffusiondb/issues/new) or contact [Jay Wang](https://zijie.wang).
|
SwayStar123/preprocessed_commoncatalog-cc-by_DCAE | SwayStar123 | "2025-01-29T08:20:23Z" | 19,093 | 1 | [
"task_categories:text-to-image",
"language:en",
"license:cc",
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-to-image"
] | "2025-01-24T10:14:27Z" | ---
license: cc
task_categories:
- text-to-image
language:
- en
pretty_name: 'Preprocessed Common catalogue (CC-BY) DCAE '
size_categories:
- 10M<n<100M
---
The images are resized and then encoded with the DC-AE f32 autoencoder. The resizing is done with a bucket manager with a base resolution of 512x512, a minimum side length of 256, and a maximum side length of 1024; all sides are divisible by 32, as required by the DC-AE f32 encoder.
The captions are generated with moondream2 and encoded with SigLIP and BERT. (The BERT embeddings have very high variance, so use a norm layer.) The text embeddings are padded to 64 tokens, but the unpadded lengths are also provided, so you can prune them to the maximum length in the batch and save compute.
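For illustration, a minimal sketch of that pruning (the field names and array shapes here are assumptions, not the dataset's exact column names): the padded embeddings have shape [batch, 64, dim], and the provided unpadded lengths tell you how far to truncate.

```python
import numpy as np

def prune_to_batch_max(emb: np.ndarray, lengths: np.ndarray) -> np.ndarray:
    """Truncate padded text embeddings to the longest real length in the batch."""
    max_len = int(lengths.max())
    return emb[:, :max_len, :]

# Dummy batch: 8 samples, padded to 64 tokens, 768-dim embeddings (shapes assumed)
emb = np.zeros((8, 64, 768), dtype=np.float32)
lengths = np.array([12, 30, 7, 22, 18, 9, 25, 14])
print(prune_to_batch_max(emb, lengths).shape)  # (8, 30, 768)
```
|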
Salesforce/lotsa_data | Salesforce | "2025-01-21T09:25:00Z" | 19,003 | 64 | [
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:arrow",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:mlcroissant",
"arxiv:2402.02592",
"region:us"
] | null | "2024-02-22T03:12:11Z" | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: "*/*.arrow"
- config_name: "BEIJING_SUBWAY_30MIN"
data_files:
- split: train
path: "BEIJING_SUBWAY_30MIN/*.arrow"
- config_name: "HZMETRO"
data_files:
- split: train
path: "HZMETRO/*.arrow"
- config_name: "LOOP_SEATTLE"
data_files:
- split: train
path: "LOOP_SEATTLE/*.arrow"
- config_name: "LOS_LOOP"
data_files:
- split: train
path: "LOS_LOOP/*.arrow"
- config_name: "M_DENSE"
data_files:
- split: train
path: "M_DENSE/*.arrow"
- config_name: "PEMS03"
data_files:
- split: train
path: "PEMS03/*.arrow"
- config_name: "PEMS04"
data_files:
- split: train
path: "PEMS04/*.arrow"
- config_name: "PEMS07"
data_files:
- split: train
path: "PEMS07/*.arrow"
- config_name: "PEMS08"
data_files:
- split: train
path: "PEMS08/*.arrow"
- config_name: "PEMS_BAY"
data_files:
- split: train
path: "PEMS_BAY/*.arrow"
- config_name: "Q-TRAFFIC"
data_files:
- split: train
path: "Q-TRAFFIC/*.arrow"
- config_name: "SHMETRO"
data_files:
- split: train
path: "SHMETRO/*.arrow"
- config_name: "SZ_TAXI"
data_files:
- split: train
path: "SZ_TAXI/*.arrow"
- config_name: "alibaba_cluster_trace_2018"
data_files:
- split: train
path: "alibaba_cluster_trace_2018/*.arrow"
- config_name: "australian_electricity_demand"
data_files:
- split: train
path: "australian_electricity_demand/*.arrow"
- config_name: "azure_vm_traces_2017"
data_files:
- split: train
path: "azure_vm_traces_2017/*.arrow"
- config_name: "bdg-2_bear"
data_files:
- split: train
path: "bdg-2_bear/*.arrow"
- config_name: "bdg-2_fox"
data_files:
- split: train
path: "bdg-2_fox/*.arrow"
- config_name: "bdg-2_panther"
data_files:
- split: train
path: "bdg-2_panther/*.arrow"
- config_name: "bdg-2_rat"
data_files:
- split: train
path: "bdg-2_rat/*.arrow"
- config_name: "beijing_air_quality"
data_files:
- split: train
path: "beijing_air_quality/*.arrow"
- config_name: "bitcoin_with_missing"
data_files:
- split: train
path: "bitcoin_with_missing/*.arrow"
- config_name: "borealis"
data_files:
- split: train
path: "borealis/*.arrow"
- config_name: "borg_cluster_data_2011"
data_files:
- split: train
path: "borg_cluster_data_2011/*.arrow"
- config_name: "buildings_900k"
data_files:
- split: train
path: "buildings_900k/*.arrow"
- config_name: "bull"
data_files:
- split: train
path: "bull/*.arrow"
- config_name: "car_parts_with_missing"
data_files:
- split: train
path: "car_parts_with_missing/*.arrow"
- config_name: "cdc_fluview_ilinet"
data_files:
- split: train
path: "cdc_fluview_ilinet/*.arrow"
- config_name: "cdc_fluview_who_nrevss"
data_files:
- split: train
path: "cdc_fluview_who_nrevss/*.arrow"
- config_name: "china_air_quality"
data_files:
- split: train
path: "china_air_quality/*.arrow"
- config_name: "cif_2016_12"
data_files:
- split: train
path: "cif_2016_12/*.arrow"
- config_name: "cif_2016_6"
data_files:
- split: train
path: "cif_2016_6/*.arrow"
- config_name: "cmip6"
data_files:
- split: train
path: "cmip6_*/*.arrow"
- config_name: "cmip6_1850"
data_files:
- split: train
path: "cmip6_1850/*.arrow"
- config_name: "cmip6_1855"
data_files:
- split: train
path: "cmip6_1855/*.arrow"
- config_name: "cmip6_1860"
data_files:
- split: train
path: "cmip6_1860/*.arrow"
- config_name: "cmip6_1865"
data_files:
- split: train
path: "cmip6_1865/*.arrow"
- config_name: "cmip6_1870"
data_files:
- split: train
path: "cmip6_1870/*.arrow"
- config_name: "cmip6_1875"
data_files:
- split: train
path: "cmip6_1875/*.arrow"
- config_name: "cmip6_1880"
data_files:
- split: train
path: "cmip6_1880/*.arrow"
- config_name: "cmip6_1885"
data_files:
- split: train
path: "cmip6_1885/*.arrow"
- config_name: "cmip6_1890"
data_files:
- split: train
path: "cmip6_1890/*.arrow"
- config_name: "cmip6_1895"
data_files:
- split: train
path: "cmip6_1895/*.arrow"
- config_name: "cmip6_1900"
data_files:
- split: train
path: "cmip6_1900/*.arrow"
- config_name: "cmip6_1905"
data_files:
- split: train
path: "cmip6_1905/*.arrow"
- config_name: "cmip6_1910"
data_files:
- split: train
path: "cmip6_1910/*.arrow"
- config_name: "cmip6_1915"
data_files:
- split: train
path: "cmip6_1915/*.arrow"
- config_name: "cmip6_1920"
data_files:
- split: train
path: "cmip6_1920/*.arrow"
- config_name: "cmip6_1925"
data_files:
- split: train
path: "cmip6_1925/*.arrow"
- config_name: "cmip6_1930"
data_files:
- split: train
path: "cmip6_1930/*.arrow"
- config_name: "cmip6_1935"
data_files:
- split: train
path: "cmip6_1935/*.arrow"
- config_name: "cmip6_1940"
data_files:
- split: train
path: "cmip6_1940/*.arrow"
- config_name: "cmip6_1945"
data_files:
- split: train
path: "cmip6_1945/*.arrow"
- config_name: "cmip6_1950"
data_files:
- split: train
path: "cmip6_1950/*.arrow"
- config_name: "cmip6_1955"
data_files:
- split: train
path: "cmip6_1955/*.arrow"
- config_name: "cmip6_1960"
data_files:
- split: train
path: "cmip6_1960/*.arrow"
- config_name: "cmip6_1965"
data_files:
- split: train
path: "cmip6_1965/*.arrow"
- config_name: "cmip6_1970"
data_files:
- split: train
path: "cmip6_1970/*.arrow"
- config_name: "cmip6_1975"
data_files:
- split: train
path: "cmip6_1975/*.arrow"
- config_name: "cmip6_1980"
data_files:
- split: train
path: "cmip6_1980/*.arrow"
- config_name: "cmip6_1985"
data_files:
- split: train
path: "cmip6_1985/*.arrow"
- config_name: "cmip6_1990"
data_files:
- split: train
path: "cmip6_1990/*.arrow"
- config_name: "cmip6_1995"
data_files:
- split: train
path: "cmip6_1995/*.arrow"
- config_name: "cmip6_2000"
data_files:
- split: train
path: "cmip6_2000/*.arrow"
- config_name: "cmip6_2005"
data_files:
- split: train
path: "cmip6_2005/*.arrow"
- config_name: "cmip6_2010"
data_files:
- split: train
path: "cmip6_2010/*.arrow"
- config_name: "cockatoo"
data_files:
- split: train
path: "cockatoo/*.arrow"
- config_name: "covid19_energy"
data_files:
- split: train
path: "covid19_energy/*.arrow"
- config_name: "covid_deaths"
data_files:
- split: train
path: "covid_deaths/*.arrow"
- config_name: "covid_mobility"
data_files:
- split: train
path: "covid_mobility/*.arrow"
- config_name: "elecdemand"
data_files:
- split: train
path: "elecdemand/*.arrow"
- config_name: "elf"
data_files:
- split: train
path: "elf/*.arrow"
- config_name: "era5"
data_files:
- split: train
path: "era5_*/*.arrow"
- config_name: "era5_1989"
data_files:
- split: train
path: "era5_1989/*.arrow"
- config_name: "era5_1990"
data_files:
- split: train
path: "era5_1990/*.arrow"
- config_name: "era5_1991"
data_files:
- split: train
path: "era5_1991/*.arrow"
- config_name: "era5_1992"
data_files:
- split: train
path: "era5_1992/*.arrow"
- config_name: "era5_1993"
data_files:
- split: train
path: "era5_1993/*.arrow"
- config_name: "era5_1994"
data_files:
- split: train
path: "era5_1994/*.arrow"
- config_name: "era5_1995"
data_files:
- split: train
path: "era5_1995/*.arrow"
- config_name: "era5_1996"
data_files:
- split: train
path: "era5_1996/*.arrow"
- config_name: "era5_1997"
data_files:
- split: train
path: "era5_1997/*.arrow"
- config_name: "era5_1998"
data_files:
- split: train
path: "era5_1998/*.arrow"
- config_name: "era5_1999"
data_files:
- split: train
path: "era5_1999/*.arrow"
- config_name: "era5_2000"
data_files:
- split: train
path: "era5_2000/*.arrow"
- config_name: "era5_2001"
data_files:
- split: train
path: "era5_2001/*.arrow"
- config_name: "era5_2002"
data_files:
- split: train
path: "era5_2002/*.arrow"
- config_name: "era5_2003"
data_files:
- split: train
path: "era5_2003/*.arrow"
- config_name: "era5_2004"
data_files:
- split: train
path: "era5_2004/*.arrow"
- config_name: "era5_2005"
data_files:
- split: train
path: "era5_2005/*.arrow"
- config_name: "era5_2006"
data_files:
- split: train
path: "era5_2006/*.arrow"
- config_name: "era5_2007"
data_files:
- split: train
path: "era5_2007/*.arrow"
- config_name: "era5_2008"
data_files:
- split: train
path: "era5_2008/*.arrow"
- config_name: "era5_2009"
data_files:
- split: train
path: "era5_2009/*.arrow"
- config_name: "era5_2010"
data_files:
- split: train
path: "era5_2010/*.arrow"
- config_name: "era5_2011"
data_files:
- split: train
path: "era5_2011/*.arrow"
- config_name: "era5_2012"
data_files:
- split: train
path: "era5_2012/*.arrow"
- config_name: "era5_2013"
data_files:
- split: train
path: "era5_2013/*.arrow"
- config_name: "era5_2014"
data_files:
- split: train
path: "era5_2014/*.arrow"
- config_name: "era5_2015"
data_files:
- split: train
path: "era5_2015/*.arrow"
- config_name: "era5_2016"
data_files:
- split: train
path: "era5_2016/*.arrow"
- config_name: "era5_2017"
data_files:
- split: train
path: "era5_2017/*.arrow"
- config_name: "era5_2018"
data_files:
- split: train
path: "era5_2018/*.arrow"
- config_name: "extended_web_traffic_with_missing"
data_files:
- split: train
path: "extended_web_traffic_with_missing/*.arrow"
- config_name: "favorita_sales"
data_files:
- split: train
path: "favorita_sales/*.arrow"
- config_name: "favorita_transactions"
data_files:
- split: train
path: "favorita_transactions/*.arrow"
- config_name: "fred_md"
data_files:
- split: train
path: "fred_md/*.arrow"
- config_name: "gfc12_load"
data_files:
- split: train
path: "gfc12_load/*.arrow"
- config_name: "gfc14_load"
data_files:
- split: train
path: "gfc14_load/*.arrow"
- config_name: "gfc17_load"
data_files:
- split: train
path: "gfc17_load/*.arrow"
- config_name: "godaddy"
data_files:
- split: train
path: "godaddy/*.arrow"
- config_name: "hierarchical_sales"
data_files:
- split: train
path: "hierarchical_sales/*.arrow"
- config_name: "hog"
data_files:
- split: train
path: "hog/*.arrow"
- config_name: "hospital"
data_files:
- split: train
path: "hospital/*.arrow"
- config_name: "ideal"
data_files:
- split: train
path: "ideal/*.arrow"
- config_name: "kaggle_web_traffic_weekly"
data_files:
- split: train
path: "kaggle_web_traffic_weekly/*.arrow"
- config_name: "kdd2022"
data_files:
- split: train
path: "kdd2022/*.arrow"
- config_name: "kdd_cup_2018_with_missing"
data_files:
- split: train
path: "kdd_cup_2018_with_missing/*.arrow"
- config_name: "largest"
data_files:
- split: train
path: "largest_*/*.arrow"
- config_name: "largest_2017"
data_files:
- split: train
path: "largest_2017/*.arrow"
- config_name: "largest_2018"
data_files:
- split: train
path: "largest_2018/*.arrow"
- config_name: "largest_2019"
data_files:
- split: train
path: "largest_2019/*.arrow"
- config_name: "largest_2020"
data_files:
- split: train
path: "largest_2020/*.arrow"
- config_name: "largest_2021"
data_files:
- split: train
path: "largest_2021/*.arrow"
- config_name: "lcl"
data_files:
- split: train
path: "lcl/*.arrow"
- config_name: "london_smart_meters_with_missing"
data_files:
- split: train
path: "london_smart_meters_with_missing/*.arrow"
- config_name: "m1_monthly"
data_files:
- split: train
path: "m1_monthly/*.arrow"
- config_name: "m1_quarterly"
data_files:
- split: train
path: "m1_quarterly/*.arrow"
- config_name: "m1_yearly"
data_files:
- split: train
path: "m1_yearly/*.arrow"
- config_name: "m4_daily"
data_files:
- split: train
path: "m4_daily/*.arrow"
- config_name: "m4_hourly"
data_files:
- split: train
path: "m4_hourly/*.arrow"
- config_name: "m4_monthly"
data_files:
- split: train
path: "m4_monthly/*.arrow"
- config_name: "m4_quarterly"
data_files:
- split: train
path: "m4_quarterly/*.arrow"
- config_name: "m4_weekly"
data_files:
- split: train
path: "m4_weekly/*.arrow"
- config_name: "m4_yearly"
data_files:
- split: train
path: "m4_yearly/*.arrow"
- config_name: "m5"
data_files:
- split: train
path: "m5/*.arrow"
- config_name: "monash_m3_monthly"
data_files:
- split: train
path: "monash_m3_monthly/*.arrow"
- config_name: "monash_m3_other"
data_files:
- split: train
path: "monash_m3_other/*.arrow"
- config_name: "monash_m3_quarterly"
data_files:
- split: train
path: "monash_m3_quarterly/*.arrow"
- config_name: "monash_m3_yearly"
data_files:
- split: train
path: "monash_m3_yearly/*.arrow"
- config_name: "nn5_daily_with_missing"
data_files:
- split: train
path: "nn5_daily_with_missing/*.arrow"
- config_name: "nn5_weekly"
data_files:
- split: train
path: "nn5_weekly/*.arrow"
- config_name: "oikolab_weather"
data_files:
- split: train
path: "oikolab_weather/*.arrow"
- config_name: "pdb"
data_files:
- split: train
path: "pdb/*.arrow"
- config_name: "pedestrian_counts"
data_files:
- split: train
path: "pedestrian_counts/*.arrow"
- config_name: "project_tycho"
data_files:
- split: train
path: "project_tycho/*.arrow"
- config_name: "residential_load_power"
data_files:
- split: train
path: "residential_load_power/*.arrow"
- config_name: "residential_pv_power"
data_files:
- split: train
path: "residential_pv_power/*.arrow"
- config_name: "restaurant"
data_files:
- split: train
path: "restaurant/*.arrow"
- config_name: "rideshare_with_missing"
data_files:
- split: train
path: "rideshare_with_missing/*.arrow"
- config_name: "saugeenday"
data_files:
- split: train
path: "saugeenday/*.arrow"
- config_name: "sceaux"
data_files:
- split: train
path: "sceaux/*.arrow"
- config_name: "smart"
data_files:
- split: train
path: "smart/*.arrow"
- config_name: "solar_power"
data_files:
- split: train
path: "solar_power/*.arrow"
- config_name: "spain"
data_files:
- split: train
path: "spain/*.arrow"
- config_name: "subseasonal"
data_files:
- split: train
path: "subseasonal/*.arrow"
- config_name: "subseasonal_precip"
data_files:
- split: train
path: "subseasonal_precip/*.arrow"
- config_name: "sunspot_with_missing"
data_files:
- split: train
path: "sunspot_with_missing/*.arrow"
- config_name: "taxi_30min"
data_files:
- split: train
path: "taxi_30min/*.arrow"
- config_name: "temperature_rain_with_missing"
data_files:
- split: train
path: "temperature_rain_with_missing/*.arrow"
- config_name: "tourism_monthly"
data_files:
- split: train
path: "tourism_monthly/*.arrow"
- config_name: "tourism_quarterly"
data_files:
- split: train
path: "tourism_quarterly/*.arrow"
- config_name: "tourism_yearly"
data_files:
- split: train
path: "tourism_yearly/*.arrow"
- config_name: "traffic_hourly"
data_files:
- split: train
path: "traffic_hourly/*.arrow"
- config_name: "traffic_weekly"
data_files:
- split: train
path: "traffic_weekly/*.arrow"
- config_name: "uber_tlc_daily"
data_files:
- split: train
path: "uber_tlc_daily/*.arrow"
- config_name: "uber_tlc_hourly"
data_files:
- split: train
path: "uber_tlc_hourly/*.arrow"
- config_name: "us_births"
data_files:
- split: train
path: "us_births/*.arrow"
- config_name: "vehicle_trips_with_missing"
data_files:
- split: train
path: "vehicle_trips_with_missing/*.arrow"
- config_name: "weather"
data_files:
- split: train
path: "weather/*.arrow"
- config_name: "wiki-rolling_nips"
data_files:
- split: train
path: "wiki-rolling_nips/*.arrow"
- config_name: "wind_farms_with_missing"
data_files:
- split: train
path: "wind_farms_with_missing/*.arrow"
- config_name: "wind_power"
data_files:
- split: train
path: "wind_power/*.arrow"
---
# LOTSA Data
The Large-scale Open Time Series Archive (LOTSA) is a collection of open time series datasets for time series forecasting.
It was collected for the purpose of pre-training Large Time Series Models.
See the [paper](https://arxiv.org/abs/2402.02592) and [codebase](https://github.com/SalesforceAIResearch/uni2ts) for more information.
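Each configuration listed in the metadata above can be loaded on its own with the Hugging Face `datasets` library. The sketch below is illustrative only: the repository id is an assumption (it is not stated in this card), while the `m4_daily` config name and the single `train` split are taken from the YAML metadata above.
```python
from datasets import load_dataset

# NOTE: the repository id below is an assumption; substitute the actual repo id of this card.
# "m4_daily" is one of the configs declared above; each config exposes a single "train" split
# backed by Arrow files.
m4_daily = load_dataset("Salesforce/lotsa_data", "m4_daily", split="train")
print(m4_daily)
```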
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
If you're using LOTSA data in your research or applications, please cite it using this BibTeX:
**BibTeX:**
```bibtex
@article{woo2024unified,
title={Unified Training of Universal Time Series Forecasting Transformers},
author={Woo, Gerald and Liu, Chenghao and Kumar, Akshat and Xiong, Caiming and Savarese, Silvio and Sahoo, Doyen},
journal={arXiv preprint arXiv:2402.02592},
year={2024}
}
```
## Ethical Considerations
This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP. |
orionweller/reddit_mds_incremental | orionweller | "2024-07-23T17:17:42Z" | 18,956 | 0 | [
"region:us"
] | null | "2024-06-24T14:44:04Z" | ---
dataset_info:
features: []
splits:
- name: creation
num_bytes: 0
num_examples: 0
download_size: 324
dataset_size: 0
configs:
- config_name: default
data_files:
- split: creation
path: data/creation-*
---
|
livecodebench/code_generation_lite | livecodebench | "2025-01-14T18:03:07Z" | 18,954 | 20 | [
"license:cc",
"size_categories:n<1K",
"arxiv:2403.07974",
"region:us",
"code",
"code generation"
] | null | "2024-04-16T04:46:53Z" | ---
license: cc
tags:
- code
- code generation
pretty_name: LiveCodeBench
size_categories:
- n<1K
---
## LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code
<p align="center">
<a href="https://livecodebench.github.io/">🏠 Home Page</a> •
<a href="https://github.com/LiveCodeBench/LiveCodeBench">💻 GitHub Repository </a> •
<a href="https://livecodebench.github.io/leaderboard.html">🏆 Leaderboard</a> •
<a href="https://arxiv.org/abs/2403.07974">📄 Paper </a>
</p>

## Change Log
Since LiveCodeBench is a continuously updated benchmark, we provide different versions of the dataset. Particularly, we provide the following versions of the dataset:
- `release_v1`: The initial release of the dataset with problems released between May 2023 and Mar 2024 containing 400 problems.
- `release_v2`: The updated release of the dataset with problems released between May 2023 and May 2024 containing 511 problems.
- `release_v3`: The updated release of the dataset with problems released between May 2023 and Jul 2024 containing 612 problems.
- `release_v4`: The updated release of the dataset with problems released between May 2023 and Sep 2024 containing 713 problems.
- `release_v5`: The updated release of the dataset with problems released between May 2023 and Jan 2025 containing 880 problems.
You can use the `version_tag` argument to load the desired version of the dataset. Additionally, you can use version tags like `v1`, `v2`, `v1_v3`, `v4_v5` to get the problems released in a specific version.
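As a hedged sketch (not an official example), a specific version window can be selected by passing one of the tags above as the `version_tag` argument; depending on your `datasets` version, `trust_remote_code=True` may also be required if the dataset relies on a loading script.
```python
from datasets import load_dataset

# Select the problems covered by versions v1 through v3 (May 2023 - Jul 2024, per the change log).
# trust_remote_code=True may be needed on recent `datasets` releases; drop it if not required.
lcb_v1_v3 = load_dataset(
    "livecodebench/code_generation_lite",
    version_tag="v1_v3",
    trust_remote_code=True,
)
```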
## Dataset Description
LiveCodeBench is a "live" updating benchmark for holistically evaluating code-related capabilities of LLMs.
Particularly, it evaluates LLMs across a range of capabilities including code generation, self-repair, test output prediction, and code execution.
This is the code generation scenario of LiveCodeBench. It is also used for evaluating self-repair using test case feedback.
LiveCodeBench problems are collected from competition programming websites with particular focus on maintaining problem quality, test case quality, and problem difficulty diversity.
This scenario currently hosts over 500 problems from LeetCode, AtCoder, and Codeforces.
Each problem instance consists of a problem description, input/output examples, and hidden test cases.
Additionally, every problem is tagged with its difficulty level and release date, which allows measuring model performance across different time windows.
The goal is to generate a correct and efficient solution for each problem instance.
The initial code_generation dataset included a larger number of test cases, which led to a substantially larger dataset size. This (lite) version prunes and samples the tests while aiming to preserve performance comparable to the original dataset. Going forward, LiveCodeBench will use this lite version for code generation evaluations.
## Usage
You can use the dataset by loading it from the Hugging Face datasets library. Additionally, a version tag such as "release_v1" specifies the (temporal) version of the dataset: "release_v1" corresponds to the initial release, and "release_v2" to the second version.
```python
from datasets import load_dataset
lcb_codegen = load_dataset("livecodebench/code_generation_lite", version_tag="release_v2")
``` |
openai/MMMLU | openai | "2024-10-16T18:39:00Z" | 18,938 | 460 | [
"task_categories:question-answering",
"language:ar",
"language:bn",
"language:de",
"language:es",
"language:fr",
"language:hi",
"language:id",
"language:it",
"language:ja",
"language:ko",
"language:pt",
"language:sw",
"language:yo",
"language:zh",
"license:mit",
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2009.03300",
"region:us"
] | [
"question-answering"
] | "2024-09-13T16:37:19Z" | ---
task_categories:
- question-answering
configs:
- config_name: default
data_files:
- split: test
path: test/*.csv
- config_name: AR_XY
data_files:
- split: test
path: test/mmlu_AR-XY.csv
- config_name: BN_BD
data_files:
- split: test
path: test/mmlu_BN-BD.csv
- config_name: DE_DE
data_files:
- split: test
path: test/mmlu_DE-DE.csv
- config_name: ES_LA
data_files:
- split: test
path: test/mmlu_ES-LA.csv
- config_name: FR_FR
data_files:
- split: test
path: test/mmlu_FR-FR.csv
- config_name: HI_IN
data_files:
- split: test
path: test/mmlu_HI-IN.csv
- config_name: ID_ID
data_files:
- split: test
path: test/mmlu_ID-ID.csv
- config_name: IT_IT
data_files:
- split: test
path: test/mmlu_IT-IT.csv
- config_name: JA_JP
data_files:
- split: test
path: test/mmlu_JA-JP.csv
- config_name: KO_KR
data_files:
- split: test
path: test/mmlu_KO-KR.csv
- config_name: PT_BR
data_files:
- split: test
path: test/mmlu_PT-BR.csv
- config_name: SW_KE
data_files:
- split: test
path: test/mmlu_SW-KE.csv
- config_name: YO_NG
data_files:
- split: test
path: test/mmlu_YO-NG.csv
- config_name: ZH_CN
data_files:
- split: test
path: test/mmlu_ZH-CN.csv
language:
- ar
- bn
- de
- es
- fr
- hi
- id
- it
- ja
- ko
- pt
- sw
- yo
- zh
license: mit
---
# Multilingual Massive Multitask Language Understanding (MMMLU)
The MMLU is a widely recognized benchmark of general knowledge attained by AI models. It covers a broad range of topics across 57 categories, spanning elementary-level knowledge up to advanced professional subjects like law, physics, history, and computer science.
We translated the MMLU’s test set into 14 languages using professional human translators. Relying on human translators for this evaluation increases confidence in the accuracy of the translations, especially for low-resource languages like Yoruba. We are publishing the professional human translations and the code we use to run the evaluations.
This effort reflects our commitment to improving the multilingual capabilities of AI models, ensuring they perform accurately across languages, particularly for underrepresented communities. By prioritizing high-quality translations, we aim to make AI technology more inclusive and effective for users worldwide.
## Locales
MMMLU contains the MMLU test set translated into the following locales:
* AR_XY (Arabic)
* BN_BD (Bengali)
* DE_DE (German)
* ES_LA (Spanish)
* FR_FR (French)
* HI_IN (Hindi)
* ID_ID (Indonesian)
* IT_IT (Italian)
* JA_JP (Japanese)
* KO_KR (Korean)
* PT_BR (Brazilian Portuguese)
* SW_KE (Swahili)
* YO_NG (Yoruba)
* ZH_CN (Simplified Chinese)
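Each locale above corresponds to a configuration of the same name (see the YAML metadata), exposing a single `test` split. A minimal loading sketch with the Hugging Face `datasets` library:
```python
from datasets import load_dataset

# Load the German translation of the MMLU test set; config names match the locale codes above.
mmmlu_de = load_dataset("openai/MMMLU", "DE_DE", split="test")
print(mmmlu_de[0])
```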
## Sources
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., & Steinhardt, J. (2021). [*Measuring Massive Multitask Language Understanding*](https://arxiv.org/abs/2009.03300).
[OpenAI Simple Evals GitHub Repository](https://github.com/openai/simple-evals) |
lmms-lab/LLaVA-Video-178K | lmms-lab | "2024-10-11T04:59:25Z" | 18,891 | 114 | [
"task_categories:visual-question-answering",
"task_categories:video-text-to-text",
"language:en",
"size_categories:1M<n<10M",
"modality:text",
"modality:video",
"arxiv:2410.02713",
"region:us",
"video"
] | [
"visual-question-answering",
"video-text-to-text"
] | "2024-08-27T07:09:50Z" | ---
configs:
- config_name: 0_30_s_academic_v0_1
data_files:
- split: caption
path: 0_30_s_academic_v0_1/*cap*.json
- split: open_ended
path: 0_30_s_academic_v0_1/*oe*.json
- split: multi_choice
path: 0_30_s_academic_v0_1/*mc*.json
- config_name: 0_30_s_youtube_v0_1
data_files:
- split: caption
path: 0_30_s_youtube_v0_1/*cap*.json
- split: open_ended
path: 0_30_s_youtube_v0_1/*oe*.json
- split: multi_choice
path: 0_30_s_youtube_v0_1/*mc*.json
- config_name: 0_30_s_activitynet
data_files:
- split: open_ended
path: 0_30_s_activitynet/*oe*.json
- config_name: 0_30_s_perceptiontest
data_files:
- split: multi_choice
path: 0_30_s_perceptiontest/*mc*.json
- config_name: 0_30_s_nextqa
data_files:
- split: open_ended
path: 0_30_s_nextqa/*oe*.json
- split: multi_choice
path: 0_30_s_nextqa/*mc*.json
- config_name: 30_60_s_academic_v0_1
data_files:
- split: caption
path: 30_60_s_academic_v0_1/*cap*.json
- split: open_ended
path: 30_60_s_academic_v0_1/*oe*.json
- split: multi_choice
path: 30_60_s_academic_v0_1/*mc*.json
- config_name: 30_60_s_youtube_v0_1
data_files:
- split: caption
path: 30_60_s_youtube_v0_1/*cap*.json
- split: open_ended
path: 30_60_s_youtube_v0_1/*oe*.json
- split: multi_choice
path: 30_60_s_youtube_v0_1/*mc*.json
- config_name: 30_60_s_activitynet
data_files:
- split: open_ended
path: 30_60_s_activitynet/*oe*.json
- config_name: 30_60_s_perceptiontest
data_files:
- split: multi_choice
path: 30_60_s_perceptiontest/*mc*.json
- config_name: 30_60_s_nextqa
data_files:
- split: open_ended
path: 30_60_s_nextqa/*oe*.json
- split: multi_choice
path: 30_60_s_nextqa/*mc*.json
- config_name: 1_2_m_youtube_v0_1
data_files:
- split: caption
path: 1_2_m_youtube_v0_1/*cap*.json
- split: open_ended
path: 1_2_m_youtube_v0_1/*oe*.json
- split: multi_choice
path: 1_2_m_youtube_v0_1/*mc*.json
- config_name: 1_2_m_academic_v0_1
data_files:
- split: caption
path: 1_2_m_academic_v0_1/*cap*.json
- split: open_ended
path: 1_2_m_academic_v0_1/*oe*.json
- split: multi_choice
path: 1_2_m_academic_v0_1/*mc*.json
- config_name: 1_2_m_activitynet
data_files:
- split: open_ended
path: 1_2_m_activitynet/*oe*.json
- config_name: 1_2_m_nextqa
data_files:
- split: open_ended
path: 1_2_m_nextqa/*oe*.json
- split: multi_choice
path: 1_2_m_nextqa/*mc*.json
- config_name: 2_3_m_youtube_v0_1
data_files:
- split: caption
path: 2_3_m_youtube_v0_1/*cap*.json
- split: open_ended
path: 2_3_m_youtube_v0_1/*oe*.json
- split: multi_choice
path: 2_3_m_youtube_v0_1/*mc*.json
- config_name: 2_3_m_academic_v0_1
data_files:
- split: caption
path: 2_3_m_academic_v0_1/*cap*.json
- split: open_ended
path: 2_3_m_academic_v0_1/*oe*.json
- split: multi_choice
path: 2_3_m_academic_v0_1/*mc*.json
- config_name: 2_3_m_activitynet
data_files:
- split: open_ended
path: 2_3_m_activitynet/*oe*.json
- config_name: 2_3_m_nextqa
data_files:
- split: open_ended
path: 2_3_m_nextqa/*oe*.json
- split: multi_choice
path: 2_3_m_nextqa/*mc*.json
- config_name: llava_hound
data_files:
- split: open_ended
path: llava_hound/sharegptvideo_qa_255k_processed.json
language:
- en
task_categories:
- visual-question-answering
- video-text-to-text
tags:
- video
---
# Dataset Card for LLaVA-Video-178K
## Dataset Description
- **Curated by:** Yuanhan Zhang, Jinming Wu, Wei Li
- **Language(s) (NLP):** English, Chinese
- **License:** Apache License 2.0
## Uses
This dataset is used for the training of the LLaVA-Video model. We only allow the use of this dataset for academic research and education purposes. For OpenAI GPT-4 generated data, we recommend that users check the [OpenAI Usage Policy](https://openai.com/policies/usage-policies/).
### Data Sources
For the training of LLaVA-Video, we utilized video-language data from five primary sources:
- **LLaVA-Video-178K**: This dataset includes **178,510** caption entries, 960,792 open-ended QA (question and answer) items, and 196,198 multiple-choice QA items. These data were newly annotated for this project.
- We include this dataset in this repository: LLaVA-Video-178K/XXX_academic_v0_1 and LLaVA-Video-178K/XXX_youtube_v0_1.
- **NeXT-QA**: Comprises 17,090 open-ended QA items and 17,024 multiple-choice QA items.
- We include this dataset in this repository: LLaVA-Video-178K/XXX_nextqa.
- **ActivityNetQA**: Includes 23,530 open-ended QA items.
- We include this dataset in this repository: LLaVA-Video-178K/XXX_activitynetqa.
- **PerceptionTest**: Includes 1,803 open-ended QA items.
- We include this dataset in this repository: LLaVA-Video-178K/XXX_perceptiontest.
- **LLaVA-Hound**: Contains 240,000 open-ended QA items and 15,000 caption entries.
- The video data and annotations are available at the following URLs:
- Video data: [train_300k](https://huggingface.co/datasets/ShareGPTVideo/train_video_and_instruction/tree/main/train_300k)
- Annotation data: LLaVA-Video-178K/llava_hound
    - The loading function is specified here: [function](https://github.com/LLaVA-VL/LLaVA-NeXT/blob/7125e3654d88063cb467ed242db76f1e2b184d4c/llava/train/train.py#L1162)
The **LLaVA-Video-178K** dataset is the only contribution from this repository; we provide additional datasets for reproducing LLaVA-Video.
- **Project Page:** [Project Page](https://llava-vl.github.io/blog/2024-09-30-llava-video/).
- **Paper**: For more details, please check our [paper](https://arxiv.org/abs/2410.02713)
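The annotation files in this repository are grouped into configurations by clip duration and source (see the YAML metadata), each exposing `caption`, `open_ended`, and/or `multi_choice` splits. A minimal sketch of loading one split with the `datasets` library (the videos themselves are distributed separately, as described above):
```python
from datasets import load_dataset

# Load the caption annotations for 0-30 second academic clips.
# Config and split names are taken from this card's YAML metadata;
# the corresponding videos must be obtained separately (see Data Sources).
captions = load_dataset(
    "lmms-lab/LLaVA-Video-178K",
    "0_30_s_academic_v0_1",
    split="caption",
)
print(captions[0])
```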
### Annotation Pipeline
The following directories are provided for generating captions and QA data:
- **Captions**: `LLaVA-Video-178K/gpt4o_caption_prompt`
- **QA**: `LLaVA-Video-178K/gpt4o_qa_prompt`
### The subset used in the LLaVA-OneVision
We have included captions and open-ended questions in the [0_30_s_academic_v0_1 split](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K/tree/main/0_30_s_academic_v0_1), along with 240,000 open-ended QA items and 15,000 caption entries, as part of the video data in LLaVA-Hound for LLaVA-OneVision.
- [**0_30_s_academic_v0_1 caption**](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K/blob/main/0_30_s_academic_v0_1/0_30_s_academic_v0_1_cap_processed.json)
- [**0_30_s_academic_v0_1 open-ended QA**](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K/blob/main/0_30_s_academic_v0_1/0_30_s_academic_v0_1_cap_processed.json)
- **LLaVA-Hound**: Same as above.
## Citation
```bibtex
@misc{zhang2024videoinstructiontuningsynthetic,
title={Video Instruction Tuning With Synthetic Data},
author={Yuanhan Zhang and Jinming Wu and Wei Li and Bo Li and Zejun Ma and Ziwei Liu and Chunyuan Li},
year={2024},
eprint={2410.02713},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2410.02713},
}
```
## Dataset Card Contact
[Yuanhan Zhang](https://zhangyuanhan-ai.github.io/)
[Jinming Wu](https://scholar.google.com/citations?user=eh-XJIoAAAAJ&hl=zh-CN)
[Wei Li](https://scholar.google.com/citations?user=q8ZrKVIAAAAJ&hl=zh-CN) |
philschmid/llama2-german-corpus-tokenized-llama-chunk-4096 | philschmid | "2023-08-18T17:33:00Z" | 18,801 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-08-18T10:51:09Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 1190392538880
num_examples: 20753008
download_size: 307400657843
dataset_size: 1190392538880
---
# Dataset Card for "llama2-german-corpus-tokenized-llama-chunk-4096"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bigbio/pubmed_qa | bigbio | "2024-03-23T19:06:35Z" | 18,756 | 41 | [
"multilinguality:monolingual",
"language:en",
"license:mit",
"region:us"
] | null | "2022-11-13T22:11:45Z" |
---
language:
- en
bigbio_language:
- English
license: mit
multilinguality: monolingual
bigbio_license_shortname: MIT
pretty_name: PubMedQA
homepage: https://github.com/pubmedqa/pubmedqa
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- QUESTION_ANSWERING
---
# Dataset Card for PubMedQA
## Dataset Description
- **Homepage:** https://github.com/pubmedqa/pubmedqa
- **Pubmed:** True
- **Public:** True
- **Tasks:** QA
PubMedQA is a novel biomedical question answering (QA) dataset collected from PubMed abstracts.
The task of PubMedQA is to answer biomedical research questions with yes/no/maybe using the corresponding abstracts.
PubMedQA has 1k expert-annotated (PQA-L), 61.2k unlabeled (PQA-U) and 211.3k artificially generated QA instances (PQA-A).
Each PubMedQA instance is composed of:
(1) a question which is either an existing research article title or derived from one,
(2) a context which is the corresponding PubMed abstract without its conclusion,
(3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and
(4) a yes/no/maybe answer which summarizes the conclusion.
PubMedQA is the first QA dataset where reasoning over biomedical research texts,
especially their quantitative contents, is required to answer the questions.
The PubMedQA dataset comprises 3 different subsets:
(1) PubMedQA Labeled (PQA-L): a labeled PubMedQA subset of 1k manually annotated yes/no/maybe QA instances collected from PubMed articles.
(2) PubMedQA Artificial (PQA-A): an artificially labelled PubMedQA subset of 211.3k PubMed articles with questions automatically generated from the statement titles and yes/no answer labels generated using a simple heuristic.
(3) PubMedQA Unlabeled (PQA-U): an unlabeled PubMedQA subset of 61.2k context-question pairs collected from PubMed articles.
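A minimal sketch of loading the dataset with the Hugging Face `datasets` library. This card does not list the configuration names for the three subsets, so the sketch loads the default configuration only; `trust_remote_code=True` may be required if the dataset is distributed with a loading script.
```python
from datasets import load_dataset

# Default configuration only; the per-subset (PQA-L / PQA-A / PQA-U) config names are not
# listed in this card. If the dataset defines multiple configurations, pass the desired
# config name as the second argument. trust_remote_code=True may be needed for loading scripts.
pubmed_qa = load_dataset("bigbio/pubmed_qa", trust_remote_code=True)
print(pubmed_qa)
```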
## Citation Information
```
@inproceedings{jin2019pubmedqa,
title={PubMedQA: A Dataset for Biomedical Research Question Answering},
author={Jin, Qiao and Dhingra, Bhuwan and Liu, Zhengping and Cohen, William and Lu, Xinghua},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={2567--2577},
year={2019}
}
```
|
AmazonScience/MultilingualMultiModalClassification | AmazonScience | "2024-12-06T14:00:39Z" | 18,553 | 2 | [
"license:cc-by-4.0",
"region:us"
] | null | "2023-05-12T20:22:46Z" | ---
license: cc-by-4.0
dataset_info:
- config_name: multieurlex-doc-bg
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 407278322
num_examples: 15979
- name: validation
num_bytes: 121021498
num_examples: 4997
- name: test
num_bytes: 126194699
num_examples: 4988
download_size: 94161088
dataset_size: 654494519
- config_name: multieurlex-doc-cs
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 465064539
num_examples: 23056
- name: validation
num_bytes: 98206202
num_examples: 4997
- name: test
num_bytes: 101905013
num_examples: 4988
download_size: 103341160
dataset_size: 665175754
- config_name: multieurlex-doc-da
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 1137431321
num_examples: 54806
- name: validation
num_bytes: 100630592
num_examples: 4997
- name: test
num_bytes: 103660755
num_examples: 4988
download_size: 211774968
dataset_size: 1341722668
- config_name: multieurlex-doc-de
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 1156790099
num_examples: 54804
- name: test
num_bytes: 108731388
num_examples: 4988
- name: validation
num_bytes: 105635067
num_examples: 4997
download_size: 214358454
dataset_size: 1371156554
- config_name: multieurlex-doc-el
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 1412326683
num_examples: 54828
- name: validation
num_bytes: 127450631
num_examples: 4997
- name: test
num_bytes: 132083962
num_examples: 4988
download_size: 249838066
dataset_size: 1671861276
- config_name: multieurlex-doc-en
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 1208998381
num_examples: 54808
- name: test
num_bytes: 110325080
num_examples: 4988
- name: validation
num_bytes: 106866095
num_examples: 4997
download_size: 223853363
dataset_size: 1426189556
- config_name: multieurlex-doc-es
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 1354212928
num_examples: 52621
- name: test
num_bytes: 128661948
num_examples: 4988
- name: validation
num_bytes: 124535827
num_examples: 4997
download_size: 254828898
dataset_size: 1607410703
- config_name: multieurlex-doc-et
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 385076032
num_examples: 22986
- name: validation
num_bytes: 82795960
num_examples: 4997
- name: test
num_bytes: 85548380
num_examples: 4988
download_size: 87523878
dataset_size: 553420372
- config_name: multieurlex-doc-fi
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 746551995
num_examples: 42362
- name: validation
num_bytes: 88644474
num_examples: 4997
- name: test
num_bytes: 90495504
num_examples: 4988
download_size: 144867468
dataset_size: 925691973
- config_name: multieurlex-doc-fr
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 1308833036
num_examples: 54804
- name: validation
num_bytes: 117528920
num_examples: 4997
- name: test
num_bytes: 122076609
num_examples: 4988
download_size: 244074331
dataset_size: 1548438565
- config_name: multieurlex-doc-hr
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 166426724
num_examples: 7944
- name: validation
num_bytes: 52267708
num_examples: 2499
- name: test
num_bytes: 99712738
num_examples: 4988
download_size: 49985102
dataset_size: 318407170
- config_name: multieurlex-doc-hu
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 430043841
num_examples: 22542
- name: validation
num_bytes: 94622333
num_examples: 4997
- name: test
num_bytes: 97747785
num_examples: 4988
download_size: 97614905
dataset_size: 622413959
- config_name: multieurlex-doc-it
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 1249061937
num_examples: 54805
- name: validation
num_bytes: 110908837
num_examples: 4997
- name: test
num_bytes: 114867681
num_examples: 4987
download_size: 231926930
dataset_size: 1474838455
- config_name: multieurlex-doc-nl
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 1286183580
num_examples: 54803
- name: validation
num_bytes: 112858254
num_examples: 4997
- name: test
num_bytes: 116992911
num_examples: 4988
download_size: 237826260
dataset_size: 1516034745
- config_name: multieurlex-doc-pl
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 471614388
num_examples: 23063
- name: validation
num_bytes: 101196012
num_examples: 4997
- name: test
num_bytes: 104384366
num_examples: 4988
download_size: 104236091
dataset_size: 677194766
- config_name: multieurlex-doc-pt
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 1269347766
num_examples: 52205
- name: validation
num_bytes: 117194055
num_examples: 4997
- name: test
num_bytes: 120747746
num_examples: 4988
download_size: 238776517
dataset_size: 1507289567
- config_name: multieurlex-doc-ro
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 359230898
num_examples: 15914
- name: validation
num_bytes: 107876284
num_examples: 4997
- name: test
num_bytes: 112291364
num_examples: 4988
download_size: 89545760
dataset_size: 579398546
- config_name: multieurlex-doc-sv
features:
- name: filename
dtype: string
- name: words
sequence:
sequence: string
- name: boxes
sequence:
sequence:
sequence: int64
splits:
- name: train
num_bytes: 867755140
num_examples: 42356
- name: validation
num_bytes: 101193984
num_examples: 4997
- name: test
num_bytes: 103453976
num_examples: 4988
download_size: 166948914
dataset_size: 1072403100
- config_name: wiki-doc-ar-img
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Earthquake
'1': SolarEclipse
'2': MusicFestival
'3': MilitaryConflict
'4': FilmFestival
'5': Convention
'6': FootballMatch
'7': OlympicEvent
'8': GrandPrix
'9': GolfTournament
'10': WomensTennisAssociationTournament
'11': TennisTournament
'12': SoccerTournament
'13': WrestlingEvent
'14': HorseRace
'15': CyclingRace
'16': MixedMartialArtsEvent
'17': Election
'18': SoccerClubSeason
'19': NationalFootballLeagueSeason
'20': NCAATeamSeason
'21': BaseballSeason
'22': VideoGame
'23': BiologicalDatabase
'24': EurovisionSongContestEntry
'25': Album
'26': Musical
'27': ClassicalMusicComposition
'28': ArtistDiscography
'29': Single
'30': Poem
'31': Magazine
'32': Newspaper
'33': AcademicJournal
'34': Play
'35': Manga
'36': ComicStrip
'37': Anime
'38': HollywoodCartoon
'39': MusicGenre
'40': Grape
'41': Conifer
'42': Fern
'43': Moss
'44': GreenAlga
'45': CultivatedVariety
'46': Cycad
'47': Arachnid
'48': Fish
'49': Insect
'50': Reptile
'51': Mollusca
'52': Bird
'53': Amphibian
'54': RaceHorse
'55': Crustacean
'56': Fungus
'57': Lighthouse
'58': Theatre
'59': RollerCoaster
'60': Airport
'61': RailwayStation
'62': Road
'63': RailwayLine
'64': Bridge
'65': RoadTunnel
'66': Dam
'67': CricketGround
'68': Stadium
'69': Racecourse
'70': GolfCourse
'71': Prison
'72': Hospital
'73': Museum
'74': Hotel
'75': Library
'76': Restaurant
'77': ShoppingMall
'78': HistoricBuilding
'79': Castle
'80': Volcano
'81': MountainPass
'82': Glacier
'83': Canal
'84': River
'85': Lake
'86': Mountain
'87': Cave
'88': MountainRange
'89': Galaxy
'90': ArtificialSatellite
'91': Planet
'92': Town
'93': Village
'94': Diocese
'95': AutomobileEngine
'96': SupremeCourtOfTheUnitedStatesCase
'97': MilitaryPerson
'98': Religious
'99': Engineer
'100': BusinessPerson
'101': SportsTeamMember
'102': SoccerManager
'103': Chef
'104': Philosopher
'105': CollegeCoach
'106': ScreenWriter
'107': Historian
'108': Poet
'109': President
'110': PrimeMinister
'111': Congressman
'112': Senator
'113': Mayor
'114': MemberOfParliament
'115': Governor
'116': Monarch
'117': PlayboyPlaymate
'118': Cardinal
'119': Saint
'120': Pope
'121': ChristianBishop
'122': BeautyQueen
'123': RadioHost
'124': HandballPlayer
'125': Cricketer
'126': Jockey
'127': SumoWrestler
'128': AmericanFootballPlayer
'129': LacrossePlayer
'130': TennisPlayer
'131': AmateurBoxer
'132': SoccerPlayer
'133': Rower
'134': TableTennisPlayer
'135': BeachVolleyballPlayer
'136': SpeedwayRider
'137': FormulaOneRacer
'138': NascarDriver
'139': Swimmer
'140': IceHockeyPlayer
'141': FigureSkater
'142': Skater
'143': Curler
'144': Skier
'145': GolfPlayer
'146': SquashPlayer
'147': PokerPlayer
'148': BadmintonPlayer
'149': ChessPlayer
'150': RugbyPlayer
'151': DartsPlayer
'152': NetballPlayer
'153': MartialArtist
'154': Gymnast
'155': Canoeist
'156': GaelicGamesPlayer
'157': HorseRider
'158': BaseballPlayer
'159': Cyclist
'160': Bodybuilder
'161': AustralianRulesFootballPlayer
'162': BasketballPlayer
'163': Ambassador
'164': Baronet
'165': Model
'166': Architect
'167': Judge
'168': Economist
'169': Journalist
'170': Painter
'171': Comedian
'172': ComicsCreator
'173': ClassicalMusicArtist
'174': FashionDesigner
'175': AdultActor
'176': VoiceActor
'177': Photographer
'178': HorseTrainer
'179': Entomologist
'180': Medician
'181': SoapCharacter
'182': AnimangaCharacter
'183': MythologicalFigure
'184': Noble
'185': Astronaut
'186': OfficeHolder
'187': PublicTransitSystem
'188': BusCompany
'189': LawFirm
'190': Winery
'191': RecordLabel
'192': Brewery
'193': Airline
'194': Publisher
'195': Bank
'196': PoliticalParty
'197': Legislature
'198': Band
'199': BasketballLeague
'200': SoccerLeague
'201': IceHockeyLeague
'202': BaseballLeague
'203': RugbyLeague
'204': MilitaryUnit
'205': University
'206': School
'207': CyclingTeam
'208': CanadianFootballTeam
'209': BasketballTeam
'210': AustralianFootballTeam
'211': HockeyTeam
'212': HandballTeam
'213': CricketTeam
'214': RugbyClub
'215': TradeUnion
'216': RadioStation
'217': BroadcastNetwork
'218': TelevisionStation
splits:
- name: train
num_bytes: 7919491304.875
num_examples: 8129
- name: test
num_bytes: 1691686089.125
num_examples: 1743
- name: validation
num_bytes: 1701166069.25
num_examples: 1742
download_size: 11184835705
dataset_size: 11312343463.25
- config_name: wiki-doc-ar-merged
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 8062791605.746
num_examples: 8129
- name: test
num_bytes: 1722071386.382
num_examples: 1743
- name: validation
num_bytes: 1731948280.766
num_examples: 1742
download_size: 11226133595
dataset_size: 11516811272.894001
- config_name: wiki-doc-de-merged
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 59980253508.125
num_examples: 41047
- name: validation
num_bytes: 12842370238.5
num_examples: 8796
- name: test
num_bytes: 12835845039.5
num_examples: 8796
download_size: 84274708249
dataset_size: 85658468786.125
- config_name: wiki-doc-en-merged
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 201861788293.75
num_examples: 152506
- name: validation
num_bytes: 43199951001.0
num_examples: 32680
- name: test
num_bytes: 43177176523.0
num_examples: 32680
download_size: 282546982586
dataset_size: 288238915817.75
- config_name: wiki-doc-es-merged
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 58485685843.875
num_examples: 42713
- name: validation
num_bytes: 12550991282.569
num_examples: 9153
- name: test
num_bytes: 12546829230.442
num_examples: 9154
download_size: 82063829353
dataset_size: 83583506356.886
- config_name: wiki-doc-fr-merged
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 40498485460.875
num_examples: 33329
- name: validation
num_bytes: 8641683528.108
num_examples: 7142
- name: test
num_bytes: 8649896334.108
num_examples: 7142
download_size: 56468886228
dataset_size: 57790065323.091
- config_name: wiki-doc-it-merged
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 25293800981.25
num_examples: 20166
- name: validation
num_bytes: 5433600428.554
num_examples: 4321
- name: test
num_bytes: 5411100552.106
num_examples: 4322
download_size: 35441755215
dataset_size: 36138501961.91
- config_name: wiki-doc-ja-img
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 30506965411.75
num_examples: 23250
- name: test
num_bytes: 6540291049.322
num_examples: 4982
- name: validation
num_bytes: 6513584731.193
num_examples: 4983
download_size: 43248429810
dataset_size: 43560841192.265
- config_name: wiki-doc-ja-merged
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 30650799906.25
num_examples: 23254
- name: validation
num_bytes: 6543258936.193
num_examples: 4983
- name: test
num_bytes: 6570176552.322
num_examples: 4982
download_size: 43344042661
dataset_size: 43764235394.765
- config_name: wiki-doc-pt-img
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 21744787468.0
num_examples: 20168
- name: test
num_bytes: 4702448837.106
num_examples: 4322
- name: validation
num_bytes: 4646765273.106
num_examples: 4322
download_size: 30769070664
dataset_size: 31094001578.211998
- config_name: wiki-doc-pt-merged
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 22164275072.0
num_examples: 20168
- name: validation
num_bytes: 4735717368.106
num_examples: 4322
- name: test
num_bytes: 4792666148.106
num_examples: 4322
download_size: 30891429558
dataset_size: 31692658588.211998
- config_name: wiki-doc-pt-merged-v2
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 22164275065.16
num_examples: 20168
- name: validation
num_bytes: 4735717370.818
num_examples: 4322
- name: test
num_bytes: 4792666150.818
num_examples: 4322
download_size: 30891429558
dataset_size: 31692658586.796
- config_name: wiki-doc-zh-img
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 30248140475.625
num_examples: 23099
- name: test
num_bytes: 6471322916.25
num_examples: 4950
- name: validation
num_bytes: 6507120137.25
num_examples: 4950
download_size: 42958276266
dataset_size: 43226583529.125
- config_name: wiki-doc-zh-merged
features:
- name: image
dtype: image
- name: filename
dtype: string
- name: words
sequence: string
- name: ocr_bboxes
sequence:
sequence: int64
- name: label
dtype:
class_label:
names:
'0': AcademicJournal
'1': AdultActor
'2': Album
'3': AmateurBoxer
'4': Ambassador
'5': AmericanFootballPlayer
'6': Amphibian
'7': AnimangaCharacter
'8': Anime
'9': Arachnid
'10': Baronet
'11': BasketballTeam
'12': BeautyQueen
'13': BroadcastNetwork
'14': BusCompany
'15': BusinessPerson
'16': CanadianFootballTeam
'17': Canal
'18': Cardinal
'19': Cave
'20': ChristianBishop
'21': ClassicalMusicArtist
'22': ClassicalMusicComposition
'23': CollegeCoach
'24': Comedian
'25': ComicsCreator
'26': Congressman
'27': Conifer
'28': Convention
'29': Cricketer
'30': Crustacean
'31': CultivatedVariety
'32': Cycad
'33': Dam
'34': Economist
'35': Engineer
'36': Entomologist
'37': EurovisionSongContestEntry
'38': Fern
'39': FilmFestival
'40': Fish
'41': FootballMatch
'42': Glacier
'43': GolfTournament
'44': Governor
'45': Gymnast
'46': Historian
'47': IceHockeyLeague
'48': Insect
'49': Journalist
'50': Judge
'51': Lighthouse
'52': Magazine
'53': Mayor
'54': Medician
'55': MemberOfParliament
'56': MilitaryPerson
'57': Model
'58': Mollusca
'59': Monarch
'60': Moss
'61': Mountain
'62': MountainPass
'63': MountainRange
'64': MusicFestival
'65': Musical
'66': MythologicalFigure
'67': Newspaper
'68': Noble
'69': OfficeHolder
'70': Other
'71': Philosopher
'72': Photographer
'73': PlayboyPlaymate
'74': Poem
'75': Poet
'76': Pope
'77': President
'78': PrimeMinister
'79': PublicTransitSystem
'80': Racecourse
'81': RadioHost
'82': RadioStation
'83': Religious
'84': Reptile
'85': Restaurant
'86': Road
'87': RoadTunnel
'88': RollerCoaster
'89': RugbyClub
'90': RugbyLeague
'91': Saint
'92': School
'93': ScreenWriter
'94': Senator
'95': ShoppingMall
'96': Skater
'97': SoccerLeague
'98': SoccerManager
'99': SoccerPlayer
'100': SoccerTournament
'101': SportsTeamMember
'102': SumoWrestler
'103': TelevisionStation
'104': TennisTournament
'105': TradeUnion
'106': University
'107': Village
'108': VoiceActor
'109': Volcano
'110': WrestlingEvent
splits:
- name: train
num_bytes: 30382212749.625
num_examples: 23099
- name: test
num_bytes: 6499933446.25
num_examples: 4950
- name: validation
num_bytes: 6536010774.25
num_examples: 4950
download_size: 43027961181
dataset_size: 43418156970.125
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- config_name: multieurlex-doc-bg
data_files:
- split: train
path: multieurlex-doc-bg/train-*
- split: validation
path: multieurlex-doc-bg/validation-*
- split: test
path: multieurlex-doc-bg/test-*
- config_name: multieurlex-doc-cs
data_files:
- split: train
path: multieurlex-doc-cs/train-*
- split: validation
path: multieurlex-doc-cs/validation-*
- split: test
path: multieurlex-doc-cs/test-*
- config_name: multieurlex-doc-da
data_files:
- split: train
path: multieurlex-doc-da/train-*
- split: validation
path: multieurlex-doc-da/validation-*
- split: test
path: multieurlex-doc-da/test-*
- config_name: multieurlex-doc-de
data_files:
- split: train
path: multieurlex-doc-de/train-*
- split: test
path: multieurlex-doc-de/test-*
- split: validation
path: multieurlex-doc-de/validation-*
- config_name: multieurlex-doc-el
data_files:
- split: train
path: multieurlex-doc-el/train-*
- split: validation
path: multieurlex-doc-el/validation-*
- split: test
path: multieurlex-doc-el/test-*
- config_name: multieurlex-doc-en
data_files:
- split: train
path: multieurlex-doc-en/train-*
- split: test
path: multieurlex-doc-en/test-*
- split: validation
path: multieurlex-doc-en/validation-*
- config_name: multieurlex-doc-es
data_files:
- split: train
path: multieurlex-doc-es/train-*
- split: test
path: multieurlex-doc-es/test-*
- split: validation
path: multieurlex-doc-es/validation-*
- config_name: multieurlex-doc-et
data_files:
- split: train
path: multieurlex-doc-et/train-*
- split: validation
path: multieurlex-doc-et/validation-*
- split: test
path: multieurlex-doc-et/test-*
- config_name: multieurlex-doc-fi
data_files:
- split: train
path: multieurlex-doc-fi/train-*
- split: validation
path: multieurlex-doc-fi/validation-*
- split: test
path: multieurlex-doc-fi/test-*
- config_name: multieurlex-doc-fr
data_files:
- split: train
path: multieurlex-doc-fr/train-*
- split: validation
path: multieurlex-doc-fr/validation-*
- split: test
path: multieurlex-doc-fr/test-*
- config_name: multieurlex-doc-hr
data_files:
- split: train
path: multieurlex-doc-hr/train-*
- split: validation
path: multieurlex-doc-hr/validation-*
- split: test
path: multieurlex-doc-hr/test-*
- config_name: multieurlex-doc-hu
data_files:
- split: train
path: multieurlex-doc-hu/train-*
- split: validation
path: multieurlex-doc-hu/validation-*
- split: test
path: multieurlex-doc-hu/test-*
- config_name: multieurlex-doc-it
data_files:
- split: train
path: multieurlex-doc-it/train-*
- split: validation
path: multieurlex-doc-it/validation-*
- split: test
path: multieurlex-doc-it/test-*
- config_name: multieurlex-doc-nl
data_files:
- split: train
path: multieurlex-doc-nl/train-*
- split: validation
path: multieurlex-doc-nl/validation-*
- split: test
path: multieurlex-doc-nl/test-*
- config_name: multieurlex-doc-pl
data_files:
- split: train
path: multieurlex-doc-pl/train-*
- split: validation
path: multieurlex-doc-pl/validation-*
- split: test
path: multieurlex-doc-pl/test-*
- config_name: multieurlex-doc-pt
data_files:
- split: train
path: multieurlex-doc-pt/train-*
- split: validation
path: multieurlex-doc-pt/validation-*
- split: test
path: multieurlex-doc-pt/test-*
- config_name: multieurlex-doc-ro
data_files:
- split: train
path: multieurlex-doc-ro/train-*
- split: validation
path: multieurlex-doc-ro/validation-*
- split: test
path: multieurlex-doc-ro/test-*
- config_name: multieurlex-doc-sv
data_files:
- split: train
path: multieurlex-doc-sv/train-*
- split: validation
path: multieurlex-doc-sv/validation-*
- split: test
path: multieurlex-doc-sv/test-*
- config_name: wiki-doc-ar-img
data_files:
- split: train
path: wiki-doc-ar-img/train-*
- split: test
path: wiki-doc-ar-img/test-*
- split: validation
path: wiki-doc-ar-img/validation-*
- config_name: wiki-doc-ar-merged
data_files:
- split: train
path: wiki-doc-ar-merged/train-*
- split: test
path: wiki-doc-ar-merged/test-*
- split: validation
path: wiki-doc-ar-merged/validation-*
- config_name: wiki-doc-de-merged
data_files:
- split: train
path: wiki-doc-de-merged/train-*
- split: validation
path: wiki-doc-de-merged/validation-*
- split: test
path: wiki-doc-de-merged/test-*
- config_name: wiki-doc-en-merged
data_files:
- split: train
path: wiki-doc-en-merged/train-*
- split: validation
path: wiki-doc-en-merged/validation-*
- split: test
path: wiki-doc-en-merged/test-*
- config_name: wiki-doc-es-merged
data_files:
- split: train
path: wiki-doc-es-merged/train-*
- split: validation
path: wiki-doc-es-merged/validation-*
- split: test
path: wiki-doc-es-merged/test-*
- config_name: wiki-doc-fr-merged
data_files:
- split: train
path: wiki-doc-fr-merged/train-*
- split: validation
path: wiki-doc-fr-merged/validation-*
- split: test
path: wiki-doc-fr-merged/test-*
- config_name: wiki-doc-it-merged
data_files:
- split: train
path: wiki-doc-it-merged/train-*
- split: validation
path: wiki-doc-it-merged/validation-*
- split: test
path: wiki-doc-it-merged/test-*
- config_name: wiki-doc-ja-img
data_files:
- split: train
path: wiki-doc-ja-img/train-*
- split: test
path: wiki-doc-ja-img/test-*
- split: validation
path: wiki-doc-ja-img/validation-*
- config_name: wiki-doc-ja-merged
data_files:
- split: train
path: wiki-doc-ja-merged/train-*
- split: validation
path: wiki-doc-ja-merged/validation-*
- split: test
path: wiki-doc-ja-merged/test-*
- config_name: wiki-doc-pt-img
data_files:
- split: train
path: wiki-doc-pt-img/train-*
- split: test
path: wiki-doc-pt-img/test-*
- split: validation
path: wiki-doc-pt-img/validation-*
- config_name: wiki-doc-pt-merged
data_files:
- split: train
path: wiki-doc-pt-merged/train-*
- split: validation
path: wiki-doc-pt-merged/validation-*
- split: test
path: wiki-doc-pt-merged/test-*
- config_name: wiki-doc-pt-merged-v2
data_files:
- split: train
path: wiki-doc-pt-merged-v2/train-*
- split: validation
path: wiki-doc-pt-merged-v2/validation-*
- split: test
path: wiki-doc-pt-merged-v2/test-*
- config_name: wiki-doc-zh-img
data_files:
- split: train
path: wiki-doc-zh-img/train-*
- split: test
path: wiki-doc-zh-img/test-*
- split: validation
path: wiki-doc-zh-img/validation-*
- config_name: wiki-doc-zh-merged
data_files:
- split: train
path: wiki-doc-zh-merged/train-*
- split: test
path: wiki-doc-zh-merged/test-*
- split: validation
path: wiki-doc-zh-merged/validation-*
---
## Additional Information
To load the dataset (for example, the `wiki-doc-ar-merged` configuration):
```
import datasets
ds = datasets.load_dataset("AmazonScience/MultilingualMultiModalClassification", data_dir="wiki-doc-ar-merged")
print(ds)
DatasetDict({
train: Dataset({
features: ['image', 'filename', 'words', 'ocr_bboxes', 'label'],
num_rows: 8129
})
validation: Dataset({
features: ['image', 'filename', 'words', 'ocr_bboxes', 'label'],
num_rows: 1742
})
test: Dataset({
features: ['image', 'filename', 'words', 'ocr_bboxes', 'label'],
num_rows: 1743
})
})
# If you encounter a `NonMatchingSplitsSizesError`, try the following workaround:
# from datasets import Image, Value, Sequence, ClassLabel, Features
# features = Features({'image': Image(mode=None, decode=True, id=None), 'filename': Value(dtype='string', id=None), 'words': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'ocr_bboxes': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'label': ClassLabel(names=['AcademicJournal', 'AdultActor', 'Album', 'AmateurBoxer', 'Ambassador', 'AmericanFootballPlayer', 'Amphibian', 'AnimangaCharacter', 'Anime', 'Arachnid', 'Baronet', 'BasketballTeam', 'BeautyQueen', 'BroadcastNetwork', 'BusCompany', 'BusinessPerson', 'CanadianFootballTeam', 'Canal', 'Cardinal', 'Cave', 'ChristianBishop', 'ClassicalMusicArtist', 'ClassicalMusicComposition', 'CollegeCoach', 'Comedian', 'ComicsCreator', 'Congressman', 'Conifer', 'Convention', 'Cricketer', 'Crustacean', 'CultivatedVariety', 'Cycad', 'Dam', 'Economist', 'Engineer', 'Entomologist', 'EurovisionSongContestEntry', 'Fern', 'FilmFestival', 'Fish', 'FootballMatch', 'Glacier', 'GolfTournament', 'Governor', 'Gymnast', 'Historian', 'IceHockeyLeague', 'Insect', 'Journalist', 'Judge', 'Lighthouse', 'Magazine', 'Mayor', 'Medician', 'MemberOfParliament', 'MilitaryPerson', 'Model', 'Mollusca', 'Monarch', 'Moss', 'Mountain', 'MountainPass', 'MountainRange', 'MusicFestival', 'Musical', 'MythologicalFigure', 'Newspaper', 'Noble', 'OfficeHolder', 'Other', 'Philosopher', 'Photographer', 'PlayboyPlaymate', 'Poem', 'Poet', 'Pope', 'President', 'PrimeMinister', 'PublicTransitSystem', 'Racecourse', 'RadioHost', 'RadioStation', 'Religious', 'Reptile', 'Restaurant', 'Road', 'RoadTunnel', 'RollerCoaster', 'RugbyClub', 'RugbyLeague', 'Saint', 'School', 'ScreenWriter', 'Senator', 'ShoppingMall', 'Skater', 'SoccerLeague', 'SoccerManager', 'SoccerPlayer', 'SoccerTournament', 'SportsTeamMember', 'SumoWrestler', 'TelevisionStation', 'TennisTournament', 'TradeUnion', 'University', 'Village', 'VoiceActor', 'Volcano', 'WrestlingEvent'], id=None)})
# ds = datasets.load_dataset("AmazonScience/MultilingualMultiModalClassification", data_dir="wiki-doc-ar-merged", features=features, verification_mode="no_checks")
```
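The `label` column is a `ClassLabel` over the 111 Wikipedia categories listed above. As a minimal sketch (assuming the same `wiki-doc-ar-merged` configuration as in the example above), label ids can be mapped back to their names with `ClassLabel.int2str`:
```python
from datasets import load_dataset

ds = load_dataset("AmazonScience/MultilingualMultiModalClassification", data_dir="wiki-doc-ar-merged")

label_feature = ds["train"].features["label"]     # ClassLabel with 111 category names
example = ds["train"][0]

print(label_feature.num_classes)                  # 111
print(label_feature.int2str(example["label"]))    # human-readable class name for this example
print(len(example["words"]), len(example["ocr_bboxes"]))  # OCR tokens and their bounding boxes
```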
### Licensing Information
#### Wiki
Each image is licensed under the terms of its original provider.
Any additional work contributed by this dataset is provided under CC-BY-SA-4.0, following the Wikipedia license.
#### MultiEURLEX
We provide MultiEURLEX with the same licensing as the original EU data (CC-BY-4.0):
© European Union, 1998-2021
The Commission’s document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.
The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.
Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html \
Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html
### Citation Information
```
@inproceedings{fujinuma-etal-2023-multi,
title = "A Multi-Modal Multilingual Benchmark for Document Image Classification",
author = "Fujinuma, Yoshinari and
Varia, Siddharth and
Sankaran, Nishant and
Appalaraju, Srikar and
Min, Bonan and
Vyas, Yogarshi",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.958",
doi = "10.18653/v1/2023.findings-emnlp.958",
pages = "14361--14376",
abstract = "Document image classification is different from plain-text document classification and consists of classifying a document by understanding the content and structure of documents such as forms, emails, and other such documents. We show that the only existing dataset for this task (Lewis et al., 2006) has several limitations and we introduce two newly curated multilingual datasets WIKI-DOC and MULTIEURLEX-DOC that overcome these limitations. We further undertake a comprehensive study of popular visually-rich document understanding or Document AI models in previously untested setting in document image classification such as 1) multi-label classification, and 2) zero-shot cross-lingual transfer setup. Experimental results show limitations of multilingual Document AI models on cross-lingual transfer across typologically distant languages. Our datasets and findings open the door for future research into improving Document AI models.",
}
``` |
legacy-datasets/c4 | legacy-datasets | "2024-03-05T08:44:26Z" | 18,538 | 240 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:en",
"license:odc-by",
"size_categories:100M<n<1B",
"arxiv:1910.10683",
"region:us"
] | [
"text-generation",
"fill-mask"
] | "2022-03-02T23:29:22Z" | ---
pretty_name: C4
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- odc-by
multilinguality:
- multilingual
size_categories:
- 100M<n<1B
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: c4
viewer: false
dataset_info:
- config_name: en
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 828589180707
num_examples: 364868892
- name: validation
num_bytes: 825767266
num_examples: 364608
download_size: 326778635540
dataset_size: 1657178361414
- config_name: en.noblocklist
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 1029628201361
num_examples: 393391519
- name: validation
num_bytes: 1025606012
num_examples: 393226
download_size: 406611392434
dataset_size: 2059256402722
- config_name: realnewslike
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 38165657946
num_examples: 13799838
- name: validation
num_bytes: 37875873
num_examples: 13863
download_size: 15419740744
dataset_size: 76331315892
- config_name: en.noclean
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 6715509699938
num_examples: 1063805381
- name: validation
num_bytes: 6706356913
num_examples: 1065029
download_size: 2430376268625
dataset_size: 6722216056851
---
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Deprecated:</b> Dataset "c4" is deprecated and will be deleted. Use "<a href="https://huggingface.co/datasets/allenai/c4">allenai/c4</a>" instead.</p>
</div>
# Dataset Card for C4
## Table of Contents
- [Dataset Card for C4](#dataset-card-for-c4)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/allenai/c4
- **Paper:** https://arxiv.org/abs/1910.10683
### Dataset Summary
A colossal, cleaned version of Common Crawl's web crawl corpus, based on the Common Crawl dataset (https://commoncrawl.org).
This is the version prepared by AllenAI, hosted at this address: https://huggingface.co/datasets/allenai/c4
It comes in four variants:
- `en`: 305GB in JSON format
- `en.noblocklist`: 380GB in JSON format
- `en.noclean`: 2.3TB in JSON format
- `realnewslike`: 15GB in JSON format
The `en.noblocklist` variant is exactly the same as the `en` variant, except we turned off the so-called "badwords filter", which removes all documents that contain words from the lists at https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words.
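Since this copy of the dataset is deprecated, the variants above are best loaded from the [allenai/c4](https://huggingface.co/datasets/allenai/c4) repository. As a minimal sketch (assuming a recent version of 🤗 Datasets), streaming avoids downloading hundreds of gigabytes up front:
```python
from itertools import islice
from datasets import load_dataset

# Stream the `realnewslike` variant instead of downloading it in full.
c4 = load_dataset("allenai/c4", "realnewslike", split="train", streaming=True)

for example in islice(c4, 3):
    print(example["url"])
    print(example["timestamp"])
    print(example["text"][:200], "...")
```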
### Supported Tasks and Leaderboards
C4 is mainly intended to pretrain language models and word representations.
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
An example from the `en` config is:
```
{
'url': 'https://klyq.com/beginners-bbq-class-taking-place-in-missoula/',
'text': 'Beginners BBQ Class Taking Place in Missoula!\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.',
'timestamp': '2019-04-25T12:57:54Z'
}
```
### Data Fields
Each data instance has the following fields:
- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp as a string
### Data Splits
| name | train |validation|
|----------------|--------:|---------:|
| en |364868892| 364608|
| en.noblocklist |393391519| 393226|
| en.noclean | ?| ?|
| realnewslike | 13799838| 13863|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The C4 dataset is a collection of about 750GB of English-language text sourced from the public Common Crawl web scrape. It includes heuristics to extract only natural language (as opposed to boilerplate and other gibberish) in addition to extensive deduplication. You can find the code that has been used to build this dataset in [c4.py](https://github.com/tensorflow/datasets/blob/5952d3d60d60e1727786fa7a9a23d24bb463d4d6/tensorflow_datasets/text/c4.py) by TensorFlow Datasets.
The dataset was explicitly designed to be English only: any page that was not given a probability of at least 99% of being English by [langdetect](https://github.com/Mimino666/langdetect) was discarded.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
### Citation Information
```
@article{2019t5,
author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
journal = {arXiv e-prints},
year = {2019},
archivePrefix = {arXiv},
eprint = {1910.10683},
}
```
### Contributions
Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
Qi28/SD_QZ | Qi28 | "2025-02-12T17:28:08Z" | 18,448 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-11-19T13:22:11Z" | ---
license: apache-2.0
---
|
mteb/sts12-sts | mteb | "2022-09-27T19:11:50Z" | 18,404 | 7 | [
"language:en",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2022-04-20T10:47:29Z" | ---
language:
- en
--- |
open-r1/OpenR1-Math-220k | open-r1 | "2025-02-18T11:45:27Z" | 18,280 | 396 | [
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2025-02-10T13:41:48Z" | ---
license: apache-2.0
language:
- en
configs:
- config_name: all
data_files:
- split: train
path: all/train-*
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: extended
data_files:
- split: train
path: extended/train-*
dataset_info:
- config_name: all
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: problem_type
dtype: string
- name: question_type
dtype: string
- name: source
dtype: string
- name: uuid
dtype: string
- name: is_reasoning_complete
sequence: bool
- name: generations
sequence: string
- name: correctness_math_verify
sequence: bool
- name: correctness_llama
sequence: bool
- name: finish_reasons
sequence: string
- name: correctness_count
dtype: int64
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 9734110026.0
num_examples: 225129
download_size: 4221672067
dataset_size: 9734110026.0
- config_name: default
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: problem_type
dtype: string
- name: question_type
dtype: string
- name: source
dtype: string
- name: uuid
dtype: string
- name: is_reasoning_complete
sequence: bool
- name: generations
sequence: string
- name: correctness_math_verify
sequence: bool
- name: correctness_llama
sequence: bool
- name: finish_reasons
sequence: string
- name: correctness_count
dtype: int64
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 4964543659
num_examples: 93733
download_size: 2149897914
dataset_size: 4964543659
- config_name: extended
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: problem_type
dtype: string
- name: question_type
dtype: string
- name: source
dtype: string
- name: uuid
dtype: string
- name: is_reasoning_complete
sequence: bool
- name: generations
sequence: string
- name: correctness_math_verify
sequence: bool
- name: correctness_llama
sequence: bool
- name: finish_reasons
sequence: string
- name: correctness_count
dtype: int64
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 4769566550
num_examples: 131396
download_size: 2063936457
dataset_size: 4769566550
---
# OpenR1-Math-220k
## Dataset description
OpenR1-Math-220k is a large-scale dataset for mathematical reasoning. It consists of 220k math problems with two to four reasoning traces generated by [DeepSeek R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) for problems from NuminaMath 1.5.
The traces were verified using [Math Verify](https://github.com/huggingface/Math-Verify) for most samples and [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) as a judge for 12% of the samples, and each problem contains at least one reasoning trace with a correct answer.
The dataset consists of two splits:
- `default`, with 94k problems, which achieves the best performance after SFT.
- `extended`, with 131k samples, where we add data sources like `cn_k12`. This provides more reasoning traces, but we found the performance after SFT to be lower than with the `default` subset, likely because the questions from `cn_k12` are less difficult than those from other sources.
You can load the dataset as follows:
```python
from datasets import load_dataset
ds = load_dataset("open-r1/OpenR1-Math-220k", "default")
```
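The per-trace verification columns (`generations`, `correctness_math_verify`, `correctness_count`) can be used for rejection-sampling-style filtering. A minimal sketch, assuming the two sequences are aligned one-to-one as described above:
```python
from datasets import load_dataset

ds = load_dataset("open-r1/OpenR1-Math-220k", "default", split="train")

# Keep, for each problem, only the R1 traces that Math Verify marked as correct.
def keep_verified(example):
    pairs = zip(example["generations"], example["correctness_math_verify"])
    example["verified_generations"] = [gen for gen, ok in pairs if ok]
    return example

verified = ds.map(keep_verified)
print(verified[0]["problem"])
print(len(verified[0]["verified_generations"]), "verified trace(s) for the first problem")
```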
## Dataset curation
To build OpenR1-Math-220k, we prompt the [DeepSeek R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) model to generate solutions for 400k problems from [NuminaMath 1.5](https://huggingface.co/datasets/AI-MO/NuminaMath-1.5) using [SGLang](https://github.com/sgl-project/sglang); the generation code is available [here](https://github.com/huggingface/open-r1/tree/main/slurm). We follow the model card’s recommended generation parameters and prepend the following instruction to the user prompt:
`"Please reason step by step, and put your final answer within \boxed{}."`
We set a 16k token limit per generation, as our analysis showed that only 75% of problems could be solved in under 8k tokens, and most of the remaining problems required the full 16k tokens. We were able to generate 25 solutions per hour per H100, enabling us to generate 300k problem solutions per day on 512 H100s.
We generate two solutions per problem—and in some cases, four—to provide flexibility in filtering and training. This approach allows for rejection sampling, similar to DeepSeek R1’s methodology, and also makes the dataset suitable for preference optimisation methods like DPO.
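For illustration, a minimal sketch of one such generation request. SGLang serves an OpenAI-compatible API, but the endpoint URL, model identifier, sampling values, and the exact way the instruction is joined to the problem are assumptions here, not the exact production setup:
```python
from openai import OpenAI

# Assumed local SGLang server exposing an OpenAI-compatible endpoint.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

INSTRUCTION = "Please reason step by step, and put your final answer within \\boxed{}."
problem = "What is the sum of the first 100 positive integers?"  # illustrative problem

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",                    # assumed model identifier
    messages=[{"role": "user", "content": f"{INSTRUCTION}\n{problem}"}],
    max_tokens=16384,                                   # the 16k-token cap described above
    temperature=0.6,                                    # assumed recommended sampling temperature
)
print(response.choices[0].message.content)
```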
## License
The dataset is licensed under Apache 2.0
|
rayliuca/WikidataLabels | rayliuca | "2024-01-11T04:17:57Z" | 18,165 | 1 | [
"task_categories:translation",
"task_categories:text2text-generation",
"language:en",
"language:fr",
"language:de",
"language:ja",
"language:zh",
"language:hi",
"language:ar",
"language:bn",
"language:ru",
"language:es",
"license:cc0-1.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"translation",
"text2text-generation"
] | "2024-01-01T00:23:08Z" | ---
license: cc0-1.0
dataset_info:
- config_name: aa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13986211
num_examples: 436895
download_size: 9821312
dataset_size: 13986211
- config_name: ab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5012532
num_examples: 159908
download_size: 3013706
dataset_size: 5012532
- config_name: abs
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252728
num_examples: 143986
download_size: 2567450
dataset_size: 4252728
- config_name: ace
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 19105673
num_examples: 574712
download_size: 13573374
dataset_size: 19105673
- config_name: ady
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4444259
num_examples: 148627
download_size: 2705754
dataset_size: 4444259
- config_name: ady-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4412556
num_examples: 147884
download_size: 2682170
dataset_size: 4412556
- config_name: aeb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4305734
num_examples: 145198
download_size: 2606368
dataset_size: 4305734
- config_name: aeb-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4467930
num_examples: 148796
download_size: 2722169
dataset_size: 4467930
- config_name: aeb-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12770359
num_examples: 404946
download_size: 8886489
dataset_size: 12770359
- config_name: af
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 58561042
num_examples: 1643153
download_size: 42539052
dataset_size: 58561042
- config_name: agq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 1317
num_examples: 33
download_size: 2906
dataset_size: 1317
- config_name: ak
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14198715
num_examples: 443037
download_size: 9991525
dataset_size: 14198715
- config_name: aln
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13811116
num_examples: 432089
download_size: 9673418
dataset_size: 13811116
- config_name: als
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20691
num_examples: 543
download_size: 17540
dataset_size: 20691
- config_name: alt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 108390
num_examples: 1814
download_size: 59046
dataset_size: 108390
- config_name: am
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5231176
num_examples: 163038
download_size: 3187164
dataset_size: 5231176
- config_name: ami
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 21519
num_examples: 686
download_size: 16640
dataset_size: 21519
- config_name: an
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 240345072
num_examples: 5921087
download_size: 164895205
dataset_size: 240345072
- config_name: ang
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14275715
num_examples: 443461
download_size: 10063758
dataset_size: 14275715
- config_name: anp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8558258
num_examples: 241612
download_size: 4381360
dataset_size: 8558258
- config_name: ar
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 291173732
num_examples: 5724064
download_size: 159369497
dataset_size: 291173732
- config_name: arc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4473283
num_examples: 150006
download_size: 2722619
dataset_size: 4473283
- config_name: arn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13879729
num_examples: 433912
download_size: 9715431
dataset_size: 13879729
- config_name: arq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4346991
num_examples: 146004
download_size: 2636972
dataset_size: 4346991
- config_name: ary
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5358568
num_examples: 171568
download_size: 3313402
dataset_size: 5358568
- config_name: arz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 81806333
num_examples: 1669699
download_size: 49423508
dataset_size: 81806333
- config_name: as
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 21658610
num_examples: 450074
download_size: 9641626
dataset_size: 21658610
- config_name: ase
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252943
num_examples: 143986
download_size: 2568106
dataset_size: 4252943
- config_name: ast
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 1385628786
num_examples: 20696237
download_size: 955908362
dataset_size: 1385628786
- config_name: atj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12996229
num_examples: 411639
download_size: 9057557
dataset_size: 12996229
- config_name: av
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4722934
num_examples: 153781
download_size: 2880103
dataset_size: 4722934
- config_name: avk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13194485
num_examples: 414598
download_size: 9200917
dataset_size: 13194485
- config_name: awa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8599312
num_examples: 242320
download_size: 4411751
dataset_size: 8599312
- config_name: ay
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14269432
num_examples: 443521
download_size: 10029939
dataset_size: 14269432
- config_name: az
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 21049248
num_examples: 516732
download_size: 14117527
dataset_size: 21049248
- config_name: azb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 30781587
num_examples: 607562
download_size: 16028687
dataset_size: 30781587
- config_name: ba
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 11525351
num_examples: 261509
download_size: 6733777
dataset_size: 11525351
- config_name: ban
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13674052
num_examples: 426706
download_size: 9513747
dataset_size: 13674052
- config_name: ban-bali
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 50961
num_examples: 748
download_size: 25817
dataset_size: 50961
- config_name: bar
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 54783034
num_examples: 1566120
download_size: 40389830
dataset_size: 54783034
- config_name: bbc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12820895
num_examples: 406960
download_size: 8917054
dataset_size: 12820895
- config_name: bcc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8017228
num_examples: 241977
download_size: 4344579
dataset_size: 8017228
- config_name: be
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 30978832
num_examples: 564184
download_size: 17461174
dataset_size: 30978832
- config_name: be-tarask
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18931909
num_examples: 374396
download_size: 10871239
dataset_size: 18931909
- config_name: bg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 200628708
num_examples: 4383953
download_size: 137745533
dataset_size: 200628708
- config_name: bgn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7999280
num_examples: 241566
download_size: 4331249
dataset_size: 7999280
- config_name: bi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14040026
num_examples: 438382
download_size: 9867032
dataset_size: 14040026
- config_name: bjn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8375348
num_examples: 254558
download_size: 5722334
dataset_size: 8375348
- config_name: bm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18145787
num_examples: 549694
download_size: 13129193
dataset_size: 18145787
- config_name: bn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 815803977
num_examples: 9767284
download_size: 261147329
dataset_size: 815803977
- config_name: bo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 11671330
num_examples: 278307
download_size: 5669602
dataset_size: 11671330
- config_name: bpy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15497749
num_examples: 347458
download_size: 6991190
dataset_size: 15497749
- config_name: bqi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8017455
num_examples: 241984
download_size: 4345123
dataset_size: 8017455
- config_name: br
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 58304963
num_examples: 1653800
download_size: 42722031
dataset_size: 58304963
- config_name: brh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5328437
num_examples: 171504
download_size: 3376189
dataset_size: 5328437
- config_name: bs
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 30441466
num_examples: 858190
download_size: 21606575
dataset_size: 30441466
- config_name: btm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252525
num_examples: 143980
download_size: 2567218
dataset_size: 4252525
- config_name: bto
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12841721
num_examples: 407470
download_size: 8934218
dataset_size: 12841721
- config_name: bug
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7595464
num_examples: 235268
download_size: 5129941
dataset_size: 7595464
- config_name: bxr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4713699
num_examples: 153707
download_size: 2869313
dataset_size: 4713699
- config_name: ca
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 408509932
num_examples: 9936886
download_size: 288474980
dataset_size: 408509932
- config_name: cbk-zam
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14108232
num_examples: 440345
download_size: 9920793
dataset_size: 14108232
- config_name: cdo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6503254
num_examples: 201362
download_size: 4137841
dataset_size: 6503254
- config_name: ce
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 28093148
num_examples: 607767
download_size: 16367596
dataset_size: 28093148
- config_name: ceb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 332947091
num_examples: 7769402
download_size: 219525737
dataset_size: 332947091
- config_name: ch
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13983906
num_examples: 436785
download_size: 9817385
dataset_size: 13983906
- config_name: cho
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13950786
num_examples: 435869
download_size: 9791296
dataset_size: 13950786
- config_name: chr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5386793
num_examples: 172855
download_size: 3419676
dataset_size: 5386793
- config_name: chy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13994916
num_examples: 437007
download_size: 9830465
dataset_size: 13994916
- config_name: ckb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 23343034
num_examples: 511183
download_size: 11459344
dataset_size: 23343034
- config_name: co
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 47080480
num_examples: 1346929
download_size: 34551346
dataset_size: 47080480
- config_name: cps
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12849864
num_examples: 407695
download_size: 8941921
dataset_size: 12849864
- config_name: cr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5516556
num_examples: 176667
download_size: 3532952
dataset_size: 5516556
- config_name: crh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 10864382
num_examples: 336709
download_size: 7542853
dataset_size: 10864382
- config_name: crh-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4419064
num_examples: 148046
download_size: 2688683
dataset_size: 4419064
- config_name: crh-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14201429
num_examples: 442905
download_size: 9986290
dataset_size: 14201429
- config_name: cs
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 140189244
num_examples: 3384048
download_size: 97516751
dataset_size: 140189244
- config_name: csb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20177120
num_examples: 619275
download_size: 14528772
dataset_size: 20177120
- config_name: cv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8047221
num_examples: 215611
download_size: 4857718
dataset_size: 8047221
- config_name: cy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 89241808
num_examples: 2244550
download_size: 62686006
dataset_size: 89241808
- config_name: da
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 130931077
num_examples: 3448894
download_size: 98202417
dataset_size: 130931077
- config_name: dag
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 2664957
num_examples: 78534
download_size: 2052615
dataset_size: 2664957
- config_name: de
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 765398522
num_examples: 17531361
download_size: 527642124
dataset_size: 765398522
- config_name: de-at
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 53043722
num_examples: 1515373
download_size: 38761571
dataset_size: 53043722
- config_name: de-ch
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 53480908
num_examples: 1528137
download_size: 39349412
dataset_size: 53480908
- config_name: de-formal
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4256391
num_examples: 144061
download_size: 2571862
dataset_size: 4256391
- config_name: din
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12819746
num_examples: 406591
download_size: 8922303
dataset_size: 12819746
- config_name: diq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7570161
num_examples: 232674
download_size: 5057742
dataset_size: 7570161
- config_name: dsb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16135830
num_examples: 491423
download_size: 11412316
dataset_size: 16135830
- config_name: dtp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13867373
num_examples: 433733
download_size: 9720699
dataset_size: 13867373
- config_name: dty
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8839082
num_examples: 246026
download_size: 4551845
dataset_size: 8839082
- config_name: dua
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 2631
num_examples: 87
download_size: 3877
dataset_size: 2631
- config_name: dv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 81396462
num_examples: 2103276
download_size: 45332104
dataset_size: 81396462
- config_name: dz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8590239
num_examples: 242196
download_size: 4406353
dataset_size: 8590239
- config_name: ee
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14377017
num_examples: 447208
download_size: 10136064
dataset_size: 14377017
- config_name: egl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13068224
num_examples: 413551
download_size: 9121776
dataset_size: 13068224
- config_name: el
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 32978562
num_examples: 592016
download_size: 19577876
dataset_size: 32978562
- config_name: eml
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14768563
num_examples: 458847
download_size: 10453636
dataset_size: 14768563
- config_name: en
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6327454281
num_examples: 81801560
download_size: 4224231068
dataset_size: 6327454281
- config_name: en-ca
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 73305274
num_examples: 1909970
download_size: 53060194
dataset_size: 73305274
- config_name: en-gb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 115978412
num_examples: 2520405
download_size: 78924421
dataset_size: 115978412
- config_name: en-us
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14815
num_examples: 332
download_size: 9953
dataset_size: 14815
- config_name: eo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 256196064
num_examples: 6285304
download_size: 177219679
dataset_size: 256196064
- config_name: es
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 730214298
num_examples: 17233968
download_size: 514588069
dataset_size: 730214298
- config_name: es-419
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4355180
num_examples: 146476
download_size: 2659218
dataset_size: 4355180
- config_name: es-formal
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4280933
num_examples: 144717
download_size: 2592085
dataset_size: 4280933
- config_name: et
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 65123623
num_examples: 1820762
download_size: 48197302
dataset_size: 65123623
- config_name: eu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 290282374
num_examples: 7109758
download_size: 197889378
dataset_size: 290282374
- config_name: ext
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 223257222
num_examples: 5359047
download_size: 147078789
dataset_size: 223257222
- config_name: fa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 123727757
num_examples: 2142642
download_size: 65952114
dataset_size: 123727757
- config_name: ff
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14116652
num_examples: 440614
download_size: 9920388
dataset_size: 14116652
- config_name: fi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 286539944
num_examples: 6905698
download_size: 209916638
dataset_size: 286539944
- config_name: fit
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20217258
num_examples: 620391
download_size: 14566702
dataset_size: 20217258
- config_name: fj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14159041
num_examples: 441745
download_size: 9956108
dataset_size: 14159041
- config_name: fkv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4328482
num_examples: 145988
download_size: 2619845
dataset_size: 4328482
- config_name: fo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 24474476
num_examples: 731732
download_size: 17876981
dataset_size: 24474476
- config_name: fr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 774128723
num_examples: 17908351
download_size: 534489308
dataset_size: 774128723
- config_name: frc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17896106
num_examples: 547258
download_size: 12953740
dataset_size: 17896106
- config_name: frp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 40902510
num_examples: 1191134
download_size: 29778105
dataset_size: 40902510
- config_name: frr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16979214
num_examples: 515350
download_size: 12069637
dataset_size: 16979214
- config_name: fur
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42077410
num_examples: 1221071
download_size: 30714082
dataset_size: 42077410
- config_name: ga
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 471527543
num_examples: 11524282
download_size: 320967189
dataset_size: 471527543
- config_name: gag
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14149375
num_examples: 440732
download_size: 9940551
dataset_size: 14149375
- config_name: gan
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 31572161
num_examples: 905186
download_size: 18909564
dataset_size: 31572161
- config_name: gan-hans
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 31004794
num_examples: 889875
download_size: 18566811
dataset_size: 31004794
- config_name: gan-hant
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4374444
num_examples: 147098
download_size: 2657182
dataset_size: 4374444
- config_name: gcr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4311409
num_examples: 145829
download_size: 2618211
dataset_size: 4311409
- config_name: gd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 49316935
num_examples: 1429457
download_size: 36220978
dataset_size: 49316935
- config_name: gl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 289484839
num_examples: 7052226
download_size: 197315151
dataset_size: 289484839
- config_name: glk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8327018
num_examples: 249115
download_size: 4538325
dataset_size: 8327018
- config_name: gn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14212974
num_examples: 442765
download_size: 10004863
dataset_size: 14212974
- config_name: gom
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4584575
num_examples: 150273
download_size: 2780570
dataset_size: 4584575
- config_name: gom-deva
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8585678
num_examples: 242131
download_size: 4400578
dataset_size: 8585678
- config_name: gom-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12783006
num_examples: 405302
download_size: 8897342
dataset_size: 12783006
- config_name: gor
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14667616
num_examples: 454512
download_size: 10319196
dataset_size: 14667616
- config_name: got
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5432139
num_examples: 172951
download_size: 3435531
dataset_size: 5432139
- config_name: grc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4494817
num_examples: 149631
download_size: 2746170
dataset_size: 4494817
- config_name: gu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 23788894
num_examples: 486140
download_size: 10779200
dataset_size: 23788894
- config_name: guc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 1419
num_examples: 38
download_size: 3054
dataset_size: 1419
- config_name: guw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 118
num_examples: 4
download_size: 1864
dataset_size: 118
- config_name: gv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20683485
num_examples: 631005
download_size: 14894590
dataset_size: 20683485
- config_name: ha
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14716168
num_examples: 455836
download_size: 10421790
dataset_size: 14716168
- config_name: hak
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6128644
num_examples: 193036
download_size: 3991729
dataset_size: 6128644
- config_name: haw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14158084
num_examples: 441511
download_size: 9952975
dataset_size: 14158084
- config_name: he
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 43629050
num_examples: 884809
download_size: 27221301
dataset_size: 43629050
- config_name: hi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 37237187
num_examples: 668964
download_size: 17804873
dataset_size: 37237187
- config_name: hif
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14457954
num_examples: 449009
download_size: 10166264
dataset_size: 14457954
- config_name: hif-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14519845
num_examples: 454037
download_size: 10240704
dataset_size: 14519845
- config_name: hil
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12928914
num_examples: 409962
download_size: 9009705
dataset_size: 12928914
- config_name: ho
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13950504
num_examples: 435857
download_size: 9790849
dataset_size: 13950504
- config_name: hr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 61272623
num_examples: 1720527
download_size: 45307411
dataset_size: 61272623
- config_name: hrx
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12869295
num_examples: 407823
download_size: 8964114
dataset_size: 12869295
- config_name: hsb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 23720349
num_examples: 707100
download_size: 17145693
dataset_size: 23720349
- config_name: ht
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16835529
num_examples: 509955
download_size: 11880404
dataset_size: 16835529
- config_name: hu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 85054175
num_examples: 2200589
download_size: 64143342
dataset_size: 85054175
- config_name: hu-formal
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252810
num_examples: 143986
download_size: 2567582
dataset_size: 4252810
- config_name: hy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 39339286
num_examples: 773925
download_size: 22108994
dataset_size: 39339286
- config_name: hyw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5443608
num_examples: 166902
download_size: 3238370
dataset_size: 5443608
- config_name: hz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13948574
num_examples: 435804
download_size: 9788697
dataset_size: 13948574
- config_name: ia
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 229143237
num_examples: 5616433
download_size: 155877454
dataset_size: 229143237
- config_name: id
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 95220928
num_examples: 2512331
download_size: 69525046
dataset_size: 95220928
- config_name: ie
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 225725262
num_examples: 5533032
download_size: 153371930
dataset_size: 225725262
- config_name: ig
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20109388
num_examples: 617044
download_size: 14475407
dataset_size: 20109388
- config_name: ii
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4310418
num_examples: 145332
download_size: 2609723
dataset_size: 4310418
- config_name: ik
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13989609
num_examples: 436958
download_size: 9823174
dataset_size: 13989609
- config_name: ike-cans
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4352278
num_examples: 146355
download_size: 2645174
dataset_size: 4352278
- config_name: ike-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13851135
num_examples: 432932
download_size: 9714057
dataset_size: 13851135
- config_name: ilo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15955483
num_examples: 480555
download_size: 11141942
dataset_size: 15955483
- config_name: inh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4634360
num_examples: 152226
download_size: 2831580
dataset_size: 4634360
- config_name: io
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 233656822
num_examples: 5757440
download_size: 159720058
dataset_size: 233656822
- config_name: is
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 51679396
num_examples: 1483610
download_size: 37965494
dataset_size: 51679396
- config_name: it
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 536601426
num_examples: 12631487
download_size: 375025347
dataset_size: 536601426
- config_name: iu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5360588
num_examples: 172215
download_size: 3402239
dataset_size: 5360588
- config_name: ja
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 140641579
num_examples: 2917962
download_size: 92145329
dataset_size: 140641579
- config_name: jam
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18849751
num_examples: 571777
download_size: 13684422
dataset_size: 18849751
- config_name: jbo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14301985
num_examples: 446512
download_size: 9994516
dataset_size: 14301985
- config_name: jv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 27232302
num_examples: 794181
download_size: 19651565
dataset_size: 27232302
- config_name: ka
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 24073345
num_examples: 399546
download_size: 11679979
dataset_size: 24073345
- config_name: kaa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14082184
num_examples: 439411
download_size: 9902820
dataset_size: 14082184
- config_name: kab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18459676
num_examples: 557857
download_size: 13384218
dataset_size: 18459676
- config_name: kbd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4594409
num_examples: 149733
download_size: 2759503
dataset_size: 4594409
- config_name: kbd-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4417661
num_examples: 148017
download_size: 2687531
dataset_size: 4417661
- config_name: kbp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12873178
num_examples: 408039
download_size: 8965474
dataset_size: 12873178
- config_name: kea
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12793700
num_examples: 405901
download_size: 8896866
dataset_size: 12793700
- config_name: kg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 40949149
num_examples: 1193499
download_size: 29766747
dataset_size: 40949149
- config_name: khw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4308653
num_examples: 145279
download_size: 2608581
dataset_size: 4308653
- config_name: ki
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14056900
num_examples: 439015
download_size: 9875534
dataset_size: 14056900
- config_name: kj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13881723
num_examples: 433861
download_size: 9733715
dataset_size: 13881723
- config_name: kjp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8504302
num_examples: 240339
download_size: 4341523
dataset_size: 8504302
- config_name: kk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 19216115
num_examples: 428880
download_size: 11577682
dataset_size: 19216115
- config_name: kk-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7241749
num_examples: 211731
download_size: 4487032
dataset_size: 7241749
- config_name: kk-kz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4937945
num_examples: 160027
download_size: 3062906
dataset_size: 4937945
- config_name: kk-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 22197825
num_examples: 677162
download_size: 16072332
dataset_size: 22197825
- config_name: kk-tr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20060635
num_examples: 616521
download_size: 14438929
dataset_size: 20060635
- config_name: ko
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 60335212
num_examples: 1364440
download_size: 39186630
dataset_size: 60335212
- config_name: ko-kp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4338717
num_examples: 146150
download_size: 2630925
dataset_size: 4338717
- config_name: koi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4737590
num_examples: 155082
download_size: 2894674
dataset_size: 4737590
- config_name: kr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13886057
num_examples: 433990
download_size: 9737602
dataset_size: 13886057
- config_name: krc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4646136
num_examples: 151026
download_size: 2785454
dataset_size: 4646136
- config_name: kri
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12798530
num_examples: 406032
download_size: 8902330
dataset_size: 12798530
- config_name: krj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13850324
num_examples: 433444
download_size: 9703460
dataset_size: 13850324
- config_name: krl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12788020
num_examples: 405729
download_size: 8893337
dataset_size: 12788020
- config_name: ks
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4390604
num_examples: 147033
download_size: 2671069
dataset_size: 4390604
- config_name: ks-deva
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8567518
num_examples: 241832
download_size: 4387687
dataset_size: 8567518
- config_name: ksh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20394712
num_examples: 624523
download_size: 14698860
dataset_size: 20394712
- config_name: ku
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8037777
num_examples: 239515
download_size: 5306097
dataset_size: 8037777
- config_name: ku-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4577826
num_examples: 151290
download_size: 2796159
dataset_size: 4577826
- config_name: ku-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14683841
num_examples: 458802
download_size: 10371977
dataset_size: 14683841
- config_name: kum
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252739
num_examples: 143985
download_size: 2567503
dataset_size: 4252739
- config_name: kv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4946978
num_examples: 158888
download_size: 2997865
dataset_size: 4946978
- config_name: kw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20245535
num_examples: 621432
download_size: 14581378
dataset_size: 20245535
- config_name: ky
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8909613
num_examples: 235165
download_size: 5462115
dataset_size: 8909613
- config_name: la
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 299766395
num_examples: 7085082
download_size: 201477460
dataset_size: 299766395
- config_name: lad
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20336417
num_examples: 622775
download_size: 14653199
dataset_size: 20336417
- config_name: lb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 56473066
num_examples: 1601093
download_size: 41410732
dataset_size: 56473066
- config_name: lbe
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4501470
num_examples: 149898
download_size: 2744786
dataset_size: 4501470
- config_name: lez
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4890798
num_examples: 155936
download_size: 2959653
dataset_size: 4890798
- config_name: lfn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14709210
num_examples: 456719
download_size: 10408539
dataset_size: 14709210
- config_name: lg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13979286
num_examples: 436009
download_size: 9802779
dataset_size: 13979286
- config_name: li
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 43476868
num_examples: 1253970
download_size: 31750932
dataset_size: 43476868
- config_name: lij
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42327066
num_examples: 1227346
download_size: 30898971
dataset_size: 42327066
- config_name: liv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12781331
num_examples: 405236
download_size: 8895889
dataset_size: 12781331
- config_name: lki
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8039166
num_examples: 242526
download_size: 4363703
dataset_size: 8039166
- config_name: lld
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 90305
num_examples: 2634
download_size: 69672
dataset_size: 90305
- config_name: lmo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18287638
num_examples: 545398
download_size: 13130119
dataset_size: 18287638
- config_name: ln
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14123637
num_examples: 439731
download_size: 9915851
dataset_size: 14123637
- config_name: lo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 9905189
num_examples: 271710
download_size: 5313218
dataset_size: 9905189
- config_name: loz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13695602
num_examples: 428723
download_size: 9581113
dataset_size: 13695602
- config_name: lt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 39902419
num_examples: 1096727
download_size: 29185765
dataset_size: 39902419
- config_name: ltg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13884707
num_examples: 433453
download_size: 9736637
dataset_size: 13884707
- config_name: lus
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13695197
num_examples: 428712
download_size: 9580538
dataset_size: 13695197
- config_name: luz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8459036
num_examples: 253454
download_size: 4687414
dataset_size: 8459036
- config_name: lv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 27242119
num_examples: 764753
download_size: 19676667
dataset_size: 27242119
- config_name: lzh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 25067538
num_examples: 685152
download_size: 14998856
dataset_size: 25067538
- config_name: mdf
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4634268
num_examples: 152141
download_size: 2820744
dataset_size: 4634268
- config_name: mg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 43863002
num_examples: 1271074
download_size: 32016826
dataset_size: 43863002
- config_name: mh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13775721
num_examples: 431162
download_size: 9644397
dataset_size: 13775721
- config_name: mi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20857040
num_examples: 637118
download_size: 15060301
dataset_size: 20857040
- config_name: min
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 53044258
num_examples: 1464128
download_size: 38587450
dataset_size: 53044258
- config_name: mk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 24087229
num_examples: 449241
download_size: 12217912
dataset_size: 24087229
- config_name: ml
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 189266798
num_examples: 2664923
download_size: 71344031
dataset_size: 189266798
- config_name: mn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 9311543
num_examples: 219695
download_size: 5272784
dataset_size: 9311543
- config_name: mni
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8696893
num_examples: 243616
download_size: 4470994
dataset_size: 8696893
- config_name: mnw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8861861
num_examples: 244906
download_size: 4517726
dataset_size: 8861861
- config_name: mo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5377009
num_examples: 172144
download_size: 3405661
dataset_size: 5377009
- config_name: mr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 26855182
num_examples: 526220
download_size: 12358679
dataset_size: 26855182
- config_name: mrh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 68
num_examples: 2
download_size: 1820
dataset_size: 68
- config_name: mrj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5007903
num_examples: 160889
download_size: 3073431
dataset_size: 5007903
- config_name: ms
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 64674328
num_examples: 1803714
download_size: 47165217
dataset_size: 64674328
- config_name: ms-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 136496
num_examples: 2961
download_size: 92316
dataset_size: 136496
- config_name: mt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 22632686
num_examples: 682867
download_size: 16352572
dataset_size: 22632686
- config_name: mus
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14013416
num_examples: 437688
download_size: 9835239
dataset_size: 14013416
- config_name: mwl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14493299
num_examples: 448926
download_size: 10225888
dataset_size: 14493299
- config_name: my
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16182182
num_examples: 345096
download_size: 7981905
dataset_size: 16182182
- config_name: mzn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17973941
num_examples: 447870
download_size: 9174617
dataset_size: 17973941
- config_name: na
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13992666
num_examples: 436956
download_size: 9823328
dataset_size: 13992666
- config_name: nah
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14490294
num_examples: 449748
download_size: 10192501
dataset_size: 14490294
- config_name: nan-hani
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 191
num_examples: 6
download_size: 1925
dataset_size: 191
- config_name: nap
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42362346
num_examples: 1229161
download_size: 30918265
dataset_size: 42362346
- config_name: nb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 142554768
num_examples: 3688026
download_size: 105549981
dataset_size: 142554768
- config_name: nds
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 58766114
num_examples: 1666813
download_size: 43421948
dataset_size: 58766114
- config_name: nds-nl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 44121756
num_examples: 1273149
download_size: 32201410
dataset_size: 44121756
- config_name: ne
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 11925386
num_examples: 295006
download_size: 6265232
dataset_size: 11925386
- config_name: new
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16906308
num_examples: 350362
download_size: 7680329
dataset_size: 16906308
- config_name: ng
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13870754
num_examples: 433582
download_size: 9723795
dataset_size: 13870754
- config_name: nia
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20649
num_examples: 515
download_size: 16535
dataset_size: 20649
- config_name: niu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12794247
num_examples: 405902
download_size: 8897260
dataset_size: 12794247
- config_name: nl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5016576732
num_examples: 61931959
download_size: 3380404239
dataset_size: 5016576732
- config_name: nn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 99997815
num_examples: 2708994
download_size: 74736304
dataset_size: 99997815
- config_name: 'no'
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 2934
num_examples: 64
download_size: 4108
dataset_size: 2934
- config_name: nod
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4322068
num_examples: 145566
download_size: 2618106
dataset_size: 4322068
- config_name: nov
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14150434
num_examples: 440903
download_size: 9947798
dataset_size: 14150434
- config_name: nqo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8094271
num_examples: 243184
download_size: 4398836
dataset_size: 8094271
- config_name: nrm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 41330956
num_examples: 1203295
download_size: 30084065
dataset_size: 41330956
- config_name: nso
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14178321
num_examples: 443205
download_size: 9959708
dataset_size: 14178321
- config_name: nv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15351770
num_examples: 455188
download_size: 10472240
dataset_size: 15351770
- config_name: ny
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13989813
num_examples: 436764
download_size: 9821588
dataset_size: 13989813
- config_name: nys
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13092059
num_examples: 413241
download_size: 9153100
dataset_size: 13092059
- config_name: oc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 266612548
num_examples: 6569770
download_size: 180156462
dataset_size: 266612548
- config_name: olo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13200388
num_examples: 416935
download_size: 9214968
dataset_size: 13200388
- config_name: om
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5476389
num_examples: 175314
download_size: 3496637
dataset_size: 5476389
- config_name: or
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 22798709
num_examples: 470237
download_size: 10322832
dataset_size: 22798709
- config_name: os
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5946062
num_examples: 177054
download_size: 3583703
dataset_size: 5946062
- config_name: ota
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8015024
num_examples: 241903
download_size: 4343478
dataset_size: 8015024
- config_name: pa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20505754
num_examples: 481522
download_size: 10552147
dataset_size: 20505754
- config_name: pam
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14527964
num_examples: 451253
download_size: 10242443
dataset_size: 14527964
- config_name: pap
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 54505401
num_examples: 1449881
download_size: 40415776
dataset_size: 54505401
- config_name: pcd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42132826
num_examples: 1221362
download_size: 30766812
dataset_size: 42132826
- config_name: pdc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14435256
num_examples: 448055
download_size: 10178322
dataset_size: 14435256
- config_name: pdt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13994892
num_examples: 437200
download_size: 9819388
dataset_size: 13994892
- config_name: pfl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15461023
num_examples: 474198
download_size: 10893651
dataset_size: 15461023
- config_name: pi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8913354
num_examples: 250251
download_size: 4651392
dataset_size: 8913354
- config_name: pih
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13971081
num_examples: 436214
download_size: 9810653
dataset_size: 13971081
- config_name: pl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 426030491
num_examples: 10025139
download_size: 295767506
dataset_size: 426030491
- config_name: pms
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 51268512
num_examples: 1477043
download_size: 37698831
dataset_size: 51268512
- config_name: pnb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16192682
num_examples: 409037
download_size: 9196626
dataset_size: 16192682
- config_name: pnt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4439173
num_examples: 148336
download_size: 2703117
dataset_size: 4439173
- config_name: prg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17940420
num_examples: 544030
download_size: 12958482
dataset_size: 17940420
- config_name: ps
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8860902
num_examples: 259186
download_size: 4916502
dataset_size: 8860902
- config_name: pt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 491184040
num_examples: 11574568
download_size: 340831923
dataset_size: 491184040
- config_name: pt-br
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 318857431
num_examples: 7782980
download_size: 223442911
dataset_size: 318857431
- config_name: pwn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8500
num_examples: 269
download_size: 8738
dataset_size: 8500
- config_name: qu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15254702
num_examples: 468823
download_size: 10750388
dataset_size: 15254702
- config_name: quc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 32
num_examples: 1
download_size: 1772
dataset_size: 32
- config_name: qug
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13798264
num_examples: 431733
download_size: 9661685
dataset_size: 13798264
- config_name: rgn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17001688
num_examples: 519871
download_size: 12258201
dataset_size: 17001688
- config_name: rif
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13792951
num_examples: 431588
download_size: 9657698
dataset_size: 13792951
- config_name: rm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 44450577
num_examples: 1284908
download_size: 32519630
dataset_size: 44450577
- config_name: rmc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 159
num_examples: 4
download_size: 1963
dataset_size: 159
- config_name: rmy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5610156
num_examples: 179191
download_size: 3608283
dataset_size: 5610156
- config_name: rn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13935534
num_examples: 435271
download_size: 9779486
dataset_size: 13935534
- config_name: ro
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 247469452
num_examples: 5878366
download_size: 177525205
dataset_size: 247469452
- config_name: roa-tara
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14425120
num_examples: 448972
download_size: 10152875
dataset_size: 14425120
- config_name: ru
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 405103215
num_examples: 7485811
download_size: 257215625
dataset_size: 405103215
- config_name: rue
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4953403
num_examples: 159530
download_size: 3037824
dataset_size: 4953403
- config_name: rup
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14459686
num_examples: 450345
download_size: 10198398
dataset_size: 14459686
- config_name: ruq-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4434290
num_examples: 148404
download_size: 2700920
dataset_size: 4434290
- config_name: ruq-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13783683
num_examples: 430978
download_size: 9656941
dataset_size: 13783683
- config_name: rw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14090196
num_examples: 439172
download_size: 9901257
dataset_size: 14090196
- config_name: rwr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8568706
num_examples: 241841
download_size: 4388475
dataset_size: 8568706
- config_name: ryu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 2852
num_examples: 82
download_size: 4237
dataset_size: 2852
- config_name: sa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 21404327
num_examples: 455674
download_size: 9692464
dataset_size: 21404327
- config_name: sat
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 10810040
num_examples: 284911
download_size: 5750917
dataset_size: 10810040
- config_name: sc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 47195572
num_examples: 1348137
download_size: 34521764
dataset_size: 47195572
- config_name: scn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 43458983
num_examples: 1259067
download_size: 31775157
dataset_size: 43458983
- config_name: sco
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 56960413
num_examples: 1611092
download_size: 41724559
dataset_size: 56960413
- config_name: sd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14257513
num_examples: 363318
download_size: 7844047
dataset_size: 14257513
- config_name: sdc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13975497
num_examples: 436913
download_size: 9800517
dataset_size: 13975497
- config_name: se
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 23962268
num_examples: 711439
download_size: 17409387
dataset_size: 23962268
- config_name: sei
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13827581
num_examples: 432520
download_size: 9684192
dataset_size: 13827581
- config_name: sg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13913524
num_examples: 434751
download_size: 9761739
dataset_size: 13913524
- config_name: sh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 30173635
num_examples: 746207
download_size: 20133594
dataset_size: 30173635
- config_name: shi-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13783218
num_examples: 430968
download_size: 9656828
dataset_size: 13783218
- config_name: shi-tfng
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4308577
num_examples: 145279
download_size: 2608525
dataset_size: 4308577
- config_name: shn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 10139002
num_examples: 260808
download_size: 4952168
dataset_size: 10139002
- config_name: shy-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4255322
num_examples: 144058
download_size: 2570625
dataset_size: 4255322
- config_name: si
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7405400
num_examples: 189718
download_size: 4270591
dataset_size: 7405400
- config_name: sjd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4300688
num_examples: 145047
download_size: 2604357
dataset_size: 4300688
- config_name: sje
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20970223
num_examples: 637639
download_size: 15120381
dataset_size: 20970223
- config_name: sju
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4315103
num_examples: 145655
download_size: 2620763
dataset_size: 4315103
- config_name: sk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 75586366
num_examples: 2050873
download_size: 54951330
dataset_size: 75586366
- config_name: skr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4274062
num_examples: 144443
download_size: 2585286
dataset_size: 4274062
- config_name: sl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 157883240
num_examples: 4112048
download_size: 118047353
dataset_size: 157883240
- config_name: sli
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13909208
num_examples: 434986
download_size: 9745964
dataset_size: 13909208
- config_name: sm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13984823
num_examples: 436830
download_size: 9817472
dataset_size: 13984823
- config_name: sma
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20653595
num_examples: 630437
download_size: 14902319
dataset_size: 20653595
- config_name: smj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 19640206
num_examples: 604326
download_size: 14133964
dataset_size: 19640206
- config_name: smn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 10902411
num_examples: 337543
download_size: 7576850
dataset_size: 10902411
- config_name: sms
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4462345
num_examples: 149355
download_size: 2741038
dataset_size: 4462345
- config_name: sn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20116601
num_examples: 618231
download_size: 14463728
dataset_size: 20116601
- config_name: sq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 304708913
num_examples: 7311820
download_size: 225592169
dataset_size: 304708913
- config_name: sr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 52787253
num_examples: 1018361
download_size: 31364006
dataset_size: 52787253
- config_name: sr-ec
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 9237541
num_examples: 248556
download_size: 5875548
dataset_size: 9237541
- config_name: sr-el
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 48848162
num_examples: 1418824
download_size: 35859120
dataset_size: 48848162
- config_name: srq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12796525
num_examples: 405957
download_size: 8899493
dataset_size: 12796525
- config_name: ss
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13823630
num_examples: 432423
download_size: 9682165
dataset_size: 13823630
- config_name: st
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13938937
num_examples: 435419
download_size: 9785161
dataset_size: 13938937
- config_name: stq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14484394
num_examples: 449885
download_size: 10228446
dataset_size: 14484394
- config_name: su
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20025826
num_examples: 583096
download_size: 14042822
dataset_size: 20025826
- config_name: sv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 339074900
num_examples: 8115455
download_size: 236022796
dataset_size: 339074900
- config_name: sw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 50612064
num_examples: 1465385
download_size: 37096369
dataset_size: 50612064
- config_name: szl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16772062
num_examples: 500107
download_size: 11868254
dataset_size: 16772062
- config_name: szy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4332021
num_examples: 146136
download_size: 2633271
dataset_size: 4332021
- config_name: ta
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 31251824
num_examples: 546558
download_size: 15157673
dataset_size: 31251824
- config_name: tay
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4345269
num_examples: 146938
download_size: 2632535
dataset_size: 4345269
- config_name: tcy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8723594
num_examples: 244350
download_size: 4487471
dataset_size: 8723594
- config_name: te
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 27587665
num_examples: 569615
download_size: 13669398
dataset_size: 27587665
- config_name: tet
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15092299
num_examples: 466244
download_size: 10702917
dataset_size: 15092299
- config_name: tg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12643125
num_examples: 304625
download_size: 7622522
dataset_size: 12643125
- config_name: tg-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4504034
num_examples: 149533
download_size: 2755000
dataset_size: 4504034
- config_name: tg-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 19845835
num_examples: 610020
download_size: 14264492
dataset_size: 19845835
- config_name: th
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 32693750
num_examples: 537447
download_size: 15849247
dataset_size: 32693750
- config_name: ti
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4366995
num_examples: 146479
download_size: 2648869
dataset_size: 4366995
- config_name: tk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5797050
num_examples: 184302
download_size: 3728802
dataset_size: 5797050
- config_name: tl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13661554
num_examples: 387377
download_size: 9456413
dataset_size: 13661554
- config_name: tly
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4309748
num_examples: 145312
download_size: 2609307
dataset_size: 4309748
- config_name: tly-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 35
num_examples: 1
download_size: 1793
dataset_size: 35
- config_name: tn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13936132
num_examples: 435219
download_size: 9780279
dataset_size: 13936132
- config_name: to
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13980327
num_examples: 436460
download_size: 9810650
dataset_size: 13980327
- config_name: tpi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14169019
num_examples: 442133
download_size: 9961827
dataset_size: 14169019
- config_name: tr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 72134544
num_examples: 1770267
download_size: 51032484
dataset_size: 72134544
- config_name: tru
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5322844
num_examples: 171327
download_size: 3371105
dataset_size: 5322844
- config_name: trv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 94285
num_examples: 3109
download_size: 65138
dataset_size: 94285
- config_name: ts
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13943481
num_examples: 435408
download_size: 9783789
dataset_size: 13943481
- config_name: tt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 24182976
num_examples: 548502
download_size: 14868166
dataset_size: 24182976
- config_name: tt-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4943914
num_examples: 158198
download_size: 3048932
dataset_size: 4943914
- config_name: tt-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13842972
num_examples: 432513
download_size: 9702714
dataset_size: 13842972
- config_name: tum
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13924159
num_examples: 435110
download_size: 9770501
dataset_size: 13924159
- config_name: tw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13830508
num_examples: 432669
download_size: 9688164
dataset_size: 13830508
- config_name: ty
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16816401
num_examples: 507332
download_size: 12098154
dataset_size: 16816401
- config_name: tyv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4583082
num_examples: 149929
download_size: 2779632
dataset_size: 4583082
- config_name: tzm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4253588
num_examples: 144002
download_size: 2569067
dataset_size: 4253588
- config_name: udm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4854947
num_examples: 156300
download_size: 2958444
dataset_size: 4854947
- config_name: ug-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4316690
num_examples: 145443
download_size: 2614962
dataset_size: 4316690
- config_name: ug-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13786474
num_examples: 431056
download_size: 9659723
dataset_size: 13786474
- config_name: uk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 251058352
num_examples: 5108733
download_size: 168140976
dataset_size: 251058352
- config_name: ur
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 57063750
num_examples: 987011
download_size: 28328459
dataset_size: 57063750
- config_name: uz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 11731793
num_examples: 344615
download_size: 8102734
dataset_size: 11731793
- config_name: uz-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252574
num_examples: 143981
download_size: 2567325
dataset_size: 4252574
- config_name: ve
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13932174
num_examples: 435216
download_size: 9777266
dataset_size: 13932174
- config_name: vec
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 52081230
num_examples: 1466867
download_size: 37307805
dataset_size: 52081230
- config_name: vep
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6174898
num_examples: 192298
download_size: 3994582
dataset_size: 6174898
- config_name: vi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 246835524
num_examples: 5743737
download_size: 172949263
dataset_size: 246835524
- config_name: vls
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42789297
num_examples: 1239359
download_size: 31228294
dataset_size: 42789297
- config_name: vmf
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18352990
num_examples: 555205
download_size: 13289296
dataset_size: 18352990
- config_name: vo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 228352533
num_examples: 5610875
download_size: 155496988
dataset_size: 228352533
- config_name: vot
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5406190
num_examples: 173486
download_size: 3439433
dataset_size: 5406190
- config_name: wa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 49235347
num_examples: 1426584
download_size: 36167816
dataset_size: 49235347
- config_name: war
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 190306474
num_examples: 4449062
download_size: 133786270
dataset_size: 190306474
- config_name: wls
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4033
num_examples: 104
download_size: 5150
dataset_size: 4033
- config_name: wo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 40961626
num_examples: 1193626
download_size: 29778666
dataset_size: 40961626
- config_name: wuu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 40570130
num_examples: 1127741
download_size: 24209117
dataset_size: 40570130
- config_name: wya
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 28
num_examples: 1
download_size: 1740
dataset_size: 28
- config_name: xal
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4475344
num_examples: 149984
download_size: 2722459
dataset_size: 4475344
- config_name: xh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20036194
num_examples: 615514
download_size: 14405310
dataset_size: 20036194
- config_name: xmf
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5943645
num_examples: 169507
download_size: 3418593
dataset_size: 5943645
- config_name: xsy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4262789
num_examples: 144305
download_size: 2573349
dataset_size: 4262789
- config_name: yav
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4070
num_examples: 102
download_size: 4718
dataset_size: 4070
- config_name: yi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5495313
num_examples: 170277
download_size: 3373820
dataset_size: 5495313
- config_name: yo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 25424749
num_examples: 724345
download_size: 18086773
dataset_size: 25424749
- config_name: za
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15159230
num_examples: 365892
download_size: 7774767
dataset_size: 15159230
- config_name: zea
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14538518
num_examples: 451577
download_size: 10262897
dataset_size: 14538518
- config_name: zgh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4253917
num_examples: 144006
download_size: 2569373
dataset_size: 4253917
- config_name: zh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 264353677
num_examples: 5424320
download_size: 174420118
dataset_size: 264353677
- config_name: zh-cn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42868611
num_examples: 1158755
download_size: 27243799
dataset_size: 42868611
- config_name: zh-hans
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 57233156
num_examples: 1483225
download_size: 36583522
dataset_size: 57233156
- config_name: zh-hant
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 53502814
num_examples: 1356560
download_size: 36755083
dataset_size: 53502814
- config_name: zh-hk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15325323
num_examples: 408391
download_size: 10455809
dataset_size: 15325323
- config_name: zh-mo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6568267
num_examples: 180950
download_size: 3547260
dataset_size: 6568267
- config_name: zh-my
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 32637498
num_examples: 916876
download_size: 19289581
dataset_size: 32637498
- config_name: zh-sg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 35325327
num_examples: 979652
download_size: 21150070
dataset_size: 35325327
- config_name: zh-tw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17500668
num_examples: 443057
download_size: 11121104
dataset_size: 17500668
- config_name: zh-yue
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 1352
num_examples: 30
download_size: 2963
dataset_size: 1352
- config_name: zu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 47349379
num_examples: 1380550
download_size: 34649660
dataset_size: 47349379
configs:
- config_name: aa
data_files:
- split: label
path: aa/label-*
- config_name: ab
data_files:
- split: label
path: ab/label-*
- config_name: abs
data_files:
- split: label
path: abs/label-*
- config_name: ace
data_files:
- split: label
path: ace/label-*
- config_name: ady
data_files:
- split: label
path: ady/label-*
- config_name: ady-cyrl
data_files:
- split: label
path: ady-cyrl/label-*
- config_name: aeb
data_files:
- split: label
path: aeb/label-*
- config_name: aeb-arab
data_files:
- split: label
path: aeb-arab/label-*
- config_name: aeb-latn
data_files:
- split: label
path: aeb-latn/label-*
- config_name: af
data_files:
- split: label
path: af/label-*
- config_name: agq
data_files:
- split: label
path: agq/label-*
- config_name: ak
data_files:
- split: label
path: ak/label-*
- config_name: aln
data_files:
- split: label
path: aln/label-*
- config_name: als
data_files:
- split: label
path: als/label-*
- config_name: alt
data_files:
- split: label
path: alt/label-*
- config_name: am
data_files:
- split: label
path: am/label-*
- config_name: ami
data_files:
- split: label
path: ami/label-*
- config_name: an
data_files:
- split: label
path: an/label-*
- config_name: ang
data_files:
- split: label
path: ang/label-*
- config_name: anp
data_files:
- split: label
path: anp/label-*
- config_name: ar
data_files:
- split: label
path: ar/label-*
- config_name: arc
data_files:
- split: label
path: arc/label-*
- config_name: arn
data_files:
- split: label
path: arn/label-*
- config_name: arq
data_files:
- split: label
path: arq/label-*
- config_name: ary
data_files:
- split: label
path: ary/label-*
- config_name: arz
data_files:
- split: label
path: arz/label-*
- config_name: as
data_files:
- split: label
path: as/label-*
- config_name: ase
data_files:
- split: label
path: ase/label-*
- config_name: ast
data_files:
- split: label
path: ast/label-*
- config_name: atj
data_files:
- split: label
path: atj/label-*
- config_name: av
data_files:
- split: label
path: av/label-*
- config_name: avk
data_files:
- split: label
path: avk/label-*
- config_name: awa
data_files:
- split: label
path: awa/label-*
- config_name: ay
data_files:
- split: label
path: ay/label-*
- config_name: az
data_files:
- split: label
path: az/label-*
- config_name: azb
data_files:
- split: label
path: azb/label-*
- config_name: ba
data_files:
- split: label
path: ba/label-*
- config_name: ban
data_files:
- split: label
path: ban/label-*
- config_name: ban-bali
data_files:
- split: label
path: ban-bali/label-*
- config_name: bar
data_files:
- split: label
path: bar/label-*
- config_name: bbc
data_files:
- split: label
path: bbc/label-*
- config_name: bcc
data_files:
- split: label
path: bcc/label-*
- config_name: be
data_files:
- split: label
path: be/label-*
- config_name: be-tarask
data_files:
- split: label
path: be-tarask/label-*
- config_name: bg
data_files:
- split: label
path: bg/label-*
- config_name: bgn
data_files:
- split: label
path: bgn/label-*
- config_name: bi
data_files:
- split: label
path: bi/label-*
- config_name: bjn
data_files:
- split: label
path: bjn/label-*
- config_name: bm
data_files:
- split: label
path: bm/label-*
- config_name: bn
data_files:
- split: label
path: bn/label-*
- config_name: bo
data_files:
- split: label
path: bo/label-*
- config_name: bpy
data_files:
- split: label
path: bpy/label-*
- config_name: bqi
data_files:
- split: label
path: bqi/label-*
- config_name: br
data_files:
- split: label
path: br/label-*
- config_name: brh
data_files:
- split: label
path: brh/label-*
- config_name: bs
data_files:
- split: label
path: bs/label-*
- config_name: btm
data_files:
- split: label
path: btm/label-*
- config_name: bto
data_files:
- split: label
path: bto/label-*
- config_name: bug
data_files:
- split: label
path: bug/label-*
- config_name: bxr
data_files:
- split: label
path: bxr/label-*
- config_name: ca
data_files:
- split: label
path: ca/label-*
- config_name: cbk-zam
data_files:
- split: label
path: cbk-zam/label-*
- config_name: cdo
data_files:
- split: label
path: cdo/label-*
- config_name: ce
data_files:
- split: label
path: ce/label-*
- config_name: ceb
data_files:
- split: label
path: ceb/label-*
- config_name: ch
data_files:
- split: label
path: ch/label-*
- config_name: cho
data_files:
- split: label
path: cho/label-*
- config_name: chr
data_files:
- split: label
path: chr/label-*
- config_name: chy
data_files:
- split: label
path: chy/label-*
- config_name: ckb
data_files:
- split: label
path: ckb/label-*
- config_name: co
data_files:
- split: label
path: co/label-*
- config_name: cps
data_files:
- split: label
path: cps/label-*
- config_name: cr
data_files:
- split: label
path: cr/label-*
- config_name: crh
data_files:
- split: label
path: crh/label-*
- config_name: crh-cyrl
data_files:
- split: label
path: crh-cyrl/label-*
- config_name: crh-latn
data_files:
- split: label
path: crh-latn/label-*
- config_name: cs
data_files:
- split: label
path: cs/label-*
- config_name: csb
data_files:
- split: label
path: csb/label-*
- config_name: cv
data_files:
- split: label
path: cv/label-*
- config_name: cy
data_files:
- split: label
path: cy/label-*
- config_name: da
data_files:
- split: label
path: da/label-*
- config_name: dag
data_files:
- split: label
path: dag/label-*
- config_name: de
data_files:
- split: label
path: de/label-*
- config_name: de-at
data_files:
- split: label
path: de-at/label-*
- config_name: de-ch
data_files:
- split: label
path: de-ch/label-*
- config_name: de-formal
data_files:
- split: label
path: de-formal/label-*
- config_name: din
data_files:
- split: label
path: din/label-*
- config_name: diq
data_files:
- split: label
path: diq/label-*
- config_name: dsb
data_files:
- split: label
path: dsb/label-*
- config_name: dtp
data_files:
- split: label
path: dtp/label-*
- config_name: dty
data_files:
- split: label
path: dty/label-*
- config_name: dua
data_files:
- split: label
path: dua/label-*
- config_name: dv
data_files:
- split: label
path: dv/label-*
- config_name: dz
data_files:
- split: label
path: dz/label-*
- config_name: ee
data_files:
- split: label
path: ee/label-*
- config_name: egl
data_files:
- split: label
path: egl/label-*
- config_name: el
data_files:
- split: label
path: el/label-*
- config_name: eml
data_files:
- split: label
path: eml/label-*
- config_name: en
data_files:
- split: label
path: en/label-*
default: true
- config_name: en-ca
data_files:
- split: label
path: en-ca/label-*
- config_name: en-gb
data_files:
- split: label
path: en-gb/label-*
- config_name: en-us
data_files:
- split: label
path: en-us/label-*
- config_name: eo
data_files:
- split: label
path: eo/label-*
- config_name: es
data_files:
- split: label
path: es/label-*
- config_name: es-419
data_files:
- split: label
path: es-419/label-*
- config_name: es-formal
data_files:
- split: label
path: es-formal/label-*
- config_name: et
data_files:
- split: label
path: et/label-*
- config_name: eu
data_files:
- split: label
path: eu/label-*
- config_name: ext
data_files:
- split: label
path: ext/label-*
- config_name: fa
data_files:
- split: label
path: fa/label-*
- config_name: ff
data_files:
- split: label
path: ff/label-*
- config_name: fi
data_files:
- split: label
path: fi/label-*
- config_name: fit
data_files:
- split: label
path: fit/label-*
- config_name: fj
data_files:
- split: label
path: fj/label-*
- config_name: fkv
data_files:
- split: label
path: fkv/label-*
- config_name: fo
data_files:
- split: label
path: fo/label-*
- config_name: fr
data_files:
- split: label
path: fr/label-*
- config_name: frc
data_files:
- split: label
path: frc/label-*
- config_name: frp
data_files:
- split: label
path: frp/label-*
- config_name: frr
data_files:
- split: label
path: frr/label-*
- config_name: fur
data_files:
- split: label
path: fur/label-*
- config_name: ga
data_files:
- split: label
path: ga/label-*
- config_name: gag
data_files:
- split: label
path: gag/label-*
- config_name: gan
data_files:
- split: label
path: gan/label-*
- config_name: gan-hans
data_files:
- split: label
path: gan-hans/label-*
- config_name: gan-hant
data_files:
- split: label
path: gan-hant/label-*
- config_name: gcr
data_files:
- split: label
path: gcr/label-*
- config_name: gd
data_files:
- split: label
path: gd/label-*
- config_name: gl
data_files:
- split: label
path: gl/label-*
- config_name: glk
data_files:
- split: label
path: glk/label-*
- config_name: gn
data_files:
- split: label
path: gn/label-*
- config_name: gom
data_files:
- split: label
path: gom/label-*
- config_name: gom-deva
data_files:
- split: label
path: gom-deva/label-*
- config_name: gom-latn
data_files:
- split: label
path: gom-latn/label-*
- config_name: gor
data_files:
- split: label
path: gor/label-*
- config_name: got
data_files:
- split: label
path: got/label-*
- config_name: grc
data_files:
- split: label
path: grc/label-*
- config_name: gu
data_files:
- split: label
path: gu/label-*
- config_name: guc
data_files:
- split: label
path: guc/label-*
- config_name: guw
data_files:
- split: label
path: guw/label-*
- config_name: gv
data_files:
- split: label
path: gv/label-*
- config_name: ha
data_files:
- split: label
path: ha/label-*
- config_name: hak
data_files:
- split: label
path: hak/label-*
- config_name: haw
data_files:
- split: label
path: haw/label-*
- config_name: he
data_files:
- split: label
path: he/label-*
- config_name: hi
data_files:
- split: label
path: hi/label-*
- config_name: hif
data_files:
- split: label
path: hif/label-*
- config_name: hif-latn
data_files:
- split: label
path: hif-latn/label-*
- config_name: hil
data_files:
- split: label
path: hil/label-*
- config_name: ho
data_files:
- split: label
path: ho/label-*
- config_name: hr
data_files:
- split: label
path: hr/label-*
- config_name: hrx
data_files:
- split: label
path: hrx/label-*
- config_name: hsb
data_files:
- split: label
path: hsb/label-*
- config_name: ht
data_files:
- split: label
path: ht/label-*
- config_name: hu
data_files:
- split: label
path: hu/label-*
- config_name: hu-formal
data_files:
- split: label
path: hu-formal/label-*
- config_name: hy
data_files:
- split: label
path: hy/label-*
- config_name: hyw
data_files:
- split: label
path: hyw/label-*
- config_name: hz
data_files:
- split: label
path: hz/label-*
- config_name: ia
data_files:
- split: label
path: ia/label-*
- config_name: id
data_files:
- split: label
path: id/label-*
- config_name: ie
data_files:
- split: label
path: ie/label-*
- config_name: ig
data_files:
- split: label
path: ig/label-*
- config_name: ii
data_files:
- split: label
path: ii/label-*
- config_name: ik
data_files:
- split: label
path: ik/label-*
- config_name: ike-cans
data_files:
- split: label
path: ike-cans/label-*
- config_name: ike-latn
data_files:
- split: label
path: ike-latn/label-*
- config_name: ilo
data_files:
- split: label
path: ilo/label-*
- config_name: inh
data_files:
- split: label
path: inh/label-*
- config_name: io
data_files:
- split: label
path: io/label-*
- config_name: is
data_files:
- split: label
path: is/label-*
- config_name: it
data_files:
- split: label
path: it/label-*
- config_name: iu
data_files:
- split: label
path: iu/label-*
- config_name: ja
data_files:
- split: label
path: ja/label-*
- config_name: jam
data_files:
- split: label
path: jam/label-*
- config_name: jbo
data_files:
- split: label
path: jbo/label-*
- config_name: jv
data_files:
- split: label
path: jv/label-*
- config_name: ka
data_files:
- split: label
path: ka/label-*
- config_name: kaa
data_files:
- split: label
path: kaa/label-*
- config_name: kab
data_files:
- split: label
path: kab/label-*
- config_name: kbd
data_files:
- split: label
path: kbd/label-*
- config_name: kbd-cyrl
data_files:
- split: label
path: kbd-cyrl/label-*
- config_name: kbp
data_files:
- split: label
path: kbp/label-*
- config_name: kea
data_files:
- split: label
path: kea/label-*
- config_name: kg
data_files:
- split: label
path: kg/label-*
- config_name: khw
data_files:
- split: label
path: khw/label-*
- config_name: ki
data_files:
- split: label
path: ki/label-*
- config_name: kj
data_files:
- split: label
path: kj/label-*
- config_name: kjp
data_files:
- split: label
path: kjp/label-*
- config_name: kk
data_files:
- split: label
path: kk/label-*
- config_name: kk-arab
data_files:
- split: label
path: kk-arab/label-*
- config_name: kk-kz
data_files:
- split: label
path: kk-kz/label-*
- config_name: kk-latn
data_files:
- split: label
path: kk-latn/label-*
- config_name: kk-tr
data_files:
- split: label
path: kk-tr/label-*
- config_name: ko
data_files:
- split: label
path: ko/label-*
- config_name: ko-kp
data_files:
- split: label
path: ko-kp/label-*
- config_name: koi
data_files:
- split: label
path: koi/label-*
- config_name: kr
data_files:
- split: label
path: kr/label-*
- config_name: krc
data_files:
- split: label
path: krc/label-*
- config_name: kri
data_files:
- split: label
path: kri/label-*
- config_name: krj
data_files:
- split: label
path: krj/label-*
- config_name: krl
data_files:
- split: label
path: krl/label-*
- config_name: ks
data_files:
- split: label
path: ks/label-*
- config_name: ks-deva
data_files:
- split: label
path: ks-deva/label-*
- config_name: ksh
data_files:
- split: label
path: ksh/label-*
- config_name: ku
data_files:
- split: label
path: ku/label-*
- config_name: ku-arab
data_files:
- split: label
path: ku-arab/label-*
- config_name: ku-latn
data_files:
- split: label
path: ku-latn/label-*
- config_name: kum
data_files:
- split: label
path: kum/label-*
- config_name: kv
data_files:
- split: label
path: kv/label-*
- config_name: kw
data_files:
- split: label
path: kw/label-*
- config_name: ky
data_files:
- split: label
path: ky/label-*
- config_name: la
data_files:
- split: label
path: la/label-*
- config_name: lad
data_files:
- split: label
path: lad/label-*
- config_name: lb
data_files:
- split: label
path: lb/label-*
- config_name: lbe
data_files:
- split: label
path: lbe/label-*
- config_name: lez
data_files:
- split: label
path: lez/label-*
- config_name: lfn
data_files:
- split: label
path: lfn/label-*
- config_name: lg
data_files:
- split: label
path: lg/label-*
- config_name: li
data_files:
- split: label
path: li/label-*
- config_name: lij
data_files:
- split: label
path: lij/label-*
- config_name: liv
data_files:
- split: label
path: liv/label-*
- config_name: lki
data_files:
- split: label
path: lki/label-*
- config_name: lld
data_files:
- split: label
path: lld/label-*
- config_name: lmo
data_files:
- split: label
path: lmo/label-*
- config_name: ln
data_files:
- split: label
path: ln/label-*
- config_name: lo
data_files:
- split: label
path: lo/label-*
- config_name: loz
data_files:
- split: label
path: loz/label-*
- config_name: lt
data_files:
- split: label
path: lt/label-*
- config_name: ltg
data_files:
- split: label
path: ltg/label-*
- config_name: lus
data_files:
- split: label
path: lus/label-*
- config_name: luz
data_files:
- split: label
path: luz/label-*
- config_name: lv
data_files:
- split: label
path: lv/label-*
- config_name: lzh
data_files:
- split: label
path: lzh/label-*
- config_name: mdf
data_files:
- split: label
path: mdf/label-*
- config_name: mg
data_files:
- split: label
path: mg/label-*
- config_name: mh
data_files:
- split: label
path: mh/label-*
- config_name: mi
data_files:
- split: label
path: mi/label-*
- config_name: min
data_files:
- split: label
path: min/label-*
- config_name: mk
data_files:
- split: label
path: mk/label-*
- config_name: ml
data_files:
- split: label
path: ml/label-*
- config_name: mn
data_files:
- split: label
path: mn/label-*
- config_name: mni
data_files:
- split: label
path: mni/label-*
- config_name: mnw
data_files:
- split: label
path: mnw/label-*
- config_name: mo
data_files:
- split: label
path: mo/label-*
- config_name: mr
data_files:
- split: label
path: mr/label-*
- config_name: mrh
data_files:
- split: label
path: mrh/label-*
- config_name: mrj
data_files:
- split: label
path: mrj/label-*
- config_name: ms
data_files:
- split: label
path: ms/label-*
- config_name: ms-arab
data_files:
- split: label
path: ms-arab/label-*
- config_name: mt
data_files:
- split: label
path: mt/label-*
- config_name: mus
data_files:
- split: label
path: mus/label-*
- config_name: mwl
data_files:
- split: label
path: mwl/label-*
- config_name: my
data_files:
- split: label
path: my/label-*
- config_name: mzn
data_files:
- split: label
path: mzn/label-*
- config_name: na
data_files:
- split: label
path: na/label-*
- config_name: nah
data_files:
- split: label
path: nah/label-*
- config_name: nan-hani
data_files:
- split: label
path: nan-hani/label-*
- config_name: nap
data_files:
- split: label
path: nap/label-*
- config_name: nb
data_files:
- split: label
path: nb/label-*
- config_name: nds
data_files:
- split: label
path: nds/label-*
- config_name: nds-nl
data_files:
- split: label
path: nds-nl/label-*
- config_name: ne
data_files:
- split: label
path: ne/label-*
- config_name: new
data_files:
- split: label
path: new/label-*
- config_name: ng
data_files:
- split: label
path: ng/label-*
- config_name: nia
data_files:
- split: label
path: nia/label-*
- config_name: niu
data_files:
- split: label
path: niu/label-*
- config_name: nl
data_files:
- split: label
path: nl/label-*
- config_name: nn
data_files:
- split: label
path: nn/label-*
- config_name: 'no'
data_files:
- split: label
path: no/label-*
- config_name: nod
data_files:
- split: label
path: nod/label-*
- config_name: nov
data_files:
- split: label
path: nov/label-*
- config_name: nqo
data_files:
- split: label
path: nqo/label-*
- config_name: nrm
data_files:
- split: label
path: nrm/label-*
- config_name: nso
data_files:
- split: label
path: nso/label-*
- config_name: nv
data_files:
- split: label
path: nv/label-*
- config_name: ny
data_files:
- split: label
path: ny/label-*
- config_name: nys
data_files:
- split: label
path: nys/label-*
- config_name: oc
data_files:
- split: label
path: oc/label-*
- config_name: olo
data_files:
- split: label
path: olo/label-*
- config_name: om
data_files:
- split: label
path: om/label-*
- config_name: or
data_files:
- split: label
path: or/label-*
- config_name: os
data_files:
- split: label
path: os/label-*
- config_name: ota
data_files:
- split: label
path: ota/label-*
- config_name: pa
data_files:
- split: label
path: pa/label-*
- config_name: pam
data_files:
- split: label
path: pam/label-*
- config_name: pap
data_files:
- split: label
path: pap/label-*
- config_name: pcd
data_files:
- split: label
path: pcd/label-*
- config_name: pdc
data_files:
- split: label
path: pdc/label-*
- config_name: pdt
data_files:
- split: label
path: pdt/label-*
- config_name: pfl
data_files:
- split: label
path: pfl/label-*
- config_name: pi
data_files:
- split: label
path: pi/label-*
- config_name: pih
data_files:
- split: label
path: pih/label-*
- config_name: pl
data_files:
- split: label
path: pl/label-*
- config_name: pms
data_files:
- split: label
path: pms/label-*
- config_name: pnb
data_files:
- split: label
path: pnb/label-*
- config_name: pnt
data_files:
- split: label
path: pnt/label-*
- config_name: prg
data_files:
- split: label
path: prg/label-*
- config_name: ps
data_files:
- split: label
path: ps/label-*
- config_name: pt
data_files:
- split: label
path: pt/label-*
- config_name: pt-br
data_files:
- split: label
path: pt-br/label-*
- config_name: pwn
data_files:
- split: label
path: pwn/label-*
- config_name: qu
data_files:
- split: label
path: qu/label-*
- config_name: quc
data_files:
- split: label
path: quc/label-*
- config_name: qug
data_files:
- split: label
path: qug/label-*
- config_name: rgn
data_files:
- split: label
path: rgn/label-*
- config_name: rif
data_files:
- split: label
path: rif/label-*
- config_name: rm
data_files:
- split: label
path: rm/label-*
- config_name: rmc
data_files:
- split: label
path: rmc/label-*
- config_name: rmy
data_files:
- split: label
path: rmy/label-*
- config_name: rn
data_files:
- split: label
path: rn/label-*
- config_name: ro
data_files:
- split: label
path: ro/label-*
- config_name: roa-tara
data_files:
- split: label
path: roa-tara/label-*
- config_name: ru
data_files:
- split: label
path: ru/label-*
- config_name: rue
data_files:
- split: label
path: rue/label-*
- config_name: rup
data_files:
- split: label
path: rup/label-*
- config_name: ruq-cyrl
data_files:
- split: label
path: ruq-cyrl/label-*
- config_name: ruq-latn
data_files:
- split: label
path: ruq-latn/label-*
- config_name: rw
data_files:
- split: label
path: rw/label-*
- config_name: rwr
data_files:
- split: label
path: rwr/label-*
- config_name: ryu
data_files:
- split: label
path: ryu/label-*
- config_name: sa
data_files:
- split: label
path: sa/label-*
- config_name: sat
data_files:
- split: label
path: sat/label-*
- config_name: sc
data_files:
- split: label
path: sc/label-*
- config_name: scn
data_files:
- split: label
path: scn/label-*
- config_name: sco
data_files:
- split: label
path: sco/label-*
- config_name: sd
data_files:
- split: label
path: sd/label-*
- config_name: sdc
data_files:
- split: label
path: sdc/label-*
- config_name: se
data_files:
- split: label
path: se/label-*
- config_name: sei
data_files:
- split: label
path: sei/label-*
- config_name: sg
data_files:
- split: label
path: sg/label-*
- config_name: sh
data_files:
- split: label
path: sh/label-*
- config_name: shi-latn
data_files:
- split: label
path: shi-latn/label-*
- config_name: shi-tfng
data_files:
- split: label
path: shi-tfng/label-*
- config_name: shn
data_files:
- split: label
path: shn/label-*
- config_name: shy-latn
data_files:
- split: label
path: shy-latn/label-*
- config_name: si
data_files:
- split: label
path: si/label-*
- config_name: sjd
data_files:
- split: label
path: sjd/label-*
- config_name: sje
data_files:
- split: label
path: sje/label-*
- config_name: sju
data_files:
- split: label
path: sju/label-*
- config_name: sk
data_files:
- split: label
path: sk/label-*
- config_name: skr
data_files:
- split: label
path: skr/label-*
- config_name: sl
data_files:
- split: label
path: sl/label-*
- config_name: sli
data_files:
- split: label
path: sli/label-*
- config_name: sm
data_files:
- split: label
path: sm/label-*
- config_name: sma
data_files:
- split: label
path: sma/label-*
- config_name: smj
data_files:
- split: label
path: smj/label-*
- config_name: smn
data_files:
- split: label
path: smn/label-*
- config_name: sms
data_files:
- split: label
path: sms/label-*
- config_name: sn
data_files:
- split: label
path: sn/label-*
- config_name: sq
data_files:
- split: label
path: sq/label-*
- config_name: sr
data_files:
- split: label
path: sr/label-*
- config_name: sr-ec
data_files:
- split: label
path: sr-ec/label-*
- config_name: sr-el
data_files:
- split: label
path: sr-el/label-*
- config_name: srq
data_files:
- split: label
path: srq/label-*
- config_name: ss
data_files:
- split: label
path: ss/label-*
- config_name: st
data_files:
- split: label
path: st/label-*
- config_name: stq
data_files:
- split: label
path: stq/label-*
- config_name: su
data_files:
- split: label
path: su/label-*
- config_name: sv
data_files:
- split: label
path: sv/label-*
- config_name: sw
data_files:
- split: label
path: sw/label-*
- config_name: szl
data_files:
- split: label
path: szl/label-*
- config_name: szy
data_files:
- split: label
path: szy/label-*
- config_name: ta
data_files:
- split: label
path: ta/label-*
- config_name: tay
data_files:
- split: label
path: tay/label-*
- config_name: tcy
data_files:
- split: label
path: tcy/label-*
- config_name: te
data_files:
- split: label
path: te/label-*
- config_name: tet
data_files:
- split: label
path: tet/label-*
- config_name: tg
data_files:
- split: label
path: tg/label-*
- config_name: tg-cyrl
data_files:
- split: label
path: tg-cyrl/label-*
- config_name: tg-latn
data_files:
- split: label
path: tg-latn/label-*
- config_name: th
data_files:
- split: label
path: th/label-*
- config_name: ti
data_files:
- split: label
path: ti/label-*
- config_name: tk
data_files:
- split: label
path: tk/label-*
- config_name: tl
data_files:
- split: label
path: tl/label-*
- config_name: tly
data_files:
- split: label
path: tly/label-*
- config_name: tly-cyrl
data_files:
- split: label
path: tly-cyrl/label-*
- config_name: tn
data_files:
- split: label
path: tn/label-*
- config_name: to
data_files:
- split: label
path: to/label-*
- config_name: tpi
data_files:
- split: label
path: tpi/label-*
- config_name: tr
data_files:
- split: label
path: tr/label-*
- config_name: tru
data_files:
- split: label
path: tru/label-*
- config_name: trv
data_files:
- split: label
path: trv/label-*
- config_name: ts
data_files:
- split: label
path: ts/label-*
- config_name: tt
data_files:
- split: label
path: tt/label-*
- config_name: tt-cyrl
data_files:
- split: label
path: tt-cyrl/label-*
- config_name: tt-latn
data_files:
- split: label
path: tt-latn/label-*
- config_name: tum
data_files:
- split: label
path: tum/label-*
- config_name: tw
data_files:
- split: label
path: tw/label-*
- config_name: ty
data_files:
- split: label
path: ty/label-*
- config_name: tyv
data_files:
- split: label
path: tyv/label-*
- config_name: tzm
data_files:
- split: label
path: tzm/label-*
- config_name: udm
data_files:
- split: label
path: udm/label-*
- config_name: ug-arab
data_files:
- split: label
path: ug-arab/label-*
- config_name: ug-latn
data_files:
- split: label
path: ug-latn/label-*
- config_name: uk
data_files:
- split: label
path: uk/label-*
- config_name: ur
data_files:
- split: label
path: ur/label-*
- config_name: uz
data_files:
- split: label
path: uz/label-*
- config_name: uz-cyrl
data_files:
- split: label
path: uz-cyrl/label-*
- config_name: ve
data_files:
- split: label
path: ve/label-*
- config_name: vec
data_files:
- split: label
path: vec/label-*
- config_name: vep
data_files:
- split: label
path: vep/label-*
- config_name: vi
data_files:
- split: label
path: vi/label-*
- config_name: vls
data_files:
- split: label
path: vls/label-*
- config_name: vmf
data_files:
- split: label
path: vmf/label-*
- config_name: vo
data_files:
- split: label
path: vo/label-*
- config_name: vot
data_files:
- split: label
path: vot/label-*
- config_name: wa
data_files:
- split: label
path: wa/label-*
- config_name: war
data_files:
- split: label
path: war/label-*
- config_name: wls
data_files:
- split: label
path: wls/label-*
- config_name: wo
data_files:
- split: label
path: wo/label-*
- config_name: wuu
data_files:
- split: label
path: wuu/label-*
- config_name: wya
data_files:
- split: label
path: wya/label-*
- config_name: xal
data_files:
- split: label
path: xal/label-*
- config_name: xh
data_files:
- split: label
path: xh/label-*
- config_name: xmf
data_files:
- split: label
path: xmf/label-*
- config_name: xsy
data_files:
- split: label
path: xsy/label-*
- config_name: yav
data_files:
- split: label
path: yav/label-*
- config_name: yi
data_files:
- split: label
path: yi/label-*
- config_name: yo
data_files:
- split: label
path: yo/label-*
- config_name: za
data_files:
- split: label
path: za/label-*
- config_name: zea
data_files:
- split: label
path: zea/label-*
- config_name: zgh
data_files:
- split: label
path: zgh/label-*
- config_name: zh
data_files:
- split: label
path: zh/label-*
- config_name: zh-cn
data_files:
- split: label
path: zh-cn/label-*
- config_name: zh-hans
data_files:
- split: label
path: zh-hans/label-*
- config_name: zh-hant
data_files:
- split: label
path: zh-hant/label-*
- config_name: zh-hk
data_files:
- split: label
path: zh-hk/label-*
- config_name: zh-mo
data_files:
- split: label
path: zh-mo/label-*
- config_name: zh-my
data_files:
- split: label
path: zh-my/label-*
- config_name: zh-sg
data_files:
- split: label
path: zh-sg/label-*
- config_name: zh-tw
data_files:
- split: label
path: zh-tw/label-*
- config_name: zh-yue
data_files:
- split: label
path: zh-yue/label-*
- config_name: zu
data_files:
- split: label
path: zu/label-*
task_categories:
- translation
- text2text-generation
language:
- en
- fr
- de
- ja
- zh
- hi
- ar
- bn
- ru
- es
---
# Wikidata Labels
Large parallel corpus for machine translation
- Entity label data extracted from Wikidata (2022-01-03), filtered for item entities only
- Only download the languages you need with `datasets>=2.14.0`
- Similar dataset: https://huggingface.co/datasets/wmt/wikititles (18 pairs of Wikipedia titles instead of all Wikidata entities)
## Dataset Details
### Dataset Sources
- Wikidata JSON dump (wikidata-20220103-all.json.gz) https://www.wikidata.org/wiki/Wikidata:Database_download
## Uses
You can generate parallel text examples from this dataset like below:
```python
from datasets import load_dataset
import pandas as pd
def parallel_labels(lang_codes: list, how="inner", repo_id="rayliuca/wikidata_entity_label", merge_config={}, datasets_config={}) -> pd.DataFrame:
    out_df = None
    for lc in lang_codes:
        # Download only the subset (config) for this language code
        dataset = load_dataset(repo_id, lc, **datasets_config)
        # Each subset has a single "label" split; keep wikidata_id plus the label column, renamed to the language code
        dataset_df = dataset['label'].to_pandas().rename(columns={"label": lc}).drop(columns=['lastrevid'])
        if out_df is None:
            out_df = dataset_df
        else:
            # Join on the shared Wikidata id; "inner" keeps only entities labelled in every requested language
            out_df = out_df.merge(
                dataset_df,
                on='wikidata_id',
                how=how,
                **merge_config
            )
    return out_df
# Note: the "en" subset is >4GB
parallel_labels(['en', 'fr', 'ja', 'zh']).head()
```
### Output
| | wikidata_id | en | fr | ja | zh |
|---:|:--------------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------|:---------------------------------------------|
| 0 | Q109739412 | SARS-CoV-2 Omicron variant | variant Omicron du SARS-CoV-2 | SARSコロナウイルス2-オミクロン株 | 嚴重急性呼吸道症候群冠狀病毒2型Omicron變異株 |
| 1 | Q108460606 | Ulughbegsaurus | Ulughbegsaurus | ウルグベグサウルス | 兀魯伯龍屬 |
| 2 | Q108556886 | AUKUS | AUKUS | AUKUS | AUKUS |
| 3 | Q106496152 | Claude Joseph | Claude Joseph | クロード・ジョゼフ | 克洛德·约瑟夫 |
| 4 | Q105519361 | The World's Finest Assassin Gets Reincarnated in a Different World as an Aristocrat | The World's Finest Assassin Gets Reincarnated in Another World as an Aristocrat | 世界最高の暗殺者、異世界貴族に転生する | 世界頂尖的暗殺者轉生為異世界貴族 |
Note: the example table above shows a quirk(?) of the Wiki data. The French Wikipedia page [The World's Finest Assassin Gets Reincarnated in Another World as an Aristocrat](https://fr.wikipedia.org/wiki/The_World%27s_Finest_Assassin_Gets_Reincarnated_in_Another_World_as_an_Aristocrat) uses English for its title. While this could be disadvantageous for direct translation training, it also provides insight into what native speakers actually call this entity, rather than the literal translation shown on the Wiki page.
## Dataset Structure
Each language has its own subset (aka config), which means you only have to download the languages you need with `datasets>=2.14.0`
Each subset has these fields:
- wikidata_id
- lastrevid
- label
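Since each subset is its own config, a single language can also be loaded directly. This is a minimal sketch; the `rayliuca/wikidata_entity_label` repo id is taken from the usage example above, and each subset exposes a single `label` split:
```python
from datasets import load_dataset

# Download only the Japanese subset; every config has exactly one split, named "label"
ja_labels = load_dataset("rayliuca/wikidata_entity_label", "ja", split="label")

# Each row carries the three fields listed above
print(ja_labels[0])  # e.g. {'wikidata_id': 'Q...', 'lastrevid': ..., 'label': '...'}
```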
## Dataset Creation
### Data Collection and Processing
- Filtered for item entities only
- Ignored the descriptions as those texts are not very parallel
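The extraction code itself is not published here, but a minimal sketch of the filtering step might look like the following, assuming the standard Wikidata JSON dump layout (a gzipped JSON array with one entity object per line); the `iter_item_labels` helper is hypothetical, not the pipeline actually used:
```python
import gzip
import json

def iter_item_labels(dump_path: str):
    """Yield (wikidata_id, lastrevid, {lang: label}) for item entities only."""
    with gzip.open(dump_path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip().rstrip(",")
            if not line or line in ("[", "]"):
                continue  # skip the array brackets and blank lines
            entity = json.loads(line)
            if entity.get("type") != "item":
                continue  # keep item entities only; drop properties, lexemes, etc.
            labels = {lang: v["value"] for lang, v in entity.get("labels", {}).items()}
            # Descriptions are intentionally ignored, as in the dataset itself
            yield entity["id"], entity.get("lastrevid"), labels

# Example: collect English labels from the 2022-01-03 dump
# en_rows = [(qid, rev, labels["en"])
#            for qid, rev, labels in iter_item_labels("wikidata-20220103-all.json.gz")
#            if "en" in labels]
```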
## Bias, Risks, and Limitations
- Might be slightly outdated (2022)
- Popular languages have more entries
- Labels are not guaranteed to be literal translations (see examples above) |
Intraferon/iNat-XC-1000 | Intraferon | "2025-02-22T17:40:44Z" | 18,033 | 0 | [
"language:en",
"region:us"
] | null | "2024-12-21T11:51:02Z" | ---
language:
- en
pretty_name: i
---
# iNat-XC-1000
A multi-modal bird species classification dataset consisting of sightings of 1000 bird species from two citizen science projects: [iNaturalist](https://www.inaturalist.org/) and [Xeno Canto](https://xeno-canto.org/). |
airtrain-ai/fineweb-edu-fortified | airtrain-ai | "2024-08-08T18:04:44Z" | 17,961 | 55 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2406.17557",
"arxiv:2109.07445",
"region:us"
] | [
"text-generation"
] | "2024-07-22T14:22:31Z" | ---
language:
- en
license: odc-by
task_categories:
- text-generation
dataset_info:
- config_name: CC-MAIN-2013-20
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 71683996286
num_examples: 10800000
download_size: 55571546426
dataset_size: 71683996286
- config_name: CC-MAIN-2013-48
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 38878994623
num_examples: 5800000
download_size: 30087644388
dataset_size: 38878994623
- config_name: CC-MAIN-2014-10
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 24971658588
num_examples: 3550000
download_size: 19058832929
dataset_size: 24971658588
- config_name: CC-MAIN-2014-15
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 13615746365
num_examples: 1850000
download_size: 10299687552
dataset_size: 13615746365
- config_name: CC-MAIN-2014-23
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21798450754
num_examples: 3100000
download_size: 16663899441
dataset_size: 21798450754
- config_name: CC-MAIN-2014-35
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 10954201796
num_examples: 1500000
download_size: 8309419357
dataset_size: 10954201796
- config_name: CC-MAIN-2014-41
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 11392615401
num_examples: 1600000
download_size: 8694382261
dataset_size: 11392615401
- config_name: CC-MAIN-2014-42
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 8491740156
num_examples: 1150000
download_size: 6430841610
dataset_size: 8491740156
- config_name: CC-MAIN-2014-49
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 7754099049
num_examples: 1050000
download_size: 5866979308
dataset_size: 7754099049
- config_name: CC-MAIN-2014-52
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 9953666568
num_examples: 1350000
download_size: 7521103037
dataset_size: 9953666568
- config_name: CC-MAIN-2015-06
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 8988649992
num_examples: 1200000
download_size: 6771650647
dataset_size: 8988649992
- config_name: CC-MAIN-2015-11
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 9212466984
num_examples: 1200000
download_size: 6893305603
dataset_size: 9212466984
- config_name: CC-MAIN-2015-14
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 7773258320
num_examples: 1000000
download_size: 5810026390
dataset_size: 7773258320
- config_name: CC-MAIN-2015-18
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 9906342182
num_examples: 1300000
download_size: 7420897339
dataset_size: 9906342182
- config_name: CC-MAIN-2015-22
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 8677092389
num_examples: 1100000
download_size: 6445775687
dataset_size: 8677092389
- config_name: CC-MAIN-2015-27
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 8168934142
num_examples: 1050000
download_size: 6095866065
dataset_size: 8168934142
- config_name: CC-MAIN-2015-32
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 7248096143
num_examples: 950000
download_size: 5438870914
dataset_size: 7248096143
- config_name: CC-MAIN-2015-35
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 7905807405
num_examples: 1000000
download_size: 5886313414
dataset_size: 7905807405
- config_name: CC-MAIN-2015-40
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 6756795023
num_examples: 850000
download_size: 5020668048
dataset_size: 6756795023
- config_name: CC-MAIN-2015-48
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 9500987324
num_examples: 1200000
download_size: 7050820902
dataset_size: 9500987324
- config_name: CC-MAIN-2016-07
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 10612088943
num_examples: 1300000
download_size: 7816414470
dataset_size: 10612088943
- config_name: CC-MAIN-2016-18
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 7478953157
num_examples: 1050000
download_size: 5691425154
dataset_size: 7478953157
- config_name: CC-MAIN-2016-22
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 7617762727
num_examples: 1050000
download_size: 5760598348
dataset_size: 7617762727
- config_name: CC-MAIN-2016-26
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 4620338482
num_examples: 650000
download_size: 3516183695
dataset_size: 4620338482
- config_name: CC-MAIN-2016-30
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 10574077837
num_examples: 1250000
download_size: 7732067436
dataset_size: 10574077837
- config_name: CC-MAIN-2016-36
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 8503905267
num_examples: 1000000
download_size: 6208206855
dataset_size: 8503905267
- config_name: CC-MAIN-2016-40
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 15377835627
num_examples: 2350000
download_size: 11940941268
dataset_size: 15377835627
- config_name: CC-MAIN-2016-44
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 29529872165
num_examples: 4800000
download_size: 23162984623
dataset_size: 29529872165
- config_name: CC-MAIN-2016-50
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 20468372716
num_examples: 3050000
download_size: 15709742655
dataset_size: 20468372716
- config_name: CC-MAIN-2017-04
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21037186856
num_examples: 3050000
download_size: 16038345746
dataset_size: 21037186856
- config_name: CC-MAIN-2017-09
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 24443091987
num_examples: 3450000
download_size: 18578003959
dataset_size: 24443091987
- config_name: CC-MAIN-2017-13
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 42541966320
num_examples: 6350000
download_size: 32897843366
dataset_size: 42541966320
- config_name: CC-MAIN-2017-17
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 28067316341
num_examples: 4200000
download_size: 21670006912
dataset_size: 28067316341
- config_name: CC-MAIN-2017-22
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21612347473
num_examples: 3250000
download_size: 16727380174
dataset_size: 21612347473
- config_name: CC-MAIN-2017-26
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 26930164929
num_examples: 4150000
download_size: 21000453887
dataset_size: 26930164929
- config_name: CC-MAIN-2017-30
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 19514567064
num_examples: 3050000
download_size: 15274197942
dataset_size: 19514567064
- config_name: CC-MAIN-2017-34
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21825880789
num_examples: 3450000
download_size: 17131331406
dataset_size: 21825880789
- config_name: CC-MAIN-2017-39
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21861199076
num_examples: 3250000
download_size: 16864955620
dataset_size: 21861199076
- config_name: CC-MAIN-2017-43
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 22225780468
num_examples: 3250000
download_size: 17081326644
dataset_size: 22225780468
- config_name: CC-MAIN-2017-47
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 20302441730
num_examples: 2950000
download_size: 15588692671
dataset_size: 20302441730
- config_name: CC-MAIN-2017-51
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 17337207614
num_examples: 2550000
download_size: 13346917136
dataset_size: 17337207614
- config_name: CC-MAIN-2018-05
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 22738512950
num_examples: 3450000
download_size: 17607554751
dataset_size: 22738512950
- config_name: CC-MAIN-2018-09
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 23340323268
num_examples: 3600000
download_size: 18151119519
dataset_size: 23340323268
- config_name: CC-MAIN-2018-13
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 19001159420
num_examples: 2900000
download_size: 14753194653
dataset_size: 19001159420
- config_name: CC-MAIN-2018-17
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 17258341719
num_examples: 2600000
download_size: 13340501927
dataset_size: 17258341719
- config_name: CC-MAIN-2018-22
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 17491169826
num_examples: 2600000
download_size: 13470743712
dataset_size: 17491169826
- config_name: CC-MAIN-2018-26
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21455735998
num_examples: 3100000
download_size: 16280241314
dataset_size: 21455735998
- config_name: CC-MAIN-2018-30
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 18192174874
num_examples: 2500000
download_size: 13725747144
dataset_size: 18192174874
- config_name: CC-MAIN-2018-34
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 15796036932
num_examples: 2200000
download_size: 11987788874
dataset_size: 15796036932
- config_name: CC-MAIN-2018-39
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 16307757771
num_examples: 2200000
download_size: 12290791012
dataset_size: 16307757771
- config_name: CC-MAIN-2018-43
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 25677124234
num_examples: 3800000
download_size: 19573087580
dataset_size: 25677124234
- config_name: CC-MAIN-2018-47
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 22875798193
num_examples: 3150000
download_size: 17281464409
dataset_size: 22875798193
- config_name: CC-MAIN-2018-51
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 22594268378
num_examples: 3300000
download_size: 17343595987
dataset_size: 22594268378
- config_name: CC-MAIN-2019-04
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21133044139
num_examples: 3050000
download_size: 16192299666
dataset_size: 21133044139
- config_name: CC-MAIN-2019-09
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 20593069774
num_examples: 2850000
download_size: 15604520079
dataset_size: 20593069774
- config_name: CC-MAIN-2019-13
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 18350086234
num_examples: 2500000
download_size: 13859628789
dataset_size: 18350086234
- config_name: CC-MAIN-2019-18
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 19748675634
num_examples: 2650000
download_size: 14875559796
dataset_size: 19748675634
- config_name: CC-MAIN-2019-22
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 22315609811
num_examples: 3100000
download_size: 16925720280
dataset_size: 22315609811
- config_name: CC-MAIN-2019-26
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 20009950205
num_examples: 2750000
download_size: 15138826344
dataset_size: 20009950205
- config_name: CC-MAIN-2019-30
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 20153093525
num_examples: 2750000
download_size: 15229175301
dataset_size: 20153093525
- config_name: CC-MAIN-2019-35
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 23793900737
num_examples: 3300000
download_size: 18011655759
dataset_size: 23793900737
- config_name: CC-MAIN-2019-39
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21250081982
num_examples: 2950000
download_size: 16107325180
dataset_size: 21250081982
- config_name: CC-MAIN-2019-43
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 23381976513
num_examples: 3150000
download_size: 17578322332
dataset_size: 23381976513
- config_name: CC-MAIN-2019-47
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 22916970895
num_examples: 3150000
download_size: 17302792952
dataset_size: 22916970895
- config_name: CC-MAIN-2019-51
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 19001480990
num_examples: 2600000
download_size: 14340161761
dataset_size: 19001480990
- config_name: CC-MAIN-2020-05
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21571233444
num_examples: 2950000
download_size: 16258182796
dataset_size: 21571233444
- config_name: CC-MAIN-2020-10
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 21550911640
num_examples: 3000000
download_size: 16304815033
dataset_size: 21550911640
- config_name: CC-MAIN-2020-16
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 23381117349
num_examples: 3300000
download_size: 17744530068
dataset_size: 23381117349
- config_name: CC-MAIN-2020-24
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 25046680820
num_examples: 3550000
download_size: 19043052442
dataset_size: 25046680820
- config_name: CC-MAIN-2020-29
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 28072475139
num_examples: 3900000
download_size: 21219908593
dataset_size: 28072475139
- config_name: CC-MAIN-2020-34
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 23905419397
num_examples: 3300000
download_size: 18053065929
dataset_size: 23905419397
- config_name: CC-MAIN-2020-40
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 31964517781
num_examples: 4650000
download_size: 24445166342
dataset_size: 31964517781
- config_name: CC-MAIN-2020-45
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 28978958859
num_examples: 4150000
download_size: 22052543740
dataset_size: 28978958859
- config_name: CC-MAIN-2020-50
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 25828281117
num_examples: 3650000
download_size: 19596280713
dataset_size: 25828281117
- config_name: CC-MAIN-2021-04
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 32044317476
num_examples: 4450000
download_size: 24218057264
dataset_size: 32044317476
- config_name: CC-MAIN-2021-10
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 30664456445
num_examples: 4200000
download_size: 23053325617
dataset_size: 30664456445
- config_name: CC-MAIN-2021-17
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 33620957572
num_examples: 4450000
download_size: 25055730596
dataset_size: 33620957572
- config_name: CC-MAIN-2021-21
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 26740503282
num_examples: 3600000
download_size: 20011648584
dataset_size: 26740503282
- config_name: CC-MAIN-2021-25
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 29160290793
num_examples: 3950000
download_size: 21855396161
dataset_size: 29160290793
- config_name: CC-MAIN-2021-31
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 29149182919
num_examples: 3900000
download_size: 21785469714
dataset_size: 29149182919
- config_name: CC-MAIN-2021-39
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 33379845273
num_examples: 4550000
download_size: 25057576194
dataset_size: 33379845273
- config_name: CC-MAIN-2021-43
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 34332026077
num_examples: 4700000
download_size: 25789733401
dataset_size: 34332026077
- config_name: CC-MAIN-2021-49
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 31418299354
num_examples: 4350000
download_size: 23666249860
dataset_size: 31418299354
- config_name: CC-MAIN-2022-05
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 32596625853
num_examples: 4450000
download_size: 24458356127
dataset_size: 32596625853
- config_name: CC-MAIN-2022-21
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 47752322889
num_examples: 6550000
download_size: 35853678975
dataset_size: 47752322889
- config_name: CC-MAIN-2022-27
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 40292830914
num_examples: 5550000
download_size: 30279346466
dataset_size: 40292830914
- config_name: CC-MAIN-2022-33
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 34010483286
num_examples: 4750000
download_size: 25633769458
dataset_size: 34010483286
- config_name: CC-MAIN-2022-40
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 39211229907
num_examples: 5350000
download_size: 29318062267
dataset_size: 39211229907
- config_name: CC-MAIN-2022-49
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 40322136408
num_examples: 5450000
download_size: 30095433549
dataset_size: 40322136408
- config_name: CC-MAIN-2023-06
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 39078745132
num_examples: 5250000
download_size: 29058170760
dataset_size: 39078745132
- config_name: CC-MAIN-2023-14
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 76461834465
num_examples: 10050000
download_size: 56751401774
dataset_size: 76461834465
- config_name: CC-MAIN-2023-23
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 76112971386
num_examples: 9950000
download_size: 56347776355
dataset_size: 76112971386
- config_name: CC-MAIN-2023-40
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 63452197995
num_examples: 8100000
download_size: 46078925605
dataset_size: 63452197995
- config_name: CC-MAIN-2023-50
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 63566623396
num_examples: 8200000
download_size: 46245587660
dataset_size: 63566623396
- config_name: CC-MAIN-2024-10
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: embedding
sequence: float32
- name: count
dtype: int64
splits:
- name: train
num_bytes: 43172700112
num_examples: 5750000
download_size: 31501561162
dataset_size: 43172700112
configs:
- config_name: CC-MAIN-2013-20
data_files:
- split: train
path: data/CC-MAIN-2013-20/train-*
- config_name: CC-MAIN-2013-48
data_files:
- split: train
path: data/CC-MAIN-2013-48/train-*
- config_name: CC-MAIN-2014-10
data_files:
- split: train
path: data/CC-MAIN-2014-10/train-*
- config_name: CC-MAIN-2014-15
data_files:
- split: train
path: data/CC-MAIN-2014-15/train-*
- config_name: CC-MAIN-2014-23
data_files:
- split: train
path: data/CC-MAIN-2014-23/train-*
- config_name: CC-MAIN-2014-35
data_files:
- split: train
path: data/CC-MAIN-2014-35/train-*
- config_name: CC-MAIN-2014-41
data_files:
- split: train
path: data/CC-MAIN-2014-41/train-*
- config_name: CC-MAIN-2014-42
data_files:
- split: train
path: data/CC-MAIN-2014-42/train-*
- config_name: CC-MAIN-2014-49
data_files:
- split: train
path: data/CC-MAIN-2014-49/train-*
- config_name: CC-MAIN-2014-52
data_files:
- split: train
path: data/CC-MAIN-2014-52/train-*
- config_name: CC-MAIN-2015-06
data_files:
- split: train
path: data/CC-MAIN-2015-06/train-*
- config_name: CC-MAIN-2015-11
data_files:
- split: train
path: data/CC-MAIN-2015-11/train-*
- config_name: CC-MAIN-2015-14
data_files:
- split: train
path: data/CC-MAIN-2015-14/train-*
- config_name: CC-MAIN-2015-18
data_files:
- split: train
path: data/CC-MAIN-2015-18/train-*
- config_name: CC-MAIN-2015-22
data_files:
- split: train
path: data/CC-MAIN-2015-22/train-*
- config_name: CC-MAIN-2015-27
data_files:
- split: train
path: data/CC-MAIN-2015-27/train-*
- config_name: CC-MAIN-2015-32
data_files:
- split: train
path: data/CC-MAIN-2015-32/train-*
- config_name: CC-MAIN-2015-35
data_files:
- split: train
path: data/CC-MAIN-2015-35/train-*
- config_name: CC-MAIN-2015-40
data_files:
- split: train
path: data/CC-MAIN-2015-40/train-*
- config_name: CC-MAIN-2015-48
data_files:
- split: train
path: data/CC-MAIN-2015-48/train-*
- config_name: CC-MAIN-2016-07
data_files:
- split: train
path: data/CC-MAIN-2016-07/train-*
- config_name: CC-MAIN-2016-18
data_files:
- split: train
path: data/CC-MAIN-2016-18/train-*
- config_name: CC-MAIN-2016-22
data_files:
- split: train
path: data/CC-MAIN-2016-22/train-*
- config_name: CC-MAIN-2016-26
data_files:
- split: train
path: data/CC-MAIN-2016-26/train-*
- config_name: CC-MAIN-2016-30
data_files:
- split: train
path: data/CC-MAIN-2016-30/train-*
- config_name: CC-MAIN-2016-36
data_files:
- split: train
path: data/CC-MAIN-2016-36/train-*
- config_name: CC-MAIN-2016-40
data_files:
- split: train
path: data/CC-MAIN-2016-40/train-*
- config_name: CC-MAIN-2016-44
data_files:
- split: train
path: data/CC-MAIN-2016-44/train-*
- config_name: CC-MAIN-2016-50
data_files:
- split: train
path: data/CC-MAIN-2016-50/train-*
- config_name: CC-MAIN-2017-04
data_files:
- split: train
path: data/CC-MAIN-2017-04/train-*
- config_name: CC-MAIN-2017-09
data_files:
- split: train
path: data/CC-MAIN-2017-09/train-*
- config_name: CC-MAIN-2017-13
data_files:
- split: train
path: data/CC-MAIN-2017-13/train-*
- config_name: CC-MAIN-2017-17
data_files:
- split: train
path: data/CC-MAIN-2017-17/train-*
- config_name: CC-MAIN-2017-22
data_files:
- split: train
path: data/CC-MAIN-2017-22/train-*
- config_name: CC-MAIN-2017-26
data_files:
- split: train
path: data/CC-MAIN-2017-26/train-*
- config_name: CC-MAIN-2017-30
data_files:
- split: train
path: data/CC-MAIN-2017-30/train-*
- config_name: CC-MAIN-2017-34
data_files:
- split: train
path: data/CC-MAIN-2017-34/train-*
- config_name: CC-MAIN-2017-39
data_files:
- split: train
path: data/CC-MAIN-2017-39/train-*
- config_name: CC-MAIN-2017-43
data_files:
- split: train
path: data/CC-MAIN-2017-43/train-*
- config_name: CC-MAIN-2017-47
data_files:
- split: train
path: data/CC-MAIN-2017-47/train-*
- config_name: CC-MAIN-2017-51
data_files:
- split: train
path: data/CC-MAIN-2017-51/train-*
- config_name: CC-MAIN-2018-05
data_files:
- split: train
path: data/CC-MAIN-2018-05/train-*
- config_name: CC-MAIN-2018-09
data_files:
- split: train
path: data/CC-MAIN-2018-09/train-*
- config_name: CC-MAIN-2018-13
data_files:
- split: train
path: data/CC-MAIN-2018-13/train-*
- config_name: CC-MAIN-2018-17
data_files:
- split: train
path: data/CC-MAIN-2018-17/train-*
- config_name: CC-MAIN-2018-22
data_files:
- split: train
path: data/CC-MAIN-2018-22/train-*
- config_name: CC-MAIN-2018-26
data_files:
- split: train
path: data/CC-MAIN-2018-26/train-*
- config_name: CC-MAIN-2018-30
data_files:
- split: train
path: data/CC-MAIN-2018-30/train-*
- config_name: CC-MAIN-2018-34
data_files:
- split: train
path: data/CC-MAIN-2018-34/train-*
- config_name: CC-MAIN-2018-39
data_files:
- split: train
path: data/CC-MAIN-2018-39/train-*
- config_name: CC-MAIN-2018-43
data_files:
- split: train
path: data/CC-MAIN-2018-43/train-*
- config_name: CC-MAIN-2018-47
data_files:
- split: train
path: data/CC-MAIN-2018-47/train-*
- config_name: CC-MAIN-2018-51
data_files:
- split: train
path: data/CC-MAIN-2018-51/train-*
- config_name: CC-MAIN-2019-04
data_files:
- split: train
path: data/CC-MAIN-2019-04/train-*
- config_name: CC-MAIN-2019-09
data_files:
- split: train
path: data/CC-MAIN-2019-09/train-*
- config_name: CC-MAIN-2019-13
data_files:
- split: train
path: data/CC-MAIN-2019-13/train-*
- config_name: CC-MAIN-2019-18
data_files:
- split: train
path: data/CC-MAIN-2019-18/train-*
- config_name: CC-MAIN-2019-22
data_files:
- split: train
path: data/CC-MAIN-2019-22/train-*
- config_name: CC-MAIN-2019-26
data_files:
- split: train
path: data/CC-MAIN-2019-26/train-*
- config_name: CC-MAIN-2019-30
data_files:
- split: train
path: data/CC-MAIN-2019-30/train-*
- config_name: CC-MAIN-2019-35
data_files:
- split: train
path: data/CC-MAIN-2019-35/train-*
- config_name: CC-MAIN-2019-39
data_files:
- split: train
path: data/CC-MAIN-2019-39/train-*
- config_name: CC-MAIN-2019-43
data_files:
- split: train
path: data/CC-MAIN-2019-43/train-*
- config_name: CC-MAIN-2019-47
data_files:
- split: train
path: data/CC-MAIN-2019-47/train-*
- config_name: CC-MAIN-2019-51
data_files:
- split: train
path: data/CC-MAIN-2019-51/train-*
- config_name: CC-MAIN-2020-05
data_files:
- split: train
path: data/CC-MAIN-2020-05/train-*
- config_name: CC-MAIN-2020-10
data_files:
- split: train
path: data/CC-MAIN-2020-10/train-*
- config_name: CC-MAIN-2020-16
data_files:
- split: train
path: data/CC-MAIN-2020-16/train-*
- config_name: CC-MAIN-2020-24
data_files:
- split: train
path: data/CC-MAIN-2020-24/train-*
- config_name: CC-MAIN-2020-29
data_files:
- split: train
path: data/CC-MAIN-2020-29/train-*
- config_name: CC-MAIN-2020-34
data_files:
- split: train
path: data/CC-MAIN-2020-34/train-*
- config_name: CC-MAIN-2020-40
data_files:
- split: train
path: data/CC-MAIN-2020-40/train-*
- config_name: CC-MAIN-2020-45
data_files:
- split: train
path: data/CC-MAIN-2020-45/train-*
- config_name: CC-MAIN-2020-50
data_files:
- split: train
path: data/CC-MAIN-2020-50/train-*
- config_name: CC-MAIN-2021-04
data_files:
- split: train
path: data/CC-MAIN-2021-04/train-*
- config_name: CC-MAIN-2021-10
data_files:
- split: train
path: data/CC-MAIN-2021-10/train-*
- config_name: CC-MAIN-2021-17
data_files:
- split: train
path: data/CC-MAIN-2021-17/train-*
- config_name: CC-MAIN-2021-21
data_files:
- split: train
path: data/CC-MAIN-2021-21/train-*
- config_name: CC-MAIN-2021-25
data_files:
- split: train
path: data/CC-MAIN-2021-25/train-*
- config_name: CC-MAIN-2021-31
data_files:
- split: train
path: data/CC-MAIN-2021-31/train-*
- config_name: CC-MAIN-2021-39
data_files:
- split: train
path: data/CC-MAIN-2021-39/train-*
- config_name: CC-MAIN-2021-43
data_files:
- split: train
path: data/CC-MAIN-2021-43/train-*
- config_name: CC-MAIN-2021-49
data_files:
- split: train
path: data/CC-MAIN-2021-49/train-*
- config_name: CC-MAIN-2022-05
data_files:
- split: train
path: data/CC-MAIN-2022-05/train-*
- config_name: CC-MAIN-2022-21
data_files:
- split: train
path: data/CC-MAIN-2022-21/train-*
- config_name: CC-MAIN-2022-27
data_files:
- split: train
path: data/CC-MAIN-2022-27/train-*
- config_name: CC-MAIN-2022-33
data_files:
- split: train
path: data/CC-MAIN-2022-33/train-*
- config_name: CC-MAIN-2022-40
data_files:
- split: train
path: data/CC-MAIN-2022-40/train-*
- config_name: CC-MAIN-2022-49
data_files:
- split: train
path: data/CC-MAIN-2022-49/train-*
- config_name: CC-MAIN-2023-06
data_files:
- split: train
path: data/CC-MAIN-2023-06/train-*
- config_name: CC-MAIN-2023-14
data_files:
- split: train
path: data/CC-MAIN-2023-14/train-*
- config_name: CC-MAIN-2023-23
data_files:
- split: train
path: data/CC-MAIN-2023-23/train-*
- config_name: CC-MAIN-2023-40
data_files:
- split: train
path: data/CC-MAIN-2023-40/train-*
- config_name: CC-MAIN-2023-50
data_files:
- split: train
path: data/CC-MAIN-2023-50/train-*
- config_name: CC-MAIN-2024-10
data_files:
- split: train
path: data/CC-MAIN-2024-10/train-*
---
# Fineweb-Edu-Fortified
<figure>
<img src="https://cdn-uploads.huggingface.co/production/uploads/646516d2200b583e1e50faf8/79yPdK79m9mA0cCz-3h4v.png" width="500" style="margin-left:auto; margin-right: auto"/>
<figcaption style="text-align: center; margin-left: auto; margin-right: auto; font-style: italic;">
The composition of fineweb-edu-fortified, produced by automatically clustering a 500k row sample in
<a href="https://app.airtrain.ai/dataset/c232b33f-4f4a-49a7-ba55-8167a5f433da/null/1/0"> Airtrain </a>
</figcaption>
</figure>
## What is it?
Fineweb-Edu-Fortified is a dataset derived from
[Fineweb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) by applying exact-match
deduplication across the whole dataset and producing an embedding for each row. The number of times
the text from each row appears is also included as a `count` column. The embeddings were produced
using [TaylorAI/bge-micro](https://huggingface.co/TaylorAI/bge-micro).
Fineweb and Fineweb-Edu were obtained by processing data from 95 crawls of
[Common Crawl](https://commoncrawl.org/), covering a time period from 2013 to 2024.
More information about the original datasets can be found by consulting:
- [Fineweb-edu dataset card](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)
- [Fineweb dataset card](https://huggingface.co/datasets/HuggingFaceFW/fineweb)
- [Fineweb release blog post](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1)
- [Fineweb paper](https://arxiv.org/abs/2406.17557)
The contents of a randomly selected 500k rows from this dataset can be interactively
explored in this
[Airtrain](https://app.airtrain.ai/dataset/c232b33f-4f4a-49a7-ba55-8167a5f433da/null/1/0)
dashboard.
## Deduplication
### Deduplication in original Fineweb and Fineweb-Edu
During creation of the original Fineweb dataset, a variety of deduplication strategies were
explored. The evaluation criterion used to assess deduplication strategies was to train ablation models
on randomly selected subsets of the data, using up to ~350 billion tokens.
Using this mechanism, the Fineweb authors selected a MinHash algorithm, with parameters chosen so that
documents with approximately 75% or higher similarity were considered duplicates. This deduplication was
performed *within* each Common Crawl crawl. For example, it would have removed all approximate duplicates
from the 20th crawl from 2013, but would have retained an identical record that showed up
in both the 2013-20 crawl and the 2013-48 crawl. The authors note that applying the
deduplication *across crawls* reduced the evaluation performance of the ablation models used
for assessment. The proposed reason for this performance degradation is that data
duplicated across crawls is more likely to be high quality than data that is not,
so leaving in the duplicates effectively upsamples the higher-quality data.
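As a toy illustration of this kind of MinHash-based near-duplicate detection (not the Fineweb pipeline itself, which ran at scale with its own shingling and parameter choices), the hypothetical snippet below uses the third-party `datasketch` library with a 0.75 similarity threshold and simple word-level tokens:
```python
from datasketch import MinHash, MinHashLSH

def minhash(text: str, num_perm: int = 128) -> MinHash:
    # Word-level tokens for brevity; a real pipeline would typically use n-gram shingles.
    m = MinHash(num_perm=num_perm)
    for token in text.lower().split():
        m.update(token.encode("utf-8"))
    return m

docs = {
    "a": "the quick brown fox jumps over the lazy dog near the river bank",
    "b": "the quick brown fox jumps over the lazy dog near the river",
    "c": "an entirely different document about astronomy and telescopes",
}

# Index all documents; querying then returns keys whose estimated Jaccard
# similarity to the query exceeds the ~0.75 threshold.
lsh = MinHashLSH(threshold=0.75, num_perm=128)
signatures = {key: minhash(text) for key, text in docs.items()}
for key, sig in signatures.items():
    lsh.insert(key, sig)
print(lsh.query(signatures["a"]))  # expected to include "a" and "b", but not "c"
```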
Following deduplication in Fineweb, Fineweb-Edu was extracted using a model-based quality classifier
targeting educational content. It thus inherited Fineweb's per-crawl deduplication strategy, with exact duplicates across crawls left in place.
### Deduplication in this dataset
#### Motivation
Given the findings that cross-crawl deduplication reduced ablation model performance, one might ask
what the motivation is for producing a dataset that uses it. Our motivation was threefold:
- Reduce the number of rows that need to be embedded by avoiding embedding exact-match content
- Enable easier filtering of the dataset for subsets of interest
- Provide a version of the dataset for users whose training goals include avoiding training on non-unique
tokens.
For use cases that would benefit from "re-hydrating" or filtering the rows based on how frequently
the text appeared in the original dataset, the new `count` column retains the number of appearances
of the associated text.
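As a minimal sketch of such filtering (assuming the streaming `datasets` API and the crawl configs listed in this card), one could keep only rows whose text appeared more than once across the original crawls:
```python
from datasets import load_dataset

# Stream a single crawl's config and keep only rows whose text occurred in more
# than one place in the original Fineweb-Edu, as recorded by the `count` column.
ds = load_dataset(
    "airtrain-ai/fineweb-edu-fortified",
    name="CC-MAIN-2024-10",
    split="train",
    streaming=True,
)
repeated = ds.filter(lambda row: row["count"] > 1)
for row in repeated.take(3):
    print(row["count"], row["text"][:80])
```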
#### Procedure
The overall procedure was to remove exact matches that appeared in multiple crawls (also referred to
as "dumps"). This was achieved by computing an MD5 hash of the text column and removing rows with
duplicate hashes. To make this tractable at scale, we first grouped all rows by the first two hex
digits of their hashes, then looked for exact hash matches within each of the resulting 256
buckets of data. Note that unlike the intra-crawl deduplication, we only eliminated exact matches
across crawls. For duplicated rows, a strong preference was given to keeping the metadata
(e.g. dump, url) from the oldest crawl in which the text appeared. Following deduplication and
embedding, the data were grouped by the "dump" column, mirroring the organization of the original
Fineweb-Edu dataset.
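The snippet below is a simplified, in-memory sketch of this bucketing scheme, not the production pipeline; the `text` and `dump` field names follow the Fineweb-Edu schema, and the use of lexicographic ordering of dump names to pick the oldest crawl is an assumption made for illustration:
```python
import hashlib
from collections import defaultdict

def dedup_exact_matches(rows):
    """Keep one row per unique text (preferring the oldest crawl) and track a `count`."""
    # 256 buckets keyed by the first two hex digits of the MD5 hash keep each
    # lookup table small; within a bucket, identical hashes mean identical text.
    buckets = defaultdict(dict)
    for row in rows:
        digest = hashlib.md5(row["text"].encode("utf-8")).hexdigest()
        bucket = buckets[digest[:2]]
        kept = bucket.get(digest)
        if kept is None:
            bucket[digest] = dict(row, count=1)
        else:
            kept["count"] += 1
            # Prefer metadata from the oldest crawl; dump names such as
            # "CC-MAIN-2013-20" sort chronologically as strings.
            if row["dump"] < kept["dump"]:
                kept.update({k: v for k, v in row.items() if k != "count"})
    return [row for bucket in buckets.values() for row in bucket.values()]
```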
### Deduplication stats
Deduplication removed approximately 74.7% of rows from the original dataset
(from 1.279 billion in Fineweb-Edu to 0.324 billion rows in Fineweb-Edu-Fortified).
This indicates that a substantial amount of data in Fineweb-Edu is present across multiple crawls.
The total token count in the deduplicated dataset is approximately 375 billion, compared to the
1,320 billion tokens in Fineweb-Edu.
<figure>
<img src="https://cdn-uploads.huggingface.co/production/uploads/646516d2200b583e1e50faf8/mUFyO1fUWJEXbYwiteR9e.png" width="750" style="margin-left:auto; margin-right: auto"/>
<figcaption style="text-align: center; margin-left: auto; margin-right: auto; font-style: italic;">
A histogram of the `count` column. The histogram was generated from a 500k row sample taken after
global per-row text duplication counting.
</figcaption>
</figure>
## Embeddings
To support use cases with Fineweb-Edu such as classification, clustering, semantic search, etc.,
we have produced an embedding vector for each row in the dataset. The embedding model
[TaylorAI/bge-micro](https://huggingface.co/TaylorAI/bge-micro)
was selected for its tradeoff of strong performance on [MTEB](https://huggingface.co/spaces/mteb/leaderboard)
benchmarks relative to its size (17 million parameters). The model's embedding space
has 384 dimensions. The model's context window is 512 tokens (roughly several paragraphs of text);
each row is embedded using the first 512 tokens of its text field. Producing the embeddings took approximately
412 GPU-hours on Nvidia T4 GPUs.
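As a rough sketch of how a compatible embedding might be reproduced (assuming the model loads through the `sentence-transformers` library; the exact truncation, pooling, and normalization used to build this dataset may differ):
```python
from sentence_transformers import SentenceTransformer

# bge-micro yields 384-dimensional vectors; text beyond the 512-token context
# window is truncated, mirroring the "first 512 tokens" behavior described above.
model = SentenceTransformer("TaylorAI/bge-micro")
embeddings = model.encode(
    ["An example educational passage about photosynthesis."],
    normalize_embeddings=True,  # assumption: unit-normalized vectors
)
print(embeddings.shape)  # (1, 384)
```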
## Using via `datasets`
```python
from datasets import load_dataset
fw = load_dataset("airtrain-ai/fineweb-edu-fortified", name="CC-MAIN-2024-10", split="train", streaming=True)
```
## Considerations for Using the Data
This "Considerations" section is copied from the parent dataset:
[FineWeb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu).
### Social Impact of Dataset
With the release of this dataset we aim to make model training more accessible to the machine learning community at large.
While multiple open-weights models with strong performance have been publicly released in the past, more often than not these releases are not accompanied by the corresponding training dataset. This is unfortunate as the dataset specificities and characteristics have been demonstrated to have a very large impact and role in the performances of the models. As the creation of a high quality training dataset is a fundamental requirement to training an LLM capable of excelling at downstream tasks, with 🍷 FineWeb we (a) not only make the dataset creation process more transparent, by sharing our entire processing setup including the codebase used, we also (b) help alleviate the costs of dataset curation, both in time and in compute, for model creators by publicly releasing our dataset with the community.
### Discussion of Biases
Efforts were made to minimize the amount of NSFW and toxic content present in the dataset by employing filtering on the URL level. However, there are still a significant number of documents present in the final dataset that could be considered toxic or contain harmful content. As 🍷 FineWeb was sourced from the web as a whole, any harmful biases typically present in it may be reproduced on our dataset.
We deliberately avoided using machine learning filtering methods that define text quality based on the similarity to a “gold” source such as wikipedia or toxicity classifiers as these methods have been known to [disproportionately remove content in specific dialects](https://aclanthology.org/D16-1120/) and [overclassify as toxic text related to specific social identities](https://arxiv.org/pdf/2109.07445.pdf), respectively.
### Other Known Limitations
As a consequence of some of the filtering steps applied, it is likely that code content is not prevalent in our dataset. If you are training a model that should also perform code tasks, we recommend you use 🍷 FineWeb with a code dataset, such as [The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2). You should also probably consider complementing 🍷 FineWeb with specialized curated sources (such as Wikipedia, for example) as they will likely have better formatting than the wikipedia content included in 🍷 FineWeb (we did not tailor the processing to individual websites).
## Additional Information
### Acknowledgements
Airtrain would like to thank the Fineweb/Fineweb-Edu team at Hugging Face for producing the original datasets,
as well as for their support during work on Fineweb-Edu-Fortified.
We'd also like to thank [@underspirit](https://huggingface.co/underspirit) for
[pointing out](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu/discussions/7)
the amount of reduction in dataset size that could be achieved via deduplication.
We owe gratitude to [TaylorAI](https://huggingface.co/TaylorAI) for the `bge-micro` embedding model.
Finally, thank you to the Hugging Face community for fostering a thriving ecosystem of models, datasets, and tools
to support open-source AI.
### Licensing Information
The dataset is released under the **Open Data Commons Attribution License (ODC-By) v1.0** [license](https://opendatacommons.org/licenses/by/1-0/). The use of this dataset is also subject to [CommonCrawl's Terms of Use](https://commoncrawl.org/terms-of-use).
|
codeparrot/github-code | codeparrot | "2022-10-20T15:01:14Z" | 17,900 | 308 | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"language:code",
"license:other",
"region:us"
] | [
"text-generation"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- other
multilinguality:
- multilingual
pretty_name: github-code
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
---
# GitHub Code Dataset
## Dataset Description
The GitHub Code dataset consists of 115M code files from GitHub in 30 programming languages with over 60 extensions, totaling 1TB of data. The dataset was created from the public GitHub dataset on Google BigQuery.
### How to use it
The GitHub Code dataset is a very large dataset, so for most use cases it is recommended to use the streaming API of `datasets`. You can load and iterate through the dataset with the following two lines of code:
```python
from datasets import load_dataset
ds = load_dataset("codeparrot/github-code", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n",
'repo_name': 'MirekSz/webpack-es6-ts',
'path': 'app/mods/mod190.js',
'language': 'JavaScript',
'license': 'isc',
'size': 73
}
```
You can see that besides the code, repo name, and path, the programming language, license, and size of the file are also part of the dataset. You can also filter the dataset for any subset of the 30 included languages (see the full list below) by passing them as a list. E.g. if your dream is to build a Codex model for Dockerfiles, use the following configuration:
```python
ds = load_dataset("codeparrot/github-code", streaming=True, split="train", languages=["Dockerfile"])
print(next(iter(ds))["code"])
#OUTPUT:
"""\
FROM rockyluke/ubuntu:precise
ENV DEBIAN_FRONTEND="noninteractive" \
TZ="Europe/Amsterdam"
...
"""
```
We also have access to the license of the origin repo of a file so we can filter for licenses in the same way we filtered for languages:
```python
from collections import Counter

ds = load_dataset("codeparrot/github-code", streaming=True, split="train", licenses=["mit", "isc"])
licenses = []
for element in ds.take(10_000):
    licenses.append(element["license"])
print(Counter(licenses))
#OUTPUT:
Counter({'mit': 9896, 'isc': 104})
```
Naturally, you can also download the full dataset. Note that this will download ~300GB of compressed text data and the uncompressed dataset will take up ~1TB of storage:
```python
ds = load_dataset("codeparrot/github-code", split="train")
```
## Data Structure
### Data Instances
```python
{
'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n",
'repo_name': 'MirekSz/webpack-es6-ts',
'path': 'app/mods/mod190.js',
'language': 'JavaScript',
'license': 'isc',
'size': 73
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|code|string|content of source file|
|repo_name|string|name of the GitHub repository|
|path|string|path of file in GitHub repository|
|language|string|programming language as inferred by extension|
|license|string|license of GitHub repository|
|size|int|size of source file in bytes|
### Data Splits
The dataset only contains a train split.
## Languages
The dataset contains 30 programming languages with over 60 extensions:
```python
{
"Assembly": [".asm"],
"Batchfile": [".bat", ".cmd"],
"C": [".c", ".h"],
"C#": [".cs"],
"C++": [".cpp", ".hpp", ".c++", ".h++", ".cc", ".hh", ".C", ".H"],
"CMake": [".cmake"],
"CSS": [".css"],
"Dockerfile": [".dockerfile", "Dockerfile"],
"FORTRAN": ['.f90', '.f', '.f03', '.f08', '.f77', '.f95', '.for', '.fpp'],
"GO": [".go"],
"Haskell": [".hs"],
"HTML":[".html"],
"Java": [".java"],
"JavaScript": [".js"],
"Julia": [".jl"],
"Lua": [".lua"],
"Makefile": ["Makefile"],
"Markdown": [".md", ".markdown"],
"PHP": [".php", ".php3", ".php4", ".php5", ".phps", ".phpt"],
"Perl": [".pl", ".pm", ".pod", ".perl"],
"PowerShell": ['.ps1', '.psd1', '.psm1'],
"Python": [".py"],
"Ruby": [".rb"],
"Rust": [".rs"],
"SQL": [".sql"],
"Scala": [".scala"],
"Shell": [".sh", ".bash", ".command", ".zsh"],
"TypeScript": [".ts", ".tsx"],
"TeX": [".tex"],
"Visual Basic": [".vb"]
}
```
## Licenses
Each example is also annotated with the license of the associated repository. There are in total 15 licenses:
```python
[
'mit',
'apache-2.0',
'gpl-3.0',
'gpl-2.0',
'bsd-3-clause',
'agpl-3.0',
'lgpl-3.0',
'lgpl-2.1',
'bsd-2-clause',
'cc0-1.0',
'epl-1.0',
'mpl-2.0',
'unlicense',
'isc',
'artistic-2.0'
]
```
## Dataset Statistics
The dataset contains 115M files and the sum of all the source code file sizes is 873 GB (note that the size of the dataset is larger due to the extra fields). A breakdown per language is given in the plot and table below:

| | Language |File Count| Size (GB)|
|---:|:-------------|---------:|-------:|
| 0 | Java | 19548190 | 107.70 |
| 1 | C | 14143113 | 183.83 |
| 2 | JavaScript | 11839883 | 87.82 |
| 3 | HTML | 11178557 | 118.12 |
| 4 | PHP | 11177610 | 61.41 |
| 5 | Markdown | 8464626 | 23.09 |
| 6 | C++ | 7380520 | 87.73 |
| 7 | Python | 7226626 | 52.03 |
| 8 | C# | 6811652 | 36.83 |
| 9 | Ruby | 4473331 | 10.95 |
| 10 | GO | 2265436 | 19.28 |
| 11 | TypeScript | 1940406 | 24.59 |
| 12 | CSS | 1734406 | 22.67 |
| 13 | Shell | 1385648 | 3.01 |
| 14 | Scala | 835755 | 3.87 |
| 15 | Makefile | 679430 | 2.92 |
| 16 | SQL | 656671 | 5.67 |
| 17 | Lua | 578554 | 2.81 |
| 18 | Perl | 497949 | 4.70 |
| 19 | Dockerfile | 366505 | 0.71 |
| 20 | Haskell | 340623 | 1.85 |
| 21 | Rust | 322431 | 2.68 |
| 22 | TeX | 251015 | 2.15 |
| 23 | Batchfile | 236945 | 0.70 |
| 24 | CMake | 175282 | 0.54 |
| 25 | Visual Basic | 155652 | 1.91 |
| 26 | FORTRAN | 142038 | 1.62 |
| 27 | PowerShell | 136846 | 0.69 |
| 28 | Assembly | 82905 | 0.78 |
| 29 | Julia | 58317 | 0.29 |
## Dataset Creation
The dataset was created in two steps:
1. Files with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery (full query [here](https://huggingface.co/datasets/codeparrot/github-code/blob/main/query.sql)). The query was executed on _Mar 16, 2022, 6:23:39 PM UTC+1_.
2. Files with lines longer than 1000 characters and duplicates (exact duplicates ignoring whitespace) were dropped (full preprocessing script [here](https://huggingface.co/datasets/codeparrot/github-code/blob/main/github_preprocessing.py)); a simplified sketch of this filtering step is shown below.
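The following is a simplified, hypothetical sketch of the step-2 filtering logic, not the linked preprocessing script (which runs over the full dump with additional plumbing); it only illustrates the two stated criteria, the 1000-character line-length cap and exact deduplication ignoring whitespace:
```python
import hashlib

def keep_file(code: str, seen_hashes: set) -> bool:
    """Return True if the file passes the line-length filter and is not a duplicate."""
    # Drop files containing any line longer than 1000 characters.
    if any(len(line) > 1000 for line in code.splitlines()):
        return False
    # Exact deduplication ignoring whitespace: hash the content with whitespace removed.
    digest = hashlib.md5("".join(code.split()).encode("utf-8")).hexdigest()
    if digest in seen_hashes:
        return False
    seen_hashes.add(digest)
    return True

seen = set()
files = ["def f():\n    return 1\n", "def f():\n\treturn 1\n", "x = 1\n"]
print([keep_file(code, seen) for code in files])  # [True, False, True]
```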
## Considerations for Using the Data
The dataset consists of source code from a wide range of repositories. As such, it can potentially include harmful or biased code, as well as sensitive information like passwords or usernames.
## Releases
You can load any older version of the dataset with the `revision` argument:
```Python
ds = load_dataset("codeparrot/github-code", revision="v1.0")
```
### v1.0
- Initial release of dataset
- The query was executed on _Feb 14, 2022, 12:03:16 PM UTC+1_
### v1.1
- Fix missing Scala/TypeScript
- Fix deduplication issue with inconsistent Python `hash`
- The query was executed on _Mar 16, 2022, 6:23:39 PM UTC+1_
|
fixie-ai/peoples_speech | fixie-ai | "2024-08-11T17:26:01Z" | 17,647 | 2 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-08-05T18:35:01Z" | ---
dataset_info:
- config_name: clean
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: duration_ms
dtype: int32
- name: text
dtype: string
- name: continuation
dtype: string
splits:
- name: validation
num_bytes: 2511523987.692
num_examples: 18622
- name: test
num_bytes: 4259695510.794
num_examples: 34898
- name: train
num_bytes: 401646320552.671
num_examples: 1501271
download_size: 398922548670
dataset_size: 408417540051
- config_name: dirty_sa
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: duration_ms
dtype: int32
- name: text
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 144432442623.054
num_examples: 548014
- name: validation
num_bytes: 2511524241.692
num_examples: 18622
- name: test
num_bytes: 4259695588.794
num_examples: 34898
download_size: 149491764186
dataset_size: 151203662453.53998
configs:
- config_name: clean
data_files:
- split: validation
path: clean/validation-*
- split: test
path: clean/test-*
- split: train
path: data/train-*
- config_name: dirty_sa
data_files:
- split: train
path: dirty_sa/train-*
- split: validation
path: dirty_sa/validation-*
- split: test
path: dirty_sa/test-*
---
|
fixie-ai/librispeech_asr | fixie-ai | "2024-08-05T18:38:33Z" | 17,598 | 3 | [
"language:en",
"size_categories:100K<n<1M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-19T02:34:30Z" | ---
language:
- en
dataset_info:
- config_name: clean
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: continuation
dtype: string
splits:
- name: test
num_bytes: 623948478.48
num_examples: 2620
- name: validation
num_bytes: 622190064.956
num_examples: 2703
- name: train.360
num_bytes: 41953890926.124
num_examples: 104014
- name: train.100
num_bytes: 11606313661.774
num_examples: 28539
download_size: 53886816833
dataset_size: 54806343131.334
- config_name: other
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: continuation
dtype: string
splits:
- name: train.500
num_bytes: 57330687390.808
num_examples: 148688
- name: validation
num_bytes: 591511495.496
num_examples: 2864
- name: test
num_bytes: 616939198.113
num_examples: 2939
download_size: 57019309170
dataset_size: 58539138084.417
configs:
- config_name: clean
data_files:
- split: test
path: clean/test-*
- split: validation
path: clean/validation-*
- split: train.360
path: clean/train.360-*
- split: train.100
path: clean/train.100-*
- config_name: other
data_files:
- split: train.500
path: other/train.500-*
- split: validation
path: other/validation-*
- split: test
path: other/test-*
---
|
CohereForAI/aya_collection | CohereForAI | "2024-06-28T08:04:56Z" | 17,464 | 218 | [
"task_categories:text-classification",
"task_categories:summarization",
"task_categories:translation",
"language:ace",
"language:afr",
"language:amh",
"language:ara",
"language:aze",
"language:ban",
"language:bbc",
"language:bel",
"language:bem",
"language:ben",
"language:bjn",
"language:bul",
"language:cat",
"language:ceb",
"language:ces",
"language:cym",
"language:dan",
"language:deu",
"language:ell",
"language:eng",
"language:epo",
"language:est",
"language:eus",
"language:fil",
"language:fin",
"language:fon",
"language:fra",
"language:gla",
"language:gle",
"language:glg",
"language:guj",
"language:hat",
"language:hau",
"language:heb",
"language:hin",
"language:hrv",
"language:hun",
"language:hye",
"language:ibo",
"language:ind",
"language:isl",
"language:ita",
"language:jav",
"language:jpn",
"language:kan",
"language:kas",
"language:kat",
"language:kau",
"language:kaz",
"language:khm",
"language:kin",
"language:kir",
"language:kor",
"language:kur",
"language:lao",
"language:lav",
"language:lij",
"language:lit",
"language:ltz",
"language:mad",
"language:mal",
"language:man",
"language:mar",
"language:min",
"language:mkd",
"language:mlg",
"language:mlt",
"language:mon",
"language:mri",
"language:msa",
"language:mya",
"language:nep",
"language:nij",
"language:nld",
"language:nor",
"language:nso",
"language:nya",
"language:pan",
"language:pes",
"language:pol",
"language:por",
"language:pus",
"language:ron",
"language:rus",
"language:sin",
"language:slk",
"language:slv",
"language:smo",
"language:sna",
"language:snd",
"language:som",
"language:sot",
"language:spa",
"language:sqi",
"language:srp",
"language:sun",
"language:swa",
"language:swe",
"language:tam",
"language:taq",
"language:tel",
"language:tgk",
"language:tha",
"language:tur",
"language:twi",
"language:ukr",
"language:urd",
"language:uzb",
"language:vie",
"language:wol",
"language:xho",
"language:yid",
"language:yor",
"language:zho",
"language:zul",
"license:apache-2.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2402.06619",
"region:us"
] | [
"text-classification",
"summarization",
"translation"
] | "2024-01-31T21:40:43Z" | ---
language:
- ace
- afr
- amh
- ara
- aze
- ban
- bbc
- bel
- bem
- ben
- bjn
- bul
- cat
- ceb
- ces
- cym
- dan
- deu
- ell
- eng
- epo
- est
- eus
- fil
- fin
- fon
- fra
- gla
- gle
- glg
- guj
- hat
- hau
- heb
- hin
- hrv
- hun
- hye
- ibo
- ind
- isl
- ita
- jav
- jpn
- kan
- kas
- kat
- kau
- kaz
- khm
- kin
- kir
- kor
- kur
- lao
- lav
- lij
- lit
- ltz
- mad
- mal
- man
- mar
- min
- mkd
- mlg
- mlt
- mon
- mri
- msa
- mya
- nep
- nij
- nld
- nor
- nso
- nya
- pan
- pes
- pol
- por
- pus
- ron
- rus
- sin
- slk
- slv
- smo
- sna
- snd
- som
- sot
- spa
- sqi
- srp
- sun
- swa
- swe
- tam
- taq
- tel
- tgk
- tha
- tur
- twi
- ukr
- urd
- uzb
- vie
- wol
- xho
- yid
- yor
- zho
- zul
license: apache-2.0
size_categories:
- 100M<n<1B
task_categories:
- text-classification
- summarization
- translation
pretty_name: Aya Collection
dataset_info:
- config_name: aya_dataset
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 245523658
num_examples: 202364
download_size: 134230030
dataset_size: 245523658
- config_name: templated_afriqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 1053208.8833372337
num_examples: 6834
- name: train
num_bytes: 785976.7786098759
num_examples: 5100
- name: validation
num_bytes: 794915.3380528903
num_examples: 5158
download_size: 945238
dataset_size: 2634101.0
- config_name: templated_afrisenti
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 13970874.910620399
num_examples: 42576
- name: train
num_bytes: 32313882.88468279
num_examples: 98476
- name: validation
num_bytes: 6141462.204696811
num_examples: 18716
download_size: 13309887
dataset_size: 52426220.0
- config_name: templated_amharic_qa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 1563941.8685517767
num_examples: 523
- name: train
num_bytes: 5475291.704241497
num_examples: 1831
- name: validation
num_bytes: 786456.4272067252
num_examples: 263
download_size: 3648433
dataset_size: 7825689.999999999
- config_name: templated_armenian_instruct
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 1864796.3648305084
num_examples: 3063
- name: train
num_bytes: 2445604.6351694916
num_examples: 4017
download_size: 1825641
dataset_size: 4310401.0
- config_name: templated_bengali_news
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 14242457
num_examples: 19096
download_size: 4609132
dataset_size: 14242457
- config_name: templated_dutch_imdb
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 39967063.5
num_examples: 24992
- name: train
num_bytes: 39967063.5
num_examples: 24992
download_size: 44533807
dataset_size: 79934127.0
- config_name: templated_hindi_headline
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 228788501.12729776
num_examples: 23452
- name: train
num_bytes: 919144047.8727022
num_examples: 94217
download_size: 243324488
dataset_size: 1147932549.0
- config_name: templated_hindi_news
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 109524809.11948325
num_examples: 10655
- name: train
num_bytes: 437112433.88051677
num_examples: 42524
download_size: 112865381
dataset_size: 546637243.0
- config_name: templated_indic_paraphrase
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 5340504
num_examples: 7523
download_size: 1724626
dataset_size: 5340504
- config_name: templated_indic_sentiment
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 7496187
num_examples: 11559
download_size: 3003109
dataset_size: 7496187
- config_name: templated_indo_stories
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 2042351
num_examples: 2599
download_size: 813713
dataset_size: 2042351
- config_name: templated_japanese_instruct
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 1345341895
num_examples: 2463624
download_size: 580330810
dataset_size: 1345341895
- config_name: templated_joke_explaination
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 591008
num_examples: 754
download_size: 157851
dataset_size: 591008
- config_name: templated_ligurian_news
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: validation
num_bytes: 105221.25
num_examples: 54
- name: test
num_bytes: 140295.0
num_examples: 72
- name: train
num_bytes: 596253.75
num_examples: 306
download_size: 546344
dataset_size: 841770.0
- config_name: templated_masakhanews
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 31426840.99009901
num_examples: 9240
- name: train
num_bytes: 109538186.24752475
num_examples: 32206
- name: validation
num_bytes: 15679408.762376238
num_examples: 4610
download_size: 86433056
dataset_size: 156644436.0
- config_name: templated_mintaka
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 41153051.4
num_examples: 156000
- name: train
num_bytes: 144035679.9
num_examples: 546000
- name: validation
num_bytes: 20576525.7
num_examples: 78000
download_size: 43108344
dataset_size: 205765257.0
- config_name: templated_ntx_llm
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 10019994
num_examples: 5983
download_size: 1037270
dataset_size: 10019994
- config_name: templated_nusax_senti
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 2684840.4
num_examples: 8000
- name: train
num_bytes: 3356050.5
num_examples: 10000
- name: validation
num_bytes: 671210.1
num_examples: 2000
download_size: 2336444
dataset_size: 6712101.0
- config_name: templated_persian_farstail
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 731412.1801486664
num_examples: 1029
- name: train
num_bytes: 3424629.62483603
num_examples: 4818
- name: validation
num_bytes: 720750.1950153039
num_examples: 1014
download_size: 1417008
dataset_size: 4876792.0
- config_name: templated_persian_instruct
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 38518994.420354694
num_examples: 11186
- name: train
num_bytes: 564885564.1599021
num_examples: 164044
- name: validation
num_bytes: 38512107.41974315
num_examples: 11184
download_size: 280563392
dataset_size: 641916666.0
- config_name: templated_scirepeval
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: validation
num_bytes: 53956804
num_examples: 32973
download_size: 27742964
dataset_size: 53956804
- config_name: templated_seed_instruct
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: validation
num_bytes: 186542.23316647828
num_examples: 380
- name: test
num_bytes: 197342.04666559017
num_examples: 402
- name: train
num_bytes: 5696410.720167931
num_examples: 11604
download_size: 2674875
dataset_size: 6080295.0
- config_name: templated_soda
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 487742788.92976975
num_examples: 595872
- name: train
num_bytes: 2519225981.566041
num_examples: 3077721
- name: validation
num_bytes: 479157981.5041894
num_examples: 585384
download_size: 1668121549
dataset_size: 3486126752.0
- config_name: templated_tamil_stories
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 14555943
num_examples: 1202
download_size: 4912529
dataset_size: 14555943
- config_name: templated_tamil_thirukkural
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 7722387
num_examples: 3990
download_size: 1441119
dataset_size: 7722387
- config_name: templated_telugu_food
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 1108509
num_examples: 441
download_size: 312391
dataset_size: 1108509
- config_name: templated_telugu_jokes
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 966698
num_examples: 929
download_size: 298210
dataset_size: 966698
- config_name: templated_telugu_news
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 1150840295
num_examples: 467090
download_size: 423260269
dataset_size: 1150840295
- config_name: templated_telugu_poems
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 8244805
num_examples: 5115
download_size: 2713433
dataset_size: 8244805
- config_name: templated_telugu_riddles
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 339040
num_examples: 844
download_size: 79031
dataset_size: 339040
- config_name: templated_thai_pos
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 319580.309461865
num_examples: 1000
- name: train
num_bytes: 41690529.69053814
num_examples: 130454
download_size: 7405764
dataset_size: 42010110.0
- config_name: templated_thai_scb
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 131923007.25034823
num_examples: 177862
- name: train
num_bytes: 1188824615.223528
num_examples: 1602804
- name: validation
num_bytes: 131917073.5261238
num_examples: 177854
download_size: 441007386
dataset_size: 1452664696.0
- config_name: templated_thai_usembassy
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 10002322
num_examples: 1230
download_size: 3958145
dataset_size: 10002322
- config_name: templated_thai_wikitionary
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 12238652
num_examples: 19729
download_size: 2641369
dataset_size: 12238652
- config_name: templated_turku_paraphrase
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 9449925.655740838
num_examples: 31413
- name: train
num_bytes: 75488399.52960008
num_examples: 250935
- name: validation
num_bytes: 9502269.814659085
num_examples: 31587
download_size: 28908781
dataset_size: 94440595.00000001
- config_name: templated_ukranian_gec
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 21369624
num_examples: 29958
download_size: 9511988
dataset_size: 21369624
- config_name: templated_uner_llm
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 59421032.72376601
num_examples: 54957
- name: test
num_bytes: 16164354.663105734
num_examples: 14950
- name: validation
num_bytes: 8420601.613128258
num_examples: 7788
download_size: 12453483
dataset_size: 84005989.0
- config_name: templated_urdu_news_category
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 29923228.33936761
num_examples: 11187
- name: train
num_bytes: 269284981.6606324
num_examples: 100674
download_size: 118185925
dataset_size: 299208210.0
- config_name: templated_urdu_news_gen
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 29497844.81704079
num_examples: 11187
- name: train
num_bytes: 265456872.1829592
num_examples: 100674
download_size: 123276747
dataset_size: 294954717.0
- config_name: templated_urdu_news_headline
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 29258423.35545901
num_examples: 11187
- name: train
num_bytes: 263302271.644541
num_examples: 100674
download_size: 123095949
dataset_size: 292560695.0
- config_name: templated_wiki_split
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 4608986.773259303
num_examples: 10000
- name: train
num_bytes: 912527760.4534814
num_examples: 1979888
- name: validation
num_bytes: 4608986.773259303
num_examples: 10000
download_size: 395631256
dataset_size: 921745734.0
- config_name: templated_xcsqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: validation
num_bytes: 6315047.0
num_examples: 17000
download_size: 2125506
dataset_size: 6315047.0
- config_name: templated_xlel_wd
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 493033268.5027245
num_examples: 621319
- name: train
num_bytes: 3671177872.612755
num_examples: 4626407
- name: validation
num_bytes: 420416838.88452065
num_examples: 529808
download_size: 2363004380
dataset_size: 4584627980.0
- config_name: templated_xwikis
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 219985468.96557257
num_examples: 34987
- name: train
num_bytes: 8995693557.81201
num_examples: 1430696
- name: validation
num_bytes: 251360765.22241676
num_examples: 39977
download_size: 5713306872
dataset_size: 9467039791.999998
- config_name: translated_adversarial_qa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: test
num_bytes: 167379954.08333334
num_examples: 119000
- name: train
num_bytes: 1673799540.8333333
num_examples: 1190000
- name: validation
num_bytes: 167379954.08333334
num_examples: 119000
download_size: 595462085
dataset_size: 2008559448.9999998
- config_name: translated_cnn_dailymail
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: test
num_bytes: 4825107898.98773
num_examples: 1378800
- name: train
num_bytes: 41993976492.495476
num_examples: 12000000
- name: validation
num_bytes: 5613754777.516795
num_examples: 1604160
download_size: 25383694727
dataset_size: 52432839169.0
- config_name: translated_dolly
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 2188278931
num_examples: 1762152
download_size: 1089137630
dataset_size: 2188278931
- config_name: translated_flan_coqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 2884413536
num_examples: 762671
download_size: 1416350365
dataset_size: 2884413536
- config_name: translated_flan_cot
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 7470682150.0
num_examples: 11029200
download_size: 3086804878
dataset_size: 7470682150.0
- config_name: translated_flan_gem_wiki
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 11446176046
num_examples: 3230493
download_size: 5342129672
dataset_size: 11446176046
- config_name: translated_flan_lambada
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 223527122
num_examples: 509201
download_size: 99315916
dataset_size: 223527122
- config_name: translated_flan_qa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 34188800
num_examples: 64260
download_size: 14245088
dataset_size: 34188800
- config_name: translated_hotpotqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 13234982265.87797
num_examples: 42301644
- name: validation
num_bytes: 833990488.1220294
num_examples: 2665600
download_size: 4862020346
dataset_size: 14068972754.0
- config_name: translated_joke_explaination
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 96548938
num_examples: 89726
download_size: 40366737
dataset_size: 96548938
- config_name: translated_mintaka
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: test
num_bytes: 131276187.4
num_examples: 476000
- name: train
num_bytes: 459466655.9
num_examples: 1666000
- name: validation
num_bytes: 65638093.7
num_examples: 238000
download_size: 130340546
dataset_size: 656380937.0
- config_name: translated_mlqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: test
num_bytes: 3730486242.0756793
num_examples: 2746830
- name: validation
num_bytes: 369508041.92432094
num_examples: 272076
download_size: 1662296336
dataset_size: 4099994284.0
- config_name: translated_nqopen
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 4456165405.095046
num_examples: 20926150
- name: validation
num_bytes: 182959989.9049544
num_examples: 859180
download_size: 1482593128
dataset_size: 4639125395.0
- config_name: translated_paws
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: test
num_bytes: 536748719.07157385
num_examples: 952000
- name: train
num_bytes: 3314490433.8568525
num_examples: 5878719
- name: validation
num_bytes: 536748719.07157385
num_examples: 952000
download_size: 686023556
dataset_size: 4387987872.0
- config_name: translated_piqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 1324751595.2891204
num_examples: 1917447
- name: validation
num_bytes: 151113599.71087962
num_examples: 218722
download_size: 504206733
dataset_size: 1475865195.0
- config_name: translated_soda
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: test
num_bytes: 9332736341.158312
num_examples: 17876160
- name: validation
num_bytes: 9168469957.193184
num_examples: 17561520
- name: train
num_bytes: 74651741547.6485
num_examples: 142989840
download_size: 32022718450
dataset_size: 93152947846.0
- config_name: translated_wiki_split
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 72471632064.9965
num_examples: 117803336
- name: validation
num_bytes: 366039049.0017441
num_examples: 595000
- name: test
num_bytes: 366039049.0017441
num_examples: 595000
download_size: 27980267627
dataset_size: 73203710163.0
- config_name: translated_wikiqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: test
num_bytes: 15512870.67820774
num_examples: 34867
- name: train
num_bytes: 55062749.16496945
num_examples: 123760
- name: validation
num_bytes: 7412293.156822811
num_examples: 16660
download_size: 32773189
dataset_size: 77987913.00000001
- config_name: translated_xlel_wd
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: test
num_bytes: 8449087876.213723
num_examples: 8755108
- name: validation
num_bytes: 7326325551.677284
num_examples: 7591680
- name: train
num_bytes: 60579299633.10899
num_examples: 62773440
download_size: 35927637128
dataset_size: 76354713061.0
configs:
- config_name: aya_dataset
data_files:
- split: train
path: aya_dataset/train-*
- config_name: templated_afriqa
data_files:
- split: test
path: templated_afriqa/test-*
- split: train
path: templated_afriqa/train-*
- split: validation
path: templated_afriqa/validation-*
- config_name: templated_afrisenti
data_files:
- split: test
path: templated_afrisenti/test-*
- split: train
path: templated_afrisenti/train-*
- split: validation
path: templated_afrisenti/validation-*
- config_name: templated_amharic_qa
data_files:
- split: test
path: templated_amharic_qa/test-*
- split: train
path: templated_amharic_qa/train-*
- split: validation
path: templated_amharic_qa/validation-*
- config_name: templated_armenian_instruct
data_files:
- split: test
path: templated_armenian_instruct/test-*
- split: train
path: templated_armenian_instruct/train-*
- config_name: templated_bengali_news
data_files:
- split: train
path: templated_bengali_news/train-*
- config_name: templated_dutch_imdb
data_files:
- split: test
path: templated_dutch_imdb/test-*
- split: train
path: templated_dutch_imdb/train-*
- config_name: templated_hindi_headline
data_files:
- split: test
path: templated_hindi_headline/test-*
- split: train
path: templated_hindi_headline/train-*
- config_name: templated_hindi_news
data_files:
- split: test
path: templated_hindi_news/test-*
- split: train
path: templated_hindi_news/train-*
- config_name: templated_indic_paraphrase
data_files:
- split: train
path: templated_indic_paraphrase/train-*
- config_name: templated_indic_sentiment
data_files:
- split: train
path: templated_indic_sentiment/train-*
- config_name: templated_indo_stories
data_files:
- split: train
path: templated_indo_stories/train-*
- config_name: templated_japanese_instruct
data_files:
- split: train
path: templated_japanese_instruct/train-*
- config_name: templated_joke_explaination
data_files:
- split: train
path: templated_joke_explaination/train-*
- config_name: templated_ligurian_news
data_files:
- split: validation
path: templated_ligurian_news/validation-*
- split: test
path: templated_ligurian_news/test-*
- split: train
path: templated_ligurian_news/train-*
- config_name: templated_masakhanews
data_files:
- split: test
path: templated_masakhanews/test-*
- split: train
path: templated_masakhanews/train-*
- split: validation
path: templated_masakhanews/validation-*
- config_name: templated_mintaka
data_files:
- split: test
path: templated_mintaka/test-*
- split: train
path: templated_mintaka/train-*
- split: validation
path: templated_mintaka/validation-*
- config_name: templated_ntx_llm
data_files:
- split: train
path: templated_ntx_llm/train-*
- config_name: templated_nusax_senti
data_files:
- split: test
path: templated_nusax_senti/test-*
- split: train
path: templated_nusax_senti/train-*
- split: validation
path: templated_nusax_senti/validation-*
- config_name: templated_persian_farstail
data_files:
- split: test
path: templated_persian_farstail/test-*
- split: train
path: templated_persian_farstail/train-*
- split: validation
path: templated_persian_farstail/validation-*
- config_name: templated_persian_instruct
data_files:
- split: test
path: templated_persian_instruct/test-*
- split: train
path: templated_persian_instruct/train-*
- split: validation
path: templated_persian_instruct/validation-*
- config_name: templated_scirepeval
data_files:
- split: validation
path: templated_scirepeval/validation-*
- config_name: templated_seed_instruct
data_files:
- split: validation
path: templated_seed_instruct/validation-*
- split: test
path: templated_seed_instruct/test-*
- split: train
path: templated_seed_instruct/train-*
- config_name: templated_soda
data_files:
- split: test
path: templated_soda/test-*
- split: train
path: templated_soda/train-*
- split: validation
path: templated_soda/validation-*
- config_name: templated_tamil_stories
data_files:
- split: train
path: templated_tamil_stories/train-*
- config_name: templated_tamil_thirukkural
data_files:
- split: train
path: templated_tamil_thirukkural/train-*
- config_name: templated_telugu_food
data_files:
- split: train
path: templated_telugu_food/train-*
- config_name: templated_telugu_jokes
data_files:
- split: train
path: templated_telugu_jokes/train-*
- config_name: templated_telugu_news
data_files:
- split: train
path: templated_telugu_news/train-*
- config_name: templated_telugu_poems
data_files:
- split: train
path: templated_telugu_poems/train-*
- config_name: templated_telugu_riddles
data_files:
- split: train
path: templated_telugu_riddles/train-*
- config_name: templated_thai_pos
data_files:
- split: test
path: templated_thai_pos/test-*
- split: train
path: templated_thai_pos/train-*
- config_name: templated_thai_scb
data_files:
- split: test
path: templated_thai_scb/test-*
- split: train
path: templated_thai_scb/train-*
- split: validation
path: templated_thai_scb/validation-*
- config_name: templated_thai_usembassy
data_files:
- split: train
path: templated_thai_usembassy/train-*
- config_name: templated_thai_wikitionary
data_files:
- split: train
path: templated_thai_wikitionary/train-*
- config_name: templated_turku_paraphrase
data_files:
- split: test
path: templated_turku_paraphrase/test-*
- split: train
path: templated_turku_paraphrase/train-*
- split: validation
path: templated_turku_paraphrase/validation-*
- config_name: templated_ukranian_gec
data_files:
- split: train
path: templated_ukranian_gec/train-*
- config_name: templated_uner_llm
data_files:
- split: train
path: templated_uner_llm/train-*
- split: test
path: templated_uner_llm/test-*
- split: validation
path: templated_uner_llm/validation-*
- config_name: templated_urdu_news_category
data_files:
- split: test
path: templated_urdu_news_category/test-*
- split: train
path: templated_urdu_news_category/train-*
- config_name: templated_urdu_news_gen
data_files:
- split: test
path: templated_urdu_news_gen/test-*
- split: train
path: templated_urdu_news_gen/train-*
- config_name: templated_urdu_news_headline
data_files:
- split: test
path: templated_urdu_news_headline/test-*
- split: train
path: templated_urdu_news_headline/train-*
- config_name: templated_wiki_split
data_files:
- split: test
path: templated_wiki_split/test-*
- split: train
path: templated_wiki_split/train-*
- split: validation
path: templated_wiki_split/validation-*
- config_name: templated_xcsqa
data_files:
- split: validation
path: templated_xcsqa/validation-*
- config_name: templated_xlel_wd
data_files:
- split: test
path: templated_xlel_wd/test-*
- split: train
path: templated_xlel_wd/train-*
- split: validation
path: templated_xlel_wd/validation-*
- config_name: templated_xwikis
data_files:
- split: test
path: templated_xwikis/test-*
- split: train
path: templated_xwikis/train-*
- split: validation
path: templated_xwikis/validation-*
- config_name: translated_adversarial_qa
data_files:
- split: test
path: translated_adversarial_qa/test-*
- split: train
path: translated_adversarial_qa/train-*
- split: validation
path: translated_adversarial_qa/validation-*
- config_name: translated_cnn_dailymail
data_files:
- split: test
path: translated_cnn_dailymail/test-*
- split: train
path: translated_cnn_dailymail/train-*
- split: validation
path: translated_cnn_dailymail/validation-*
- config_name: translated_dolly
data_files:
- split: train
path: translated_dolly/train-*
- config_name: translated_flan_coqa
data_files:
- split: train
path: translated_flan_coqa/train-*
- config_name: translated_flan_cot
data_files:
- split: train
path: translated_flan_cot/train-*
- config_name: translated_flan_gem_wiki
data_files:
- split: train
path: translated_flan_gem_wiki/train-*
- config_name: translated_flan_lambada
data_files:
- split: train
path: translated_flan_lambada/train-*
- config_name: translated_flan_qa
data_files:
- split: train
path: translated_flan_qa/train-*
- config_name: translated_hotpotqa
data_files:
- split: train
path: translated_hotpotqa/train-*
- split: validation
path: translated_hotpotqa/validation-*
- config_name: translated_joke_explaination
data_files:
- split: train
path: translated_joke_explaination/train-*
- config_name: translated_mintaka
data_files:
- split: test
path: translated_mintaka/test-*
- split: train
path: translated_mintaka/train-*
- split: validation
path: translated_mintaka/validation-*
- config_name: translated_mlqa
data_files:
- split: test
path: translated_mlqa/test-*
- split: validation
path: translated_mlqa/validation-*
- config_name: translated_nqopen
data_files:
- split: train
path: translated_nqopen/train-*
- split: validation
path: translated_nqopen/validation-*
- config_name: translated_paws
data_files:
- split: test
path: translated_paws/test-*
- split: train
path: translated_paws/train-*
- split: validation
path: translated_paws/validation-*
- config_name: translated_piqa
data_files:
- split: train
path: translated_piqa/train-*
- split: validation
path: translated_piqa/validation-*
- config_name: translated_soda
data_files:
- split: test
path: translated_soda/test-*
- split: validation
path: translated_soda/validation-*
- split: train
path: translated_soda/train-*
- config_name: translated_wiki_split
data_files:
- split: test
path: translated_wiki_split/test-*
- split: train
path: translated_wiki_split/train-*
- split: validation
path: translated_wiki_split/validation-*
- config_name: translated_wikiqa
data_files:
- split: test
path: translated_wikiqa/test-*
- split: train
path: translated_wikiqa/train-*
- split: validation
path: translated_wikiqa/validation-*
- config_name: translated_xlel_wd
data_files:
- split: test
path: translated_xlel_wd/test-*
- split: validation
path: translated_xlel_wd/validation-*
- split: train
path: translated_xlel_wd/train-*
---

**This dataset is uploaded in two places: here and additionally [here](https://huggingface.co/datasets/CohereForAI/aya_collection_language_split) as 'Aya Collection Language Split.' The two datasets are identical in content but differ in how they are structured: this version is organized into subsets by source dataset name, while the version [here](https://huggingface.co/datasets/CohereForAI/aya_collection_language_split) divides the Aya Collection into subsets by language. We recommend the language-split version if you only need data for a single language or a small set of languages, and this version if you want to download data by source dataset or the entire collection.**
# Dataset Summary
The Aya Collection is a massive multilingual collection consisting of 513 million instances of prompts and completions covering a wide range of tasks.
This collection incorporates instruction-style templates from fluent speakers and applies them to a curated list of datasets, as well as translations of instruction-style datasets into 101 languages. Aya Dataset, a human-curated multilingual instruction and response dataset, is also part of this collection. See our paper for more details regarding the collection.
- **Curated by:** Contributors of the [Aya Open Science Initiative](https://cohere.com/research/aya)
- **Language(s):** 115 languages
- **License:** [Apache 2.0](https://opensource.org/license/apache-2-0)
- **Aya Datasets Family:**
| Name | Explanation |
|------|--------------|
| [aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) | Human-annotated multilingual instruction finetuning dataset, comprising over 204K instances across 65 languages. |
| [aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection) | Created by applying instruction-style templates from fluent speakers to 44 datasets, including translations of 19 instruction-style datasets into 101 languages. This collection is structured into dataset-level subsets; an alternative version structured by language subsets is also available.|
| [aya_collection_language_split](https://huggingface.co/datasets/CohereForAI/aya_collection_language_split) | The Aya Collection structured into language-level subsets. |
| [aya_evaluation_suite](https://huggingface.co/datasets/CohereForAI/aya_evaluation_suite) | A diverse evaluation set for multilingual open-ended generation, featuring 250 culturally grounded prompts in 7 languages, 200 translated prompts in 24 languages, and human-edited versions selected for cross-cultural relevance from English Dolly in 6 languages.|
| [aya_redteaming](https://huggingface.co/datasets/CohereForAI/aya_redteaming)| A red-teaming dataset consisting of harmful prompts in 8 languages across 9 different categories of harm with explicit labels for "global" and "local" harm.|
# Dataset
The `Aya Collection` is a comprehensive, large corpus of datasets that can be used by researchers around the world to train multilingual models. Our goal is only to include datasets with permissive licensing for manipulation and redistribution.
The `Aya Collection` consists of three different sources of data:
1. Templated data: We collaborated with fluent speakers to create templates that allowed for the automatic expansion of existing datasets into various languages (a minimal illustration of such a template follows this list).
2. Translated data: We translated a hand-selected subset of 19 datasets into 101 languages (114 dialects) using the NLLB 3.3B parameter machine translation model.
3. Aya Dataset: We release the [Aya Dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) as a subset of the overall collection. This is the only dataset in the collection that is human-annotated in its entirety.
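For intuition, the sketch below shows how a fluent-speaker template of this kind can expand a source record into an `inputs`/`targets` pair. The template string and the source record are illustrative only (they mirror the Mintaka example shown under Data Instances) and are not taken from the actual templating pipeline.
```python
# Illustrative only: a hand-written template applied to a hypothetical source record.
# The real templates were authored by fluent speakers in many languages.
template = (
    "The following query in {language} is taken from the {category} category. "
    "What could be the answer to the question?\n{question}"
)

source_record = {
    "language": "English",
    "category": "geography",
    "question": "What is the seventh tallest mountain in North America?",
    "answer": "Mount Lucania",
}

inputs = template.format(**source_record)
targets = f"The answer is {source_record['answer']}."
print(inputs)
print(targets)
```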
## Load with Datasets
To load this dataset with the Datasets library, first install it with `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
dataset = load_dataset("CohereForAI/aya_collection", "templated_mintaka")
```
In the above code snippet, "templated_mintaka" refers to a subset of the aya_collection. You can load other subsets by specifying their names when loading the dataset.
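Because several subsets are very large, it may help to stream them rather than download everything up front. The snippet below is a minimal sketch using the standard `datasets` streaming API; `translated_flan_cot` is just one of the subset names listed in the configuration above.
```python
from datasets import load_dataset

# Stream a large subset so records are fetched lazily instead of downloaded in full.
translated_cot = load_dataset(
    "CohereForAI/aya_collection", "translated_flan_cot", split="train", streaming=True
)

# Inspect the first record without materializing the whole split.
first_record = next(iter(translated_cot))
print(first_record["language"], first_record["inputs"][:100])
```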
## Data Instances
An example of a `train` instance looks as follows:
```json
{'id': 246001,
'inputs': 'The following query in English is taken from the geography category. What could be the answer to the question?\nWhat is the seventh tallest mountain in North America?',
'targets': 'The answer is Mount Lucania.',
'dataset_name': 'Mintaka-inst',
'sub_dataset_name': '-',
'task_type': 'question-answering',
'template_id': 3,
'language': 'eng',
'split': 'train',
'script': 'Latn'
}
```
## Data Fields
The data fields are the same among all splits:
- `id:` Unique id of the data point
- `inputs:` Prompt or input to the language model.
- `targets:` Completion or output of the language model.
- `dataset_name:` The name of the source dataset that the data point was taken from
- `sub_dataset_name:` If the source is a collection, this field indicates which part of that collection the data point was taken from. If it is not a collection, this field is left blank.
- `task_type:` The task type that this conversation belongs to.
- `template_id`: The id of the template applied to this data point.
- `language:` The ISO code of the dialect of the conversation.
- `script:` The script of the language.
- `split:` Indicates whether the data point comes from the `train`, `validation`, or `test` split of the source dataset. A minimal sketch of filtering on these fields follows this list.
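As a small example of working with these fields, the sketch below loads one templated subset and keeps only rows whose `language` code matches a given ISO code ("yor" is used here for illustration; whether a given code appears depends on the subset):
```python
from datasets import load_dataset

# Load one subset and keep only Yoruba rows; swap in any ISO code present in the subset.
masakhanews = load_dataset(
    "CohereForAI/aya_collection", "templated_masakhanews", split="train"
)
yoruba_rows = masakhanews.filter(lambda row: row["language"] == "yor")

print(len(yoruba_rows))
if len(yoruba_rows) > 0:
    print(yoruba_rows[0]["inputs"])
```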
### Statistics
The total number of data points, including the Aya Dataset, is 513,758,189. To view the breakdown of dialect codes and the respective templated and translated data point counts in the Aya Collection, refer to the toggled table below.
<details>
<summary> <b> Breakdown of Aya Collection data point counts grouped by dialects </b> </summary>
|dialect code|language|translated data point count|templated data point count|total count |
|------------|--------|---------------------------|--------------------------|---------------|
|ace |Achinese|8240684 |2000 |8242684 |
|acm |Arabic |4120342 |0 |4120342 |
|acq |Arabic |4120342 |0 |4120342 |
|aeb |Arabic |4120342 |0 |4120342 |
|afr |Afrikaans|4120342 |6108 |4126450 |
|ajp |Arabic |4120342 |0 |4120342 |
|als |Albanian|4120342 |0 |4120342 |
|amh |Amharic |4120342 |25327 |4145669 |
|apc |Arabic |4120342 |0 |4120342 |
|arb |Arabic |6424999 |216430 |6641429 |
|ars |Arabic |4120342 |0 |4120342 |
|ary |Arabic |4120342 |18076 |4138418 |
|arz |Arabic |4120342 |0 |4120342 |
|azb |Azerbaijani|4120342 |0 |4120342 |
|azj |Azerbaijani|4120342 |0 |4120342 |
|bel |Belarusian|4120342 |21273 |4141615 |
|ben |Bengali |4120342 |30661 |4151003 |
|bjn |Banjar |8240684 |2000 |8242684 |
|bul |Bulgarian|4120342 |37722 |4158064 |
|cat |Catalan |4120342 |66900 |4187242 |
|ceb |Cebuano |4120342 |0 |4120342 |
|ces |Czech |4120342 |179604 |4299946 |
|ckb |Kurdish |4120342 |0 |4120342 |
|cym |Welsh |4120342 |0 |4120342 |
|dan |Danish |4120342 |36310 |4156652 |
|deu |German |4120342 |1326722 |5447064 |
|ell |Greek |4120342 |40291 |4160633 |
|eng |English |9771427 |8066678 |17838105 |
|epo |Esperanto|4120342 |0 |4120342 |
|est |Estonian|4120342 |0 |4120342 |
|eus |Basque |4120342 |0 |4120342 |
|fin |Finnish |4120342 |457895 |4578237 |
|fra |French |4120342 |835520 |4955862 |
|gla |Scottish Gaelic|4120342 |0 |4120342 |
|gle |Irish |4120342 |0 |4120342 |
|glg |Galician|4120342 |0 |4120342 |
|guj |Gujarati|4120342 |2157 |4122499 |
|hat |Haitian Creole|4120342 |0 |4120342 |
|hau |Hausa |4120342 |51396 |4171738 |
|heb |Hebrew |4120342 |103466 |4223808 |
|hin |Hindi |4120342 |260387 |4380729 |
|hun |Hungarian|4120342 |82039 |4202381 |
|hye |Armenian|4120342 |7080 |4127422 |
|ibo |Igbo |4120342 |36312 |4156654 |
|ind |Indonesian|4120342 |45709 |4166051 |
|isl |Icelandic|4120342 |0 |4120342 |
|ita |Italian |4120342 |405682 |4526024 |
|jav |Javanese|4120342 |829 |4121171 |
|jpn |Japanese|4120342 |2693177 |6813519 |
|kan |Kannada |4120342 |1156 |4121498 |
|kas |Kashmiri|4120342 |0 |4120342 |
|kat |Georgian|4120342 |0 |4120342 |
|kaz |Kazakh |4120342 |0 |4120342 |
|khk |Mongolian|4120342 |0 |4120342 |
|khm |Khmer |4120342 |0 |4120342 |
|kir |Kyrgyz |4120342 |0 |4120342 |
|kmr |Kurdish |4120342 |0 |4120342 |
|knc |Kanuri |8240684 |0 |8240684 |
|kor |Korean |4120342 |41011 |4161353 |
|lao |Lao |4120342 |0 |4120342 |
|lit |Lithuanian|4120342 |0 |4120342 |
|ltz |Luxembourgish|4120342 |0 |4120342 |
|lvs |Latvian |4120342 |0 |4120342 |
|mal |Malayalam|4120342 |4347 |4124689 |
|mar |Marathi |4120342 |3678 |4124020 |
|min |Minangkabau|6753788 |2000 |6755788 |
|mkd |Macedonian|4120342 |0 |4120342 |
|mlt |Maltese |4120342 |0 |4120342 |
|mni |Manipuri|4120342 |0 |4120342 |
|mri |Maori |4120342 |0 |4120342 |
|mya |Burmese |4120342 |0 |4120342 |
|nld |Dutch |4120342 |220181 |4340523 |
|nno |Norwegian|4120342 |0 |4120342 |
|nob |Norwegian|4120342 |0 |4120342 |
|npi |Nepali |4120342 |0 |4120342 |
|nso |Northern Sotho|4120342 |0 |4120342 |
|pbt |Pashto |4120342 |0 |4120342 |
|pes |Persian |4120342 |245520 |4365862 |
|plt |Malagasy|4120342 |0 |4120342 |
|pol |Polish |4120342 |332503 |4452845 |
|por |Portuguese|4120342 |287432 |4407774 |
|ron |Romanian|4120342 |36359 |4156701 |
|rus |Russian |4120342 |545920 |4666262 |
|sin |Sinhala |4120342 |195 |4120537 |
|slk |Slovak |4120342 |27845 |4148187 |
|slv |Slovenian|4120342 |25731 |4146073 |
|smo |Samoan |4120342 |0 |4120342 |
|sna |Shona |4120342 |3684 |4124026 |
|snd |Sindhi |4120342 |0 |4120342 |
|som |Somali |4120342 |2926 |4123268 |
|sot |Southern Sotho|4120342 |0 |4120342 |
|spa |Spanish |4120342 |379194 |4499536 |
|srp |Serbian |4120342 |77124 |4197466 |
|sun |Sundanese|4120342 |2208 |4122550 |
|swe |Swedish |4120342 |76486 |4196828 |
|swh |Swahili |4120342 |12726 |4133068 |
|tam |Tamil |4120342 |11462 |4131804 |
|taq |Tamasheq|4120342 |0 |4120342 |
|tel |Telugu |4120342 |477821 |4598163 |
|tgk |Tajik |4120342 |0 |4120342 |
|tha |Thai |4120342 |2125180 |6245522 |
|tur |Turkish |4120342 |59932 |4180274 |
|ukr |Ukrainian|4120342 |189384 |4309726 |
|urd |Urdu |4120342 |337739 |4458081 |
|uzn |Uzbek |4120342 |0 |4120342 |
|vie |Vietnamese|4120342 |42232 |4162574 |
|xho |Xhosa |4120342 |2952 |4123294 |
|ydd |Yiddish |4120342 |0 |4120342 |
|yor |Yoruba |4120342 |4907 |4125249 |
|yue |Chinese |4120342 |0 |4120342 |
|zho-Hans |Chinese |4120342 |54528 |4174870 |
|zho-Hant |Chinese |4120342 |0 |4120342 |
|zsm |Malay |4120342 |13950 |4134292 |
|zul |Zulu |4120342 |786 |4121128 |
|arq |Arabic |0 |6046 |6046 |
|ban |Balinese|0 |2000 |2000 |
|bbc |Toba Batak|0 |2000 |2000 |
|bem |Bemba |0 |776 |776 |
|fil |Filipino|0 |220 |220 |
|fon |Fon |0 |845 |845 |
|hrv |Croatian|0 |9007 |9007 |
|kin |Kinyarwanda|0 |11165 |11165 |
|lij |Ligurian|0 |6409 |6409 |
|mad |Madurese|0 |2000 |2000 |
|nij |Ngaju |0 |2000 |2000 |
|nor |Norwegian|0 |72352 |72352 |
|pan |Punjabi |0 |2156 |2156 |
|twi |Twi |0 |10840 |10840 |
|wol |Wolof |0 |785 |785 |
|zho |Chinese |0 |74972 |74972 |
PS: Templated data also includes Mozambican Portuguese, which doesn't have its own ISO language code.
</details>
<br>
# Motivations & Intentions
- **Curation Rationale:** Automatic augmentation of existing datasets serves to enhance the available linguistic resources for multiple languages. The list of languages was initially established from mT5 and aligned with the annotators’ language list and NLLB translation model. The datasets were translated directly from English for all languages.
# Additional Information
## Provenance
- **Methods Used:** A combination of crowd-sourced templating and automatic translation was employed to source this dataset.
- **Methodology Details:**
- *Source:* Existing NLP datasets
- *Dates of Collection:* May 2023 - Dec 2023
## Dataset Version and Maintenance
- **Maintenance Status:** Actively Maintained
- **Version Details:**
- *Current version:* 1.0
- *Last Update:* 02/2024
- *First Release:* 02/2024
## Authorship
- **Publishing Organization:** [Cohere For AI](https://cohere.com/research)
- **Industry Type:** Not-for-profit - Tech
- **Contact Details:** https://cohere.com/research/aya
## Licensing Information
This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Apache 2.0](https://opensource.org/license/apache-2-0) License.
## Citation Information
```bibtex
@misc{singh2024aya,
title={Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning},
author={Shivalika Singh and Freddie Vargus and Daniel Dsouza and Börje F. Karlsson and Abinaya Mahendiran and Wei-Yin Ko and Herumb Shandilya and Jay Patel and Deividas Mataciunas and Laura OMahony and Mike Zhang and Ramith Hettiarachchi and Joseph Wilson and Marina Machado and Luisa Souza Moura and Dominik Krzemiński and Hakimeh Fadaei and Irem Ergün and Ifeoma Okoh and Aisha Alaagib and Oshan Mudannayake and Zaid Alyafeai and Vu Minh Chien and Sebastian Ruder and Surya Guthikonda and Emad A. Alghamdi and Sebastian Gehrmann and Niklas Muennighoff and Max Bartolo and Julia Kreutzer and Ahmet Üstün and Marzieh Fadaee and Sara Hooker},
year={2024},
eprint={2402.06619},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
databricks/databricks-dolly-15k | databricks | "2023-06-30T18:34:13Z" | 17,454 | 795 | [
"task_categories:question-answering",
"task_categories:summarization",
"language:en",
"license:cc-by-sa-3.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2203.02155",
"region:us"
] | [
"question-answering",
"summarization"
] | "2023-04-11T16:43:13Z" | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- en
size_categories:
- 10K<n<100K
---
# Summary
`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several
of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification,
closed QA, generation, information extraction, open QA, and summarization.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: English
Version: 1.0
**Owner: Databricks, Inc.**
# Dataset Overview
`databricks-dolly-15k` is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language
models to exhibit the magical interactivity of ChatGPT.
Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including
the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using
information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly
instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the
types of questions and instructions appropriate to each category.
Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors.
They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.
For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the `context`
field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`) which we recommend users remove for downstream applications.
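One lightweight way to do this is a regular-expression pass over the `context` field; the helper below is a minimal sketch, not part of the dataset itself.
```python
import re

def strip_citation_markers(context: str) -> str:
    # Remove bracketed Wikipedia citation markers such as "[42]".
    return re.sub(r"\[\d+\]", "", context)

example_context = "The aurora is caused by charged particles from the Sun.[3][17]"
print(strip_citation_markers(example_context))
# -> "The aurora is caused by charged particles from the Sun."
```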
# Intended Uses
While immediately valuable for instruction fine tuning large language models, as a corpus of human-generated instruction prompts,
this dataset also presents a valuable opportunity for synthetic data generation in the methods outlined in the Self-Instruct paper.
For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a
corpus of millions of examples of instructions in each of the respective InstructGPT categories.
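As a rough sketch of that workflow (assuming the `category` field uses labels such as `brainstorming`), a few records could be formatted into a few-shot prompt and handed to a generator model:
```python
from datasets import load_dataset

# Build a small few-shot prompt from a handful of records in one category.
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
brainstorming = dolly.filter(lambda row: row["category"] == "brainstorming").select(range(3))

few_shot_prompt = "\n\n".join(
    f"Instruction: {row['instruction']}\nResponse: {row['response']}" for row in brainstorming
)
# Ask the generator model to continue with a brand-new instruction of the same kind.
few_shot_prompt += "\n\nInstruction:"
print(few_shot_prompt)
```
The exact category labels, sample count, and prompt format would of course depend on the downstream generation setup.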
Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to
restate each prompt or short response, with the resulting text associated with the respective ground-truth sample. Such an approach might
provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from
these synthetic datasets.
# Dataset
## Purpose of Collection
As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source,
human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT.
Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including
academic or commercial applications.
## Sources
- **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization)
contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the
target passages.
## Annotator Guidelines
To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical
of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of
rigorous compliance with an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.
The annotation guidelines for each of the categories are as follows:
- **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA**: Write a question or instruction that requires factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Classification**: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (i.e., there is no reference text). You can choose any categories for classification you like; the more diverse the better.
- **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas.
## Personal or Sensitive Data
This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, it contains no private persons' personal identifiers or sensitive information.
## Language
American English
# Known Limitations
- Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees
# Citation
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
```
# License/Attribution
**Copyright (2023) Databricks, Inc.**
This dataset was developed at Databricks (https://www.databricks.com) and its use is subject to the CC BY-SA 3.0 license.
Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
Wikipedia (various pages) - https://www.wikipedia.org/
Copyright © Wikipedia editors and contributors. |