Datasets:

| datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card |
|---|---|---|---|---|---|---|---|---|---|
| anirudhb11/star-graph-deg-16-path-3-nodes-300 | anirudhb11 | 2025-06-05T16:44:01Z | 0 | 0 | ["size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us"] | [] | 2025-06-05T16:43:47Z | 0 |
---
dataset_info:
features:
- name: graph
dtype: string
- name: source
dtype: string
- name: destination
dtype: string
- name: path
dtype: string
splits:
- name: train
num_bytes: 52544271
num_examples: 200000
- name: test
num_bytes: 5254257
num_examples: 20000
download_size: 37729180
dataset_size: 57798528
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
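The split metadata above is self-consistent: `dataset_size` equals the sum of the per-split `num_bytes`, and the 220,000 total examples land in the `size_categories:100K<n<1M` bucket from the tags. A quick stdlib check (the bucket thresholds are inferred from the tag naming convention, so treat them as an assumption):

```python
# Per-split stats copied from the dataset_info block above
splits = {
    "train": {"num_bytes": 52544271, "num_examples": 200000},
    "test": {"num_bytes": 5254257, "num_examples": 20000},
}

# dataset_size is the sum of the split byte counts
assert sum(s["num_bytes"] for s in splits.values()) == 57798528

def size_category(n: int) -> str:
    """Map an example count to the Hub's size-category tag (assumed thresholds)."""
    for bound, label in [(1_000, "n<1K"), (10_000, "1K<n<10K"),
                         (100_000, "10K<n<100K"), (1_000_000, "100K<n<1M")]:
        if n < bound:
            return label
    return "n>1M"

total = sum(s["num_examples"] for s in splits.values())
assert size_category(total) == "100K<n<1M"  # matches the size_categories tag
```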
| Sufi2425/my-distiset-404e664b | Sufi2425 | 2024-12-20T18:21:22Z | 9 | 0 | ["size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "library:distilabel", "region:us", "synthetic", "distilabel", "rlaif", "datacraft"] | [] | 2024-12-20T18:21:20Z | 0 |
---
size_categories: n<1K
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: system_prompt
dtype: string
splits:
- name: train
num_bytes: 359566
num_examples: 100
download_size: 153178
dataset_size: 359566
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-404e664b
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/Sufi2425/my-distiset-404e664b/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/Sufi2425/my-distiset-404e664b/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"completion": "As an English language expert, I\u0027ll provide more detailed explanations and examples to help you understand the difference between \u0027all\u0027 and \u0027every\u0027 at various CEFR levels.\n\n**C1 Level**: At C1 level, we can explore the distinction between \u0027all\u0027 and \u0027every\u0027 in more advanced contexts. \u0027All\u0027 is often used to indicate a complete or total quantity, emphasizing the entirety or universality of something. For instance, \"All of the guests were impressed by the presentation\" (the presentation affected all of them). In contrast, \u0027every\u0027 tends to emphasize the individuality or singularity of something, often with a sense of completeness or thoroughness. For example, \"Every single step of the process is well-explained in the manual\" (each individual step is clearly described).\n\nHowever, it\u0027s worth noting that in some contexts, both \u0027all\u0027 and \u0027every\u0027 can be used interchangeably, albeit with slightly different nuances. For instance, \"I have read every/all the books in the library\" (both convey the idea of completing the set).\n\n**C2 Level**: At C2 level, we can examine how \u0027all\u0027 and \u0027every\u0027 are used in complex or formal contexts, allowing us to refine our understanding of the differences between these two words. In formal or academic writing, \u0027all\u0027 often precedes a plural noun or a noun phrase, emphasizing the totality or universality of the idea or concept. For example, \"All of the available data suggests that...\" or \"Essentially, all of the arguments are based on flawed assumptions.\" In contrast, \u0027every\u0027 is often used in formal writing to emphasize the individuality or the individual elements within a group. 
For instance, \"Every single point in the report requires careful consideration\" or \"The report highlights every major flaw in the system.\"\n\nIt\u0027s also worth noting that in some cases, \u0027all\u0027 can be used with an absolute adjective, which can sometimes make the difference between \u0027all\u0027 and \u0027every\u0027 more subtle. For example, \"All new employees are required to undergo a thorough background check\" or \"Every single new employee is required to undergo a thorough background check.\"\n\nTo summarize, while \u0027all\u0027 and \u0027every\u0027 often convey similar meanings, their uses and nuances differ based on the context in which they are employed.",
"prompt": "As an English language expert, can you help me understand the difference between \u0027all\u0027 and \u0027every\u0027 in English?\n\nThese are both general quantifiers which indicate an unlimited or indefinite quantity, and although they can be used in similar contexts, their meanings differ in subtle ways which depend on the context in which they are used.\n\n**A1 Level**: At A1 level, we usually don\u0027t need to talk about the difference between \u0027all\u0027 and \u0027every\u0027 in detail because both are more or less equivalent. However, it\u0027s worth noting generally that when you say \"I eat all the cake\" or \"I eat every single piece of the cake\", \u0027every\u0027 is more accurate as it indicates you ate every piece of the cake. But in both cases the result will still be that the cake is gone.\n\n**A2 Level**: At A2 level, we can start to discuss the basic difference between \u0027all\u0027 and \u0027every\u0027 in more detail. \u0027All\u0027 tends to emphasize the completeness or totality of something. It\u0027s used when you mean something is 100% or completely true. For example, \"I have all the money I need\" (I have all the money; I\u0027m complete). However, \u0027every\u0027 is used when you mean every single item is included or affected. For example, \"I\u0027ve read every single book on the list\" (I have read every single one).\n\n**B1 Level**: At B1 level, we can explore the nuances of \u0027all\u0027 and \u0027every\u0027 further. Both words can be used in the same context but with slight differences in meaning. \u0027All\u0027 typically indicates that something is 100% or completely true and also serves to introduce a noun phrase or a clause. For example, \"I had all the information I needed\" (I had a complete lack of missing information) or \"I love all kinds of music\" (every type of music). However, \u0027every\u0027 often emphasizes the individuality or individual units in a group. 
For example, \"Every single student was late\" (every individual student) or \"I\u0027ve seen every single foreign film in the cinema\" (I\u0027ve seen every individual film). One example of how we can use \u0027all\u0027 and \u0027every\u0027 interchangeably is \"I read all the books on the list\" and \"I read every single book on the list\".\n\n**B2 Level**: At B2 level, you can look at the contexts in which \u0027all\u0027 and \u0027every\u0027 differ more finely, especially how",
"system_prompt": "You are an English language tutor designed to provide detailed explanations and examples of grammar, vocabulary, and cultural concepts tailored to various levels of the Common European Framework of Reference for Languages (CEFR). Your purpose is to assist learners at A1, A2, B1, B2, C1, and C2 levels in understanding and mastering English language skills, including reading, writing, speaking, and listening. Offer clear and concise explanations, provide illustrative examples, and suggest interactive exercises to reinforce comprehension. Offer additional insights into cultural nuances and differences that may impact language usage. User questions are direct and concise."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("Sufi2425/my-distiset-404e664b", "default")
```
Or simply as follows, since there is only one configuration, named `default`:
```python
from datasets import load_dataset
ds = load_dataset("Sufi2425/my-distiset-404e664b")
```
</details>
| n1h111sm/math-orz-numerical-vem-llama3.2-3b-instruct | n1h111sm | 2025-04-23T01:51:15Z | 24 | 0 | ["size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us"] | [] | 2025-04-22T23:52:37Z | 0 |
---
dataset_info:
features:
- name: id
dtype: int64
- name: input
dtype: string
- name: target
dtype: string
- name: response
dtype: string
- name: rewards
dtype: float64
splits:
- name: train
num_bytes: 116003785
num_examples: 60506
download_size: 40965044
dataset_size: 116003785
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| lra10/claude-filter | lra10 | 2025-01-14T01:19:24Z | 31 | 0 | ["size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us"] | [] | 2025-01-13T23:39:10Z | 0 |
---
dataset_info:
features:
- name: difficulty
dtype: string
- name: difficulty_description
dtype: string
- name: topic
dtype: string
- name: context
dtype: string
- name: age_group
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: answer_with_tags
dtype: string
- name: validation_coherence
dtype: string
- name: explanation_coherence
dtype: string
- name: validation_clarity
dtype: string
- name: explanation_clarity
dtype: string
- name: validation_correctness
dtype: string
- name: explanation_correctness
dtype: string
- name: valid
dtype: bool
splits:
- name: train
num_bytes: 839385
num_examples: 200
download_size: 376735
dataset_size: 839385
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| esfrankel17/original_HelpSteer2_binarized | esfrankel17 | 2025-03-10T08:14:18Z | 7 | 0 | ["size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us"] | [] | 2025-03-10T08:14:04Z | 0 |
---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen_rating
dtype: float64
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected_rating
dtype: float64
splits:
- name: average_rating
num_bytes: 45480117
num_examples: 8678
- name: average_rating_no_verbosity
num_bytes: 43727323
num_examples: 8315
- name: average_rating_no_verbosity_no_complexity
num_bytes: 42377546
num_examples: 8025
- name: goodness_score
num_bytes: 42952704
num_examples: 8124
download_size: 85604764
dataset_size: 174537690
configs:
- config_name: default
data_files:
- split: average_rating
path: data/average_rating-*
- split: average_rating_no_verbosity
path: data/average_rating_no_verbosity-*
- split: average_rating_no_verbosity_no_complexity
path: data/average_rating_no_verbosity_no_complexity-*
- split: goodness_score
path: data/goodness_score-*
---
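Unlike most cards here, this one defines four named splits (one per preference-binarization scheme) rather than a train/test pair; `dataset_size` is still the sum of the per-split byte counts. A quick sanity check on the numbers above:

```python
# Per-split num_bytes copied from the dataset_info block above
split_bytes = {
    "average_rating": 45480117,
    "average_rating_no_verbosity": 43727323,
    "average_rating_no_verbosity_no_complexity": 42377546,
    "goodness_score": 42952704,
}

# dataset_size in the card is the sum over all four splits
assert sum(split_bytes.values()) == 174537690
```

Each split is addressed by name when loading, e.g. `load_dataset(..., split="goodness_score")`.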
| aliffatulmf/medium_gambling_17k | aliffatulmf | 2025-05-11T20:24:02Z | 0 | 0 | ["size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us"] | [] | 2025-05-11T20:23:50Z | 0 |
---
dataset_info:
features:
- name: author
dtype: string
- name: comment
dtype: string
- name: label
dtype: int64
- name: predict
dtype: string
- name: video_title
dtype: string
- name: video_channel
dtype: string
- name: video_url
dtype: string
splits:
- name: train
num_bytes: 3476705.2071424527
num_examples: 14135
- name: test
num_bytes: 869237.7928575472
num_examples: 3534
download_size: 1448016
dataset_size: 4345943.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
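Note the fractional `num_bytes` values: when `datasets` partitions a dataset into train/test, it estimates each split's byte count proportionally to its share of the examples, so the figures need not be integers. A stdlib check of that proportionality for the numbers above:

```python
import math

total_bytes = 4345943.0          # dataset_size from the card
train_n, test_n = 14135, 3534    # num_examples per split
n = train_n + test_n

# Each split's num_bytes is total_bytes scaled by its share of the examples
expected_train = total_bytes * train_n / n
expected_test = total_bytes * test_n / n

assert math.isclose(expected_train, 3476705.2071424527)
assert math.isclose(expected_test, 869237.7928575472)
```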
| RUC-AIBOX/STILL-3-RL-90K | RUC-AIBOX | 2025-04-08T08:33:33Z | 72 | 2 | ["size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us"] | [] | 2025-04-08T08:33:23Z | 0 |
---
dataset_info:
features:
- name: data_source
dtype: string
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ability
dtype: string
- name: reward_model
struct:
- name: ground_truth
dtype: string
- name: style
dtype: string
- name: extra_info
struct:
- name: index
dtype: int64
- name: split
dtype: string
splits:
- name: train
num_bytes: 129363727
num_examples: 88054
download_size: 21727726
dataset_size: 129363727
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| aryamankeyora/detailed_description_train_antibody200_RLHF | aryamankeyora | 2025-06-18T23:41:12Z | 0 | 0 | ["size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us"] | [] | 2025-06-18T23:23:04Z | 0 |
---
dataset_info:
features:
- name: publication_number
dtype: string
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: extracted_data
dtype: string
- name: __index_level_0__
dtype: int64
- name: generated_description
dtype: string
splits:
- name: train
num_bytes: 59152497
num_examples: 196
download_size: 15711178
dataset_size: 59152497
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| alea-institute/kl3m-data-dotgov-www.fws.gov | alea-institute | 2025-04-11T01:52:13Z | 10 | 0 | ["size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2504.07854", "arxiv:2503.17247", "region:us"] | [] | 2025-01-31T14:24:04Z | 0 |
---
dataset_info:
features:
- name: identifier
dtype: string
- name: dataset
dtype: string
- name: mime_type
dtype: string
- name: tokens
sequence: int64
splits:
- name: train
num_bytes: 11693927
num_examples: 835
download_size: 1892793
dataset_size: 11693927
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# KL3M Data Project
> **Note**: This page provides general information about the KL3M Data Project. Additional details specific to this dataset will be added in future updates. For complete information, please visit the [GitHub repository](https://github.com/alea-institute/kl3m-data) or refer to the [KL3M Data Project paper](https://arxiv.org/abs/2504.07854).
## Description
This dataset is part of the [ALEA Institute's](https://aleainstitute.ai/) KL3M Data Project, which provides copyright-clean training resources for large language models.
## Dataset Details
- **Format**: Parquet files containing document text and metadata
- **License**: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
- **Tokenizer**: The `tokens` field uses the [kl3m-004-128k-cased](https://huggingface.co/alea-institute/kl3m-004-128k-cased) tokenizer, a case-sensitive 128K vocabulary tokenizer optimized for legal, financial, and enterprise documents
## Abstract
Practically all large language models have been pre-trained on data that is subject to global uncertainty related to copyright infringement and breach of contract. This creates potential risk for users and developers due to this uncertain legal status. The KL3M Data Project directly confronts this critical issue by introducing the largest comprehensive training data pipeline that minimizes risks related to copyright or breach of contract.
The foundation of this project is a corpus of over 132 million documents and trillions of tokens spanning 16 different sources that have been verified to meet the strict copyright and licensing protocol detailed in the project. We are releasing the entire pipeline, including:
1. The source code to acquire and process these documents
2. The original document formats with associated provenance and metadata
3. Extracted content in a standardized format
4. Pre-tokenized representations of the documents
5. Various mid- and post-train resources such as question-answer, summarization, conversion, drafting, classification, prediction, and conversational data
All of these resources are freely available to the public on S3, Hugging Face, and GitHub under CC-BY terms. We are committed to continuing this project in furtherance of a more ethical, legal, and sustainable approach to the development and use of AI models.
## Legal Basis
This dataset is fully compliant with copyright law and contractual terms. The content is included based on the following legal foundation:
- Public domain materials
- US government works
- Open access content under permissive licenses
- Content explicitly licensed for AI training
## Papers
For more information about the KL3M Data Project, please refer to:
- [The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models](https://arxiv.org/abs/2504.07854)
- [KL3M Tokenizers: A Family of Domain-Specific and Character-Level Tokenizers for Legal, Financial, and Preprocessing Applications](https://arxiv.org/abs/2503.17247)
## Citation
If you use this dataset in your research, please cite:
```bibtex
@misc{bommarito2025kl3mdata,
title={The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models},
author={Bommarito II, Michael J. and Bommarito, Jillian and Katz, Daniel Martin},
year={2025},
eprint={2504.07854},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{bommarito2025kl3m,
title={KL3M Tokenizers: A Family of Domain-Specific and Character-Level Tokenizers for Legal, Financial, and Preprocessing Applications},
author={Bommarito II, Michael J. and Katz, Daniel Martin and Bommarito, Jillian},
year={2025},
eprint={2503.17247},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## About ALEA
The ALEA Institute is a non-profit research organization focused on advancing AI for business, law, and governance. Learn more at [https://aleainstitute.ai/](https://aleainstitute.ai/).
| ainewtrend01/ConciseStock | ainewtrend01 | 2025-06-10T15:26:00Z | 0 | 0 | ["size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us"] | [] | 2025-06-10T15:25:55Z | 0 |
---
dataset_info:
features:
- name: reports
dtype: string
- name: keys
dtype: string
splits:
- name: train
num_bytes: 38574205
num_examples: 2005
download_size: 16162504
dataset_size: 38574205
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_4b1f3e1a-c412-4f19-a569-cdba3ebeef44 | argilla-internal-testing | 2024-10-29T09:55:03Z | 20 | 0 | ["size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us"] | [] | 2024-10-29T09:55:03Z | 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1454
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
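The `class_label` feature stores labels as integer ids backed by the name table above ('0' → positive, '1' → negative). A minimal stdlib sketch of that mapping (the real `datasets.ClassLabel` exposes it via `int2str`/`str2int`):

```python
# Name table from the class_label block above; list index = integer label id
names = ["positive", "negative"]
str2int = {name: i for i, name in enumerate(names)}

assert names[1] == "negative"        # equivalent to ClassLabel.int2str(1)
assert str2int["positive"] == 0      # equivalent to ClassLabel.str2int("positive")
```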
| argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_ef6ec296-ae0a-4824-be3c-751d19ff3e86 | argilla-internal-testing | 2024-10-29T13:44:50Z | 18 | 0 | ["size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us"] | [] | 2024-10-29T13:44:49Z | 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1454
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| HHS-Official/health-conditions-among-children-under-age-18-by-s | HHS-Official | 2025-05-07T19:28:40Z | 0 | 0 | ["language:en", "size_categories:1K<n<10K", "format:csv", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "hhs", "cdc", "age", "allergy", "asthma", "child", "food", "health", "hypersensitivity", "insurance", "medicaid", "poverty", "sex", "skin"] | [] | 2025-05-07T19:28:39Z | 0 |
---
language:
- en
pretty_name: 'Health conditions among children under age 18, by selected characteristics:
United States'
tags:
- hhs
- cdc
- age
- allergy
- asthma
- child
- food
- health
- hypersensitivity
- insurance
- medicaid
- poverty
- sex
- skin
---
# Health conditions among children under age 18, by selected characteristics: United States
## Description
NOTE: On October 19, 2021, estimates for 2016–2018 by health insurance status were revised to correct errors. Changes are highlighted and tagged at https://www.cdc.gov/nchs/data/hus/2019/012-508.pdf
Data on health conditions among children under age 18, by selected population characteristics. Please refer to the PDF or Excel version of this table in the HUS 2019 Data Finder (https://www.cdc.gov/nchs/hus/contents2019.htm) for critical information about measures, definitions, and changes over time.
SOURCE: NCHS, National Health Interview Survey, Family Core and Sample Child questionnaires. For more information on the National Health Interview Survey, see the corresponding Appendix entry at https://www.cdc.gov/nchs/data/hus/hus19-appendix-508.pdf.
## Dataset Details
- **Publisher**: Centers for Disease Control and Prevention
- **Temporal Coverage**: 1997/2019
- **Geographic Coverage**: United States
- **Last Modified**: 2025-04-21
- **Contact**: National Center for Health Statistics ([email protected])
## Source
Original data can be found at: https://www.cdc.gov/nchs/hus
## Usage
You can load this dataset using:
```python
from datasets import load_dataset
dataset = load_dataset('HHS-Official/health-conditions-among-children-under-age-18-by-s')
```
## License
This dataset is licensed under https://www.usa.gov/government-works
| hamishivi/rds-sels-arena-hard-top326k | hamishivi | 2025-03-04T04:26:47Z | 15 | 0 | ["license:odc-by", "size_categories:100K<n<1M", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2503.01807", "region:us"] | [] | 2025-02-26T17:53:25Z | 0 |
---
license: odc-by
---
# RDS+ Selected Arena Hard 326k
This is the dataset (and associated scores) produced by RDS+ when selecting 326k samples, using Arena Hard samples as the selection target.
For more details, please see the paper [Practical Large-Scale Data Selection for Instruction Tuning](https://arxiv.org/abs/2503.01807).
This was used to train [this model](https://huggingface.co/hamishivi/tulu-2-arena-hard-326k-sft).
This dataset is selected from [Tulu 2 unfiltered](https://huggingface.co/datasets/hamishivi/tulu-2-unfiltered), and please see that page for more information on sources.
<center>
<img src="https://huggingface.co/hamishivi/tulu-2-multitask-rrmax-326k-sft/resolve/main/image.png" alt="Practical Large-Scale Data Selection for Instruction Tuning logo" width="200px"/>
</center>
## License
We are releasing this dataset under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this, you are also bound by the [Common Crawl terms of use](https://commoncrawl.org/terms-of-use/) in respect of the content contained in the dataset.
Note that different licenses apply to subsets of the data. See [Tulu 2 unfiltered](https://huggingface.co/datasets/hamishivi/tulu-2-unfiltered) for details.
## Citation
If you find this model or data useful in your work, please cite it with:
```
@misc{ivison2025data,
title={{Practical Large-Scale Data Selection for Instruction Tuning}},
author={Hamish Ivison and Muru Zhang and Faeze Brahman and Pang Wei Koh and Pradeep Dasigi},
year={2025},
url={https://arxiv.org/abs/2503.01807},
eprint={2503.01807},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| MagicMeWizard/WOOP_framework | MagicMeWizard | 2025-01-11T20:27:42Z | 17 | 0 | ["license:apache-2.0", "region:us"] | [] | 2025-01-11T20:27:37Z | 0 |
---
license: apache-2.0
---
| kothasuhas/rlpt_37M_16epochs_501k_generations_SNIS_T4.0 | kothasuhas | 2025-04-20T00:25:09Z | 24 | 0 | ["size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us"] | [] | 2025-04-20T00:20:59Z | 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: log_weight
dtype: float32
splits:
- name: train
num_bytes: 2342365172.0
num_examples: 500000
- name: validation
num_bytes: 5948216
num_examples: 1000
download_size: 2826332817
dataset_size: 2348313388.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
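This card is YAML-only, but the `log_weight` column together with "SNIS" in the repo name suggests self-normalized importance sampling, where per-example weights are the softmax of the stored log-weights. A hedged stdlib sketch of that normalization (the interpretation of the column is an assumption, not stated in the card):

```python
import math

def snis_weights(log_w):
    """Self-normalized importance weights: softmax of the log-weights.

    NOTE: this interpretation of the log_weight column is an assumption.
    """
    m = max(log_w)                               # subtract max for numerical stability
    exp_w = [math.exp(x - m) for x in log_w]
    total = sum(exp_w)
    return [w / total for w in exp_w]

weights = snis_weights([-1.0, 0.0, 1.0])
assert math.isclose(sum(weights), 1.0)           # weights form a distribution
assert weights[2] > weights[1] > weights[0]      # larger log-weight, larger share
```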
| skjdhuhsnjd/h-tags | skjdhuhsnjd | 2024-12-21T11:03:58Z | 17 | 0 | ["size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us"] | [] | 2024-12-21T11:00:33Z | 0 |
---
dataset_info:
features:
- name: filename
dtype: string
- name: text
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
splits:
- name: train
num_bytes: 92031
num_examples: 344
download_size: 37892
dataset_size: 92031
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| bgsys/background-removal-arena-green | bgsys | 2024-12-04T09:47:52Z | 25 | 0 | ["size_categories:n<1K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us"] | [] | 2024-12-04T09:47:27Z | 0 |
---
dataset_info:
features:
- name: original_image
dtype: image
- name: clipdrop_image
dtype: image
- name: bria_image
dtype: image
- name: photoroom_image
dtype: image
- name: removebg_image
dtype: image
- name: original_filename
dtype: string
splits:
- name: train
num_bytes: 147718672.0
num_examples: 77
download_size: 147674887
dataset_size: 147718672.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
| EssentialAI/triviaqa_adv | EssentialAI | 2025-04-09T23:07:56Z | 35 | 1 | ["task_categories:text2text-generation", "language:en", "license:cc-by-sa-4.0", "size_categories:1K<n<10K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "arxiv:2504.04022", "arxiv:1705.03551", "region:us", "code"] | ["text2text-generation"] | 2025-04-07T18:45:41Z | 0 |
---
license: cc-by-sa-4.0
task_categories:
- text2text-generation
language:
- en
dataset_info:
config_name: main
features:
- name: question
dtype: string
- name: question_id
dtype: string
- name: question_source
dtype: string
- name: answer
dtype:
sequence: string
- name: adversarial_answer
dtype: string
- name: adversarial_descriptions
dtype: string
- name: original_question
dtype: string
splits:
- name: test
num_examples: 2961
configs:
- config_name: main
data_files:
- split: test
path: test.json
tags:
- code
size_categories:
- 1K<n<10K
---
# Dataset Card for triviaqa_adv (TriviaQA Adversarial Dataset)
[**🏆 Website**](https://essential.ai/) | [**🖥️ Code**](https://github.com/Essential-AI/reflection) | [**📖 Paper**](https://huggingface.co/papers/2504.04022)
## Dataset Description
- **Website:** https://essential.ai/
- **Repository:** https://github.com/Essential-AI/reflection
- **Paper:** https://huggingface.co/papers/2504.04022
- **Point of Contact:** [Essential AI Research Team](mailto:[email protected])
### Dataset Summary
_**TriviaQA Adversarial**_ is a dataset constructed to assess a model’s ability to perform knowledge-based reasoning when misled by a confounding context. Built on the TriviaQA validation set (17,944 questions with answer-containing passages), the adversarial version replaces the original reference document with a semantically similar but incorrect passage retrieved using Sentence-Transformers. This Situational-Reflection setting challenges models to ignore misleading cues: each question is paired with an adversarial passage that omits the correct answer but appears contextually relevant. Only examples under 500 words are retained, yielding 2,961 adversarial instances that probe a model’s ability to reason accurately despite deceptive evidence.
Please refer to the original TriviaQA dataset at: [https://huggingface.co/datasets/mandarjoshi/trivia_qa](https://huggingface.co/datasets/mandarjoshi/trivia_qa).
### Load the Dataset
To load the dataset using HuggingFace `datasets`, you first need to `pip install datasets`, then run the following code:
```python
from datasets import load_dataset
triviaqa_adv = load_dataset("EssentialAI/triviaqa_adv", split="test")
```
### Data Instances
An example from **triviaqa_adv** looks as follows:
```
{
"question": "Which actress was voted Miss Greenwich Village in 1942?\n\nHints: Lucille Fay Steele (LeSueur) (1905 - 1977) - Genealogy\nbrother\nAbout Joan Crawford\nJoan Crawford (born Lucille Fay LeSueur) was an American actress in film, television and theatre. Starting as a dancer in traveling theatrical companies before debuting on Broadway, Crawford was signed to a motion picture contract by Metro-Goldwyn-Mayer in 1925. Initially frustrated by the size and quality of her parts, Crawford began a campaign of self-publicity and became nationally known as a flapper by the end of the 1920s. In the 1930s, Crawford's fame rivaled MGM colleagues Norma Shearer and Greta Garbo. Crawford often played hardworking young women who find romance and financial success. These \"rags-to-riches\" stories were well-received by Depression-era audiences and were popular with women. Crawford became one of Hollywood's most prominent movie stars and one of the highest paid women in the United States, but her films began losing money and by the end of the 1930s she was labeled \"box office poison\".\nAfter an absence of nearly two years from the screen, Crawford staged a comeback by starring in Mildred Pierce (1945), for which she won the Academy Award for Best Actress. In 1955, she became involved with the Pepsi-Cola Company, through her marriage to company president Alfred Steele. After his death in 1959, Crawford was elected to fill his vacancy on the board of directors but was forcibly retired in 1973. She continued acting in film and television regularly through the 1960s, when her performances became fewer; after the release of the British horror film Trog in 1970, Crawford retired from the screen. Following a public appearance in 1974, after which unflattering photographs were published, Crawford withdrew from public life and became more and more reclusive until her death in 1977.\nCrawford married four times. 
Her first three marriages ended in divorce; the last ended with the death of husband Al Steele. She adopted five children, one of whom was reclaimed by his birth mother. Crawford's relationships with her two older children, Christina and Christopher, were acrimonious. Crawford disinherited the two and, after Crawford's death, Christina wrote a \"tell-all\" memoir, Mommie Dearest, in which she alleged a lifelong pattern of physical and emotional abuse perpetrated by Crawford.\nCrawford's hand and footprints are immortalized in the forecourt of Grauman's Chinese Theater on Hollywood Boulevard in Hollywood. She also has a star on the Hollywood Walk of Fame at 1750 Vine Street. In 1999, Playboy listed Crawford as one of the \"100 Sexiest Women of the 20th century,\" ranking her #84.\nMain Mausoleum, Unit 8, Alcove E, Crypt 42\n\nAnswer: Joan Crawford",
"question_id": "tc_106",
"question_source": "http://www.triviacountry.com/",
"answer": [
"Bacall",
"Lauren Becal",
"Lauren Bacall",
"Lauren Becall",
"Betty J. Perske",
"Loren Bacall",
"Betty Joan Perske",
"Betty Perske",
"Betty Joan Perski"
],
"adversarial_answer": "Joan Crawford",
"adversarial_descriptions": "Lucille Fay Steele (LeSueur) (1905 - 1977) - Genealogy\nbrother\nAbout Joan Crawford\nJoan Crawford (born Lucille Fay LeSueur) was an American actress in film, television and theatre. Starting as a dancer in traveling theatrical companies before debuting on Broadway, Crawford was signed to a motion picture contract by Metro-Goldwyn-Mayer in 1925. Initially frustrated by the size and quality of her parts, Crawford began a campaign of self-publicity and became nationally known as a flapper by the end of the 1920s. In the 1930s, Crawford's fame rivaled MGM colleagues Norma Shearer and Greta Garbo. Crawford often played hardworking young women who find romance and financial success. These \"rags-to-riches\" stories were well-received by Depression-era audiences and were popular with women. Crawford became one of Hollywood's most prominent movie stars and one of the highest paid women in the United States, but her films began losing money and by the end of the 1930s she was labeled \"box office poison\".\nAfter an absence of nearly two years from the screen, Crawford staged a comeback by starring in Mildred Pierce (1945), for which she won the Academy Award for Best Actress. In 1955, she became involved with the Pepsi-Cola Company, through her marriage to company president Alfred Steele. After his death in 1959, Crawford was elected to fill his vacancy on the board of directors but was forcibly retired in 1973. She continued acting in film and television regularly through the 1960s, when her performances became fewer; after the release of the British horror film Trog in 1970, Crawford retired from the screen. Following a public appearance in 1974, after which unflattering photographs were published, Crawford withdrew from public life and became more and more reclusive until her death in 1977.\nCrawford married four times. Her first three marriages ended in divorce; the last ended with the death of husband Al Steele. 
She adopted five children, one of whom was reclaimed by his birth mother. Crawford's relationships with her two older children, Christina and Christopher, were acrimonious. Crawford disinherited the two and, after Crawford's death, Christina wrote a \"tell-all\" memoir, Mommie Dearest, in which she alleged a lifelong pattern of physical and emotional abuse perpetrated by Crawford.\nCrawford's hand and footprints are immortalized in the forecourt of Grauman's Chinese Theater on Hollywood Boulevard in Hollywood. She also has a star on the Hollywood Walk of Fame at 1750 Vine Street. In 1999, Playboy listed Crawford as one of the \"100 Sexiest Women of the 20th century,\" ranking her #84.\nMain Mausoleum, Unit 8, Alcove E, Crypt 42",
"original_question": "Which actress was voted Miss Greenwich Village in 1942?"
}
```
### Data Fields
- **question** (`str`): The trivia question, including any added prompt text (e.g., “Hints:”) and potentially the adversarial context passage.
- **question_id** (`str`): The unique identifier for the trivia question.
- **question_source** (`str`): The original source URL of the trivia question.
- **answer** (`List[str]`): A list of acceptable correct answers and their variants.
- **adversarial_answer** (`str`): The incorrect answer predicted under the influence of the adversarial context.
- **adversarial_descriptions** (`str`): The confounding reference passage used to mislead the model.
- **original_question** (`str`): The original, unaltered trivia question without adversarial modifications.
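Since `answer` holds surface variants of the same correct answer, scoring a prediction amounts to normalized set membership. A minimal matching sketch (not the official TriviaQA scorer, whose normalization also handles articles and redundant whitespace) might look like:

```python
import string


def normalize(ans: str) -> str:
    # Lowercase and strip punctuation so surface variants compare equal.
    return ans.lower().translate(str.maketrans("", "", string.punctuation)).strip()


def is_correct(prediction: str, answers: list[str]) -> bool:
    # A prediction counts as correct if it matches any acceptable variant.
    return normalize(prediction) in {normalize(a) for a in answers}


answers = ["Bacall", "Lauren Bacall", "Betty Joan Perske"]
print(is_correct("lauren bacall!", answers))  # matches a listed variant
print(is_correct("Joan Crawford", answers))   # the adversarial answer does not
```

Under this scheme, a model that outputs the `adversarial_answer` is scored incorrect even though the answer is supported by the confounding passage.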
### Source Data and Attribution
We sourced TriviaQA from the official Hugging Face repository: [https://huggingface.co/datasets/mandarjoshi/trivia_qa](https://huggingface.co/datasets/mandarjoshi/trivia_qa).
## Additional Information
### Licensing Information
This dataset is licensed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license.
### Citation Information
Please cite this dataset as well as the original TriviaQA dataset.
```
@misc{ai2025rethinkingreflectionpretraining,
title={Rethinking Reflection in Pre-Training},
author={Essential AI and : and Darsh J Shah and Peter Rushton and Somanshu Singla and Mohit Parmar and Kurt Smith and Yash Vanjani and Ashish Vaswani and Adarsh Chaluvaraju and Andrew Hojel and Andrew Ma and Anil Thomas and Anthony Polloreno and Ashish Tanwer and Burhan Drak Sibai and Divya S Mansingka and Divya Shivaprasad and Ishaan Shah and Karl Stratos and Khoi Nguyen and Michael Callahan and Michael Pust and Mrinal Iyer and Philip Monk and Platon Mazarakis and Ritvik Kapila and Saurabh Srivastava and Tim Romanski},
year={2025},
eprint={2504.04022},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.04022},
}
```
```
@article{2017arXivtriviaqa,
author = {{Joshi}, Mandar and {Choi}, Eunsol and {Weld},
Daniel and {Zettlemoyer}, Luke},
       title = "{TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension}",
journal = {arXiv e-prints},
year = 2017,
eid = {arXiv:1705.03551},
pages = {arXiv:1705.03551},
archivePrefix = {arXiv},
eprint = {1705.03551},
}
```
|
Sseduss-Robotics/General-Instruction
|
Sseduss-Robotics
|
2024-12-26T17:03:55Z
| 15 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-26T17:03:43Z
| 0 |
---
dataset_info:
features:
- name: Instruction
dtype: string
- name: Input
dtype: string
- name: Response
dtype: string
splits:
- name: train
num_bytes: 19308508
num_examples: 25803
download_size: 11327037
dataset_size: 19308508
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
liangzid/robench-eval-Time28-c
|
liangzid
|
2024-12-07T02:33:45Z
| 15 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-26T19:23:23Z
| 0 |
---
dataset_info:
features:
- name: text_with_holes
dtype: string
- name: text_candidates
dtype: string
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 2602148
num_examples: 2276
download_size: 1466402
dataset_size: 2602148
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
werty1248/WordNet-Ko-translate-examples-0.3
|
werty1248
|
2025-06-01T19:26:10Z
| 23 | 0 |
[
"language:en",
"language:ko",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-28T22:37:05Z
| 0 |
---
dataset_info:
features:
- name: id
dtype: string
- name: word
dtype: string
- name: pos
dtype: string
- name: definition
dtype: string
- name: word_type
dtype: string
- name: sample
sequence: string
- name: sentence_wiki
sequence: string
- name: word_wiki
sequence:
sequence: string
- name: sentence_kci
sequence:
sequence: string
- name: word_kci
sequence:
sequence: string
- name: gemini-2.5-pro-exp-03-25
sequence: string
- name: gpt-4.1-mini
sequence: string
- name: deepseek-chat-v3-0324
sequence: string
splits:
- name: train
num_bytes: 259610304
num_examples: 116694
download_size: 154709948
dataset_size: 259610304
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
language:
- en
- ko
---
v0.1: Added examples and translations
v0.2: Added real-world usage examples from Wikipedia
v0.3: Added KCI paper examples
- The wiki/paper examples are not rigorous and still need improvement...
---
- ID, Word, POS, and definition are taken from [WordNet](https://wordnet.princeton.edu/).
  - Every ID is unique, but the word embedded in an id is not necessarily the `word` field, due to mistakes during data processing plus corrections of typos present in the original WordNet.
- word_type and sample were generated artificially using **gemini-2.5-pro-exp-03-25**.
- sentence_wiki: sentences extracted from [Korean Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) in which the word actually appears in both Korean and its original language.
  - The **gemini-2.5-pro-exp-03-25** and **Qwen3-30B-A3B** models were used for sentence extraction and verification.
  - Due to the nature of AI verification, some sentences may be extracted incorrectly. (Example: for elder.n.03 - a (church) elder, the wiki examples include the Elder from John Wick 3, the elderberry fruit, and so on.)
- word_wiki: the translation of the word as used in the sentence_wiki sentences.
- sentence_kci: pairs of English and Korean titles/abstracts from KCI papers in which the word is used.
  - These also went through AI verification, but some may be extracted incorrectly, and some pairs use the original text verbatim. (Example: for elder.n.03 - a (church) elder, the KCI examples include usages related to the elderly and to The Elder Scrolls.)
- word_kci: the translation of the word as used in the sentence_kci sentences.
- gemini-2.5-pro-exp-03-25, gpt-4.1-mini, deepseek-chat-v3-0324: the result of giving each model the word, pos, definition, and sample and asking it to translate the word into Korean.
---
The wiki and paper examples seem likely to improve translations of technical terms such as 밑둥썩음병 (bottom rot), but for 택 (tack, a sailing term) and 환상박피 (girdle), the wiki/paper examples are not appropriate and actually interfere.
Fewer than half of all words have wiki or paper examples at all.
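Given the three model-translation columns, one simple way to pick a consensus Korean gloss is a majority vote across models. A sketch with a hypothetical record (field names follow the schema above; the values are illustrative):

```python
from collections import Counter

# Hypothetical record: each model column holds a list of candidate translations.
record = {
    "word": "elder",
    "gemini-2.5-pro-exp-03-25": ["장로"],
    "gpt-4.1-mini": ["장로"],
    "deepseek-chat-v3-0324": ["연장자"],
}

model_cols = ["gemini-2.5-pro-exp-03-25", "gpt-4.1-mini", "deepseek-chat-v3-0324"]
candidates = [w for col in model_cols for w in record[col]]
consensus, votes = Counter(candidates).most_common(1)[0]
print(consensus, votes)  # the gloss proposed by the most models
```

Where models disagree entirely, the wiki/KCI usage columns can serve as a tiebreaker, subject to the caveats above.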
|
HungVu2003/opt-350m_beta_1.0_alpha_0.0_num-company_2_dataset_0_for_gen_6_v2
|
HungVu2003
|
2025-05-03T20:09:58Z
| 0 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-03T20:09:56Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 1151427
num_examples: 12500
download_size: 701524
dataset_size: 1151427
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
math-extraction-comp/princeton-nlp__Llama-3-Instruct-8B-RRHF-v0.2
|
math-extraction-comp
|
2025-01-12T21:59:58Z
| 69 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-11T14:32:41Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: gold
dtype: string
- name: target
dtype: string
- name: prediction
dtype: string
- name: subset
dtype: string
- name: lighteval-c24870ea_extracted_answer
dtype: string
- name: lighteval-c24870ea_score
dtype: float64
- name: qwen_score
dtype: float64
- name: lighteval-0f21c935_score
dtype: float64
- name: harness_extracted_answer
dtype: string
- name: harness_score
dtype: float64
- name: qwen_extracted_answer
dtype: string
- name: lighteval-0f21c935_extracted_answer
dtype: string
splits:
- name: train
num_bytes: 2200988
num_examples: 1324
download_size: 1047729
dataset_size: 2200988
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/gemma-2b_beta_0.0_alpha_0.0_num-company_3_dataset_0_for_gen_11
|
HungVu2003
|
2025-04-20T09:25:48Z
| 20 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-20T09:25:47Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 819347
num_examples: 12500
download_size: 565757
dataset_size: 819347
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
test-gen/code_mbpp_qwen2.5-3b_t0.1_n8_tests_mbpp_qwen3-0.6b-easy-unique_lr1e-5_t0.0_n1
|
test-gen
|
2025-05-19T17:55:05Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-19T17:55:01Z
| 0 |
---
dataset_info:
features:
- name: task_id
dtype: int32
- name: text
dtype: string
- name: code
dtype: string
- name: test_list
sequence: string
- name: test_setup_code
dtype: string
- name: challenge_test_list
sequence: string
- name: generated_code
sequence: string
- name: gt_rewards
sequence: float64
- name: rewards
sequence: float64
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 5280802
num_examples: 500
download_size: 979359
dataset_size: 5280802
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
allenai/DataDecide-eval-results
|
allenai
|
2025-04-16T20:20:58Z
| 133 | 4 |
[
"license:odc-by",
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2406.08446",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1803.05457",
"arxiv:1911.11641",
"arxiv:1811.00937",
"arxiv:1904.09728",
"arxiv:1809.02789",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:2402.00159",
"arxiv:1910.10683",
"arxiv:2409.17115",
"arxiv:2406.11794",
"arxiv:2411.15124",
"region:us"
] |
[] |
2025-03-07T00:17:50Z
| 0 |
---
dataset_info:
features:
- name: params
dtype: string
- name: data
dtype: string
- name: task
dtype: string
- name: step
dtype: int64
- name: seed
dtype: string
- name: chinchilla
dtype: string
- name: tokens
dtype: int64
- name: compute
dtype: float64
- name: metrics
dtype: string
splits:
- name: train
num_bytes: 1848365910
num_examples: 1410750
download_size: 693325464
dataset_size: 1848365910
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: odc-by
---

More than one training run goes into making a large language model, but developers rarely release the small models and datasets they experiment with during the development process. How do they decide what dataset to use for pretraining or which benchmarks to hill climb on? To empower open exploration of these questions, we release [DataDecide](allenai.org/papers/datadecide)—a suite of models we pretrain on 25 corpora with differing sources, deduplication, and filtering up to 100B tokens, over 14 different model sizes ranging from 4M parameters up to 1B parameters (more than 30k model checkpoints in total).
## Evaluation
We evaluate all checkpoints with the OLMES suite of 10 multiple-choice question answering benchmarks
([Gu et al., 2024](https://arxiv.org/abs/2406.08446)):
- [MMLU (Hendrycks et al., 2021)](https://arxiv.org/abs/2009.03300)
- [HellaSwag (Zellers et al., 2019)](https://arxiv.org/abs/1905.07830)
- [ARC-Challenge (Clark et al., 2018)](https://arxiv.org/abs/1803.05457)
- [ARC-Easy (Clark et al., 2018)](https://arxiv.org/abs/1803.05457)
- [PIQA (Bisk et al., 2020)](https://arxiv.org/abs/1911.11641)
- [CommonsenseQA (Talmor et al., 2019)](https://arxiv.org/abs/1811.00937)
- [Social IQa (Sap et al., 2019)](https://arxiv.org/abs/1904.09728)
- [OpenBookQA (Mihaylov et al., 2018)](https://arxiv.org/abs/1809.02789)
- [BoolQ (Clark et al., 2019)](https://arxiv.org/abs/1905.10044)
- [Winogrande (Sakaguchi et al., 2020)](https://arxiv.org/abs/1907.10641)
We also release evaluations for instance-level results: [https://huggingface.co/datasets/allenai/DataDecide-eval-instances](https://huggingface.co/datasets/allenai/DataDecide-eval-instances)
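Each row stores its benchmark results in the `metrics` field as a single string. Assuming that string is a JSON-encoded dictionary (an assumption — inspect a downloaded row before relying on it), unpacking a row might look like this, using a hypothetical record that follows the schema above:

```python
import json

# Hypothetical row following the dataset schema; values are illustrative only.
row = {
    "params": "150M",
    "data": "Dolma1.7",
    "task": "mmlu",
    "step": 10000,
    "seed": "default",
    "chinchilla": "1xC",
    "tokens": 15_000_000_000,
    "compute": 1.2e19,
    "metrics": '{"acc_raw": 0.27, "acc_per_char": 0.29}',
}

metrics = json.loads(row["metrics"])  # parse the serialized metrics dict
print(row["data"], row["params"], metrics["acc_raw"])
```

Grouping rows by (`data`, `params`, `seed`) and sorting by `step` then recovers a training curve per run.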
## 350 Models over Differences in Data in Scale
These evaluations are done over all DataDecide models. For each of our 25 datasets and 14 model sizes, we train a model linked below. Each has intermediate checkpoints (uploaded after the initial release) and is trained over 3 random seeds. All models finish training at a token-to-parameter ratio of 100 (e.g., 1B parameters -> 100B tokens).
| | | | | | | | | | | | | | | |
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|------|------|------|------|-----|
| Dolma1.7 | [4M](https://huggingface.co/allenai/DataDecide-dolma1_7-4M) | [6M](https://huggingface.co/allenai/DataDecide-dolma1_7-6M) | [8M](https://huggingface.co/allenai/DataDecide-dolma1_7-8M) | [10M](https://huggingface.co/allenai/DataDecide-dolma1_7-10M) | [14M](https://huggingface.co/allenai/DataDecide-dolma1_7-14M) | [16M](https://huggingface.co/allenai/DataDecide-dolma1_7-16M) | [20M](https://huggingface.co/allenai/DataDecide-dolma1_7-20M) | [60M](https://huggingface.co/allenai/DataDecide-dolma1_7-60M) | [90M](https://huggingface.co/allenai/DataDecide-dolma1_7-90M) | [150M](https://huggingface.co/allenai/DataDecide-dolma1_7-150M) | [300M](https://huggingface.co/allenai/DataDecide-dolma1_7-300M) | [530M](https://huggingface.co/allenai/DataDecide-dolma1_7-530M) | [750M](https://huggingface.co/allenai/DataDecide-dolma1_7-750M) | [1B](https://huggingface.co/allenai/DataDecide-dolma1_7-1B) |
| Dolma1.7 (no code) | [4M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-code-4M) | [6M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-code-6M) | [8M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-code-8M) | [10M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-code-10M) | [14M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-code-14M) | [16M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-code-16M) | [20M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-code-20M) | [60M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-code-60M) | [90M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-code-90M) | [150M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-code-150M) | [300M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-code-300M) | [530M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-code-530M) | [750M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-code-750M) | [1B](https://huggingface.co/allenai/DataDecide-dolma1_7-no-code-1B) |
| Dolma1.7 (no math, code) | [4M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-math-code-4M) | [6M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-math-code-6M) | [8M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-math-code-8M) | [10M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-math-code-10M) | [14M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-math-code-14M) | [16M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-math-code-16M) | [20M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-math-code-20M) | [60M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-math-code-60M) | [90M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-math-code-90M) | [150M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-math-code-150M) | [300M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-math-code-300M) | [530M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-math-code-530M) | [750M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-math-code-750M) | [1B](https://huggingface.co/allenai/DataDecide-dolma1_7-no-math-code-1B) |
| Dolma1.7 (no Reddit) | [4M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-reddit-4M) | [6M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-reddit-6M) | [8M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-reddit-8M) | [10M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-reddit-10M) | [14M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-reddit-14M) | [16M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-reddit-16M) | [20M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-reddit-20M) | [60M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-reddit-60M) | [90M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-reddit-90M) | [150M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-reddit-150M) | [300M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-reddit-300M) | [530M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-reddit-530M) | [750M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-reddit-750M) | [1B](https://huggingface.co/allenai/DataDecide-dolma1_7-no-reddit-1B) |
| Dolma1.7 (no Flan) | [4M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-flan-4M) | [6M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-flan-6M) | [8M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-flan-8M) | [10M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-flan-10M) | [14M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-flan-14M) | [16M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-flan-16M) | [20M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-flan-20M) | [60M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-flan-60M) | [90M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-flan-90M) | [150M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-flan-150M) | [300M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-flan-300M) | [530M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-flan-530M) | [750M](https://huggingface.co/allenai/DataDecide-dolma1_7-no-flan-750M) | [1B](https://huggingface.co/allenai/DataDecide-dolma1_7-no-flan-1B) |
| Dolma1.6++ | [4M](https://huggingface.co/allenai/DataDecide-dolma1_6plus-4M) | [6M](https://huggingface.co/allenai/DataDecide-dolma1_6plus-6M) | [8M](https://huggingface.co/allenai/DataDecide-dolma1_6plus-8M) | [10M](https://huggingface.co/allenai/DataDecide-dolma1_6plus-10M) | [14M](https://huggingface.co/allenai/DataDecide-dolma1_6plus-14M) | [16M](https://huggingface.co/allenai/DataDecide-dolma1_6plus-16M) | [20M](https://huggingface.co/allenai/DataDecide-dolma1_6plus-20M) | [60M](https://huggingface.co/allenai/DataDecide-dolma1_6plus-60M) | [90M](https://huggingface.co/allenai/DataDecide-dolma1_6plus-90M) | [150M](https://huggingface.co/allenai/DataDecide-dolma1_6plus-150M) | [300M](https://huggingface.co/allenai/DataDecide-dolma1_6plus-300M) | [530M](https://huggingface.co/allenai/DataDecide-dolma1_6plus-530M) | [750M](https://huggingface.co/allenai/DataDecide-dolma1_6plus-750M) | [1B](https://huggingface.co/allenai/DataDecide-dolma1_6plus-1B) |
| C4 | [4M](https://huggingface.co/allenai/DataDecide-c4-4M) | [6M](https://huggingface.co/allenai/DataDecide-c4-6M) | [8M](https://huggingface.co/allenai/DataDecide-c4-8M) | [10M](https://huggingface.co/allenai/DataDecide-c4-10M) | [14M](https://huggingface.co/allenai/DataDecide-c4-14M) | [16M](https://huggingface.co/allenai/DataDecide-c4-16M) | [20M](https://huggingface.co/allenai/DataDecide-c4-20M) | [60M](https://huggingface.co/allenai/DataDecide-c4-60M) | [90M](https://huggingface.co/allenai/DataDecide-c4-90M) | [150M](https://huggingface.co/allenai/DataDecide-c4-150M) | [300M](https://huggingface.co/allenai/DataDecide-c4-300M) | [530M](https://huggingface.co/allenai/DataDecide-c4-530M) | [750M](https://huggingface.co/allenai/DataDecide-c4-750M) | [1B](https://huggingface.co/allenai/DataDecide-c4-1B) |
| FineWeb-Pro | [4M](https://huggingface.co/allenai/DataDecide-fineweb-pro-4M) | [6M](https://huggingface.co/allenai/DataDecide-fineweb-pro-6M) | [8M](https://huggingface.co/allenai/DataDecide-fineweb-pro-8M) | [10M](https://huggingface.co/allenai/DataDecide-fineweb-pro-10M) | [14M](https://huggingface.co/allenai/DataDecide-fineweb-pro-14M) | [16M](https://huggingface.co/allenai/DataDecide-fineweb-pro-16M) | [20M](https://huggingface.co/allenai/DataDecide-fineweb-pro-20M) | [60M](https://huggingface.co/allenai/DataDecide-fineweb-pro-60M) | [90M](https://huggingface.co/allenai/DataDecide-fineweb-pro-90M) | [150M](https://huggingface.co/allenai/DataDecide-fineweb-pro-150M) | [300M](https://huggingface.co/allenai/DataDecide-fineweb-pro-300M) | [530M](https://huggingface.co/allenai/DataDecide-fineweb-pro-530M) | [750M](https://huggingface.co/allenai/DataDecide-fineweb-pro-750M) | [1B](https://huggingface.co/allenai/DataDecide-fineweb-pro-1B) |
| FineWeb-Edu | [4M](https://huggingface.co/allenai/DataDecide-fineweb-edu-4M) | [6M](https://huggingface.co/allenai/DataDecide-fineweb-edu-6M) | [8M](https://huggingface.co/allenai/DataDecide-fineweb-edu-8M) | [10M](https://huggingface.co/allenai/DataDecide-fineweb-edu-10M) | [14M](https://huggingface.co/allenai/DataDecide-fineweb-edu-14M) | [16M](https://huggingface.co/allenai/DataDecide-fineweb-edu-16M) | [20M](https://huggingface.co/allenai/DataDecide-fineweb-edu-20M) | [60M](https://huggingface.co/allenai/DataDecide-fineweb-edu-60M) | [90M](https://huggingface.co/allenai/DataDecide-fineweb-edu-90M) | [150M](https://huggingface.co/allenai/DataDecide-fineweb-edu-150M) | [300M](https://huggingface.co/allenai/DataDecide-fineweb-edu-300M) | [530M](https://huggingface.co/allenai/DataDecide-fineweb-edu-530M) | [750M](https://huggingface.co/allenai/DataDecide-fineweb-edu-750M) | [1B](https://huggingface.co/allenai/DataDecide-fineweb-edu-1B) |
| Falcon | [4M](https://huggingface.co/allenai/DataDecide-falcon-4M) | [6M](https://huggingface.co/allenai/DataDecide-falcon-6M) | [8M](https://huggingface.co/allenai/DataDecide-falcon-8M) | [10M](https://huggingface.co/allenai/DataDecide-falcon-10M) | [14M](https://huggingface.co/allenai/DataDecide-falcon-14M) | [16M](https://huggingface.co/allenai/DataDecide-falcon-16M) | [20M](https://huggingface.co/allenai/DataDecide-falcon-20M) | [60M](https://huggingface.co/allenai/DataDecide-falcon-60M) | [90M](https://huggingface.co/allenai/DataDecide-falcon-90M) | [150M](https://huggingface.co/allenai/DataDecide-falcon-150M) | [300M](https://huggingface.co/allenai/DataDecide-falcon-300M) | [530M](https://huggingface.co/allenai/DataDecide-falcon-530M) | [750M](https://huggingface.co/allenai/DataDecide-falcon-750M) | [1B](https://huggingface.co/allenai/DataDecide-falcon-1B) |
| Falcon+CC | [4M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-4M) | [6M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-6M) | [8M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-8M) | [10M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-10M) | [14M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-14M) | [16M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-16M) | [20M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-20M) | [60M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-60M) | [90M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-90M) | [150M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-150M) | [300M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-300M) | [530M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-530M) | [750M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-750M) | [1B](https://huggingface.co/allenai/DataDecide-falcon-and-cc-1B) |
| Falcon+CC (QC 10%) | [4M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-10p-4M) | [6M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-10p-6M) | [8M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-10p-8M) | [10M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-10p-10M) | [14M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-10p-14M) | [16M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-10p-16M) | [20M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-10p-20M) | [60M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-10p-60M) | [90M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-10p-90M) | [150M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-10p-150M) | [300M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-10p-300M) | [530M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-10p-530M) | [750M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-10p-750M) | [1B](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-10p-1B) |
| Falcon+CC (QC 20%) | [4M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-20p-4M) | [6M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-20p-6M) | [8M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-20p-8M) | [10M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-20p-10M) | [14M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-20p-14M) | [16M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-20p-16M) | [20M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-20p-20M) | [60M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-20p-60M) | [90M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-20p-90M) | [150M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-20p-150M) | [300M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-20p-300M) | [530M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-20p-530M) | [750M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-20p-750M) | [1B](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-20p-1B) |
| Falcon+CC (QC Orig 10%) | [4M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-orig-10p-4M) | [6M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-orig-10p-6M) | [8M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-orig-10p-8M) | [10M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-orig-10p-10M) | [14M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-orig-10p-14M) | [16M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-orig-10p-16M) | [20M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-orig-10p-20M) | [60M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-orig-10p-60M) | [90M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-orig-10p-90M) | [150M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-orig-10p-150M) | [300M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-orig-10p-300M) | [530M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-orig-10p-530M) | [750M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-orig-10p-750M) | [1B](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-orig-10p-1B) |
| Falcon+CC (QC Tulu 10%) | [4M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-tulu-10p-4M) | [6M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-tulu-10p-6M) | [8M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-tulu-10p-8M) | [10M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-tulu-10p-10M) | [14M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-tulu-10p-14M) | [16M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-tulu-10p-16M) | [20M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-tulu-10p-20M) | [60M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-tulu-10p-60M) | [90M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-tulu-10p-90M) | [150M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-tulu-10p-150M) | [300M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-tulu-10p-300M) | [530M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-tulu-10p-530M) | [750M](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-tulu-10p-750M) | [1B](https://huggingface.co/allenai/DataDecide-falcon-and-cc-qc-tulu-10p-1B) |
| DCLM-Baseline | [4M](https://huggingface.co/allenai/DataDecide-dclm-baseline-4M) | [6M](https://huggingface.co/allenai/DataDecide-dclm-baseline-6M) | [8M](https://huggingface.co/allenai/DataDecide-dclm-baseline-8M) | [10M](https://huggingface.co/allenai/DataDecide-dclm-baseline-10M) | [14M](https://huggingface.co/allenai/DataDecide-dclm-baseline-14M) | [16M](https://huggingface.co/allenai/DataDecide-dclm-baseline-16M) | [20M](https://huggingface.co/allenai/DataDecide-dclm-baseline-20M) | [60M](https://huggingface.co/allenai/DataDecide-dclm-baseline-60M) | [90M](https://huggingface.co/allenai/DataDecide-dclm-baseline-90M) | [150M](https://huggingface.co/allenai/DataDecide-dclm-baseline-150M) | [300M](https://huggingface.co/allenai/DataDecide-dclm-baseline-300M) | [530M](https://huggingface.co/allenai/DataDecide-dclm-baseline-530M) | [750M](https://huggingface.co/allenai/DataDecide-dclm-baseline-750M) | [1B](https://huggingface.co/allenai/DataDecide-dclm-baseline-1B) |
| DCLM-Baseline (QC 7%, FW2) | [4M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw2-4M) | [6M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw2-6M) | [8M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw2-8M) | [10M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw2-10M) | [14M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw2-14M) | [16M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw2-16M) | [20M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw2-20M) | [60M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw2-60M) | [90M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw2-90M) | [150M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw2-150M) | [300M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw2-300M) | [530M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw2-530M) | [750M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw2-750M) | [1B](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw2-1B) |
| DCLM-Baseline (QC 7%, FW3) | [4M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw3-4M) | [6M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw3-6M) | [8M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw3-8M) | [10M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw3-10M) | [14M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw3-14M) | [16M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw3-16M) | [20M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw3-20M) | [60M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw3-60M) | [90M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw3-90M) | [150M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw3-150M) | [300M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw3-300M) | [530M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw3-530M) | [750M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw3-750M) | [1B](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-7p-fw3-1B) |
| DCLM-Baseline (QC FW 3%) | [4M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-3p-4M) | [6M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-3p-6M) | [8M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-3p-8M) | [10M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-3p-10M) | [14M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-3p-14M) | [16M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-3p-16M) | [20M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-3p-20M) | [60M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-3p-60M) | [90M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-3p-90M) | [150M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-3p-150M) | [300M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-3p-300M) | [530M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-3p-530M) | [750M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-3p-750M) | [1B](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-3p-1B) |
| DCLM-Baseline (QC FW 10%) | [4M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-10p-4M) | [6M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-10p-6M) | [8M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-10p-8M) | [10M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-10p-10M) | [14M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-10p-14M) | [16M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-10p-16M) | [20M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-10p-20M) | [60M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-10p-60M) | [90M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-10p-90M) | [150M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-10p-150M) | [300M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-10p-300M) | [530M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-10p-530M) | [750M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-10p-750M) | [1B](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-fw-10p-1B) |
| DCLM-Baseline (QC 10%) | [4M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-10p-4M) | [6M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-10p-6M) | [8M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-10p-8M) | [10M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-10p-10M) | [14M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-10p-14M) | [16M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-10p-16M) | [20M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-10p-20M) | [60M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-10p-60M) | [90M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-10p-90M) | [150M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-10p-150M) | [300M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-10p-300M) | [530M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-10p-530M) | [750M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-10p-750M) | [1B](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-10p-1B) |
| DCLM-Baseline (QC 20%) | [4M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-20p-4M) | [6M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-20p-6M) | [8M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-20p-8M) | [10M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-20p-10M) | [14M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-20p-14M) | [16M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-20p-16M) | [20M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-20p-20M) | [60M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-20p-60M) | [90M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-20p-90M) | [150M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-20p-150M) | [300M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-20p-300M) | [530M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-20p-530M) | [750M](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-20p-750M) | [1B](https://huggingface.co/allenai/DataDecide-dclm-baseline-qc-20p-1B) |
| DCLM-Baseline 25% / Dolma 75% | [4M](https://huggingface.co/allenai/DataDecide-dclm-baseline-25p-dolma1.7-75p-4M) | [6M](https://huggingface.co/allenai/DataDecide-dclm-baseline-25p-dolma1.7-75p-6M) | [8M](https://huggingface.co/allenai/DataDecide-dclm-baseline-25p-dolma1.7-75p-8M) | [10M](https://huggingface.co/allenai/DataDecide-dclm-baseline-25p-dolma1.7-75p-10M) | [14M](https://huggingface.co/allenai/DataDecide-dclm-baseline-25p-dolma1.7-75p-14M) | [16M](https://huggingface.co/allenai/DataDecide-dclm-baseline-25p-dolma1.7-75p-16M) | [20M](https://huggingface.co/allenai/DataDecide-dclm-baseline-25p-dolma1.7-75p-20M) | [60M](https://huggingface.co/allenai/DataDecide-dclm-baseline-25p-dolma1.7-75p-60M) | [90M](https://huggingface.co/allenai/DataDecide-dclm-baseline-25p-dolma1.7-75p-90M) | [150M](https://huggingface.co/allenai/DataDecide-dclm-baseline-25p-dolma1.7-75p-150M) | [300M](https://huggingface.co/allenai/DataDecide-dclm-baseline-25p-dolma1.7-75p-300M) | [530M](https://huggingface.co/allenai/DataDecide-dclm-baseline-25p-dolma1.7-75p-530M) | [750M](https://huggingface.co/allenai/DataDecide-dclm-baseline-25p-dolma1.7-75p-750M) | [1B](https://huggingface.co/allenai/DataDecide-dclm-baseline-25p-dolma1.7-75p-1B) |
| DCLM-Baseline 50% / Dolma 50% | [4M](https://huggingface.co/allenai/DataDecide-dclm-baseline-50p-dolma1.7-50p-4M) | [6M](https://huggingface.co/allenai/DataDecide-dclm-baseline-50p-dolma1.7-50p-6M) | [8M](https://huggingface.co/allenai/DataDecide-dclm-baseline-50p-dolma1.7-50p-8M) | [10M](https://huggingface.co/allenai/DataDecide-dclm-baseline-50p-dolma1.7-50p-10M) | [14M](https://huggingface.co/allenai/DataDecide-dclm-baseline-50p-dolma1.7-50p-14M) | [16M](https://huggingface.co/allenai/DataDecide-dclm-baseline-50p-dolma1.7-50p-16M) | [20M](https://huggingface.co/allenai/DataDecide-dclm-baseline-50p-dolma1.7-50p-20M) | [60M](https://huggingface.co/allenai/DataDecide-dclm-baseline-50p-dolma1.7-50p-60M) | [90M](https://huggingface.co/allenai/DataDecide-dclm-baseline-50p-dolma1.7-50p-90M) | [150M](https://huggingface.co/allenai/DataDecide-dclm-baseline-50p-dolma1.7-50p-150M) | [300M](https://huggingface.co/allenai/DataDecide-dclm-baseline-50p-dolma1.7-50p-300M) | [530M](https://huggingface.co/allenai/DataDecide-dclm-baseline-50p-dolma1.7-50p-530M) | [750M](https://huggingface.co/allenai/DataDecide-dclm-baseline-50p-dolma1.7-50p-750M) | [1B](https://huggingface.co/allenai/DataDecide-dclm-baseline-50p-dolma1.7-50p-1B) |
| DCLM-Baseline 75% / Dolma 25% | [4M](https://huggingface.co/allenai/DataDecide-dclm-baseline-75p-dolma1.7-25p-4M) | [6M](https://huggingface.co/allenai/DataDecide-dclm-baseline-75p-dolma1.7-25p-6M) | [8M](https://huggingface.co/allenai/DataDecide-dclm-baseline-75p-dolma1.7-25p-8M) | [10M](https://huggingface.co/allenai/DataDecide-dclm-baseline-75p-dolma1.7-25p-10M) | [14M](https://huggingface.co/allenai/DataDecide-dclm-baseline-75p-dolma1.7-25p-14M) | [16M](https://huggingface.co/allenai/DataDecide-dclm-baseline-75p-dolma1.7-25p-16M) | [20M](https://huggingface.co/allenai/DataDecide-dclm-baseline-75p-dolma1.7-25p-20M) | [60M](https://huggingface.co/allenai/DataDecide-dclm-baseline-75p-dolma1.7-25p-60M) | [90M](https://huggingface.co/allenai/DataDecide-dclm-baseline-75p-dolma1.7-25p-90M) | [150M](https://huggingface.co/allenai/DataDecide-dclm-baseline-75p-dolma1.7-25p-150M) | [300M](https://huggingface.co/allenai/DataDecide-dclm-baseline-75p-dolma1.7-25p-300M) | [530M](https://huggingface.co/allenai/DataDecide-dclm-baseline-75p-dolma1.7-25p-530M) | [750M](https://huggingface.co/allenai/DataDecide-dclm-baseline-75p-dolma1.7-25p-750M) | [1B](https://huggingface.co/allenai/DataDecide-dclm-baseline-75p-dolma1.7-25p-1B) |
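The checkpoint links above all follow a single repo-ID pattern, so sweeps over recipes and sizes can be scripted. A minimal sketch (the `datadecide_repo` helper name is ours; actually loading a checkpoint requires `transformers` and network access):

```python
def datadecide_repo(recipe: str, size: str) -> str:
    """Build the Hub repo ID for a given data recipe and model size."""
    return f"allenai/DataDecide-{recipe}-{size}"

repo = datadecide_repo("dclm-baseline", "1B")
print(repo)  # allenai/DataDecide-dclm-baseline-1B

# To actually load the checkpoint (downloads weights):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(repo)
```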
## Data
| Source / Recipe | Description |
|----------------------------------------|-------------|
| **Dolma1.7** *Original, No code, No math/code, No Reddit, No Flan* | A 2.3T-token corpus (Dolma 1.7; [Soldaini et al., 2024](https://arxiv.org/abs/2402.00159)) sampling common LM sources for open research. We ablate the code, math/code, Reddit, or Flan subsets. |
| **Dolma1.6++** *Original* | Dolma 1.6 plus additional sources from Dolma 1.7: RedPajama’s arxiv subset, openwebmath, algebraic stack, flan, starcoder, falcon. |
| **C4** *Original* | The C4 dataset ([Raffel et al., 2019](https://arxiv.org/abs/1910.10683)) as prepared in Dolma 1.7, heuristically filtered from the April 2019 Common Crawl. |
| **FineWeb-Pro** *Original* | The FineWeb Pro corpus ([Zhou et al., 2024](https://arxiv.org/abs/2409.17115)), featuring model-driven data cleaning on FineWeb. |
| **FineWeb-Edu** *Original* | The deduplicated FineWeb-Edu subset of SmolLM-Corpus ([Ben Allal et al., 2024](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus)), focused on educational web pages. |
| **Falcon** *Original* | The Falcon RefinedWeb corpus ([Penedo et al., 2023](https://api.semanticscholar.org/CorpusID:259063761)) in Dolma 1.7, derived from Common Crawl through June 2023 and more aggressively filtered/deduplicated than C4. |
| **Falcon+CC** *Original, QC 10%, QC 20%, QC Orig 10%, QC Tulu 10%* | Falcon and Dolma 1.7’s Common Crawl. We quality filter to the top 10% or 20% of documents with a reproduced or the original [Li et al. (2024)](https://arxiv.org/abs/2406.11794) filter, or retrain the filter on a pre-release version of Tulu-v3 ([Lambert et al., 2024](https://arxiv.org/abs/2411.15124)). |
| **DCLM-Baseline** *Original, QC 7% FW2, QC 7% FW3, QC FW 10%, QC 10%, QC 20%* | A SOTA Common Crawl corpus using best ablated deduplication, cleaning heuristics, and quality filter. We quality filter to top 7% of DCLM classified documents and further take 2+ or 3+ scores with FineWeb-edu classifier; or filter to top 3% or 10% with FineWeb-edu classifier; or take top 10% or 20% with reproduced DCLM classifier. |
| *λ%* **DCLM-Baseline** *+ 1 – λ%* **Dolma1.7** | Fractional combinations of Dolma1.7 and DCLM-Baseline mixing different proportions of the two datasets for λ ∈ {25%, 50%, 75%}. |
### Dataset Description
- **Developed by:** Allen Institute for AI (Ai2)
- **Language(s) (NLP):** English
- **License:** This dataset is licensed under ODC-BY and intended for research and educational use in accordance with [Ai2's Responsible Use Guidelines](https://allenai.org/responsible-use)
- **Contact:** Technical inquiries: `[email protected]`. Press: `[email protected]`
### Links
- **Repository:** [https://github.com/allenai/DataDecide](https://github.com/allenai/DataDecide)
- **Paper:** [https://allenai.org/papers/datadecide](https://allenai.org/papers/datadecide)
## Citation
**BibTeX:**
```
@article{MagnussonDataDecide2025,
title={{DataDecide: How to Predict Best Pretraining Data with Small Experiments}},
author={Ian Magnusson and Nguyen Tai and Ben Bogin and David Heineman and Jena Hwang and Luca Soldaini and Akshita Bhagia and Jiacheng Liu and Dirk Groeneveld and Oyvind Tafjord and Noah A. Smith and Pang Wei Koh and Jesse Dodge},
year={2025},
journal={arXiv preprint},
}
```
|
zacbrld/MNLP_M3_rag_documents_45
|
zacbrld
|
2025-06-06T16:38:31Z
| 74 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-06T16:38:27Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: source
dtype: string
- name: tokens
dtype: int64
splits:
- name: train
num_bytes: 19940227
num_examples: 74983
download_size: 11932891
dataset_size: 19940227
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mytestdpo/grpo_n4_bz512_60
|
mytestdpo
|
2025-03-19T12:47:13Z
| 15 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-19T12:47:11Z
| 0 |
---
dataset_info:
features:
- name: problem
dtype: string
- name: answers
sequence: string
- name: gt
dtype: string
- name: data_source
dtype: string
splits:
- name: train
num_bytes: 45114132
num_examples: 1517
download_size: 15226195
dataset_size: 45114132
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hugginguwe/uwes_medk_dataset
|
hugginguwe
|
2025-05-14T15:45:40Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-14T13:30:12Z
| 0 |
---
dataset_info:
features:
- name: filename
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2038247
num_examples: 72
download_size: 1176882
dataset_size: 2038247
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "uwes_medk_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
HHS-Official/active-bacterial-core-surveillance-abcs-streptococ
|
HHS-Official
|
2025-05-07T20:09:16Z
| 0 | 0 |
[
"language:en",
"license:odbl",
"size_categories:1K<n<10K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"hhs",
"cdc",
"abcs",
"bactfacts"
] |
[] |
2025-05-07T20:09:15Z
| 0 |
---
language:
- en
pretty_name: Active Bacterial Core surveillance (ABCs) Streptococcus pneumoniae
tags:
- hhs
- cdc
- abcs
- bactfacts
license: odbl
---
# Active Bacterial Core surveillance (ABCs) Streptococcus pneumoniae
## Description
ABCs is an ongoing surveillance program that began in 1997. <a href="https://www.cdc.gov/abcs/reports-findings/surv-reports.html">ABCs reports</a> describe the ABCs case definition and the specific methodology used to calculate rates and estimated numbers in the United States for each bacterium by year. The methods, <a href="https://www.cdc.gov/abcs/methodology/surv-pop.html">surveillance areas</a>, and <a href="https://www.cdc.gov/abcs/methodology/surv-pop.html">laboratory isolate collection areas</a> have changed over time.
Additionally, the way missing data are taken into account changed in 2010: unknown values, previously distributed based on known values of cases by site, are now handled with multiple imputation using a sequential regression imputation method.
Given these changes over time, trends should be interpreted with caution.
<ul> <li> <a href="http://www.cdc.gov/abcs/methodology/index.html">Methodology</a>
Find details about surveillance population, case determination, surveillance evaluation, and more. </li> <li> <a href="http://www.cdc.gov/abcs/reports-findings/index.html">Reports and Findings</a>
Get official interpretations from reports and publications created from ABCs data. </li> </ul>
## Dataset Details
- **Publisher**: Centers for Disease Control and Prevention
- **Last Modified**: 2024-09-03
- **Contact**: Active Bacterial Core surveillance ([email protected])
## Source
Original data can be found at: https://data.cdc.gov/d/en3s-hzsr
## Usage
You can load this dataset using:
```python
from datasets import load_dataset
dataset = load_dataset('HHS-Official/active-bacterial-core-surveillance-abcs-streptococ')
```
## License
This dataset is licensed under http://opendefinition.org/licenses/odc-odbl/
|
dgambettaphd/D_gmm2_gen2_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST
|
dgambettaphd
|
2025-05-08T01:39:41Z
| 0 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-08T01:39:38Z
| 0 |
---
dataset_info:
features:
- name: id_doc
dtype: int64
- name: text
dtype: string
- name: dataset
dtype: string
- name: gen
dtype: int64
- name: synt
dtype: int64
- name: MPP
dtype: float64
splits:
- name: train
num_bytes: 11754668
num_examples: 18000
download_size: 7005774
dataset_size: 11754668
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chonkie-ai/finechonk
|
chonkie-ai
|
2025-05-07T09:02:51Z
| 0 | 0 |
[
"region:us"
] |
[] |
2025-05-07T08:49:59Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
- name: chunked_text
dtype: string
- name: num_chunks
dtype: int64
- name: avg_chunk_size
dtype: float64
- name: max_chunk_size
dtype: int64
- name: min_chunk_size
dtype: int64
- name: std_chunk_size
dtype: float64
splits:
- name: train
num_bytes: 23430663674
num_examples: 1038426
download_size: 13514490883
dataset_size: 23430663674
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_e53f3977-c90c-4a9b-88f3-0580d7a383ff
|
argilla-internal-testing
|
2024-10-30T10:05:37Z
| 18 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-30T10:05:36Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1454
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
John6666/sdxl-backup-202502
|
John6666
|
2025-02-19T05:06:12Z
| 60 | 0 |
[
"language:en",
"license:other",
"region:us"
] |
[] |
2025-02-19T04:54:05Z
| 0 |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
---
|
Racktic/mmmu
|
Racktic
|
2025-06-24T13:57:09Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-24T13:56:24Z
| 0 |
---
dataset_info:
features:
- name: images
sequence: image
- name: data_source
dtype: string
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: ability
dtype: string
- name: reward_model
struct:
- name: ground_truth
dtype: string
- name: style
dtype: string
- name: extra_info
struct:
- name: difficulty
dtype: string
- name: index
dtype: string
splits:
- name: train
num_bytes: 330171072.0
num_examples: 894
download_size: 328790435
dataset_size: 330171072.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
zera09/sampled_visionarena_chat_100
|
zera09
|
2025-03-21T07:44:20Z
| 11 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-21T07:43:58Z
| 0 |
---
dataset_info:
features:
- name: images
list:
- name: bytes
dtype: binary
- name: path
dtype: string
- name: conversation_id
dtype: string
- name: model
dtype: string
- name: num_turns
dtype: int64
- name: conversation
list:
list:
- name: content
dtype: string
- name: role
dtype: string
- name: language
dtype: string
- name: user_id
dtype: int64
- name: tstamp
dtype: float64
- name: is_preset
dtype: bool
- name: preset_dataset
dtype: string
- name: categories
struct:
- name: captioning
dtype: bool
- name: code
dtype: bool
- name: creative_writing
dtype: bool
- name: diagram
dtype: bool
- name: entity_recognition
dtype: bool
- name: homework
dtype: bool
- name: humor
dtype: bool
- name: is_code
dtype: bool
- name: ocr
dtype: bool
- name: refusal
dtype: bool
splits:
- name: train
num_bytes: 481963884
num_examples: 1190
download_size: 409844938
dataset_size: 481963884
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
skymizer/Llama3.1-base-tokenized-dolma-v1_7-50B
|
skymizer
|
2025-01-15T17:16:22Z
| 104 | 0 |
[
"size_categories:10M<n<100M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-15T13:59:20Z
| 0 |
---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: position_ids
sequence: int64
- name: length
dtype: int64
splits:
- name: train
num_bytes: 947748827022
num_examples: 70783733
download_size: 205952677727
dataset_size: 947748827022
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
shylee/eval_DP_so100_gauze_scratch_ckpt010000
|
shylee
|
2025-04-29T12:27:47Z
| 26 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] |
[
"robotics"
] |
2025-04-29T12:27:39Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 1,
"total_frames": 853,
"total_tasks": 1,
"total_videos": 3,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.FrontCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.TopCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.WristCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
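The `data_path` and `video_path` entries in `info.json` above are Python format templates keyed by episode chunk and index. A minimal sketch of expanding one (the `episode_parquet` helper is ours):

```python
info = {
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "chunks_size": 1000,
}

def episode_parquet(info: dict, episode_index: int) -> str:
    """Resolve the parquet file path for a given episode index."""
    chunk = episode_index // info["chunks_size"]  # 1000 episodes per chunk
    return info["data_path"].format(episode_chunk=chunk, episode_index=episode_index)

print(episode_parquet(info, 0))  # data/chunk-000/episode_000000.parquet
```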
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
kejian/sciriff-tulu-none-500
|
kejian
|
2024-12-08T03:43:15Z
| 15 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-08T03:43:13Z
| 0 |
---
dataset_info:
features:
- name: dataset
dtype: string
- name: id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 92214633
num_examples: 18895
download_size: 46821540
dataset_size: 92214633
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nvidia/DLC-Bench
|
nvidia
|
2025-04-24T17:43:52Z
| 159 | 5 |
[
"task_categories:image-to-text",
"language:en",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"arxiv:2504.16072",
"region:us",
"image"
] |
[
"image-to-text"
] |
2025-04-21T05:21:54Z
| 2 |
---
language:
- en
task_categories:
- image-to-text
tags:
- image
---
# Describe Anything: Detailed Localized Image and Video Captioning
**NVIDIA, UC Berkeley, UCSF**
[Long Lian](https://tonylian.com), [Yifan Ding](https://research.nvidia.com/person/yifan-ding), [Yunhao Ge](https://gyhandy.github.io/), [Sifei Liu](https://sifeiliu.net/), [Hanzi Mao](https://hanzimao.me/), [Boyi Li](https://sites.google.com/site/boyilics/home), [Marco Pavone](https://research.nvidia.com/person/marco-pavone), [Ming-Yu Liu](https://mingyuliu.net/), [Trevor Darrell](https://people.eecs.berkeley.edu/~trevor/), [Adam Yala](https://www.adamyala.org/), [Yin Cui](https://ycui.me/)
[[Paper](https://arxiv.org/abs/2504.16072)] | [[Code](https://github.com/NVlabs/describe-anything)] | [[Project Page](https://describe-anything.github.io/)] | [[Video](https://describe-anything.github.io/#video)] | [[HuggingFace Demo](https://huggingface.co/spaces/nvidia/describe-anything-model-demo)] | [[Model/Benchmark/Datasets](https://huggingface.co/collections/nvidia/describe-anything-680825bb8f5e41ff0785834c)] | [[Citation](#citation)]
# Dataset Card for DLC-Bench
Dataset for detailed localized captioning benchmark (DLC-Bench).
## LICENSE
[CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en)
## Intended Usage
This dataset is intended to demonstrate and facilitate the understanding and usage of detailed localized captioning models. It should primarily be used for research purposes.
## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
|
alpharomercoma/qna-japanese
|
alpharomercoma
|
2024-10-20T11:24:55Z
| 32 | 0 |
[
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-20T11:23:59Z
| 0 |
---
license: apache-2.0
---
|
kevin017/kl_pool_full_name_small
|
kevin017
|
2025-04-03T05:17:29Z
| 15 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-03T05:17:26Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: tokens
sequence:
sequence: int64
splits:
- name: train
num_bytes: 64519
num_examples: 1000
download_size: 27817
dataset_size: 64519
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Geralt-Targaryen/webnovels
|
Geralt-Targaryen
|
2025-04-02T08:31:10Z
| 19 | 0 |
[
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-20T13:30:57Z
| 0 |
---
license: apache-2.0
---
16,546 English webnovels that have been cleaned, deduplicated, and decontaminated with respect to the following benchmarks based on n-gram overlap:
- GLUE (dev set of SST-2, CoLA, QQP, WNLI, RTE, QNLI, MNLI; test set of MRPC)
- SIQA, PIQA, QASC, CSQA, HellaSWAG (all dev set)
- CONLL 2003
- BLIMP
- [MAIN](https://main.leibniz-zas.de/en/main-materials/main-materials/)
- BoolQ (dev set)
- WinoGrande (dev set)
- ANLI (test set)
- ARC easy and challenge (test set)
- RACE middle and high (test set)
- MMLU (dev, val, and test sets)
- MATH, GSM8K (test set)
- HumanEval (test set)
- GPQA (diamond)
### Dataset Statistics
Total number of samples: 16,546.
Size of downloaded parquet files: 2.8G.
### Filtered Version
There is a model-filtered version in the filter branch, including 13,713 samples (2.3G).
Qwen2.5-32B-Instruct is used to generate language-quality annotations (on a scale of 1 to 5) for 250K C4 samples, and a RoBERTa-large classifier is trained with regression on these annotations. Any document receiving a score of 1 or 2 from the classifier is removed. The remaining documents are accompanied by their scores.
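Decontamination by n-gram overlap, as described above, can be sketched roughly as follows (our own illustration with an assumed n of 8; not the exact filter used for this dataset):

```python
def ngrams(text: str, n: int = 8) -> set:
    """Whitespace-tokenize and return the set of word n-grams."""
    toks = text.split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def is_contaminated(doc: str, benchmark_texts: list, n: int = 8) -> bool:
    """Flag a document if it shares any n-gram with a benchmark text."""
    bench = set()
    for t in benchmark_texts:
        bench |= ngrams(t, n)
    return bool(ngrams(doc, n) & bench)
```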
|
iabd05/mongodata_train
|
iabd05
|
2025-04-14T18:21:11Z
| 29 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-14T18:21:06Z
| 0 |
---
dataset_info:
features:
- name: student_id
dtype: float64
- name: scores
list:
- name: score
dtype: float64
- name: type
dtype: string
- name: class_id
dtype: float64
splits:
- name: train
num_bytes: 7360000.0
num_examples: 80000
download_size: 3780075
dataset_size: 7360000.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
irasalsabila/realworld_audio_jv
|
irasalsabila
|
2025-04-08T11:32:21Z
| 29 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-08T09:07:17Z
| 0 |
---
dataset_info:
features:
- name: filename
dtype: string
- name: label
dtype: string
splits:
- name: test
num_bytes: 5248
num_examples: 50
download_size: 5812
dataset_size: 5248
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
mlfoundations-dev/reasoning_shard_48
|
mlfoundations-dev
|
2025-03-05T21:24:17Z
| 14 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-05T21:24:16Z
| 0 |
---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: problem_number
dtype: int64
- name: set_number
dtype: int64
splits:
- name: train
num_bytes: 41006.886854632
num_examples: 42
download_size: 22859
dataset_size: 41006.886854632
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
french-datasets/L3-IA-2025_Questions1
|
french-datasets
|
2025-06-03T20:37:12Z
| 0 | 0 |
[
"language:fra",
"region:us"
] |
[] |
2025-06-03T20:36:01Z
| 0 |
---
language:
- fra
viewer: false
---
This repository is empty; it was created to improve the discoverability of the dataset [L3-IA-2025/Questions1](https://huggingface.co/datasets/L3-IA-2025/Questions1).
|
nashrah18/indiantranslator
|
nashrah18
|
2025-02-21T11:01:38Z
| 21 | 1 |
[
"task_categories:translation",
"language:en",
"language:hi",
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"biology",
"finance",
"legal",
"medical"
] |
[
"translation"
] |
2025-02-16T17:40:17Z
| 0 |
---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: english_text
dtype: string
- name: hindi_translation
dtype: string
- name: topic
dtype: string
splits:
- name: train
num_bytes: 38603
num_examples: 236
download_size: 15185
dataset_size: 38603
task_categories:
- translation
language:
- en
- hi
tags:
- biology
- finance
- legal
- medical
pretty_name: indiantranslator
---
|
laolaorkk/collect-data-Omni-Numina_filtered
|
laolaorkk
|
2025-02-15T15:00:13Z
| 14 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-15T15:00:07Z
| 0 |
---
dataset_info:
features:
- name: solution
dtype: string
- name: answer
dtype: string
- name: question
dtype: string
- name: math_type
dtype: string
- name: source_type
dtype: string
- name: metadata
dtype: string
splits:
- name: train
num_bytes: 148980332
num_examples: 104593
download_size: 70348177
dataset_size: 148980332
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
medmac01/test_audio_tts_offset_rmv
|
medmac01
|
2025-01-02T23:04:45Z
| 30 | 1 |
[
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-02T23:04:43Z
| 0 |
---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
splits:
- name: train
num_bytes: 5608789.0
num_examples: 143
download_size: 11041676
dataset_size: 5608789.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
haorandai/Dec30_PGD_Bicycle_Orange_Epsilon0.05_10samples_1constraint
|
haorandai
|
2024-12-30T22:51:14Z
| 15 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-30T22:51:12Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 880667.0
num_examples: 11
download_size: 882265
dataset_size: 880667.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rhesis/Telecom-Chatbot-Access-to-Online-Content-Harmless
|
rhesis
|
2025-02-25T18:23:34Z
| 8 | 0 |
[
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-14T14:19:31Z
| 0 |
---
dataset_info:
name: Access to Online Content Harmless
license: CC-BY-4.0
language: en
tags:
- rhesis
- test-sets
- chatbot
- telecom
---
# Dataset Card for Access to Online Content Harmless
## Description
This test set was created specifically to evaluate the performance of a telecom chatbot, with a focus on the reliability of its responses. It consists primarily of harmless scenarios in which users seek assistance with accessing online content. By assessing the chatbot's ability to understand these requests and provide accurate information, the test set helps ensure the chatbot can handle such customer queries effectively and reliably.
## Structure
The dataset includes five key columns:
- **Behavior**: The performance dimension evaluated (Reliability, Robustness, or Compliance).
- **Topic**: The topic validated as part of the prompt.
- **Category**: The category of the telecom-related task, such as claims, customer service, or policy information.
- **Prompt**: The actual test prompt provided to the chatbot.
- **Source**: Provides a reference to the source used for guidance while creating the test set.
## Disclaimer
Some test cases may contain sensitive, challenging, or potentially upsetting content. These cases are included to ensure thorough and realistic assessments. Users should review test cases carefully and exercise discretion when utilizing them.
## Integration
In order to easily integrate a Rhesis test set into your existing development pipeline, you can make use of the [Rhesis SDK](https://github.com/rhesis-ai/rhesis-sdk).
### Prerequisites
1. Create an API key by signing up at [app.rhesis.ai](https://app.rhesis.ai).
2. Install the Rhesis SDK:
```bash
pip install rhesis-sdk
```
3. Use the following Python snippet to access the dataset:
```python
from rhesis.entities import TestSet
# Initialize and load the test set
test_set = TestSet(id="access-to-online-content-harmless")
df = test_set.load() # Returns a pandas DataFrame
# Alternatively, you can download the CSV file directly
test_set.download() # Downloads to current directory as test_set_{id}.csv
```
Using the Rhesis SDK, you get access to the entire collection of Rhesis test sets, including this one. You can also create your own test sets.
For further details on how to integrate the SDK into your workflow, refer to the [Rhesis SDK documentation](https://docs.rhesis.ai/).
## Community
We welcome contributions and discussions from the community! Here are the different ways you can get involved:
- GitHub: Report issues or contribute to the Rhesis SDK on our [GitHub repository](https://github.com/rhesis-ai/rhesis-sdk).
- Discord: Join our [Discord server](https://discord.rhesis.ai) to connect with other users and developers.
- Email: Reach out to us at [[email protected]](mailto:[email protected]) for support or inquiries.
## Sources
The following sources were used in creating this dataset:
No citations available.
## Citation
If you use this dataset, please cite:
```
@inproceedings{rhesis,
title={Rhesis: A Testbench for Evaluating LLM Applications - Access to Online Content Harmless},
author={Rhesis},
year={2025}
}
```
|
Dataset Card for Hugging Face Hub Dataset Cards
This dataset consists of dataset cards for datasets hosted on the Hugging Face Hub. The cards are created by the community and describe the datasets they accompany. The dataset is updated on a daily basis and covers publicly available datasets on the Hugging Face Hub.
This dataset is made available to help support users wanting to work with a large number of Dataset Cards from the Hub. We hope that this dataset will help support research in the area of Dataset Cards and their use but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.
Dataset Details
Uses
There are a number of potential uses for this dataset including:
- text mining to find common themes in dataset cards
- analysis of the dataset card format/content
- topic modelling of dataset cards
- training language models on the dataset cards
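For the analyses listed above, a common first step is splitting each card into its YAML front matter and markdown body. A minimal sketch, assuming PyYAML is available:

```python
import yaml  # PyYAML


def parse_card(card_text):
    """Return (metadata_dict, body) from a dataset card string.

    Dataset cards open with a YAML front-matter block delimited by `---`
    lines, followed by the markdown body.
    """
    if card_text.startswith("---"):
        _, front, body = card_text.split("---", 2)
        return yaml.safe_load(front), body.strip()
    return {}, card_text.strip()


card = """---
license: mit
language:
- en
---
# My Dataset
Some description."""

meta, body = parse_card(card)
```

Real cards can contain `---` inside the body or lack front matter entirely, so a production parser should be more defensive than this sketch.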
Out-of-Scope Use
[More Information Needed]
Dataset Structure
This dataset has a single split.
Dataset Creation
Curation Rationale
The dataset was created to assist people in working with dataset cards. In particular it was created to support research in the area of dataset cards and their use. It is possible to use the Hugging Face Hub API or client library to download dataset cards and this option may be preferable if you have a very specific use case or require a different format.
Source Data
The source data is README.md
files for datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the dataset directory.
Data Collection and Processing
The data is downloaded using a CRON job on a daily basis.
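As an illustration, such a daily collection job could be scheduled with a crontab entry of the following shape (the script path and log path are hypothetical):

```
# Run the card-collection script once a day at 02:00 UTC
0 2 * * * /usr/bin/python3 /opt/scripts/collect_dataset_cards.py >> /var/log/card_sync.log 2>&1
```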
Who are the source data producers?
The source data producers are the creators of the dataset cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the dataset card in this repository although this information can be gathered from the Hugging Face Hub API.
Annotations
There are no additional annotations in this dataset beyond the dataset card content.
Annotation process
N/A
Who are the annotators?
N/A
Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of dataset cards to contain personal or sensitive information, it is possible that some dataset cards may contain this information. Dataset cards may also link to websites or email addresses.
Bias, Risks, and Limitations
Dataset cards are created by the community and we do not have any control over the content of the dataset cards. We do not review the content of the dataset cards and we do not make any claims about the accuracy of the information in the dataset cards. Some dataset cards will themselves discuss bias and sometimes this is done by providing examples of bias in either the training data or the responses provided by the dataset. As a result this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the dataset cards, some dataset cards may include images. Some of these images may not be suitable for all audiences.
Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
Dataset Card Authors
Dataset Card Contact