Column summary for the listing (min/max are value ranges; for strings and lists they are length ranges):

| Column | Type | Min | Max |
|---|---|---|---|
| datasetId | large_string (length) | 6 | 118 |
| author | large_string (length) | 2 | 42 |
| last_modified | large_string (date) | 2021-04-29 15:34:29 | 2025-09-30 12:15:29 |
| downloads | int64 | 0 | 3.97M |
| likes | int64 | 0 | 7.74k |
| tags | large_list (length) | 1 | 7.92k |
| task_categories | large_list (length) | 0 | 48 |
| createdAt | large_string (date) | 2022-03-02 23:29:22 | 2025-09-30 12:08:01 |
| trending_score | float64 | 0 | 64 |
| card | large_string (length) | 31 | 1M |
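The rows below follow this schema. A minimal sketch for inspecting such a listing with pandas, assuming it has been exported locally as a Parquet file (the file name here is hypothetical):

```python
import pandas as pd

# Hypothetical local export of the listing described by the schema above.
df = pd.read_parquet("dataset_cards.parquet")

# Columns match the schema: datasetId, author, last_modified, downloads,
# likes, tags, task_categories, createdAt, trending_score, card.
print(df.dtypes)

# Example: the ten most-downloaded datasets in the listing.
print(df.nlargest(10, "downloads")[["datasetId", "downloads", "likes"]])
```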
**timaeus/pubmed_abstracts_max_loss_delta_ablation_l1h2**
author: timaeus · last_modified: 2025-03-18T09:07:54Z · downloads: 14 · likes: 0
tags: [ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: [] · createdAt: 2025-03-18T09:07:50Z · trending_score: 0
card:
---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: meta
    struct:
    - name: pile_set_name
      dtype: string
  splits:
  - name: train
    num_bytes: 22387859
    num_examples: 10000
  download_size: 12019523
  dataset_size: 22387859
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
**rweics5cs7/exo7-realworld-db-combined-deg-enhanced-text-v3**
author: rweics5cs7 · last_modified: 2025-09-27T11:57:32Z · downloads: 26 · likes: 0
tags: [ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: [] · createdAt: 2025-09-27T11:57:18Z · trending_score: 0
card:
---
dataset_info:
- config_name: corpus
  features:
  - name: corpus-id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 3024380
    num_examples: 3000
  download_size: 1934201
  dataset_size: 3024380
- config_name: qrels
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  splits:
  - name: train
    num_bytes: 657740
    num_examples: 14915
  download_size: 167879
  dataset_size: 657740
- config_name: queries
  features:
  - name: query-id
    dtype: string
  - name: query
    dtype: string
  - name: answer
    dtype: string
  splits:
  - name: train
    num_bytes: 2265614
    num_examples: 14915
  download_size: 1188402
  dataset_size: 2265614
configs:
- config_name: corpus
  data_files:
  - split: train
    path: corpus/train-*
- config_name: qrels
  data_files:
  - split: train
    path: qrels/train-*
- config_name: queries
  data_files:
  - split: train
    path: queries/train-*
---
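The card above defines a retrieval-style layout. A minimal sketch, not part of the card, for loading the three configurations with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

repo = "rweics5cs7/exo7-realworld-db-combined-deg-enhanced-text-v3"

# Each config in the card exposes a single "train" split.
corpus = load_dataset(repo, "corpus", split="train")
queries = load_dataset(repo, "queries", split="train")
qrels = load_dataset(repo, "qrels", split="train")

# qrels pairs a query-id with a relevant corpus-id.
print(queries[0]["query-id"], queries[0]["query"])
print(qrels[0]["query-id"], "->", qrels[0]["corpus-id"])
```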
**haorandai/Dec_Banana_Orange_50samples_0constraint**
author: haorandai · last_modified: 2024-12-23T07:43:30Z · downloads: 17 · likes: 0
tags: [ "size_categories:n<1K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: [] · createdAt: 2024-12-23T07:43:28Z · trending_score: 0
card:
---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 4634451.0
    num_examples: 50
  download_size: 4635182
  dataset_size: 4634451.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
**math-ai/TemplateGSM**
author: math-ai · last_modified: 2025-03-08T00:44:59Z · downloads: 734 · likes: 15
tags: [ "task_categories:question-answering", "language:en", "license:cc-by-4.0", "size_categories:10M<n<100M", "modality:tabular", "modality:text", "arxiv:2411.18104", "region:us", "mathematical-reasoning", "reasoning", "finetuning", "pretraining", "llm" ]
task_categories: [ "question-answering" ] · createdAt: 2024-02-01T13:22:35Z · trending_score: 1
card:
---
language:
- en
license: cc-by-4.0
size_categories:
- 1B<n<10B
task_categories:
- question-answering
pretty_name: TemplateGSM
configs:
- config_name: templategsm-7473-1k
  data_files:
  - split: train
    path:
    - data/1k/0000-0999/*.jsonl
    - data/1k/1000-1999/*.jsonl
    - data/1k/2000-3999/*.jsonl
    - data/1k/4000-7472/*.jsonl
  default: true
- config_name: templategsm-4000-1k
  data_files:
  - split: train
    path:
    - data/1k/0000-0999/*.jsonl
    - data/1k/1000-1999/*.jsonl
    - data/1k/2000-3999/*.jsonl
- config_name: templategsm-2000-1k
  data_files:
  - split: train
    path:
    - data/1k/0000-0999/*.jsonl
    - data/1k/1000-1999/*.jsonl
- config_name: templategsm-1000-1k
  data_files:
  - split: train
    path:
    - data/1k/0000-0999/*.jsonl
tags:
- mathematical-reasoning
- reasoning
- finetuning
- pretraining
- llm
---

# Training and Evaluating Language Models with Template-based Data Generation

## TemplateGSM Dataset

The TemplateGSM dataset is a large-scale collection of over 7 million grade school math problems (with potential for unlimited generation), each paired with both code-based and natural language solutions. Designed to advance mathematical reasoning in language models, it presents a diverse range of challenges for assessing and improving model capabilities in solving mathematical problems. The dataset is built from 7473 problem templates, providing a rich and varied benchmark for question-answering tasks within the mathematical domain.

Project homepage: https://templatemath.github.io
arXiv link: https://arxiv.org/abs/2411.18104
Paper link: https://arxiv.org/pdf/2411.18104
GitHub homepage: https://github.com/iiis-ai/TemplateMath

## Objective

TemplateGSM aims to serve as a benchmark for:

- Assessing language models' proficiency in mathematical reasoning and symbolic computation.
- Training and fine-tuning language models to improve their performance in generating accurate and logically sound mathematical solutions to provided questions.
- Encouraging the development of models capable of understanding and solving complex mathematical problems, thereby bridging the gap between natural language processing and mathematical reasoning.

## Dataset Structure

TemplateGSM is organized into configurations based on the number of templates used:

### Configurations

- **templategsm-1000-1k**: 1k problems generated from each of the first 1000 templates (templates 0000-0999), totaling over 1 million individual problems.
- **templategsm-2000-1k**: 1k problems generated from each of the first 2000 templates (templates 0000-1999), for a dataset of 2 million problems.
- **templategsm-4000-1k**: 1k problems generated from each of the first 4000 templates (templates 0000-3999), for a dataset of 4 million problems.
- **templategsm-7473-1k**: 1k problems generated from each of the 7473 templates (templates 0000-7472), for a dataset of over 7.47 million problems.

### Data Fields

Each problem in the dataset includes the following fields:

- `problem`: The problem statement.
- `solution_code`: A commented solution that solves the problem in Python.
- `result`: The final answer to the problem.
- `solution_wocode`: The solution in natural language, without the use of code.
- `source`: The data source the template was constructed from and the seed used in problem generation, e.g., `gsm8k-train-round2-seed42`.
- `template_id`: The template from which the problem was generated, e.g., `0`.
- `problem_id`: An index unique to each problem within its template.

## How to Use

```yaml
configs:
- config_name: templategsm-7473-1k
  data_files:
  - split: train
    path:
    - data/1k/0000-0999/*.jsonl
    - data/1k/1000-1999/*.jsonl
    - data/1k/2000-3999/*.jsonl
    - data/1k/4000-7472/*.jsonl
  default: true
- config_name: templategsm-4000-1k
  data_files:
  - split: train
    path:
    - data/1k/0000-0999/*.jsonl
    - data/1k/1000-1999/*.jsonl
    - data/1k/2000-3999/*.jsonl
- config_name: templategsm-2000-1k
  data_files:
  - split: train
    path:
    - data/1k/0000-0999/*.jsonl
    - data/1k/1000-1999/*.jsonl
- config_name: templategsm-1000-1k
  data_files:
  - split: train
    path:
    - data/1k/0000-0999/*.jsonl
```

To access the TemplateGSM dataset, you can use the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load a specific configuration
dataset = load_dataset("math-ai/TemplateGSM", "templategsm-7473-1k")  # or any valid config_name
```

## License

This dataset is made available under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.

## Citation

If you utilize Template-based Data Generation (TDG) or the TemplateGSM dataset in your research or application, please consider citing it:

```bibtex
@article{zhang2024training,
  title={Training and Evaluating Language Models with Template-based Data Generation},
  author={Zhang, Yifan and Luo, Yifan and Yuan, Yang and Yao, Andrew Chi-Chih},
  journal={arXiv preprint arXiv:2411.18104},
  year={2024},
}
```
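A small additional sketch (not from the card) that streams a single example to inspect the documented fields without downloading a full multi-million-problem split:

```python
from datasets import load_dataset

# Stream one example rather than downloading the whole configuration.
ds = load_dataset("math-ai/TemplateGSM", "templategsm-1000-1k", split="train", streaming=True)
example = next(iter(ds))

# Fields documented in the card.
print(example["problem"])
print(example["solution_wocode"])  # natural-language solution
print(example["solution_code"])    # commented Python solution
print(example["result"], example["template_id"], example["source"])
```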
**SeppeV/joke_gen_mistral_online_dpo_bert_judge_jo**
author: SeppeV · last_modified: 2025-01-06T19:13:34Z · downloads: 17 · likes: 0
tags: [ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: [] · createdAt: 2025-01-06T19:13:32Z · trending_score: 0
card:
---
dataset_info:
  features:
  - name: jokeText
    dtype: string
  - name: userId
    dtype: int64
  splits:
  - name: train
    num_bytes: 52526
    num_examples: 125
  download_size: 37001
  dataset_size: 52526
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
**mlfoundations-dev/oh_v1.3_evol_instruct_x.5**
author: mlfoundations-dev · last_modified: 2024-12-08T18:53:28Z · downloads: 15 · likes: 0
tags: [ "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: [] · createdAt: 2024-12-08T18:52:26Z · trending_score: 0
card:
---
dataset_info:
  features:
  - name: conversations
    list:
    - name: from
      dtype: string
    - name: value
      dtype: string
  - name: shard_id
    dtype: string
  splits:
  - name: train
    num_bytes: 1553802640.0
    num_examples: 901073
  download_size: 857654111
  dataset_size: 1553802640.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
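A minimal sketch, not part of the card, for peeking at the `conversations` structure; streaming avoids downloading the full ~1.5 GB split:

```python
from datasets import load_dataset

# "conversations" is a list of {"from": ..., "value": ...} turns.
ds = load_dataset("mlfoundations-dev/oh_v1.3_evol_instruct_x.5", split="train", streaming=True)
row = next(iter(ds))

for turn in row["conversations"]:
    print(f'{turn["from"]}: {turn["value"][:80]}')
```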
**DavidCBaines/ebible_corpus**
author: DavidCBaines · last_modified: 2025-05-10T07:51:15Z · downloads: 47 · likes: 0
tags:
[ "language:aai", "language:aak", "language:aau", "language:aaz", "language:abc", "language:abt", "language:abx", "language:aby", "language:acf", "language:acr", "language:acu", "language:adz", "language:aer", "language:aey", "language:agd", "language:agg", "language:agm", "language:agn", "language:agr", "language:agt", "language:agu", "language:ahr", "language:aia", "language:aii", "language:ake", "language:alp", "language:alq", "language:alw", "language:aly", "language:ame", "language:amf", "language:amh", "language:amk", "language:amm", "language:amn", "language:amo", "language:amp", "language:amr", "language:amu", "language:amx", "language:anh", "language:anv", "language:aoi", "language:aoj", "language:aom", "language:aon", "language:apb", "language:ape", "language:apn", "language:apr", "language:apu", "language:apw", "language:apz", "language:arb", "language:are", "language:arl", "language:arn", "language:arp", "language:asj", "language:asm", "language:aso", "language:ata", "language:atb", "language:atd", "language:atg", "language:att", "language:auc", "language:aui", "language:auy", "language:avt", "language:awb", "language:awk", "language:awx", "language:azb", "language:azg", "language:azz", "language:bao", "language:bba", "language:bbb", "language:bbr", "language:bch", "language:bco", "language:bdd", "language:bdv", "language:bea", "language:bef", "language:bel", "language:ben", "language:beo", "language:beu", "language:bfz", "language:bgc", "language:bgg", "language:bgs", "language:bgt", "language:bhd", "language:bhg", "language:bhi", "language:bhl", "language:bht", "language:bhu", "language:big", "language:bjk", "language:bjp", "language:bjr", "language:bjv", "language:bjz", "language:bkd", "language:bki", "language:bkq", "language:bkw", "language:bkx", "language:bla", "language:blw", "language:blz", "language:bmh", "language:bmk", "language:bmr", "language:bmu", "language:bnp", "language:boa", "language:bod", "language:boj", "language:bon", "language:box", "language:bpr", "language:bps", "language:bpx", "language:bqc", "language:bqp", "language:bre", "language:bsj", "language:bsn", "language:bsp", "language:bss", "language:buk", "language:bus", "language:bvd", "language:bvr", "language:bwo", "language:bxh", "language:byr", "language:byx", "language:bzd", "language:bzh", "language:bzj", "language:caa", "language:cab", "language:cac", "language:caf", "language:cak", "language:cao", "language:cap", "language:car", "language:cav", "language:cax", "language:cbc", "language:cbi", "language:cbk", "language:cbr", "language:cbs", "language:cbt", "language:cbu", "language:cbv", "language:cco", "language:ceb", "language:cek", "language:ces", "language:cgc", "language:cha", "language:chd", "language:chf", "language:chk", "language:chq", "language:chz", "language:cjo", "language:cjv", "language:ckb", "language:cle", "language:clu", "language:cme", "language:cmn", "language:cni", "language:cnl", "language:cnt", "language:cof", "language:con", "language:cop", "language:cot", "language:cpa", "language:cpb", "language:cpc", "language:cpu", "language:cpy", "language:crn", "language:crx", "language:cso", "language:csy", "language:cta", "language:cth", "language:ctp", "language:ctu", "language:cub", "language:cuc", "language:cui", "language:cuk", "language:cut", "language:cux", "language:cwe", "language:cya", "language:daa", "language:dad", "language:dah", "language:dan", "language:dao", "language:ded", "language:deu", "language:dgc", "language:dgr", "language:dgz", "language:dhg", "language:dif", 
"language:dik", "language:dji", "language:djk", "language:djr", "language:dob", "language:dop", "language:dov", "language:dso", "language:dwr", "language:dww", "language:dwy", "language:ebk", "language:ekk", "language:eko", "language:emi", "language:emp", "language:ena", "language:eng", "language:enq", "language:epo", "language:eri", "language:ese", "language:esk", "language:etr", "language:ewe", "language:faa", "language:fad", "language:fai", "language:far", "language:ffm", "language:fin", "language:for", "language:fra", "language:fue", "language:fuf", "language:fuh", "language:gah", "language:gai", "language:gam", "language:gaq", "language:gaw", "language:gaz", "language:gdn", "language:gdr", "language:geb", "language:gfk", "language:ghs", "language:glk", "language:gmv", "language:gng", "language:gnn", "language:gnw", "language:gof", "language:grc", "language:gub", "language:guh", "language:gui", "language:guj", "language:gul", "language:gum", "language:gun", "language:guo", "language:gup", "language:gux", "language:gvc", "language:gvf", "language:gvn", "language:gvs", "language:gwi", "language:gym", "language:gyr", "language:hat", "language:hau", "language:haw", "language:hbo", "language:hch", "language:heb", "language:heg", "language:hin", "language:hix", "language:hla", "language:hlt", "language:hmo", "language:hns", "language:hop", "language:hot", "language:hoy", "language:hrv", "language:hto", "language:hub", "language:hui", "language:hun", "language:hus", "language:huu", "language:huv", "language:hvn", "language:ian", "language:ibo", "language:ign", "language:ikk", "language:ikw", "language:ilo", "language:imo", "language:inb", "language:ind", "language:ino", "language:iou", "language:ipi", "language:isl", "language:isn", "language:ita", "language:iws", "language:ixl", "language:jac", "language:jae", "language:jao", "language:jic", "language:jid", "language:jiv", "language:jni", "language:jpn", "language:juy", "language:jvn", "language:kan", "language:kaq", "language:kbc", "language:kbh", "language:kbm", "language:kbq", "language:kca", "language:kdc", "language:kde", "language:kdl", "language:kek", "language:ken", "language:kew", "language:kfw", "language:kgf", "language:kgk", "language:kgp", "language:khs", "language:khz", "language:kij", "language:kik", "language:kiw", "language:kiz", "language:kje", "language:kjs", "language:kkc", "language:kkl", "language:klt", "language:klv", "language:kmg", "language:kmh", "language:kmk", "language:kmo", "language:kms", "language:kmu", "language:kne", "language:knf", "language:knj", "language:knv", "language:kor", "language:kos", "language:kpf", "language:kpg", "language:kpj", "language:kpr", "language:kpw", "language:kpx", "language:kqa", "language:kqc", "language:kqf", "language:kql", "language:kqw", "language:ksd", "language:ksj", "language:ksr", "language:ktm", "language:kto", "language:kud", "language:kue", "language:kup", "language:kvg", "language:kvn", "language:kwd", "language:kwf", "language:kwi", "language:kwj", "language:kxv", "language:kxw", "language:kyc", "language:kyf", "language:kyg", "language:kyq", "language:kyz", "language:kze", "language:lac", "language:lat", "language:lbb", "language:lbk", "language:lbm", "language:lcm", "language:leu", "language:lex", "language:lgl", "language:lid", "language:lif", "language:lin", "language:lit", "language:llg", "language:lug", "language:luo", "language:lww", "language:maa", "language:maj", "language:mal", "language:mam", "language:maq", "language:mar", "language:mau", "language:mav", 
"language:maz", "language:mbb", "language:mbc", "language:mbh", "language:mbj", "language:mbl", "language:mbs", "language:mbt", "language:mca", "language:mcb", "language:mcd", "language:mcf", "language:mco", "language:mcp", "language:mcq", "language:mcr", "language:mdy", "language:med", "language:mee", "language:mek", "language:meq", "language:met", "language:meu", "language:mgc", "language:mgh", "language:mgw", "language:mhl", "language:mib", "language:mic", "language:mie", "language:mig", "language:mih", "language:mil", "language:mio", "language:mir", "language:mit", "language:miz", "language:mjc", "language:mkj", "language:mkl", "language:mkn", "language:mks", "language:mle", "language:mlh", "language:mlp", "language:mmo", "language:mmx", "language:mna", "language:mni", "language:mop", "language:mox", "language:mph", "language:mpj", "language:mpm", "language:mpp", "language:mps", "language:mpt", "language:mpx", "language:mqb", "language:mqj", "language:msb", "language:msc", "language:msk", "language:msm", "language:msy", "language:mti", "language:mto", "language:mux", "language:muy", "language:mva", "language:mvn", "language:mwc", "language:mwe", "language:mwf", "language:mwp", "language:mxb", "language:mxm", "language:mxp", "language:mxq", "language:mxt", "language:mya", "language:myk", "language:myu", "language:myw", "language:myy", "language:mzz", "language:nab", "language:naf", "language:nag", "language:nak", "language:nal", "language:nas", "language:nbq", "language:nca", "language:nce", "language:nch", "language:ncj", "language:ncl", "language:ncu", "language:nde", "language:ndg", "language:ndj", "language:nfa", "language:ngp", "language:ngu", "language:nhe", "language:nhg", "language:nhi", "language:nho", "language:nhr", "language:nhu", "language:nhw", "language:nhy", "language:nif", "language:nii", "language:nin", "language:nko", "language:nld", "language:nlg", "language:nmw", "language:nna", "language:nnq", "language:noa", "language:nob", "language:nop", "language:not", "language:nou", "language:npi", "language:npl", "language:nrz", "language:nsn", "language:nss", "language:ntj", "language:ntp", "language:ntu", "language:nuq", "language:nuy", "language:nvm", "language:nwi", "language:nya", "language:nys", "language:obo", "language:okv", "language:omb", "language:omw", "language:ong", "language:ons", "language:ood", "language:opm", "language:ory", "language:ote", "language:otm", "language:otn", "language:otq", "language:ots", "language:pab", "language:pad", "language:pah", "language:pan", "language:pao", "language:peg", "language:pes", "language:pib", "language:pio", "language:pir", "language:piu", "language:pjt", "language:pls", "language:plu", "language:pma", "language:poe", "language:poh", "language:poi", "language:pol", "language:pon", "language:por", "language:poy", "language:ppo", "language:prf", "language:pri", "language:ptp", "language:ptu", "language:pwg", "language:qub", "language:quc", "language:quf", "language:quh", "language:qul", "language:qup", "language:qvc", "language:qve", "language:qvh", "language:qvm", "language:qvn", "language:qvs", "language:qvw", "language:qvz", "language:qwh", "language:qxh", "language:qxn", "language:qxo", "language:rai", "language:reg", "language:rgu", "language:rkb", "language:rmc", "language:rmn", "language:rmy", "language:ron", "language:roo", "language:rop", "language:row", "language:rro", "language:ruf", "language:rug", "language:rus", "language:rwo", "language:sab", "language:san", "language:sbe", "language:sbk", "language:sbs", 
"language:sby", "language:sch", "language:seh", "language:sey", "language:sgb", "language:sgz", "language:shj", "language:shp", "language:sim", "language:sja", "language:sll", "language:smk", "language:sna", "language:snc", "language:snn", "language:snp", "language:snx", "language:sny", "language:som", "language:soq", "language:soy", "language:spa", "language:spl", "language:spm", "language:spp", "language:sps", "language:spy", "language:sri", "language:srm", "language:srn", "language:srp", "language:srq", "language:ssd", "language:ssg", "language:ssx", "language:stp", "language:stw", "language:sua", "language:sue", "language:sus", "language:suz", "language:swe", "language:swh", "language:swp", "language:sxb", "language:tac", "language:taj", "language:tam", "language:tav", "language:taw", "language:tbc", "language:tbf", "language:tbg", "language:tbl", "language:tbo", "language:tbz", "language:tca", "language:tcs", "language:tcz", "language:tdt", "language:tdx", "language:tee", "language:tel", "language:ter", "language:tet", "language:tew", "language:tfr", "language:tgj", "language:tgk", "language:tgl", "language:tgo", "language:tgp", "language:tha", "language:tif", "language:tim", "language:tiw", "language:tiy", "language:tke", "language:tkr", "language:tku", "language:tlf", "language:tmd", "language:tna", "language:tnc", "language:tnk", "language:tnn", "language:tnp", "language:toc", "language:tod", "language:tof", "language:toj", "language:ton", "language:too", "language:top", "language:tos", "language:tpa", "language:tpi", "language:tpt", "language:tpz", "language:trc", "language:tsn", "language:tsw", "language:ttc", "language:tte", "language:tuc", "language:tue", "language:tuf", "language:tuo", "language:tur", "language:tvk", "language:tvt", "language:twi", "language:txq", "language:txu", "language:tzj", "language:tzo", "language:ubr", "language:ubu", "language:udu", "language:uig", "language:ukr", "language:uli", "language:ulk", "language:unx", "language:upv", "language:ura", "language:urb", "language:urd", "language:uri", "language:uro", "language:urt", "language:urw", "language:usa", "language:usp", "language:uvh", "language:uvl", "language:vid", "language:vie", "language:viv", "language:vmy", "language:waj", "language:wal", "language:wap", "language:wat", "language:wbi", "language:wbp", "language:wed", "language:wer", "language:wim", "language:wiu", "language:wiv", "language:wmt", "language:wmw", "language:wnc", "language:wnu", "language:wol", "language:wos", "language:wrk", "language:wro", "language:wrs", "language:wsk", "language:wuv", "language:xav", "language:xbi", "language:xed", "language:xla", "language:xnj", "language:xnn", "language:xon", "language:xsi", "language:xtd", "language:xtm", "language:yaa", "language:yad", "language:yal", "language:yao", "language:yap", "language:yaq", "language:yby", "language:ycn", "language:yka", "language:yle", "language:yml", "language:yom", "language:yon", "language:yor", "language:yrb", "language:yre", "language:yss", "language:yuj", "language:yut", "language:yuw", "language:yva", "language:zaa", "language:zab", "language:zac", "language:zad", "language:zai", "language:zaj", "language:zam", "language:zao", "language:zpi", "language:zar", "language:zas", "language:zat", "language:zav", "language:zaw", "language:zca", "language:zga", "language:zia", "language:ziw", "language:zlm", "language:zos", "language:zpc", "language:zpl", "language:zpm", "language:zpo", "language:zpq", "language:zpu", "language:zpv", "language:zpz", "language:zsr", 
"language:ztq", "language:zty", "language:zyp", "license:unknown", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "bible", "translation", "multilingual", "religious-text", "parallel-corpus", "low-resource-languages" ]
task_categories: [] · createdAt: 2025-05-01T19:57:37Z · trending_score: 0
card:
--- pretty_name: eBible_corpus tags: - bible - translation - multilingual - religious-text - parallel-corpus - low-resource-languages license: unknown language: - aai - aak - aau - aaz - abc - abt - abx - aby - acf - acr - acu - adz - aer - aey - agd - agg - agm - agn - agr - agt - agu - ahr - aia - aii - ake - alp - alq - alw - aly - ame - amf - amh - amk - amm - amn - amo - amp - amr - amu - amx - anh - anv - aoi - aoj - aom - aon - apb - ape - apn - apr - apu - apw - apz - arb - are - arl - arn - arp - asj - asm - aso - ata - atb - atd - atg - att - auc - aui - auy - avt - awb - awk - awx - azb - azg - azz - bao - bba - bbb - bbr - bch - bco - bdd - bdv - bea - bef - bel - ben - beo - beu - bfz - bgc - bgg - bgs - bgt - bhd - bhg - bhi - bhl - bht - bhu - big - bjk - bjp - bjr - bjv - bjz - bkd - bki - bkq - bkw - bkx - bla - blw - blz - bmh - bmk - bmr - bmu - bnp - boa - bod - boj - bon - box - bpr - bps - bpx - bqc - bqp - bre - bsj - bsn - bsp - bss - buk - bus - bvd - bvr - bwo - bxh - byr - byx - bzd - bzh - bzj - caa - cab - cac - caf - cak - cao - cap - car - cav - cax - cbc - cbi - cbk - cbr - cbs - cbt - cbu - cbv - cco - ceb - cek - ces - cgc - cha - chd - chf - chk - chq - chz - cjo - cjv - ckb - cle - clu - cme - cmn - cni - cnl - cnt - cof - con - cop - cot - cpa - cpb - cpc - cpu - cpy - crn - crx - cso - csy - cta - cth - ctp - ctu - cub - cuc - cui - cuk - cut - cux - cwe - cya - daa - dad - dah - dan - dao - ded - deu - dgc - dgr - dgz - dhg - dif - dik - dji - djk - djr - dob - dop - dov - dso - dwr - dww - dwy - ebk - ekk - eko - emi - emp - ena - eng - enq - epo - eri - ese - esk - etr - ewe - faa - fad - fai - far - ffm - fin - for - fra - fue - fuf - fuh - gah - gai - gam - gaq - gaw - gaz - gdn - gdr - geb - gfk - ghs - glk - gmv - gng - gnn - gnw - gof - grc - gub - guh - gui - guj - gul - gum - gun - guo - gup - gux - gvc - gvf - gvn - gvs - gwi - gym - gyr - hat - hau - haw - hbo - hch - heb - heg - hin - hix - hla - hlt - hmo - hns - hop - hot - hoy - hrv - hto - hub - hui - hun - hus - huu - huv - hvn - ian - ibo - ign - ikk - ikw - ilo - imo - inb - ind - ino - iou - ipi - isl - isn - ita - iws - ixl - jac - jae - jao - jic - jid - jiv - jni - jpn - juy - jvn - kan - kaq - kbc - kbh - kbm - kbq - kca - kdc - kde - kdl - kek - ken - kew - kfw - kgf - kgk - kgp - khs - khz - kij - kik - kiw - kiz - kje - kjs - kkc - kkl - klt - klv - kmg - kmh - kmk - kmo - kms - kmu - kne - knf - knj - knv - kor - kos - kpf - kpg - kpj - kpr - kpw - kpx - kqa - kqc - kqf - kql - kqw - ksd - ksj - ksr - ktm - kto - kud - kue - kup - kvg - kvn - kwd - kwf - kwi - kwj - kxv - kxw - kyc - kyf - kyg - kyq - kyz - kze - lac - lat - lbb - lbk - lbm - lcm - leu - lex - lgl - lid - lif - lin - lit - llg - lug - luo - lww - maa - maj - mal - mam - maq - mar - mau - mav - maz - mbb - mbc - mbh - mbj - mbl - mbs - mbt - mca - mcb - mcd - mcf - mco - mcp - mcq - mcr - mdy - med - mee - mek - meq - met - meu - mgc - mgh - mgw - mhl - mib - mic - mie - mig - mih - mil - mio - mir - mit - miz - mjc - mkj - mkl - mkn - mks - mle - mlh - mlp - mmo - mmx - mna - mni - mop - mox - mph - mpj - mpm - mpp - mps - mpt - mpx - mqb - mqj - msb - msc - msk - msm - msy - mti - mto - mux - muy - mva - mvn - mwc - mwe - mwf - mwp - mxb - mxm - mxp - mxq - mxt - mya - myk - myu - myw - myy - mzz - nab - naf - nag - nak - nal - nas - nbq - nca - nce - nch - ncj - ncl - ncu - nde - ndg - ndj - nfa - ngp - ngu - nhe - nhg - nhi - nho - nhr - nhu - nhw - nhy - nif - nii - nin - nko - nld - nlg - nmw - nna - 
nnq - noa - nob - nop - not - nou - npi - npl - nrz - nsn - nss - ntj - ntp - ntu - nuq - nuy - nvm - nwi - nya - nys - obo - okv - omb - omw - ong - ons - ood - opm - ory - ote - otm - otn - otq - ots - pab - pad - pah - pan - pao - peg - pes - pib - pio - pir - piu - pjt - pls - plu - pma - poe - poh - poi - pol - pon - por - poy - ppo - prf - pri - ptp - ptu - pwg - qub - quc - quf - quh - qul - qup - qvc - qve - qvh - qvm - qvn - qvs - qvw - qvz - qwh - qxh - qxn - qxo - rai - reg - rgu - rkb - rmc - rmn - rmy - ron - roo - rop - row - rro - ruf - rug - rus - rwo - sab - san - sbe - sbk - sbs - sby - sch - seh - sey - sgb - sgz - shj - shp - sim - sja - sll - smk - sna - snc - snn - snp - snx - sny - som - soq - soy - spa - spl - spm - spp - sps - spy - sri - srm - srn - srp - srq - ssd - ssg - ssx - stp - stw - sua - sue - sus - suz - swe - swh - swp - sxb - tac - taj - tam - tav - taw - tbc - tbf - tbg - tbl - tbo - tbz - tca - tcs - tcz - tdt - tdx - tee - tel - ter - tet - tew - tfr - tgj - tgk - tgl - tgo - tgp - tha - tif - tim - tiw - tiy - tke - tkr - tku - tlf - tmd - tna - tnc - tnk - tnn - tnp - toc - tod - tof - toj - ton - too - top - tos - tpa - tpi - tpt - tpz - trc - tsn - tsw - ttc - tte - tuc - tue - tuf - tuo - tur - tvk - tvt - twi - txq - txu - tzj - tzo - ubr - ubu - udu - uig - ukr - uli - ulk - unx - upv - ura - urb - urd - uri - uro - urt - urw - usa - usp - uvh - uvl - vid - vie - viv - vmy - waj - wal - wap - wat - wbi - wbp - wed - wer - wim - wiu - wiv - wmt - wmw - wnc - wnu - wol - wos - wrk - wro - wrs - wsk - wuv - xav - xbi - xed - xla - xnj - xnn - xon - xsi - xtd - xtm - yaa - yad - yal - yao - yap - yaq - yby - ycn - yka - yle - yml - yom - yon - yor - yrb - yre - yss - yuj - yut - yuw - yva - zaa - zab - zac - zad - zai - zaj - zam - zao - zpi - zar - zas - zat - zav - zaw - zca - zga - zia - ziw - zlm - zos - zpc - zpl - zpm - zpo - zpq - zpu - zpv - zpz - zsr - ztq - zty - zyp --- # Dataset Card for eBible Corpus ## Dataset Summary The eBible Corpus is a large multilingual parallel corpus containing Bible translations sourced from eBible.org. It features text from many low-resource languages and 891 languages in total. This dataset is designed to facilitate research in machine translation, cross-lingual alignment, and linguistic analysis, particularly for low-resource languages. The dataset consists of two main files: 1. `bible_corpus.parquet`: Contains the verse text aligned by a common verse reference (`book`, `chapter`, `verse`). Each column represents a specific translation identified by its `translationId`. 2. `metadata.csv`: Provides detailed metadata for each translation included in the Parquet file, such as language names, copyright information, license, and source details. The total compressed size of the dataset files is approximately 1GB. ## Supported Tasks and Leaderboards This dataset can be used for various NLP tasks, including: * **Machine Translation:** Especially for low-resource language pairs. * **Cross-Lingual Word Alignment:** Identifying corresponding words or phrases across translations. * **Linguistic Analysis:** Studying translation strategies, language evolution, or typological features across a wide range of languages. ## Languages The dataset covers hundreds of languages. The `metadata.csv` file contains details about each translation's language information (including `languageCode`, `languageName`, `languageNameInEnglish`). 
## Dataset Structure

### Data Instances

**`bible_corpus.parquet`**

An example row looks like:

```json
{
  "book": "GEN",
  "chapter": 1,
  "verse": 1,
  "eng-engESV": "In the beginning, God created the heavens and the earth.",
  "fra-frLSG": "Au commencement, Dieu créa les cieux et la terre.",
  "spa-spaRV1909": "EN el principio crió Dios los cielos y la tierra.",
  "...": "..."
}
```
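A minimal sketch, not part of the card, for querying the verse-aligned table with pandas; it assumes a local copy of `bible_corpus.parquet` and the translation columns shown in the example row:

```python
import pandas as pd

# Assumes a local copy of the corpus file named in the card.
df = pd.read_parquet("bible_corpus.parquet")

# Rows are aligned on (book, chapter, verse); every other column is one
# translation, named by its translationId.
row = df[(df["book"] == "GEN") & (df["chapter"] == 1) & (df["verse"] == 1)]
print(row[["eng-engESV", "fra-frLSG"]].iloc[0].to_dict())
```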
**mteb/CQADupstackAndroidRetrieval-Fa**
author: mteb · last_modified: 2025-05-06T09:41:15Z · downloads: 0 · likes: 0
tags: [ "region:us" ]
task_categories: [] · createdAt: 2025-05-06T09:41:02Z · trending_score: 0
card:
---
dataset_info:
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  - name: title
    dtype: string
  splits:
  - name: test
    num_bytes: 22610167
    num_examples: 22998
  download_size: 10997289
  dataset_size: 22610167
- config_name: qrels
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: int64
  splits:
  - name: test
    num_bytes: 43411
    num_examples: 1696
  download_size: 19480
  dataset_size: 43411
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: test
    num_bytes: 73650
    num_examples: 699
  download_size: 39682
  dataset_size: 73650
configs:
- config_name: corpus
  data_files:
  - split: test
    path: corpus/test-*
- config_name: qrels
  data_files:
  - split: test
    path: qrels/test-*
- config_name: queries
  data_files:
  - split: test
    path: queries/test-*
---
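A minimal sketch, not part of the card, for loading the three configurations:

```python
from datasets import load_dataset

repo = "mteb/CQADupstackAndroidRetrieval-Fa"

# All three configs carry a single "test" split.
corpus = load_dataset(repo, "corpus", split="test")
queries = load_dataset(repo, "queries", split="test")
qrels = load_dataset(repo, "qrels", split="test")

# A qrels row links a query-id to a corpus-id with a relevance score.
print(qrels[0])
```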
**gmm0820/so100_test3**
author: gmm0820 · last_modified: 2024-12-11T07:45:36Z · downloads: 44 · likes: 0
tags: [ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot", "so100", "tutorial" ]
task_categories: [ "robotics" ] · createdAt: 2024-12-11T07:44:17Z · trending_score: 0
card:
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
  data_files: data/*/*.parquet
---

This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).

## Dataset Description

- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0

## Dataset Structure

[meta/info.json](meta/info.json):

```json
{
  "codebase_version": "v2.0",
  "robot_type": "so100",
  "total_episodes": 10,
  "total_frames": 7481,
  "total_tasks": 1,
  "total_videos": 20,
  "total_chunks": 1,
  "chunks_size": 1000,
  "fps": 30,
  "splits": { "train": "0:10" },
  "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
  "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
  "features": {
    "action": {
      "dtype": "float32",
      "shape": [6],
      "names": ["main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper"]
    },
    "observation.state": {
      "dtype": "float32",
      "shape": [6],
      "names": ["main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper"]
    },
    "observation.images.laptop": {
      "dtype": "video",
      "shape": [480, 640, 3],
      "names": ["height", "width", "channels"],
      "info": {
        "video.fps": 30.0,
        "video.height": 480,
        "video.width": 640,
        "video.channels": 3,
        "video.codec": "av1",
        "video.pix_fmt": "yuv420p",
        "video.is_depth_map": false,
        "has_audio": false
      }
    },
    "observation.images.phone": {
      "dtype": "video",
      "shape": [480, 640, 3],
      "names": ["height", "width", "channels"],
      "info": {
        "video.fps": 30.0,
        "video.height": 480,
        "video.width": 640,
        "video.channels": 3,
        "video.codec": "av1",
        "video.pix_fmt": "yuv420p",
        "video.is_depth_map": false,
        "has_audio": false
      }
    },
    "timestamp": { "dtype": "float32", "shape": [1], "names": null },
    "frame_index": { "dtype": "int64", "shape": [1], "names": null },
    "episode_index": { "dtype": "int64", "shape": [1], "names": null },
    "index": { "dtype": "int64", "shape": [1], "names": null },
    "task_index": { "dtype": "int64", "shape": [1], "names": null }
  }
}
```

## Citation

**BibTeX:**

```bibtex
[More Information Needed]
```
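A minimal sketch, not part of the card, for reading one episode's Parquet file directly; the file name is derived from the `data_path` template and the chunk/episode counts in `meta/info.json`:

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# File name follows the data_path template in meta/info.json
# (chunk 0, episode 0).
path = hf_hub_download(
    repo_id="gmm0820/so100_test3",
    filename="data/chunk-000/episode_000000.parquet",
    repo_type="dataset",
)
episode = pd.read_parquet(path)
print(episode[["timestamp", "frame_index", "episode_index"]].head())
```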
**argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_7279ac59-b780-4fc7-8b5c-fc6b66284a5a**
author: argilla-internal-testing · last_modified: 2024-10-28T14:20:59Z · downloads: 19 · likes: 0
tags: [ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: [] · createdAt: 2024-10-28T14:20:59Z · trending_score: 0
card:
---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': positive
          '1': negative
  splits:
  - name: train
    num_bytes: 111
    num_examples: 3
  download_size: 1454
  dataset_size: 111
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
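A minimal sketch, not part of the card, showing how the `ClassLabel` integers map back to the `positive`/`negative` names:

```python
from datasets import load_dataset

repo = (
    "argilla-internal-testing/"
    "test_import_dataset_from_hub_with_classlabel_7279ac59-b780-4fc7-8b5c-fc6b66284a5a"
)
ds = load_dataset(repo, split="train")

# "label" is stored as an integer ClassLabel; int2str recovers the name.
for row in ds:
    print(row["text"], "->", ds.features["label"].int2str(row["label"]))
```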
**nhagar/fineweb2-hq_urls**
author: nhagar · last_modified: 2025-05-15T14:19:03Z · downloads: 63 · likes: 0
tags: [ "license:odc-by", "size_categories:100M<n<1B", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "doi:10.57967/hf/5484", "region:us" ]
task_categories: [] · createdAt: 2025-05-08T21:24:40Z · trending_score: 0
card:
---
license: odc-by
---

# Dataset Card for fineweb2-hq_urls

This dataset provides the URLs and top-level domains associated with training records in [epfml/FineWeb2-HQ](https://huggingface.co/datasets/epfml/FineWeb2-HQ). It is part of a [collection of datasets](https://huggingface.co/collections/nhagar/llm-urls-neurips-681698adac0862be6c65c72b) curated to make exploring LLM training datasets more straightforward and accessible.

## Dataset Details

### Dataset Description

This dataset was created by downloading the source data, extracting URLs and top-level domains, and retaining only those record identifiers. In doing so, it allows researchers and practitioners to explore the contents of these training datasets without having to manage terabytes of raw text. You can explore the pipeline used to construct this dataset on [GitHub](https://github.com/NHagar/cc-genealogy).

- **Curated by:** [Nick Hagar](https://huggingface.co/nhagar) and [Jack Bandy](https://huggingface.co/jackbandy)
- **License:** Same as source dataset

### Dataset Sources

- **Repository:** [epfml/FineWeb2-HQ](https://huggingface.co/datasets/epfml/FineWeb2-HQ)

## Uses

This dataset is intended to allow researchers and practitioners to analyze the contents of large LLM training datasets without having to wade through terabytes of unwieldy text data.

### Direct Use

The main use case for these data is to explore the contents of LLM training datasets at scale. This might involve:

- Identifying the most-used websites
- Categorizing URLs to understand domain- or topic-level dataset composition
- Comparing URLs across datasets
- Digging into inclusion/exclusion patterns for a particular website

### Out-of-Scope Use

This dataset is not intended to replicate or replace the source data, nor is it intended to enable large-scale scraping of the URLs listed. For source text, refer to the original dataset.

## Dataset Structure

This dataset contains every record with a URL from the source dataset. It contains two columns:

- `url`: The raw URL associated with each record
- `domain`: The top-level domain for each URL, extracted with `tldextract`

## Citation

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]
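A minimal sketch, not part of the card, for the "most-used websites" use case; the `train` split name is an assumption, and streaming plus sampling keeps the pass cheap:

```python
from collections import Counter
from itertools import islice

from datasets import load_dataset

# Stream rather than download the full URL list; split name assumed "train".
ds = load_dataset("nhagar/fineweb2-hq_urls", split="train", streaming=True)

# Count top-level domains over a sample of rows.
counts = Counter(row["domain"] for row in islice(ds, 100_000))
print(counts.most_common(10))
```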
**ying15/mixed-cod-tokenized-v5**
author: ying15 · last_modified: 2025-05-08T22:03:30Z · downloads: 0 · likes: 0
tags: [ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: [] · createdAt: 2025-05-08T22:03:28Z · trending_score: 0
card:
---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 12674797.0
    num_examples: 1000
  download_size: 5771229
  dataset_size: 12674797.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
**AjayP13/mt_bench_101_alpaca_eval**
author: AjayP13 · last_modified: 2025-09-29T18:59:46Z · downloads: 0 · likes: 0
tags: [ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: [] · createdAt: 2025-09-29T18:47:44Z · trending_score: 0
card:
---
dataset_info:
- config_name: alpaca_eval
  features:
  - name: dataset
    dtype: string
  - name: instruction
    dtype: string
  - name: output
    dtype: string
  - name: generator
    dtype: string
  splits:
  - name: eval
    num_bytes: 149562
    num_examples: 1388
  download_size: 65499
  dataset_size: 149562
- config_name: alpaca_eval_gpt4_baseline
  features:
  - name: dataset
    dtype: string
  - name: instruction
    dtype: string
  - name: output
    dtype: string
  - name: generator
    dtype: string
  splits:
  - name: eval
    num_bytes: 149562
    num_examples: 1388
  download_size: 65499
  dataset_size: 149562
configs:
- config_name: alpaca_eval
  data_files:
  - split: eval
    path: alpaca_eval/eval-*
- config_name: alpaca_eval_gpt4_baseline
  data_files:
  - split: eval
    path: alpaca_eval_gpt4_baseline/eval-*
---
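A minimal sketch, not part of the card, for loading one configuration's `eval` split:

```python
from datasets import load_dataset

# Both configs expose a single "eval" split with identical columns.
ds = load_dataset("AjayP13/mt_bench_101_alpaca_eval", "alpaca_eval", split="eval")

row = ds[0]
print(row["instruction"])
print(row["generator"], "->", row["output"][:80])
```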
**joelkoch/elixir_generated**
author: joelkoch · last_modified: 2025-02-06T20:22:11Z · downloads: 18 · likes: 0
tags: [ "task_categories:text-generation", "task_categories:text2text-generation", "task_categories:text-retrieval", "task_categories:question-answering", "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "library:distilabel", "region:us", "synthetic", "distilabel", "rlaif", "datacraft" ]
task_categories: [ "text-generation", "text2text-generation", "text-retrieval", "question-answering" ] · createdAt: 2025-02-06T20:22:07Z · trending_score: 0
card:
--- size_categories: n<1K task_categories: - text-generation - text2text-generation - text-retrieval - question-answering dataset_info: features: - name: context dtype: string - name: question dtype: string - name: response dtype: string splits: - name: train num_bytes: 372568 num_examples: 100 download_size: 152841 dataset_size: 372568 configs: - config_name: default data_files: - split: train path: data/train-* tags: - synthetic - distilabel - rlaif - datacraft --- <p align="left"> <a href="https://github.com/argilla-io/distilabel"> <img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/> </a> </p> # Dataset Card for elixir_generated This dataset has been created with [distilabel](https://distilabel.argilla.io/). ## Dataset Summary This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI: ```console distilabel pipeline run --config "https://huggingface.co/datasets/joelkoch/elixir_generated/raw/main/pipeline.yaml" ``` or explore the configuration: ```console distilabel pipeline info --config "https://huggingface.co/datasets/joelkoch/elixir_generated/raw/main/pipeline.yaml" ``` ## Dataset structure The examples have the following structure per configuration: <details><summary> Configuration: default </summary><hr> ```json { "context": "Elixir is a dynamic, functional language that runs on the Erlang VM (BEAM), providing built-in support for concurrency, distributed programming, and hot code loading.\n\nElixir\u0027s syntax is similar to Ruby, making it easier for developers familiar with Ruby to learn. \n\nElixir provides a wide range of libraries and frameworks, such as Phoenix, Plug, and Dialyxir.\n\nElixir supports both synchronous and asynchronous programming.\n\nElixir has built-in support for tasks such as compile and run Elixir code on the Erlang VM.\n\nElixir provides the GenServer module for designing and implementing server processes.\n\nThe Elixir documentation is a comprehensive resource for learning Elixir and its many libraries and frameworks.\n\nElixir has a large and active community of developers who contribute to its growth and development.\n\nThe Elixir VM (BEAM) provides crash dumps in case of errors, allowing for easier debugging.\n\nErlang\u0027s pattern matching feature is available in Elixir, allowing for concise and expressive code.\n\nElixir supports pattern matching on structured data types such as tuples, lists, and maps.\n\nElixir\u0027s pipes operator allows for data transformation chaining.\n\nPhoenix is a popular web framework for building applications in Elixir.\n\nPlug is a specification for web applications in Elixir, providing a modular and flexible way of building web applications.\n\nDialyxir is a plugin for the Elixir compiler, providing code analysis and debugging support.\n\nOTP (Open Telecom Platform) is a set of Erlang libraries, including the GenServer module, that provides a set of behaviors for implementing OTP applications.\n\nOTP applications are built using a set of design principles that emphasize fault-tolerant, distributed, and concurrent systems.\n\nOTP applications can be easily distributed across multiple nodes in a cluster.\n\nOTP applications can be started, stopped, and monitored using the name.\n\nOTP provides an interface for distributing processes and managing process supervision.\n\nOTP applications can be compiled and run on the Erlang 
VM.", "question": "Is Elixir\u0027s comprehensive documentation a key factor in its adoption among developers?", "response": "Yes, Elixir\u0027s comprehensive documentation is a key factor in its adoption among developers. The document states, \"The Elixir documentation is a comprehensive resource for learning Elixir and its many libraries and frameworks.\" This suggests that the documentation is a valuable resource for developers to learn and understand Elixir, which would likely contribute to its adoption." } ``` This subset can be loaded as: ```python from datasets import load_dataset ds = load_dataset("joelkoch/elixir_generated", "default") ``` Or simply as it follows, since there's only one configuration and is named `default`: ```python from datasets import load_dataset ds = load_dataset("joelkoch/elixir_generated") ``` </details>
**rxc5667/3words**
author: rxc5667 · last_modified: 2024-11-30T22:50:25Z · downloads: 17 · likes: 0
tags: [ "size_categories:n<1K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: [] · createdAt: 2024-11-30T22:27:27Z · trending_score: 0
card:
---
dataset_info:
  features:
  - name: image_name
    dtype: string
  - name: image
    dtype: image
  - name: image_url
    dtype: string
  - name: image_path
    dtype: string
  - name: description
    dtype: string
  - name: tags
    sequence: string
  splits:
  - name: train
    num_bytes: 54534411.0
    num_examples: 451
  download_size: 52035970
  dataset_size: 54534411.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
**cambridgeltl/xcopa**
author: cambridgeltl · last_modified: 2024-01-04T16:55:46Z · downloads: 2,816 · likes: 18
tags: [ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:multilingual", "source_datasets:extended|copa", "language:et", "language:ht", "language:id", "language:it", "language:qu", "language:sw", "language:ta", "language:th", "language:tr", "language:vi", "language:zh", "license:cc-by-4.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
task_categories: [ "question-answering" ] · createdAt: 2022-03-02T23:29:22Z · trending_score: 1
card:
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - et - ht - id - it - qu - sw - ta - th - tr - vi - zh license: - cc-by-4.0 multilinguality: - multilingual size_categories: - unknown source_datasets: - extended|copa task_categories: - question-answering task_ids: - multiple-choice-qa paperswithcode_id: xcopa pretty_name: XCOPA dataset_info: - config_name: et features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 11669 num_examples: 100 - name: test num_bytes: 56471 num_examples: 500 download_size: 54200 dataset_size: 68140 - config_name: ht features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 11957 num_examples: 100 - name: test num_bytes: 58437 num_examples: 500 download_size: 50346 dataset_size: 70394 - config_name: id features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 13855 num_examples: 100 - name: test num_bytes: 63189 num_examples: 500 download_size: 55608 dataset_size: 77044 - config_name: it features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 13324 num_examples: 100 - name: test num_bytes: 64909 num_examples: 500 download_size: 59602 dataset_size: 78233 - config_name: qu features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 13941 num_examples: 100 - name: test num_bytes: 68569 num_examples: 500 download_size: 56734 dataset_size: 82510 - config_name: sw features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 12666 num_examples: 100 - name: test num_bytes: 60533 num_examples: 500 download_size: 53862 dataset_size: 73199 - config_name: ta features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 36995 num_examples: 100 - name: test num_bytes: 176112 num_examples: 500 download_size: 91348 dataset_size: 213107 - config_name: th features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 21817 num_examples: 100 - name: test num_bytes: 104023 num_examples: 500 download_size: 65925 dataset_size: 125840 - config_name: tr features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - 
name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 11899 num_examples: 100 - name: test num_bytes: 57599 num_examples: 500 download_size: 53677 dataset_size: 69498 - config_name: translation-et features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 11881 num_examples: 100 - name: test num_bytes: 57327 num_examples: 500 download_size: 52078 dataset_size: 69208 - config_name: translation-ht features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 12130 num_examples: 100 - name: test num_bytes: 58019 num_examples: 500 download_size: 52823 dataset_size: 70149 - config_name: translation-id features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 12457 num_examples: 100 - name: test num_bytes: 58406 num_examples: 500 download_size: 53701 dataset_size: 70863 - config_name: translation-it features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 12382 num_examples: 100 - name: test num_bytes: 58936 num_examples: 500 download_size: 53410 dataset_size: 71318 - config_name: translation-sw features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 12180 num_examples: 100 - name: test num_bytes: 58607 num_examples: 500 download_size: 52888 dataset_size: 70787 - config_name: translation-ta features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 12372 num_examples: 100 - name: test num_bytes: 59442 num_examples: 500 download_size: 54488 dataset_size: 71814 - config_name: translation-th features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 11347 num_examples: 100 - name: test num_bytes: 54758 num_examples: 500 download_size: 52243 dataset_size: 66105 - config_name: translation-tr features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 11879 num_examples: 100 - name: test num_bytes: 57599 num_examples: 500 download_size: 52223 dataset_size: 69478 - config_name: translation-vi features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: 
string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 11604 num_examples: 100 - name: test num_bytes: 55797 num_examples: 500 download_size: 52087 dataset_size: 67401 - config_name: translation-zh features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 12001 num_examples: 100 - name: test num_bytes: 57895 num_examples: 500 download_size: 52896 dataset_size: 69896 - config_name: vi features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 15093 num_examples: 100 - name: test num_bytes: 70169 num_examples: 500 download_size: 59132 dataset_size: 85262 - config_name: zh features: - name: premise dtype: string - name: choice1 dtype: string - name: choice2 dtype: string - name: question dtype: string - name: label dtype: int32 - name: idx dtype: int32 - name: changed dtype: bool splits: - name: validation num_bytes: 11604 num_examples: 100 - name: test num_bytes: 55134 num_examples: 500 download_size: 52634 dataset_size: 66738 configs: - config_name: et data_files: - split: validation path: et/validation-* - split: test path: et/test-* - config_name: ht data_files: - split: validation path: ht/validation-* - split: test path: ht/test-* - config_name: id data_files: - split: validation path: id/validation-* - split: test path: id/test-* - config_name: it data_files: - split: validation path: it/validation-* - split: test path: it/test-* - config_name: qu data_files: - split: validation path: qu/validation-* - split: test path: qu/test-* - config_name: sw data_files: - split: validation path: sw/validation-* - split: test path: sw/test-* - config_name: ta data_files: - split: validation path: ta/validation-* - split: test path: ta/test-* - config_name: th data_files: - split: validation path: th/validation-* - split: test path: th/test-* - config_name: tr data_files: - split: validation path: tr/validation-* - split: test path: tr/test-* - config_name: translation-et data_files: - split: validation path: translation-et/validation-* - split: test path: translation-et/test-* - config_name: translation-ht data_files: - split: validation path: translation-ht/validation-* - split: test path: translation-ht/test-* - config_name: translation-id data_files: - split: validation path: translation-id/validation-* - split: test path: translation-id/test-* - config_name: translation-it data_files: - split: validation path: translation-it/validation-* - split: test path: translation-it/test-* - config_name: translation-sw data_files: - split: validation path: translation-sw/validation-* - split: test path: translation-sw/test-* - config_name: translation-ta data_files: - split: validation path: translation-ta/validation-* - split: test path: translation-ta/test-* - config_name: translation-th data_files: - split: validation path: translation-th/validation-* - split: test path: translation-th/test-* - config_name: translation-tr data_files: - split: validation path: translation-tr/validation-* - split: test path: translation-tr/test-* - config_name: translation-vi data_files: - split: validation path: translation-vi/validation-* - split: test path: 
translation-vi/test-* - config_name: translation-zh data_files: - split: validation path: translation-zh/validation-* - split: test path: translation-zh/test-* - config_name: vi data_files: - split: validation path: vi/validation-* - split: test path: vi/test-* - config_name: zh data_files: - split: validation path: zh/validation-* - split: test path: zh/test-* ---

# Dataset Card for "xcopa"

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://github.com/cambridgeltl/xcopa](https://github.com/cambridgeltl/xcopa)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 4.08 MB
- **Size of the generated dataset:** 1.02 MB
- **Total amount of disk used:** 5.10 MB

### Dataset Summary

XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning

The Cross-lingual Choice of Plausible Alternatives dataset is a benchmark to evaluate the ability of machine learning models to transfer commonsense reasoning across languages. The dataset is the translation and reannotation of the English COPA (Roemmele et al. 2011) and covers 11 languages from 11 families and several areas around the globe. The dataset is challenging as it requires both the command of world knowledge and the ability to generalise to new languages. All the details about the creation of XCOPA and the implementation of the baselines are available in the paper.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

- et
- ht
- id
- it
- qu
- sw
- ta
- th
- tr
- vi
- zh

## Dataset Structure

### Data Instances

#### et

- **Size of downloaded dataset files:** 0.37 MB
- **Size of the generated dataset:** 0.07 MB
- **Total amount of disk used:** 0.44 MB

An example of 'validation' looks as follows.
```
{
  "changed": false,
  "choice1": "Ta kallas piima kaussi.",
  "choice2": "Ta kaotas oma isu.",
  "idx": 1,
  "label": 1,
  "premise": "Tüdruk leidis oma helveste seest putuka.",
  "question": "effect"
}
```

#### ht

- **Size of downloaded dataset files:** 0.37 MB
- **Size of the generated dataset:** 0.07 MB
- **Total amount of disk used:** 0.44 MB

#### id

- **Size of downloaded dataset files:** 0.37 MB
- **Size of the generated dataset:** 0.07 MB
- **Total amount of disk used:** 0.45 MB

#### it

- **Size of downloaded dataset files:** 0.37 MB
- **Size of the generated dataset:** 0.08 MB
- **Total amount of disk used:** 0.45 MB

#### qu

- **Size of downloaded dataset files:** 0.37 MB
- **Size of the generated dataset:** 0.08 MB
- **Total amount of disk used:** 0.45 MB

### Data Fields

The data fields are the same among all splits and configurations:

- `premise`: a `string` feature.
- `choice1`: a `string` feature.
- `choice2`: a `string` feature.
- `question`: a `string` feature.
- `label`: an `int32` feature.
- `idx`: an `int32` feature.
- `changed`: a `bool` feature.

### Data Splits

|name|validation|test|
|----|---------:|---:|
|et  |       100| 500|
|ht  |       100| 500|
|id  |       100| 500|
|it  |       100| 500|
|qu  |       100| 500|

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).

### Citation Information

```
@article{ponti2020xcopa,
  title={{XCOPA: A} Multilingual Dataset for Causal Commonsense Reasoning},
  author={Edoardo M. Ponti, Goran Glava\v{s}, Olga Majewska, Qianchu Liu, Ivan Vuli\'{c} and Anna Korhonen},
  journal={arXiv preprint},
  year={2020},
  url={https://ducdauge.github.io/files/xcopa.pdf}
}

@inproceedings{roemmele2011choice,
  title={Choice of plausible alternatives: An evaluation of commonsense causal reasoning},
  author={Roemmele, Melissa and Bejan, Cosmin Adrian and Gordon, Andrew S},
  booktitle={2011 AAAI Spring Symposium Series},
  year={2011},
  url={https://people.ict.usc.edu/~gordon/publications/AAAI-SPRING11A.PDF},
}
```

### Contributions

Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
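As a usage sketch (not part of the original card; it assumes the short `xcopa` Hub id named above and the config names listed under Languages), one configuration can be loaded with the 🤗 `datasets` library:

```python
from datasets import load_dataset

# Load the Estonian configuration; each config provides "validation" and "test" splits.
xcopa_et = load_dataset("xcopa", "et")

# One instance holds: premise, choice1, choice2, question, label, idx, changed.
print(xcopa_et["validation"][0])
```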
twei11/round_12
twei11
2025-04-10T10:32:46Z
16
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-03-24T09:16:09Z
0
--- dataset_info: features: - name: messages list: - name: content dtype: string - name: role dtype: string splits: - name: train num_bytes: 7216682 num_examples: 1998 download_size: 3477036 dataset_size: 7216682 configs: - config_name: default data_files: - split: train path: data/train-* ---
ieuniversity/nlp-ner-train
ieuniversity
2025-03-06T12:28:18Z
71
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-03-06T12:28:08Z
0
--- dataset_info: features: - name: text dtype: string - name: id dtype: int64 - name: entities list: - name: end dtype: int64 - name: label dtype: string - name: start dtype: int64 splits: - name: train num_bytes: 3208573 num_examples: 677 download_size: 1821648 dataset_size: 3208573 configs: - config_name: default data_files: - split: train path: data/train-* ---
mlfoundations-dev/a1_science_wikipedia_biology
mlfoundations-dev
2025-04-13T01:19:16Z
16
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-04-12T00:13:00Z
0
--- dataset_info: features: - name: text dtype: string - name: category dtype: string - name: url dtype: string - name: title dtype: string - name: __index_level_0__ dtype: int64 - name: instruction_seed dtype: string - name: __original_row_idx dtype: int64 - name: reasoning dtype: string - name: deepseek_solution dtype: string - name: source dtype: string - name: final_reasoning_trace dtype: string - name: conversations list: - name: from dtype: string - name: value dtype: string splits: - name: train num_bytes: 795486738 num_examples: 31600 download_size: 391172242 dataset_size: 795486738 configs: - config_name: default data_files: - split: train path: data/train-* ---
paulnguyen89/coc-captioning-dataset
paulnguyen89
2025-02-21T09:58:45Z
56
0
[ "size_categories:n<1K", "format:parquet", "modality:image", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-02-21T08:23:56Z
0
--- dataset_info: features: - name: image dtype: image: decode: false - name: objects sequence: - name: bbox sequence: float32 - name: category dtype: class_label: names: '0': '0' '1': '1' '2': '2' '3': '3' '4': '4' '5': '5' '6': '6' '7': '7' '8': '8' '9': '9' '10': '10' '11': '11' '12': '12' '13': '13' '14': '14' '15': '15' '16': '16' splits: - name: train num_bytes: 8317057.0 num_examples: 88 download_size: 8211666 dataset_size: 8317057.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
000Yash000/yash-cricllama-3.2
000Yash000
2024-10-25T15:26:51Z
23
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-10-25T15:26:49Z
0
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 66623 num_examples: 220 download_size: 26339 dataset_size: 66623 configs: - config_name: default data_files: - split: train path: data/train-* ---
mlfoundations-dev/reasoning_shard_38
mlfoundations-dev
2025-03-05T21:23:56Z
15
0
[ "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-03-05T21:23:55Z
0
--- dataset_info: features: - name: problem dtype: string - name: answer dtype: string - name: task dtype: string - name: problem_number dtype: int64 - name: set_number dtype: int64 splits: - name: train num_bytes: 41983.24130355181 num_examples: 43 download_size: 23740 dataset_size: 41983.24130355181 configs: - config_name: default data_files: - split: train path: data/train-* ---
oopere/RetailBanking-Conversations
oopere
2025-03-30T17:30:03Z
72
1
[ "task_categories:text-generation", "task_categories:text-classification", "language:en", "license:mit", "size_categories:1K<n<10K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "doi:10.57967/hf/5533", "region:us", "finance", "banking", "synthetic", "llm", "dialogue", "retail-banking", "openai", "wizardSdata" ]
[ "text-generation", "text-classification" ]
2025-03-30T14:43:35Z
0
---
license: mit
language:
- en
pretty_name: Retail Banking Conversations
size_categories:
- 1K<n<10K
task_categories:
- text-generation
- text-classification
tags:
- finance
- banking
- synthetic
- llm
- dialogue
- retail-banking
- openai
- wizardSdata
---

# Dataset Description

RetailBanking-Conversations is a synthetic dataset designed to train and evaluate language models in the retail banking domain. It was created with the open-source library [wizardSdata](https://github.com/peremartra/WizardSData), which enables the creation of synthetic datasets in any field.

The dataset contains 320 realistic conversations between financial advisors and clients, generated from 160 unique financial profiles and covering 10 main categories of retail banking products and services, each with various specific subtopics. Each conversation reflects a natural interaction in which a client seeks information, advice, or solutions for their financial needs and an advisor provides professional guidance tailored to the client's profile.

# Dataset Statistics

* Total conversations: 320
* Number of different profiles: 160
* Main topics: 10
* Average conversation length: 4-8 turns
* Language: English

# Structure and Content

The conversations are organized by main topics, including:

* **Cards** - Credit and debit cards
* **Savings & Deposits** - Savings accounts and deposits
* **Mortgages** - Mortgages and real estate financing
* **Personal Loans** - Personal loans
* **Bank Accounts** - Bank accounts
* **Investment Funds** - Investment funds
* **Insurance** - Insurance
* **Digital Banking** - Digital banking and mobile payments
* **Pension Plans** - Pension plans and retirement
* **Customer Rewards** - Loyalty programs and rewards

Each conversation is structured as a multi-turn dialogue between a customer (rol1) and a financial advisor (rol2), following different customer intents, levels of financial knowledge, and risk tolerance.

## Conversation Examples

```json
{
  "id_conversation": "c8f9f6ea-5d30-47ba-8004-459bfc4e536b",
  "topic": "Investment Funds",
  "sequence": 0,
  "rol1": "Hello!",
  "rol2": "Hello! It's great to meet you. How can I assist you today with your investment needs? Are you looking to start investing, or do you have specific goals in mind that you'd like to discuss?"
},
{
  "id_conversation": "c8f9f6ea-5d30-47ba-8004-459bfc4e536b",
  "topic": "Investment Funds",
  "sequence": 1,
  "rol1": "I'm looking to compare fund performance in the medium-term. Any advice?",
  "rol2": "Of course! Comparing fund performance is a crucial step in making informed investment decisions. To get started, could you tell me a bit more about what you mean by \"medium-term\"? Are you thinking of a time frame of 3 to 5 years, or something different?\n\nAdditionally, it would be helpful to know if there are specific types of funds you're interested in, such as equity funds, bond funds, or perhaps a balanced mix. Understanding your risk tolerance and investment goals will also help me provide more tailored advice."
},
{
  "id_conversation": "c8f9f6ea-5d30-47ba-8004-459bfc4e536b",
  "topic": "Investment Funds",
  "sequence": 2,
  "rol1": "I'm thinking around 3 to 5 years. I'm open to both equity and balanced funds. I have a high risk tolerance, so I'm interested in maximizing potential returns. Any recommendations on how to analyze these funds effectively?",
  "rol2": "Great! With a 3 to 5-year time horizon and a high-risk tolerance, you're in a good position to explore equity and balanced funds that can offer higher potential returns. Here are some steps you can take to effectively analyze and compare these funds:\n\n1. **Performance History**: Look at the historical performance of the funds over different periods (1 year, 3 years, 5 years). While past performance doesn't guarantee future results, it can give you an idea of how the fund has performed in various market conditions.\n\n2. **Expense Ratios**: Check the expense ratio of each fund. Lower expense ratios can help enhance your net returns over time. Remember that actively managed funds typically have higher expenses than index funds.\n\n3. **Fund Manager's Track Record**: Investigate the experience and track record of the fund manager(s). Consistent performance by the same manager over time can be a positive indicator.\n\n4. **Portfolio Composition**: Examine the fund's holdings to ensure they align with your risk tolerance and investment goals. For equity funds, look at the sectors and geographic regions they invest in. For balanced funds, see how they allocate between stocks and bonds.\n\n5. **Volatility and Risk Measures**: Review the fund's volatility (standard deviation) and risk measures (such as beta) to understand how much risk the fund has taken to achieve its returns.\n\n6. **Morningstar Ratings**: Consider looking at independent ratings from services like Morningstar, which provide evaluations based on past performance, risk, and cost.\n\n7. **Benchmark Comparison**: Compare each fund's performance against its benchmark"
},
```

# How it was created: Powered by wizardSdata

This dataset was created using WizardSData, an open-source Python library for generating synthetic datasets using large language models. The generation process involves:

* Custom persona profiles from the retail banking domain.
* Topic-specific prompt templates for both customer and advisor.
* LLM-powered turn-by-turn generation using gpt-4o.
* Configurable parameters such as temperature, number of questions, and models per role.

## The process used

* GPT-4o for both client and financial advisor roles
* Diversified profiles: 160 unique profiles specifying demographic characteristics and financial needs
* Jinja2 templates: to create the prompts for the user and advisor
* Differentiated temperature parameters: 0.6 for clients (higher variability) and 0.2 for advisors (greater consistency and accuracy)

Profile Example:

```json
{
  "id": 154,
  "age": 29,
  "gender": "Male",
  "marital_status": "Single",
  "country": "Italy",
  "residence_area": "Urban",
  "profession": "Marketing Manager",
  "employment_status": "Freelance",
  "financial_products": [
    "Investment fund",
    "Insurance",
    "Pension plan"
  ],
  "financial_goal": "Compare current vs. savings accounts",
  "topic": "Bank Accounts",
  "investment_horizon": "Long-term",
  "risk_tolerance": "High",
  "conversation_style": "Formal",
  "financial_knowledge": "High"
}
```

The wizardSdata library is designed to help researchers and developers create customized conversational datasets for different domains and use cases, significantly reducing the time and effort needed to collect relevant training data.

### Conversation generation
```bash
pip install wizardsdata
```

```python
import wizardsdata as wsd

# Configure parameters
wsd.set_config(
    API_KEY="your_api_key",
    template_client_prompt="path/to/client/template.j2",
    template_advisor_prompt="path/to/advisor/template.j2",
    file_profiles="path/to/profiles.json",
    file_output="path/to/output_dataset.json",
    model_client="gpt-4o",  # or any other compatible model
    model_advisor="gpt-4o",
    temperature_client=0.7,
    temperature_advisor=0.3
)

# Verify configuration
if wsd.is_config_valid():
    # Start generation
    wsd.start_generation()
```

If you want more information about how to create your own custom dataset, refer to the [wizardSdata repository](https://github.com/peremartra/WizardSData).

# Citation

```
@misc{RetailBankingConversations,
  author = {Pere Martra},
  title = {RetailBankingConversations: A Dataset for Retail Banking Conversations},
  year = {2025},
  url = {https://huggingface.co/datasets/oopere/RetailBanking-Conversations},
  note = {Created using the open-source WizardSData library}
}
```
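As a quick-start sketch (not part of the original card; it assumes the turn-level JSON records load directly with 🤗 `datasets` into a `train` split), the rows can be regrouped into full dialogues using the `id_conversation` and `sequence` fields shown in the examples above:

```python
from collections import defaultdict
from datasets import load_dataset

# Each row is a single turn; group rows by conversation id and order by sequence.
ds = load_dataset("oopere/RetailBanking-Conversations", split="train")

dialogues = defaultdict(list)
for row in ds:
    dialogues[row["id_conversation"]].append(row)

for conv_id, turns in dialogues.items():
    turns.sort(key=lambda t: t["sequence"])
    # Reconstruct the alternating client/advisor exchange.
    for t in turns:
        print(f"Client:  {t['rol1']}")
        print(f"Advisor: {t['rol2']}")
    break  # show only the first conversation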
kumarsk9/test_ksk
kumarsk9
2025-02-20T22:47:07Z
15
0
[ "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-02-20T00:32:48Z
0
--- dataset_info: features: - name: a dtype: int64 - name: b dtype: float64 - name: c dtype: string - name: d dtype: bool - name: e dtype: bool splits: - name: train num_bytes: 151 num_examples: 7 download_size: 1939 dataset_size: 151 configs: - config_name: default data_files: - split: train path: data/train-* ---
tuenguyen/logicasker
tuenguyen
2025-02-04T15:38:14Z
15
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-02-04T15:38:11Z
0
--- dataset_info: features: - name: Thought dtype: string - name: Prompt dtype: string - name: Solution dtype: string - name: Solution_source dtype: string - name: Answer dtype: string - name: Source dtype: string - name: Verifiable dtype: int64 splits: - name: train num_bytes: 2745955 num_examples: 10400 download_size: 425328 dataset_size: 2745955 configs: - config_name: default data_files: - split: train path: data/train-* ---
jeslev/trec_covid_toyset
jeslev
2025-01-08T13:48:57Z
18
0
[ "license:cc-by-sa-4.0", "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "medical" ]
[]
2025-01-07T15:50:36Z
0
--- license: cc-by-sa-4.0 dataset_info: features: - name: query-id dtype: int64 - name: corpus-id dtype: string - name: score dtype: int64 - name: document dtype: string - name: text dtype: string splits: - name: rel num_bytes: 28695 num_examples: 25 - name: nonrel num_bytes: 29890 num_examples: 25 download_size: 50899 dataset_size: 58585 configs: - config_name: default data_files: - split: rel path: data/rel-* - split: nonrel path: data/nonrel-* tags: - medical ---
twei11/node1_round_45
twei11
2025-04-16T19:46:32Z
15
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-04-16T19:46:31Z
0
--- dataset_info: features: - name: messages list: - name: content dtype: string - name: role dtype: string splits: - name: train num_bytes: 7002006 num_examples: 1800 download_size: 3405924 dataset_size: 7002006 configs: - config_name: default data_files: - split: train path: data/train-* ---
Seungjuhan/cont_v2_0_1
Seungjuhan
2024-11-09T10:44:19Z
20
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-11-09T10:44:18Z
0
--- dataset_info: features: - name: entity dtype: string - name: queries sequence: string - name: chain_of_facts list: - name: supporting_document struct: - name: atomic_facts sequence: 'null' - name: outbound_links sequence: string - name: text dtype: string - name: topic dtype: string - name: url dtype: string - name: text dtype: string - name: statement dtype: string - name: paraphrased_statement dtype: string - name: augmented_statement dtype: string - name: hard_statement dtype: string - name: paraphrased_hard_statement dtype: string - name: id dtype: int64 splits: - name: train num_bytes: 20177502 num_examples: 2499 download_size: 7567831 dataset_size: 20177502 configs: - config_name: default data_files: - split: train path: data/train-* ---
Nbeau/evaluation_addition_17digits_1000examples_n_plus_less_n.jsonl
Nbeau
2024-10-09T15:31:36Z
21
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-10-09T15:31:34Z
0
--- dataset_info: features: - name: input_test dtype: string - name: scratchpad dtype: string - name: result_final dtype: string - name: y_digits dtype: int64 splits: - name: test num_bytes: 1624082 num_examples: 1000 download_size: 445245 dataset_size: 1624082 configs: - config_name: default data_files: - split: test path: data/test-* ---
DioulaD/Books
DioulaD
2025-03-05T21:33:14Z
24
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-03-05T21:33:12Z
0
--- dataset_info: features: - name: url dtype: string - name: content dtype: string - name: source dtype: string splits: - name: train num_bytes: 1355156 num_examples: 1151 download_size: 806978 dataset_size: 1355156 configs: - config_name: default data_files: - split: train path: data/train-* ---
andlyu/so100_indoor_val_0
andlyu
2025-03-30T00:58:10Z
44
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot", "so100", "tutorial" ]
[ "robotics" ]
2025-03-30T00:56:02Z
0
--- license: apache-2.0 task_categories: - robotics tags: - LeRobot - so100 - tutorial configs: - config_name: default data_files: data/*/*.parquet --- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100", "total_episodes": 11, "total_frames": 5834, "total_tasks": 1, "total_videos": 33, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:11" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper" ] }, "observation.images.arm": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.fps": 30.0, "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false } }, "observation.images.gripper": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.fps": 30.0, "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false } }, "observation.images.base": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.fps": 30.0, "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
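As a hedged illustration (not from the original card), a single episode's table can be fetched directly using the `data_path` template from `info.json` above; the chunk and episode indices below are assumptions based on all 11 episodes fitting in chunk 0:

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Path follows the card's data_path template:
# data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet
path = hf_hub_download(
    repo_id="andlyu/so100_indoor_val_0",
    repo_type="dataset",
    filename="data/chunk-000/episode_000000.parquet",
)

episode = pd.read_parquet(path)
# Columns mirror the features listed above: action, observation.state, timestamp, ...
print(episode.columns.tolist())
print(episode[["timestamp", "frame_index"]].head())
```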
CohenQu/CoRA_math_reasoning_benchmark_final
CohenQu
2025-05-15T05:39:39Z
0
0
[ "region:us" ]
[]
2025-05-15T03:58:03Z
0
--- dataset_info: - config_name: AIME2024 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 10081 num_examples: 30 download_size: 8726 dataset_size: 10081 - config_name: AIME2024-SFT-hint1 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 134075 num_examples: 30 download_size: 62072 dataset_size: 134075 - config_name: AIME2024-SFT-hint2 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 136496 num_examples: 30 download_size: 55740 dataset_size: 136496 - config_name: AIME2024-SFT-hint3 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 133796 num_examples: 30 download_size: 53504 dataset_size: 133796 - config_name: AIME2024-SFT-hint4 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 133958 num_examples: 30 download_size: 53287 dataset_size: 133958 - config_name: AIME2024-SFT-hint5 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 136352 num_examples: 30 download_size: 55236 dataset_size: 136352 - config_name: AIME2025 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 14629 num_examples: 30 download_size: 13142 dataset_size: 14629 - config_name: AIME2025-RPO-hint1 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 163897 num_examples: 30 download_size: 67121 dataset_size: 163897 - config_name: AIME2025-RPO-hint2 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 163784 num_examples: 30 download_size: 67037 dataset_size: 163784 - config_name: AIME2025-RPO-hint3 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 165175 num_examples: 30 download_size: 67781 dataset_size: 165175 - config_name: AIME2025-RPO-hint4 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 174265 num_examples: 30 download_size: 70521 dataset_size: 174265 - config_name: AIME2025-RPO-hint5 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 172021 num_examples: 30 download_size: 69905 dataset_size: 172021 - config_name: AIME2025-SFT-hint1 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 136686 num_examples: 30 download_size: 56096 dataset_size: 136686 - config_name: AIME2025-SFT-hint2 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 140791 num_examples: 30 download_size: 58477 dataset_size: 140791 - config_name: AIME2025-SFT-hint3 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 140561 num_examples: 30 download_size: 57568 dataset_size: 140561 - config_name: AIME2025-SFT-hint4 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 139533 num_examples: 30 download_size: 72793 dataset_size: 139533 - config_name: AIME2025-SFT-hint5 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 139386 num_examples: 30 download_size: 57384 dataset_size: 139386 - config_name: AIME2025-STaR-hint1 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 142227 
num_examples: 30 download_size: 58520 dataset_size: 142227 - config_name: AIME2025-STaR-hint2 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 358410 num_examples: 30 download_size: 76179 dataset_size: 358410 - config_name: AIME2025-STaR-hint3 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 263190 num_examples: 30 download_size: 65400 dataset_size: 263190 - config_name: AIME2025-STaR-hint4 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 219376 num_examples: 30 download_size: 68912 dataset_size: 219376 - config_name: AIME2025-STaR-hint5 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 204920 num_examples: 30 download_size: 60943 dataset_size: 204920 - config_name: AIME2025-Warmstart-hint1 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 172615 num_examples: 30 download_size: 74674 dataset_size: 172615 - config_name: AIME2025-Warmstart-hint2 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 172721 num_examples: 30 download_size: 74195 dataset_size: 172721 - config_name: AIME2025-Warmstart-hint3 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 170360 num_examples: 30 download_size: 72598 dataset_size: 170360 - config_name: AIME2025-Warmstart-hint4 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 168227 num_examples: 30 download_size: 71617 dataset_size: 168227 - config_name: AIME2025-Warmstart-hint5 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 173316 num_examples: 30 download_size: 74720 dataset_size: 173316 - config_name: AMC2023 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 11158 num_examples: 40 download_size: 8969 dataset_size: 11158 - config_name: AMC2023-SFT-hint1 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 169079 num_examples: 40 download_size: 66310 dataset_size: 169079 - config_name: AMC2023-SFT-hint2 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 169190 num_examples: 40 download_size: 66345 dataset_size: 169190 - config_name: AMC2023-SFT-hint3 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 171362 num_examples: 40 download_size: 75251 dataset_size: 171362 - config_name: AMC2023-SFT-hint4 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 170540 num_examples: 40 download_size: 73781 dataset_size: 170540 - config_name: AMC2023-SFT-hint5 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 173689 num_examples: 40 download_size: 75434 dataset_size: 173689 - config_name: DeepScaleRHard-RPO-hint1 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 151700 num_examples: 30 download_size: 61115 dataset_size: 151700 - config_name: DeepScaleRHard-RPO-hint2 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 166734 num_examples: 30 download_size: 67139 dataset_size: 166734 - config_name: DeepScaleRHard-RPO-hint3 features: - 
name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 167849 num_examples: 30 download_size: 72957 dataset_size: 167849 - config_name: DeepScaleRHard-RPO-hint4 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 156334 num_examples: 30 download_size: 63525 dataset_size: 156334 - config_name: DeepScaleRHard-RPO-hint5 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 162683 num_examples: 30 download_size: 65407 dataset_size: 162683 - config_name: DeepScaleRHard-SFT features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 9096 num_examples: 30 download_size: 9060 dataset_size: 9096 - config_name: DeepScaleRHard-SFT-hint1 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 132359 num_examples: 30 download_size: 53198 dataset_size: 132359 - config_name: DeepScaleRHard-SFT-hint2 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 132860 num_examples: 30 download_size: 68641 dataset_size: 132860 - config_name: DeepScaleRHard-SFT-hint3 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 133143 num_examples: 30 download_size: 54141 dataset_size: 133143 - config_name: DeepScaleRHard-SFT-hint4 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 131396 num_examples: 30 download_size: 52258 dataset_size: 131396 - config_name: DeepScaleRHard-SFT-hint5 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 133606 num_examples: 30 download_size: 61539 dataset_size: 133606 - config_name: DeepScaleRHard-STaR-hint1 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 134772 num_examples: 30 download_size: 53856 dataset_size: 134772 - config_name: DeepScaleRHard-STaR-hint2 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 181718 num_examples: 30 download_size: 59102 dataset_size: 181718 - config_name: DeepScaleRHard-STaR-hint3 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 214086 num_examples: 30 download_size: 64952 dataset_size: 214086 - config_name: DeepScaleRHard-STaR-hint4 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 244012 num_examples: 30 download_size: 69605 dataset_size: 244012 - config_name: DeepScaleRHard-STaR-hint5 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 184641 num_examples: 30 download_size: 55910 dataset_size: 184641 - config_name: DeepScaleRHard-Warmstart-hint1 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 157603 num_examples: 30 download_size: 66670 dataset_size: 157603 - config_name: DeepScaleRHard-Warmstart-hint2 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 158654 num_examples: 30 download_size: 67117 dataset_size: 158654 - config_name: DeepScaleRHard-Warmstart-hint3 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 148411 num_examples: 30 download_size: 62468 dataset_size: 148411 - config_name: DeepScaleRHard-Warmstart-hint4 features: - name: problem 
dtype: string - name: answer dtype: string splits: - name: test num_bytes: 154348 num_examples: 30 download_size: 65505 dataset_size: 154348 - config_name: DeepScaleRHard-Warmstart-hint5 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 156139 num_examples: 30 download_size: 65407 dataset_size: 156139 - config_name: HMMT2025 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 10400 num_examples: 30 download_size: 7977 dataset_size: 10400 - config_name: HMMT2025-RPO-hint1 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 163344 num_examples: 30 download_size: 63493 dataset_size: 163344 - config_name: HMMT2025-RPO-hint2 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 158093 num_examples: 30 download_size: 62919 dataset_size: 158093 - config_name: HMMT2025-RPO-hint3 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 162942 num_examples: 30 download_size: 65231 dataset_size: 162942 - config_name: HMMT2025-RPO-hint4 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 171250 num_examples: 30 download_size: 68661 dataset_size: 171250 - config_name: HMMT2025-RPO-hint5 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 166884 num_examples: 30 download_size: 66103 dataset_size: 166884 - config_name: HMMT2025-SFT-hint1 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 137389 num_examples: 30 download_size: 63104 dataset_size: 137389 - config_name: HMMT2025-SFT-hint2 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 137907 num_examples: 30 download_size: 56065 dataset_size: 137907 - config_name: HMMT2025-SFT-hint3 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 137229 num_examples: 30 download_size: 63082 dataset_size: 137229 - config_name: HMMT2025-SFT-hint4 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 136647 num_examples: 30 download_size: 62505 dataset_size: 136647 - config_name: HMMT2025-SFT-hint5 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 137711 num_examples: 30 download_size: 63300 dataset_size: 137711 - config_name: HMMT2025-STaR-hint1 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 171927 num_examples: 30 download_size: 56424 dataset_size: 171927 - config_name: HMMT2025-STaR-hint2 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 140811 num_examples: 30 download_size: 56183 dataset_size: 140811 - config_name: HMMT2025-STaR-hint3 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 214077 num_examples: 30 download_size: 60292 dataset_size: 214077 - config_name: HMMT2025-STaR-hint4 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 160961 num_examples: 30 download_size: 65503 dataset_size: 160961 - config_name: HMMT2025-STaR-hint5 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 139053 num_examples: 30 download_size: 55748 dataset_size: 
139053 - config_name: HMMT2025-Warmstart-hint1 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 167451 num_examples: 30 download_size: 70939 dataset_size: 167451 - config_name: HMMT2025-Warmstart-hint2 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 170609 num_examples: 30 download_size: 71077 dataset_size: 170609 - config_name: HMMT2025-Warmstart-hint3 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 174268 num_examples: 30 download_size: 68258 dataset_size: 174268 - config_name: HMMT2025-Warmstart-hint4 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 157588 num_examples: 30 download_size: 65549 dataset_size: 157588 - config_name: HMMT2025-Warmstart-hint5 features: - name: problem dtype: string - name: answer dtype: string splits: - name: test num_bytes: 158320 num_examples: 30 download_size: 66901 dataset_size: 158320 configs: - config_name: AIME2024 data_files: - split: test path: AIME2024/test-* - config_name: AIME2024-SFT-hint1 data_files: - split: test path: AIME2024-SFT-hint1/test-* - config_name: AIME2024-SFT-hint2 data_files: - split: test path: AIME2024-SFT-hint2/test-* - config_name: AIME2024-SFT-hint3 data_files: - split: test path: AIME2024-SFT-hint3/test-* - config_name: AIME2024-SFT-hint4 data_files: - split: test path: AIME2024-SFT-hint4/test-* - config_name: AIME2024-SFT-hint5 data_files: - split: test path: AIME2024-SFT-hint5/test-* - config_name: AIME2025 data_files: - split: test path: AIME2025/test-* - config_name: AIME2025-RPO-hint1 data_files: - split: test path: AIME2025-RPO-hint1/test-* - config_name: AIME2025-RPO-hint2 data_files: - split: test path: AIME2025-RPO-hint2/test-* - config_name: AIME2025-RPO-hint3 data_files: - split: test path: AIME2025-RPO-hint3/test-* - config_name: AIME2025-RPO-hint4 data_files: - split: test path: AIME2025-RPO-hint4/test-* - config_name: AIME2025-RPO-hint5 data_files: - split: test path: AIME2025-RPO-hint5/test-* - config_name: AIME2025-SFT-hint1 data_files: - split: test path: AIME2025-SFT-hint1/test-* - config_name: AIME2025-SFT-hint2 data_files: - split: test path: AIME2025-SFT-hint2/test-* - config_name: AIME2025-SFT-hint3 data_files: - split: test path: AIME2025-SFT-hint3/test-* - config_name: AIME2025-SFT-hint4 data_files: - split: test path: AIME2025-SFT-hint4/test-* - config_name: AIME2025-SFT-hint5 data_files: - split: test path: AIME2025-SFT-hint5/test-* - config_name: AIME2025-STaR-hint1 data_files: - split: test path: AIME2025-STaR-hint1/test-* - config_name: AIME2025-STaR-hint2 data_files: - split: test path: AIME2025-STaR-hint2/test-* - config_name: AIME2025-STaR-hint3 data_files: - split: test path: AIME2025-STaR-hint3/test-* - config_name: AIME2025-STaR-hint4 data_files: - split: test path: AIME2025-STaR-hint4/test-* - config_name: AIME2025-STaR-hint5 data_files: - split: test path: AIME2025-STaR-hint5/test-* - config_name: AIME2025-Warmstart-hint1 data_files: - split: test path: AIME2025-Warmstart-hint1/test-* - config_name: AIME2025-Warmstart-hint2 data_files: - split: test path: AIME2025-Warmstart-hint2/test-* - config_name: AIME2025-Warmstart-hint3 data_files: - split: test path: AIME2025-Warmstart-hint3/test-* - config_name: AIME2025-Warmstart-hint4 data_files: - split: test path: AIME2025-Warmstart-hint4/test-* - config_name: AIME2025-Warmstart-hint5 data_files: - split: test path: 
AIME2025-Warmstart-hint5/test-* - config_name: AMC2023 data_files: - split: test path: AMC2023/test-* - config_name: AMC2023-SFT-hint1 data_files: - split: test path: AMC2023-SFT-hint1/test-* - config_name: AMC2023-SFT-hint2 data_files: - split: test path: AMC2023-SFT-hint2/test-* - config_name: AMC2023-SFT-hint3 data_files: - split: test path: AMC2023-SFT-hint3/test-* - config_name: AMC2023-SFT-hint4 data_files: - split: test path: AMC2023-SFT-hint4/test-* - config_name: AMC2023-SFT-hint5 data_files: - split: test path: AMC2023-SFT-hint5/test-* - config_name: DeepScaleRHard-RPO-hint1 data_files: - split: test path: DeepScaleRHard-RPO-hint1/test-* - config_name: DeepScaleRHard-RPO-hint2 data_files: - split: test path: DeepScaleRHard-RPO-hint2/test-* - config_name: DeepScaleRHard-RPO-hint3 data_files: - split: test path: DeepScaleRHard-RPO-hint3/test-* - config_name: DeepScaleRHard-RPO-hint4 data_files: - split: test path: DeepScaleRHard-RPO-hint4/test-* - config_name: DeepScaleRHard-RPO-hint5 data_files: - split: test path: DeepScaleRHard-RPO-hint5/test-* - config_name: DeepScaleRHard-SFT data_files: - split: test path: DeepScaleRHard-SFT/test-* - config_name: DeepScaleRHard-SFT-hint1 data_files: - split: test path: DeepScaleRHard-SFT-hint1/test-* - config_name: DeepScaleRHard-SFT-hint2 data_files: - split: test path: DeepScaleRHard-SFT-hint2/test-* - config_name: DeepScaleRHard-SFT-hint3 data_files: - split: test path: DeepScaleRHard-SFT-hint3/test-* - config_name: DeepScaleRHard-SFT-hint4 data_files: - split: test path: DeepScaleRHard-SFT-hint4/test-* - config_name: DeepScaleRHard-SFT-hint5 data_files: - split: test path: DeepScaleRHard-SFT-hint5/test-* - config_name: DeepScaleRHard-STaR-hint1 data_files: - split: test path: DeepScaleRHard-STaR-hint1/test-* - config_name: DeepScaleRHard-STaR-hint2 data_files: - split: test path: DeepScaleRHard-STaR-hint2/test-* - config_name: DeepScaleRHard-STaR-hint3 data_files: - split: test path: DeepScaleRHard-STaR-hint3/test-* - config_name: DeepScaleRHard-STaR-hint4 data_files: - split: test path: DeepScaleRHard-STaR-hint4/test-* - config_name: DeepScaleRHard-STaR-hint5 data_files: - split: test path: DeepScaleRHard-STaR-hint5/test-* - config_name: DeepScaleRHard-Warmstart-hint1 data_files: - split: test path: DeepScaleRHard-Warmstart-hint1/test-* - config_name: DeepScaleRHard-Warmstart-hint2 data_files: - split: test path: DeepScaleRHard-Warmstart-hint2/test-* - config_name: DeepScaleRHard-Warmstart-hint3 data_files: - split: test path: DeepScaleRHard-Warmstart-hint3/test-* - config_name: DeepScaleRHard-Warmstart-hint4 data_files: - split: test path: DeepScaleRHard-Warmstart-hint4/test-* - config_name: DeepScaleRHard-Warmstart-hint5 data_files: - split: test path: DeepScaleRHard-Warmstart-hint5/test-* - config_name: HMMT2025 data_files: - split: test path: HMMT2025/test-* - config_name: HMMT2025-RPO-hint1 data_files: - split: test path: HMMT2025-RPO-hint1/test-* - config_name: HMMT2025-RPO-hint2 data_files: - split: test path: HMMT2025-RPO-hint2/test-* - config_name: HMMT2025-RPO-hint3 data_files: - split: test path: HMMT2025-RPO-hint3/test-* - config_name: HMMT2025-RPO-hint4 data_files: - split: test path: HMMT2025-RPO-hint4/test-* - config_name: HMMT2025-RPO-hint5 data_files: - split: test path: HMMT2025-RPO-hint5/test-* - config_name: HMMT2025-SFT-hint1 data_files: - split: test path: HMMT2025-SFT-hint1/test-* - config_name: HMMT2025-SFT-hint2 data_files: - split: test path: HMMT2025-SFT-hint2/test-* - config_name: HMMT2025-SFT-hint3 
data_files: - split: test path: HMMT2025-SFT-hint3/test-* - config_name: HMMT2025-SFT-hint4 data_files: - split: test path: HMMT2025-SFT-hint4/test-* - config_name: HMMT2025-SFT-hint5 data_files: - split: test path: HMMT2025-SFT-hint5/test-* - config_name: HMMT2025-STaR-hint1 data_files: - split: test path: HMMT2025-STaR-hint1/test-* - config_name: HMMT2025-STaR-hint2 data_files: - split: test path: HMMT2025-STaR-hint2/test-* - config_name: HMMT2025-STaR-hint3 data_files: - split: test path: HMMT2025-STaR-hint3/test-* - config_name: HMMT2025-STaR-hint4 data_files: - split: test path: HMMT2025-STaR-hint4/test-* - config_name: HMMT2025-STaR-hint5 data_files: - split: test path: HMMT2025-STaR-hint5/test-* - config_name: HMMT2025-Warmstart-hint1 data_files: - split: test path: HMMT2025-Warmstart-hint1/test-* - config_name: HMMT2025-Warmstart-hint2 data_files: - split: test path: HMMT2025-Warmstart-hint2/test-* - config_name: HMMT2025-Warmstart-hint3 data_files: - split: test path: HMMT2025-Warmstart-hint3/test-* - config_name: HMMT2025-Warmstart-hint4 data_files: - split: test path: HMMT2025-Warmstart-hint4/test-* - config_name: HMMT2025-Warmstart-hint5 data_files: - split: test path: HMMT2025-Warmstart-hint5/test-* ---
lewtun/test-fast-parser-l1b-v2
lewtun
2024-11-26T15:02:12Z
23
0
[ "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-11-26T14:19:45Z
0
--- dataset_info: - config_name: None--evals features: - name: n dtype: int64 - name: acc_naive dtype: float64 - name: acc_weighted dtype: float64 - name: acc_maj dtype: float64 splits: - name: train num_bytes: 288 num_examples: 9 download_size: 2556 dataset_size: 288 - config_name: default features: - name: problem dtype: string - name: solution dtype: string - name: answer dtype: string - name: subject dtype: string - name: level dtype: int64 - name: unique_id dtype: string - name: completions sequence: string - name: scores sequence: sequence: float64 - name: pred dtype: string - name: completion_tokens sequence: int64 - name: agg_scores sequence: float64 - name: completions@1 sequence: string - name: agg_scores@1 sequence: float64 - name: preds@1 sequence: string - name: pred_weighted@1 dtype: string - name: pred_maj@1 dtype: string - name: pred_naive@1 dtype: string - name: completions@2 sequence: string - name: agg_scores@2 sequence: float64 - name: preds@2 sequence: string - name: pred_weighted@2 dtype: string - name: pred_maj@2 dtype: string - name: pred_naive@2 dtype: string - name: completions@4 sequence: string - name: agg_scores@4 sequence: float64 - name: preds@4 sequence: string - name: pred_weighted@4 dtype: string - name: pred_maj@4 dtype: string - name: pred_naive@4 dtype: string - name: completions@8 sequence: string - name: agg_scores@8 sequence: float64 - name: preds@8 sequence: string - name: pred_weighted@8 dtype: string - name: pred_maj@8 dtype: string - name: pred_naive@8 dtype: string - name: completions@16 sequence: string - name: agg_scores@16 sequence: float64 - name: preds@16 sequence: string - name: pred_weighted@16 dtype: string - name: pred_maj@16 dtype: string - name: pred_naive@16 dtype: string - name: completions@32 sequence: string - name: agg_scores@32 sequence: float64 - name: preds@32 sequence: string - name: pred_weighted@32 dtype: string - name: pred_maj@32 dtype: string - name: pred_naive@32 dtype: string - name: completions@64 sequence: string - name: agg_scores@64 sequence: float64 - name: preds@64 sequence: string - name: pred_weighted@64 dtype: string - name: pred_maj@64 dtype: string - name: pred_naive@64 dtype: string - name: completions@128 sequence: string - name: agg_scores@128 sequence: float64 - name: preds@128 sequence: string - name: pred_weighted@128 dtype: string - name: pred_maj@128 dtype: string - name: pred_naive@128 dtype: string - name: completions@256 sequence: string - name: agg_scores@256 sequence: float64 - name: preds@256 sequence: string - name: pred_weighted@256 dtype: string - name: pred_maj@256 dtype: string - name: pred_naive@256 dtype: string splits: - name: train num_bytes: 669527214 num_examples: 500 download_size: 206143149 dataset_size: 669527214 configs: - config_name: None--evals data_files: - split: train path: None--evals/train-* - config_name: default data_files: - split: train path: data/train-* ---
GitBag/a_star_final_grpo_math_7_actor_aime-24_eval
GitBag
2025-05-05T09:58:50Z
0
0
[ "region:us" ]
[]
2025-05-05T09:58:43Z
0
--- dataset_info: features: - name: problem dtype: string - name: answer dtype: int64 - name: response_0 dtype: string - name: response_1 dtype: string - name: response_2 dtype: string - name: response_3 dtype: string - name: response_4 dtype: string - name: response_5 dtype: string - name: response_6 dtype: string - name: response_7 dtype: string - name: response_8 dtype: string - name: response_9 dtype: string - name: response_10 dtype: string - name: response_11 dtype: string - name: response_12 dtype: string - name: response_13 dtype: string - name: response_14 dtype: string - name: response_15 dtype: string - name: response_16 dtype: string - name: response_17 dtype: string - name: response_18 dtype: string - name: response_19 dtype: string - name: response_20 dtype: string - name: response_21 dtype: string - name: response_22 dtype: string - name: response_23 dtype: string - name: response_24 dtype: string - name: response_25 dtype: string - name: response_26 dtype: string - name: response_27 dtype: string - name: response_28 dtype: string - name: response_29 dtype: string - name: response_30 dtype: string - name: response_31 dtype: string - name: eval_0 dtype: float64 - name: eval_1 dtype: float64 - name: eval_2 dtype: float64 - name: eval_3 dtype: float64 - name: eval_4 dtype: float64 - name: eval_5 dtype: float64 - name: eval_6 dtype: float64 - name: eval_7 dtype: float64 - name: eval_8 dtype: float64 - name: eval_9 dtype: float64 - name: eval_10 dtype: float64 - name: eval_11 dtype: float64 - name: eval_12 dtype: float64 - name: eval_13 dtype: float64 - name: eval_14 dtype: float64 - name: eval_15 dtype: float64 - name: eval_16 dtype: float64 - name: eval_17 dtype: float64 - name: eval_18 dtype: float64 - name: eval_19 dtype: float64 - name: eval_20 dtype: float64 - name: eval_21 dtype: float64 - name: eval_22 dtype: float64 - name: eval_23 dtype: float64 - name: eval_24 dtype: float64 - name: eval_25 dtype: float64 - name: eval_26 dtype: float64 - name: eval_27 dtype: float64 - name: eval_28 dtype: float64 - name: eval_29 dtype: float64 - name: eval_30 dtype: float64 - name: eval_31 dtype: float64 splits: - name: train num_bytes: 1926021 num_examples: 30 download_size: 962930 dataset_size: 1926021 configs: - config_name: default data_files: - split: train path: data/train-* ---
h1ee/dataset-01
h1ee
2024-10-30T03:14:58Z
17
0
[ "license:apache-2.0", "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-10-30T02:55:33Z
0
--- license: apache-2.0 dataset_info: features: - name: instruction dtype: string - name: input dtype: float64 - name: output dtype: string splits: - name: train num_bytes: 1537 num_examples: 10 download_size: 3658 dataset_size: 1537 configs: - config_name: default data_files: - split: train path: data/train-* ---
alea-institute/kl3m-filter-data-dotgov-www.bjs.gov
alea-institute
2025-02-03T23:10:38Z
15
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-02-03T23:10:37Z
0
--- dataset_info: features: - name: identifier dtype: string - name: dataset dtype: string - name: mime_type dtype: string - name: score dtype: float64 - name: tokens sequence: int64 splits: - name: train num_bytes: 8206 num_examples: 2 download_size: 4443 dataset_size: 8206 configs: - config_name: default data_files: - split: train path: data/train-* ---
Geralt-Targaryen/Children-zh
Geralt-Targaryen
2025-05-03T11:36:21Z
0
0
[ "license:apache-2.0", "size_categories:10M<n<100M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-02T08:13:47Z
0
--- license: apache-2.0 ---
juyoung-trl/databricks-dolly-15k-ja
juyoung-trl
2024-11-11T02:30:09Z
18
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-11-08T07:49:10Z
0
--- dataset_info: features: - name: instruction dtype: string - name: response dtype: string splits: - name: train num_bytes: 14995173 num_examples: 15015 download_size: 8940598 dataset_size: 14995173 configs: - config_name: default data_files: - split: train path: data/train-* ---
rogerscuall/pricer-data
rogerscuall
2025-05-21T22:35:38Z
0
0
[ "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-21T22:32:15Z
0
--- dataset_info: features: - name: text dtype: string - name: price dtype: float64 - name: category dtype: string splits: - name: train num_bytes: 321503007 num_examples: 400000 - name: test num_bytes: 1597564 num_examples: 2000 download_size: 187829527 dataset_size: 323100571 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
palli23/spjallromur-30s-chunks80pecentSpit
palli23
2025-06-23T14:18:12Z
0
0
[ "size_categories:n<1K", "format:parquet", "modality:audio", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-23T13:41:29Z
0
--- dataset_info: features: - name: audio dtype: audio: sampling_rate: 16000 - name: text dtype: string - name: start dtype: float32 - name: end dtype: float32 - name: speaker dtype: string - name: gender dtype: string - name: age dtype: string splits: - name: train num_bytes: 673687148.0 num_examples: 715 download_size: 607481286 dataset_size: 673687148.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
llama-duo/llama3-3b-closedqa-eval-by-gpt4o
llama-duo
2025-04-02T06:19:25Z
17
0
[ "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-04-02T02:15:13Z
0
--- dataset_info: features: - name: instructions dtype: string - name: target_responses dtype: string - name: candidate_responses dtype: string - name: model_id dtype: string - name: model_sha dtype: string - name: eval_prompts dtype: string - name: similarity_scores dtype: float64 - name: precision_scores dtype: float64 - name: evaluators dtype: string - name: dates dtype: string splits: - name: llama3_3b_closedqa_gpt4o_100k_by_gpt4o num_bytes: 474582 num_examples: 60 download_size: 81695 dataset_size: 474582 configs: - config_name: default data_files: - split: llama3_3b_closedqa_gpt4o_100k_by_gpt4o path: data/llama3_3b_closedqa_gpt4o_100k_by_gpt4o-* ---
Den4ikAI/russian_dialogues_2
Den4ikAI
2024-07-05T12:03:27Z
147
12
[ "task_categories:text-generation", "task_categories:text2text-generation", "language:ru", "license:mit", "size_categories:1M<n<10M", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[ "conversational", "text-generation", "text2text-generation" ]
2023-07-05T07:16:52Z
1
--- license: mit task_categories: - conversational - text-generation - text2text-generation language: - ru size_categories: - 1M<n<10M --- ### Den4ikAI/russian_dialogues_2 A dataset of Russian dialogues for training conversational models. Number of dialogues: 1.6 million. Dataset format: ``` { 'sample': ['Привет', 'Привет', 'Как дела?'] } ``` ### Citation: ``` @MISC{russian_instructions, author = {Denis Petrov}, title = {Russian context dialogues dataset for conversational agents}, url = {https://huggingface.co/datasets/Den4ikAI/russian_dialogues_2}, year = 2023 } ```
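A minimal sketch of iterating these dialogues, assuming the field name `sample` from the format example above and that the JSON files load as a `train` split:

```python
from datasets import load_dataset

ds = load_dataset("Den4ikAI/russian_dialogues_2", split="train")

for row in ds.select(range(3)):
    turns = row["sample"]
    # Treat everything but the last utterance as context, the last as the reply.
    context, reply = turns[:-1], turns[-1]
    print(context, "->", reply)
```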
davanstrien/outputs-embeddings
davanstrien
2025-02-13T14:15:08Z
21
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-02-13T10:17:26Z
0
--- dataset_info: features: - name: datasetId dtype: large_string - name: author dtype: large_string - name: last_modified dtype: timestamp[us, tz=UTC] - name: downloads dtype: int64 - name: likes dtype: int64 - name: tags large_list: large_string - name: task_categories large_list: large_string - name: createdAt dtype: timestamp[us, tz=UTC] - name: card dtype: large_string - name: post_yaml_content dtype: large_string - name: summary dtype: large_string - name: text dtype: string - name: embeddings sequence: float64 splits: - name: train num_bytes: 1309341227 num_examples: 53591 download_size: 548062066 dataset_size: 1309341227 configs: - config_name: default data_files: - split: train path: data/train-* ---
steveli97/my-question-answer-dataset
steveli97
2025-04-02T10:00:14Z
17
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-04-02T10:00:11Z
0
--- dataset_info: features: - name: question dtype: string - name: answer dtype: string splits: - name: train num_bytes: 85 num_examples: 2 download_size: 1210 dataset_size: 85 configs: - config_name: default data_files: - split: train path: data/train-* ---
datania/demanda_energia_electrica
datania
2025-03-09T00:23:27Z
20
0
[ "license:mit", "size_categories:100K<n<1M", "format:parquet", "modality:tabular", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-11-05T18:58:01Z
0
--- license: mit --- # demanda_energia_electrica This dataset was produced and published automatically by [datania](https://github.com/davidgasquez/datania), a modern open data platform. ## Dataset Details - **Number of rows:** 735431 - **Number of columns:** 3
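The card does not list the column names, so the quickest way in is to load the table and inspect its schema; a minimal sketch, assuming the Parquet files resolve to a single `train` split:

```python
from datasets import load_dataset

ds = load_dataset("datania/demanda_energia_electrica", split="train")
print(ds.column_names)  # the card reports 3 columns and 735431 rows
print(ds[0])
```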
siripragadashashank/mkb
siripragadashashank
2024-01-18T11:09:02Z
121
1
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "multilinguality:translation", "source_datasets:original", "language:bn", "language:en", "language:gu", "language:hi", "language:ml", "language:mr", "language:or", "language:pa", "language:ta", "language:te", "language:ur", "license:cc-by-4.0", "size_categories:1K<n<10K", "arxiv:2007.07691", "region:us" ]
[ "text-generation", "fill-mask" ]
2022-03-02T23:29:22Z
0
--- task_categories: - text-generation - fill-mask multilinguality: - translation task_ids: - language-modeling - masked-language-modeling language: - bn - en - gu - hi - ml - mr - or - pa - ta - te - ur annotations_creators: - no-annotation source_datasets: - original size_categories: - 1K<n<10K - n<1K license: - cc-by-4.0 paperswithcode_id: null pretty_name: CVIT MKB dataset_info: - config_name: or-ur features: - name: translation dtype: translation: languages: - or - ur splits: - name: train num_bytes: 39336 num_examples: 98 download_size: 52428800 dataset_size: 39336 - config_name: ml-or features: - name: translation dtype: translation: languages: - ml - or splits: - name: train num_bytes: 224084 num_examples: 427 download_size: 52428800 dataset_size: 224084 - config_name: bn-ta features: - name: translation dtype: translation: languages: - bn - ta splits: - name: train num_bytes: 2020506 num_examples: 3460 download_size: 52428800 dataset_size: 2020506 - config_name: gu-mr features: - name: translation dtype: translation: languages: - gu - mr splits: - name: train num_bytes: 1818018 num_examples: 3658 download_size: 52428800 dataset_size: 1818018 - config_name: hi-or features: - name: translation dtype: translation: languages: - hi - or splits: - name: train num_bytes: 188779 num_examples: 389 download_size: 52428800 dataset_size: 188779 - config_name: en-or features: - name: translation dtype: translation: languages: - en - or splits: - name: train num_bytes: 276520 num_examples: 768 download_size: 52428800 dataset_size: 276520 - config_name: mr-ur features: - name: translation dtype: translation: languages: - mr - ur splits: - name: train num_bytes: 225305 num_examples: 490 download_size: 52428800 dataset_size: 225305 - config_name: en-ta features: - name: translation dtype: translation: languages: - en - ta splits: - name: train num_bytes: 2578828 num_examples: 5744 download_size: 52428800 dataset_size: 2578828 - config_name: hi-ta features: - name: translation dtype: translation: languages: - hi - ta splits: - name: train num_bytes: 1583237 num_examples: 2761 download_size: 52428800 dataset_size: 1583237 - config_name: bn-en features: - name: translation dtype: translation: languages: - bn - en splits: - name: train num_bytes: 2001834 num_examples: 5634 download_size: 52428800 dataset_size: 2001834 - config_name: bn-or features: - name: translation dtype: translation: languages: - bn - or splits: - name: train num_bytes: 220893 num_examples: 447 download_size: 52428800 dataset_size: 220893 - config_name: ml-ta features: - name: translation dtype: translation: languages: - ml - ta splits: - name: train num_bytes: 1958818 num_examples: 3124 download_size: 52428800 dataset_size: 1958818 - config_name: gu-ur features: - name: translation dtype: translation: languages: - gu - ur splits: - name: train num_bytes: 311082 num_examples: 749 download_size: 52428800 dataset_size: 311082 - config_name: bn-ml features: - name: translation dtype: translation: languages: - bn - ml splits: - name: train num_bytes: 1587528 num_examples: 2938 download_size: 52428800 dataset_size: 1587528 - config_name: bn-hi features: - name: translation dtype: translation: languages: - bn - hi splits: - name: train num_bytes: 1298611 num_examples: 2706 download_size: 52428800 dataset_size: 1298611 - config_name: gu-te features: - name: translation dtype: translation: languages: - gu - te splits: - name: train num_bytes: 1669386 num_examples: 3528 download_size: 52428800 dataset_size: 1669386 - config_name: hi-ml 
features: - name: translation dtype: translation: languages: - hi - ml splits: - name: train num_bytes: 1208956 num_examples: 2305 download_size: 52428800 dataset_size: 1208956 - config_name: or-te features: - name: translation dtype: translation: languages: - or - te splits: - name: train num_bytes: 209457 num_examples: 440 download_size: 52428800 dataset_size: 209457 - config_name: en-ml features: - name: translation dtype: translation: languages: - en - ml splits: - name: train num_bytes: 2007061 num_examples: 5017 download_size: 52428800 dataset_size: 2007061 - config_name: en-hi features: - name: translation dtype: translation: languages: - en - hi splits: - name: train num_bytes: 1865430 num_examples: 5272 download_size: 52428800 dataset_size: 1865430 - config_name: mr-te features: - name: translation dtype: translation: languages: - mr - te splits: - name: train num_bytes: 1434444 num_examples: 2839 download_size: 52428800 dataset_size: 1434444 - config_name: bn-te features: - name: translation dtype: translation: languages: - bn - te splits: - name: train num_bytes: 1431096 num_examples: 2939 download_size: 52428800 dataset_size: 1431096 - config_name: gu-hi features: - name: translation dtype: translation: languages: - gu - hi splits: - name: train num_bytes: 1521174 num_examples: 3213 download_size: 52428800 dataset_size: 1521174 - config_name: ta-ur features: - name: translation dtype: translation: languages: - ta - ur splits: - name: train num_bytes: 329809 num_examples: 637 download_size: 52428800 dataset_size: 329809 - config_name: te-ur features: - name: translation dtype: translation: languages: - te - ur splits: - name: train num_bytes: 254581 num_examples: 599 download_size: 52428800 dataset_size: 254581 - config_name: gu-ml features: - name: translation dtype: translation: languages: - gu - ml splits: - name: train num_bytes: 1822865 num_examples: 3469 download_size: 52428800 dataset_size: 1822865 - config_name: hi-te features: - name: translation dtype: translation: languages: - hi - te splits: - name: train num_bytes: 1078371 num_examples: 2289 download_size: 52428800 dataset_size: 1078371 - config_name: en-te features: - name: translation dtype: translation: languages: - en - te splits: - name: train num_bytes: 1784517 num_examples: 5177 download_size: 52428800 dataset_size: 1784517 - config_name: ml-te features: - name: translation dtype: translation: languages: - ml - te splits: - name: train num_bytes: 1556164 num_examples: 2898 download_size: 52428800 dataset_size: 1556164 - config_name: hi-ur features: - name: translation dtype: translation: languages: - hi - ur splits: - name: train num_bytes: 313360 num_examples: 742 download_size: 52428800 dataset_size: 313360 - config_name: mr-or features: - name: translation dtype: translation: languages: - mr - or splits: - name: train num_bytes: 219193 num_examples: 432 download_size: 52428800 dataset_size: 219193 - config_name: en-ur features: - name: translation dtype: translation: languages: - en - ur splits: - name: train num_bytes: 289419 num_examples: 1019 download_size: 52428800 dataset_size: 289419 - config_name: ml-ur features: - name: translation dtype: translation: languages: - ml - ur splits: - name: train num_bytes: 295806 num_examples: 624 download_size: 52428800 dataset_size: 295806 - config_name: bn-mr features: - name: translation dtype: translation: languages: - bn - mr splits: - name: train num_bytes: 1554154 num_examples: 3054 download_size: 52428800 dataset_size: 1554154 - config_name: gu-ta features: 
- name: translation dtype: translation: languages: - gu - ta splits: - name: train num_bytes: 2284643 num_examples: 3998 download_size: 52428800 dataset_size: 2284643 - config_name: bn-gu features: - name: translation dtype: translation: languages: - bn - gu splits: - name: train num_bytes: 1840059 num_examples: 3810 download_size: 52428800 dataset_size: 1840059 - config_name: bn-ur features: - name: translation dtype: translation: languages: - bn - ur splits: - name: train num_bytes: 234561 num_examples: 559 download_size: 52428800 dataset_size: 234561 - config_name: ml-mr features: - name: translation dtype: translation: languages: - ml - mr splits: - name: train num_bytes: 1568672 num_examples: 2803 download_size: 52428800 dataset_size: 1568672 - config_name: or-ta features: - name: translation dtype: translation: languages: - or - ta splits: - name: train num_bytes: 267193 num_examples: 470 download_size: 52428800 dataset_size: 267193 - config_name: ta-te features: - name: translation dtype: translation: languages: - ta - te splits: - name: train num_bytes: 1773728 num_examples: 3100 download_size: 52428800 dataset_size: 1773728 - config_name: gu-or features: - name: translation dtype: translation: languages: - gu - or splits: - name: train num_bytes: 256362 num_examples: 541 download_size: 52428800 dataset_size: 256362 - config_name: en-gu features: - name: translation dtype: translation: languages: - en - gu splits: - name: train num_bytes: 2318080 num_examples: 6615 download_size: 52428800 dataset_size: 2318080 - config_name: hi-mr features: - name: translation dtype: translation: languages: - hi - mr splits: - name: train num_bytes: 1243583 num_examples: 2491 download_size: 52428800 dataset_size: 1243583 - config_name: mr-ta features: - name: translation dtype: translation: languages: - mr - ta splits: - name: train num_bytes: 1906073 num_examples: 3175 download_size: 52428800 dataset_size: 1906073 - config_name: en-mr features: - name: translation dtype: translation: languages: - en - mr splits: - name: train num_bytes: 2140298 num_examples: 5867 download_size: 52428800 dataset_size: 2140298 config_names: - bn-en - bn-gu - bn-hi - bn-ml - bn-mr - bn-or - bn-ta - bn-te - bn-ur - en-gu - en-hi - en-ml - en-mr - en-or - en-ta - en-te - en-ur - gu-hi - gu-ml - gu-mr - gu-or - gu-ta - gu-te - gu-ur - hi-ml - hi-mr - hi-or - hi-ta - hi-te - hi-ur - ml-mr - ml-or - ml-ta - ml-te - ml-ur - mr-or - mr-ta - mr-te - mr-ur - or-ta - or-te - or-ur - ta-te - ta-ur - te-ur --- # Dataset Card for CVIT MKB ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - 
[Contributions](#contributions) ## Dataset Description - **Homepage:** [Link](http://preon.iiit.ac.in/~jerin/bhasha/) - **Repository:** - **Paper:** [ARXIV](https://arxiv.org/abs/2007.07691) - **Leaderboard:** - **Point of Contact:** [email]([email protected]) ### Dataset Summary Indian Prime Minister's speeches - Mann Ki Baat, on All India Radio, translated into many languages. ### Supported Tasks and Leaderboards [MORE INFORMATION NEEDED] ### Languages Hindi, Telugu, Tamil, Malayalam, Gujarati, Urdu, Bengali, Oriya, Marathi, Punjabi, and English ## Dataset Structure ### Data Instances [MORE INFORMATION NEEDED] ### Data Fields - `src_tag`: `string` text in source language - `tgt_tag`: `string` translation of source language in target language ### Data Splits [MORE INFORMATION NEEDED] ## Dataset Creation ### Curation Rationale [MORE INFORMATION NEEDED] ### Source Data [MORE INFORMATION NEEDED] #### Initial Data Collection and Normalization [MORE INFORMATION NEEDED] #### Who are the source language producers? [MORE INFORMATION NEEDED] ### Annotations #### Annotation process [MORE INFORMATION NEEDED] #### Who are the annotators? [MORE INFORMATION NEEDED] ### Personal and Sensitive Information [MORE INFORMATION NEEDED] ## Considerations for Using the Data ### Social Impact of Dataset [MORE INFORMATION NEEDED] ### Discussion of Biases [MORE INFORMATION NEEDED] ### Other Known Limitations [MORE INFORMATION NEEDED] ## Additional Information ### Dataset Curators [MORE INFORMATION NEEDED] ### Licensing Information The datasets and pretrained models provided here are licensed under Creative Commons Attribution-ShareAlike 4.0 International License. ### Citation Information ``` @misc{siripragada2020multilingual, title={A Multilingual Parallel Corpora Collection Effort for Indian Languages}, author={Shashank Siripragada and Jerin Philip and Vinay P. Namboodiri and C V Jawahar}, year={2020}, eprint={2007.07691}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@vasudevgupta7](https://github.com/vasudevgupta7) for adding this dataset.
bobertonthebuilder/zxyxxxl_batch_21
bobertonthebuilder
2025-03-20T05:51:02Z
16
0
[ "format:parquet", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-03-20T05:51:01Z
0
--- dataset_info: features: [] splits: - name: train num_bytes: 0 num_examples: 0 download_size: 324 dataset_size: 0 configs: - config_name: default data_files: - split: train path: data/train-* ---
dhruv0808/indic_sentiment_analyzer
dhruv0808
2025-01-16T08:51:55Z
23
0
[ "language:en", "language:hi", "language:te", "language:ta", "language:kn", "language:or", "language:bn", "language:gu", "language:pa", "language:ml", "language:mr", "language:as", "license:cc-by-4.0", "size_categories:100K<n<1M", "format:csv", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us", "synthetic", "sentiment", "multilingual", "NLP" ]
[]
2025-01-16T05:39:47Z
0
--- license: cc-by-4.0 language: - en - hi - te - ta - kn - or - bn - gu - pa - ml - mr - as tags: - synthetic - sentiment - multilingual - NLP size_categories: - 100K<n<1M --- # Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> # Multilingual Sentiment Analysis Dataset for Indian Languages ## Overview This repository contains a comprehensive sentiment analysis dataset covering 11 Indian languages and English. The dataset is designed to support sentiment analysis tasks across multiple domains and languages, making it valuable for developing multilingual sentiment analysis models and applications. ## Languages Covered - English (en) - Original - Hindi (hi) - Telugu (te) - Tamil (ta) - Kannada (kn) - Odia/Oriya (or) - Bengali (bn) - Gujarati (gu) - Punjabi (pa) - Malayalam (ml) - Marathi (mr) - Assamese (as) ## Dataset Composition ### Sources 1. **Generated Data** - Created using state-of-the-art language models via Groq API: - Gemma 2 9B IT - LLaMA 3.1 70B Versatile - LLaMA 3.1 8B Instant - LLaMA 3.2 1B Preview - LLaMA 3.2 3B Preview - Mixtral 8x7B 32768 2. **Existing Data Integration** - Incorporated approximately 1,000 data points per language from the ai4bharat/IndicSentiment dataset ### Domain Coverage The dataset encompasses a broad spectrum of domains, providing comprehensive coverage across various sectors and industries. It includes sentiment-labeled text from customer feedback and market research, extending to political analysis and financial markets. The dataset covers customer service interactions, brand monitoring efforts, and content recommendation systems. It also incorporates data from healthcare settings, social media monitoring, and employee satisfaction surveys. The scope extends to banking services, product reviews, and general survey analysis across different sectors. The educational sector, crisis management scenarios, and entertainment industry are represented, along with e-commerce transactions and legal sector communications. The dataset further includes sentiments from human resources contexts, sports analytics, and the tourism and hospitality industry, ensuring a well-rounded representation of various professional and commercial contexts. ## Data Generation Process 1. Initial data generation in English using multiple LLMs through the Groq API 2. Translation into target Indian languages 3. Integration with the existing IndicSentiment dataset ## Dataset Statistics - Total number of entries per language: 10.5K - Sentiment classes: "Positive", "Negative", and "Neutral" ## Usage #### Loading the Dataset
```python
from datasets import load_dataset

ds = load_dataset("dhruv0808/indic_sentiment_analyzer", data_files="Multilingual_sentiment_analysis.csv")
```
## Citation If you use this dataset in your research, please cite:
```bibtex
@misc{bhatnagar2024multilingual,
  title={Multilingual Sentiment Analysis Dataset for Indian Languages},
  author={Bhatnagar, Dhruv},
  year={2024},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/dhruv0808/indic_sentiment_analyzer}},
  note={A comprehensive multilingual sentiment dataset covering 12 languages and multiple domains}
}
```
For a plain-text citation:
```
Bhatnagar, D. (2024). Multilingual Sentiment Analysis Dataset for Indian Languages. Available at: https://huggingface.co/datasets/dhruv0808/indic_sentiment_analyzer
```
## License This dataset is released under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). This license means: - ✅ You are free to: - Share — copy and redistribute the material in any medium or format - Adapt — remix, transform, and build upon the material - ❌ Restrictions: - NonCommercial — You may not use the material for commercial purposes - Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made For more details, see the [full license text](https://creativecommons.org/licenses/by-nc/4.0/legalcode) ## Acknowledgments - Groq API for providing access to state-of-the-art language models - AI4Bharat for the IndicSentiment dataset ## Contact Dhruv Bhatnagar Email: [email protected]
Duy1412/Grasp5
Duy1412
2025-05-16T19:30:57Z
0
0
[ "task_categories:robotics", "region:us", "phosphobot", "so100", "phospho-dk" ]
[ "robotics" ]
2025-05-16T19:29:35Z
0
--- tags: - phosphobot - so100 - phospho-dk task_categories: - robotics --- # Grasp5 **This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).** This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
Jerry2003/SMM_data2
Jerry2003
2025-05-30T20:48:12Z
70
0
[ "license:apache-2.0", "modality:image", "region:us" ]
[]
2025-05-30T13:48:21Z
0
--- license: apache-2.0 ---
dtocre/cgr_instruct_dataset_test
dtocre
2025-04-06T04:48:06Z
16
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-04-06T04:48:00Z
0
--- dataset_info: features: - name: input dtype: string - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 5240087 num_examples: 2208 - name: eval num_bytes: 251818 num_examples: 112 download_size: 1658223 dataset_size: 5491905 configs: - config_name: default data_files: - split: train path: data/train-* - split: eval path: data/eval-* ---
T145/connections
T145
2025-01-05T19:47:17Z
11
0
[ "language:en", "license:mit", "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-01-05T19:38:17Z
0
--- dataset_info: features: - name: level dtype: int64 - name: level_id dtype: string - name: category dtype: string - name: words sequence: string splits: - name: train num_bytes: 55055 num_examples: 711 download_size: 32798 dataset_size: 55055 configs: - config_name: default data_files: - split: train path: data/train-* license: mit language: - en pretty_name: NYT Connections Answers size_categories: - n<1K --- Made with the following script using a local copy of [this website](https://word.tips/todays-nyt-connections-answers/):
```python
import re

from bs4 import BeautifulSoup
from datasets import Dataset

with open("Today’s NYT Connections Answers Jan 5, #574 - Daily Updates & Hints - Word Tips.htm", encoding="utf-8") as f:
    html = f.read()

soup = BeautifulSoup(html, "html.parser")

texts = re.findall(r'"([^"]*)"', "".join(soup.find_all("script")[9]))
texts = [
    " ".join(text.split()).replace(" ,", ", ")
    for text in texts
    if ":" in text
    and (text.startswith("🟡") or text.startswith("🟢") or text.startswith("🔵") or text.startswith("🟣"))
]

levels = {"🟡": 1, "🟢": 2, "🔵": 3, "🟣": 4}

def gen():
    for group in texts:
        level_id = group[:1]
        group = group[2:]
        category, group = group.split(":")
        entry = {
            "level": levels[level_id],
            "level_id": level_id,
            "category": category,
            "words": [word.strip() for word in group.split(",")],
        }
        # pprint(entry)
        yield entry

dataset = Dataset.from_generator(gen)
dataset.push_to_hub("T145/connections")
```
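Once pushed, the dataset reads back directly; each row is one answer group, and every run of four levels (1 through 4) should correspond to a single day's puzzle:

```python
from datasets import load_dataset

ds = load_dataset("T145/connections", split="train")

# Print the first puzzle's four groups (yellow, green, blue, purple).
for row in ds.select(range(4)):
    print(row["level"], row["level_id"], row["category"], "->", ", ".join(row["words"]))
```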
Hoang123/structured-generation-information-extraction-vlms-openbmb-RLAIF-V-Dataset
Hoang123
2025-05-13T13:53:18Z
0
0
[ "size_categories:n<1K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-13T13:53:16Z
0
--- dataset_info: features: - name: ds_name dtype: string - name: image dtype: image - name: question dtype: string - name: chosen dtype: string - name: rejected dtype: string - name: origin_dataset dtype: string - name: origin_split dtype: string - name: idx dtype: string - name: image_path dtype: string - name: synthetic_question dtype: string - name: synthetic_description dtype: string - name: synthetic_quality dtype: string splits: - name: train num_bytes: 1606811.0 num_examples: 10 download_size: 1611701 dataset_size: 1606811.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_eb16c643-99b5-4c8e-abc0-022e9eda4d55
argilla-internal-testing
2024-12-11T09:14:37Z
15
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-12-11T09:14:37Z
0
--- dataset_info: features: - name: text dtype: string - name: label dtype: class_label: names: '0': positive '1': negative splits: - name: train num_bytes: 111 num_examples: 3 download_size: 1256 dataset_size: 111 configs: - config_name: default data_files: - split: train path: data/train-* ---
LeRobot-worldwide-hackathon/267-CheckLock-parasoll-sort10
LeRobot-worldwide-hackathon
2025-06-14T17:40:03Z
48
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
[ "robotics" ]
2025-06-14T17:39:57Z
0
--- license: apache-2.0 task_categories: - robotics tags: - LeRobot configs: - config_name: default data_files: data/*/*.parquet --- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so101_follower", "total_episodes": 10, "total_frames": 4415, "total_tasks": 1, "total_videos": 20, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:10" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.images.front": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.top": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
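Because the config above points the default split straight at `data/*/*.parquet`, the tabular part of each episode (states, actions, indices; not the videos) loads with plain `datasets`; a minimal sketch:

```python
from datasets import load_dataset

ds = load_dataset("LeRobot-worldwide-hackathon/267-CheckLock-parasoll-sort10", split="train")

frame = ds[0]
print(frame["episode_index"], frame["timestamp"])
print(frame["action"])  # 6 motor positions, per the feature spec above
```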
nhagar/fineweb-2_urls
nhagar
2025-05-15T05:04:19Z
132
0
[ "task_categories:text-generation", "license:odc-by", "size_categories:1B<n<10B", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "doi:10.57967/hf/5443", "region:us" ]
[ "text-generation" ]
2025-04-23T19:09:53Z
0
--- license: odc-by task_categories: - text-generation size_categories: - 10B<n<100B configs: - config_name: default data_files: - split: train path: batch*.parquet --- # Dataset Card for fineweb-2_urls This dataset provides the URLs and top-level domains associated with training records in [HuggingFaceFW/fineweb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2). It is part of a [collection of datasets](https://huggingface.co/collections/nhagar/llm-urls-neurips-681698adac0862be6c65c72b) curated to make exploring LLM training datasets more straightforward and accessible. ## Dataset Details ### Dataset Description This dataset was created by downloading the source data, extracting URLs and top-level domains, and retaining only those record identifiers. In doing so, it allows researchers and practitioners to explore the contents of these training datasets without having to manage terabytes of raw text. You can explore the pipeline used to construct this dataset on [GitHub](https://github.com/NHagar/cc-genealogy). - **Curated by:** [Nick Hagar](https://huggingface.co/nhagar) and [Jack Bandy](https://huggingface.co/jackbandy) - **License:** Same as source dataset ### Dataset Sources - **Repository:** [HuggingFaceFW/fineweb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) ## Uses This dataset is intended to allow researchers and practitioners to analyze the contents of large LLM training datasets without having to wade through terabytes of unwieldy text data. ### Direct Use The main use case for these data is to explore the contents of LLM training datasets at scale. This might involve: - Identifying the most-used websites - Categorizing URLs to understand domain- or topic-level dataset composition - Comparing URLs across datasets - Digging into inclusion/exclusion patterns for a particular website ### Out-of-Scope Use This dataset is not intended to replicate or replace the source data, nor is it intended to enable large-scale scraping of the URLs listed. For source text, refer to the original dataset. ## Dataset Structure This dataset contains every record with a URL from the source dataset. It contains two columns: - `url`: The raw URL associated with each record - `domain`: The top-level domain for each URL, extracted with `tldextract` ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed]
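The `domain` column is described as extracted with `tldextract`; the sketch below shows what that extraction yields (whether the column stores the registered domain or one of the other fields is an assumption on our part):

```python
import tldextract

ext = tldextract.extract("https://forums.example.co.uk/thread/42")

print(ext.subdomain)          # "forums"
print(ext.domain)             # "example"
print(ext.suffix)             # "co.uk"
print(ext.registered_domain)  # "example.co.uk"
```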
douzi999/so100_test_25
douzi999
2025-04-03T06:45:51Z
27
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot", "so100", "tutorial" ]
[ "robotics" ]
2025-04-03T06:44:29Z
0
--- license: apache-2.0 task_categories: - robotics tags: - LeRobot - so100 - tutorial configs: - config_name: default data_files: data/*/*.parquet --- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100", "total_episodes": 40, "total_frames": 19701, "total_tasks": 1, "total_videos": 120, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:40" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 12 ], "names": [ "left_shoulder_pan", "left_shoulder_lift", "left_elbow_flex", "left_wrist_flex", "left_wrist_roll", "left_gripper", "right_shoulder_pan", "right_shoulder_lift", "right_elbow_flex", "right_wrist_flex", "right_wrist_roll", "right_gripper" ] }, "observation.state": { "dtype": "float32", "shape": [ 12 ], "names": [ "left_shoulder_pan", "left_shoulder_lift", "left_elbow_flex", "left_wrist_flex", "left_wrist_roll", "left_gripper", "right_shoulder_pan", "right_shoulder_lift", "right_elbow_flex", "right_wrist_flex", "right_wrist_roll", "right_gripper" ] }, "observation.images.laptop": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.fps": 30.0, "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false } }, "observation.images.phone": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.fps": 30.0, "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false } }, "observation.images.top": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.fps": 30.0, "video.height": 480, "video.width": 640, "video.channels": 3, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
Imran1/jenny-tts-tags-40s-v2
Imran1
2024-10-02T07:02:01Z
29
0
[ "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-10-02T06:59:51Z
0
--- dataset_info: features: - name: text dtype: string - name: language dtype: string - name: utterance_pitch_mean dtype: float32 - name: utterance_pitch_std dtype: float32 - name: snr dtype: float64 - name: c50 dtype: float64 - name: speaking_rate dtype: string - name: phonemes dtype: string - name: stoi dtype: float64 - name: si-sdr dtype: float64 - name: pesq dtype: float64 - name: noise dtype: string - name: reverberation dtype: string - name: speech_monotony dtype: string - name: sdr_noise dtype: string - name: pesq_speech_quality dtype: string splits: - name: train num_bytes: 13948 num_examples: 40 download_size: 15957 dataset_size: 13948 configs: - config_name: default data_files: - split: train path: data/train-* ---
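With per-utterance quality measures (`snr`, `pesq`, `stoi`) in the schema, one natural use is filtering to cleaner audio; a minimal sketch whose thresholds are illustrative, not from the card:

```python
from datasets import load_dataset

ds = load_dataset("Imran1/jenny-tts-tags-40s-v2", split="train")

# SNR > 30 dB and PESQ > 3.0 are arbitrary example cutoffs.
clean = ds.filter(lambda r: r["snr"] > 30 and r["pesq"] > 3.0)
print(f"{len(clean)} of {len(ds)} utterances pass the thresholds")
```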
Kyleyee/arranged_eval_data_imdb
Kyleyee
2025-03-22T16:21:47Z
6
0
[ "size_categories:n<1K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-03-22T16:08:19Z
0
--- dataset_info: features: - name: text dtype: string - name: Kyleyee_Qwen2-0.5B-DPO-imdb-tm-tp_scores dtype: float64 - name: Kyleyee_Qwen2-0.5B-DPO-imdb-bm_scores dtype: float64 - name: Kyleyee_Qwen2-0.5B-DRDPO-imdb-tm-tp_scores dtype: float64 - name: Kyleyee_Qwen2-0.5B-DRDPO-imdb-tm-ep_scores dtype: float64 - name: Kyleyee_Qwen2-0.5B-DRDPO-imdb-tm-wp_scores dtype: float64 - name: Kyleyee_Qwen2-0.5B-DRDPO-imdb-tm-rp_scores dtype: float64 - name: Kyleyee_Qwen2-0.5B-DRDPO-imdb-bm-tp_scores dtype: float64 - name: Kyleyee_Qwen2-0.5B-DRDPO-imdb-bm-wp_scores dtype: float64 splits: - name: temperature_0 num_bytes: 144960 num_examples: 100 - name: temperature_0.25 num_bytes: 144960 num_examples: 100 - name: temperature_0.5 num_bytes: 144960 num_examples: 100 - name: temperature_0.75 num_bytes: 144960 num_examples: 100 - name: temperature_1 num_bytes: 144960 num_examples: 100 download_size: 529784 dataset_size: 724800 configs: - config_name: default data_files: - split: temperature_0 path: data/temperature_0-* - split: temperature_0.25 path: data/temperature_0.25-* - split: temperature_0.5 path: data/temperature_0.5-* - split: temperature_0.75 path: data/temperature_0.75-* - split: temperature_1 path: data/temperature_1-* ---
junkim100/multilingual_instruction_tuning_lima_bactrian
junkim100
2025-06-01T17:26:29Z
30
0
[ "size_categories:100K<n<1M", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-01T17:25:49Z
0
--- dataset_info: - config_name: bactrian features: - name: en_input dtype: string - name: en_output dtype: string - name: de_input dtype: string - name: de_output dtype: string - name: es_input dtype: string - name: es_output dtype: string - name: fr_input dtype: string - name: fr_output dtype: string - name: it_input dtype: string - name: it_output dtype: string - name: id_original_bactrian dtype: string splits: - name: train num_bytes: 210461464.65934017 num_examples: 53613 - name: val num_bytes: 26309155.170329917 num_examples: 6702 - name: test num_bytes: 26309155.170329917 num_examples: 6702 download_size: 158883391 dataset_size: 263079775.00000003 - config_name: combined features: - name: de_input dtype: string - name: de_output dtype: string - name: en_input dtype: string - name: en_output dtype: string - name: es_input dtype: string - name: es_output dtype: string - name: fr_input dtype: string - name: fr_output dtype: string - name: it_input dtype: string - name: it_output dtype: string splits: - name: train num_bytes: 222538006 num_examples: 54437 - name: val num_bytes: 28031723 num_examples: 6805 - name: test num_bytes: 27434172 num_examples: 6805 download_size: 167513898 dataset_size: 278003901 - config_name: lima features: - name: id_numeric dtype: int64 - name: en_input dtype: string - name: en_output dtype: string - name: de_input dtype: string - name: de_output dtype: string - name: es_input dtype: string - name: es_output dtype: string - name: fr_input dtype: string - name: fr_output dtype: string - name: it_input dtype: string - name: it_output dtype: string splits: - name: train num_bytes: 12773928.8 num_examples: 824 - name: val num_bytes: 1596741.1 num_examples: 103 - name: test num_bytes: 1596741.1 num_examples: 103 download_size: 9302187 dataset_size: 15967411.0 configs: - config_name: bactrian data_files: - split: train path: bactrian/train-* - split: val path: bactrian/val-* - split: test path: bactrian/test-* - config_name: combined data_files: - split: train path: combined/train-* - split: val path: combined/val-* - split: test path: combined/test-* - config_name: lima data_files: - split: train path: lima/train-* - split: val path: lima/val-* - split: test path: lima/test-* ---
Hieuman/nyt_comments_hard
Hieuman
2025-03-05T03:19:27Z
16
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-03-05T03:19:22Z
0
--- dataset_info: features: - name: authorIDs dtype: string - name: fullText sequence: sequence: string - name: cluster dtype: int64 - name: retrieval_idx sequence: int64 splits: - name: train num_bytes: 131578274 num_examples: 1306 download_size: 77771547 dataset_size: 131578274 configs: - config_name: default data_files: - split: train path: data/train-* ---
french-datasets/AmazonScience-mintaka
french-datasets
2025-03-31T09:12:18Z
68
0
[ "multilinguality:multilingual", "language:fra", "language:eng", "language:arb", "language:deu", "language:hi", "language:ita", "language:jpn", "language:por", "language:spa", "region:us" ]
[]
2025-03-30T17:10:40Z
0
--- language: - fra - eng - arb - deu - hi - ita - jpn - por - spa multilinguality: - multilingual viewer: false --- This repository is empty; it was created to improve the search indexing of the dataset https://huggingface.co/datasets/AmazonScience/mintaka.
dgambettavuw/D_gen3_run0_llama2-7b_xlsum_doc1000_real32_synt96_vuw
dgambettavuw
2024-12-19T08:23:45Z
17
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-12-19T08:23:37Z
0
--- dataset_info: features: - name: id dtype: int64 - name: doc dtype: string splits: - name: train num_bytes: 538542 num_examples: 1000 download_size: 275283 dataset_size: 538542 configs: - config_name: default data_files: - split: train path: data/train-* ---
electricsheepafrica/Healthy-Life-Expectancy-At-Birth-Years-for-African-Countries
electricsheepafrica
2025-06-21T13:54:34Z
0
0
[ "license:apache-2.0", "size_categories:n<1K", "format:csv", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-21T13:39:10Z
0
--- license: apache-2.0 --- # Healthy life expectancy at birth (years) for African Countries ## Dataset Description This dataset contains 'Healthy life expectancy at birth (years)' data for all 54 African countries, sourced from the World Health Organization (WHO). The data is structured with years as rows and countries as columns, facilitating time-series analysis. The data is measured in years. Missing values have been handled using linear interpolation followed by forward and backward filling to ensure a complete dataset. ## How to Use You can load the data using pandas:
```python
import pandas as pd

df = pd.read_csv('hf://datasets/electricsheepafrica/Healthy-Life-Expectancy-At-Birth-Years-for-African-Countries/healthy_life_expectancy_at_birth__years_.csv')
print(df.head())
```
mlfoundations-dev/load_in_science_biology_standardized
mlfoundations-dev
2025-04-17T00:51:39Z
16
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-04-15T23:22:50Z
0
--- dataset_info: features: - name: instruction_seed dtype: string - name: conversation_id dtype: int64 - name: _source dtype: string splits: - name: train num_bytes: 59954985 num_examples: 19825 download_size: 27599020 dataset_size: 59954985 configs: - config_name: default data_files: - split: train path: data/train-* ---
Coobiw/agiqa3k_qual
Coobiw
2025-04-27T17:26:54Z
26
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-04-27T17:24:15Z
0
--- dataset_info: features: - name: images sequence: image - name: problem dtype: string - name: answer dtype: float64 splits: - name: train num_bytes: 667070998.875 num_examples: 2385 - name: test num_bytes: 164424543.0 num_examples: 597 download_size: 829462977 dataset_size: 831495541.875 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
hzy/math-prm-rollouts-labels-0.5-start_2-end_3
hzy
2025-03-15T07:47:00Z
16
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-03-15T07:46:59Z
0
--- dataset_info: features: - name: problem dtype: string - name: level dtype: string - name: solution dtype: string - name: type dtype: string - name: completion dtype: string - name: all_one_or_zero dtype: float64 - name: rollout_points sequence: int64 - name: step_labels sequence: float64 splits: - name: train num_bytes: 12086908 num_examples: 3261 - name: test num_bytes: 7282356 num_examples: 2243 download_size: 3984532 dataset_size: 19369264 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
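Each row carries per-step `step_labels`; a small sketch computing the fraction of positively labeled steps per completion, assuming a label of 1.0 marks a correct step:

```python
from datasets import load_dataset

ds = load_dataset("hzy/math-prm-rollouts-labels-0.5-start_2-end_3", split="train")

row = ds[0]
labels = row["step_labels"]
if labels:
    # Assumption: 1.0 = step judged correct, 0.0 = incorrect.
    print(f"correct-step fraction: {sum(labels) / len(labels):.2f}")
```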
danigambit/D_ep2_run0_llama2-7b_wiki_doc100_tok25
danigambit
2024-10-23T23:02:58Z
18
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-10-23T23:02:55Z
0
--- dataset_info: features: - name: id dtype: int64 - name: doc dtype: string splits: - name: train num_bytes: 192161 num_examples: 100 download_size: 51027 dataset_size: 192161 configs: - config_name: default data_files: - split: train path: data/train-* ---
Yuyeong/rw_roman-empire_node2vec3_6_public_masked
Yuyeong
2025-05-24T05:04:57Z
0
0
[ "size_categories:1M<n<10M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-24T05:03:04Z
0
--- dataset_info: features: - name: text dtype: string - name: label dtype: class_label: names: '0': '0' '1': '1' '2': '2' '3': '3' '4': '4' '5': '5' '6': '6' '7': '7' '8': '8' '9': '9' '10': '10' '11': '11' '12': '12' '13': '13' '14': '14' '15': '15' '16': '16' '17': '17' - name: group_idx dtype: int64 - name: node_idx dtype: int64 - name: train_0 dtype: bool - name: validation_0 dtype: bool - name: test_0 dtype: bool - name: train_1 dtype: bool - name: validation_1 dtype: bool - name: test_1 dtype: bool - name: train_2 dtype: bool - name: validation_2 dtype: bool - name: test_2 dtype: bool - name: train_3 dtype: bool - name: validation_3 dtype: bool - name: test_3 dtype: bool - name: train_4 dtype: bool - name: validation_4 dtype: bool - name: test_4 dtype: bool - name: train_5 dtype: bool - name: validation_5 dtype: bool - name: test_5 dtype: bool - name: train_6 dtype: bool - name: validation_6 dtype: bool - name: test_6 dtype: bool - name: train_7 dtype: bool - name: validation_7 dtype: bool - name: test_7 dtype: bool - name: train_8 dtype: bool - name: validation_8 dtype: bool - name: test_8 dtype: bool - name: train_9 dtype: bool - name: validation_9 dtype: bool - name: test_9 dtype: bool splits: - name: train num_bytes: 4000719255 num_examples: 2266200 download_size: 908736557 dataset_size: 4000719255 configs: - config_name: default data_files: - split: train path: data/train-* ---
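The ten boolean triples (`train_0`/`validation_0`/`test_0` through suffix 9) encode ten alternative public splits over the same rows; a sketch materializing one of them (filtering millions of rows this way is slow but straightforward):

```python
from datasets import load_dataset

ds = load_dataset("Yuyeong/rw_roman-empire_node2vec3_6_public_masked", split="train")

k = 0  # which of the ten mask triples to use
train_k = ds.filter(lambda row: row[f"train_{k}"])
val_k = ds.filter(lambda row: row[f"validation_{k}"])
test_k = ds.filter(lambda row: row[f"test_{k}"])
print(len(train_k), len(val_k), len(test_k))
```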
omlab/VLM-R1
omlab
2025-04-23T11:05:37Z
1,252
15
[ "license:unknown", "modality:image", "arxiv:2504.07615", "region:us" ]
[ "visual-grounding" ]
2025-02-14T16:44:40Z
0
--- task_categories: - visual-grounding license: unknown --- This repository contains the dataset used in the paper [VLM-R1: A Stable and Generalizable R1-style Large Vision-Language Model](https://huggingface.co/papers/2504.07615). Code: https://github.com/om-ai-lab/VLM-R1
HungVu2003/opt-350m_beta_0.2_alpha_0.2_dataset_0_for_gen_5
HungVu2003
2025-04-07T06:00:19Z
15
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-04-07T06:00:18Z
0
--- dataset_info: features: - name: question dtype: string splits: - name: train num_bytes: 1260227 num_examples: 11250 download_size: 772467 dataset_size: 1260227 configs: - config_name: default data_files: - split: train path: data/train-* ---
baxromovcomston/raft_dataset
baxromovcomston
2024-12-21T13:50:19Z
42
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-12-21T11:50:43Z
0
--- dataset_info: features: - name: id dtype: string - name: type dtype: string - name: question dtype: string - name: context struct: - name: sentences sequence: sequence: string - name: title sequence: sequence: string - name: oracle_context dtype: string - name: cot_answer dtype: string - name: instruction dtype: string splits: - name: train num_bytes: 64664.1 num_examples: 9 - name: test num_bytes: 7521 num_examples: 1 download_size: 74829 dataset_size: 72185.1 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
Ersins/trpyschology.csv
Ersins
2025-05-21T22:00:57Z
0
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-21T22:00:22Z
0
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: response dtype: string splits: - name: train num_bytes: 740894.1573426573 num_examples: 514 - name: test num_bytes: 83602.84265734266 num_examples: 58 download_size: 414392 dataset_size: 824497.0 --- # Dataset Card for "trpyschology.csv" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TristonNV/Llama-3.2-1B-Instruct-best_of_n-prm-completions
TristonNV
2025-01-16T23:08:42Z
18
0
[ "license:apache-2.0", "size_categories:10K<n<100K", "region:us" ]
[]
2025-01-16T22:53:32Z
0
--- license: apache-2.0 pretty_name: Search-n-learn test size_categories: - 10K<n<100K ---
robintomar/record-test-2
robintomar
2025-06-14T07:37:49Z
0
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot" ]
[ "robotics" ]
2025-06-14T07:05:36Z
0
--- license: apache-2.0 task_categories: - robotics tags: - LeRobot configs: - config_name: default data_files: data/*/*.parquet --- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100_follower", "total_episodes": 14, "total_frames": 4802, "total_tasks": 1, "total_videos": 14, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:14" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos" ] }, "observation.images.top": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
Uns0/flappy_dataset
Uns0
2025-03-28T18:59:25Z
8
0
[ "size_categories:1M<n<10M", "format:parquet", "modality:tabular", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-03-28T18:44:07Z
0
--- dataset_info: features: - name: episode_id dtype: int32 - name: frames dtype: binary - name: actions dtype: int8 - name: health dtype: float32 - name: step_id dtype: int32 - name: batch dtype: int64 splits: - name: batch_0 num_bytes: 184280440 num_examples: 26836 - name: batch_1 num_bytes: 204439422 num_examples: 29422 - name: batch_2 num_bytes: 229412263 num_examples: 32609 - name: batch_3 num_bytes: 276249807 num_examples: 38477 - name: batch_4 num_bytes: 344239459 num_examples: 46914 - name: batch_5 num_bytes: 457815508 num_examples: 60366 - name: batch_6 num_bytes: 629928824 num_examples: 80251 - name: batch_7 num_bytes: 2174362311 num_examples: 256747 - name: batch_8 num_bytes: 2155692652 num_examples: 249749 - name: batch_9 num_bytes: 2163235870 num_examples: 247986 - name: batch_10 num_bytes: 2238754558 num_examples: 256989 download_size: 5756437427 dataset_size: 11058411114 configs: - config_name: default data_files: - split: batch_0 path: data/batch_0-* - split: batch_1 path: data/batch_1-* - split: batch_2 path: data/batch_2-* - split: batch_3 path: data/batch_3-* - split: batch_4 path: data/batch_4-* - split: batch_5 path: data/batch_5-* - split: batch_6 path: data/batch_6-* - split: batch_7 path: data/batch_7-* - split: batch_8 path: data/batch_8-* - split: batch_9 path: data/batch_9-* - split: batch_10 path: data/batch_10-* ---
dgambettaphd/D_llm3_gen10_WXS_doc1000_synt64_lr1e-04_acm_MPP
dgambettaphd
2025-05-07T19:32:14Z
0
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-07T19:32:10Z
0
--- dataset_info: features: - name: id_doc dtype: int64 - name: text dtype: string - name: dataset dtype: string - name: gen dtype: int64 - name: synt dtype: int64 - name: MPP dtype: float64 splits: - name: train num_bytes: 16855384 num_examples: 26000 download_size: 10228375 dataset_size: 16855384 configs: - config_name: default data_files: - split: train path: data/train-* ---
taxitain/so100_obj_to_bin_top0_180
taxitain
2025-05-06T07:35:53Z
37
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot", "so100", "obj_to_bin" ]
[ "robotics" ]
2025-05-05T20:20:51Z
0
--- license: apache-2.0 task_categories: - robotics tags: - LeRobot - so100 - obj_to_bin configs: - config_name: default data_files: data/*/*.parquet --- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so100", "total_episodes": 1, "total_frames": 872, "total_tasks": 1, "total_videos": 2, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:1" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper" ] }, "observation.images.top0": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.side1": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
abdullah-alamodi/aqedah-data
abdullah-alamodi
2024-12-12T10:58:05Z
22
0
[ "language:ar", "license:mit", "size_categories:n<1K", "format:json", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-12-12T10:56:14Z
0
--- license: mit language: - ar pretty_name: aqedah-data ---
zeahub/camus-sample
zeahub
2025-06-08T14:01:06Z
0
0
[ "source_datasets:https://www.creatis.insa-lyon.fr/Challenge/camus/index.html", "region:us", "medical-imaging", "ultrasound", "cardiac", "benchmark", "zea" ]
[]
2025-06-08T13:52:24Z
0
--- pretty_name: Cardiac Acquisitions for Multi-structure Ultrasound Segmentation source_datasets: - https://www.creatis.insa-lyon.fr/Challenge/camus/index.html tags: - medical-imaging - ultrasound - cardiac - benchmark - zea converted_by: "@tristan-deep" --- ## Dataset Description This dataset contains one example from each split of the [CAMUS](https://www.creatis.insa-lyon.fr/Challenge/camus/index.html) (Cardiac Acquisitions for Multi-structure Ultrasound Segmentation) dataset, converted to the [zea format](https://usbmd-toolbox.web.app/data-acquisition.html) for use with the [zea toolbox](https://github.com/tue-bmd/zea).
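A minimal sketch for fetching the sample files locally; only the download step is shown, since the zea-side loading API is not documented in this card.

```python
# Minimal sketch: download the one-example-per-split files with huggingface_hub,
# then open them with the zea toolbox (see the zea docs linked above).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="zeahub/camus-sample", repo_type="dataset")
print(local_dir)  # local folder containing the converted CAMUS examples
```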
jkot/dataset_merged_preprocesssed_v2
jkot
2023-04-28T20:06:15Z
10,470
0
[ "size_categories:100K<n<1M", "format:parquet", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2023-04-28T16:23:57Z
0
--- dataset_info: features: - name: input_features sequence: sequence: float32 - name: labels sequence: int64 splits: - name: train num_bytes: 229523006640 num_examples: 238899 - name: test num_bytes: 12170045648 num_examples: 12669 download_size: 72324319243 dataset_size: 241693052288 --- # Dataset Card for "dataset_merged_preprocesssed_v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
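A minimal sketch for inspecting the features above without downloading the full ~72 GB; the feature names come from the dataset_info block.

```python
# Minimal sketch, using streaming mode to avoid the full download.
from datasets import load_dataset

ds = load_dataset("jkot/dataset_merged_preprocesssed_v2", split="test", streaming=True)
example = next(iter(ds))
print(len(example["input_features"]))  # number of float32 feature frames
print(example["labels"][:10])          # first few int64 label ids
```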
TSXu/Physion_Dominoes_Ex3
TSXu
2025-04-02T06:00:18Z
48
0
[ "license:apache-2.0", "size_categories:n<1K", "format:text", "modality:text", "library:datasets", "library:mlcroissant", "region:us" ]
[]
2025-04-02T05:02:00Z
0
--- license: apache-2.0 ---
QuanHoangNgoc/lock_dataset_prc
QuanHoangNgoc
2025-06-08T04:51:06Z
96
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:timeseries", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-05T02:50:57Z
0
--- dataset_info: features: - name: input_values sequence: float32 - name: labels sequence: int64 splits: - name: train num_bytes: 18795072252.0 num_examples: 15023 - name: dev num_bytes: 118626196.0 num_examples: 95 download_size: 18886526489 dataset_size: 18913698448.0 configs: - config_name: default data_files: - split: train path: data/train-* - split: dev path: data/dev-* ---
tsilva/GymnasiumRecording__ALE_KungFuMaster_v5
tsilva
2025-06-04T16:15:04Z
0
0
[ "size_categories:n<1K", "format:parquet", "modality:image", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-06-04T16:14:02Z
0
--- dataset_info: features: - name: episode_id dtype: int64 - name: image dtype: image - name: step dtype: int64 - name: action sequence: int64 splits: - name: train num_bytes: 376263.0 num_examples: 248 download_size: 254711 dataset_size: 376263.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
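A minimal sketch of replaying a recorded episode from the fields above (episode_id, image, step, action).

```python
# Minimal sketch: collect the frames of one episode in step order.
from datasets import load_dataset

ds = load_dataset("tsilva/GymnasiumRecording__ALE_KungFuMaster_v5", split="train")
episode = ds.filter(lambda row: row["episode_id"] == 0).sort("step")
for row in episode:
    frame = row["image"]     # PIL image of the emulator screen
    actions = row["action"]  # int64 action ids taken at this step
```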
saintlyk1d/dont-say-it-prompts-player1-test-set-variant-C
saintlyk1d
2025-05-24T05:31:37Z
53
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-05-16T05:33:07Z
0
--- dataset_info: features: - name: player_word dtype: string - name: opponent_word dtype: string - name: prompt dtype: string splits: - name: train num_bytes: 61457 num_examples: 100 download_size: 8466 dataset_size: 61457 configs: - config_name: default data_files: - split: train path: data/train-* ---
MAINLAND/xview_part4
MAINLAND
2024-11-20T12:43:01Z
112
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-11-20T12:32:34Z
0
--- dataset_info: features: - name: image dtype: image - name: messages list: - name: content list: - name: index dtype: int64 - name: text dtype: string - name: type dtype: string - name: role dtype: string splits: - name: train num_bytes: 42913675946.962 num_examples: 34913 download_size: 4758139345 dataset_size: 42913675946.962 configs: - config_name: default data_files: - split: train path: data/train-* ---
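A minimal sketch of walking the nested messages structure above; the concrete role and type values printed are assumptions based on the feature schema, not stated in the card.

```python
# Minimal sketch, using streaming to avoid the ~4.8 GB download.
from datasets import load_dataset

ds = load_dataset("MAINLAND/xview_part4", split="train", streaming=True)
row = next(iter(ds))
for msg in row["messages"]:
    parts = [part["type"] for part in msg["content"]]
    print(msg["role"], parts)
```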
datacomp/imagenet-1k-random90.0
datacomp
2024-11-07T22:42:36Z
27
1
[ "size_categories:1M<n<10M", "format:parquet", "modality:image", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-10-05T07:42:19Z
0
--- dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': tench, Tinca tinca '1': goldfish, Carassius auratus '2': great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias '3': tiger shark, Galeocerdo cuvieri '4': hammerhead, hammerhead shark '5': electric ray, crampfish, numbfish, torpedo '6': stingray '7': cock '8': hen '9': ostrich, Struthio camelus '10': brambling, Fringilla montifringilla '11': goldfinch, Carduelis carduelis '12': house finch, linnet, Carpodacus mexicanus '13': junco, snowbird '14': indigo bunting, indigo finch, indigo bird, Passerina cyanea '15': robin, American robin, Turdus migratorius '16': bulbul '17': jay '18': magpie '19': chickadee '20': water ouzel, dipper '21': kite '22': bald eagle, American eagle, Haliaeetus leucocephalus '23': vulture '24': great grey owl, great gray owl, Strix nebulosa '25': European fire salamander, Salamandra salamandra '26': common newt, Triturus vulgaris '27': eft '28': spotted salamander, Ambystoma maculatum '29': axolotl, mud puppy, Ambystoma mexicanum '30': bullfrog, Rana catesbeiana '31': tree frog, tree-frog '32': tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui '33': loggerhead, loggerhead turtle, Caretta caretta '34': leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea '35': mud turtle '36': terrapin '37': box turtle, box tortoise '38': banded gecko '39': common iguana, iguana, Iguana iguana '40': American chameleon, anole, Anolis carolinensis '41': whiptail, whiptail lizard '42': agama '43': frilled lizard, Chlamydosaurus kingi '44': alligator lizard '45': Gila monster, Heloderma suspectum '46': green lizard, Lacerta viridis '47': African chameleon, Chamaeleo chamaeleon '48': Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis '49': African crocodile, Nile crocodile, Crocodylus niloticus '50': American alligator, Alligator mississipiensis '51': triceratops '52': thunder snake, worm snake, Carphophis amoenus '53': ringneck snake, ring-necked snake, ring snake '54': hognose snake, puff adder, sand viper '55': green snake, grass snake '56': king snake, kingsnake '57': garter snake, grass snake '58': water snake '59': vine snake '60': night snake, Hypsiglena torquata '61': boa constrictor, Constrictor constrictor '62': rock python, rock snake, Python sebae '63': Indian cobra, Naja naja '64': green mamba '65': sea snake '66': horned viper, cerastes, sand viper, horned asp, Cerastes cornutus '67': diamondback, diamondback rattlesnake, Crotalus adamanteus '68': sidewinder, horned rattlesnake, Crotalus cerastes '69': trilobite '70': harvestman, daddy longlegs, Phalangium opilio '71': scorpion '72': black and gold garden spider, Argiope aurantia '73': barn spider, Araneus cavaticus '74': garden spider, Aranea diademata '75': black widow, Latrodectus mactans '76': tarantula '77': wolf spider, hunting spider '78': tick '79': centipede '80': black grouse '81': ptarmigan '82': ruffed grouse, partridge, Bonasa umbellus '83': prairie chicken, prairie grouse, prairie fowl '84': peacock '85': quail '86': partridge '87': African grey, African gray, Psittacus erithacus '88': macaw '89': sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita '90': lorikeet '91': coucal '92': bee eater '93': hornbill '94': hummingbird '95': jacamar '96': toucan '97': drake '98': red-breasted merganser, Mergus serrator '99': goose '100': black swan, Cygnus atratus '101': tusker '102': echidna, spiny anteater, anteater '103': 
platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus '104': wallaby, brush kangaroo '105': koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus '106': wombat '107': jellyfish '108': sea anemone, anemone '109': brain coral '110': flatworm, platyhelminth '111': nematode, nematode worm, roundworm '112': conch '113': snail '114': slug '115': sea slug, nudibranch '116': chiton, coat-of-mail shell, sea cradle, polyplacophore '117': chambered nautilus, pearly nautilus, nautilus '118': Dungeness crab, Cancer magister '119': rock crab, Cancer irroratus '120': fiddler crab '121': king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica '122': American lobster, Northern lobster, Maine lobster, Homarus americanus '123': spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish '124': crayfish, crawfish, crawdad, crawdaddy '125': hermit crab '126': isopod '127': white stork, Ciconia ciconia '128': black stork, Ciconia nigra '129': spoonbill '130': flamingo '131': little blue heron, Egretta caerulea '132': American egret, great white heron, Egretta albus '133': bittern '134': crane '135': limpkin, Aramus pictus '136': European gallinule, Porphyrio porphyrio '137': American coot, marsh hen, mud hen, water hen, Fulica americana '138': bustard '139': ruddy turnstone, Arenaria interpres '140': red-backed sandpiper, dunlin, Erolia alpina '141': redshank, Tringa totanus '142': dowitcher '143': oystercatcher, oyster catcher '144': pelican '145': king penguin, Aptenodytes patagonica '146': albatross, mollymawk '147': grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus '148': killer whale, killer, orca, grampus, sea wolf, Orcinus orca '149': dugong, Dugong dugon '150': sea lion '151': Chihuahua '152': Japanese spaniel '153': Maltese dog, Maltese terrier, Maltese '154': Pekinese, Pekingese, Peke '155': Shih-Tzu '156': Blenheim spaniel '157': papillon '158': toy terrier '159': Rhodesian ridgeback '160': Afghan hound, Afghan '161': basset, basset hound '162': beagle '163': bloodhound, sleuthhound '164': bluetick '165': black-and-tan coonhound '166': Walker hound, Walker foxhound '167': English foxhound '168': redbone '169': borzoi, Russian wolfhound '170': Irish wolfhound '171': Italian greyhound '172': whippet '173': Ibizan hound, Ibizan Podenco '174': Norwegian elkhound, elkhound '175': otterhound, otter hound '176': Saluki, gazelle hound '177': Scottish deerhound, deerhound '178': Weimaraner '179': Staffordshire bullterrier, Staffordshire bull terrier '180': American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier '181': Bedlington terrier '182': Border terrier '183': Kerry blue terrier '184': Irish terrier '185': Norfolk terrier '186': Norwich terrier '187': Yorkshire terrier '188': wire-haired fox terrier '189': Lakeland terrier '190': Sealyham terrier, Sealyham '191': Airedale, Airedale terrier '192': cairn, cairn terrier '193': Australian terrier '194': Dandie Dinmont, Dandie Dinmont terrier '195': Boston bull, Boston terrier '196': miniature schnauzer '197': giant schnauzer '198': standard schnauzer '199': Scotch terrier, Scottish terrier, Scottie '200': Tibetan terrier, chrysanthemum dog '201': silky terrier, Sydney silky '202': soft-coated wheaten terrier '203': West Highland white terrier '204': Lhasa, Lhasa apso '205': flat-coated retriever '206': curly-coated retriever '207': golden retriever '208': Labrador retriever '209': Chesapeake 
Bay retriever '210': German short-haired pointer '211': vizsla, Hungarian pointer '212': English setter '213': Irish setter, red setter '214': Gordon setter '215': Brittany spaniel '216': clumber, clumber spaniel '217': English springer, English springer spaniel '218': Welsh springer spaniel '219': cocker spaniel, English cocker spaniel, cocker '220': Sussex spaniel '221': Irish water spaniel '222': kuvasz '223': schipperke '224': groenendael '225': malinois '226': briard '227': kelpie '228': komondor '229': Old English sheepdog, bobtail '230': Shetland sheepdog, Shetland sheep dog, Shetland '231': collie '232': Border collie '233': Bouvier des Flandres, Bouviers des Flandres '234': Rottweiler '235': German shepherd, German shepherd dog, German police dog, alsatian '236': Doberman, Doberman pinscher '237': miniature pinscher '238': Greater Swiss Mountain dog '239': Bernese mountain dog '240': Appenzeller '241': EntleBucher '242': boxer '243': bull mastiff '244': Tibetan mastiff '245': French bulldog '246': Great Dane '247': Saint Bernard, St Bernard '248': Eskimo dog, husky '249': malamute, malemute, Alaskan malamute '250': Siberian husky '251': dalmatian, coach dog, carriage dog '252': affenpinscher, monkey pinscher, monkey dog '253': basenji '254': pug, pug-dog '255': Leonberg '256': Newfoundland, Newfoundland dog '257': Great Pyrenees '258': Samoyed, Samoyede '259': Pomeranian '260': chow, chow chow '261': keeshond '262': Brabancon griffon '263': Pembroke, Pembroke Welsh corgi '264': Cardigan, Cardigan Welsh corgi '265': toy poodle '266': miniature poodle '267': standard poodle '268': Mexican hairless '269': timber wolf, grey wolf, gray wolf, Canis lupus '270': white wolf, Arctic wolf, Canis lupus tundrarum '271': red wolf, maned wolf, Canis rufus, Canis niger '272': coyote, prairie wolf, brush wolf, Canis latrans '273': dingo, warrigal, warragal, Canis dingo '274': dhole, Cuon alpinus '275': African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus '276': hyena, hyaena '277': red fox, Vulpes vulpes '278': kit fox, Vulpes macrotis '279': Arctic fox, white fox, Alopex lagopus '280': grey fox, gray fox, Urocyon cinereoargenteus '281': tabby, tabby cat '282': tiger cat '283': Persian cat '284': Siamese cat, Siamese '285': Egyptian cat '286': cougar, puma, catamount, mountain lion, painter, panther, Felis concolor '287': lynx, catamount '288': leopard, Panthera pardus '289': snow leopard, ounce, Panthera uncia '290': jaguar, panther, Panthera onca, Felis onca '291': lion, king of beasts, Panthera leo '292': tiger, Panthera tigris '293': cheetah, chetah, Acinonyx jubatus '294': brown bear, bruin, Ursus arctos '295': American black bear, black bear, Ursus americanus, Euarctos americanus '296': ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus '297': sloth bear, Melursus ursinus, Ursus ursinus '298': mongoose '299': meerkat, mierkat '300': tiger beetle '301': ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle '302': ground beetle, carabid beetle '303': long-horned beetle, longicorn, longicorn beetle '304': leaf beetle, chrysomelid '305': dung beetle '306': rhinoceros beetle '307': weevil '308': fly '309': bee '310': ant, emmet, pismire '311': grasshopper, hopper '312': cricket '313': walking stick, walkingstick, stick insect '314': cockroach, roach '315': mantis, mantid '316': cicada, cicala '317': leafhopper '318': lacewing, lacewing fly '319': dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk 
'320': damselfly '321': admiral '322': ringlet, ringlet butterfly '323': monarch, monarch butterfly, milkweed butterfly, Danaus plexippus '324': cabbage butterfly '325': sulphur butterfly, sulfur butterfly '326': lycaenid, lycaenid butterfly '327': starfish, sea star '328': sea urchin '329': sea cucumber, holothurian '330': wood rabbit, cottontail, cottontail rabbit '331': hare '332': Angora, Angora rabbit '333': hamster '334': porcupine, hedgehog '335': fox squirrel, eastern fox squirrel, Sciurus niger '336': marmot '337': beaver '338': guinea pig, Cavia cobaya '339': sorrel '340': zebra '341': hog, pig, grunter, squealer, Sus scrofa '342': wild boar, boar, Sus scrofa '343': warthog '344': hippopotamus, hippo, river horse, Hippopotamus amphibius '345': ox '346': water buffalo, water ox, Asiatic buffalo, Bubalus bubalis '347': bison '348': ram, tup '349': bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis '350': ibex, Capra ibex '351': hartebeest '352': impala, Aepyceros melampus '353': gazelle '354': Arabian camel, dromedary, Camelus dromedarius '355': llama '356': weasel '357': mink '358': polecat, fitch, foulmart, foumart, Mustela putorius '359': black-footed ferret, ferret, Mustela nigripes '360': otter '361': skunk, polecat, wood pussy '362': badger '363': armadillo '364': three-toed sloth, ai, Bradypus tridactylus '365': orangutan, orang, orangutang, Pongo pygmaeus '366': gorilla, Gorilla gorilla '367': chimpanzee, chimp, Pan troglodytes '368': gibbon, Hylobates lar '369': siamang, Hylobates syndactylus, Symphalangus syndactylus '370': guenon, guenon monkey '371': patas, hussar monkey, Erythrocebus patas '372': baboon '373': macaque '374': langur '375': colobus, colobus monkey '376': proboscis monkey, Nasalis larvatus '377': marmoset '378': capuchin, ringtail, Cebus capucinus '379': howler monkey, howler '380': titi, titi monkey '381': spider monkey, Ateles geoffroyi '382': squirrel monkey, Saimiri sciureus '383': Madagascar cat, ring-tailed lemur, Lemur catta '384': indri, indris, Indri indri, Indri brevicaudatus '385': Indian elephant, Elephas maximus '386': African elephant, Loxodonta africana '387': lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens '388': giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca '389': barracouta, snoek '390': eel '391': coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch '392': rock beauty, Holocanthus tricolor '393': anemone fish '394': sturgeon '395': gar, garfish, garpike, billfish, Lepisosteus osseus '396': lionfish '397': puffer, pufferfish, blowfish, globefish '398': abacus '399': abaya '400': academic gown, academic robe, judge's robe '401': accordion, piano accordion, squeeze box '402': acoustic guitar '403': aircraft carrier, carrier, flattop, attack aircraft carrier '404': airliner '405': airship, dirigible '406': altar '407': ambulance '408': amphibian, amphibious vehicle '409': analog clock '410': apiary, bee house '411': apron '412': ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin '413': assault rifle, assault gun '414': backpack, back pack, knapsack, packsack, rucksack, haversack '415': bakery, bakeshop, bakehouse '416': balance beam, beam '417': balloon '418': ballpoint, ballpoint pen, ballpen, Biro '419': Band Aid '420': banjo '421': bannister, banister, balustrade, balusters, handrail '422': barbell '423': barber chair '424': barbershop '425': barn '426': barometer '427': barrel, cask 
'428': barrow, garden cart, lawn cart, wheelbarrow '429': baseball '430': basketball '431': bassinet '432': bassoon '433': bathing cap, swimming cap '434': bath towel '435': bathtub, bathing tub, bath, tub '436': beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon '437': beacon, lighthouse, beacon light, pharos '438': beaker '439': bearskin, busby, shako '440': beer bottle '441': beer glass '442': bell cote, bell cot '443': bib '444': bicycle-built-for-two, tandem bicycle, tandem '445': bikini, two-piece '446': binder, ring-binder '447': binoculars, field glasses, opera glasses '448': birdhouse '449': boathouse '450': bobsled, bobsleigh, bob '451': bolo tie, bolo, bola tie, bola '452': bonnet, poke bonnet '453': bookcase '454': bookshop, bookstore, bookstall '455': bottlecap '456': bow '457': bow tie, bow-tie, bowtie '458': brass, memorial tablet, plaque '459': brassiere, bra, bandeau '460': breakwater, groin, groyne, mole, bulwark, seawall, jetty '461': breastplate, aegis, egis '462': broom '463': bucket, pail '464': buckle '465': bulletproof vest '466': bullet train, bullet '467': butcher shop, meat market '468': cab, hack, taxi, taxicab '469': caldron, cauldron '470': candle, taper, wax light '471': cannon '472': canoe '473': can opener, tin opener '474': cardigan '475': car mirror '476': carousel, carrousel, merry-go-round, roundabout, whirligig '477': carpenter's kit, tool kit '478': carton '479': car wheel '480': cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM '481': cassette '482': cassette player '483': castle '484': catamaran '485': CD player '486': cello, violoncello '487': cellular telephone, cellular phone, cellphone, cell, mobile phone '488': chain '489': chainlink fence '490': chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour '491': chain saw, chainsaw '492': chest '493': chiffonier, commode '494': chime, bell, gong '495': china cabinet, china closet '496': Christmas stocking '497': church, church building '498': cinema, movie theater, movie theatre, movie house, picture palace '499': cleaver, meat cleaver, chopper '500': cliff dwelling '501': cloak '502': clog, geta, patten, sabot '503': cocktail shaker '504': coffee mug '505': coffeepot '506': coil, spiral, volute, whorl, helix '507': combination lock '508': computer keyboard, keypad '509': confectionery, confectionary, candy store '510': container ship, containership, container vessel '511': convertible '512': corkscrew, bottle screw '513': cornet, horn, trumpet, trump '514': cowboy boot '515': cowboy hat, ten-gallon hat '516': cradle '517': crane2 '518': crash helmet '519': crate '520': crib, cot '521': Crock Pot '522': croquet ball '523': crutch '524': cuirass '525': dam, dike, dyke '526': desk '527': desktop computer '528': dial telephone, dial phone '529': diaper, nappy, napkin '530': digital clock '531': digital watch '532': dining table, board '533': dishrag, dishcloth '534': dishwasher, dish washer, dishwashing machine '535': disk brake, disc brake '536': dock, dockage, docking facility '537': dogsled, dog sled, dog sleigh '538': dome '539': doormat, welcome mat '540': drilling platform, offshore rig '541': drum, membranophone, tympan '542': drumstick '543': dumbbell '544': Dutch oven '545': electric fan, blower '546': electric guitar '547': electric locomotive '548': entertainment center '549': envelope '550': espresso maker '551': face powder '552': feather boa, boa '553': file, 
file cabinet, filing cabinet '554': fireboat '555': fire engine, fire truck '556': fire screen, fireguard '557': flagpole, flagstaff '558': flute, transverse flute '559': folding chair '560': football helmet '561': forklift '562': fountain '563': fountain pen '564': four-poster '565': freight car '566': French horn, horn '567': frying pan, frypan, skillet '568': fur coat '569': garbage truck, dustcart '570': gasmask, respirator, gas helmet '571': gas pump, gasoline pump, petrol pump, island dispenser '572': goblet '573': go-kart '574': golf ball '575': golfcart, golf cart '576': gondola '577': gong, tam-tam '578': gown '579': grand piano, grand '580': greenhouse, nursery, glasshouse '581': grille, radiator grille '582': grocery store, grocery, food market, market '583': guillotine '584': hair slide '585': hair spray '586': half track '587': hammer '588': hamper '589': hand blower, blow dryer, blow drier, hair dryer, hair drier '590': hand-held computer, hand-held microcomputer '591': handkerchief, hankie, hanky, hankey '592': hard disc, hard disk, fixed disk '593': harmonica, mouth organ, harp, mouth harp '594': harp '595': harvester, reaper '596': hatchet '597': holster '598': home theater, home theatre '599': honeycomb '600': hook, claw '601': hoopskirt, crinoline '602': horizontal bar, high bar '603': horse cart, horse-cart '604': hourglass '605': iPod '606': iron, smoothing iron '607': jack-o'-lantern '608': jean, blue jean, denim '609': jeep, landrover '610': jersey, T-shirt, tee shirt '611': jigsaw puzzle '612': jinrikisha, ricksha, rickshaw '613': joystick '614': kimono '615': knee pad '616': knot '617': lab coat, laboratory coat '618': ladle '619': lampshade, lamp shade '620': laptop, laptop computer '621': lawn mower, mower '622': lens cap, lens cover '623': letter opener, paper knife, paperknife '624': library '625': lifeboat '626': lighter, light, igniter, ignitor '627': limousine, limo '628': liner, ocean liner '629': lipstick, lip rouge '630': Loafer '631': lotion '632': loudspeaker, speaker, speaker unit, loudspeaker system, speaker system '633': loupe, jeweler's loupe '634': lumbermill, sawmill '635': magnetic compass '636': mailbag, postbag '637': mailbox, letter box '638': maillot '639': maillot, tank suit '640': manhole cover '641': maraca '642': marimba, xylophone '643': mask '644': matchstick '645': maypole '646': maze, labyrinth '647': measuring cup '648': medicine chest, medicine cabinet '649': megalith, megalithic structure '650': microphone, mike '651': microwave, microwave oven '652': military uniform '653': milk can '654': minibus '655': miniskirt, mini '656': minivan '657': missile '658': mitten '659': mixing bowl '660': mobile home, manufactured home '661': Model T '662': modem '663': monastery '664': monitor '665': moped '666': mortar '667': mortarboard '668': mosque '669': mosquito net '670': motor scooter, scooter '671': mountain bike, all-terrain bike, off-roader '672': mountain tent '673': mouse, computer mouse '674': mousetrap '675': moving van '676': muzzle '677': nail '678': neck brace '679': necklace '680': nipple '681': notebook, notebook computer '682': obelisk '683': oboe, hautboy, hautbois '684': ocarina, sweet potato '685': odometer, hodometer, mileometer, milometer '686': oil filter '687': organ, pipe organ '688': oscilloscope, scope, cathode-ray oscilloscope, CRO '689': overskirt '690': oxcart '691': oxygen mask '692': packet '693': paddle, boat paddle '694': paddlewheel, paddle wheel '695': padlock '696': paintbrush '697': pajama, pyjama, pj's, 
jammies '698': palace '699': panpipe, pandean pipe, syrinx '700': paper towel '701': parachute, chute '702': parallel bars, bars '703': park bench '704': parking meter '705': passenger car, coach, carriage '706': patio, terrace '707': pay-phone, pay-station '708': pedestal, plinth, footstall '709': pencil box, pencil case '710': pencil sharpener '711': perfume, essence '712': Petri dish '713': photocopier '714': pick, plectrum, plectron '715': pickelhaube '716': picket fence, paling '717': pickup, pickup truck '718': pier '719': piggy bank, penny bank '720': pill bottle '721': pillow '722': ping-pong ball '723': pinwheel '724': pirate, pirate ship '725': pitcher, ewer '726': plane, carpenter's plane, woodworking plane '727': planetarium '728': plastic bag '729': plate rack '730': plow, plough '731': plunger, plumber's helper '732': Polaroid camera, Polaroid Land camera '733': pole '734': police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria '735': poncho '736': pool table, billiard table, snooker table '737': pop bottle, soda bottle '738': pot, flowerpot '739': potter's wheel '740': power drill '741': prayer rug, prayer mat '742': printer '743': prison, prison house '744': projectile, missile '745': projector '746': puck, hockey puck '747': punching bag, punch bag, punching ball, punchball '748': purse '749': quill, quill pen '750': quilt, comforter, comfort, puff '751': racer, race car, racing car '752': racket, racquet '753': radiator '754': radio, wireless '755': radio telescope, radio reflector '756': rain barrel '757': recreational vehicle, RV, R.V. '758': reel '759': reflex camera '760': refrigerator, icebox '761': remote control, remote '762': restaurant, eating house, eating place, eatery '763': revolver, six-gun, six-shooter '764': rifle '765': rocking chair, rocker '766': rotisserie '767': rubber eraser, rubber, pencil eraser '768': rugby ball '769': rule, ruler '770': running shoe '771': safe '772': safety pin '773': saltshaker, salt shaker '774': sandal '775': sarong '776': sax, saxophone '777': scabbard '778': scale, weighing machine '779': school bus '780': schooner '781': scoreboard '782': screen, CRT screen '783': screw '784': screwdriver '785': seat belt, seatbelt '786': sewing machine '787': shield, buckler '788': shoe shop, shoe-shop, shoe store '789': shoji '790': shopping basket '791': shopping cart '792': shovel '793': shower cap '794': shower curtain '795': ski '796': ski mask '797': sleeping bag '798': slide rule, slipstick '799': sliding door '800': slot, one-armed bandit '801': snorkel '802': snowmobile '803': snowplow, snowplough '804': soap dispenser '805': soccer ball '806': sock '807': solar dish, solar collector, solar furnace '808': sombrero '809': soup bowl '810': space bar '811': space heater '812': space shuttle '813': spatula '814': speedboat '815': spider web, spider's web '816': spindle '817': sports car, sport car '818': spotlight, spot '819': stage '820': steam locomotive '821': steel arch bridge '822': steel drum '823': stethoscope '824': stole '825': stone wall '826': stopwatch, stop watch '827': stove '828': strainer '829': streetcar, tram, tramcar, trolley, trolley car '830': stretcher '831': studio couch, day bed '832': stupa, tope '833': submarine, pigboat, sub, U-boat '834': suit, suit of clothes '835': sundial '836': sunglass '837': sunglasses, dark glasses, shades '838': sunscreen, sunblock, sun blocker '839': suspension bridge '840': swab, swob, mop '841': sweatshirt '842': swimming trunks, bathing trunks '843': swing 
'844': switch, electric switch, electrical switch '845': syringe '846': table lamp '847': tank, army tank, armored combat vehicle, armoured combat vehicle '848': tape player '849': teapot '850': teddy, teddy bear '851': television, television system '852': tennis ball '853': thatch, thatched roof '854': theater curtain, theatre curtain '855': thimble '856': thresher, thrasher, threshing machine '857': throne '858': tile roof '859': toaster '860': tobacco shop, tobacconist shop, tobacconist '861': toilet seat '862': torch '863': totem pole '864': tow truck, tow car, wrecker '865': toyshop '866': tractor '867': trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi '868': tray '869': trench coat '870': tricycle, trike, velocipede '871': trimaran '872': tripod '873': triumphal arch '874': trolleybus, trolley coach, trackless trolley '875': trombone '876': tub, vat '877': turnstile '878': typewriter keyboard '879': umbrella '880': unicycle, monocycle '881': upright, upright piano '882': vacuum, vacuum cleaner '883': vase '884': vault '885': velvet '886': vending machine '887': vestment '888': viaduct '889': violin, fiddle '890': volleyball '891': waffle iron '892': wall clock '893': wallet, billfold, notecase, pocketbook '894': wardrobe, closet, press '895': warplane, military plane '896': washbasin, handbasin, washbowl, lavabo, wash-hand basin '897': washer, automatic washer, washing machine '898': water bottle '899': water jug '900': water tower '901': whiskey jug '902': whistle '903': wig '904': window screen '905': window shade '906': Windsor tie '907': wine bottle '908': wing '909': wok '910': wooden spoon '911': wool, woolen, woollen '912': worm fence, snake fence, snake-rail fence, Virginia fence '913': wreck '914': yawl '915': yurt '916': web site, website, internet site, site '917': comic book '918': crossword puzzle, crossword '919': street sign '920': traffic light, traffic signal, stoplight '921': book jacket, dust cover, dust jacket, dust wrapper '922': menu '923': plate '924': guacamole '925': consomme '926': hot pot, hotpot '927': trifle '928': ice cream, icecream '929': ice lolly, lolly, lollipop, popsicle '930': French loaf '931': bagel, beigel '932': pretzel '933': cheeseburger '934': hotdog, hot dog, red hot '935': mashed potato '936': head cabbage '937': broccoli '938': cauliflower '939': zucchini, courgette '940': spaghetti squash '941': acorn squash '942': butternut squash '943': cucumber, cuke '944': artichoke, globe artichoke '945': bell pepper '946': cardoon '947': mushroom '948': Granny Smith '949': strawberry '950': orange '951': lemon '952': fig '953': pineapple, ananas '954': banana '955': jackfruit, jak, jack '956': custard apple '957': pomegranate '958': hay '959': carbonara '960': chocolate sauce, chocolate syrup '961': dough '962': meat loaf, meatloaf '963': pizza, pizza pie '964': potpie '965': burrito '966': red wine '967': espresso '968': cup '969': eggnog '970': alp '971': bubble '972': cliff, drop, drop-off '973': coral reef '974': geyser '975': lakeside, lakeshore '976': promontory, headland, head, foreland '977': sandbar, sand bar '978': seashore, coast, seacoast, sea-coast '979': valley, vale '980': volcano '981': ballplayer, baseball player '982': groom, bridegroom '983': scuba diver '984': rapeseed '985': daisy '986': yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum '987': corn '988': acorn '989': hip, rose hip, rosehip '990': buckeye, horse chestnut, conker '991': coral fungus '992': 
agaric '993': gyromitra '994': stinkhorn, carrion fungus '995': earthstar '996': hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa '997': bolete '998': ear, spike, capitulum '999': toilet tissue, toilet paper, bathroom tissue splits: - name: train num_bytes: 103121063842.125 num_examples: 1281167 - name: validation num_bytes: 6706896736.0 num_examples: 50000 - name: test num_bytes: 13610348261.0 num_examples: 100000 download_size: 123394795208 dataset_size: 123438308839.125 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* ---
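A minimal sketch for sampling this dataset without the ~123 GB download. That "random90.0" means 90% of training labels were randomized is an assumption drawn from the repository name, not stated in this card.

```python
# Minimal sketch, using streaming mode.
from datasets import load_dataset

ds = load_dataset("datacomp/imagenet-1k-random90.0", split="validation", streaming=True)
sample = next(iter(ds))
print(sample["label"])  # index into the 1000-class label list above
```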
facebook/voxpopuli
facebook
2022-10-14T13:43:12Z
7,898
110
[ "task_categories:automatic-speech-recognition", "multilinguality:multilingual", "language:en", "language:de", "language:fr", "language:es", "language:pl", "language:it", "language:ro", "language:hu", "language:cs", "language:nl", "language:fi", "language:hr", "language:sk", "language:sl", "language:et", "language:lt", "license:cc0-1.0", "license:other", "size_categories:100K<n<1M", "modality:audio", "modality:text", "library:datasets", "library:mlcroissant", "arxiv:2101.00390", "region:us" ]
[ "automatic-speech-recognition" ]
2022-05-10T14:42:49Z
0
--- annotations_creators: [] language: - en - de - fr - es - pl - it - ro - hu - cs - nl - fi - hr - sk - sl - et - lt language_creators: [] license: - cc0-1.0 - other multilinguality: - multilingual pretty_name: VoxPopuli size_categories: [] source_datasets: [] tags: [] task_categories: - automatic-speech-recognition task_ids: [] --- # Dataset Card for Voxpopuli ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/facebookresearch/voxpopuli - **Repository:** https://github.com/facebookresearch/voxpopuli - **Paper:** https://arxiv.org/abs/2101.00390 - **Point of Contact:** [[email protected]](mailto:[email protected]), [[email protected]](mailto:[email protected]), [[email protected]](mailto:[email protected]) ### Dataset Summary VoxPopuli is a large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation. The raw data is collected from 2009-2020 [European Parliament event recordings](https://multimedia.europarl.europa.eu/en/home). We acknowledge the European Parliament for creating and sharing these materials. This implementation contains transcribed speech data for 18 languages. It also contains 29 hours of transcribed speech data of non-native English intended for research in ASR for accented speech (15 L2 accents) ### Example usage VoxPopuli contains labelled data for 18 languages. To load a specific language pass its name as a config name: ```python from datasets import load_dataset voxpopuli_croatian = load_dataset("facebook/voxpopuli", "hr") ``` To load all the languages in a single dataset use "multilang" config name: ```python voxpopuli_all = load_dataset("facebook/voxpopuli", "multilang") ``` To load a specific set of languages, use "multilang" config name and pass a list of required languages to `languages` parameter: ```python voxpopuli_slavic = load_dataset("facebook/voxpopuli", "multilang", languages=["hr", "sk", "sl", "cs", "pl"]) ``` To load accented English data, use "en_accented" config name: ```python voxpopuli_accented = load_dataset("facebook/voxpopuli", "en_accented") ``` **Note that L2 English subset contains only `test` split.** ### Supported Tasks and Leaderboards * automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). 
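As a minimal illustration of the WER metric mentioned above, using the `evaluate` library; the predictions here are stand-ins, not model output.

```python
# Minimal sketch of WER computation.
import evaluate

wer = evaluate.load("wer")
predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]
print(wer.compute(predictions=predictions, references=references))  # ~0.167
```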
The accented English subset can also be used for research in ASR for accented speech (15 L2 accents). ### Languages VoxPopuli contains labelled (transcribed) data for 18 languages: | Language | Code | Transcribed Hours | Transcribed Speakers | Transcribed Tokens | |:---:|:---:|:---:|:---:|:---:| | English | En | 543 | 1313 | 4.8M | | German | De | 282 | 531 | 2.3M | | French | Fr | 211 | 534 | 2.1M | | Spanish | Es | 166 | 305 | 1.6M | | Polish | Pl | 111 | 282 | 802K | | Italian | It | 91 | 306 | 757K | | Romanian | Ro | 89 | 164 | 739K | | Hungarian | Hu | 63 | 143 | 431K | | Czech | Cs | 62 | 138 | 461K | | Dutch | Nl | 53 | 221 | 488K | | Finnish | Fi | 27 | 84 | 160K | | Croatian | Hr | 43 | 83 | 337K | | Slovak | Sk | 35 | 96 | 270K | | Slovene | Sl | 10 | 45 | 76K | | Estonian | Et | 3 | 29 | 18K | | Lithuanian | Lt | 2 | 21 | 10K | | Total | | 1791 | 4295 | 15M | The transcribed accented speech data covers 15 different L2 accents: | Accent | Code | Transcribed Hours | Transcribed Speakers | |:---:|:---:|:---:|:---:| | Dutch | en_nl | 3.52 | 45 | | German | en_de | 3.52 | 84 | | Czech | en_cs | 3.30 | 26 | | Polish | en_pl | 3.23 | 33 | | French | en_fr | 2.56 | 27 | | Hungarian | en_hu | 2.33 | 23 | | Finnish | en_fi | 2.18 | 20 | | Romanian | en_ro | 1.85 | 27 | | Slovak | en_sk | 1.46 | 17 | | Spanish | en_es | 1.42 | 18 | | Italian | en_it | 1.11 | 15 | | Estonian | en_et | 1.08 | 6 | | Lithuanian | en_lt | 0.65 | 7 | | Croatian | en_hr | 0.42 | 9 | | Slovene | en_sl | 0.25 | 7 | ## Dataset Structure ### Data Instances ```python { 'audio_id': '20180206-0900-PLENARY-15-hr_20180206-16:10:06_5', 'language': 11, # "hr" 'audio': { 'path': '/home/polina/.cache/huggingface/datasets/downloads/extracted/44aedc80bb053f67f957a5f68e23509e9b181cc9e30c8030f110daaedf9c510e/train_part_0/20180206-0900-PLENARY-15-hr_20180206-16:10:06_5.wav', 'array': array([-0.01434326, -0.01055908, 0.00106812, ..., 0.00646973], dtype=float32), 'sampling_rate': 16000 }, 'raw_text': '', 'normalized_text': 'pošast genitalnog sakaćenja žena u europi tek je jedna od manifestacija takve štetne politike.', 'gender': 'female', 'speaker_id': '119431', 'is_gold_transcript': True, 'accent': 'None' } ``` ### Data Fields * `audio_id` (string) - id of audio segment * `language` (datasets.ClassLabel) - numerical id of the language of the audio segment * `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally). * `raw_text` (string) - original (orthographic) audio segment text * `normalized_text` (string) - normalized audio segment transcription * `gender` (string) - gender of speaker * `speaker_id` (string) - id of speaker * `is_gold_transcript` (bool) - ? * `accent` (string) - type of accent, for example "en_lt", if applicable, else "None". ### Data Splits All configs (languages) except for accented English contain data in three splits: train, validation and test. The accented English `en_accented` config contains only a test split.
## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data The raw data is collected from 2009-2020 [European Parliament event recordings](https://multimedia.europarl.europa.eu/en/home). #### Initial Data Collection and Normalization The VoxPopuli transcribed set comes from aligning the full-event source speech audio with the transcripts for plenary sessions. Official timestamps are available for locating speeches by speaker in the full session, but they are frequently inaccurate, resulting in truncation of the speech or mixture of fragments from the preceding or the succeeding speeches. To calibrate the original timestamps, we perform speaker diarization (SD) on the full-session audio using pyannote.audio (Bredin et al., 2020) and adopt the nearest SD timestamps (by L1 distance to the original ones) instead for segmentation. Full-session audios are segmented into speech paragraphs by speaker, each of which has a transcript available. The speech paragraphs have an average duration of 197 seconds, which is too long for direct use in model training. We hence further segment these paragraphs into utterances with a maximum duration of 20 seconds. We leverage speech recognition (ASR) systems to force-align speech paragraphs to the given transcripts. The ASR systems are TDS models (Hannun et al., 2019) trained with ASG criterion (Collobert et al., 2016) on audio tracks from in-house deidentified video data. The resulting utterance segments may have incorrect transcriptions due to incomplete raw transcripts or inaccurate ASR force-alignment. We use the predictions from the same ASR systems as references and filter the candidate segments by a maximum threshold of 20% character error rate (CER). #### Who are the source language producers? Speakers are participants in European Parliament events; many of them are EU officials. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases The speaker gender distribution is imbalanced: the percentage of female speakers is mostly below 50% across languages, with a minimum of 15% for the Lithuanian language data. VoxPopuli includes all available speeches from the 2009-2020 EP events without any selection on topics or speakers. The speech contents represent the standpoints of the speakers in the EP events, many of whom are EU officials. ### Other Known Limitations ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is distributed under the CC0 license; see also the [European Parliament's legal notice](https://www.europarl.europa.eu/legal-notice/en/) for the raw data.
### Citation Information Please cite this paper: ```bibtex @inproceedings{wang-etal-2021-voxpopuli, title = "{V}ox{P}opuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation", author = "Wang, Changhan and Riviere, Morgane and Lee, Ann and Wu, Anne and Talnikar, Chaitanya and Haziza, Daniel and Williamson, Mary and Pino, Juan and Dupoux, Emmanuel", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.80", pages = "993--1003", } ``` ### Contributions Thanks to [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
jyang-ca/so101_test_grab_place
jyang-ca
2025-06-14T15:48:48Z
0
0
[ "task_categories:robotics", "license:apache-2.0", "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:timeseries", "modality:video", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us", "LeRobot", "so101", "tutorial" ]
[ "robotics" ]
2025-06-14T15:42:54Z
0
--- license: apache-2.0 task_categories: - robotics tags: - LeRobot - so101 - tutorial configs: - config_name: default data_files: data/*/*.parquet --- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). ## Dataset Description - **Homepage:** [More Information Needed] - **Paper:** [More Information Needed] - **License:** apache-2.0 ## Dataset Structure [meta/info.json](meta/info.json): ```json { "codebase_version": "v2.1", "robot_type": "so101", "total_episodes": 51, "total_frames": 23783, "total_tasks": 1, "total_videos": 102, "total_chunks": 1, "chunks_size": 1000, "fps": 30, "splits": { "train": "0:51" }, "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet", "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4", "features": { "action": { "dtype": "float32", "shape": [ 6 ], "names": [ "main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper" ] }, "observation.state": { "dtype": "float32", "shape": [ 6 ], "names": [ "main_shoulder_pan", "main_shoulder_lift", "main_elbow_flex", "main_wrist_flex", "main_wrist_roll", "main_gripper" ] }, "observation.images.phone": { "dtype": "video", "shape": [ 480, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 480, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "observation.images.Orbbec": { "dtype": "video", "shape": [ 880, 640, 3 ], "names": [ "height", "width", "channels" ], "info": { "video.height": 880, "video.width": 640, "video.codec": "av1", "video.pix_fmt": "yuv420p", "video.is_depth_map": false, "video.fps": 30, "video.channels": 3, "has_audio": false } }, "timestamp": { "dtype": "float32", "shape": [ 1 ], "names": null }, "frame_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "episode_index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "index": { "dtype": "int64", "shape": [ 1 ], "names": null }, "task_index": { "dtype": "int64", "shape": [ 1 ], "names": null } } } ``` ## Citation **BibTeX:** ```bibtex [More Information Needed] ```
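Since the config above points data_files directly at the episode parquet files, they can also be read with plain `datasets` (a minimal sketch; the "train" split name is the datasets default for an unlabelled data_files pattern, and video-aware loading would use the lerobot package instead).

```python
# Minimal sketch: read the raw episode tables without the lerobot tooling.
from datasets import load_dataset

ds = load_dataset("jyang-ca/so101_test_grab_place", split="train")
print(ds.column_names)  # action, observation.state, timestamp, frame_index, ...
print(ds[0]["action"])  # six float32 joint targets for the so101 arm
```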
Doae16/Arabic_summaries_batch21
Doae16
2025-03-06T12:59:07Z
16
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-03-05T20:29:25Z
0
--- dataset_info: features: - name: id dtype: int64 - name: text dtype: string - name: summary dtype: string splits: - name: train num_bytes: 11204825 num_examples: 2100 download_size: 5266634 dataset_size: 11204825 configs: - config_name: default data_files: - split: train path: data/train-* ---
Shivak666/testing_3
Shivak666
2025-02-20T08:41:45Z
7
0
[ "license:mit", "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-02-20T08:37:01Z
0
--- license: mit configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: '@context' struct: - name: '@language' dtype: string - name: '@vocab' dtype: string - name: citeAs dtype: string - name: column dtype: string - name: conformsTo dtype: string - name: cr dtype: string - name: rai dtype: string - name: data struct: - name: '@id' dtype: string - name: '@type' dtype: string - name: dataType struct: - name: '@id' dtype: string - name: '@type' dtype: string - name: dct dtype: string - name: examples struct: - name: '@id' dtype: string - name: '@type' dtype: string - name: extract dtype: string - name: field dtype: string - name: fileProperty dtype: string - name: fileObject dtype: string - name: fileSet dtype: string - name: format dtype: string - name: includes dtype: string - name: isLiveDataset dtype: string - name: jsonPath dtype: string - name: key dtype: string - name: md5 dtype: string - name: parentField dtype: string - name: path dtype: string - name: recordSet dtype: string - name: references dtype: string - name: regex dtype: string - name: repeated dtype: string - name: replace dtype: string - name: sc dtype: string - name: separator dtype: string - name: source dtype: string - name: subField dtype: string - name: transform dtype: string - name: '@type' dtype: string - name: name dtype: string - name: description dtype: string - name: conformsTo dtype: string - name: license dtype: string - name: url dtype: string - name: version dtype: string - name: distribution list: - name: '@type' dtype: string - name: '@id' dtype: string - name: name dtype: string - name: description dtype: string - name: contentUrl dtype: string - name: encodingFormat dtype: string - name: sha256 dtype: string - name: containedIn struct: - name: '@id' dtype: string - name: includes dtype: string - name: recordSet list: - name: '@type' dtype: string - name: '@id' dtype: string - name: name dtype: string - name: dataType dtype: string - name: key struct: - name: '@id' dtype: string - name: field list: - name: '@type' dtype: string - name: '@id' dtype: string - name: description dtype: string - name: dataType dtype: string - name: name dtype: string - name: source struct: - name: fileSet struct: - name: '@id' dtype: string - name: extract struct: - name: column dtype: string - name: data struct: - name: question_splits/split_name dtype: string splits: - name: train num_bytes: 1870 num_examples: 1 download_size: 37531 dataset_size: 1870 ---
Mineru/ETH
Mineru
2024-11-30T00:44:50Z
27
0
[ "license:cc-by-sa-4.0", "size_categories:1M<n<10M", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-11-30T00:44:24Z
0
--- license: cc-by-sa-4.0 ---
nadavo11/contrastive-seams
nadavo11
2025-09-29T07:05:28Z
0
0
[ "size_categories:n<1K", "format:parquet", "modality:image", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-09-29T07:05:00Z
0
--- dataset_info: features: - name: image dtype: image - name: seam dtype: image - name: is_fake dtype: bool - name: method dtype: string - name: image_path dtype: string - name: seam_path dtype: string splits: - name: test num_bytes: 6054332 num_examples: 50 - name: train num_bytes: 23758371 num_examples: 200 download_size: 29825732 dataset_size: 29812703 configs: - config_name: default data_files: - split: test path: data/test-* - split: train path: data/train-* ---
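A minimal sketch of browsing the image/seam pairs described above.

```python
# Minimal sketch; image and seam decode to PIL images.
from datasets import load_dataset

ds = load_dataset("nadavo11/contrastive-seams", split="train")
row = ds[0]
print(row["image"].size, row["seam"].size)
print(row["is_fake"], row["method"])
```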
liu-nlp/estonian-blimp-nom-sg-to-nom-pl-experimental
liu-nlp
2025-09-19T16:12:32Z
84
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-09-19T16:12:29Z
0
--- dataset_info: features: - name: original dtype: string - name: corrupted dtype: string splits: - name: train num_bytes: 357969 num_examples: 1316 download_size: 242847 dataset_size: 357969 configs: - config_name: default data_files: - split: train path: data/train-* ---
adaadig/candorNEW_P7tags
adaadig
2025-02-16T05:45:13Z
8
0
[ "size_categories:1K<n<10K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-02-16T05:44:40Z
0
--- dataset_info: features: - name: audio_filepaths dtype: string - name: avg_log_prob dtype: float64 - name: whisperbase_transcript_list dtype: string - name: confidence_score dtype: float64 - name: turn_id dtype: string - name: speaker_id dtype: string - name: file_id dtype: string - name: folderpart dtype: string - name: dialogueid dtype: string - name: adhoc_description dtype: string - name: text dtype: string - name: repetition_score dtype: float64 - name: fillers_score dtype: float64 - name: utterance_pitch_mean dtype: float32 - name: utterance_pitch_std dtype: float32 - name: snr dtype: float64 - name: c50 dtype: float64 - name: speaking_rate dtype: string - name: phonemes dtype: string - name: noise dtype: string - name: reverberation dtype: string - name: speech_monotony dtype: string splits: - name: train num_bytes: 8517515 num_examples: 9932 download_size: 3020421 dataset_size: 8517515 configs: - config_name: default data_files: - split: train path: data/train-* ---
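A minimal sketch of filtering utterances by the acoustic annotations listed above; the 40 dB cutoff is an arbitrary illustration, not a value from the card.

```python
# Minimal sketch: keep only utterances above an SNR threshold.
from datasets import load_dataset

ds = load_dataset("adaadig/candorNEW_P7tags", split="train")
clean = ds.filter(lambda row: row["snr"] is not None and row["snr"] > 40.0)
print(len(clean), "of", len(ds), "utterances pass the threshold")
```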
Taywon/HH_sycophancy_biased_15k
Taywon
2024-12-05T07:26:18Z
15
0
[ "size_categories:10K<n<100K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-12-05T07:23:38Z
0
--- dataset_info: features: - name: chosen dtype: string - name: rejected dtype: string - name: flipped dtype: bool splits: - name: train num_bytes: 51774507 num_examples: 15000 - name: test num_bytes: 3580088 num_examples: 1071 download_size: 31290366 dataset_size: 55354595 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
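A minimal sketch of loading the preference pairs; that `flipped` marks rows whose chosen/rejected order was altered is an assumption based on the field name, not stated in the card.

```python
# Minimal sketch: measure the fraction of flipped pairs.
from datasets import load_dataset

ds = load_dataset("Taywon/HH_sycophancy_biased_15k", split="train")
flipped_fraction = sum(ds["flipped"]) / len(ds)
print(f"{flipped_fraction:.1%} of preference pairs are flipped")
```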
OpenPipe/best-hn-comment-pairs-v2
OpenPipe
2024-10-17T18:37:30Z
31
1
[ "size_categories:10K<n<100K", "format:parquet", "modality:tabular", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-10-17T18:37:22Z
0
--- dataset_info: features: - name: chosen dtype: int64 - name: rejected dtype: int64 - name: chosen_rank dtype: int64 - name: rejected_rank dtype: int64 - name: top_level_parent dtype: int64 - name: split dtype: large_string - name: chosen_prompt dtype: large_string - name: rejected_prompt dtype: large_string splits: - name: train num_bytes: 115409826 num_examples: 30000 - name: validation num_bytes: 1964047 num_examples: 500 - name: test num_bytes: 3860648 num_examples: 1000 download_size: 68080869 dataset_size: 121234521 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* ---
mastermani305/ps-story-content
mastermani305
2024-10-12T09:30:14Z
17
0
[ "size_categories:n<1K", "format:parquet", "modality:text", "library:datasets", "library:pandas", "library:mlcroissant", "library:polars", "region:us" ]
[]
2024-10-12T09:30:09Z
0
--- dataset_info: features: - name: url dtype: string - name: part_no dtype: int64 - name: content_as_array_of_para sequence: string - name: content dtype: string - name: chapter_no dtype: string splits: - name: train num_bytes: 21025439 num_examples: 294 download_size: 7276519 dataset_size: 21025439 configs: - config_name: default data_files: - split: train path: data/train-* ---
ohgnues/domain_fineweb-edu
ohgnues
2025-03-08T11:58:47Z
26
0
[ "size_categories:1M<n<10M", "format:parquet", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "region:us" ]
[]
2025-03-08T11:43:17Z
0
--- dataset_info: - config_name: Business_and_Industrial features: - name: text dtype: string splits: - name: train num_bytes: 4752343210 num_examples: 1000000 download_size: 2717936517 dataset_size: 4752343210 - config_name: Computers_and_Electronics features: - name: text dtype: string splits: - name: train num_bytes: 4208795253 num_examples: 1000000 download_size: 2393073581 dataset_size: 4208795253 - config_name: Health features: - name: text dtype: string splits: - name: train num_bytes: 4670790951 num_examples: 1000000 download_size: 2705536051 dataset_size: 4670790951 - config_name: Law_and_Government features: - name: text dtype: string splits: - name: train num_bytes: 6342709607 num_examples: 1000000 download_size: 3600415941 dataset_size: 6342709607 - config_name: Science features: - name: text dtype: string splits: - name: train num_bytes: 4394530292 num_examples: 1000000 download_size: 2589361241 dataset_size: 4394530292 configs: - config_name: Business_and_Industrial data_files: - split: train path: Business_and_Industrial/train-* - config_name: Computers_and_Electronics data_files: - split: train path: Computers_and_Electronics/train-* - config_name: Health data_files: - split: train path: Health/train-* - config_name: Law_and_Government data_files: - split: train path: Law_and_Government/train-* - config_name: Science data_files: - split: train path: Science/train-* ---
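A minimal sketch for loading a single domain; each domain is a separate config, so the config name must be passed explicitly, and streaming avoids the multi-gigabyte downloads listed above.

```python
# Minimal sketch: stream one domain config.
from datasets import load_dataset

science = load_dataset("ohgnues/domain_fineweb-edu", "Science", split="train", streaming=True)
print(next(iter(science))["text"][:200])
```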
parler-tts/libritts_r_filtered
parler-tts
2024-08-06T16:45:54Z
1736
20
[ "task_categories:text-to-speech", "task_categories:automatic-speech-recognition", "language:en", "license:cc-by-4.0", "size_categories:100K<n<1M", "format:parquet", "modality:audio", "modality:text", "library:datasets", "library:dask", "library:mlcroissant", "library:polars", "arxiv:2305.18802", "arxiv:2406.07969", "region:us" ]
[ "text-to-speech", "automatic-speech-recognition" ]
2024-06-13T15:48:39Z
1
---
license: cc-by-4.0
task_categories:
- text-to-speech
- automatic-speech-recognition
language:
- en
size_categories:
- 100K<n<1M
dataset_info:
- config_name: clean
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 24000
  - name: text_normalized
    dtype: string
  - name: text_original
    dtype: string
  - name: speaker_id
    dtype: string
  - name: path
    dtype: string
  - name: chapter_id
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: dev.clean
    num_bytes: 1506311977.8882804
    num_examples: 5589
  - name: test.clean
    num_bytes: 1432099582.6705585
    num_examples: 4689
  - name: train.clean.100
    num_bytes: 8985618654.720787
    num_examples: 32215
  - name: train.clean.360
    num_bytes: 31794257100.91056
    num_examples: 112326
  download_size: 44461321972
  dataset_size: 43718287316.190186
- config_name: other
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 24000
  - name: text_normalized
    dtype: string
  - name: text_original
    dtype: string
  - name: speaker_id
    dtype: string
  - name: path
    dtype: string
  - name: chapter_id
    dtype: string
  - name: id
    dtype: string
  splits:
  - name: dev.other
    num_bytes: 1042714063.4789225
    num_examples: 4342
  - name: test.other
    num_bytes: 1061489621.2561874
    num_examples: 4716
  - name: train.other.500
    num_bytes: 50718457351.73659
    num_examples: 194626
  download_size: 54153699917
  dataset_size: 52822661036.471695
configs:
- config_name: clean
  data_files:
  - split: dev.clean
    path: clean/dev.clean-*
  - split: test.clean
    path: clean/test.clean-*
  - split: train.clean.100
    path: clean/train.clean.100-*
  - split: train.clean.360
    path: clean/train.clean.360-*
- config_name: other
  data_files:
  - split: dev.other
    path: other/dev.other-*
  - split: test.other
    path: other/test.other-*
  - split: train.other.500
    path: other/train.other.500-*
pretty_name: Filtered LibriTTS-R
---

# Dataset Card for Filtered LibriTTS-R

This is a filtered version of [LibriTTS-R](https://huggingface.co/datasets/mythicinfinity/libritts_r). It has been filtered based on two sources:

1. The LibriTTS-R paper [1], which lists samples for which speech restoration failed.
2. The LibriTTS-P [2] list of [excluded speakers](https://github.com/line/LibriTTS-P/blob/main/data/excluded_spk_list.txt) for which multiple speakers were detected.

LibriTTS-R [1] is a sound-quality-improved version of the [LibriTTS corpus](http://www.openslr.org/60/), a multi-speaker English corpus of approximately 585 hours of read English speech at a 24 kHz sampling rate, published in 2019.

## Usage

### Example

Loading the `clean` config with only the `train.clean.360` split:

```py
from datasets import load_dataset

load_dataset("parler-tts/libritts_r_filtered", "clean", split="train.clean.360")
```

Streaming is also supported:

```py
from datasets import load_dataset

load_dataset("parler-tts/libritts_r_filtered", "clean", streaming=True)
```
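You can also resample the 24 kHz audio on the fly. A minimal sketch (assuming a recent version of `datasets` with audio decoding installed; the 16 kHz target is only an example):

```py
from datasets import load_dataset, Audio

# Stream a single example and cast the audio column to 16 kHz,
# e.g. to feed an ASR model that expects 16 kHz input.
ds = load_dataset("parler-tts/libritts_r_filtered", "clean", split="dev.clean", streaming=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
sample = next(iter(ds))
print(sample["text_normalized"], sample["audio"]["sampling_rate"])
```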
- "dev": only the "dev.clean" split (good for testing the dataset quickly) - "clean": contains only "clean" splits - "other": contains only "other" splits - "all": contains only "all" splits ### Columns ``` { "audio": datasets.Audio(sampling_rate=24_000), "text_normalized": datasets.Value("string"), "text_original": datasets.Value("string"), "speaker_id": datasets.Value("string"), "path": datasets.Value("string"), "chapter_id": datasets.Value("string"), "id": datasets.Value("string"), } ``` ### Example Row ``` { 'audio': { 'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/5551a515e85b9e463062524539c2e1cb52ba32affe128dffd866db0205248bdd/LibriTTS_R/dev-clean/3081/166546/3081_166546_000028_000002.wav', 'array': ..., 'sampling_rate': 24000 }, 'text_normalized': 'How quickly he disappeared!"', 'text_original': 'How quickly he disappeared!"', 'speaker_id': '3081', 'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/5551a515e85b9e463062524539c2e1cb52ba32affe128dffd866db0205248bdd/LibriTTS_R/dev-clean/3081/166546/3081_166546_000028_000002.wav', 'chapter_id': '166546', 'id': '3081_166546_000028_000002' } ``` ## Dataset Details ### Dataset Description - **License:** CC BY 4.0 ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Homepage:** https://www.openslr.org/141/ - **Paper:** https://arxiv.org/abs/2305.18802 ## Citation <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> ``` @ARTICLE{Koizumi2023-hs, title = "{LibriTTS-R}: A restored multi-speaker text-to-speech corpus", author = "Koizumi, Yuma and Zen, Heiga and Karita, Shigeki and Ding, Yifan and Yatabe, Kohei and Morioka, Nobuyuki and Bacchiani, Michiel and Zhang, Yu and Han, Wei and Bapna, Ankur", abstract = "This paper introduces a new speech dataset called ``LibriTTS-R'' designed for text-to-speech (TTS) use. It is derived by applying speech restoration to the LibriTTS corpus, which consists of 585 hours of speech data at 24 kHz sampling rate from 2,456 speakers and the corresponding texts. The constituent samples of LibriTTS-R are identical to those of LibriTTS, with only the sound quality improved. Experimental results show that the LibriTTS-R ground-truth samples showed significantly improved sound quality compared to those in LibriTTS. In addition, neural end-to-end TTS trained with LibriTTS-R achieved speech naturalness on par with that of the ground-truth samples. The corpus is freely available for download from \textbackslashurl\{http://www.openslr.org/141/\}.", month = may, year = 2023, copyright = "http://creativecommons.org/licenses/by-nc-nd/4.0/", archivePrefix = "arXiv", primaryClass = "eess.AS", eprint = "2305.18802" } ``` ``` @misc{kawamura2024librittspcorpusspeakingstyle, title={LibriTTS-P: A Corpus with Speaking Style and Speaker Identity Prompts for Text-to-Speech and Style Captioning}, author={Masaya Kawamura and Ryuichi Yamamoto and Yuma Shirahata and Takuya Hasumi and Kentaro Tachibana}, year={2024}, eprint={2406.07969}, archivePrefix={arXiv}, primaryClass={eess.AS}, url={https://arxiv.org/abs/2406.07969}, } ```