---
task_categories:
- translation
- text2text-generation
language:
- hi
- or
- bn
- kn
- te
- ta
- ml
- gu
- pa
- sa
- mr
- ne
- bho
- mai
pretty_name: IAST<->Indic Seq2Seq Corpus for Indic languages
size_categories:
- 1M<n<10M
configs:
- config_name: sanskrit
data_files:
- split: train
path: files/sanskrit/sanskrit_wikidata_iast.csv
- config_name: odia
data_files:
- split: train
path: files/odia/odia_wikidata_iast.csv
- config_name: hindi
data_files:
- split: train
path: files/hindi/hindi_wikidata_iast.csv
- config_name: bengali
data_files:
- split: train
path: files/bengali/bengali_wikidata_iast.csv
- config_name: tamil
data_files:
- split: train
path: files/tamil/tamil_wikidata_iast.csv
- config_name: telugu
data_files:
- split: train
path: files/telugu/telugu_wikidata_iast.csv
- config_name: kannada
data_files:
- split: train
path: files/kannada/kannada_wikidata_iast.csv
- config_name: malayalam
data_files:
- split: train
path: files/malayalam/malayalam_wikidata_iast.csv
- config_name: punjabi
data_files:
- split: train
path: files/punjabi/punjabi_wikidata_iast.csv
- config_name: gujarati
data_files:
- split: train
path: files/gujarati/gujarati_wikidata_iast.csv
- config_name: marathi
data_files:
- split: train
path: files/marathi/marathi_wikidata_iast.csv
- config_name: nepali
data_files:
- split: train
path: files/nepali/nepali_wikidata_iast.csv
- config_name: bhojpuri
data_files:
- split: train
path: files/bhojpuri/bhojpuri_wikidata_iast.csv
- config_name: maithili
data_files:
- split: train
path: files/maithili/maithili_wikidata_iast.csv
license: cc-by-sa-3.0
---

## Dataset Details
- The dataset was created by transliterating existing datasets to IAST with an IAST transliteration library (a sketch of this step follows the list below).
- Languages: Sanskrit, Hindi, Odia, Bengali, Tamil, Telugu, Kannada, Malayalam, Gujarati, Punjabi, Marathi, Nepali, Bhojpuri, and Maithili.
- Pre-existing dataset source(s): Wikipedia.
- The `source`, `target`, `source_lang`, and `target_lang` columns are common across all subsets (see the loading example below).
- This is just a hobby dataset, but it should abide by the licenses of the input dataset(s).
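The card does not name the transliteration library that was used. As an illustration only, the sketch below shows how a Devanagari string can be converted to IAST with the `indic_transliteration` package; that library choice is an assumption, not necessarily the tool used to build this corpus.

```python
# Hypothetical example: the exact library used to build this corpus is not
# stated above; `indic_transliteration` is assumed here for illustration.
from indic_transliteration import sanscript
from indic_transliteration.sanscript import transliterate

devanagari_text = "धर्मक्षेत्रे कुरुक्षेत्रे समवेता युयुत्सवः"

# Convert Devanagari source text to its IAST romanisation.
iast_text = transliterate(devanagari_text, sanscript.DEVANAGARI, sanscript.IAST)
print(iast_text)  # dharmakṣetre kurukṣetre samavetā yuyutsavaḥ
```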
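A minimal sketch of loading one language subset with the 🤗 `datasets` library, based on the config names and `train` split declared in the YAML header above. `"<repo_id>"` is a placeholder for this dataset's Hub repository id.

```python
from datasets import load_dataset

# "<repo_id>" is a placeholder; substitute this dataset's actual Hub id.
ds = load_dataset("<repo_id>", "sanskrit", split="train")

print(ds.column_names)  # expected: source, target, source_lang, target_lang
print(ds[0])            # one IAST<->Indic script pair
```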