Dataset Creation

#2 opened by ahorazhd

Great work on this dataset! I'm particularly interested in understanding the methodology behind its creation. Given that this is a well-structured dataset with approximately 1 million records, I'm curious about the following:

1. Was any Large Language Model (LLM) used in the creation process?
2. How did you handle the specific challenges of Persian text processing, particularly:
   - the Ezafe construction (اضافه)
   - word boundary issues and concatenation rules
3. Could you share details about the validation process that ensured such high accuracy in the phonetic transcriptions?

Thanks for your interest.

One of the biggest challenges in pre-processing Persian is that the language is written without diacritics (there is no اعراب گذاری, i.e., no vowel marking). There is also the problem of the linking ye (ی میانجی), which I addressed by using Hazm to tag the role of each word in a sentence and then developing an algorithm to insert the missing markers. It is not as accurate as I would like, mostly because Hazm's POS tagger does not perform well on out-of-domain text, but it is better than nothing.
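For illustration, here is a minimal sketch of that kind of tag-driven insertion. It assumes a recent Hazm release whose POS model marks Ezafe-bearing words with an ",EZ" suffix; the model path and the simplified realization rules are assumptions, not the exact algorithm described above.

```python
# Minimal sketch of tag-driven Ezafe / linking-ye insertion; not the exact algorithm
# described above. Assumes hazm >= 0.10, whose bundled POS model marks Ezafe-bearing
# words with an ",EZ" suffix (e.g. "NOUN,EZ"); the model path is a local-file assumption.
from hazm import Normalizer, POSTagger, word_tokenize

normalizer = Normalizer()
tagger = POSTagger(model="pos_tagger.model")  # path to the downloaded Hazm model (assumed)


def add_ezafe_markers(sentence: str) -> list[str]:
    """Append an explicit Ezafe marker to every token the tagger flags as Ezafe-bearing."""
    tokens = word_tokenize(normalizer.normalize(sentence))
    marked = []
    for word, tag in tagger.tag(tokens):
        if not tag.endswith(",EZ"):
            marked.append(word)
        elif word.endswith("ه"):
            marked.append(word + "‌ی")   # silent-he final: ZWNJ + ye (e.g. خانه -> خانه‌ی)
        elif word.endswith(("ا", "و")):
            marked.append(word + "ی")    # long-vowel final: attach ye (e.g. صدا -> صدای)
        else:
            marked.append(word + "ِ")    # consonant final: explicit kasra /e/
    return marked

# Note: this over-simplifies he-final words whose "ه" is pronounced /h/ (e.g. ماه),
# which is exactly the kind of ambiguity that makes Persian text processing hard.
```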

I have trained an ALBERT model and also an auto-regressive LLM that handles phoneme-to-grapheme (P2G) conversion using this dataset. I have found that converting from phonemes to graphemes is a much easier task for LLMs to model than the reverse direction. My ultimate goal is to create a proper G2P system, because without one the entire field of Persian speech processing will remain stagnant. (The good news is that I have almost solved this problem.)
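To make the P2G framing concrete, here is a sketch of how each record could be turned into a prompt/target pair for fine-tuning a causal LM. The column names, prompt template, and base tokenizer are illustrative assumptions, not the setup actually used for the models mentioned above.

```python
# Sketch of framing phoneme-to-grapheme (P2G) conversion as causal-LM fine-tuning.
# Column names ("phonemes", "text"), the prompt template, and the base tokenizer are
# illustrative assumptions; this is not the training code behind the models above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in base model, purely illustrative

PROMPT = "phonemes: {phonemes}\ntext: "


def to_pair(record: dict) -> dict:
    """Build a prompt/completion pair; swapping the fields gives the harder G2P direction."""
    return {
        "prompt": PROMPT.format(phonemes=record["phonemes"]),  # assumed column name
        "completion": record["text"],                           # assumed column name
    }


def tokenize_pair(pair: dict, max_length: int = 256) -> dict:
    """Tokenize for causal-LM training, masking the prompt so loss is only on the graphemes."""
    prompt_ids = tokenizer(pair["prompt"], add_special_tokens=False)["input_ids"]
    target_ids = tokenizer(pair["completion"] + tokenizer.eos_token,
                           add_special_tokens=False)["input_ids"]
    input_ids = (prompt_ids + target_ids)[:max_length]
    labels = ([-100] * len(prompt_ids) + target_ids)[:max_length]
    return {"input_ids": input_ids, "attention_mask": [1] * len(input_ids), "labels": labels}
```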

Overall, I wouldn't call this dataset "accurate" at all. It is a legacy artifact that I decided to upload on the off chance that it might help some people, so I wouldn't recommend using it for any downstream task.

Thanks for your explanation.
