Model Card for ArTST_v2

ArTST (ASR task)

ArTST model fine-tuned for automatic speech recognition (speech-to-text) on the MGB2 dataset.

Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

How to Get Started with the Model

import soundfile as sf
import torch
from transformers import (
    SpeechT5ForSpeechToText,
    SpeechT5Processor,
    SpeechT5Tokenizer,
)

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the tokenizer, processor, and fine-tuned ASR model
tokenizer = SpeechT5Tokenizer.from_pretrained("mbzuai/artst_asr_v2")
processor = SpeechT5Processor.from_pretrained("mbzuai/artst_asr_v2", tokenizer=tokenizer)
model = SpeechT5ForSpeechToText.from_pretrained("mbzuai/artst_asr_v2").to(device)

# Read a 16 kHz mono audio file
audio, sr = sf.read("audio.wav")

inputs = processor(audio=audio, sampling_rate=sr, return_tensors="pt")
predicted_ids = model.generate(**inputs.to(device), max_length=250)

transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(transcription[0])
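
The generate call above uses greedy decoding with a length cap. Standard 🤗 transformers generation options such as beam search can be passed as well; the settings below are illustrative values, not tuned defaults for this checkpoint.

# Beam-search decoding (illustrative settings, not tuned for MGB2)
predicted_ids = model.generate(
    **inputs.to(device),
    max_length=250,
    num_beams=5,          # keep 5 hypotheses instead of greedy decoding
    early_stopping=True,  # stop once all beams have finished
)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(transcription[0])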

Usage with Pipeline

import librosa
import soundfile as sf
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "MBZUAI/artst_asr_v2"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id).to(device)
pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    torch_dtype=torch_dtype,
    device=device,
)

# The model expects 16 kHz audio; resample if necessary
audio, sr = sf.read("path/to/audio/file")
if sr != 16000:
    audio = librosa.resample(audio, orig_sr=sr, target_sr=16000)

result = pipe(audio)
print(result["text"])
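
The pipeline also accepts audio file paths directly (decoded with ffmpeg, which must be installed) and can transcribe several recordings in one call. The file names below are placeholders.

# Transcribe several files in one call (placeholder file names)
results = pipe(["recording1.wav", "recording2.wav"], batch_size=2)
for r in results:
    print(r["text"])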

Citation

BibTeX:

@misc{djanibekov2024dialectalcoveragegeneralizationarabic,
      title={Dialectal Coverage And Generalization in Arabic Speech Recognition}, 
      author={Amirbek Djanibekov and Hawau Olamide Toyin and Raghad Alshalan and Abdullah Alitr and Hanan Aldarmaki},
      year={2024},
      eprint={2411.05872},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2411.05872}, 
}

@inproceedings{toyin-etal-2023-artst,
    title = "{A}r{TST}: {A}rabic Text and Speech Transformer",
    author = "Toyin, Hawau  and
      Djanibekov, Amirbek  and
      Kulkarni, Ajinkya  and
      Aldarmaki, Hanan",
    booktitle = "Proceedings of ArabicNLP 2023",
    month = dec,
    year = "2023",
    address = "Singapore (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.arabicnlp-1.5",
    doi = "10.18653/v1/2023.arabicnlp-1.5",
    pages = "41--51",
}