📝 Question Answering RoBERTa Model

This repository demonstrates how to fine-tune and quantize the deepset/roberta-base-squad2 model for extractive Question Answering using the SQuAD dataset from the Hugging Face Hub.
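
As a quick sanity check before any fine-tuning, the base checkpoint can answer questions out of the box via the transformers pipeline API. This is a minimal sketch; the question and context below are illustrative placeholders:

from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

# Any question/context pair works; this one is just a placeholder
result = qa(
    question="What does an extractive QA model predict?",
    context="Extractive QA models predict the start and end of the answer span within a given context.",
)
print(result["answer"], result["score"])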


🚀 Model Overview

  • Base Model: deepset/roberta-base-squad2
  • Task: Extractive Question Answering
  • Precision: Supports FP32, FP16 (half-precision), and INT8 (quantized)
  • Dataset: squad — Stanford Question Answering Dataset (Hugging Face Datasets)

📦 Dataset Used

We use the squad dataset from the Hugging Face Hub. First, install the datasets library:

pip install datasets

Load the Dataset:

from datasets import load_dataset

dataset = load_dataset("squad")

Load Model & Tokenizer:


from transformers import AutoModelForQuestionAnswering, AutoTokenizer, TrainingArguments, Trainer
from datasets import load_dataset

# Load the pre-trained extractive QA model and its tokenizer
model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")
tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")

# Load the SQuAD dataset (train and validation splits)
dataset = load_dataset("squad")

✅ Results

Feature             Benefit
FP16 Fine-Tuning    Faster Training + Lower Memory
INT8 Quantization   Smaller Model + Fast Inference
Dataset             Stanford QA Dataset (SQuAD)
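
The INT8 row above refers to post-training quantization. Below is a minimal sketch assuming PyTorch dynamic quantization, which converts the model's linear layers to 8-bit weights for faster CPU inference; this is one common approach, not necessarily the exact procedure used for this checkpoint, and the output filename is illustrative:

import torch
from transformers import AutoModelForQuestionAnswering

# Start from the FP32 checkpoint (or a fine-tuned one)
model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")

# Replace nn.Linear weights with INT8 versions; activations stay in FP32
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Save the quantized weights (illustrative filename)
torch.save(quantized_model.state_dict(), "roberta-base-squad2-int8.pt")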
