
Science Q&A Fine-tuned Model

This model is a fine-tuned version of google/flan-t5-small for science question answering.

Model Details

  • Base Model: google/flan-t5-small
  • Task: Question Answering
  • Domain: Science
  • Training Data: Science content from PDF documents
  • Model Type: Fine-tuned Model

Usage

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("alexputhiyadom/science-qa-finetuned")
model = AutoModelForSeq2SeqLM.from_pretrained("alexputhiyadom/science-qa-finetuned")

# Example usage: the model expects a "Question: ...\nAnswer:" prompt
question = "What is science?"
inputs = tokenizer(f"Question: {question}\nAnswer:", return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(answer)
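For repeated queries, the snippet above can be wrapped in a small helper that applies the same prompt template. This is just a convenience sketch; the helper name is illustrative and not part of the repository.

```python
def answer_question(question, model, tokenizer, max_length=128):
    """Run one question through the model using the prompt template above."""
    # Build the same "Question: ...\nAnswer:" prompt used during inference.
    inputs = tokenizer(f"Question: {question}\nAnswer:", return_tensors="pt")
    outputs = model.generate(**inputs, max_length=max_length)
    # Decode the first generated sequence, dropping special tokens.
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Call it as `answer_question("What is photosynthesis?", model, tokenizer)` once the model and tokenizer are loaded.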

Training

This model was fine-tuned from google/flan-t5-small on science content extracted from PDF documents. Training focused on generating concise, accurate answers to science-related questions.
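The exact preprocessing pipeline is not published. A minimal sketch of how extracted question/answer pairs might be turned into source/target strings for seq2seq fine-tuning (the function and field names are assumptions, chosen to mirror the inference prompt format):

```python
def build_example(question: str, answer: str) -> dict:
    # Source sequence carries the question in the same template used at
    # inference time; target sequence is the answer alone, as is standard
    # for seq2seq fine-tuning.
    return {
        "input_text": f"Question: {question}\nAnswer:",
        "target_text": answer,
    }

# Hypothetical Q&A pair, e.g. extracted from a science PDF.
pairs = [
    ("What is photosynthesis?",
     "The process by which plants convert light into chemical energy."),
]
examples = [build_example(q, a) for q, a in pairs]
```

Each example would then be tokenized and fed to a seq2seq trainer with the target text as labels.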

License

MIT License

Model Size

  • 77M parameters (F32 tensors, safetensors format)