---
license: mit
datasets:
- rajpurkar/squad
base_model:
- google-t5/t5-base
pipeline_tag: question-answering
---
# T5 Question Generator

This repository contains a fine-tuned T5 model for question generation. The model takes an answer and a context paragraph as input and generates a relevant question.

## Model Description

This model is a fine-tuned version of T5 (Text-to-Text Transfer Transformer). It was trained on 60,000 non-technical questions from SQuAD and 10,000 technical questions. The model is conditioned on the answer and the context so that it generates a question for which the given answer is the correct response.

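
During fine-tuning, each training example pairs an answer and its context with the question as the target text. The sketch below illustrates that mapping for a SQuAD record; it reuses the prompt layout from the usage example further down, and the exact preprocessing used for fine-tuning is an assumption, not something documented in this repository. Loading SQuAD this way requires the separate `datasets` package.

```python
# Illustration only: mapping a SQuAD record to the "answer: ... context: ..."
# prompt layout used in the usage example below. The exact preprocessing used
# for fine-tuning is an assumption, not something documented in this repository.
from datasets import load_dataset

squad = load_dataset("rajpurkar/squad", split="train[:1]")
record = squad[0]

source_text = f"answer: {record['answers']['text'][0]} context: {record['context']}"
target_text = record["question"]

print("input: ", source_text)
print("target:", target_text)
```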
## How to Use

You can use this model with the `transformers` library in Python. First, make sure the required packages are installed:

```bash
pip install transformers sentencepiece
```

Then use the following code to load the model and generate a question:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "Ayush472/T5QuestionGenerator"
model = T5ForConditionalGeneration.from_pretrained(model_name)
tokenizer = T5Tokenizer.from_pretrained(model_name)

context = "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower."
answer = "Gustave Eiffel"

input_text = f"answer: {answer} context: {context}"
input_ids = tokenizer.encode(input_text, return_tensors="pt")

output = model.generate(input_ids, max_length=100)
generated_question = tokenizer.decode(output[0], skip_special_tokens=True)

print(generated_question)
# Expected output: Who designed the Eiffel Tower?
```
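
To get several candidate questions instead of a single greedy output, you can use beam search. The sketch below continues the example above and reuses `model`, `tokenizer`, and `input_ids`; the generation parameters are illustrative rather than settings used during fine-tuning.

```python
# Optional: return multiple candidate questions via beam search.
# These generation parameters are illustrative, not tuned values from training.
outputs = model.generate(
    input_ids,
    max_length=100,
    num_beams=5,
    num_return_sequences=3,
    early_stopping=True,
)

for i, candidate in enumerate(outputs, start=1):
    print(f"{i}. {tokenizer.decode(candidate, skip_special_tokens=True)}")
```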

## Model Architecture

The model is based on the T5 architecture. T5 is an encoder-decoder Transformer pre-trained on a large text corpus using a text-to-text approach, in which every NLP task is cast as generating output text from input text.

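
To confirm the architecture of the loaded checkpoint, you can inspect its configuration. This is a quick check that reuses the `model` object from the usage example above; the commented values are the standard t5-base dimensions.

```python
# Sanity-check the architecture of the loaded checkpoint.
config = model.config
print(config.model_type)          # "t5"
print(config.num_layers)          # encoder layers (12 for t5-base)
print(config.num_decoder_layers)  # decoder layers (12 for t5-base)
print(config.d_model)             # hidden size (768 for t5-base)
```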

## About

This model was fine-tuned by Ayush. For any questions or issues, please open an issue in this repository.