---
license: apache-2.0
language:
- en
- zh
base_model:
- google/flan-t5-small
pipeline_tag: summarization
library_name: transformers
tags:
- prompt
- enhance
- flan
---
![xdfgzsxdfg.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/LUO0VbyTOGIp6pde17MJT.png)
# **t5-Flan-Prompt-Enhance**
T5-Flan-Prompt-Enhance is a fine-tuned version of **Flan-T5-Small** designed to **enhance prompts, captions, and annotations**: it rewrites short or rough textual inputs into clearer, richer, and more expressive text.

### Key Features:
1. **Prompt Expansion** – Takes short or vague prompts and enriches them with more context, depth, and specificity.  
2. **Caption Enhancement** – Improves captions by adding more descriptive details, making them more informative and engaging.  
3. **Annotation Refinement** – Enhances annotations by making them clearer, more structured, and contextually relevant.  
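
All three use cases go through the same seq2seq interface; inputs simply carry the `"enhance prompt: "` task prefix shown in the usage example below. As a minimal sketch, a small helper can prepare a batch of raw prompts or captions for the model (the `build_inputs` name is illustrative, not part of the model's API):

```python
# Prepend the task prefix to each raw input before passing it to the model.
# "enhance prompt: " matches the prefix in the usage example below;
# build_inputs itself is an illustrative helper, not part of the model's API.
def build_inputs(texts, prefix="enhance prompt: "):
    """Strip stray whitespace and prepend the task prefix to each string."""
    return [prefix + t.strip() for t in texts]

captions = [
    "three chimneys on the roof",
    "  green trees and shrubs in front of the house ",
]
print(build_inputs(captions))
```

The resulting list can be fed directly to the pipeline shown below, which accepts either a single string or a list of strings.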

### Run with Transformers

```python
from transformers import pipeline, AutoTokenizer, AutoModelForSeq2SeqLM
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the fine-tuned checkpoint
model_checkpoint = "prithivMLmods/t5-Flan-Prompt-Enhance"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)

# Build a text2text-generation pipeline; repetition_penalty discourages
# the model from echoing phrases verbatim from the input.
enhancer = pipeline(
    "text2text-generation",
    model=model,
    tokenizer=tokenizer,
    repetition_penalty=1.2,
    device=0 if device == "cuda" else -1,
)

# Inputs must carry the "enhance prompt: " task prefix.
max_target_length = 256
prefix = "enhance prompt: "

short_prompt = "three chimneys on the roof, green trees and shrubs in front of the house"
answer = enhancer(prefix + short_prompt, max_length=max_target_length)
print(answer[0]["generated_text"])
```

This fine-tuning allows **T5-Flan-Prompt-Enhance** to produce **detailed, well-structured, and contextually relevant outputs**, making it useful for tasks such as text generation, content creation, and AI-assisted writing.