Improve model card: Add tags, project page, GitHub, and usage example

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +73 -5
README.md CHANGED
@@ -1,14 +1,82 @@
 ---
-license: mit
 datasets:
 - PAPOGalaxy/PAPO_train
+license: mit
+pipeline_tag: image-text-to-text
+library_name: transformers
 ---
 
-
 # PAPO Model
 
-## Model Source
-This is the official model released for paper **PAPO: Perception-Aware Policy Optimization for Multimodal Reasoning** (arxiv.org/abs/2507.06448)
+This is the official model released for the paper [**Perception-Aware Policy Optimization for Multimodal Reasoning**](https://arxiv.org/abs/2507.06448).
+
+**Project Page**: [https://mikewangwzhl.github.io/PAPO/](https://mikewangwzhl.github.io/PAPO/)
+**Code**: [https://github.com/mikewangwzhl/PAPO](https://github.com/mikewangwzhl/PAPO)
 
 ## Model Version
-PAPO (γ=0.02)
+PAPO (γ=0.02)
+
+## Usage
+
+This model can be loaded and used with the `transformers` library.
+
+```python
+import requests
+from PIL import Image
+from transformers import AutoProcessor, AutoModelForImageTextToText
+
+# Load the processor and model
+# Note: Replace "PAPOGalaxy/PAPO-Qwen2.5-7B" with the actual model ID if different
+model_id = "PAPOGalaxy/PAPO-Qwen2.5-7B"
+processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
+model = AutoModelForImageTextToText.from_pretrained(model_id, torch_dtype="auto", device_map="auto", trust_remote_code=True)
+
+# Example image (replace with your image URL or local path)
+image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/preprocessor_config_vln.png"
+image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")
+
+# Define your prompt
+prompt = "What are the main objects in this image?"
+
+# Format messages for the chat template; the image itself is passed to the processor below
+messages = [
+    {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": prompt}]}
+]
+
+# Apply the chat template, then tokenize the text and image together
+text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)
+
+# Generate a response
+output_ids = model.generate(
+    **inputs,
+    max_new_tokens=100,
+    do_sample=True,
+    temperature=0.7,
+    top_p=0.9,
+)
+
+# Decode only the newly generated tokens
+generated_ids = output_ids[:, inputs.input_ids.shape[1]:]
+generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
+print(generated_text)
+```
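+
+Alternatively, a minimal sketch using the high-level `pipeline` API (assuming this checkpoint is compatible with the `image-text-to-text` pipeline; the model ID is the same placeholder as above):
+
+```python
+from transformers import pipeline
+
+# Placeholder model ID as above; replace with the actual checkpoint if different
+pipe = pipeline("image-text-to-text", model="PAPOGalaxy/PAPO-Qwen2.5-7B", trust_remote_code=True)
+
+messages = [
+    {"role": "user", "content": [
+        {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/preprocessor_config_vln.png"},
+        {"type": "text", "text": "What are the main objects in this image?"},
+    ]},
+]
+
+# return_full_text=False returns only the assistant's newly generated reply
+print(pipe(text=messages, max_new_tokens=100, return_full_text=False)[0]["generated_text"])
+```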