---

license: mit
base_model:
- OpenGVLab/InternViT-300M-448px
- internlm/internlm2_5-7b-chat
new_version: OpenGVLab/PVC-InternVL-8B
language:
- multilingual
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- internvl
- video
- token compression
---


# PVC-InternVL2-8B

[\[📂 GitHub\]](https://github.com/OpenGVLab/PVC)

## Introduction

We introduce **Progressive Visual Token Compression (PVC)** for large vision-language models (VLMs), which unifies visual inputs as videos and progressively compresses vision tokens across video frames. PVC:

* Preserves spatial details and temporal dynamics for both images and videos.
* Effectively reduces the number of tokens used for each video frame and image tile.
* Achieves SoTA performance on various video benchmarks, including long-video and fine-grained short-video tasks.
* Incurs no performance loss on image benchmarks, especially on detail-sensitive tasks.

<div style="text-align: center;">
    <img src="./assets/overview.png" width="70%"/>

</div>


## Results

Our implementation is based on the [InternVL2](https://github.com/OpenGVLab/InternVL) model; we refer to it as **PVC<sub>InternVL2</sub>**.

### Video Understanding Benchmarks

| Model | LLaVA-OneVision-7B | Qwen2-VL-7B | InternVL2-8B | PVC<sub>InternVL2</sub>-8B |
| :--------------: | :--: | :--: | :--: | :--: |
| \# token/frame   | 196  | -    | 256  | 64   |
|                  |      |      |      |      |
| MVBench          | 56.7 | 67.0 | 66.4 | 73.8 |
| VideoMME w/o-sub | 58.2 | 63.3 | 54.0 | 64.1 |
| VideoMME w-sub   | 61.5 | 69.0 | 56.9 | 69.7 |
| MLVU             | 64.7 | -    | 52.0 | 72.4 |
| LongVideoBench   | 56.5 | -    | -    | 59.2 |
| NextQA           | 79.4 | -    | -    | 82.0 |
| Egoschema        | 60.1 | 66.7 | 55.0 | 59.6 |
| PercepTest       | 57.1 | 62.3 | 52.0 | 68.4 |
| AcNet-QA         | 56.6 | -    | -    | 57.1 |
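
The per-frame token budgets in the first row translate directly into the visual sequence length the LLM has to process. As a rough, back-of-the-envelope sketch (counting only visual tokens and ignoring thumbnail tiles and text tokens), here is the budget for the 64-frame sampling used in the Quick Start below:

```python
# Rough visual-token budget for a 64-frame video, using the per-frame token
# counts from the table above (256 for InternVL2-8B, 64 for PVC).
# Text tokens and thumbnail tiles are ignored; this is only an order-of-magnitude comparison.
num_frames = 64
tokens_internvl2 = num_frames * 256   # 16384 visual tokens
tokens_pvc = num_frames * 64          # 4096 visual tokens
print(tokens_internvl2, tokens_pvc)   # 16384 4096, i.e. a 4x reduction
```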

### Image Understanding Benchmarks

| Model | LLaVA-OneVision-7B | Qwen2-VL-7B | InternVL2-8B | PVC<sub>InternVL2</sub>-8B |
| :--------------------: | :--: | :--: | :--: | :--: |
| \# token/image tile    | 729  | -    | 256  | 64   |
|                        |      |      |      |      |
| AI2D<sub>test</sub>    | 81.4 | 83.0 | 83.8 | 83.8 |
| ChartQA<sub>test</sub> | 80.0 | 83.0 | 83.3 | 84.1 |
| DocVQA<sub>test</sub>  | 87.5 | 94.5 | 91.6 | 92.5 |
| InfoVQA<sub>test</sub> | 68.8 | 76.5 | 74.8 | 75.0 |
| SQA<sub>test</sub>     | 96.0 | -    | 97.1 | 97.7 |
| TextVQA<sub>val</sub>  | -    | 84.3 | 77.4 | 80.0 |
| MMB<sub>en-test</sub>  | -    | 83.0 | 81.7 | 83.9 |
| MME<sub>sum</sub>      | 1998 | 2327 | 2210 | 2282 |
| MMMU<sub>val</sub>     | 48.8 | 54.1 | 49.3 | 50.9 |
| SEED<sub>I</sub>       | 75.4 | -    | 76.2 | 77.2 |
| OCRBench               | -    | 866  | 794  | 807  |
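
The same accounting applies per image tile. A minimal sketch, assuming the default dynamic tiling used in the Quick Start (up to 12 tiles plus one thumbnail) and that every tile costs the per-tile budget listed above:

```python
# Rough visual-token budget for one high-resolution image split into the
# maximum 12 tiles plus a thumbnail (13 tiles total, as load_image below produces).
num_tiles = 12 + 1
tokens_internvl2 = num_tiles * 256    # 3328 visual tokens
tokens_pvc = num_tiles * 64           # 832 visual tokens
print(tokens_internvl2, tokens_pvc)
```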

## Quick Start

```python
import numpy as np
import torch
import torchvision.transforms as T
from decord import VideoReader, cpu
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)


def build_transform(input_size):
    # Resize to a square tile and normalize with ImageNet statistics.
    MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
    transform = T.Compose([
        T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=MEAN, std=STD)
    ])
    return transform


def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    # Pick the tiling grid whose aspect ratio is closest to the input image's.
    best_ratio_diff = float('inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
        if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio


def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
    # Split the image into up to `max_num` tiles of size `image_size`,
    # using the tiling grid that best matches the original aspect ratio.
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height

    # calculate the candidate tiling grids
    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
        i * j <= max_num and i * j >= min_num)
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])

    # find the closest aspect ratio to the target
    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size)

    # calculate the target width and height
    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]

    # resize the image
    resized_img = image.resize((target_width, target_height))
    processed_images = []
    for i in range(blocks):
        box = (
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size
        )
        # split the image
        split_img = resized_img.crop(box)
        processed_images.append(split_img)
    assert len(processed_images) == blocks
    if use_thumbnail and len(processed_images) != 1:
        thumbnail_img = image.resize((image_size, image_size))
        processed_images.append(thumbnail_img)
    return processed_images


def load_image(image_file, input_size=448, max_num=12):
    # Load an image, tile it, and return a stacked tensor of normalized tiles.
    image = Image.open(image_file).convert('RGB')
    transform = build_transform(input_size=input_size)
    images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
    pixel_values = [transform(image) for image in images]
    pixel_values = torch.stack(pixel_values)
    return pixel_values


def get_index(bound, fps, max_frame, first_idx=0, num_segments=32):
    # Sample one frame index from the middle of each of `num_segments` equal segments.
    if bound:
        start, end = bound[0], bound[1]
    else:
        start, end = -100000, 100000
    start_idx = max(first_idx, round(start * fps))
    end_idx = min(round(end * fps), max_frame)
    seg_size = float(end_idx - start_idx) / num_segments
    frame_indices = np.array([
        int(start_idx + (seg_size / 2) + np.round(seg_size * idx))
        for idx in range(num_segments)
    ])
    return frame_indices


def load_video(video_path, bound=None, input_size=448, max_num=1, num_segments=32):
    # Decode `num_segments` frames, tile each frame, and concatenate all tiles.
    vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
    max_frame = len(vr) - 1
    fps = float(vr.get_avg_fps())

    pixel_values_list, num_patches_list = [], []
    transform = build_transform(input_size=input_size)
    frame_indices = get_index(bound, fps, max_frame, first_idx=0, num_segments=num_segments)
    for frame_index in frame_indices:
        img = Image.fromarray(vr[frame_index].asnumpy()).convert('RGB')
        img = dynamic_preprocess(img, image_size=input_size, use_thumbnail=True, max_num=max_num)
        pixel_values = [transform(tile) for tile in img]
        pixel_values = torch.stack(pixel_values)
        num_patches_list.append(pixel_values.shape[0])
        pixel_values_list.append(pixel_values)
    pixel_values = torch.cat(pixel_values_list)
    return pixel_values, num_patches_list


path = 'OpenGVLab/PVC-InternVL2-8B'
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
generation_config = dict(max_new_tokens=1024, do_sample=True)

# single-image conversation
pixel_values = load_image('./assets/example_image1.jpg', max_num=12).to(torch.bfloat16).cuda()
data_flag = torch.tensor([1], dtype=torch.long).cuda()

question = '<image>\nWhat is in the image?'
response = model.chat(tokenizer, pixel_values, question, generation_config, data_flag=data_flag)
print(f'User: {question}\nAssistant: {response}')

# multi-image conversation
pixel_values1 = load_image('./assets/example_image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./assets/example_image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
data_flag = torch.tensor([2], dtype=torch.long).cuda()
num_patches_list = [pixel_values1.shape[0], pixel_values2.shape[0]]

question = 'Image-1: <image>\nImage-2: <image>\nWhat are the similarities and differences between these two images?'
response = model.chat(tokenizer, pixel_values, question, generation_config, data_flag=data_flag, num_patches_list=num_patches_list)
print(f'User: {question}\nAssistant: {response}')

# video conversation
pixel_values, num_patches_list = load_video('./assets/example_video.mp4', num_segments=64, max_num=1)
pixel_values = pixel_values.to(torch.bfloat16).cuda()
video_prefix = ''.join([f'Frame{i+1}: <image>\n' for i in range(len(num_patches_list))])
# Frame1: <image>\nFrame2: <image>\n...\nFrameN: <image>\n{question}
data_flag = torch.tensor([3], dtype=torch.long).cuda()

question = video_prefix + 'Describe this video in detail.'
response = model.chat(tokenizer, pixel_values, question, generation_config, data_flag=data_flag, num_patches_list=num_patches_list)
print(f'User: {question}\nAssistant: {response}')
```
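
Since the chat interface is inherited from InternVL2, multi-turn dialogue should work the same way. The sketch below reuses the model, tokenizer, and `load_image` from the Quick Start above and assumes the `history`/`return_history` parameters behave as in InternVL2's `chat` method; this is an assumption, not something confirmed by this card.

```python
# Multi-turn sketch (assumption: PVC's chat() keeps InternVL2's history/return_history
# parameters; data_flag is the PVC-specific addition shown in the examples above).
pixel_values = load_image('./assets/example_image1.jpg', max_num=12).to(torch.bfloat16).cuda()
data_flag = torch.tensor([1], dtype=torch.long).cuda()

question = '<image>\nWhat is in the image?'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               data_flag=data_flag, return_history=True)
print(f'User: {question}\nAssistant: {response}')

# follow-up turn, passing back the returned history
question = 'Describe it in more detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               data_flag=data_flag, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```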

## License

This project is released under the MIT license. Parts of this project contain code and models from other sources, which are subject to their respective licenses.