VLAA-Thinker is a vision-language model that takes an image and text as input and outputs text. It is introduced in the paper *SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models*.

Project Page: https://ucsc-vlaa.github.io/VLAA-Thinking/
Code: https://github.com/UCSC-VLAA/VLAA-Thinking
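
Since the model name indicates it is built on Qwen2.5-VL-7B, inference should follow the standard Qwen2.5-VL chat interface in 🤗 Transformers. Below is a minimal sketch, assuming `Qwen2_5_VLForConditionalGeneration` support (transformers ≥ 4.49); the image path and prompt are placeholders, not part of the original card.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-7B"

# Load the model in BF16 (the checkpoint's tensor type) and its processor.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Placeholder inputs: replace with your own image and question.
image = Image.open("example.jpg")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is happening in this image? Think step by step."},
        ],
    }
]

# Render the chat template, then tokenize text and image together.
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens before decoding the answer.
output_ids = model.generate(**inputs, max_new_tokens=1024)
answer_ids = output_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(answer_ids, skip_special_tokens=True)[0])
```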

Model size: 8.29B parameters (BF16, Safetensors)
