---
license: cc-by-nc-4.0
pipeline_tag: image-text-to-text
library_name: transformers
---
VLAA-Thinker is a vision-language model that takes an image and text as input and outputs text, as described in [SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models](https://huggingface.co/papers/2504.11468).
Project Page: https://ucsc-vlaa.github.io/VLAA-Thinking/ | |
Code: https://github.com/UCSC-VLAA/VLAA-Thinking |
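Since the card declares `pipeline_tag: image-text-to-text` and `library_name: transformers`, the model should be usable through the standard `transformers` pipeline for that task. The sketch below assumes this interface; the checkpoint id `UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B` and the image URL are placeholders, so substitute this repo's actual id and your own image.

```python
from transformers import pipeline

# Placeholder checkpoint id; replace with the repo id of this model card.
pipe = pipeline(
    "image-text-to-text",
    model="UCSC-VLAA/VLAA-Thinker-Qwen2.5VL-3B",
    device_map="auto",
)

# Chat-style input: a single user turn containing an image and a text prompt.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/demo.jpg"},  # placeholder image URL
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# return_full_text=False returns only the model's generated reply.
outputs = pipe(text=messages, max_new_tokens=512, return_full_text=False)
print(outputs[0]["generated_text"])
```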