openbmb/MiniCPM-o-4_5 • Any-to-Any • 9B
Large Language Models
Hybrid Linear Attention Done Right: Efficient Distillation and Effective Architectures for Extremely Long Contexts
InfLLM-V2: Dense-Sparse Switchable Attention for Seamless Short-to-Long Adaptation