publishedAt (timestamp[ns]) | title (string) | thumbnail (string) | numComments (int64) | submittedBy (dict) | isAuthorParticipating (bool) | mediaUrls (list) | paper_id (string) | paper_authors (list) | paper_publishedAt (timestamp[ns]) | paper_title (string) | paper_summary (string) | paper_upvotes (int64) | paper_discussionId (string) | paper_projectPage (string) | paper_githubRepo (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2025-01-24T14:50:16.765000 |
Control LLM: Controlled Evolution for Intelligence Retention in LLM
| 2 |
{
"_id": "6199b1d090692f0d92388473",
"avatarUrl": "/avatars/72693e4d37ded1542cd2564879fd9a61.svg",
"followerCount": 2,
"fullname": "Haichao Wei",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "hawei",
"type": "user"
}
| true | null |
2501.10979
|
[
{
"_id": "6792cf69627180db59a51b7c",
"hidden": false,
"name": "Haichao Wei",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-24T09:08:11.344Z",
"user": {
"_id": "6199b1d090692f0d92388473",
"avatarUrl": "/avatars/72693e4d37ded1542cd2564879fd9a61.svg",
"fullname": "Haichao Wei",
"isPro": false,
"type": "user",
"user": "hawei"
}
},
{
"_id": "6792cf69627180db59a51b7d",
"hidden": false,
"name": "Yunxiang Ren",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792cf69627180db59a51b7e",
"hidden": false,
"name": "Zhoutong Fu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792cf69627180db59a51b7f",
"hidden": false,
"name": "Aman Lunia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792cf69627180db59a51b80",
"hidden": false,
"name": "Yi-Lin Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792cf69627180db59a51b81",
"hidden": false,
"name": "Alice Leung",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792cf69627180db59a51b82",
"hidden": false,
"name": "Ya Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-19T08:06:06 |
Control LLM: Controlled Evolution for Intelligence Retention in LLM
|
Large Language Models (LLMs) demand significant computational resources,
making it essential to enhance their capabilities without retraining from
scratch. A key challenge in this domain is catastrophic forgetting
(CF), which hampers performance during Continuous Pre-training (CPT) and
Continuous Supervised Fine-Tuning (CSFT). We propose Control LLM, a
novel approach that leverages parallel pre-trained and expanded transformer
blocks, aligning their hidden states through interpolation strategies. This
method effectively preserves performance on existing tasks while seamlessly
integrating new knowledge.
Extensive experiments demonstrate the effectiveness of Control LLM in both
CPT and CSFT. On Llama3.1-8B-Instruct, it achieves significant improvements in
mathematical reasoning (+14.4% on Math-Hard) and coding performance (+10%
on MBPP-PLUS). On Llama3.1-8B, it enhances multilingual capabilities (+10.6%
on C-Eval, +6.8% on CMMLU, and +30.2% on CMMLU-0shot-CoT). It surpasses
existing methods and achieves SOTA among open-source models tuned from the same
base model, using substantially less data and compute. Crucially, these gains
are realized while preserving strong original capabilities, with minimal
degradation (<4.3% on MMLU) compared to >35% in open-source Math
and Coding models. This approach has been successfully deployed in LinkedIn's
GenAI-powered job seeker and Ads unit products.
To support further research, we release the training and evaluation code
(https://github.com/linkedin/ControlLLM) along with models trained on
public datasets (https://huggingface.co/ControlLLM) to the community.
| 6 |
6792cf6c627180db59a51c28
| null | null |
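The Control LLM abstract above describes aligning the hidden states of a frozen pre-trained transformer block and a parallel expanded block via interpolation. A minimal sketch of that idea follows; the function and variable names are hypothetical, and the paper's actual interpolation strategies may be more involved.

```python
# Sketch: blend hidden states from a frozen pre-trained block and a
# parallel expanded (trainable) block, as in the Control LLM abstract.
# Names here are illustrative, not the paper's API.

def interpolate_hidden(h_pretrained, h_expanded, alpha=0.5):
    """Element-wise interpolation of two hidden-state vectors.

    alpha=1.0 keeps only the pre-trained path (full retention);
    alpha=0.0 keeps only the expanded path (full adaptation).
    """
    return [alpha * p + (1.0 - alpha) * e
            for p, e in zip(h_pretrained, h_expanded)]

h_old = [1.0, 2.0, 3.0]   # output of the frozen pre-trained block
h_new = [3.0, 4.0, 5.0]   # output of the expanded, trainable block
blended = interpolate_hidden(h_old, h_new, alpha=0.5)
```

With alpha between the two extremes, the merged representation retains information from both paths, which is the mechanism the abstract credits for mitigating catastrophic forgetting.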
|
2025-01-24T09:39:01.834000 |
Hallucinations Can Improve Large Language Models in Drug Discovery
| 8 |
{
"_id": "662ce44c8b8705f30371fba8",
"avatarUrl": "/avatars/b96a25a8c124e7caa9de06b7188bdc15.svg",
"followerCount": null,
"fullname": "Shuzhou Yuan",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "shuzyuan",
"type": "user"
}
| true | null |
2501.13824
|
[
{
"_id": "6793a52265c4dd63499ca548",
"hidden": false,
"name": "Shuzhou Yuan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-24T14:52:27.570Z",
"user": {
"_id": "662ce44c8b8705f30371fba8",
"avatarUrl": "/avatars/b96a25a8c124e7caa9de06b7188bdc15.svg",
"fullname": "Shuzhou Yuan",
"isPro": false,
"type": "user",
"user": "shuzyuan"
}
},
{
"_id": "6793a52265c4dd63499ca549",
"hidden": false,
"name": "Michael Färber",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-23T16:45:51 |
Hallucinations Can Improve Large Language Models in Drug Discovery
|
Concerns about hallucinations in Large Language Models (LLMs) have been
raised by researchers, yet their potential in areas where creativity is vital,
such as drug discovery, merits exploration. In this paper, we propose the
hypothesis that hallucinations can improve LLMs in drug discovery. To verify
this hypothesis, we use LLMs to describe the SMILES string of molecules in
natural language and then incorporate these descriptions as part of the prompt
to address specific tasks in drug discovery. Evaluated on seven LLMs and five
classification tasks, our findings confirm the hypothesis: LLMs can achieve
better performance with text containing hallucinations. Notably, Llama-3.1-8B
achieves an 18.35% gain in ROC-AUC compared to the baseline without
hallucination. Furthermore, hallucinations generated by GPT-4o provide the most
consistent improvements across models. Additionally, we conduct empirical
analyses and a case study to investigate key factors affecting performance and
the underlying reasons. Our research sheds light on the potential use of
hallucinations for LLMs and offers new perspectives for future research
leveraging LLMs in drug discovery.
| 10 |
6793a52465c4dd63499ca5ad
| null | null |
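The abstract above says the method inserts an LLM-generated (possibly hallucinated) natural-language description of a molecule's SMILES string into the task prompt. A hedged sketch of that prompt construction, with an entirely hypothetical template:

```python
# Sketch of the prompt construction described in the abstract: prepend a
# (possibly hallucinated) molecule description to the task question.
# The template wording is a guess, not the paper's exact format.

def build_prompt(smiles, description, task_question):
    """Assemble a drug-discovery classification prompt."""
    return (f"Molecule SMILES: {smiles}\n"
            f"Description: {description}\n"
            f"Question: {task_question}")

prompt = build_prompt(
    "CCO",
    "This molecule is a small polar alcohol.",
    "Is this molecule blood-brain-barrier permeable? Answer yes or no.",
)
```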
|
2025-01-24T08:34:50.383000 |
One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation Using a Single Prompt
| 2 |
{
"_id": "65a909fe8581aad8c97a67d3",
"avatarUrl": "/avatars/96570e47117e957543d9f0fe5e1d9d57.svg",
"followerCount": null,
"fullname": "liutao",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "byliutao",
"type": "user"
}
| true | null |
2501.13554
|
[
{
"_id": "6793900eddc6cc37fdc74928",
"hidden": false,
"name": "Tao Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-24T13:30:20.097Z",
"user": {
"_id": "65a909fe8581aad8c97a67d3",
"avatarUrl": "/avatars/96570e47117e957543d9f0fe5e1d9d57.svg",
"fullname": "liutao",
"isPro": false,
"type": "user",
"user": "byliutao"
}
},
{
"_id": "6793900eddc6cc37fdc74929",
"hidden": false,
"name": "Kai Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6793900eddc6cc37fdc7492a",
"hidden": false,
"name": "Senmao Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6793900eddc6cc37fdc7492b",
"hidden": false,
"name": "Joost van de Weijer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6793900eddc6cc37fdc7492c",
"hidden": false,
"name": "Fahad Shahbaz Khan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6793900eddc6cc37fdc7492d",
"hidden": false,
"name": "Shiqi Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6793900eddc6cc37fdc7492e",
"hidden": false,
"name": "Yaxing Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6793900eddc6cc37fdc7492f",
"hidden": false,
"name": "Jian Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6793900eddc6cc37fdc74930",
"hidden": false,
"name": "Ming-Ming Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-23T10:57:22 |
One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation
Using a Single Prompt
|
Text-to-image generation models can create high-quality images from input
prompts. However, they struggle to generate images with consistent,
identity-preserving characters for storytelling. Existing approaches to this
problem typically require extensive training on large datasets or additional
modifications to the original model architectures. This limits their
applicability across different domains and diverse diffusion model
configurations. In this paper, we first observe an inherent capability of
language models, which we term context consistency, to comprehend identity
through context within a single prompt. Drawing inspiration from this context
consistency, we propose a novel training-free method for consistent
text-to-image (T2I) generation, termed "One-Prompt-One-Story" (1Prompt1Story).
Our approach 1Prompt1Story concatenates all prompts into a single input for T2I
diffusion models, initially preserving character identities. We then refine the
generation process using two novel techniques: Singular-Value Reweighting and
Identity-Preserving Cross-Attention, ensuring better alignment with the input
description for each frame. In our experiments, we compare our method against
various existing consistent T2I generation approaches to demonstrate its
effectiveness through quantitative metrics and qualitative assessments. Code is
available at https://github.com/byliutao/1Prompt1Story.
| 9 |
67939013ddc6cc37fdc74a9d
| null | null |
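The core input construction the 1Prompt1Story abstract describes, concatenating a shared identity prompt with all frame prompts into one input, can be sketched as follows (the join format is a hypothetical simplification; the paper additionally applies Singular-Value Reweighting and Identity-Preserving Cross-Attention):

```python
# Sketch: merge the shared identity prompt and per-frame prompts into a
# single input so the T2I model sees one consistent context.

def concat_prompts(identity_prompt, frame_prompts):
    """Join the identity prompt with every frame prompt into one string."""
    return identity_prompt + " " + " ".join(frame_prompts)

story_prompt = concat_prompts(
    "A watercolor painting of a small red fox",
    ["exploring a forest", "napping under a tree", "watching the stars"],
)
```

Because the character appears once in a single shared context, the model's context consistency keeps its identity stable across the generated frames.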
|
2025-01-24T05:39:01.676000 |
Improving Video Generation with Human Feedback
| 4 |
{
"_id": "639be86b59473c6ae02ef9c4",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/639be86b59473c6ae02ef9c4/gw34RBCVZCOkcAA79xUr3.png",
"followerCount": 14,
"fullname": "Jie Liu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "jieliu",
"type": "user"
}
| true | null |
2501.13918
|
[
{
"_id": "679319848d46289f90266168",
"hidden": false,
"name": "Jie Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-24T09:07:53.235Z",
"user": {
"_id": "639be86b59473c6ae02ef9c4",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/639be86b59473c6ae02ef9c4/gw34RBCVZCOkcAA79xUr3.png",
"fullname": "Jie Liu",
"isPro": false,
"type": "user",
"user": "jieliu"
}
},
{
"_id": "679319848d46289f90266169",
"hidden": false,
"name": "Gongye Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-05T13:37:43.546Z",
"user": {
"_id": "6553316bf151de82f6a23e1d",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6553316bf151de82f6a23e1d/GTBkSj4Fa3OoyM6Muz_Sc.jpeg",
"fullname": "Gongye Liu",
"isPro": false,
"type": "user",
"user": "liuhuohuo"
}
},
{
"_id": "679319848d46289f9026616a",
"hidden": false,
"name": "Jiajun Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679319848d46289f9026616b",
"hidden": false,
"name": "Ziyang Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679319848d46289f9026616c",
"hidden": false,
"name": "Xiaokun Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679319848d46289f9026616d",
"hidden": false,
"name": "Mingwu Zheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679319848d46289f9026616e",
"hidden": false,
"name": "Xiele Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679319848d46289f9026616f",
"hidden": false,
"name": "Qiulin Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679319848d46289f90266170",
"hidden": false,
"name": "Wenyu Qin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679319848d46289f90266171",
"hidden": false,
"name": "Menghan Xia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679319848d46289f90266172",
"hidden": false,
"name": "Xintao Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-24T09:07:51.248Z",
"user": {
"_id": "60e272ca6c78a8c122b12127",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/60e272ca6c78a8c122b12127/xldEGBzGrU-bX6IwAw0Ie.jpeg",
"fullname": "Xintao Wang",
"isPro": false,
"type": "user",
"user": "Xintao"
}
},
{
"_id": "679319848d46289f90266173",
"hidden": false,
"name": "Xiaohong Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679319848d46289f90266174",
"hidden": false,
"name": "Fei Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679319848d46289f90266175",
"hidden": false,
"name": "Pengfei Wan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679319848d46289f90266176",
"hidden": false,
"name": "Di Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679319848d46289f90266177",
"hidden": false,
"name": "Kun Gai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679319848d46289f90266178",
"hidden": false,
"name": "Yujiu Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679319848d46289f90266179",
"hidden": false,
"name": "Wanli Ouyang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-23T18:55:41 |
Improving Video Generation with Human Feedback
|
Video generation has achieved significant advances through rectified flow
techniques, but issues like unsmooth motion and misalignment between videos and
prompts persist. In this work, we develop a systematic pipeline that harnesses
human feedback to mitigate these problems and refine the video generation
model. Specifically, we begin by constructing a large-scale human preference
dataset focused on modern video generation models, incorporating pairwise
annotations across multi-dimensions. We then introduce VideoReward, a
multi-dimensional video reward model, and examine how annotations and various
design choices impact its rewarding efficacy. From a unified reinforcement
learning perspective aimed at maximizing reward with KL regularization, we
introduce three alignment algorithms for flow-based models by extending those
from diffusion models. These include two training-time strategies: direct
preference optimization for flow (Flow-DPO) and reward weighted regression for
flow (Flow-RWR), and an inference-time technique, Flow-NRG, which applies
reward guidance directly to noisy videos. Experimental results indicate that
VideoReward significantly outperforms existing reward models, and Flow-DPO
demonstrates superior performance compared to both Flow-RWR and standard
supervised fine-tuning methods. Additionally, Flow-NRG lets users assign custom
weights to multiple objectives during inference, meeting personalized video
quality needs. Project page: https://gongyeliu.github.io/videoalign.
| 49 |
679319858d46289f90266203
| null | null |
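The abstract above extends direct preference optimization to flow-based video models (Flow-DPO). A standard DPO-style pairwise loss, which the flow variant plausibly builds on, can be sketched like this; the exact objective for flow models in the paper may differ, and all argument names are illustrative:

```python
import math

def flow_dpo_loss(logp_w_theta, logp_l_theta, logp_w_ref, logp_l_ref,
                  beta=0.1):
    """DPO-style loss on a (winner, loser) video pair.

    Rewards the policy for increasing the winner's log-likelihood relative
    to the reference model more than the loser's. Sketch only; the paper's
    Flow-DPO objective for rectified-flow models may differ in detail.
    """
    margin = beta * ((logp_w_theta - logp_w_ref)
                     - (logp_l_theta - logp_l_ref))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)
```

When the policy matches the reference, the margin is zero and the loss is log 2; preferring the winner drives the loss down.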
|
2025-01-24T04:24:01.412000 |
Video-MMMU: Evaluating Knowledge Acquisition from Multi-Discipline Professional Videos
| 2 |
{
"_id": "6400ba2b261cfa61f3a00555",
"avatarUrl": "/avatars/1311e0b5e21b1c94d73fcaf455d3c7f7.svg",
"followerCount": 5,
"fullname": "Kairui",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "KairuiHu",
"type": "user"
}
| true | null |
2501.13826
|
[
{
"_id": "67934585e4e44e2866b644f2",
"hidden": false,
"name": "Kairui Hu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-24T09:07:46.937Z",
"user": {
"_id": "6400ba2b261cfa61f3a00555",
"avatarUrl": "/avatars/1311e0b5e21b1c94d73fcaf455d3c7f7.svg",
"fullname": "Kairui",
"isPro": false,
"type": "user",
"user": "KairuiHu"
}
},
{
"_id": "67934585e4e44e2866b644f3",
"hidden": false,
"name": "Penghao Wu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:21:29.750Z",
"user": {
"_id": "64101f81b27543634e377fc1",
"avatarUrl": "/avatars/557dd9d4707e3b38e0805dfb87c08004.svg",
"fullname": "Penghao Wu",
"isPro": false,
"type": "user",
"user": "craigwu"
}
},
{
"_id": "67934585e4e44e2866b644f4",
"hidden": false,
"name": "Fanyi Pu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:21:39.917Z",
"user": {
"_id": "646e1ef5075bbcc48ddf21e8",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/_vJC0zeVOIvaNV2R6toqg.jpeg",
"fullname": "Pu Fanyi",
"isPro": false,
"type": "user",
"user": "pufanyi"
}
},
{
"_id": "67934585e4e44e2866b644f5",
"hidden": false,
"name": "Wang Xiao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:21:46.540Z",
"user": {
"_id": "647efcc945baf21ad707e10c",
"avatarUrl": "/avatars/e2fab1c9031eb0eec9f015a8fc237f64.svg",
"fullname": "Wang Xiao",
"isPro": false,
"type": "user",
"user": "wangxiao1208"
}
},
{
"_id": "67934585e4e44e2866b644f6",
"hidden": false,
"name": "Yuanhan Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:26:42.528Z",
"user": {
"_id": "62a993d80472c0b7f94027df",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62a993d80472c0b7f94027df/j5vp-IwLA2YBexylUHiQU.png",
"fullname": "Zhang Yuanhan",
"isPro": false,
"type": "user",
"user": "ZhangYuanhan"
}
},
{
"_id": "67934585e4e44e2866b644f7",
"hidden": false,
"name": "Xiang Yue",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:27:14.894Z",
"user": {
"_id": "6230d750d93e84e233882dbc",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6230d750d93e84e233882dbc/4MGEekLW3oWzqeFWDWvIK.jpeg",
"fullname": "Xiang Yue",
"isPro": false,
"type": "user",
"user": "yuexiang96"
}
},
{
"_id": "67934585e4e44e2866b644f8",
"hidden": false,
"name": "Bo Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67934585e4e44e2866b644f9",
"hidden": false,
"name": "Ziwei Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:27:30.310Z",
"user": {
"_id": "62ab1ac1d48b4d8b048a3473",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1656826685333-62ab1ac1d48b4d8b048a3473.png",
"fullname": "Ziwei Liu",
"isPro": false,
"type": "user",
"user": "liuziwei7"
}
}
] | 2025-01-23T16:51:47 |
Video-MMMU: Evaluating Knowledge Acquisition from Multi-Discipline
Professional Videos
|
Humans acquire knowledge through three cognitive stages: perceiving
information, comprehending knowledge, and adapting knowledge to solve novel
problems. Videos serve as an effective medium for this learning process,
facilitating a progression through these cognitive stages. However, existing
video benchmarks fail to systematically evaluate the knowledge acquisition
capabilities in Large Multimodal Models (LMMs). To address this gap, we
introduce Video-MMMU, a multi-modal, multi-disciplinary benchmark designed to
assess LMMs' ability to acquire and utilize knowledge from videos. Video-MMMU
features a curated collection of 300 expert-level videos and 900
human-annotated questions across six disciplines, evaluating knowledge
acquisition through stage-aligned question-answer pairs: Perception,
Comprehension, and Adaptation. A proposed knowledge gain metric,
Δknowledge, quantifies improvement in performance after video viewing.
Evaluation of LMMs reveals a steep decline in performance as cognitive demands
increase and highlights a significant gap between human and model knowledge
acquisition, underscoring the need for methods to enhance LMMs' capability to
learn and adapt from videos.
| 24 |
67934587e4e44e2866b64597
| null | null |
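The Δknowledge metric mentioned in the abstract can be read as a normalized knowledge gain: the accuracy improvement after watching the video, expressed as a fraction of the headroom available before viewing. This is an assumed formulation, not the paper's verified definition:

```python
def delta_knowledge(acc_before, acc_after):
    """Normalized knowledge gain after video viewing (assumed form).

    Accuracies are percentages in [0, 100]. Returns the improvement as a
    percentage of the pre-viewing headroom, so a model that was already
    near-perfect cannot show a large gain by construction.
    """
    return 100.0 * (acc_after - acc_before) / (100.0 - acc_before)
```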
|
2025-01-24T03:24:06.601000 |
GSTAR: Gaussian Surface Tracking and Reconstruction
| 2 |
{
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
}
| false | null |
2501.10283
|
[
{
"_id": "67934e1511eb9c774dd1bfc3",
"hidden": false,
"name": "Chengwei Zheng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-24T13:30:16.388Z",
"user": {
"_id": "67936c63ddd1e487c0c6c691",
"avatarUrl": "/avatars/6d57469b4afdc8bedffeea9ed5f59dd4.svg",
"fullname": "Chengwei Zheng",
"isPro": false,
"type": "user",
"user": "zhengcw18"
}
},
{
"_id": "67934e1511eb9c774dd1bfc4",
"hidden": false,
"name": "Lixin Xue",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:27:50.665Z",
"user": {
"_id": "645b95f8438d6cfbe1ae8256",
"avatarUrl": "/avatars/ac0ebb0a73569ab063c5b2f28c509d23.svg",
"fullname": "Lixin Xue",
"isPro": false,
"type": "user",
"user": "lxxue"
}
},
{
"_id": "67934e1511eb9c774dd1bfc5",
"hidden": false,
"name": "Juan Zarate",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67934e1511eb9c774dd1bfc6",
"hidden": false,
"name": "Jie Song",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-17T16:26:24 |
GSTAR: Gaussian Surface Tracking and Reconstruction
|
3D Gaussian Splatting techniques have enabled efficient photo-realistic
rendering of static scenes. Recent works have extended these approaches to
support surface reconstruction and tracking. However, tracking dynamic surfaces
with 3D Gaussians remains challenging due to complex topology changes, such as
surfaces appearing, disappearing, or splitting. To address these challenges, we
propose GSTAR, a novel method that achieves photo-realistic rendering, accurate
surface reconstruction, and reliable 3D tracking for general dynamic scenes
with changing topology. Given multi-view captures as input, GSTAR binds
Gaussians to mesh faces to represent dynamic objects. For surfaces with
consistent topology, GSTAR maintains the mesh topology and tracks the meshes
using Gaussians. In regions where topology changes, GSTAR adaptively unbinds
Gaussians from the mesh, enabling accurate registration and the generation of
new surfaces based on these optimized Gaussians. Additionally, we introduce a
surface-based scene flow method that provides robust initialization for
tracking between frames. Experiments demonstrate that our method effectively
tracks and reconstructs dynamic surfaces, enabling a range of applications. Our
project page with the code release is available at
https://eth-ait.github.io/GSTAR/.
| 5 |
67934e1611eb9c774dd1bffe
| null | null |
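GSTAR's abstract describes binding Gaussians to mesh faces. In the simplest reading, a bound Gaussian is anchored at its face's barycenter; the toy function below illustrates only that anchoring (GSTAR additionally optimizes Gaussian parameters and adaptively unbinds them where topology changes):

```python
# Toy sketch: anchor a Gaussian at the barycenter of a triangular mesh
# face. GSTAR's actual binding involves learned offsets and unbinding.

def gaussian_center_on_face(v0, v1, v2):
    """Return the barycenter of a triangle given its three 3D vertices."""
    return [(a + b + c) / 3.0 for a, b, c in zip(v0, v1, v2)]

center = gaussian_center_on_face(
    [0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 3.0, 0.0])
```

Tracking the mesh then moves the bound Gaussians with it, which is what gives the method a consistent surface correspondence across frames.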
|
2025-01-24T03:08:08.583000 |
DiffuEraser: A Diffusion Model for Video Inpainting
| 2 |
{
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
}
| false | null |
2501.10018
|
[
{
"_id": "678e125d09dc6d3a311cc04e",
"hidden": false,
"name": "Xiaowen Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:31:02.655Z",
"user": {
"_id": "6497b4464a3c31df8e4148d8",
"avatarUrl": "/avatars/4397a380468e84bc7945fddd9a6d1066.svg",
"fullname": "Xiaowen Li",
"isPro": false,
"type": "user",
"user": "asLKHFksasak"
}
},
{
"_id": "678e125d09dc6d3a311cc04f",
"hidden": false,
"name": "Haolan Xue",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678e125d09dc6d3a311cc050",
"hidden": false,
"name": "Peiran Ren",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:30:47.443Z",
"user": {
"_id": "64b74a45f902508f0d786505",
"avatarUrl": "/avatars/8bc5aaa011642827e12524c4f0a56927.svg",
"fullname": "Peiran REN",
"isPro": false,
"type": "user",
"user": "lyraestar"
}
},
{
"_id": "678e125d09dc6d3a311cc051",
"hidden": false,
"name": "Liefeng Bo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:30:55.550Z",
"user": {
"_id": "63d0cc736b985b0f25d0412c",
"avatarUrl": "/avatars/3eb8c79f9a7c4c819038ea7b04e323dd.svg",
"fullname": "Bo",
"isPro": false,
"type": "user",
"user": "Liefeng"
}
}
] | 2025-01-17T08:03:02 |
DiffuEraser: A Diffusion Model for Video Inpainting
|
Recent video inpainting algorithms integrate flow-based pixel propagation
with transformer-based generation to leverage optical flow for restoring
textures and objects using information from neighboring frames, while
completing masked regions through visual Transformers. However, these
approaches often encounter blurring and temporal inconsistencies when dealing
with large masks, highlighting the need for models with enhanced generative
capabilities. Recently, diffusion models have emerged as a prominent technique
in image and video generation due to their impressive performance. In this
paper, we introduce DiffuEraser, a video inpainting model based on stable
diffusion, designed to fill masked regions with greater details and more
coherent structures. We incorporate prior information to provide initialization
and weak conditioning, which helps mitigate noisy artifacts and suppress
hallucinations. Additionally, to improve temporal consistency during
long-sequence inference, we expand the temporal receptive fields of both the
prior model and DiffuEraser, and further enhance consistency by leveraging the
temporal smoothing property of Video Diffusion Models. Experimental results
demonstrate that our proposed method outperforms state-of-the-art techniques in
both content completeness and temporal consistency while maintaining acceptable
efficiency.
| 14 |
678e125f09dc6d3a311cc0af
| null | null |
|
2025-01-24T02:59:28.457000 |
EchoVideo: Identity-Preserving Human Video Generation by Multimodal Feature Fusion
| 2 |
{
"_id": "63468720dd6d90d82ccf3450",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63468720dd6d90d82ccf3450/tVBFlmZNz8FRMkOrDaDID.jpeg",
"followerCount": 32,
"fullname": "YSH",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "BestWishYsh",
"type": "user"
}
| false | null |
2501.13452
|
[
{
"_id": "6793480ec6fd669f7341cf41",
"hidden": false,
"name": "Jiangchuan Wei",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-14T14:24:45.099Z",
"user": {
"_id": "67af4daa602736dd990c1d43",
"avatarUrl": "/avatars/802d512c20ca9a9c6ffa49aa5f98b96c.svg",
"fullname": "weijiangchuan",
"isPro": false,
"type": "user",
"user": "weijiangchuan"
}
},
{
"_id": "6793480ec6fd669f7341cf42",
"hidden": false,
"name": "Shiyue Yan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6793480ec6fd669f7341cf43",
"hidden": false,
"name": "Wenfeng Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:18:06.509Z",
"user": {
"_id": "6676c4f86f2ac48ee6c2f4d4",
"avatarUrl": "/avatars/fea4e5be4da7a7047df567a4aa86de0c.svg",
"fullname": "linwenfeng",
"isPro": false,
"type": "user",
"user": "linwf"
}
},
{
"_id": "6793480ec6fd669f7341cf44",
"hidden": false,
"name": "Boyuan Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6793480ec6fd669f7341cf45",
"hidden": false,
"name": "Renjie Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6793480ec6fd669f7341cf46",
"hidden": false,
"name": "Mingyu Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-23T08:06:11 |
EchoVideo: Identity-Preserving Human Video Generation by Multimodal
Feature Fusion
|
Recent advancements in video generation have significantly impacted various
downstream applications, particularly in identity-preserving video generation
(IPT2V). However, existing methods struggle with "copy-paste" artifacts and low
similarity issues, primarily due to their reliance on low-level facial image
information. This dependence can result in rigid facial appearances and
artifacts reflecting irrelevant details. To address these challenges, we
propose EchoVideo, which employs two key strategies: (1) an Identity Image-Text
Fusion Module (IITF) that integrates high-level semantic features from text,
capturing clean facial identity representations while discarding occlusions,
poses, and lighting variations to avoid the introduction of artifacts; (2) a
two-stage training strategy, incorporating a stochastic method in the second
phase to randomly utilize shallow facial information. The objective is to
balance the enhancements in fidelity provided by shallow features while
mitigating excessive reliance on them. This strategy encourages the model to
utilize high-level features during training, ultimately fostering a more robust
representation of facial identities. EchoVideo effectively preserves facial
identities and maintains full-body integrity. Extensive experiments demonstrate
that it achieves excellent results in generating videos with high quality,
controllability, and fidelity.
| 7 |
67934811c6fd669f7341cfbf
| null | null |
|
2025-01-24T02:35:35.802000 |
SRMT: Shared Memory for Multi-agent Lifelong Pathfinding
| 3 |
{
"_id": "65c0db0fbda79a18292dfbb7",
"avatarUrl": "/avatars/1201b8282664c2d8c18beaba2396c03b.svg",
"followerCount": 1,
"fullname": "Alsu Sagirova",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "alsu-sagirova",
"type": "user"
}
| true | null |
2501.13200
|
[
{
"_id": "67933d69b843fda452c689dd",
"hidden": false,
"name": "Alsu Sagirova",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-24T09:07:49.036Z",
"user": {
"_id": "65c0db0fbda79a18292dfbb7",
"avatarUrl": "/avatars/1201b8282664c2d8c18beaba2396c03b.svg",
"fullname": "Alsu Sagirova",
"isPro": false,
"type": "user",
"user": "alsu-sagirova"
}
},
{
"_id": "67933d69b843fda452c689de",
"hidden": false,
"name": "Yuri Kuratov",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-26T11:38:38.386Z",
"user": {
"_id": "618b9540682ec1c38327e586",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/618b9540682ec1c38327e586/v_ZBkfh8O9Zh6C2YQpuBX.jpeg",
"fullname": "Yury Kuratov",
"isPro": false,
"type": "user",
"user": "yurakuratov"
}
},
{
"_id": "67933d69b843fda452c689df",
"hidden": false,
"name": "Mikhail Burtsev",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:07:03.954Z",
"user": {
"_id": "639c6e978a34ed9a404c6a7b",
"avatarUrl": "/avatars/c98ca8c9f9ed8509c2f1bb6aa994fd57.svg",
"fullname": "MIKHAIL BURTSEV",
"isPro": false,
"type": "user",
"user": "mbur"
}
}
] | 2025-01-22T20:08:53 |
SRMT: Shared Memory for Multi-agent Lifelong Pathfinding
|
Multi-agent reinforcement learning (MARL) demonstrates significant progress
in solving cooperative and competitive multi-agent problems in various
environments. One of the principal challenges in MARL is the need for explicit
prediction of the agents' behavior to achieve cooperation. To resolve this
issue, we propose the Shared Recurrent Memory Transformer (SRMT) which extends
memory transformers to multi-agent settings by pooling and globally
broadcasting individual working memories, enabling agents to exchange
information implicitly and coordinate their actions. We evaluate SRMT on the
Partially Observable Multi-Agent Pathfinding problem in a toy Bottleneck
navigation task that requires agents to pass through a narrow corridor and on a
POGEMA benchmark set of tasks. In the Bottleneck task, SRMT consistently
outperforms a variety of reinforcement learning baselines, especially under
sparse rewards, and generalizes effectively to longer corridors than those seen
during training. On POGEMA maps, including Mazes, Random, and MovingAI, SRMT is
competitive with recent MARL, hybrid, and planning-based algorithms. These
results suggest that incorporating shared recurrent memory into the
transformer-based architectures can enhance coordination in decentralized
multi-agent systems. The source code for training and evaluation is available
on GitHub: https://github.com/Aloriosa/srmt.
| 65 |
67933d6ab843fda452c68a38
| null | null |
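The SRMT abstract above says individual working memories are pooled and globally broadcast so agents coordinate implicitly. A minimal numeric sketch of that pooling step follows; the real model performs learned transformer read/write operations rather than a plain mean:

```python
# Sketch of SRMT-style memory sharing: pool per-agent working memories by
# element-wise mean and broadcast the result back to every agent.
# The actual SRMT uses learned attention-based reads/writes.

def shared_memory(step_memories):
    """Pool a list of per-agent memory vectors and broadcast the pool.

    step_memories: list of equal-length memory vectors, one per agent.
    Returns one pooled vector per agent (identical copies).
    """
    n = len(step_memories)
    pooled = [sum(vals) / n for vals in zip(*step_memories)]
    return [list(pooled) for _ in step_memories]
```

Because every agent reads the same pooled state, each one can condition its policy on what the others have written, without any explicit communication channel.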
|
2025-01-24T01:17:22.150000 |
Evolution and The Knightian Blindspot of Machine Learning
| 2 |
{
"_id": "6444241e9c1bd83bd19ea70f",
"avatarUrl": "/avatars/24b4e65f26f5f8dcc1465cef67fd334b.svg",
"followerCount": 1,
"fullname": "Joel Lehman",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "jal278",
"type": "user"
}
| true | null |
2501.13075
|
[
{
"_id": "6791ae54330198cc26b72479",
"hidden": false,
"name": "Joel Lehman",
"status": "extracted_pending",
"statusLastChangedAt": "2025-01-23T02:49:57.110Z",
"user": {
"_id": "6444241e9c1bd83bd19ea70f",
"avatarUrl": "/avatars/24b4e65f26f5f8dcc1465cef67fd334b.svg",
"fullname": "Joel Lehman",
"isPro": false,
"type": "user",
"user": "jal278"
}
},
{
"_id": "6791ae54330198cc26b7247a",
"hidden": false,
"name": "Elliot Meyerson",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:20:51.128Z",
"user": {
"_id": "6514b7fde1273c28705142cc",
"avatarUrl": "/avatars/072bf14abd8ef17d9393338a20157cc2.svg",
"fullname": "Elliot Meyerson",
"isPro": false,
"type": "user",
"user": "ekmeyerson"
}
},
{
"_id": "6791ae54330198cc26b7247b",
"hidden": false,
"name": "Tarek El-Gaaly",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791ae54330198cc26b7247c",
"hidden": false,
"name": "Kenneth O. Stanley",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791ae54330198cc26b7247d",
"hidden": false,
"name": "Tarin Ziyaee",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-22T18:38:41 |
Evolution and The Knightian Blindspot of Machine Learning
|
This paper claims that machine learning (ML) largely overlooks an important
facet of general intelligence: robustness to a qualitatively unknown future in
an open world. Such robustness relates to Knightian uncertainty (KU) in
economics, i.e. uncertainty that cannot be quantified, which is excluded from
consideration in ML's key formalisms. This paper aims to identify this blind
spot, argue its importance, and catalyze research into addressing it, which we
believe is necessary to create truly robust open-world AI. To help illuminate
the blind spot, we contrast one area of ML, reinforcement learning (RL), with
the process of biological evolution. Despite staggering ongoing progress, RL
still struggles in open-world settings, often failing under unforeseen
conditions. For example, the idea of zero-shot transferring a self-driving car
policy trained only in the US to the UK currently seems exceedingly ambitious.
In dramatic contrast, biological evolution routinely produces agents that
thrive within an open world, sometimes even in situations that are remarkably
out-of-distribution (e.g. invasive species; or humans, who do undertake such
zero-shot international driving). Interestingly, evolution achieves such
robustness without explicit theory, formalisms, or mathematical gradients. We
explore the assumptions underlying RL's typical formalisms, showing how they
limit RL's engagement with the unknown unknowns characteristic of an
ever-changing complex world. Further, we identify mechanisms through which
evolutionary processes foster robustness to novel and unpredictable challenges,
and discuss potential pathways to algorithmically embody them. The conclusion
is that the intriguing remaining fragility of ML may result from blind spots in
its formalisms, and that significant gains may result from direct confrontation
with the challenge of KU.
| 6 |
6791ae55330198cc26b724bc
| null | null |
|
2025-01-23T23:35:50.957000 |
Debate Helps Weak-to-Strong Generalization
| 2 |
{
"_id": "62e0ef42edb0462c8d51818d",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62e0ef42edb0462c8d51818d/3YM7DUynIWiiRFM6_enpg.jpeg",
"followerCount": 10,
"fullname": "Ting-En Lin",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "tnlin",
"type": "user"
}
| false | null |
2501.13124
|
[
{
"_id": "6793188b56f015277a9ed95c",
"hidden": false,
"name": "Hao Lang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:09:24.142Z",
"user": {
"_id": "65a6131fee7aa779f5bf8329",
"avatarUrl": "/avatars/aa25cc3153fd7e511b51b801e8107564.svg",
"fullname": "langhao",
"isPro": false,
"type": "user",
"user": "langnick"
}
},
{
"_id": "6793188b56f015277a9ed95d",
"hidden": false,
"name": "Fei Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:09:39.353Z",
"user": {
"_id": "635b8b6a37c6a2c12e2cce00",
"avatarUrl": "/avatars/229fb72180529141515d1df797b33709.svg",
"fullname": "Fei Huang",
"isPro": false,
"type": "user",
"user": "hzhwcmhf"
}
},
{
"_id": "6793188b56f015277a9ed95e",
"hidden": false,
"name": "Yongbin Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:08:59.394Z",
"user": {
"_id": "66641b2fd8e1e34bc621e688",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/66641b2fd8e1e34bc621e688/csPETwnx2zCIHSWi9uAi-.png",
"fullname": "Yongbin Li",
"isPro": false,
"type": "user",
"user": "Yongbin-Li"
}
}
] | 2025-01-21T05:36:13 |
Debate Helps Weak-to-Strong Generalization
|
Common methods for aligning already-capable models with desired behavior rely
on the ability of humans to provide supervision. However, future superhuman
models will surpass the capability of humans. Therefore, humans will only be
able to weakly supervise superhuman models. This expected deficiency of human
evaluation would weaken the safety of future AI systems. Scalable oversight and
weak-to-strong generalization are two complementary approaches to tackle this
issue. In this paper, we attempt to combine the strengths of these two
approaches to further improve alignment. Specifically, we investigate ways of
improving human supervision with a strong pretrained model and then supervise
the strong model with enhanced weak human supervision. To make iterative
empirical progress, we consider an analogy: can we use a strong model to
improve weak model supervision and then use it to supervise the strong model?
We empirically test it by finetuning a small weak model on ground truth labels
with the additional help from a large strong model, and then finetuning the
strong model on labels generated by the weak model. We find that debate can
assist a weak model in extracting trustworthy information from an untrustworthy
strong model; this information provides useful context when training the weak
model. We also show that an ensemble of weak models helps exploit the long
arguments generated by strong model debaters to obtain a more robust
supervision estimate. Extensive experiments on the OpenAI weak-to-strong NLP
benchmarks show that the combination approach leads to better alignment, which
indicates that debate has the potential to help weak-to-strong generalization.
| 7 |
6793188d56f015277a9ed9aa
| null | null |
|
2025-01-23T23:33:03.175000 |
Temporal Preference Optimization for Long-Form Video Understanding
| 3 |
{
"_id": "645dbaa6f5760d1530d7580d",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/645dbaa6f5760d1530d7580d/Bqob8arLZoHIgMwNZpL9I.jpeg",
"followerCount": 31,
"fullname": "Simeon Emanuilov",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "s-emanuilov",
"type": "user"
}
| false | null |
2501.13919
|
[
{
"_id": "679317f9d3ef2f790a539a28",
"hidden": false,
"name": "Rui Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-24T10:17:34.282Z",
"user": {
"_id": "6785fc7d17a2dfa3720ec082",
"avatarUrl": "/avatars/73e9d715bb16f14240c733c4843dfc22.svg",
"fullname": "Rui Li",
"isPro": false,
"type": "user",
"user": "ruili0"
}
},
{
"_id": "679317f9d3ef2f790a539a29",
"hidden": false,
"name": "Xiaohan Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:11:30.999Z",
"user": {
"_id": "65703fab7f50602340d23704",
"avatarUrl": "/avatars/324c45f5fba9cd8c38a89b30427c06b4.svg",
"fullname": "Xiaohan Wang",
"isPro": false,
"type": "user",
"user": "nicholswang"
}
},
{
"_id": "679317f9d3ef2f790a539a2a",
"hidden": false,
"name": "Yuhui Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:11:46.689Z",
"user": {
"_id": "62da55164398e21bf7f0e292",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62da55164398e21bf7f0e292/xjKkG8IA2IZZqCdjApSh3.jpeg",
"fullname": "Yuhui Zhang",
"isPro": false,
"type": "user",
"user": "yuhuizhang"
}
},
{
"_id": "679317f9d3ef2f790a539a2b",
"hidden": false,
"name": "Zeyu Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679317f9d3ef2f790a539a2c",
"hidden": false,
"name": "Serena Yeung-Levy",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:09:52.788Z",
"user": {
"_id": "677c8b2e92550a07fcad0f50",
"avatarUrl": "/avatars/2be26e8f25e98cfe5b1d227ee0409cd0.svg",
"fullname": "Serena Yeung-Levy",
"isPro": false,
"type": "user",
"user": "yeunglevy"
}
}
] | 2025-01-23T18:58:03 |
Temporal Preference Optimization for Long-Form Video Understanding
|
Despite significant advancements in video large multimodal models
(video-LMMs), achieving effective temporal grounding in long-form videos
remains a challenge for existing models. To address this limitation, we propose
Temporal Preference Optimization (TPO), a novel post-training framework
designed to enhance the temporal grounding capabilities of video-LMMs through
preference learning. TPO adopts a self-training approach that enables models to
differentiate between well-grounded and less accurate temporal responses by
leveraging curated preference datasets at two granularities: localized temporal
grounding, which focuses on specific video segments, and comprehensive temporal
grounding, which captures extended temporal dependencies across entire video
sequences. By optimizing on these preference datasets, TPO significantly
enhances temporal understanding while reducing reliance on manually annotated
data. Extensive experiments on three long-form video understanding
benchmarks--LongVideoBench, MLVU, and Video-MME--demonstrate the effectiveness
of TPO across two state-of-the-art video-LMMs. Notably, LLaVA-Video-TPO
establishes itself as the leading 7B model on the Video-MME benchmark,
underscoring the potential of TPO as a scalable and efficient solution for
advancing temporal reasoning in long-form video understanding. Project page:
https://ruili33.github.io/tpo_website.
| 22 |
679317fcd3ef2f790a539ad6
| null | null |
|
2025-01-23T23:31:27.973000 |
IMAGINE-E: Image Generation Intelligence Evaluation of State-of-the-art Text-to-Image Models
| 2 |
{
"_id": "645dbaa6f5760d1530d7580d",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/645dbaa6f5760d1530d7580d/Bqob8arLZoHIgMwNZpL9I.jpeg",
"followerCount": 31,
"fullname": "Simeon Emanuilov",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "s-emanuilov",
"type": "user"
}
| false | null |
2501.13920
|
[
{
"_id": "679316ff3698fd97252a8e6f",
"hidden": false,
"name": "Jiayi Lei",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:12:22.641Z",
"user": {
"_id": "64c3c72e8f31d1e6c664b052",
"avatarUrl": "/avatars/af1ad5048eaa9dc417837ad02f927911.svg",
"fullname": "jiayi lei",
"isPro": false,
"type": "user",
"user": "jyjyjyjy"
}
},
{
"_id": "679316ff3698fd97252a8e70",
"hidden": false,
"name": "Renrui Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679316ff3698fd97252a8e71",
"hidden": false,
"name": "Xiangfei Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679316ff3698fd97252a8e72",
"hidden": false,
"name": "Weifeng Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:13:07.303Z",
"user": {
"_id": "66026c9068d519ed32519e9c",
"avatarUrl": "/avatars/8fa051312c713772e5b8ba65989ff7f5.svg",
"fullname": "Weifeng Lin",
"isPro": false,
"type": "user",
"user": "Afeng-x"
}
},
{
"_id": "679316ff3698fd97252a8e73",
"hidden": false,
"name": "Zhen Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-24T20:51:38.124Z",
"user": {
"_id": "6285a9133ab6642179158944",
"avatarUrl": "/avatars/6e10fa07c94141fcdbe0cab02bb731ca.svg",
"fullname": "Zhen Li",
"isPro": false,
"type": "user",
"user": "Paper99"
}
},
{
"_id": "679316ff3698fd97252a8e74",
"hidden": false,
"name": "Wenjian Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679316ff3698fd97252a8e75",
"hidden": false,
"name": "Ruoyi Du",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:13:21.861Z",
"user": {
"_id": "64a54586c0f13de8e7093314",
"avatarUrl": "/avatars/389e43e9a32cf2fc95f8f3a23b8f0508.svg",
"fullname": "Ruoyi Du",
"isPro": false,
"type": "user",
"user": "RuoyiDu"
}
},
{
"_id": "679316ff3698fd97252a8e76",
"hidden": false,
"name": "Le Zhuo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:13:27.523Z",
"user": {
"_id": "6358a167f56b03ec9147074d",
"avatarUrl": "/avatars/e54ea7bf0c240cf76d538296efb3976c.svg",
"fullname": "Le Zhuo",
"isPro": false,
"type": "user",
"user": "JackyZhuo"
}
},
{
"_id": "679316ff3698fd97252a8e77",
"hidden": false,
"name": "Zhongyu Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:13:33.108Z",
"user": {
"_id": "6740a5730bb4a675446a80ad",
"avatarUrl": "/avatars/27c08e33df88e4f73c136da65f2b5adb.svg",
"fullname": "Zhong-Yu Li",
"isPro": false,
"type": "user",
"user": "lzyhha"
}
},
{
"_id": "679316ff3698fd97252a8e78",
"hidden": false,
"name": "Xinyue Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679316ff3698fd97252a8e79",
"hidden": false,
"name": "Shitian Zhao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-24T13:30:14.688Z",
"user": {
"_id": "62c66504031996c36c86976a",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62c66504031996c36c86976a/wIq0YJhkWnEhlzsh-TGYO.png",
"fullname": "steve z",
"isPro": false,
"type": "user",
"user": "stzhao"
}
},
{
"_id": "679316ff3698fd97252a8e7a",
"hidden": false,
"name": "Ziyu Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:14:21.821Z",
"user": {
"_id": "647d9ab61a1fcad2fdbf2d3d",
"avatarUrl": "/avatars/48c8aeae8979d2c87df8bde922437d62.svg",
"fullname": "Ziyu Guo",
"isPro": true,
"type": "user",
"user": "ZiyuG"
}
},
{
"_id": "679316ff3698fd97252a8e7b",
"hidden": false,
"name": "Yiting Lu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:14:50.714Z",
"user": {
"_id": "6614fb3d5aed02b298a4b469",
"avatarUrl": "/avatars/d0ddb4f989ad1a3f24128cc843347bde.svg",
"fullname": "yiting lu",
"isPro": false,
"type": "user",
"user": "yeeeeeyy"
}
},
{
"_id": "679316ff3698fd97252a8e7c",
"hidden": false,
"name": "Peng Gao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:15:04.489Z",
"user": {
"_id": "6759af3eccbc8817f9169179",
"avatarUrl": "/avatars/49e64c7ccf71b8f25c52783b6ae93620.svg",
"fullname": "Peng Gao",
"isPro": false,
"type": "user",
"user": "gaopenghigh"
}
},
{
"_id": "679316ff3698fd97252a8e7d",
"hidden": false,
"name": "Hongsheng Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:15:11.437Z",
"user": {
"_id": "65c04e9c27a5fdca81abcbd9",
"avatarUrl": "/avatars/12a155683c824fa23da4a9e2bed4f64e.svg",
"fullname": "Hongsheng LI",
"isPro": false,
"type": "user",
"user": "hsli-cuhk"
}
}
] | 2025-01-23T18:58:33 |
IMAGINE-E: Image Generation Intelligence Evaluation of State-of-the-art
Text-to-Image Models
|
With the rapid development of diffusion models, text-to-image (T2I) models
have made significant progress, showcasing impressive abilities in prompt
following and image generation. Recently launched models such as FLUX.1 and
Ideogram2.0, along with others like Dall-E3 and Stable Diffusion 3, have
demonstrated exceptional performance across various complex tasks, raising
questions about whether T2I models are moving towards general-purpose
applicability. Beyond traditional image generation, these models exhibit
capabilities across a range of fields, including controllable generation, image
editing, video, audio, 3D, and motion generation, as well as computer vision
tasks like semantic segmentation and depth estimation. However, current
evaluation frameworks are insufficient to comprehensively assess these models'
performance across these expanding domains. To thoroughly evaluate these
models, we developed IMAGINE-E and tested six prominent models: FLUX.1,
Ideogram2.0, Midjourney, Dall-E3, Stable Diffusion 3, and Jimeng. Our
evaluation is divided into five key domains: structured output generation,
realism and physical consistency, specific domain generation, challenging
scenario generation, and
multi-style creation tasks. This comprehensive assessment highlights each
model's strengths and limitations, particularly the outstanding performance of
FLUX.1 and Ideogram2.0 in structured and specific domain tasks, underscoring
the expanding applications and potential of T2I models as foundational AI
tools. This study provides valuable insights into the current state and future
trajectory of T2I models as they evolve towards general-purpose usability.
Evaluation scripts will be released at https://github.com/jylei16/Imagine-e.
| 15 |
679317043698fd97252a8f6f
| null | null |
|
2025-01-23T22:48:16.405000 |
Sigma: Differential Rescaling of Query, Key and Value for Efficient Language Models
| 2 |
{
"_id": "63fb6e281b4b1bd4e7ffc5be",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1677422062937-noauth.jpeg",
"followerCount": 9,
"fullname": "Xiao Liu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "lx865712528",
"type": "user"
}
| true | null |
2501.13629
|
[
{
"_id": "6792f8ed5e3ec6035dafb06a",
"hidden": false,
"name": "Zhenghao Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T09:14:52.584Z",
"user": {
"_id": "63776f1806241efce1e7aae6",
"avatarUrl": "/avatars/d67d9dcd932934c630f407ac152f2ce6.svg",
"fullname": "Zhenghao Lin",
"isPro": false,
"type": "user",
"user": "Lin0"
}
},
{
"_id": "6792f8ed5e3ec6035dafb06b",
"hidden": false,
"name": "Zihao Tang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T09:15:04.802Z",
"user": {
"_id": "656c6bd8e0ff1cebe966aa35",
"avatarUrl": "/avatars/1083cb58bdb0bee72036953276d42e13.svg",
"fullname": "tangzihao",
"isPro": false,
"type": "user",
"user": "tzh94588"
}
},
{
"_id": "6792f8ed5e3ec6035dafb06c",
"hidden": false,
"name": "Xiao Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-24T09:08:05.797Z",
"user": {
"_id": "63fb6e281b4b1bd4e7ffc5be",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1677422062937-noauth.jpeg",
"fullname": "Xiao Liu",
"isPro": false,
"type": "user",
"user": "lx865712528"
}
},
{
"_id": "6792f8ed5e3ec6035dafb06d",
"hidden": false,
"name": "Yeyun Gong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T09:58:33.228Z",
"user": {
"_id": "643f615aa16cd6d1f4c581de",
"avatarUrl": "/avatars/47753a3e82b44f81881600c52e1e8495.svg",
"fullname": "Yeyun Gong",
"isPro": false,
"type": "user",
"user": "yegong"
}
},
{
"_id": "6792f8ed5e3ec6035dafb06e",
"hidden": false,
"name": "Yi Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f8ed5e3ec6035dafb06f",
"hidden": false,
"name": "Qi Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f8ed5e3ec6035dafb070",
"hidden": false,
"name": "Hang Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:03:59.680Z",
"user": {
"_id": "61342a4b488458a484dee6c4",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1630808595161-noauth.png",
"fullname": "Hang Li",
"isPro": false,
"type": "user",
"user": "hanglics"
}
},
{
"_id": "6792f8ed5e3ec6035dafb071",
"hidden": false,
"name": "Ying Xin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f8ed5e3ec6035dafb072",
"hidden": false,
"name": "Ziyue Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:04:09.709Z",
"user": {
"_id": "62f6a9add3bdacb7eec0d4f5",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1660332390183-noauth.jpeg",
"fullname": "Ziyue Yang",
"isPro": false,
"type": "user",
"user": "ziyueyang37"
}
},
{
"_id": "6792f8ed5e3ec6035dafb073",
"hidden": false,
"name": "Kailai Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:04:16.033Z",
"user": {
"_id": "646fc402e9c03ba436d5e93e",
"avatarUrl": "/avatars/870c86dc99fb1cb6a348a7a0385b1a04.svg",
"fullname": "Kailai Yang",
"isPro": false,
"type": "user",
"user": "klyang"
}
},
{
"_id": "6792f8ed5e3ec6035dafb074",
"hidden": false,
"name": "Yu Yan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f8ed5e3ec6035dafb075",
"hidden": false,
"name": "Xiao Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f8ed5e3ec6035dafb076",
"hidden": false,
"name": "Shuai Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f8ed5e3ec6035dafb077",
"hidden": false,
"name": "Yiming Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f8ed5e3ec6035dafb078",
"hidden": false,
"name": "Zheheng Luo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:04:49.140Z",
"user": {
"_id": "6443bb593c323e0918f61a96",
"avatarUrl": "/avatars/b9e1ba17f7798b5142bc0124fba95237.svg",
"fullname": "zheheng luo",
"isPro": false,
"type": "user",
"user": "KenLuo"
}
},
{
"_id": "6792f8ed5e3ec6035dafb079",
"hidden": false,
"name": "Lei Qu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f8ed5e3ec6035dafb07a",
"hidden": false,
"name": "Xuan Feng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f8ed5e3ec6035dafb07b",
"hidden": false,
"name": "Yaoxiang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f8ed5e3ec6035dafb07c",
"hidden": false,
"name": "Yuqing Xia",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:05:26.287Z",
"user": {
"_id": "6369e01864aad59d4d4501ac",
"avatarUrl": "/avatars/bcbd3f9d0d194eeccd061c4fa6a6e283.svg",
"fullname": "Yuqing Xia",
"isPro": false,
"type": "user",
"user": "yuqxia"
}
},
{
"_id": "6792f8ed5e3ec6035dafb07d",
"hidden": false,
"name": "Feiyang Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:05:34.991Z",
"user": {
"_id": "673fd856a45b6f21829a3bf5",
"avatarUrl": "/avatars/deb8c5362fad22019cccaed6d03dea09.svg",
"fullname": "Feiyang Chen",
"isPro": false,
"type": "user",
"user": "PhilipChen"
}
},
{
"_id": "6792f8ed5e3ec6035dafb07e",
"hidden": false,
"name": "Yuting Jiang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:05:41.151Z",
"user": {
"_id": "64e85f4e5ddcace745bc0a55",
"avatarUrl": "/avatars/e316355b913c73104db530010ceedeb4.svg",
"fullname": "Yuting Jiang",
"isPro": false,
"type": "user",
"user": "Stautinger"
}
},
{
"_id": "6792f8ed5e3ec6035dafb07f",
"hidden": false,
"name": "Yasen Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f8ed5e3ec6035dafb080",
"hidden": false,
"name": "Hao Ni",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f8ed5e3ec6035dafb081",
"hidden": false,
"name": "Binyang Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:06:03.263Z",
"user": {
"_id": "6485714cfc41a0b97fe377cc",
"avatarUrl": "/avatars/0af8a3df9ad711a5eac739bce26c4c2a.svg",
"fullname": "Li",
"isPro": false,
"type": "user",
"user": "Binyang"
}
},
{
"_id": "6792f8ed5e3ec6035dafb082",
"hidden": false,
"name": "Guoshuai Zhao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:05:17.780Z",
"user": {
"_id": "663de80ca920d195191807da",
"avatarUrl": "/avatars/2437ce3fa073a07b971d370c26c7ab65.svg",
"fullname": "Guoshuai Zhao",
"isPro": false,
"type": "user",
"user": "crayonshine"
}
},
{
"_id": "6792f8ed5e3ec6035dafb083",
"hidden": false,
"name": "Jui-Hao Chiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f8ed5e3ec6035dafb084",
"hidden": false,
"name": "Zhongxin Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f8ed5e3ec6035dafb085",
"hidden": false,
"name": "Chen Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f8ed5e3ec6035dafb086",
"hidden": false,
"name": "Kun Kuang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f8ed5e3ec6035dafb087",
"hidden": false,
"name": "Wenjie Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:06:22.951Z",
"user": {
"_id": "66a3710a4ee2a4c936315a5a",
"avatarUrl": "/avatars/ef8da8fb1031695d77d34a5d365aa177.svg",
"fullname": "Li",
"isPro": false,
"type": "user",
"user": "WenjieLi"
}
},
{
"_id": "6792f8ed5e3ec6035dafb088",
"hidden": false,
"name": "Yelong Shen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:06:30.109Z",
"user": {
"_id": "6454c337a13edf669cd5d8ea",
"avatarUrl": "/avatars/a383a0dda7c2ef6a0d6c3c64651f42ff.svg",
"fullname": "Yelong Shen",
"isPro": false,
"type": "user",
"user": "uuu6"
}
},
{
"_id": "6792f8ed5e3ec6035dafb089",
"hidden": false,
"name": "Jian Jiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f8ed5e3ec6035dafb08a",
"hidden": false,
"name": "Peng Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f8ed5e3ec6035dafb08b",
"hidden": false,
"name": "Mao Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-23T12:58:14 |
Sigma: Differential Rescaling of Query, Key and Value for Efficient
Language Models
|
We introduce Sigma, an efficient large language model specialized for the
system domain, empowered by a novel architecture including DiffQKV attention,
and pre-trained on our meticulously collected system domain data. DiffQKV
attention significantly enhances the inference efficiency of Sigma by
optimizing the Query (Q), Key (K), and Value (V) components in the attention
mechanism differentially, based on their varying impacts on the model
performance and efficiency indicators. Specifically, we (1) conduct extensive
experiments that demonstrate the model's varying sensitivity to the compression
of K and V components, leading to the development of differentially compressed
KV, and (2) propose augmented Q to expand the Q head dimension, which enhances
the model's representation capacity with minimal impacts on the inference
speed. Rigorous theoretical and empirical analyses reveal that DiffQKV
attention significantly enhances efficiency, achieving up to a 33.36%
improvement in inference speed over the conventional grouped-query attention
(GQA) in long-context scenarios. We pre-train Sigma on 6T tokens from various
sources, including 19.5B tokens of carefully collected system domain data and
1T tokens of synthesized and rewritten data. In general domains, Sigma achieves
performance comparable to other state-of-the-art models. In the system domain, we
introduce the first comprehensive benchmark AIMicius, where Sigma demonstrates
remarkable performance across all tasks, significantly outperforming GPT-4 with
an absolute improvement of up to 52.5%.
| 44 |
6792f8f05e3ec6035dafb140
| null | null |
|
2025-01-23T22:32:09.207000 |
Step-KTO: Optimizing Mathematical Reasoning through Stepwise Binary Feedback
| 3 |
{
"_id": "5df9c78eda6d0311fd3d541f",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/5df9c78eda6d0311fd3d541f/8oDFuP77l5zhvamXNVmnc.jpeg",
"followerCount": 313,
"fullname": "Yen-Ting Lin",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "yentinglin",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/VXjkUKeidLg_JO5d3RWUG.jpeg"
] |
2501.10799
|
[
{
"_id": "679208664e521de952ca0cdc",
"hidden": false,
"name": "Yen-Ting Lin",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-23T09:17:30.742Z",
"user": {
"_id": "5df9c78eda6d0311fd3d541f",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/5df9c78eda6d0311fd3d541f/8oDFuP77l5zhvamXNVmnc.jpeg",
"fullname": "Yen-Ting Lin",
"isPro": true,
"type": "user",
"user": "yentinglin"
}
},
{
"_id": "679208664e521de952ca0cdd",
"hidden": false,
"name": "Di Jin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:15:36.716Z",
"user": {
"_id": "62f690dfc58915315c504af5",
"avatarUrl": "/avatars/a6732dda8cd7e37d9c0e0b1dfb68c66b.svg",
"fullname": "Di Jin",
"isPro": false,
"type": "user",
"user": "jindi"
}
},
{
"_id": "679208664e521de952ca0cde",
"hidden": false,
"name": "Tengyu Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679208664e521de952ca0cdf",
"hidden": false,
"name": "Tianhao Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679208664e521de952ca0ce0",
"hidden": false,
"name": "Sainbayar Sukhbaatar",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:16:23.763Z",
"user": {
"_id": "66a8611eb51510d82ed54231",
"avatarUrl": "/avatars/ad559e774fee4914091b82c9831ae2a2.svg",
"fullname": "Sainbayar Sukhbaatar",
"isPro": false,
"type": "user",
"user": "sainbar"
}
},
{
"_id": "679208664e521de952ca0ce1",
"hidden": false,
"name": "Chen Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679208664e521de952ca0ce2",
"hidden": false,
"name": "Yun He",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:16:34.273Z",
"user": {
"_id": "6437de5d51c7ebfc813ce68a",
"avatarUrl": "/avatars/144cb1c5d5a4a645080611953494f437.svg",
"fullname": "he",
"isPro": false,
"type": "user",
"user": "yunhe"
}
},
{
"_id": "679208664e521de952ca0ce3",
"hidden": false,
"name": "Yun-Nung Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679208664e521de952ca0ce4",
"hidden": false,
"name": "Jason Weston",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:15:25.564Z",
"user": {
"_id": "62f023a36a027498eaa2f9cc",
"avatarUrl": "/avatars/8ac1c5c74d0957e3c6cc94b3a7795c37.svg",
"fullname": "Jason Weston",
"isPro": false,
"type": "user",
"user": "spermwhale"
}
},
{
"_id": "679208664e521de952ca0ce5",
"hidden": false,
"name": "Yuandong Tian",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:16:47.602Z",
"user": {
"_id": "6344cf73ee1504dbcd5bdfe7",
"avatarUrl": "/avatars/6dd2bf1f9c5679e5c8c85d62c9836aac.svg",
"fullname": "Yuandong Tian",
"isPro": false,
"type": "user",
"user": "tydsh"
}
},
{
"_id": "679208664e521de952ca0ce6",
"hidden": false,
"name": "Arash Rahnama",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679208664e521de952ca0ce7",
"hidden": false,
"name": "Sinong Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:17:03.211Z",
"user": {
"_id": "65b483c5ed110eb9f1ee62df",
"avatarUrl": "/avatars/29100098f5aed1735675d06c516a85b7.svg",
"fullname": "Sinong Wang",
"isPro": false,
"type": "user",
"user": "TheronWong"
}
},
{
"_id": "679208664e521de952ca0ce8",
"hidden": false,
"name": "Hao Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679208664e521de952ca0ce9",
"hidden": false,
"name": "Han Fang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-18T15:38:03 |
Step-KTO: Optimizing Mathematical Reasoning through Stepwise Binary
Feedback
|
Large language models (LLMs) have recently demonstrated remarkable success in
mathematical reasoning. Despite progress in methods like chain-of-thought
prompting and self-consistency sampling, these advances often focus on final
correctness without ensuring that the underlying reasoning process is coherent
and reliable. This paper introduces Step-KTO, a training framework that
combines process-level and outcome-level binary feedback to guide LLMs toward
more trustworthy reasoning trajectories. By providing binary evaluations for
both the intermediate reasoning steps and the final answer, Step-KTO encourages
the model to adhere to logical progressions rather than relying on superficial
shortcuts. Our experiments on challenging mathematical benchmarks show that
Step-KTO significantly improves both final answer accuracy and the quality of
intermediate reasoning steps. For example, on the MATH-500 dataset, Step-KTO
achieves a notable improvement in Pass@1 accuracy over strong baselines. These
results highlight the promise of integrating stepwise process feedback into LLM
training, paving the way toward more interpretable and dependable reasoning
capabilities.
| 15 |
679208674e521de952ca0d2f
| null | null |
|
2025-01-23T22:08:17.598000 |
Can We Generate Images with CoT? Let's Verify and Reinforce Image Generation Step by Step
| 2 |
{
"_id": "63468720dd6d90d82ccf3450",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63468720dd6d90d82ccf3450/tVBFlmZNz8FRMkOrDaDID.jpeg",
"followerCount": 32,
"fullname": "YSH",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "BestWishYsh",
"type": "user"
}
| false | null |
2501.13926
|
[
{
"_id": "6793040ec67af4a116a25d05",
"hidden": false,
"name": "Ziyu Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:07:58.258Z",
"user": {
"_id": "647d9ab61a1fcad2fdbf2d3d",
"avatarUrl": "/avatars/48c8aeae8979d2c87df8bde922437d62.svg",
"fullname": "Ziyu Guo",
"isPro": true,
"type": "user",
"user": "ZiyuG"
}
},
{
"_id": "6793040ec67af4a116a25d06",
"hidden": false,
"name": "Renrui Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6793040ec67af4a116a25d07",
"hidden": false,
"name": "Chengzhuo Tong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6793040ec67af4a116a25d08",
"hidden": false,
"name": "Zhizheng Zhao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:08:20.272Z",
"user": {
"_id": "6713a71e7dfe714b425cccfb",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/95YYcbv_f6J8yWTunwn4z.png",
"fullname": "zhizhengzhao",
"isPro": false,
"type": "user",
"user": "zhizhengzhao"
}
},
{
"_id": "6793040ec67af4a116a25d09",
"hidden": false,
"name": "Peng Gao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:08:26.816Z",
"user": {
"_id": "6759af3eccbc8817f9169179",
"avatarUrl": "/avatars/49e64c7ccf71b8f25c52783b6ae93620.svg",
"fullname": "Peng Gao",
"isPro": false,
"type": "user",
"user": "gaopenghigh"
}
},
{
"_id": "6793040ec67af4a116a25d0a",
"hidden": false,
"name": "Hongsheng Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-24T10:08:33.312Z",
"user": {
"_id": "65c04e9c27a5fdca81abcbd9",
"avatarUrl": "/avatars/12a155683c824fa23da4a9e2bed4f64e.svg",
"fullname": "Hongsheng LI",
"isPro": false,
"type": "user",
"user": "hsli-cuhk"
}
},
{
"_id": "6793040ec67af4a116a25d0b",
"hidden": false,
"name": "Pheng-Ann Heng",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-23T18:59:43 |
Can We Generate Images with CoT? Let's Verify and Reinforce Image
Generation Step by Step
|
Chain-of-Thought (CoT) reasoning has been extensively explored in large
models to tackle complex understanding tasks. However, it remains an open
question whether such strategies can be applied to verify and reinforce image
generation scenarios. In this paper, we provide the first comprehensive
investigation of the potential of CoT reasoning to enhance autoregressive image
generation. We focus on three techniques: scaling test-time computation for
verification, aligning model preferences with Direct Preference Optimization
(DPO), and integrating these techniques for complementary effects. Our results
demonstrate that these approaches can be effectively adapted and combined to
significantly improve image generation performance. Furthermore, given the
pivotal role of reward models in our findings, we propose the Potential
Assessment Reward Model (PARM) and PARM++, specialized for autoregressive image
generation. PARM adaptively assesses each generation step through a potential
assessment approach, merging the strengths of existing reward models, and
PARM++ further introduces a reflection mechanism to self-correct unsatisfactory
generated images. Using our investigated reasoning strategies, we enhance a
baseline model, Show-o, to achieve superior results, with a significant +24%
improvement on the GenEval benchmark, surpassing Stable Diffusion 3 by +15%. We
hope our study provides unique insights and paves a new path for integrating
CoT reasoning with autoregressive image generation. Code and models are
released at https://github.com/ZiyuGuo99/Image-Generation-CoT
| 37 |
67930410c67af4a116a25da4
| null | null |
|
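The abstract above scales test-time computation by sampling several candidate generations and letting a reward model pick the best. A toy best-of-N selector, with a keyword-counting stand-in for a PARM-style reward (the reward function and all names here are illustrative, not the paper's model):

```python
# Hedged sketch of test-time best-of-N verification: score every candidate
# with a reward function and keep the argmax.

def best_of_n(candidates, reward):
    """Return the candidate with the highest reward score."""
    return max(candidates, key=reward)

# Toy reward: count how many prompt keywords the candidate covers.
keywords = {"red", "cube", "left"}
pick = best_of_n(
    ["a red ball", "a red cube on the left", "a cube"],
    lambda c: sum(kw in c for kw in keywords),
)
```

In the paper's setting the reward would instead come from a learned model assessing each generation step; the selection logic itself stays this simple.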
2025-01-23T21:11:50.510000 |
Fast3R: Towards 3D Reconstruction of 1000+ Images in One Forward Pass
| 5 |
{
"_id": "646afdd535c7a57f936d3ff5",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/LsDQPftQrnp_Gvanzim4J.jpeg",
"followerCount": 4,
"fullname": "Jed Yang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "jedyang97",
"type": "user"
}
| true | null |
2501.13928
|
[
{
"_id": "6792f5f6dc641d1a72b00736",
"hidden": false,
"name": "Jianing Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-24T09:08:08.128Z",
"user": {
"_id": "646afdd535c7a57f936d3ff5",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/LsDQPftQrnp_Gvanzim4J.jpeg",
"fullname": "Jed Yang",
"isPro": false,
"type": "user",
"user": "jedyang97"
}
},
{
"_id": "6792f5f6dc641d1a72b00737",
"hidden": false,
"name": "Alexander Sax",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f5f6dc641d1a72b00738",
"hidden": false,
"name": "Kevin J. Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f5f6dc641d1a72b00739",
"hidden": false,
"name": "Mikael Henaff",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f5f6dc641d1a72b0073a",
"hidden": false,
"name": "Hao Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f5f6dc641d1a72b0073b",
"hidden": false,
"name": "Ang Cao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f5f6dc641d1a72b0073c",
"hidden": false,
"name": "Joyce Chai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f5f6dc641d1a72b0073d",
"hidden": false,
"name": "Franziska Meier",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6792f5f6dc641d1a72b0073e",
"hidden": false,
"name": "Matt Feiszli",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-23T18:59:55 |
Fast3R: Towards 3D Reconstruction of 1000+ Images in One Forward Pass
|
Multi-view 3D reconstruction remains a core challenge in computer vision,
particularly in applications requiring accurate and scalable representations
across diverse perspectives. Current leading methods such as DUSt3R employ a
fundamentally pairwise approach, processing images in pairs and necessitating
costly global alignment procedures to reconstruct from multiple views. In this
work, we propose Fast 3D Reconstruction (Fast3R), a novel multi-view
generalization of DUSt3R that achieves efficient and scalable 3D reconstruction
by processing many views in parallel. Fast3R's Transformer-based architecture
forwards N images in a single forward pass, bypassing the need for iterative
alignment. Through extensive experiments on camera pose estimation and 3D
reconstruction, Fast3R demonstrates state-of-the-art performance, with
significant improvements in inference speed and reduced error accumulation.
These results establish Fast3R as a robust alternative for multi-view
applications, offering enhanced scalability without compromising reconstruction
accuracy.
| 17 |
6792f601dc641d1a72b00a2d
| null | null |
|
2025-01-23T01:07:44.434000 |
Pairwise RM: Perform Best-of-N Sampling with Knockout Tournament
| 3 |
{
"_id": "62e25e2247678ea5ce1b1786",
"avatarUrl": "/avatars/1bb32e7597a9b1c89c434cbf550b5382.svg",
"followerCount": 2,
"fullname": "Yantao",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "RicardoL1u",
"type": "user"
}
| false | null |
2501.13007
|
[
{
"_id": "6791d751f29664a338e7b4c5",
"hidden": false,
"name": "Yantao Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791d751f29664a338e7b4c6",
"hidden": false,
"name": "Zijun Yao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791d751f29664a338e7b4c7",
"hidden": false,
"name": "Rui Min",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791d751f29664a338e7b4c8",
"hidden": false,
"name": "Yixin Cao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791d751f29664a338e7b4c9",
"hidden": false,
"name": "Lei Hou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791d751f29664a338e7b4ca",
"hidden": false,
"name": "Juanzi Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:49:20.111Z",
"user": {
"_id": "65df8cbc2705d9672f55d1aa",
"avatarUrl": "/avatars/63e46f15bb76bd9d4508fd0f54f39829.svg",
"fullname": "Juanzi Li",
"isPro": false,
"type": "user",
"user": "juanli"
}
}
] | 2025-01-22T16:49:37 |
Pairwise RM: Perform Best-of-N Sampling with Knockout Tournament
|
Best-of-N (BoN) sampling, a common strategy for test-time scaling of Large
Language Models (LLMs), relies on reward models to select the best candidate
solution from multiple generations. However, traditional reward models often
assign arbitrary and inconsistent scores, limiting their effectiveness. To
address this, we propose a Pairwise Reward Model (Pairwise RM) combined with a
knockout tournament for BoN sampling. Instead of assigning absolute scores,
given one math problem, Pairwise RM evaluates two candidate solutions'
correctness simultaneously. This approach eliminates the need for arbitrary
scoring and enables cross-validation of solutions through parallel comparison.
In the knockout tournament, Pairwise RM conducts pairwise comparisons between
candidate solutions and eliminates the incorrect ones iteratively. We construct
a large-scale dataset of 443K pairwise comparisons derived from NumiaMath,
annotated using gemini-1.5-flash, and train the Pairwise RM via supervised
fine-tuning. Experiments on MATH-500 and the Olympiad Bench demonstrate
significant improvements over traditional discriminative reward models, with a
40% to 60% relative improvement on the top 50% most challenging problems.
| 20 |
6791d752f29664a338e7b4f1
| null | null |
|
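The knockout tournament described above repeatedly pairs candidate solutions and advances the winner of each comparison. A minimal sketch with a toy judge standing in for the trained Pairwise RM (the `prefer` callable and the numeric candidates are illustrative assumptions):

```python
# Hedged sketch of single-elimination best-of-N: pair neighbors each round,
# advance the winner of each pairwise comparison, give byes on odd counts.

def knockout(candidates, prefer):
    """Run knockout rounds; `prefer(a, b)` returns the preferred candidate."""
    pool = list(candidates)
    while len(pool) > 1:
        next_pool = []
        for i in range(0, len(pool) - 1, 2):
            next_pool.append(prefer(pool[i], pool[i + 1]))
        if len(pool) % 2 == 1:  # odd candidate out gets a bye
            next_pool.append(pool[-1])
        pool = next_pool
    return pool[0]

# Toy judge: prefer the candidate closer to a reference answer of 8.
winner = knockout([3, 9, 5, 10], lambda a, b: a if abs(a - 8) <= abs(b - 8) else b)
```

With N candidates this needs only N - 1 pairwise calls, which is what makes the scheme practical when each comparison is an LLM forward pass.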
2025-01-23T00:01:12.490000 |
Test-Time Preference Optimization: On-the-Fly Alignment via Iterative Textual Feedback
| 2 |
{
"_id": "63f3502a520c14618925825a",
"avatarUrl": "/avatars/e986a2a6625e7be6890616a417f908d2.svg",
"followerCount": null,
"fullname": "Yafu Li",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "yaful",
"type": "user"
}
| true | null |
2501.12895
|
[
{
"_id": "6791c63e9e215712a7d4bac8",
"hidden": false,
"name": "Yafu Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:37:06.794Z",
"user": {
"_id": "63f3502a520c14618925825a",
"avatarUrl": "/avatars/e986a2a6625e7be6890616a417f908d2.svg",
"fullname": "Yafu Li",
"isPro": false,
"type": "user",
"user": "yaful"
}
},
{
"_id": "6791c63e9e215712a7d4bac9",
"hidden": false,
"name": "Xuyang Hu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-23T09:56:39.062Z",
"user": {
"_id": "6498fde776d49ee00f79cbfe",
"avatarUrl": "/avatars/4c284a71080150e6cb3b9632dfccef60.svg",
"fullname": "Xuyang Hu",
"isPro": false,
"type": "user",
"user": "huxy912"
}
},
{
"_id": "6791c63e9e215712a7d4baca",
"hidden": false,
"name": "Xiaoye Qu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:37:33.373Z",
"user": {
"_id": "64cb54da1af278541d663708",
"avatarUrl": "/avatars/c44507cc92bb2e83154bad31b90ce6dd.svg",
"fullname": "Xiaoye Qu",
"isPro": false,
"type": "user",
"user": "Xiaoye08"
}
},
{
"_id": "6791c63e9e215712a7d4bacb",
"hidden": false,
"name": "Linjie Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:37:47.571Z",
"user": {
"_id": "63db16fff03c3d71ef397206",
"avatarUrl": "/avatars/bfb7e0d730b7d03302799d5d2828d97d.svg",
"fullname": "Linjie Li",
"isPro": false,
"type": "user",
"user": "linjieli222"
}
},
{
"_id": "6791c63e9e215712a7d4bacc",
"hidden": false,
"name": "Yu Cheng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-23T15:05:05.310Z",
"user": {
"_id": "67017abfe4d49b157ac534d9",
"avatarUrl": "/avatars/997e1b9f54b27a7728a9d4abfee4ba91.svg",
"fullname": "Yu Cheng",
"isPro": false,
"type": "user",
"user": "ych133"
}
}
] | 2025-01-22T14:15:46 |
Test-Time Preference Optimization: On-the-Fly Alignment via Iterative
Textual Feedback
|
Large language models (LLMs) demonstrate impressive performance but lack the
flexibility to adapt to human preferences quickly without retraining. In this
work, we introduce Test-time Preference Optimization (TPO), a framework that
aligns LLM outputs with human preferences during inference, removing the need
to update model parameters. Rather than relying on purely numerical rewards,
TPO translates reward signals into textual critiques and uses them as textual
rewards to iteratively refine its response. Evaluations on benchmarks covering
instruction following, preference alignment, safety, and mathematics reveal
that TPO progressively improves alignment with human preferences. Notably,
after only a few TPO steps, the initially unaligned Llama-3.1-70B-SFT model can
surpass the aligned counterpart, Llama-3.1-70B-Instruct. Furthermore, TPO
scales efficiently with both the search width and depth during inference.
Through case studies, we illustrate how TPO exploits the innate capacity of
LLMs to interpret and act upon reward signals. Our findings establish TPO as a
practical, lightweight alternative for test-time preference optimization,
achieving alignment on the fly. Our code is publicly available at
https://github.com/yafuly/TPO.
| 56 |
6791c6409e215712a7d4bc23
| null | null |
|
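The TPO abstract above describes an inference-time loop: translate reward signals into textual critiques, revise the response, and repeat without touching model weights. A minimal sketch of that control flow, where `critique` and `revise` are stand-ins for the LLM calls TPO would make (all names and the stopping rule are illustrative assumptions):

```python
# Hedged sketch of the test-time preference-optimization loop: critique the
# current response, revise it using the textual feedback, stop when satisfied.

def tpo_loop(response, critique, revise, steps=3):
    """Iteratively refine `response` with textual feedback; no weight updates."""
    for _ in range(steps):
        feedback = critique(response)
        if feedback is None:  # judge is satisfied; stop early
            break
        response = revise(response, feedback)
    return response

# Toy instance: the "critique" asks for more detail until the draft has
# three clauses, and "revise" simply appends the feedback.
result = tpo_loop(
    ["draft"],
    critique=lambda r: "add detail" if len(r) < 3 else None,
    revise=lambda r, fb: r + [fb],
)
```

The abstract's search width and depth would correspond to sampling several revisions per step and running more iterations of this loop.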
2025-01-23T00:00:51.542000 |
VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video Understanding
| 3 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.13106
|
[
{
"_id": "6791cced89c55efdc9fd45e5",
"hidden": false,
"name": "Boqiang Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791cced89c55efdc9fd45e6",
"hidden": false,
"name": "Kehan Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:38:16.478Z",
"user": {
"_id": "66836aba7b50b433cd681182",
"avatarUrl": "/avatars/483a6fef6d27680d19f9ea1789122bdd.svg",
"fullname": "Kehan LI",
"isPro": false,
"type": "user",
"user": "CausalLi"
}
},
{
"_id": "6791cced89c55efdc9fd45e7",
"hidden": false,
"name": "Zesen Cheng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-23T09:17:49.716Z",
"user": {
"_id": "65b2529285b6c21448a10d65",
"avatarUrl": "/avatars/1b09e2742aecce1bbdc57f0c4504cf38.svg",
"fullname": "Zesen Cheng",
"isPro": false,
"type": "user",
"user": "ClownRat"
}
},
{
"_id": "6791cced89c55efdc9fd45e8",
"hidden": false,
"name": "Zhiqiang Hu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-23T09:17:47.840Z",
"user": {
"_id": "637f228152229c63921119c3",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/637f228152229c63921119c3/acwXorra1r9_7i3KlBFjS.jpeg",
"fullname": "Zhiqiang Hu",
"isPro": false,
"type": "user",
"user": "Zhiqiang007"
}
},
{
"_id": "6791cced89c55efdc9fd45e9",
"hidden": false,
"name": "Yuqian Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791cced89c55efdc9fd45ea",
"hidden": false,
"name": "Guanzheng Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:38:44.522Z",
"user": {
"_id": "645475e2548f22be59847604",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/645475e2548f22be59847604/EhSurrZ25u31qQ2TVXQXt.jpeg",
"fullname": "Chen",
"isPro": false,
"type": "user",
"user": "Guanzheng"
}
},
{
"_id": "6791cced89c55efdc9fd45eb",
"hidden": false,
"name": "Sicong Leng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:38:54.047Z",
"user": {
"_id": "609115c79a8bcaa437b234a9",
"avatarUrl": "/avatars/1631a91030703d8397133363cf82c863.svg",
"fullname": "Leng Sicong",
"isPro": true,
"type": "user",
"user": "Sicong"
}
},
{
"_id": "6791cced89c55efdc9fd45ec",
"hidden": false,
"name": "Yuming Jiang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:39:08.190Z",
"user": {
"_id": "629c95b7a5d6f5fe10e6ed45",
"avatarUrl": "/avatars/c9a5f0e7fae3a079d26db266e9ff90a3.svg",
"fullname": "Yuming Jiang",
"isPro": false,
"type": "user",
"user": "yumingj"
}
},
{
"_id": "6791cced89c55efdc9fd45ed",
"hidden": false,
"name": "Hang Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791cced89c55efdc9fd45ee",
"hidden": false,
"name": "Xin Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T09:58:35.000Z",
"user": {
"_id": "63913b120cf6b11c487ca31d",
"avatarUrl": "/avatars/aec44edd5470dd6e767e0a25efd6fb5d.svg",
"fullname": "Xin Li",
"isPro": true,
"type": "user",
"user": "lixin4ever"
}
},
{
"_id": "6791cced89c55efdc9fd45ef",
"hidden": false,
"name": "Peng Jin",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-23T15:05:03.480Z",
"user": {
"_id": "651585ea18cfed0d30bee586",
"avatarUrl": "/avatars/579e468334102472d870875fe40302e6.svg",
"fullname": "Peng Jin",
"isPro": false,
"type": "user",
"user": "Chat-UniVi"
}
},
{
"_id": "6791cced89c55efdc9fd45f0",
"hidden": false,
"name": "Wenqi Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791cced89c55efdc9fd45f1",
"hidden": false,
"name": "Fan Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791cced89c55efdc9fd45f2",
"hidden": false,
"name": "Lidong Bing",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:39:36.775Z",
"user": {
"_id": "6454685a548f22be598414c4",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/eMjMWKJ-AouF7eY1-RzGF.jpeg",
"fullname": "Lidong Bing",
"isPro": false,
"type": "user",
"user": "LidongBing"
}
},
{
"_id": "6791cced89c55efdc9fd45f3",
"hidden": false,
"name": "Deli Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-22T18:59:46 |
VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video
Understanding
|
In this paper, we propose VideoLLaMA3, a more advanced multimodal foundation
model for image and video understanding. The core design philosophy of
VideoLLaMA3 is vision-centric. The meaning of "vision-centric" is two-fold: the
vision-centric training paradigm and vision-centric framework design. The key
insight of our vision-centric training paradigm is that high-quality image-text
data is crucial for both image and video understanding. Instead of preparing
massive video-text datasets, we focus on constructing large-scale and
high-quality image-text datasets. VideoLLaMA3 has four training stages: 1)
vision-centric alignment stage, which warms up the vision encoder and
projector; 2) vision-language pretraining stage, which jointly tunes the vision
encoder, projector, and LLM with large-scale image-text data covering multiple
types (including scene images, documents, and charts) as well as text-only
data; 3) multi-task fine-tuning stage, which incorporates image-text SFT data
for downstream tasks and video-text data to establish a foundation for video
understanding; and 4) video-centric fine-tuning stage, which further improves
the model's capability in video understanding. As for the framework design, to
better
capture fine-grained details in images, the pretrained vision encoder is
adapted to encode images of varying sizes into vision tokens with corresponding
numbers, rather than a fixed number of tokens. For video inputs, we reduce the
number of vision tokens according to their similarity so that the
representation of videos will be more precise and compact. Benefiting from
these vision-centric designs, VideoLLaMA3 achieves compelling performance on
both image and video understanding benchmarks.
| 84 |
6791ccef89c55efdc9fd46a9
| null | null |
|
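The VideoLLaMA3 abstract above reduces the number of video vision tokens according to their similarity. A toy sketch of one such reduction rule, dropping a frame token when it is nearly identical to the last kept one (cosine similarity and the 0.99 threshold are illustrative choices, not the paper's exact criterion):

```python
# Hedged sketch of similarity-based token pruning for video inputs.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def prune_tokens(tokens, threshold=0.99):
    """Keep a token only if it differs enough from the last kept token."""
    kept = [tokens[0]]
    for tok in tokens[1:]:
        if cosine(kept[-1], tok) < threshold:
            kept.append(tok)
    return kept

# Three near-duplicate "frames" collapse into one representative token.
frames = [[1.0, 0.0], [1.0, 0.001], [0.999, 0.002], [0.0, 1.0]]
compact = prune_tokens(frames)
```

The payoff is that long stretches of static video contribute few tokens, keeping the sequence fed to the LLM compact without losing scene changes.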
2025-01-22T23:28:09.297000 |
IntellAgent: A Multi-Agent Framework for Evaluating Conversational AI Systems
| 2 |
{
"_id": "635d618494e5b275ca73b844",
"avatarUrl": "/avatars/8cdaac6591a12b252612b99094e00959.svg",
"followerCount": 1,
"fullname": "Levi",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Eladlev",
"type": "user"
}
| true | null |
2501.11067
|
[
{
"_id": "679191d8400d620e9c3e5eef",
"hidden": false,
"name": "Elad Levi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:49:58.917Z",
"user": {
"_id": "635d618494e5b275ca73b844",
"avatarUrl": "/avatars/8cdaac6591a12b252612b99094e00959.svg",
"fullname": "Levi",
"isPro": false,
"type": "user",
"user": "Eladlev"
}
},
{
"_id": "679191d8400d620e9c3e5ef0",
"hidden": false,
"name": "Ilan Kadar",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:50:10.342Z",
"user": {
"_id": "63ffa9c7e7767a895339ad5d",
"avatarUrl": "/avatars/479c605201e398abfd76f495ff448cd6.svg",
"fullname": "Kadar",
"isPro": false,
"type": "user",
"user": "Ilankad23"
}
}
] | 2025-01-19T14:58:35 |
IntellAgent: A Multi-Agent Framework for Evaluating Conversational AI
Systems
|
Large Language Models (LLMs) are transforming artificial intelligence,
evolving into task-oriented systems capable of autonomous planning and
execution. One of the primary applications of LLMs is conversational AI
systems, which must navigate multi-turn dialogues, integrate domain-specific
APIs, and adhere to strict policy constraints. However, evaluating these agents
remains a significant challenge, as traditional methods fail to capture the
complexity and variability of real-world interactions. We introduce
IntellAgent, a scalable, open-source multi-agent framework designed to evaluate
conversational AI systems comprehensively. IntellAgent automates the creation
of diverse, synthetic benchmarks by combining policy-driven graph modeling,
realistic event generation, and interactive user-agent simulations. This
innovative approach provides fine-grained diagnostics, addressing the
limitations of static and manually curated benchmarks with coarse-grained
metrics. IntellAgent represents a paradigm shift in evaluating conversational
AI. By simulating realistic, multi-policy scenarios across varying levels of
complexity, IntellAgent captures the nuanced interplay of agent capabilities
and policy constraints. Unlike traditional methods, it employs a graph-based
policy model to represent relationships, likelihoods, and complexities of
policy interactions, enabling highly detailed diagnostics. IntellAgent also
identifies critical performance gaps, offering actionable insights for targeted
optimization. Its modular, open-source design supports seamless integration of
new domains, policies, and APIs, fostering reproducibility and community
collaboration. Our findings demonstrate that IntellAgent serves as an effective
framework for advancing conversational AI by addressing challenges in bridging
research and deployment. The framework is available at
https://github.com/plurai-ai/intellagent
| 13 |
679191d9400d620e9c3e5f26
| null | null |
|
2025-01-22T22:53:01.771000 |
Autonomy-of-Experts Models
| 5 |
{
"_id": "64b8ca3c5067873176d4b436",
"avatarUrl": "/avatars/b659d147b2454b47c9a7e89bbed525fc.svg",
"followerCount": 6,
"fullname": "AngLv",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "AngLv",
"type": "user"
}
| true | null |
2501.13074
|
[
{
"_id": "6791baa69d6eba7f285d5dce",
"hidden": false,
"name": "Ang Lv",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:39:56.484Z",
"user": {
"_id": "64b8ca3c5067873176d4b436",
"avatarUrl": "/avatars/b659d147b2454b47c9a7e89bbed525fc.svg",
"fullname": "AngLv",
"isPro": false,
"type": "user",
"user": "AngLv"
}
},
{
"_id": "6791baa69d6eba7f285d5dcf",
"hidden": false,
"name": "Ruobing Xie",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:40:02.493Z",
"user": {
"_id": "6622443b9b0614a760dd8123",
"avatarUrl": "/avatars/acb6c1c9c429af1112530dcf76a8e420.svg",
"fullname": "Ruobing Xie",
"isPro": false,
"type": "user",
"user": "Ruobing-Xie"
}
},
{
"_id": "6791baa69d6eba7f285d5dd0",
"hidden": false,
"name": "Yining Qian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791baa69d6eba7f285d5dd1",
"hidden": false,
"name": "Songhao Wu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:40:22.474Z",
"user": {
"_id": "662aa42f4eaa187e4cf6827b",
"avatarUrl": "/avatars/17139f0b6e8092cf4c135028db03a7ff.svg",
"fullname": "Songhao Wu",
"isPro": false,
"type": "user",
"user": "shwu"
}
},
{
"_id": "6791baa69d6eba7f285d5dd2",
"hidden": false,
"name": "Xingwu Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791baa69d6eba7f285d5dd3",
"hidden": false,
"name": "Zhanhui Kang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:40:42.332Z",
"user": {
"_id": "6728117b71b9baba45c25c35",
"avatarUrl": "/avatars/5e4603f00a426c6c41e3e6fef5fa0362.svg",
"fullname": "zhanhui kang",
"isPro": false,
"type": "user",
"user": "kangzhanhui"
}
},
{
"_id": "6791baa69d6eba7f285d5dd4",
"hidden": false,
"name": "Di Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791baa69d6eba7f285d5dd5",
"hidden": false,
"name": "Rui Yan",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-22T18:37:08 |
Autonomy-of-Experts Models
|
Mixture-of-Experts (MoE) models mostly use a router to assign tokens to
specific expert modules, activating only partial parameters and often
outperforming dense models. We argue that the separation between the router's
decision-making and the experts' execution is a critical yet overlooked issue,
leading to suboptimal expert selection and ineffective learning. To address
this, we propose Autonomy-of-Experts (AoE), a novel MoE paradigm in which
experts autonomously select themselves to process inputs. AoE is based on the
insight that an expert is aware of its own capacity to effectively process a
token, an awareness reflected in the scale of its internal activations. In AoE,
routers are removed; instead, experts pre-compute internal activations for
inputs and are ranked based on their activation norms. Only the top-ranking
experts proceed with the forward pass, while the others abort. The overhead of
pre-computing activations is reduced through a low-rank weight factorization.
This self-evaluating-then-partner-comparing approach ensures improved expert
selection and effective learning. We pre-train language models with 700M to 4B
parameters, demonstrating that AoE outperforms traditional MoE models
with comparable efficiency.
| 41 |
6791baa79d6eba7f285d5f35
| null | null |
|
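The AoE abstract above removes the router: each expert pre-computes a low-rank internal activation, and only the top-k experts by activation norm proceed with the forward pass. A minimal sketch of that selection step (the projection matrices, k, and function names are illustrative assumptions; real experts would be neural modules, not single matrices):

```python
# Hedged sketch of router-free expert selection: rank experts by the L2 norm
# of their (low-rank) pre-computed activation and keep the top-k.
import math

def select_experts(x, expert_down_projs, k=2):
    """Return indices of the k experts with the largest activation norm."""
    norms = []
    for idx, W in enumerate(expert_down_projs):
        # Cheap pre-activation: one matrix-vector product per expert.
        h = [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]
        norms.append((math.sqrt(sum(v * v for v in h)), idx))
    norms.sort(reverse=True)
    return [idx for _, idx in norms[:k]]

# Expert 1's projection amplifies this input the most, so it ranks first.
chosen = select_experts(
    [1.0, 2.0],
    [[[0.1, 0.0]], [[0.0, 3.0]], [[0.5, 0.5]]],
)
```

The low-rank factorization the abstract mentions is what keeps this per-expert pre-computation cheap enough that all experts can score every token before only the winners run in full.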
2025-01-22T22:27:48.680000 |
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
| 5 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.12948
|
[
{
"_id": "6791b70a76d05e183a411598",
"hidden": false,
"name": "DeepSeek-AI",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411599",
"hidden": false,
"name": "Daya Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:23:30.293Z",
"user": {
"_id": "653df20eaa1f487614da4db1",
"avatarUrl": "/avatars/12b27ce2c59f53b7e464039deab36a5d.svg",
"fullname": "Daya Guo",
"isPro": false,
"type": "user",
"user": "guoday"
}
},
{
"_id": "6791b70a76d05e183a41159a",
"hidden": false,
"name": "Dejian Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:23:47.465Z",
"user": {
"_id": "6225bb44c6e650de3a65dbaa",
"avatarUrl": "/avatars/99c99ced2461978df572c27c1b3a4904.svg",
"fullname": "DejianYang",
"isPro": false,
"type": "user",
"user": "DejianYang"
}
},
{
"_id": "6791b70a76d05e183a41159b",
"hidden": true,
"name": "Haowei Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41159c",
"hidden": false,
"name": "Junxiao Song",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:24:07.882Z",
"user": {
"_id": "6565a2dd131d13ccc5d8cb12",
"avatarUrl": "/avatars/f5c5441ba74791b64c9740911f952bac.svg",
"fullname": "Junxiao Song",
"isPro": false,
"type": "user",
"user": "haha-point"
}
},
{
"_id": "6791b70a76d05e183a41159d",
"hidden": false,
"name": "Ruoyu Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41159e",
"hidden": false,
"name": "Runxin Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:24:44.711Z",
"user": {
"_id": "672ddc3bf5257413d3f461a0",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/8NjMAcCHZeVWoOsUCjsto.png",
"fullname": "XuRunXin",
"isPro": false,
"type": "user",
"user": "AS-7"
}
},
{
"_id": "6791b70a76d05e183a41159f",
"hidden": false,
"name": "Qihao Zhu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:25:09.091Z",
"user": {
"_id": "63cd76b4374057a338e8e703",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63cd76b4374057a338e8e703/i4Qk5-0aYx3oRhC8b50aJ.jpeg",
"fullname": "zhuqihao",
"isPro": false,
"type": "user",
"user": "zqh11"
}
},
{
"_id": "6791b70a76d05e183a4115a0",
"hidden": false,
"name": "Shirong Ma",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:25:35.617Z",
"user": {
"_id": "6482e57a04f67f5f6056a61b",
"avatarUrl": "/avatars/b26faf19ba1493b91102ac7978ab3230.svg",
"fullname": "Shirong Ma",
"isPro": false,
"type": "user",
"user": "msr2000"
}
},
{
"_id": "6791b70a76d05e183a4115a1",
"hidden": false,
"name": "Peiyi Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:25:43.509Z",
"user": {
"_id": "656873f33fd0bf1f82558695",
"avatarUrl": "/avatars/7a085da2e2a91d7f41988501a573ebf9.svg",
"fullname": "PEIYI, WANG",
"isPro": false,
"type": "user",
"user": "peiyiwang89"
}
},
{
"_id": "6791b70a76d05e183a4115a2",
"hidden": false,
"name": "Xiao Bi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115a3",
"hidden": false,
"name": "Xiaokang Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115a4",
"hidden": false,
"name": "Xingkai Yu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:26:17.837Z",
"user": {
"_id": "6475b14a37ed88a749e5e48c",
"avatarUrl": "/avatars/973f4662023f0bbbc94c01dc3bb3edd3.svg",
"fullname": "Xingkai Yu",
"isPro": false,
"type": "user",
"user": "GeeeekExplorer"
}
},
{
"_id": "6791b70a76d05e183a4115a5",
"hidden": false,
"name": "Yu Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115a6",
"hidden": false,
"name": "Z. F. Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115a7",
"hidden": false,
"name": "Zhibin Gou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:26:37.876Z",
"user": {
"_id": "62dcf5d4169bd1d2ef2ca724",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62dcf5d4169bd1d2ef2ca724/oRFFmJDJTLYtPRVPCweQ_.jpeg",
"fullname": "Zhibin Gou",
"isPro": false,
"type": "user",
"user": "zubingou"
}
},
{
"_id": "6791b70a76d05e183a4115a8",
"hidden": false,
"name": "Zhihong Shao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:26:51.359Z",
"user": {
"_id": "65db64f8b62d242ed8711701",
"avatarUrl": "/avatars/753e9f980eb6786c6b53b2f1becbf745.svg",
"fullname": "Zhihong Shao",
"isPro": false,
"type": "user",
"user": "ZhihongShao"
}
},
{
"_id": "6791b70a76d05e183a4115a9",
"hidden": false,
"name": "Zhuoshu Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115aa",
"hidden": false,
"name": "Ziyi Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115ab",
"hidden": false,
"name": "Aixin Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115ac",
"hidden": false,
"name": "Bing Xue",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115ad",
"hidden": false,
"name": "Bingxuan Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:27:32.553Z",
"user": {
"_id": "6523d81d56fe05f216a559f6",
"avatarUrl": "/avatars/07fcf56b5b8a0b64c31bdfe8fbf41cc6.svg",
"fullname": "Bingxuan Wang",
"isPro": false,
"type": "user",
"user": "YellowDoge"
}
},
{
"_id": "6791b70a76d05e183a4115ae",
"hidden": false,
"name": "Bochao Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115af",
"hidden": false,
"name": "Bei Feng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115b0",
"hidden": false,
"name": "Chengda Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115b1",
"hidden": false,
"name": "Chenggang Zhao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:28:10.434Z",
"user": {
"_id": "66053b1f9e3555d648b21c3d",
"avatarUrl": "/avatars/c8b33e7f702c4edb17add47f0eafe5e6.svg",
"fullname": "Chenggang Zhao",
"isPro": false,
"type": "user",
"user": "LyricZ"
}
},
{
"_id": "6791b70a76d05e183a4115b2",
"hidden": false,
"name": "Chengqi Deng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115b3",
"hidden": false,
"name": "Chenyu Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115b4",
"hidden": false,
"name": "Chong Ruan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:28:32.927Z",
"user": {
"_id": "6398203609f12714ed1935c2",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6398203609f12714ed1935c2/uXgl0LgKnFYjq1Wz39-a6.jpeg",
"fullname": "Chong Ruan",
"isPro": false,
"type": "user",
"user": "Chester111"
}
},
{
"_id": "6791b70a76d05e183a4115b5",
"hidden": false,
"name": "Damai Dai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:28:49.991Z",
"user": {
"_id": "659389f8de82e1ef7b9a8b13",
"avatarUrl": "/avatars/896ed9f4cdbd317493b303d070b7e12a.svg",
"fullname": "Damai Dai",
"isPro": false,
"type": "user",
"user": "DeepSeekDDM"
}
},
{
"_id": "6791b70a76d05e183a4115b6",
"hidden": false,
"name": "Deli Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115b7",
"hidden": false,
"name": "Dongjie Ji",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:29:12.901Z",
"user": {
"_id": "65fce397fdc5e8ee7d07ee3b",
"avatarUrl": "/avatars/35edef17ecce02939df1e7fdd19b87c8.svg",
"fullname": "Dong Jiejie",
"isPro": false,
"type": "user",
"user": "Dj12138"
}
},
{
"_id": "6791b70a76d05e183a4115b8",
"hidden": false,
"name": "Erhang Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115b9",
"hidden": false,
"name": "Fangyun Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115ba",
"hidden": false,
"name": "Fucong Dai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115bb",
"hidden": false,
"name": "Fuli Luo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:29:55.378Z",
"user": {
"_id": "6538815d1bdb3c40db94fbfa",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6538815d1bdb3c40db94fbfa/id7aSY8JUgKK2agKWLERt.jpeg",
"fullname": "Fuli Luo",
"isPro": false,
"type": "user",
"user": "luofuli"
}
},
{
"_id": "6791b70a76d05e183a4115bc",
"hidden": false,
"name": "Guangbo Hao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115bd",
"hidden": false,
"name": "Guanting Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115be",
"hidden": false,
"name": "Guowei Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115bf",
"hidden": false,
"name": "H. Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115c0",
"hidden": false,
"name": "Han Bao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115c1",
"hidden": false,
"name": "Hanwei Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115c2",
"hidden": false,
"name": "Haocheng Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115c3",
"hidden": false,
"name": "Honghui Ding",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:30:55.666Z",
"user": {
"_id": "65a5d3d203ed327234be0d3e",
"avatarUrl": "/avatars/7a579207214741c4374c3051c7a1f19f.svg",
"fullname": "Honghui Ding",
"isPro": false,
"type": "user",
"user": "honghuiding"
}
},
{
"_id": "6791b70a76d05e183a4115c4",
"hidden": false,
"name": "Huajian Xin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:31:02.989Z",
"user": {
"_id": "6532a060a78e70d19c669103",
"avatarUrl": "/avatars/3cc9309b0e31da0fb83f1c3ef87dbe9f.svg",
"fullname": "HuajianXin",
"isPro": false,
"type": "user",
"user": "HuajianXin"
}
},
{
"_id": "6791b70a76d05e183a4115c5",
"hidden": false,
"name": "Huazuo Gao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:31:09.943Z",
"user": {
"_id": "64e370be59aa5366642ac329",
"avatarUrl": "/avatars/0fa1eb6ac6c1aeff3e65bc86a6617f64.svg",
"fullname": "Huazuo Gao",
"isPro": false,
"type": "user",
"user": "gaohuazuo"
}
},
{
"_id": "6791b70a76d05e183a4115c6",
"hidden": false,
"name": "Hui Qu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115c7",
"hidden": false,
"name": "Hui Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115c8",
"hidden": false,
"name": "Jianzhong Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115c9",
"hidden": false,
"name": "Jiashi Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:31:28.452Z",
"user": {
"_id": "64fca5f28d50404bc42ca78a",
"avatarUrl": "/avatars/ae01ac0296d6ce1277dacb6894f570b8.svg",
"fullname": "Jiashi Li",
"isPro": false,
"type": "user",
"user": "Beginlner"
}
},
{
"_id": "6791b70a76d05e183a4115ca",
"hidden": false,
"name": "Jiawei Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115cb",
"hidden": false,
"name": "Jingchang Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115cc",
"hidden": false,
"name": "Jingyang Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115cd",
"hidden": false,
"name": "Junjie Qiu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115ce",
"hidden": false,
"name": "Junlong Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:32:44.002Z",
"user": {
"_id": "621e40ac944c7e36aaec2369",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/621e40ac944c7e36aaec2369/Yj-FJRWps3rvsS_B2bnKo.jpeg",
"fullname": "Junlong Li",
"isPro": false,
"type": "user",
"user": "lockon"
}
},
{
"_id": "6791b70a76d05e183a4115cf",
"hidden": false,
"name": "J. L. Cai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115d0",
"hidden": false,
"name": "Jiaqi Ni",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115d1",
"hidden": false,
"name": "Jian Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115d2",
"hidden": false,
"name": "Jin Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115d3",
"hidden": false,
"name": "Kai Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115d4",
"hidden": false,
"name": "Kai Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115d5",
"hidden": false,
"name": "Kaige Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115d6",
"hidden": false,
"name": "Kang Guan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115d7",
"hidden": false,
"name": "Kexin Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115d8",
"hidden": false,
"name": "Kuai Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115d9",
"hidden": false,
"name": "Lean Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115da",
"hidden": false,
"name": "Lecong Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115db",
"hidden": false,
"name": "Liang Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115dc",
"hidden": false,
"name": "Litong Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115dd",
"hidden": false,
"name": "Liyue Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:32:51.322Z",
"user": {
"_id": "67367647517b82b436d74930",
"avatarUrl": "/avatars/34c1f894a3da9f38816d0b30bfdc6d50.svg",
"fullname": "Liyue Zhang",
"isPro": false,
"type": "user",
"user": "Lyriccc"
}
},
{
"_id": "6791b70a76d05e183a4115de",
"hidden": false,
"name": "Lei Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115df",
"hidden": false,
"name": "Leyi Xia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115e0",
"hidden": false,
"name": "Mingchuan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115e1",
"hidden": false,
"name": "Minghua Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115e2",
"hidden": false,
"name": "Minghui Tang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:33:01.949Z",
"user": {
"_id": "64ad1b6e616d3eb36149d38d",
"avatarUrl": "/avatars/54d0d104ba430f65c04b5259a6423940.svg",
"fullname": "Minghui Tang",
"isPro": false,
"type": "user",
"user": "weicfd"
}
},
{
"_id": "6791b70a76d05e183a4115e3",
"hidden": false,
"name": "Meng Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115e4",
"hidden": false,
"name": "Miaojun Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115e5",
"hidden": false,
"name": "Mingming Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115e6",
"hidden": false,
"name": "Ning Tian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115e7",
"hidden": false,
"name": "Panpan Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115e8",
"hidden": false,
"name": "Peng Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115e9",
"hidden": false,
"name": "Qiancheng Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115ea",
"hidden": false,
"name": "Qinyu Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115eb",
"hidden": false,
"name": "Qiushi Du",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115ec",
"hidden": false,
"name": "Ruiqi Ge",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115ed",
"hidden": false,
"name": "Ruisong Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115ee",
"hidden": false,
"name": "Ruizhe Pan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115ef",
"hidden": false,
"name": "Runji Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115f0",
"hidden": false,
"name": "R. J. Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115f1",
"hidden": false,
"name": "R. L. Jin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115f2",
"hidden": false,
"name": "Ruyi Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115f3",
"hidden": false,
"name": "Shanghao Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115f4",
"hidden": false,
"name": "Shangyan Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115f5",
"hidden": false,
"name": "Shanhuang Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115f6",
"hidden": false,
"name": "Shengfeng Ye",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115f7",
"hidden": false,
"name": "Shiyu Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115f8",
"hidden": false,
"name": "Shuiping Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115f9",
"hidden": false,
"name": "Shunfeng Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115fa",
"hidden": false,
"name": "Shuting Pan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115fb",
"hidden": false,
"name": "S. S. Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115fc",
"hidden": false,
"name": "Shuang Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115fd",
"hidden": false,
"name": "Shaoqing Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115fe",
"hidden": false,
"name": "Shengfeng Ye",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a4115ff",
"hidden": false,
"name": "Tao Yun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411600",
"hidden": false,
"name": "Tian Pei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411601",
"hidden": false,
"name": "Tianyu Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411602",
"hidden": false,
"name": "T. Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411603",
"hidden": false,
"name": "Wangding Zeng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411604",
"hidden": false,
"name": "Wanjia Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411605",
"hidden": false,
"name": "Wen Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-05T10:15:38.719Z",
"user": {
"_id": "63198e8802fb322037332f2d",
"avatarUrl": "/avatars/d3f9c206e387df35beb0ed0ef1cdf865.svg",
"fullname": "Wen Liu",
"isPro": false,
"type": "user",
"user": "doubility123"
}
},
{
"_id": "6791b70a76d05e183a411606",
"hidden": false,
"name": "Wenfeng Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411607",
"hidden": false,
"name": "Wenjun Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411608",
"hidden": false,
"name": "Wenqin Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411609",
"hidden": false,
"name": "Wentao Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41160a",
"hidden": false,
"name": "W. L. Xiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41160b",
"hidden": false,
"name": "Wei An",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41160c",
"hidden": false,
"name": "Xiaodong Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41160d",
"hidden": false,
"name": "Xiaohan Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41160e",
"hidden": false,
"name": "Xiaokang Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-05T15:55:47.839Z",
"user": {
"_id": "6635e701420baf7bc3f93561",
"avatarUrl": "/avatars/cdbb3085fee73ac520888977e2c575ea.svg",
"fullname": "Xiaokang Chen",
"isPro": false,
"type": "user",
"user": "CharlesCXK"
}
},
{
"_id": "6791b70a76d05e183a41160f",
"hidden": false,
"name": "Xiaotao Nie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411610",
"hidden": false,
"name": "Xin Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411611",
"hidden": false,
"name": "Xin Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411612",
"hidden": false,
"name": "Xin Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411613",
"hidden": false,
"name": "Xingchao Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411614",
"hidden": false,
"name": "Xinyu Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411615",
"hidden": false,
"name": "Xinyuan Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411616",
"hidden": false,
"name": "Xuecheng Su",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411617",
"hidden": false,
"name": "Xuheng Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411618",
"hidden": false,
"name": "X. Q. Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411619",
"hidden": false,
"name": "Xiangyue Jin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41161a",
"hidden": false,
"name": "Xiaojin Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41161b",
"hidden": false,
"name": "Xiaosha Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41161c",
"hidden": false,
"name": "Xiaowen Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41161d",
"hidden": false,
"name": "Xiaoxiang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41161e",
"hidden": false,
"name": "Xinnan Song",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41161f",
"hidden": false,
"name": "Xinyi Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411620",
"hidden": false,
"name": "Xianzu Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411621",
"hidden": false,
"name": "Xinxia Shan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411622",
"hidden": false,
"name": "Y. K. Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411623",
"hidden": false,
"name": "Y. Q. Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411624",
"hidden": false,
"name": "Y. X. Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411625",
"hidden": false,
"name": "Yang Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411626",
"hidden": false,
"name": "Yanhong Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411627",
"hidden": false,
"name": "Yao Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411628",
"hidden": false,
"name": "Yao Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411629",
"hidden": false,
"name": "Yaofeng Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41162a",
"hidden": false,
"name": "Yaohui Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41162b",
"hidden": false,
"name": "Yi Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41162c",
"hidden": false,
"name": "Yichao Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41162d",
"hidden": false,
"name": "Yifan Shi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41162e",
"hidden": false,
"name": "Yiliang Xiong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41162f",
"hidden": false,
"name": "Ying He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411630",
"hidden": false,
"name": "Yishi Piao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411631",
"hidden": false,
"name": "Yisong Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411632",
"hidden": false,
"name": "Yixuan Tan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411633",
"hidden": false,
"name": "Yiyang Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411634",
"hidden": false,
"name": "Yiyuan Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411635",
"hidden": false,
"name": "Yongqiang Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411636",
"hidden": false,
"name": "Yuan Ou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411637",
"hidden": false,
"name": "Yuduan Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411638",
"hidden": false,
"name": "Yue Gong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411639",
"hidden": false,
"name": "Yuheng Zou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41163a",
"hidden": false,
"name": "Yujia He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41163b",
"hidden": false,
"name": "Yunfan Xiong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41163c",
"hidden": false,
"name": "Yuxiang Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41163d",
"hidden": false,
"name": "Yuxiang You",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41163e",
"hidden": false,
"name": "Yuxuan Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41163f",
"hidden": false,
"name": "Yuyang Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411640",
"hidden": false,
"name": "Y. X. Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411641",
"hidden": false,
"name": "Yanhong Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411642",
"hidden": false,
"name": "Yanping Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411643",
"hidden": false,
"name": "Yaohui Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411644",
"hidden": false,
"name": "Yi Zheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411645",
"hidden": false,
"name": "Yuchen Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411646",
"hidden": false,
"name": "Yunxian Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411647",
"hidden": false,
"name": "Ying Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411648",
"hidden": false,
"name": "Yukun Zha",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411649",
"hidden": false,
"name": "Yuting Yan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41164a",
"hidden": false,
"name": "Z. Z. Ren",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41164b",
"hidden": false,
"name": "Zehui Ren",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41164c",
"hidden": false,
"name": "Zhangli Sha",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41164d",
"hidden": false,
"name": "Zhe Fu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41164e",
"hidden": false,
"name": "Zhean Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41164f",
"hidden": false,
"name": "Zhenda Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411650",
"hidden": false,
"name": "Zhengyan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411651",
"hidden": false,
"name": "Zhewen Hao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411652",
"hidden": false,
"name": "Zhicheng Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411653",
"hidden": false,
"name": "Zhigang Yan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411654",
"hidden": false,
"name": "Zhiyu Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411655",
"hidden": false,
"name": "Zihui Gu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411656",
"hidden": false,
"name": "Zijia Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411657",
"hidden": false,
"name": "Zijun Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-27T10:43:24.441Z",
"user": {
"_id": "6468c76bff18750165a64df3",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6468c76bff18750165a64df3/dHhE62SHOSJZjyU60vgh7.jpeg",
"fullname": "Zijun Liu",
"isPro": false,
"type": "user",
"user": "BBQGOD"
}
},
{
"_id": "6791b70a76d05e183a411658",
"hidden": false,
"name": "Zilin Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a411659",
"hidden": false,
"name": "Ziwei Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41165a",
"hidden": false,
"name": "Ziyang Song",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41165b",
"hidden": false,
"name": "Zizheng Pan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41165c",
"hidden": false,
"name": "Zhen Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41165d",
"hidden": false,
"name": "Zhipeng Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41165e",
"hidden": false,
"name": "Zhongyu Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b70a76d05e183a41165f",
"hidden": false,
"name": "Zhen Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-22T15:19:35 |
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via
Reinforcement Learning
|
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and
DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement
learning (RL) without supervised fine-tuning (SFT) as a preliminary step,
demonstrates remarkable reasoning capabilities. Through RL, DeepSeek-R1-Zero
naturally emerges with numerous powerful and intriguing reasoning behaviors.
However, it encounters challenges such as poor readability and language
mixing. To address these issues and further enhance reasoning performance, we
introduce DeepSeek-R1, which incorporates multi-stage training and cold-start
data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1-1217
on reasoning tasks. To support the research community, we open-source
DeepSeek-R1-Zero, DeepSeek-R1, and six dense models (1.5B, 7B, 8B, 14B, 32B,
70B) distilled from DeepSeek-R1 based on Qwen and Llama.
| 338 |
6791b70c76d05e183a4116bf
| null | null |
|
2025-01-22T22:25:11.596000 |
FilmAgent: A Multi-Agent Framework for End-to-End Film Automation in Virtual 3D Spaces
| 3 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| true | null |
2501.12909
|
[
{
"_id": "6791b688f0ecdeb1a89e35d2",
"hidden": false,
"name": "Zhenran Xu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-23T09:17:51.418Z",
"user": {
"_id": "639c379cdb7c5f35004066cb",
"avatarUrl": "/avatars/3e435506ee85aa7d2d0ec2174a07462f.svg",
"fullname": "Zhenran Xu",
"isPro": false,
"type": "user",
"user": "imryanxu"
}
},
{
"_id": "6791b688f0ecdeb1a89e35d3",
"hidden": false,
"name": "Longyue Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:35:44.815Z",
"user": {
"_id": "636b030c328133bdb3a523bc",
"avatarUrl": "/avatars/15d5d5403fef2f1368bb4185b199061d.svg",
"fullname": "Longyue Wang",
"isPro": false,
"type": "user",
"user": "longyuewang"
}
},
{
"_id": "6791b688f0ecdeb1a89e35d4",
"hidden": false,
"name": "Jifang Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:35:50.208Z",
"user": {
"_id": "670fcb3ffe84ac0ce43a8507",
"avatarUrl": "/avatars/6c6e7cf19b0b71f32e7dae8f1521976b.svg",
"fullname": "Jifang Wang",
"isPro": false,
"type": "user",
"user": "PigCatchingExpert"
}
},
{
"_id": "6791b688f0ecdeb1a89e35d5",
"hidden": false,
"name": "Zhouyi Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:36:08.559Z",
"user": {
"_id": "6464484a4b34d49ac754c8ba",
"avatarUrl": "/avatars/503af8204a70abd3d4c0e30f5590f608.svg",
"fullname": "Li-Zhouyi",
"isPro": false,
"type": "user",
"user": "Li-Zhouyi"
}
},
{
"_id": "6791b688f0ecdeb1a89e35d6",
"hidden": false,
"name": "Senbao Shi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b688f0ecdeb1a89e35d7",
"hidden": false,
"name": "Xue Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b688f0ecdeb1a89e35d8",
"hidden": false,
"name": "Yiyu Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b688f0ecdeb1a89e35d9",
"hidden": false,
"name": "Baotian Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b688f0ecdeb1a89e35da",
"hidden": false,
"name": "Jun Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b688f0ecdeb1a89e35db",
"hidden": false,
"name": "Min Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-22T14:36:30 |
FilmAgent: A Multi-Agent Framework for End-to-End Film Automation in
Virtual 3D Spaces
|
Virtual film production requires intricate decision-making processes,
including scriptwriting, virtual cinematography, and precise actor positioning
and actions. Motivated by recent advances in automated decision-making with
language agent-based societies, this paper introduces FilmAgent, a novel
LLM-based multi-agent collaborative framework for end-to-end film automation in
our constructed 3D virtual spaces. FilmAgent simulates various crew roles,
including directors, screenwriters, actors, and cinematographers, and covers
key stages of a film production workflow: (1) idea development transforms
brainstormed ideas into structured story outlines; (2) scriptwriting elaborates
on dialogue and character actions for each scene; (3) cinematography determines
the camera setups for each shot. A team of agents collaborates through
iterative feedback and revisions, thereby verifying intermediate scripts and
reducing hallucinations. We evaluate the generated videos on 15 ideas and 4 key
aspects. Human evaluation shows that FilmAgent outperforms all baselines across
all aspects and scores 3.98 out of 5 on average, showing the feasibility of
multi-agent collaboration in filmmaking. Further analysis reveals that
FilmAgent, despite using the less advanced GPT-4o model, surpasses the
single-agent o1, showing the advantage of a well-coordinated multi-agent
system. Lastly, we discuss the complementary strengths and weaknesses of
OpenAI's text-to-video model Sora and our FilmAgent in filmmaking.
| 68 |
6791b68ef0ecdeb1a89e3770
| null | null |
|
2025-01-22T22:23:27.498000 |
Kimi k1.5: Scaling Reinforcement Learning with LLMs
| 6 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| true | null |
2501.12599
|
[
{
"_id": "6791b6029e215712a7cf700a",
"hidden": false,
"name": "Kimi Team",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf700b",
"hidden": false,
"name": "Angang Du",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf700c",
"hidden": false,
"name": "Bofei Gao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:41:32.895Z",
"user": {
"_id": "65ae21adabf6d1ccb795e9a4",
"avatarUrl": "/avatars/b5dced62c6a3564095a8fa0959bc06cb.svg",
"fullname": "Bofei Gao",
"isPro": false,
"type": "user",
"user": "KbsdJames"
}
},
{
"_id": "6791b6029e215712a7cf700d",
"hidden": false,
"name": "Bowei Xing",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:41:55.675Z",
"user": {
"_id": "67503f270dfe827c4068a408",
"avatarUrl": "/avatars/4591c8229c7815bfd6dc4b98aea85ca8.svg",
"fullname": "Bowei Xing",
"isPro": false,
"type": "user",
"user": "xingbowei"
}
},
{
"_id": "6791b6029e215712a7cf700e",
"hidden": false,
"name": "Changjiu Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf700f",
"hidden": false,
"name": "Cheng Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7010",
"hidden": false,
"name": "Cheng Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7011",
"hidden": false,
"name": "Chenjun Xiao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:42:24.770Z",
"user": {
"_id": "66c9aa8fa40c9235cb446365",
"avatarUrl": "/avatars/616719ed848e1f8437fb39e6184feb52.svg",
"fullname": "Chenjun Xiao",
"isPro": false,
"type": "user",
"user": "shelowize"
}
},
{
"_id": "6791b6029e215712a7cf7012",
"hidden": false,
"name": "Chenzhuang Du",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:42:36.347Z",
"user": {
"_id": "64c21fb42426d683e56b42bf",
"avatarUrl": "/avatars/60359fe204e32af831d701d2975c4599.svg",
"fullname": "Du",
"isPro": false,
"type": "user",
"user": "DuChenZhuang"
}
},
{
"_id": "6791b6029e215712a7cf7013",
"hidden": false,
"name": "Chonghua Liao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:43:07.678Z",
"user": {
"_id": "667fdaee20ee9ac417c7708c",
"avatarUrl": "/avatars/69dfba6ff392643af1dcfe8af0a42ae9.svg",
"fullname": "Chonghua Liao",
"isPro": false,
"type": "user",
"user": "ChonghuaLiao"
}
},
{
"_id": "6791b6029e215712a7cf7014",
"hidden": false,
"name": "Chuning Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7015",
"hidden": false,
"name": "Congcong Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:43:24.138Z",
"user": {
"_id": "5eefd87c5e979253a010eee5",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1603575136094-5eefd87c5e979253a010eee5.jpeg",
"fullname": "Congcong Wang",
"isPro": false,
"type": "user",
"user": "congcongwang"
}
},
{
"_id": "6791b6029e215712a7cf7016",
"hidden": false,
"name": "Dehao Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:44:01.384Z",
"user": {
"_id": "634d0eeffc4873ed0d00406a",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1665994473058-noauth.jpeg",
"fullname": "ZhangDehao",
"isPro": false,
"type": "user",
"user": "ispoon"
}
},
{
"_id": "6791b6029e215712a7cf7017",
"hidden": false,
"name": "Enming Yuan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:44:09.134Z",
"user": {
"_id": "6331606f18711776b4655e67",
"avatarUrl": "/avatars/1479c2ca743b9f92d845b0ed23fcd07b.svg",
"fullname": "Enming Yuan",
"isPro": false,
"type": "user",
"user": "EnmingYuan"
}
},
{
"_id": "6791b6029e215712a7cf7018",
"hidden": false,
"name": "Enzhe Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7019",
"hidden": false,
"name": "Fengxiang Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf701a",
"hidden": false,
"name": "Flood Sung",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:44:34.078Z",
"user": {
"_id": "6343d01a08c017b2c042305d",
"avatarUrl": "/avatars/790c4104d80da9887d481f9efb494d81.svg",
"fullname": "Flood Sung",
"isPro": false,
"type": "user",
"user": "floodsung"
}
},
{
"_id": "6791b6029e215712a7cf701b",
"hidden": false,
"name": "Guangda Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf701c",
"hidden": false,
"name": "Guokun Lai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:44:52.466Z",
"user": {
"_id": "63b4c71758f367a212c4f9ef",
"avatarUrl": "/avatars/d61736e0ae8b333a7c24eb411378698c.svg",
"fullname": "Lai",
"isPro": false,
"type": "user",
"user": "Guokun"
}
},
{
"_id": "6791b6029e215712a7cf701d",
"hidden": false,
"name": "Haiqing Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf701e",
"hidden": false,
"name": "Han Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf701f",
"hidden": false,
"name": "Hao Ding",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7020",
"hidden": false,
"name": "Hao Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7021",
"hidden": false,
"name": "Hao Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-23T15:05:07.301Z",
"user": {
"_id": "64ec364e7e2ec711a7601cde",
"avatarUrl": "/avatars/6ba47d496586de65df183f056d35982b.svg",
"fullname": "Hao Yang",
"isPro": false,
"type": "user",
"user": "hayayanghao"
}
},
{
"_id": "6791b6029e215712a7cf7022",
"hidden": false,
"name": "Hao Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7023",
"hidden": false,
"name": "Haotian Yao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7024",
"hidden": false,
"name": "Haotian Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7025",
"hidden": false,
"name": "Haoyu Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7026",
"hidden": false,
"name": "Haoze Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7027",
"hidden": false,
"name": "Haozhen Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7028",
"hidden": false,
"name": "Hongcheng Gao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:47:40.340Z",
"user": {
"_id": "62728f4f6253fe2068da1021",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62728f4f6253fe2068da1021/KZ65X0EH98AF3zXemPiap.jpeg",
"fullname": "Hongcheng Gao",
"isPro": false,
"type": "user",
"user": "HongchengGao"
}
},
{
"_id": "6791b6029e215712a7cf7029",
"hidden": false,
"name": "Huabin Zheng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:47:47.850Z",
"user": {
"_id": "61860e1258cb1f8c362f9441",
"avatarUrl": "/avatars/8dbc8209ad0d918453c1ffacc8f61e7f.svg",
"fullname": "Huabin Zheng",
"isPro": false,
"type": "user",
"user": "zhenghuabin"
}
},
{
"_id": "6791b6029e215712a7cf702a",
"hidden": false,
"name": "Huan Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf702b",
"hidden": false,
"name": "Jia Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf702c",
"hidden": false,
"name": "Jianhang Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf702d",
"hidden": false,
"name": "Jianlin Su",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf702e",
"hidden": false,
"name": "Jianzhou Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:48:00.137Z",
"user": {
"_id": "63be6bf6da08ed0544f1eb7a",
"avatarUrl": "/avatars/19b5be6d3296da402d8822e51d6376e2.svg",
"fullname": "jianzhouWang",
"isPro": false,
"type": "user",
"user": "jianzhouWang"
}
},
{
"_id": "6791b6029e215712a7cf702f",
"hidden": false,
"name": "Jie Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7030",
"hidden": false,
"name": "Jin Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7031",
"hidden": false,
"name": "Jingyuan Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7032",
"hidden": false,
"name": "Junjie Yan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7033",
"hidden": false,
"name": "Junyan Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7034",
"hidden": false,
"name": "Lidong Shi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7035",
"hidden": false,
"name": "Ling Ye",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7036",
"hidden": false,
"name": "Longhui Yu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-24T10:01:45.720Z",
"user": {
"_id": "64b753306c169983c982f609",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64b753306c169983c982f609/oOC_N8cI1Go6avUMvIPVI.jpeg",
"fullname": "Longhui Yu",
"isPro": false,
"type": "user",
"user": "Longhui98"
}
},
{
"_id": "6791b6029e215712a7cf7037",
"hidden": false,
"name": "Mengnan Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7038",
"hidden": false,
"name": "Neo Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7039",
"hidden": false,
"name": "Ningchen Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf703a",
"hidden": false,
"name": "Qiwei Pan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf703b",
"hidden": false,
"name": "Qucheng Gong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf703c",
"hidden": false,
"name": "Shaowei Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf703d",
"hidden": false,
"name": "Shengling Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf703e",
"hidden": false,
"name": "Shupeng Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf703f",
"hidden": false,
"name": "Sihan Cao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7040",
"hidden": false,
"name": "Siying Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7041",
"hidden": false,
"name": "Tao Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7042",
"hidden": false,
"name": "Weihao Gao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7043",
"hidden": false,
"name": "Weimin Xiong",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:35:12.836Z",
"user": {
"_id": "6225a9983207dfc568407204",
"avatarUrl": "/avatars/c970db6232d84ae8c0fa5f11d561d67c.svg",
"fullname": "xwm",
"isPro": false,
"type": "user",
"user": "xwm"
}
},
{
"_id": "6791b6029e215712a7cf7044",
"hidden": false,
"name": "Weiran He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7045",
"hidden": false,
"name": "Weixiao Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7046",
"hidden": false,
"name": "Wenhao Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7047",
"hidden": false,
"name": "Wenyang He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7048",
"hidden": false,
"name": "Xianghui Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7049",
"hidden": false,
"name": "Xianqing Jia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf704a",
"hidden": false,
"name": "Xingzhe Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf704b",
"hidden": false,
"name": "Xinran Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf704c",
"hidden": false,
"name": "Xinxing Zu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf704d",
"hidden": false,
"name": "Xinyu Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf704e",
"hidden": false,
"name": "Xuehai Pan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf704f",
"hidden": false,
"name": "Y. Charles",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7050",
"hidden": false,
"name": "Yang Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7051",
"hidden": false,
"name": "Yangyang Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7052",
"hidden": false,
"name": "Yangyang Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7053",
"hidden": false,
"name": "Yanru Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7054",
"hidden": false,
"name": "Yejie Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7055",
"hidden": false,
"name": "Yibo Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7056",
"hidden": false,
"name": "Yidao Qin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7057",
"hidden": false,
"name": "Yifeng Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-19T09:05:48.369Z",
"user": {
"_id": "653d276681f52ceb4d12bd85",
"avatarUrl": "/avatars/56601a25e5f883a8f6dc15f6fd9dcc57.svg",
"fullname": "Yifeng Liu",
"isPro": false,
"type": "user",
"user": "Lewis-Lau"
}
},
{
"_id": "6791b6029e215712a7cf7058",
"hidden": false,
"name": "Ying Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7059",
"hidden": false,
"name": "Yiping Bao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf705a",
"hidden": false,
"name": "Yulun Du",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:47:18.988Z",
"user": {
"_id": "6340f31fb78ed99eab04ce33",
"avatarUrl": "/avatars/2e7fcbf0233bdc0bc9a3f4603fd8bf90.svg",
"fullname": "Du",
"isPro": false,
"type": "user",
"user": "Yulun"
}
},
{
"_id": "6791b6029e215712a7cf705b",
"hidden": false,
"name": "Yuxin Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf705c",
"hidden": false,
"name": "Yuzhi Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:46:54.213Z",
"user": {
"_id": "67127a470a82509269d738ae",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/M9qLmI3P6dT2FIwEPFJq0.png",
"fullname": "yuzhi wang",
"isPro": false,
"type": "user",
"user": "vin-tage"
}
},
{
"_id": "6791b6029e215712a7cf705d",
"hidden": false,
"name": "Zaida Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:46:46.940Z",
"user": {
"_id": "64409d69518271b0d1c033a6",
"avatarUrl": "/avatars/c79eb36c4ad96286afda834e260a1c09.svg",
"fullname": "zhouzaida",
"isPro": false,
"type": "user",
"user": "zhouzaida"
}
},
{
"_id": "6791b6029e215712a7cf705e",
"hidden": false,
"name": "Zhaoji Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf705f",
"hidden": false,
"name": "Zhaowei Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7060",
"hidden": false,
"name": "Zhen Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7061",
"hidden": false,
"name": "Zheng Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7062",
"hidden": false,
"name": "Zhexu Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:46:16.399Z",
"user": {
"_id": "645b8bfd78730bcc103f9757",
"avatarUrl": "/avatars/1f9b7ec00744708ab609b1c4db2c2fcc.svg",
"fullname": "Wang Zhexuan",
"isPro": false,
"type": "user",
"user": "longmao14"
}
},
{
"_id": "6791b6029e215712a7cf7063",
"hidden": false,
"name": "Zhilin Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:46:04.605Z",
"user": {
"_id": "64bf74154d2052b1aa5ca6d9",
"avatarUrl": "/avatars/7aa6f2952cdbc20cfa758fdd905f06a6.svg",
"fullname": "ZHILIN YANG",
"isPro": false,
"type": "user",
"user": "bruceyannnn"
}
},
{
"_id": "6791b6029e215712a7cf7064",
"hidden": false,
"name": "Zhiqi Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7065",
"hidden": false,
"name": "Zihao Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b6029e215712a7cf7066",
"hidden": false,
"name": "Ziyao Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:45:24.566Z",
"user": {
"_id": "64d381c3527b22502ec81eb7",
"avatarUrl": "/avatars/55145a716acf120a32a302830c1e8178.svg",
"fullname": "Ziyao Xu",
"isPro": false,
"type": "user",
"user": "xzyxzy"
}
},
{
"_id": "6791b6029e215712a7cf7067",
"hidden": false,
"name": "Zonghan Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:45:16.573Z",
"user": {
"_id": "635795f24fba057662cdf2e7",
"avatarUrl": "/avatars/07642c7cfd9b57a0f8382268b009cf6f.svg",
"fullname": "Zonghan Yang",
"isPro": false,
"type": "user",
"user": "minicheshire"
}
}
] | 2025-01-22T02:48:14 |
Kimi k1.5: Scaling Reinforcement Learning with LLMs
|
Language model pretraining with next token prediction has proved effective
for scaling compute but is limited by the amount of available training data.
Scaling reinforcement learning (RL) unlocks a new axis for the continued
improvement of artificial intelligence, with the promise that large language
models (LLMs) can scale their training data by learning to explore with
rewards. However, prior published work has not produced competitive results. In
light of this, we report on the training practice of Kimi k1.5, our latest
multi-modal LLM trained with RL, including its RL training techniques,
multi-modal data recipes, and infrastructure optimization. Long context scaling
and improved policy optimization methods are key ingredients of our approach,
which establishes a simplistic, effective RL framework without relying on more
complex techniques such as Monte Carlo tree search, value functions, and
process reward models. Notably, our system achieves state-of-the-art reasoning
performance across multiple benchmarks and modalities -- e.g., 77.5 on AIME,
96.2 on MATH 500, 94-th percentile on Codeforces, 74.9 on MathVista -- matching
OpenAI's o1. Moreover, we present effective long2short methods that use
long-CoT techniques to improve short-CoT models, yielding state-of-the-art
short-CoT reasoning results -- e.g., 60.8 on AIME, 94.6 on MATH500, 47.3 on
LiveCodeBench -- outperforming existing short-CoT models such as GPT-4o and
Claude Sonnet 3.5 by a large margin (up to +550%).
| 100 |
6791b6039e215712a7cf70aa
| null | null |
|
2025-01-22T22:03:17.988000 |
O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.12570
|
[
{
"_id": "6791b165c6fee5f0588773a9",
"hidden": false,
"name": "Haotian Luo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:50:29.364Z",
"user": {
"_id": "674919d380981d58adbb456a",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/q9n5VYYP7Ip6iPkrlDFGm.png",
"fullname": "LuoHaotian",
"isPro": false,
"type": "user",
"user": "iNk233"
}
},
{
"_id": "6791b165c6fee5f0588773aa",
"hidden": false,
"name": "Li Shen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-26T11:38:40.905Z",
"user": {
"_id": "62de356ad89af8c07209e7d4",
"avatarUrl": "/avatars/610629958726b270418368b8b7f61469.svg",
"fullname": "Li Shen",
"isPro": false,
"type": "user",
"user": "mathshenli"
}
},
{
"_id": "6791b165c6fee5f0588773ab",
"hidden": false,
"name": "Haiying He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b165c6fee5f0588773ac",
"hidden": false,
"name": "Yibo Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b165c6fee5f0588773ad",
"hidden": false,
"name": "Shiwei Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T11:15:27.241Z",
"user": {
"_id": "65b04d2291e63920a7898c9e",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/65b04d2291e63920a7898c9e/iUHs235G4bqK-KnH_94ti.jpeg",
"fullname": "Liu",
"isPro": false,
"type": "user",
"user": "Shiweiliuiiiiiii"
}
},
{
"_id": "6791b165c6fee5f0588773ae",
"hidden": false,
"name": "Wei Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b165c6fee5f0588773af",
"hidden": false,
"name": "Naiqiang Tan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791b165c6fee5f0588773b0",
"hidden": false,
"name": "Xiaochun Cao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-23T09:50:53.299Z",
"user": {
"_id": "64ddc5ea83645d0f98599d6e",
"avatarUrl": "/avatars/99a148c29cfbff77ed6150cc279a47c0.svg",
"fullname": "caoxiaochun",
"isPro": false,
"type": "user",
"user": "cxc361461518"
}
},
{
"_id": "6791b165c6fee5f0588773b1",
"hidden": false,
"name": "Dacheng Tao",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-22T01:35:11 |
O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning
|
Recently, long-thought reasoning LLMs, such as OpenAI's O1, adopt extended
reasoning processes similar to how humans ponder over complex problems. This
reasoning paradigm significantly enhances the model's problem-solving abilities
and has achieved promising results. However, the long-thought reasoning process
leads to a substantial increase in inference time. A pressing challenge is
reducing the inference overhead of long-thought LLMs while ensuring accuracy.
In this paper, we experimentally demonstrate that long-thought reasoning models
struggle to effectively allocate token budgets based on problem difficulty and
reasoning redundancies. To address this, we propose Length-Harmonizing
Fine-Tuning (O1-Pruner), aiming at minimizing reasoning overhead while
maintaining accuracy. This effective fine-tuning method first estimates the
LLM's baseline performance through pre-sampling and then uses RL-style
fine-tuning to encourage the model to generate shorter reasoning processes
under accuracy constraints. This allows the model to achieve efficient
reasoning with lower redundancy while maintaining accuracy. Experiments on
various mathematical reasoning benchmarks show that O1-Pruner not only
significantly reduces inference overhead but also achieves higher accuracy,
providing a novel and promising solution to this challenge. Our code is coming
soon at https://github.com/StarDewXXX/O1-Pruner
| 24 |
6791b166c6fee5f0588773fc
| null | null |
|
2025-01-22T15:35:34.550000 |
Taming Teacher Forcing for Masked Autoregressive Video Generation
| 2 |
{
"_id": "6201fc5d91d53938a6432fbf",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6201fc5d91d53938a6432fbf/VLs8ZYaZrop4KBpZn53fH.jpeg",
"followerCount": 3,
"fullname": "Runpei Dong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "RunpeiDong",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/6201fc5d91d53938a6432fbf/qTDQmHOuoSZwDbdymh1o0.png"
] |
2501.12389
|
[
{
"_id": "6791564a65b96bef4a168e00",
"hidden": false,
"name": "Deyu Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:54:25.055Z",
"user": {
"_id": "64e821f2bddc5b1072b15c2e",
"avatarUrl": "/avatars/618b5a48f2fa62daff4e1922a9aa9e8b.svg",
"fullname": "zhoudeyu",
"isPro": false,
"type": "user",
"user": "zhoudeyu"
}
},
{
"_id": "6791564a65b96bef4a168e01",
"hidden": false,
"name": "Quan Sun",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:54:39.129Z",
"user": {
"_id": "630d7a8f81ef9b1772b67f4c",
"avatarUrl": "/avatars/00757abd6e548ccebb5bfb233be129a2.svg",
"fullname": "Quan Sun",
"isPro": false,
"type": "user",
"user": "QuanSun"
}
},
{
"_id": "6791564a65b96bef4a168e02",
"hidden": false,
"name": "Yuang Peng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:54:44.656Z",
"user": {
"_id": "631ee086c1a8269da39265c6",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/631ee086c1a8269da39265c6/wUa1epGtTGcUv2mvrLUcD.png",
"fullname": "Yuang Peng",
"isPro": false,
"type": "user",
"user": "yuangpeng"
}
},
{
"_id": "6791564a65b96bef4a168e03",
"hidden": false,
"name": "Kun Yan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791564a65b96bef4a168e04",
"hidden": false,
"name": "Runpei Dong",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-22T20:49:49.933Z",
"user": {
"_id": "6201fc5d91d53938a6432fbf",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6201fc5d91d53938a6432fbf/VLs8ZYaZrop4KBpZn53fH.jpeg",
"fullname": "Runpei Dong",
"isPro": false,
"type": "user",
"user": "RunpeiDong"
}
},
{
"_id": "6791564a65b96bef4a168e05",
"hidden": false,
"name": "Duomin Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-23T09:18:05.586Z",
"user": {
"_id": "64ae9b88a22a179fc4d07992",
"avatarUrl": "/avatars/c9065f04a1188ea3129e56a90328ffd3.svg",
"fullname": "wang",
"isPro": false,
"type": "user",
"user": "dorni"
}
},
{
"_id": "6791564a65b96bef4a168e06",
"hidden": false,
"name": "Zheng Ge",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791564a65b96bef4a168e07",
"hidden": false,
"name": "Nan Duan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:55:16.663Z",
"user": {
"_id": "66422aede3e9eba09d3cb753",
"avatarUrl": "/avatars/52a6292fcc3a37265062bdcfbea73441.svg",
"fullname": "Nan Duan",
"isPro": false,
"type": "user",
"user": "opotle"
}
},
{
"_id": "6791564a65b96bef4a168e08",
"hidden": false,
"name": "Xiangyu Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791564a65b96bef4a168e09",
"hidden": false,
"name": "Lionel M. Ni",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6791564a65b96bef4a168e0a",
"hidden": false,
"name": "Heung-Yeung Shum",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-21T18:59:31 |
Taming Teacher Forcing for Masked Autoregressive Video Generation
|
We introduce MAGI, a hybrid video generation framework that combines masked
modeling for intra-frame generation with causal modeling for next-frame
generation. Our key innovation, Complete Teacher Forcing (CTF), conditions
masked frames on complete observation frames rather than masked ones (namely
Masked Teacher Forcing, MTF), enabling a smooth transition from token-level
(patch-level) to frame-level autoregressive generation. CTF significantly
outperforms MTF, achieving a +23% improvement in FVD scores on first-frame
conditioned video prediction. To address issues like exposure bias, we employ
targeted training strategies, setting a new benchmark in autoregressive video
generation. Experiments show that MAGI can generate long, coherent video
sequences exceeding 100 frames, even when trained on as few as 16 frames,
highlighting its potential for scalable, high-quality video generation.
| 10 |
6791564c65b96bef4a168fbd
| null | null |
|
2025-01-22T14:10:17.911000 |
Fixing Imbalanced Attention to Mitigate In-Context Hallucination of Large Vision-Language Model
| 2 |
{
"_id": "654af6f173416a223f5eacf5",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/FZvER_MtUz_BitTVf2QfV.jpeg",
"followerCount": 1,
"fullname": "Hasan Arif",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "hasanar1f",
"type": "user"
}
| true | null |
2501.12206
|
[
{
"_id": "67914269013258a9daeec92a",
"hidden": false,
"name": "Kazi Hasan Ibn Arif",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-01-22T19:35:08.167Z",
"user": {
"_id": "654af6f173416a223f5eacf5",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/FZvER_MtUz_BitTVf2QfV.jpeg",
"fullname": "Hasan Arif",
"isPro": false,
"type": "user",
"user": "hasanar1f"
}
},
{
"_id": "67914269013258a9daeec92b",
"hidden": false,
"name": "Sajib Acharjee Dip",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67914269013258a9daeec92c",
"hidden": false,
"name": "Khizar Hussain",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-01-22T19:24:54.932Z",
"user": {
"_id": "6719304706ed69d813c17f72",
"avatarUrl": "/avatars/516e6dc12c1b8de3e8c3bd26207013a3.svg",
"fullname": "Khizar Husasin",
"isPro": false,
"type": "user",
"user": "KhizarH"
}
},
{
"_id": "67914269013258a9daeec92d",
"hidden": false,
"name": "Lang Zhang",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-01-22T22:27:51.230Z",
"user": {
"_id": "677d52aeb04a44ce6218c6ab",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/O_co9rCqruMMlyoqXn1vS.png",
"fullname": "Lang Zhang",
"isPro": false,
"type": "user",
"user": "Lang2000"
}
},
{
"_id": "67914269013258a9daeec92e",
"hidden": false,
"name": "Chris Thomas",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-21T15:22:31 |
Fixing Imbalanced Attention to Mitigate In-Context Hallucination of
Large Vision-Language Model
|
Large Vision Language Models (LVLMs) have demonstrated remarkable
capabilities in understanding and describing visual content, achieving
state-of-the-art performance across various vision-language tasks. However,
these models frequently exhibit hallucination behavior, where they generate
descriptions containing objects or details absent in the input image. Our work
investigates this phenomenon by analyzing attention patterns across transformer
layers and heads, revealing that hallucinations often stem from progressive
degradation of visual grounding in deeper layers. We propose a novel attention
modification approach that combines selective token emphasis and head-specific
modulation to maintain visual grounding throughout the generation process. Our
method introduces two key components: (1) a dual-stream token selection
mechanism that identifies and prioritizes both locally informative and
spatially significant visual tokens, and (2) an attention head-specific
modulation strategy that differentially amplifies visual information processing
based on measured visual sensitivity of individual attention heads. Through
extensive experimentation on the MSCOCO dataset, we demonstrate that our
approach reduces hallucination rates by up to 62.3% compared to baseline
models while maintaining comparable task performance. Our analysis reveals that
selectively modulating tokens across attention heads with varying levels of
visual sensitivity can significantly improve visual grounding without requiring
model retraining.
| 4 |
67914269013258a9daeec976
| null | null |
|
2025-01-22T09:28:05.021000 |
The Geometry of Tokens in Internal Representations of Large Language Models
| 2 |
{
"_id": "6632140cf69134729c68e65f",
"avatarUrl": "/avatars/f68426b505752606223bb78fd82a1af9.svg",
"followerCount": null,
"fullname": "Karthik Viswanathan",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "vkarthik095",
"type": "user"
}
| true | null |
2501.10573
|
[
{
"_id": "6790ffb0561e5b824100c29a",
"hidden": false,
"name": "Karthik Viswanathan",
"status": "extracted_pending",
"statusLastChangedAt": "2025-01-22T14:24:50.240Z",
"user": {
"_id": "6632140cf69134729c68e65f",
"avatarUrl": "/avatars/f68426b505752606223bb78fd82a1af9.svg",
"fullname": "Karthik Viswanathan",
"isPro": false,
"type": "user",
"user": "vkarthik095"
}
},
{
"_id": "6790ffb0561e5b824100c29b",
"hidden": false,
"name": "Yuri Gardinazzi",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-22T15:35:14.198Z",
"user": {
"_id": "6618fbba78bd2f4adc475834",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6618fbba78bd2f4adc475834/dJBRN4jucKOnMHtq2aq42.jpeg",
"fullname": "YuriGardinazzi",
"isPro": false,
"type": "user",
"user": "YuriGardinazzi"
}
},
{
"_id": "6790ffb0561e5b824100c29c",
"hidden": false,
"name": "Giada Panerai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6790ffb0561e5b824100c29d",
"hidden": false,
"name": "Alberto Cazzaniga",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6790ffb0561e5b824100c29e",
"hidden": false,
"name": "Matteo Biagetti",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-17T22:02:17 |
The Geometry of Tokens in Internal Representations of Large Language
Models
|
We investigate the relationship between the geometry of token embeddings and
their role in the next token prediction within transformer models. An important
aspect of this connection uses the notion of empirical measure, which encodes
the distribution of token point clouds across transformer layers and drives the
evolution of token representations in the mean-field interacting picture. We
use metrics such as intrinsic dimension, neighborhood overlap, and cosine
similarity to observationally probe these empirical measures across layers. To
validate our approach, we compare these metrics to a dataset where the tokens
are shuffled, which disrupts the syntactic and semantic structure. Our findings
reveal a correlation between the geometric properties of token embeddings and
the cross-entropy loss of next token predictions, implying that prompts with
higher loss values have tokens represented in higher-dimensional spaces.
| 9 |
6790ffb2561e5b824100c308
| null | null |
|
2025-01-22T08:10:31.176000 |
Panoramic Interests: Stylistic-Content Aware Personalized Headline Generation
| 2 |
{
"_id": "63a15c583c8841cfe2dbc5c6",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1671519286473-noauth.jpeg",
"followerCount": 1,
"fullname": "Lian Junhong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "THEATLAS",
"type": "user"
}
| true | null |
2501.11900
|
[
{
"_id": "679079a3a78e61ae7817a5f4",
"hidden": false,
"name": "Junhong Lian",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-22T10:08:10.850Z",
"user": {
"_id": "63a15c583c8841cfe2dbc5c6",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1671519286473-noauth.jpeg",
"fullname": "Lian Junhong",
"isPro": false,
"type": "user",
"user": "THEATLAS"
}
},
{
"_id": "679079a3a78e61ae7817a5f5",
"hidden": false,
"name": "Xiang Ao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:53:33.832Z",
"user": {
"_id": "6462ccfc514ee1645bd27b9e",
"avatarUrl": "/avatars/38d77fa41da3af7e358a8ea65c17aa3c.svg",
"fullname": "Xiang Ao",
"isPro": false,
"type": "user",
"user": "aoroseeee"
}
},
{
"_id": "679079a3a78e61ae7817a5f6",
"hidden": false,
"name": "Xinyu Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679079a3a78e61ae7817a5f7",
"hidden": false,
"name": "Yang Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679079a3a78e61ae7817a5f8",
"hidden": false,
"name": "Qing He",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-21T05:30:20 |
Panoramic Interests: Stylistic-Content Aware Personalized Headline
Generation
|
Personalized news headline generation aims to provide users with
attention-grabbing headlines that are tailored to their preferences. Prevailing
methods focus on user-oriented content preferences, but most of them overlook
the fact that diverse stylistic preferences are integral to users' panoramic
interests, leading to suboptimal personalization. In view of this, we propose a
novel Stylistic-Content Aware Personalized Headline Generation (SCAPE)
framework. SCAPE extracts both content and stylistic features from headlines
with the aid of large language model (LLM) collaboration. It further adaptively
integrates users' long- and short-term interests through a contrastive
learning-based hierarchical fusion network. By incorporating the panoramic
interests into the headline generator, SCAPE reflects users' stylistic-content
preferences during the generation process. Extensive experiments on the
real-world dataset PENS demonstrate the superiority of SCAPE over baselines.
| 6 |
679079a4a78e61ae7817a648
| null | null |
|
2025-01-22T07:05:11.749000 |
InternLM-XComposer2.5-Reward: A Simple Yet Effective Multi-Modal Reward Model
| 3 |
{
"_id": "64b4eec4faa3181a5eab9c46",
"avatarUrl": "/avatars/bcc9bf5cbf67546ad2b4c9ec8b96ac96.svg",
"followerCount": 16,
"fullname": "Jiaqi Wang",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "myownskyW7",
"type": "user"
}
| true | null |
2501.12368
|
[
{
"_id": "67907fd7d37463df976acaa7",
"hidden": false,
"name": "Yuhang Zang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:44:47.687Z",
"user": {
"_id": "63859cf3b2906edaf83af9f0",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63859cf3b2906edaf83af9f0/iUQm5FAomzqYi6fkqIn9F.jpeg",
"fullname": "Yuhang Zang",
"isPro": false,
"type": "user",
"user": "yuhangzang"
}
},
{
"_id": "67907fd7d37463df976acaa8",
"hidden": false,
"name": "Xiaoyi Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67907fd7d37463df976acaa9",
"hidden": false,
"name": "Pan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67907fd7d37463df976acaaa",
"hidden": false,
"name": "Yuhang Cao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:45:22.947Z",
"user": {
"_id": "65000bef18830fabea469fdd",
"avatarUrl": "/avatars/b320c77dfad039d9f9c54127f610d44f.svg",
"fullname": "Cao Yuhang",
"isPro": false,
"type": "user",
"user": "yhcao"
}
},
{
"_id": "67907fd7d37463df976acaab",
"hidden": false,
"name": "Ziyu Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67907fd7d37463df976acaac",
"hidden": false,
"name": "Shengyuan Ding",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:45:51.536Z",
"user": {
"_id": "646cd947da8e99940b6e55cf",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/646cd947da8e99940b6e55cf/9c0P0WppFqNW9pdo8LgOS.jpeg",
"fullname": "Shengyuan Ding",
"isPro": false,
"type": "user",
"user": "ChrisDing1105"
}
},
{
"_id": "67907fd7d37463df976acaad",
"hidden": false,
"name": "Shenxi Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67907fd7d37463df976acaae",
"hidden": false,
"name": "Yubo Ma",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:46:21.440Z",
"user": {
"_id": "63883d1cad6d6d6e93574705",
"avatarUrl": "/avatars/19f17abdcfae6ff17e16d0a478c6b87f.svg",
"fullname": "Yubo Ma",
"isPro": false,
"type": "user",
"user": "yubo2333"
}
},
{
"_id": "67907fd7d37463df976acaaf",
"hidden": false,
"name": "Haodong Duan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:46:30.500Z",
"user": {
"_id": "63ee1379190ddd6214efd73a",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1676546883247-noauth.png",
"fullname": "HAODONG DUAN",
"isPro": false,
"type": "user",
"user": "KennyUTC"
}
},
{
"_id": "67907fd7d37463df976acab0",
"hidden": false,
"name": "Wenwei Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:46:37.430Z",
"user": {
"_id": "64e8505321540e1da3226b54",
"avatarUrl": "/avatars/18958b8406d1ce492b54c1c839f18c54.svg",
"fullname": "Wenwei Zhang",
"isPro": false,
"type": "user",
"user": "ZwwWayne"
}
},
{
"_id": "67907fd7d37463df976acab1",
"hidden": false,
"name": "Kai Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67907fd7d37463df976acab2",
"hidden": false,
"name": "Dahua Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:46:44.848Z",
"user": {
"_id": "636317ed80c1a705a6eff396",
"avatarUrl": "/avatars/3db090e101b916d9256d0d3e043db71d.svg",
"fullname": "Dahua Lin",
"isPro": false,
"type": "user",
"user": "lindahua"
}
},
{
"_id": "67907fd7d37463df976acab3",
"hidden": false,
"name": "Jiaqi Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-22T14:04:07.315Z",
"user": {
"_id": "64b4eec4faa3181a5eab9c46",
"avatarUrl": "/avatars/bcc9bf5cbf67546ad2b4c9ec8b96ac96.svg",
"fullname": "Jiaqi Wang",
"isPro": true,
"type": "user",
"user": "myownskyW7"
}
}
] | 2025-01-21T18:47:32 |
InternLM-XComposer2.5-Reward: A Simple Yet Effective Multi-Modal Reward
Model
|
Despite the promising performance of Large Vision Language Models (LVLMs) in
visual understanding, they occasionally generate incorrect outputs. While
reward models (RMs) with reinforcement learning or test-time scaling offer the
potential for improving generation quality, a critical gap remains: publicly
available multi-modal RMs for LVLMs are scarce, and the implementation details
of proprietary models are often unclear. We bridge this gap with
InternLM-XComposer2.5-Reward (IXC-2.5-Reward), a simple yet effective
multi-modal reward model that aligns LVLMs with human preferences. To ensure
the robustness and versatility of IXC-2.5-Reward, we set up a high-quality
multi-modal preference corpus spanning text, image, and video inputs across
diverse domains, such as instruction following, general understanding,
text-rich documents, mathematical reasoning, and video understanding.
IXC-2.5-Reward achieves excellent results on the latest multi-modal reward
model benchmark and shows competitive performance on text-only reward model
benchmarks. We further demonstrate three key applications of IXC-2.5-Reward:
(1) Providing a supervisory signal for RL training. Integrating IXC-2.5-Reward
with Proximal Policy Optimization (PPO) yields IXC-2.5-Chat, which shows
consistent improvements in instruction following and multi-modal open-ended
dialogue; (2) Selecting the best response from candidate responses for
test-time scaling; and (3) Filtering outlier or noisy samples from existing
image and video instruction tuning training data. To ensure reproducibility and
facilitate further research, we have open-sourced all model weights and
training recipes at https://github.com/InternLM/InternLM-XComposer.
| 42 |
67907fd9d37463df976acb24
| null | null |
|
2025-01-22T06:50:59.785000 |
TokenVerse: Versatile Multi-concept Personalization in Token Modulation Space
| 2 |
{
"_id": "630551fd80bc5e03dad21ea6",
"avatarUrl": "/avatars/849e41404df698cc89c68939de45ec9a.svg",
"followerCount": null,
"fullname": "Andrey Voynov",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "avoin",
"type": "user"
}
| false |
[
"https://cdn-uploads.huggingface.co/production/uploads/630551fd80bc5e03dad21ea6/iyhVz9LR0frnocT0Hc7sF.mp4"
] |
2501.12224
|
[
{
"_id": "6790db6588d8a90790d99ed4",
"hidden": false,
"name": "Daniel Garibi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:32:53.662Z",
"user": {
"_id": "65fbf768980d4143b0d3ab52",
"avatarUrl": "/avatars/018a26ce944860e9ab97ca43765e7d76.svg",
"fullname": "Daniel Garibi",
"isPro": false,
"type": "user",
"user": "garibida"
}
},
{
"_id": "6790db6588d8a90790d99ed5",
"hidden": false,
"name": "Shahar Yadin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:33:02.856Z",
"user": {
"_id": "66ab799e5e4f9699e476856e",
"avatarUrl": "/avatars/942943ed751c9703c34782fd7bd92be2.svg",
"fullname": "Shahar Yadin",
"isPro": false,
"type": "user",
"user": "shaharyadin"
}
},
{
"_id": "6790db6588d8a90790d99ed6",
"hidden": false,
"name": "Roni Paiss",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:33:11.237Z",
"user": {
"_id": "62cab8aeb189b6164d26bfbe",
"avatarUrl": "/avatars/7ed117e375dd9280a5750a242cae24ca.svg",
"fullname": "Roni Paiss",
"isPro": false,
"type": "user",
"user": "RoniPaiss"
}
},
{
"_id": "6790db6588d8a90790d99ed7",
"hidden": false,
"name": "Omer Tov",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:33:26.476Z",
"user": {
"_id": "623889093657dc674c30954e",
"avatarUrl": "/avatars/a5a193d231e01448cc78ed5bba52bff5.svg",
"fullname": "Omer Tov",
"isPro": false,
"type": "user",
"user": "omertov"
}
},
{
"_id": "6790db6588d8a90790d99ed8",
"hidden": false,
"name": "Shiran Zada",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:33:37.040Z",
"user": {
"_id": "63510f62cba4ff2e81cb0492",
"avatarUrl": "/avatars/2375a841491de8f40f66b6d0fb0df7b1.svg",
"fullname": "Shiran Zada",
"isPro": false,
"type": "user",
"user": "shiranzada"
}
},
{
"_id": "6790db6588d8a90790d99ed9",
"hidden": false,
"name": "Ariel Ephrat",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:33:44.386Z",
"user": {
"_id": "65b6ada5ff5235e6ca9551c9",
"avatarUrl": "/avatars/2c558fd133bc1d0cd1b5d7f39d1968ee.svg",
"fullname": "Ariel Ephrat",
"isPro": false,
"type": "user",
"user": "arielephrat"
}
},
{
"_id": "6790db6588d8a90790d99eda",
"hidden": false,
"name": "Tomer Michaeli",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:33:58.984Z",
"user": {
"_id": "6457c3f50664b05a5b1d941d",
"avatarUrl": "/avatars/0132be4d068d4b314f6b498b65411461.svg",
"fullname": "Tomer Michaeli",
"isPro": false,
"type": "user",
"user": "Michato"
}
},
{
"_id": "6790db6588d8a90790d99edb",
"hidden": false,
"name": "Inbar Mosseri",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:34:07.262Z",
"user": {
"_id": "670f75a77bb118f080d29eef",
"avatarUrl": "/avatars/134687f97a476be6db5133e26de83ca5.svg",
"fullname": "Inbar Mosseri",
"isPro": false,
"type": "user",
"user": "inbarm"
}
},
{
"_id": "6790db6588d8a90790d99edc",
"hidden": false,
"name": "Tali Dekel",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:34:15.392Z",
"user": {
"_id": "631cddec68f7da9ad24f6fc7",
"avatarUrl": "/avatars/7d4f1ce805e5889ca6594bd4a93f2583.svg",
"fullname": "Tali Dekel",
"isPro": false,
"type": "user",
"user": "talidekel"
}
}
] | 2025-01-21T15:49:29 |
TokenVerse: Versatile Multi-concept Personalization in Token Modulation
Space
|
We present TokenVerse -- a method for multi-concept personalization,
leveraging a pre-trained text-to-image diffusion model. Our framework can
disentangle complex visual elements and attributes from as little as a single
image, while enabling seamless plug-and-play generation of combinations of
concepts extracted from multiple images. As opposed to existing works,
TokenVerse can handle multiple images with multiple concepts each, and supports
a wide range of concepts, including objects, accessories, materials, pose, and
lighting. Our work exploits a DiT-based text-to-image model, in which the input
text affects the generation through both attention and modulation (shift and
scale). We observe that the modulation space is semantic and enables localized
control over complex concepts. Building on this insight, we devise an
optimization-based framework that takes as input an image and a text
description, and finds for each word a distinct direction in the modulation
space. These directions can then be used to generate new images that combine
the learned concepts in a desired configuration. We demonstrate the
effectiveness of TokenVerse in challenging personalization settings, and
showcase its advantages over existing methods. Project webpage:
https://token-verse.github.io/
| 46 |
6790db6c88d8a90790d9a0f7
| null | null |
|
2025-01-22T04:22:56.830000 |
MSTS: A Multimodal Safety Test Suite for Vision-Language Models
| 2 |
{
"_id": "62e7dd4036a8e8a82700041c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62e7dd4036a8e8a82700041c/Dgk9mXYLVd4LpiNLWjn-q.jpeg",
"followerCount": 11,
"fullname": "Felix Friedrich",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "felfri",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/62e7dd4036a8e8a82700041c/K3gr9O2a011NTtNgwzCEt.png"
] |
2501.10057
|
[
{
"_id": "678e6e149abc7b5b42df8807",
"hidden": false,
"name": "Paul Röttger",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:50:28.923Z",
"user": {
"_id": "602ce925374a0dbe5856eca1",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/602ce925374a0dbe5856eca1/lBvTn6hYOCF0ggD-rk_mf.jpeg",
"fullname": "Paul Röttger",
"isPro": false,
"type": "user",
"user": "Paul"
}
},
{
"_id": "678e6e149abc7b5b42df8808",
"hidden": false,
"name": "Giuseppe Attanasio",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:50:34.269Z",
"user": {
"_id": "60d494e250c47659f83f5cd0",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1670856721262-60d494e250c47659f83f5cd0.png",
"fullname": "Giuseppe Attanasio",
"isPro": false,
"type": "user",
"user": "g8a9"
}
},
{
"_id": "678e6e149abc7b5b42df8809",
"hidden": false,
"name": "Felix Friedrich",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:50:41.413Z",
"user": {
"_id": "62e7dd4036a8e8a82700041c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62e7dd4036a8e8a82700041c/Dgk9mXYLVd4LpiNLWjn-q.jpeg",
"fullname": "Felix Friedrich",
"isPro": false,
"type": "user",
"user": "felfri"
}
},
{
"_id": "678e6e149abc7b5b42df880a",
"hidden": false,
"name": "Janis Goldzycher",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:50:46.971Z",
"user": {
"_id": "62cd7047c4eb470b622a5017",
"avatarUrl": "/avatars/94f9cf171dcc7e114b56a9b3100b44d4.svg",
"fullname": "Janis Goldzycher",
"isPro": false,
"type": "user",
"user": "jagoldz"
}
},
{
"_id": "678e6e149abc7b5b42df880b",
"hidden": false,
"name": "Alicia Parrish",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:50:52.506Z",
"user": {
"_id": "659d71eaf0ae1a57ae140831",
"avatarUrl": "/avatars/b9eb8c70b4fe5dbd0f1555602ae41b65.svg",
"fullname": "Alicia Parrish",
"isPro": false,
"type": "user",
"user": "avparrish"
}
},
{
"_id": "678e6e149abc7b5b42df880c",
"hidden": false,
"name": "Rishabh Bhardwaj",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:53:17.453Z",
"user": {
"_id": "5f278507e923d665e616271b",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/5f278507e923d665e616271b/tWFuswXOTXtvMdL8zSrr_.png",
"fullname": "Rishabh Bhardwaj",
"isPro": false,
"type": "user",
"user": "RishabhBhardwaj"
}
},
{
"_id": "678e6e149abc7b5b42df880d",
"hidden": false,
"name": "Chiara Di Bonaventura",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:53:11.524Z",
"user": {
"_id": "62de70fb86220b5cb895e199",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62de70fb86220b5cb895e199/k7w8pTBIvCwkrTksQJ7GM.jpeg",
"fullname": "Chiara Di Bonaventura",
"isPro": false,
"type": "user",
"user": "dibo"
}
},
{
"_id": "678e6e149abc7b5b42df880e",
"hidden": false,
"name": "Roman Eng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:53:04.803Z",
"user": {
"_id": "641f6e40f8c8b04c0babc208",
"avatarUrl": "/avatars/02438bebb15ca8aea8126e726896d158.svg",
"fullname": "Roman Eng",
"isPro": false,
"type": "user",
"user": "romaneng"
}
},
{
"_id": "678e6e149abc7b5b42df880f",
"hidden": false,
"name": "Gaia El Khoury Geagea",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678e6e149abc7b5b42df8810",
"hidden": false,
"name": "Sujata Goswami",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678e6e149abc7b5b42df8811",
"hidden": false,
"name": "Jieun Han",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678e6e149abc7b5b42df8812",
"hidden": false,
"name": "Dirk Hovy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678e6e149abc7b5b42df8813",
"hidden": false,
"name": "Seogyeong Jeong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678e6e149abc7b5b42df8814",
"hidden": false,
"name": "Paloma Jeretič",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678e6e149abc7b5b42df8815",
"hidden": false,
"name": "Flor Miriam Plaza-del-Arco",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:52:00.837Z",
"user": {
"_id": "60f7ec6ad51f9e10a4f422af",
"avatarUrl": "/avatars/12ac5037657cfba4e2b48b08c688a629.svg",
"fullname": "Flor Miriam Plaza-del-Arco",
"isPro": false,
"type": "user",
"user": "fmplaza"
}
},
{
"_id": "678e6e149abc7b5b42df8816",
"hidden": false,
"name": "Donya Rooein",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:51:51.329Z",
"user": {
"_id": "64074d104dc5f2846c95db00",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64074d104dc5f2846c95db00/8T4KCvjdjSldDwwTOZreF.jpeg",
"fullname": "Donya Rooein",
"isPro": false,
"type": "user",
"user": "Donya"
}
},
{
"_id": "678e6e149abc7b5b42df8817",
"hidden": false,
"name": "Patrick Schramowski",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:51:44.897Z",
"user": {
"_id": "62d021a3dd7bdfc5e5c61c5c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62d021a3dd7bdfc5e5c61c5c/bnQW2SqirfGaQmI84HW_c.jpeg",
"fullname": "Patrick Schramowski",
"isPro": false,
"type": "user",
"user": "PSaiml"
}
},
{
"_id": "678e6e149abc7b5b42df8818",
"hidden": false,
"name": "Anastassia Shaitarova",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:51:39.084Z",
"user": {
"_id": "650b2163a27ba14323dafc03",
"avatarUrl": "/avatars/9ff16cbceaee57601aed6ac8a9711171.svg",
"fullname": "Anastassia Shaitarova",
"isPro": false,
"type": "user",
"user": "ShaitanRa"
}
},
{
"_id": "678e6e149abc7b5b42df8819",
"hidden": false,
"name": "Xudong Shen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:51:32.813Z",
"user": {
"_id": "63c628610be43255f36f2a40",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1673930799006-noauth.jpeg",
"fullname": "Xudong Shen",
"isPro": false,
"type": "user",
"user": "XudongShen"
}
},
{
"_id": "678e6e149abc7b5b42df881a",
"hidden": false,
"name": "Richard Willats",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:51:26.120Z",
"user": {
"_id": "6718ff9d04139dd58a39e63b",
"avatarUrl": "/avatars/2a0a6d4ef9760391dce7343aa5b6a004.svg",
"fullname": "Richard WIllats",
"isPro": false,
"type": "user",
"user": "rwillats"
}
},
{
"_id": "678e6e149abc7b5b42df881b",
"hidden": false,
"name": "Andrea Zugarini",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:51:19.992Z",
"user": {
"_id": "6426a5c798a5be164d38ae44",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6426a5c798a5be164d38ae44/3crh3dtGViXB_xlDW_WeM.jpeg",
"fullname": "Andrea Zugarini",
"isPro": false,
"type": "user",
"user": "azugarini"
}
},
{
"_id": "678e6e149abc7b5b42df881c",
"hidden": false,
"name": "Bertie Vidgen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-17T09:22:35 |
MSTS: A Multimodal Safety Test Suite for Vision-Language Models
|
Vision-language models (VLMs), which process image and text inputs, are
increasingly integrated into chat assistants and other consumer AI
applications. Without proper safeguards, however, VLMs may give harmful advice
(e.g. how to self-harm) or encourage unsafe behaviours (e.g. to consume drugs).
Despite these clear hazards, little work so far has evaluated VLM safety and
the novel risks created by multimodal inputs. To address this gap, we introduce
MSTS, a Multimodal Safety Test Suite for VLMs. MSTS comprises 400 test prompts
across 40 fine-grained hazard categories. Each test prompt consists of a text
and an image that only in combination reveal their full unsafe meaning. With
MSTS, we find clear safety issues in several open VLMs. We also find some VLMs
to be safe by accident, meaning that they are safe because they fail to
understand even simple test prompts. We translate MSTS into ten languages,
showing non-English prompts to increase the rate of unsafe model responses. We
also show models to be safer when tested with text only rather than multimodal
prompts. Finally, we explore the automation of VLM safety assessments, finding
even the best safety classifiers to be lacking.
| 8 |
678e6e159abc7b5b42df8896
| null | null |
|
2025-01-22T01:03:06.008000 |
Go-with-the-Flow: Motion-Controllable Video Diffusion Models Using Real-Time Warped Noise
| 3 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| true | null |
2501.08331
|
[
{
"_id": "679086696d5aed184a333663",
"hidden": false,
"name": "Ryan Burgert",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:54:28.795Z",
"user": {
"_id": "62f69fe504e5e02f82aef7fb",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62f69fe504e5e02f82aef7fb/m4CA0XlTKDGIg_WQVmGdv.png",
"fullname": "Ryan Burgert",
"isPro": false,
"type": "user",
"user": "OneOverZero"
}
},
{
"_id": "679086696d5aed184a333664",
"hidden": false,
"name": "Yuancheng Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:54:51.597Z",
"user": {
"_id": "6505e2cdad3134ed7e63c8d0",
"avatarUrl": "/avatars/f0e5dcf5a3deb96d2c9cb50b4433cc13.svg",
"fullname": "Yuancheng Xu",
"isPro": false,
"type": "user",
"user": "YuanchengXu"
}
},
{
"_id": "679086696d5aed184a333665",
"hidden": false,
"name": "Wenqi Xian",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:54:58.956Z",
"user": {
"_id": "634ada2429bfb4082451eb7d",
"avatarUrl": "/avatars/be697236b58b3451c7d1aeeaf3f3c43a.svg",
"fullname": "Wenqi Xian",
"isPro": false,
"type": "user",
"user": "wendy5823"
}
},
{
"_id": "679086696d5aed184a333666",
"hidden": false,
"name": "Oliver Pilarski",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679086696d5aed184a333667",
"hidden": false,
"name": "Pascal Clausen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:55:20.961Z",
"user": {
"_id": "66f6dfae7024ed3df984b9da",
"avatarUrl": "/avatars/7ce396a8bcee0a26d5634ea3ca2a1dbb.svg",
"fullname": "Pascal Clausen",
"isPro": false,
"type": "user",
"user": "pascalclausen"
}
},
{
"_id": "679086696d5aed184a333668",
"hidden": false,
"name": "Mingming He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679086696d5aed184a333669",
"hidden": false,
"name": "Li Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679086696d5aed184a33366a",
"hidden": false,
"name": "Yitong Deng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679086696d5aed184a33366b",
"hidden": false,
"name": "Lingxiao Li",
"status": "extracted_pending",
"statusLastChangedAt": "2025-01-22T05:47:27.544Z",
"user": {
"_id": "66cb7dfcc5c61e4baa96469d",
"avatarUrl": "/avatars/6cab6e06164371778d15e4a1a3278eac.svg",
"fullname": "Lingxiao Li",
"isPro": false,
"type": "user",
"user": "lingxiaol"
}
},
{
"_id": "679086696d5aed184a33366c",
"hidden": false,
"name": "Mohsen Mousavi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679086696d5aed184a33366d",
"hidden": false,
"name": "Michael Ryoo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679086696d5aed184a33366e",
"hidden": false,
"name": "Paul Debevec",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:56:57.475Z",
"user": {
"_id": "646555c7fafbb0cdb40968d9",
"avatarUrl": "/avatars/b6a915da510c5c405ad33354deb9b5ee.svg",
"fullname": "Paul Debevec",
"isPro": false,
"type": "user",
"user": "debevec"
}
},
{
"_id": "679086696d5aed184a33366f",
"hidden": false,
"name": "Ning Yu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-23T09:18:07.559Z",
"user": {
"_id": "6362bcbe8f43a912fc722969",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6362bcbe8f43a912fc722969/ktl2ePfpOIseqlIbuldNa.png",
"fullname": "Ning Yu",
"isPro": false,
"type": "user",
"user": "ningyu1991"
}
}
] | 2025-01-14T18:59:10 |
Go-with-the-Flow: Motion-Controllable Video Diffusion Models Using
Real-Time Warped Noise
|
Generative modeling aims to transform random noise into structured outputs.
In this work, we enhance video diffusion models by allowing motion control via
structured latent noise sampling. This is achieved by just a change in data: we
pre-process training videos to yield structured noise. Consequently, our method
is agnostic to diffusion model design, requiring no changes to model
architectures or training pipelines. Specifically, we propose a novel noise
warping algorithm, fast enough to run in real time, that replaces random
temporal Gaussianity with correlated warped noise derived from optical flow
fields, while preserving the spatial Gaussianity. The efficiency of our
algorithm enables us to fine-tune modern video diffusion base models using
warped noise with minimal overhead, and provide a one-stop solution for a wide
range of user-friendly motion control: local object motion control, global
camera movement control, and motion transfer. The harmonization between
temporal coherence and spatial Gaussianity in our warped noise leads to
effective motion control while maintaining per-frame pixel quality. Extensive
experiments and user studies demonstrate the advantages of our method, making
it a robust and scalable approach for controlling motion in video diffusion
models. Video results are available on our webpage:
https://vgenai-netflix-eyeline-research.github.io/Go-with-the-Flow. Source code
and model checkpoints are available on GitHub:
https://github.com/VGenAI-Netflix-Eyeline-Research/Go-with-the-Flow.
| 20 |
6790866f6d5aed184a333805
| null | null |
|
2025-01-22T00:57:27.738000 |
Video Depth Anything: Consistent Depth Estimation for Super-Long Videos
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.12375
|
[
{
"_id": "679069d37b150be8ddef0657",
"hidden": false,
"name": "Sili Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679069d37b150be8ddef0658",
"hidden": false,
"name": "Hengkai Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:53:23.214Z",
"user": {
"_id": "65cc35785495933ab0edc096",
"avatarUrl": "/avatars/2738e7f95e5929044fca74b218594d09.svg",
"fullname": "Hengkai Guo",
"isPro": false,
"type": "user",
"user": "guohk10"
}
},
{
"_id": "679069d37b150be8ddef0659",
"hidden": false,
"name": "Shengnan Zhu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:53:30.110Z",
"user": {
"_id": "678de89b94e4992bb9f2a303",
"avatarUrl": "/avatars/68dc6a556d0ba59cc769320dc7c60a54.svg",
"fullname": "Shengnan Zhu",
"isPro": false,
"type": "user",
"user": "Shane922"
}
},
{
"_id": "679069d37b150be8ddef065a",
"hidden": false,
"name": "Feihu Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:53:37.544Z",
"user": {
"_id": "674c3ac4e5965de680f691ed",
"avatarUrl": "/avatars/04085e2c2de91f466eecec1d2c9f89e5.svg",
"fullname": "feihuzhang",
"isPro": false,
"type": "user",
"user": "feihuzhang"
}
},
{
"_id": "679069d37b150be8ddef065b",
"hidden": false,
"name": "Zilong Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679069d37b150be8ddef065c",
"hidden": false,
"name": "Jiashi Feng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:54:06.632Z",
"user": {
"_id": "67298e44017b96a1d0101dc4",
"avatarUrl": "/avatars/1f8ed1a3e911e6a3021087b9371d284c.svg",
"fullname": "Jiashi Feng",
"isPro": false,
"type": "user",
"user": "jshfeng"
}
},
{
"_id": "679069d37b150be8ddef065d",
"hidden": false,
"name": "Bingyi Kang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:54:13.716Z",
"user": {
"_id": "647b5fef6a79fbf5e996c47c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/647b5fef6a79fbf5e996c47c/IkSMnDsCY_CyEFCiMDuxe.jpeg",
"fullname": "Bingyi Kang",
"isPro": false,
"type": "user",
"user": "bykang"
}
}
] | 2025-01-21T18:53:30 |
Video Depth Anything: Consistent Depth Estimation for Super-Long Videos
|
Depth Anything has achieved remarkable success in monocular depth estimation
with strong generalization ability. However, it suffers from temporal
inconsistency in videos, hindering its practical applications. Various methods
have been proposed to alleviate this issue by leveraging video generation
models or introducing priors from optical flow and camera poses. Nonetheless,
these methods are only applicable to short videos (< 10 seconds) and require a
trade-off between quality and computational efficiency. We propose Video Depth
Anything for high-quality, consistent depth estimation in super-long videos
(over several minutes) without sacrificing efficiency. We base our model on
Depth Anything V2 and replace its head with an efficient spatial-temporal head.
We design a straightforward yet effective temporal consistency loss by
constraining the temporal depth gradient, eliminating the need for additional
geometric priors. The model is trained on a joint dataset of video depth and
unlabeled images, similar to Depth Anything V2. Moreover, a novel
key-frame-based strategy is developed for long video inference. Experiments
show that our model can be applied to arbitrarily long videos without
compromising quality, consistency, or generalization ability. Comprehensive
evaluations on multiple video benchmarks demonstrate that our approach sets a
new state-of-the-art in zero-shot video depth estimation. We offer models of
different scales to support a range of scenarios, with our smallest model
capable of real-time performance at 30 FPS.
| 22 |
679069d57b150be8ddef06ef
| null | null |
|
2025-01-22T00:49:10.316000 |
EMO2: End-Effector Guided Audio-Driven Avatar Video Generation
| 4 |
{
"_id": "65df1f1ee98700500d4c289c",
"avatarUrl": "/avatars/be11bf61465df29ac997cc0fedad1cb9.svg",
"followerCount": 2,
"fullname": "qi wang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "lucaskingjade",
"type": "user"
}
| false | null |
2501.10687
|
[
{
"_id": "6790856e3b0a6384a4117d0e",
"hidden": false,
"name": "Linrui Tian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6790856e3b0a6384a4117d0f",
"hidden": false,
"name": "Siqi Hu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T16:01:58.376Z",
"user": {
"_id": "66f9af43e45f81e73c0327ea",
"avatarUrl": "/avatars/30976181ab540fb42a3377a2734a5793.svg",
"fullname": "Siqi Hu",
"isPro": false,
"type": "user",
"user": "Siqi-Hu"
}
},
{
"_id": "6790856e3b0a6384a4117d10",
"hidden": false,
"name": "Qi Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6790856e3b0a6384a4117d11",
"hidden": false,
"name": "Bang Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6790856e3b0a6384a4117d12",
"hidden": false,
"name": "Liefeng Bo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T16:02:08.503Z",
"user": {
"_id": "63d0cc736b985b0f25d0412c",
"avatarUrl": "/avatars/3eb8c79f9a7c4c819038ea7b04e323dd.svg",
"fullname": "Bo",
"isPro": false,
"type": "user",
"user": "Liefeng"
}
}
] | 2025-01-18T07:51:29 |
EMO2: End-Effector Guided Audio-Driven Avatar Video Generation
|
In this paper, we propose a novel audio-driven talking head method capable of
simultaneously generating highly expressive facial expressions and hand
gestures. Unlike existing methods that focus on generating full-body or
half-body poses, we investigate the challenges of co-speech gesture generation
and identify the weak correspondence between audio features and full-body
gestures as a key limitation. To address this, we redefine the task as a
two-stage process. In the first stage, we generate hand poses directly from
audio input, leveraging the strong correlation between audio signals and hand
movements. In the second stage, we employ a diffusion model to synthesize video
frames, incorporating the hand poses generated in the first stage to produce
realistic facial expressions and body movements. Our experimental results
demonstrate that the proposed method outperforms state-of-the-art approaches,
such as CyberHost and Vlogger, in terms of both visual quality and
synchronization accuracy. This work provides a new perspective on audio-driven
gesture generation and a robust framework for creating expressive and natural
talking head animations.
| 12 |
679085813b0a6384a41183f1
| null | null |
|
2025-01-22T00:37:32.486000 |
Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation
| 4 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.12202
|
[
{
"_id": "67908409416b83605450716a",
"hidden": false,
"name": "Zibo Zhao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:57:22.460Z",
"user": {
"_id": "62d8ce11c60d1450a1ed8795",
"avatarUrl": "/avatars/26f1ca693ad7106be0f2f469070d8500.svg",
"fullname": "zibo.zhao",
"isPro": false,
"type": "user",
"user": "cocacola"
}
},
{
"_id": "67908409416b83605450716b",
"hidden": false,
"name": "Zeqiang Lai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:57:34.241Z",
"user": {
"_id": "63044b89eedc089484c995ad",
"avatarUrl": "/avatars/011a5e595e61c4ece97c605713abe679.svg",
"fullname": "Zeqiang Lai",
"isPro": false,
"type": "user",
"user": "ZeqiangLai"
}
},
{
"_id": "67908409416b83605450716c",
"hidden": false,
"name": "Qingxiang Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450716d",
"hidden": false,
"name": "Yunfei Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450716e",
"hidden": false,
"name": "Haolin Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450716f",
"hidden": false,
"name": "Shuhui Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507170",
"hidden": false,
"name": "Yifei Feng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:58:31.352Z",
"user": {
"_id": "659b9c13417c3c3ecd0da903",
"avatarUrl": "/avatars/804ddae9402d70975eb8b47e9f85a4a5.svg",
"fullname": "yifei feng",
"isPro": false,
"type": "user",
"user": "scatyf3"
}
},
{
"_id": "67908409416b836054507171",
"hidden": false,
"name": "Mingxin Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507172",
"hidden": false,
"name": "Sheng Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507173",
"hidden": false,
"name": "Xianghui Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-23T09:18:09.348Z",
"user": {
"_id": "647d9e881a1fcad2fdbf4954",
"avatarUrl": "/avatars/92ee8727d5c9063d852d3537b7690843.svg",
"fullname": "SeanYoung",
"isPro": false,
"type": "user",
"user": "SeanYoungxh"
}
},
{
"_id": "67908409416b836054507174",
"hidden": false,
"name": "Huiwen Shi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:59:03.812Z",
"user": {
"_id": "67287a522ae45f363dd0ad43",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/67287a522ae45f363dd0ad43/H6eyuxxSk6a84PzRoYcIU.png",
"fullname": "huiwenshi",
"isPro": false,
"type": "user",
"user": "Huiwenshi"
}
},
{
"_id": "67908409416b836054507175",
"hidden": false,
"name": "Sicong Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507176",
"hidden": false,
"name": "Junta Wu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:59:38.923Z",
"user": {
"_id": "642183b9e61513ec1c63a040",
"avatarUrl": "/avatars/1c98a242175a0d77137ec814552329fc.svg",
"fullname": "Junta Wu",
"isPro": false,
"type": "user",
"user": "juntawu"
}
},
{
"_id": "67908409416b836054507177",
"hidden": false,
"name": "Yihang Lian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507178",
"hidden": false,
"name": "Fan Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507179",
"hidden": false,
"name": "Ruining Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450717a",
"hidden": false,
"name": "Zebin He",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T16:00:03.843Z",
"user": {
"_id": "64cce55c3576a06fa2731eb9",
"avatarUrl": "/avatars/10cfe4b1607cb36132ac1801ecb8b27c.svg",
"fullname": "ZebinHe",
"isPro": false,
"type": "user",
"user": "ZebinHe"
}
},
{
"_id": "67908409416b83605450717b",
"hidden": false,
"name": "Xinzhou Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T16:00:12.024Z",
"user": {
"_id": "6459c45b5a263a4c87b7c639",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/t3iK1wPBOP-h_kLBzZA_h.png",
"fullname": "Xinzhou Wang",
"isPro": false,
"type": "user",
"user": "zz7379"
}
},
{
"_id": "67908409416b83605450717c",
"hidden": false,
"name": "Jian Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450717d",
"hidden": false,
"name": "Xuhui Zuo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450717e",
"hidden": false,
"name": "Zhuo Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450717f",
"hidden": false,
"name": "Biwen Lei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507180",
"hidden": false,
"name": "Haohan Weng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507181",
"hidden": false,
"name": "Jing Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507182",
"hidden": false,
"name": "Yiling Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507183",
"hidden": false,
"name": "Xinhai Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507184",
"hidden": false,
"name": "Lixin Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507185",
"hidden": false,
"name": "Changrong Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507186",
"hidden": false,
"name": "Tianyu Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507187",
"hidden": false,
"name": "Lifu Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507188",
"hidden": false,
"name": "Jihong Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507189",
"hidden": false,
"name": "Meng Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450718a",
"hidden": false,
"name": "Liang Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450718b",
"hidden": false,
"name": "Yiwen Jia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450718c",
"hidden": false,
"name": "Yulin Cai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450718d",
"hidden": false,
"name": "Jiaao Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450718e",
"hidden": false,
"name": "Yixuan Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450718f",
"hidden": false,
"name": "Hao Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507190",
"hidden": false,
"name": "Zheng Ye",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507191",
"hidden": false,
"name": "Peng He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507192",
"hidden": false,
"name": "Runzhou Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507193",
"hidden": false,
"name": "Chao Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507194",
"hidden": false,
"name": "Yonghao Tan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507195",
"hidden": false,
"name": "Jie Xiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507196",
"hidden": false,
"name": "Yangyu Tao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507197",
"hidden": false,
"name": "Jianchen Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507198",
"hidden": false,
"name": "Jinbao Xue",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b836054507199",
"hidden": false,
"name": "Kai Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450719a",
"hidden": false,
"name": "Chongqing Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450719b",
"hidden": false,
"name": "Xinming Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450719c",
"hidden": false,
"name": "Zhichao Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450719d",
"hidden": false,
"name": "Lei Qin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450719e",
"hidden": false,
"name": "Jianbing Peng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b83605450719f",
"hidden": false,
"name": "Zhan Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b8360545071a0",
"hidden": false,
"name": "Minghui Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b8360545071a1",
"hidden": false,
"name": "Xipeng Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b8360545071a2",
"hidden": false,
"name": "Lin Niu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b8360545071a3",
"hidden": false,
"name": "Paige Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b8360545071a4",
"hidden": false,
"name": "Yingkai Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b8360545071a5",
"hidden": false,
"name": "Haozhao Kuang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b8360545071a6",
"hidden": false,
"name": "Zhongyi Fan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b8360545071a7",
"hidden": false,
"name": "Xu Zheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b8360545071a8",
"hidden": false,
"name": "Weihao Zhuang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b8360545071a9",
"hidden": false,
"name": "YingPing He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b8360545071aa",
"hidden": false,
"name": "Tian Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b8360545071ab",
"hidden": false,
"name": "Yong Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b8360545071ac",
"hidden": false,
"name": "Di Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b8360545071ad",
"hidden": false,
"name": "Yuhong Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b8360545071ae",
"hidden": false,
"name": "Jie Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b8360545071af",
"hidden": false,
"name": "Jingwei Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67908409416b8360545071b0",
"hidden": false,
"name": "Chunchao Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-21T15:16:54 |
Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D
Assets Generation
|
We present Hunyuan3D 2.0, an advanced large-scale 3D synthesis system for
generating high-resolution textured 3D assets. This system includes two
foundation components: a large-scale shape generation model -- Hunyuan3D-DiT,
and a large-scale texture synthesis model -- Hunyuan3D-Paint. The shape
generative model, built on a scalable flow-based diffusion transformer, aims to
create geometry that properly aligns with a given condition image, laying a
solid foundation for downstream applications. The texture synthesis model,
benefiting from strong geometric and diffusion priors, produces high-resolution
and vibrant texture maps for either generated or hand-crafted meshes.
Furthermore, we build Hunyuan3D-Studio -- a versatile, user-friendly production
platform that simplifies the re-creation process of 3D assets. It allows both
professional and amateur users to manipulate or even animate their meshes
efficiently. We systematically evaluate our models, showing that Hunyuan3D 2.0
outperforms previous state-of-the-art models, including both open-source and
closed-source models, in geometry details, condition alignment, and texture
quality. Hunyuan3D 2.0 is publicly released in order to fill the gaps
in the open-source 3D community for large-scale foundation generative models.
The code and pre-trained weights of our models are available at:
https://github.com/Tencent/Hunyuan3D-2
| 35 |
6790840d416b8360545072a7
| null | null |
|
2025-01-22T00:20:57.292000 |
Agent-R: Training Language Model Agents to Reflect via Iterative Self-Training
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.11425
|
[
{
"_id": "679080298ad1d8203a994f7f",
"hidden": false,
"name": "Siyu Yuan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T11:00:12.151Z",
"user": {
"_id": "62d62b333bf5e059f7d2b286",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1668513815771-62d62b333bf5e059f7d2b286.jpeg",
"fullname": "Siyu Yuan",
"isPro": false,
"type": "user",
"user": "siyuyuan"
}
},
{
"_id": "679080298ad1d8203a994f80",
"hidden": false,
"name": "Zehui Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T11:00:20.946Z",
"user": {
"_id": "64892d31cbda0d1cdb956897",
"avatarUrl": "/avatars/3cdafe03a8295124636347d15a099aaf.svg",
"fullname": "Zehui Chen",
"isPro": false,
"type": "user",
"user": "lovesnowbest"
}
},
{
"_id": "679080298ad1d8203a994f81",
"hidden": false,
"name": "Zhiheng Xi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T14:04:23.785Z",
"user": {
"_id": "653a6e5cae155b92bae77b74",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/653a6e5cae155b92bae77b74/TA5FWKAUsB249ux4MzD_R.jpeg",
"fullname": "Zhiheng Xi",
"isPro": false,
"type": "user",
"user": "WooooDyy"
}
},
{
"_id": "679080298ad1d8203a994f82",
"hidden": false,
"name": "Junjie Ye",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-22T10:08:02.519Z",
"user": {
"_id": "66384be673c2c55f2ded89fa",
"avatarUrl": "/avatars/1d8721074f0f51fab405f81474f2035f.svg",
"fullname": "Junjie Ye",
"isPro": false,
"type": "user",
"user": "Junjie-Ye"
}
},
{
"_id": "679080298ad1d8203a994f83",
"hidden": false,
"name": "Zhengyin Du",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679080298ad1d8203a994f84",
"hidden": false,
"name": "Jiecao Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T14:04:37.918Z",
"user": {
"_id": "66df70fe5a0c5910d663160d",
"avatarUrl": "/avatars/980ca32bd0049ef5bbf002e7dc9f911c.svg",
"fullname": "jiecao.chen",
"isPro": false,
"type": "user",
"user": "xmerge123"
}
}
] | 2025-01-20T11:46:04 |
Agent-R: Training Language Model Agents to Reflect via Iterative
Self-Training
|
Large Language Model (LLM) agents are increasingly pivotal for addressing
complex tasks in interactive environments. Existing work mainly focuses on
enhancing performance through behavior cloning from stronger experts, yet such
approaches often falter in real-world applications, mainly due to the inability
to recover from errors. Moreover, step-level critique data is difficult and
expensive to collect. Automating and dynamically constructing self-critique
datasets is thus crucial to empowering models with intelligent agent
capabilities. In this work, we propose an iterative self-training framework,
Agent-R, that enables language Agents to Reflect on the fly. Unlike traditional
methods that reward or penalize actions based on correctness, Agent-R leverages
MCTS to construct training data that recover correct trajectories from
erroneous ones. A key challenge of agent reflection lies in the necessity for
timely revision rather than waiting until the end of a rollout. To address
this, we introduce a model-guided critique construction mechanism: the actor
model identifies the first error step (within its current capability) in a
failed trajectory. Starting from that step, we splice the trajectory with the adjacent correct
path, which shares the same parent node in the tree. This strategy enables the
model to learn reflection based on its current policy, therefore yielding
better learning efficiency. To further explore the scalability of this
self-improvement paradigm, we investigate iterative refinement of both error
correction capabilities and dataset construction. Our findings demonstrate that
Agent-R continuously improves the model's ability to recover from errors and
enables timely error correction. Experiments on three interactive environments
show that Agent-R effectively equips agents to correct erroneous actions while
avoiding loops, achieving superior performance compared to baseline methods
(+5.59%).
| 92 |
6790802b8ad1d8203a994fc7
| null | null |
|
2025-01-22T00:17:48.799000 |
Mobile-Agent-E: Self-Evolving Mobile Assistant for Complex Tasks
| 2 |
{
"_id": "645b10e80c73ea27d13f7aca",
"avatarUrl": "/avatars/95e565306472a15067440b5b43e07a6f.svg",
"followerCount": 3,
"fullname": "xuhaiyang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "xhyandwyy",
"type": "user"
}
| true | null |
2501.11733
|
[
{
"_id": "6790791b203b95acf96ebf45",
"hidden": false,
"name": "Zhenhailong Wang",
"status": "extracted_pending",
"statusLastChangedAt": "2025-01-22T04:50:40.468Z",
"user": {
"_id": "628d7265db4cd1d1717c884f",
"avatarUrl": "/avatars/dff2a3dd10d84b4a73fa486402de7219.svg",
"fullname": "Zhenhailong Wang",
"isPro": false,
"type": "user",
"user": "mikewang"
}
},
{
"_id": "6790791b203b95acf96ebf46",
"hidden": false,
"name": "Haiyang Xu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-22T10:08:13.360Z",
"user": {
"_id": "645b10e80c73ea27d13f7aca",
"avatarUrl": "/avatars/95e565306472a15067440b5b43e07a6f.svg",
"fullname": "xuhaiyang",
"isPro": false,
"type": "user",
"user": "xhyandwyy"
}
},
{
"_id": "6790791b203b95acf96ebf47",
"hidden": false,
"name": "Junyang Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:47:37.467Z",
"user": {
"_id": "6438f6415aa69077ffb16942",
"avatarUrl": "/avatars/c83dbd3e10e88db97c2a86092bad5917.svg",
"fullname": "Junyang Wang",
"isPro": false,
"type": "user",
"user": "junyangwang0410"
}
},
{
"_id": "6790791b203b95acf96ebf48",
"hidden": false,
"name": "Xi Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6790791b203b95acf96ebf49",
"hidden": false,
"name": "Ming Yan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6790791b203b95acf96ebf4a",
"hidden": false,
"name": "Ji Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6790791b203b95acf96ebf4b",
"hidden": false,
"name": "Fei Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6790791b203b95acf96ebf4c",
"hidden": false,
"name": "Heng Ji",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-20T20:35:46 |
Mobile-Agent-E: Self-Evolving Mobile Assistant for Complex Tasks
|
Smartphones have become indispensable in modern life, yet navigating complex
tasks on mobile devices often remains frustrating. Recent advancements in large
multimodal model (LMM)-based mobile agents have demonstrated the ability to
perceive and act in mobile environments. However, current approaches face
significant limitations: they fall short in addressing real-world human needs,
struggle with reasoning-intensive and long-horizon tasks, and lack mechanisms
to learn and improve from prior experiences. To overcome these challenges, we
introduce Mobile-Agent-E, a hierarchical multi-agent framework capable of
self-evolution through past experience. By hierarchical, we mean an explicit
separation of high-level planning and low-level action execution. The framework
comprises a Manager, responsible for devising overall plans by breaking down
complex tasks into subgoals, and four subordinate agents--Perceptor, Operator,
Action Reflector, and Notetaker--which handle fine-grained visual perception,
immediate action execution, error verification, and information aggregation,
respectively. Mobile-Agent-E also features a novel self-evolution module which
maintains a persistent long-term memory comprising Tips and Shortcuts. Tips are
general guidance and lessons learned from prior tasks on how to effectively
interact with the environment. Shortcuts are reusable, executable sequences of
atomic operations tailored for specific subroutines. The inclusion of Tips and
Shortcuts facilitates continuous refinement in performance and efficiency.
Alongside this framework, we introduce Mobile-Eval-E, a new benchmark featuring
complex mobile tasks requiring long-horizon, multi-app interactions. Empirical
results show that Mobile-Agent-E achieves a 22% absolute improvement over
previous state-of-the-art approaches across three foundation model backbones.
Project page: https://x-plug.github.io/MobileAgent.
| 28 |
67907920203b95acf96ec126
| null | null |
|
2025-01-22T00:11:18.322000 |
Learn-by-interact: A Data-Centric Framework for Self-Adaptive Agents in Realistic Environments
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.10893
|
[
{
"_id": "67907dd5e1d8fc832b3e7b0f",
"hidden": false,
"name": "Hongjin Su",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:51:36.858Z",
"user": {
"_id": "632d53951538d4798a73c849",
"avatarUrl": "/avatars/b7d0a895e669bcd1303c4716b5401c36.svg",
"fullname": "Hongjin SU",
"isPro": false,
"type": "user",
"user": "multi-train"
}
},
{
"_id": "67907dd5e1d8fc832b3e7b10",
"hidden": false,
"name": "Ruoxi Sun",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:51:57.105Z",
"user": {
"_id": "653093ad471261a41241e048",
"avatarUrl": "/avatars/54263ce75b102b81211d27b696920870.svg",
"fullname": "Ruoxi Sun",
"isPro": false,
"type": "user",
"user": "brycesun"
}
},
{
"_id": "67907dd5e1d8fc832b3e7b11",
"hidden": false,
"name": "Jinsung Yoon",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:52:16.664Z",
"user": {
"_id": "64da5c05322a5774e085339a",
"avatarUrl": "/avatars/ef6a4325622799d474af778b92232555.svg",
"fullname": "Jinsung Yoon",
"isPro": false,
"type": "user",
"user": "jsyoon0823"
}
},
{
"_id": "67907dd5e1d8fc832b3e7b12",
"hidden": false,
"name": "Pengcheng Yin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:52:34.099Z",
"user": {
"_id": "65024921fa0dccbd859e55e9",
"avatarUrl": "/avatars/1f3c31c21ca692c2a63cf3437eac2507.svg",
"fullname": "Pengcheng Yin",
"isPro": false,
"type": "user",
"user": "pchyin"
}
},
{
"_id": "67907dd5e1d8fc832b3e7b13",
"hidden": false,
"name": "Tao Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67907dd5e1d8fc832b3e7b14",
"hidden": false,
"name": "Sercan Ö. Arık",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-18T22:34:41 |
Learn-by-interact: A Data-Centric Framework for Self-Adaptive Agents in
Realistic Environments
|
Autonomous agents powered by large language models (LLMs) have the potential
to enhance human capabilities, assisting with digital tasks from sending emails
to performing data analysis. The abilities of existing LLMs at such tasks are
often hindered by the lack of high-quality agent data from the corresponding
environments they interact with. We propose Learn-by-interact, a data-centric
framework to adapt LLM agents to any given environments without human
annotations. Learn-by-interact synthesizes trajectories of agent-environment
interactions based on documentation, and constructs instructions by
summarizing or abstracting the interaction histories, a process called backward
construction. We assess the quality of our synthetic data by using them in both
training-based scenarios and training-free in-context learning (ICL), where we
craft innovative retrieval approaches optimized for agents. Extensive
experiments on SWE-bench, WebArena, OSWorld and Spider2-V spanning across
realistic coding, web, and desktop environments show the effectiveness of
Learn-by-interact in various downstream agentic tasks -- baseline results are
improved by up to 12.2% for ICL with Claude-3.5 and 19.5% for training with
Codestral-22B. We further demonstrate the critical role of backward
construction, which provides up to 14.0\% improvement for training. Our
ablation studies demonstrate the efficiency provided by our synthesized data in
ICL and the superiority of our retrieval pipeline over alternative approaches
like conventional retrieval-augmented generation (RAG). We expect that
Learn-by-interact will serve as a foundation for agent data synthesis as LLMs
are increasingly deployed in real-world environments.
| 24 |
67907dd9e1d8fc832b3e7c36
| null | null |
|
2025-01-21T23:51:53.248000 |
UI-TARS: Pioneering Automated GUI Interaction with Native Agents
| 5 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| true | null |
2501.12326
|
[
{
"_id": "679078f902b4d94b0f2347c1",
"hidden": false,
"name": "Yujia Qin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:34:29.986Z",
"user": {
"_id": "643f37cce9d063936912048b",
"avatarUrl": "/avatars/25822ea5676a79b2e1ddf08d5fc2226c.svg",
"fullname": "Yujia Qin",
"isPro": false,
"type": "user",
"user": "YujiaHi"
}
},
{
"_id": "679078f902b4d94b0f2347c2",
"hidden": false,
"name": "Yining Ye",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:34:38.046Z",
"user": {
"_id": "636a0803fbc31ee68fed16bd",
"avatarUrl": "/avatars/331073d0fee901b2b9d93287fabe64fc.svg",
"fullname": "yining ye",
"isPro": false,
"type": "user",
"user": "yeyn2001"
}
},
{
"_id": "679078f902b4d94b0f2347c3",
"hidden": false,
"name": "Junjie Fang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347c4",
"hidden": false,
"name": "Haoming Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-23T09:18:11.031Z",
"user": {
"_id": "678a19ba39c63f336d24cc27",
"avatarUrl": "/avatars/5bec449236ac7d4a0936ef0dd4046761.svg",
"fullname": "Haoming Wang",
"isPro": false,
"type": "user",
"user": "MingComplex"
}
},
{
"_id": "679078f902b4d94b0f2347c5",
"hidden": false,
"name": "Shihao Liang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:35:23.362Z",
"user": {
"_id": "64181962db24526c7c9b519a",
"avatarUrl": "/avatars/168af397b5e26d0be2652da8a46511f0.svg",
"fullname": "liang",
"isPro": false,
"type": "user",
"user": "shihaoliang"
}
},
{
"_id": "679078f902b4d94b0f2347c6",
"hidden": false,
"name": "Shizuo Tian",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:35:32.442Z",
"user": {
"_id": "63ede3b7518f6e69a3572c54",
"avatarUrl": "/avatars/86e3afd3a8344735ef0b186a313bca2c.svg",
"fullname": "Shizuo Tian",
"isPro": false,
"type": "user",
"user": "BlitherBoom"
}
},
{
"_id": "679078f902b4d94b0f2347c7",
"hidden": false,
"name": "Junda Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:35:41.058Z",
"user": {
"_id": "678b879c71d0276e3c3de55a",
"avatarUrl": "/avatars/3eb78e8f4bd7ea6e566fdb89a0d6ccb5.svg",
"fullname": "Junda Zhang",
"isPro": false,
"type": "user",
"user": "junda-zhang"
}
},
{
"_id": "679078f902b4d94b0f2347c8",
"hidden": false,
"name": "Jiahao Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347c9",
"hidden": false,
"name": "Yunxin Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:36:20.394Z",
"user": {
"_id": "62fdb01bc1588e1d4c6c1a7c",
"avatarUrl": "/avatars/bd03085995b1c34e0ac8a845cf2c4e83.svg",
"fullname": "Yunxin Li",
"isPro": false,
"type": "user",
"user": "YunxinLi"
}
},
{
"_id": "679078f902b4d94b0f2347ca",
"hidden": false,
"name": "Shijue Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:36:28.456Z",
"user": {
"_id": "64ce05c631c655ff8a2e183c",
"avatarUrl": "/avatars/f2de7f8a1348b05f46946085e3e9718e.svg",
"fullname": "Shijue Huang",
"isPro": false,
"type": "user",
"user": "JoeYing"
}
},
{
"_id": "679078f902b4d94b0f2347cb",
"hidden": false,
"name": "Wanjun Zhong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:36:36.061Z",
"user": {
"_id": "643f956635e2b54a42e7feba",
"avatarUrl": "/avatars/c6185f81ae8499ae866ad451c1cbf43b.svg",
"fullname": "Wanjun Zhong",
"isPro": false,
"type": "user",
"user": "WanjunZhong"
}
},
{
"_id": "679078f902b4d94b0f2347cc",
"hidden": false,
"name": "Kuanye Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347cd",
"hidden": false,
"name": "Jiale Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347ce",
"hidden": false,
"name": "Yu Miao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347cf",
"hidden": false,
"name": "Woyu Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347d0",
"hidden": false,
"name": "Longxiang Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347d1",
"hidden": false,
"name": "Xu Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347d2",
"hidden": false,
"name": "Qianli Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347d3",
"hidden": false,
"name": "Jingyu Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:38:27.051Z",
"user": {
"_id": "64828d5a01de428ee8abe522",
"avatarUrl": "/avatars/6e59f550988ff8ef6239cb5d41dc5e71.svg",
"fullname": "li",
"isPro": false,
"type": "user",
"user": "jingyuli"
}
},
{
"_id": "679078f902b4d94b0f2347d4",
"hidden": false,
"name": "Xiaojun Xiao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:38:35.794Z",
"user": {
"_id": "64f466ff32285228200da712",
"avatarUrl": "/avatars/2f65939da5cc197784303878f826a0f8.svg",
"fullname": "Xiaojun Xiao",
"isPro": false,
"type": "user",
"user": "Dante1018"
}
},
{
"_id": "679078f902b4d94b0f2347d5",
"hidden": false,
"name": "Kai Cai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347d6",
"hidden": false,
"name": "Chuang Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347d7",
"hidden": false,
"name": "Yaowei Zheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347d8",
"hidden": false,
"name": "Chaolin Jin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347d9",
"hidden": false,
"name": "Chen Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347da",
"hidden": false,
"name": "Xiao Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347db",
"hidden": false,
"name": "Minchao Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:44:07.899Z",
"user": {
"_id": "64611d34846a6c8c8302885e",
"avatarUrl": "/avatars/f4bfc0705aead7b6255c53fe1ef569b2.svg",
"fullname": "minchao wang",
"isPro": false,
"type": "user",
"user": "MinChaos"
}
},
{
"_id": "679078f902b4d94b0f2347dc",
"hidden": false,
"name": "Haoli Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347dd",
"hidden": false,
"name": "Zhaojian Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347de",
"hidden": false,
"name": "Haihua Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347df",
"hidden": false,
"name": "Haifeng Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347e0",
"hidden": false,
"name": "Feng Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347e1",
"hidden": false,
"name": "Tao Peng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347e2",
"hidden": false,
"name": "Xin Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679078f902b4d94b0f2347e3",
"hidden": false,
"name": "Guang Shi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:38:51.568Z",
"user": {
"_id": "63561bfdbcf42eac0b8f13cf",
"avatarUrl": "/avatars/6c126e2681b930e0bad3255358fc6e48.svg",
"fullname": "Guang Shi",
"isPro": false,
"type": "user",
"user": "anyuzx"
}
}
] | 2025-01-21T17:48:10 |
UI-TARS: Pioneering Automated GUI Interaction with Native Agents
|
This paper introduces UI-TARS, a native GUI agent model that perceives only
screenshots as input and performs human-like interactions (e.g., keyboard
and mouse operations). Unlike prevailing agent frameworks that depend on
heavily wrapped commercial models (e.g., GPT-4o) with expert-crafted prompts
and workflows, UI-TARS is an end-to-end model that outperforms these
sophisticated frameworks. Experiments demonstrate its superior performance:
UI-TARS achieves SOTA performance in 10+ GUI agent benchmarks evaluating
perception, grounding, and GUI task execution. Notably, in the OSWorld
benchmark, UI-TARS achieves scores of 24.6 with 50 steps and 22.7 with 15
steps, outperforming Claude (22.0 and 14.9 respectively). In AndroidWorld,
UI-TARS achieves 46.6, surpassing GPT-4o (34.5). UI-TARS incorporates several
key innovations: (1) Enhanced Perception: leveraging a large-scale dataset of
GUI screenshots for context-aware understanding of UI elements and precise
captioning; (2) Unified Action Modeling, which standardizes actions into a
unified space across platforms and achieves precise grounding and interaction
through large-scale action traces; (3) System-2 Reasoning, which incorporates
deliberate reasoning into multi-step decision making, involving multiple
reasoning patterns such as task decomposition, reflective thinking, milestone
recognition, etc.; and (4) Iterative Training with Reflective Online Traces, which
addresses the data bottleneck by automatically collecting, filtering, and
reflectively refining new interaction traces on hundreds of virtual machines.
Through iterative training and reflection tuning, UI-TARS continuously learns
from its mistakes and adapts to unforeseen situations with minimal human
intervention. We also analyze the evolution path of GUI agents to guide the
further development of this domain.
| 51 |
679078ff02b4d94b0f2348e0
| null | null |
|
2025-01-21T23:42:44.747000 |
Reasoning Language Models: A Blueprint
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.11223
|
[
{
"_id": "6790772b8d7df822f1fb4405",
"hidden": false,
"name": "Maciej Besta",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:48:29.913Z",
"user": {
"_id": "6613cdc25736e7f44f94df65",
"avatarUrl": "/avatars/f5b398b4da03d7833e20ddb3ce4211be.svg",
"fullname": "Maciej Besta",
"isPro": false,
"type": "user",
"user": "bestam"
}
},
{
"_id": "6790772b8d7df822f1fb4406",
"hidden": false,
"name": "Julia Barth",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6790772b8d7df822f1fb4407",
"hidden": false,
"name": "Eric Schreiber",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:48:45.137Z",
"user": {
"_id": "6712941f745634a65d916056",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/T5RZTWwb4FLZESYB5tu6m.png",
"fullname": "Eric Schreiber",
"isPro": false,
"type": "user",
"user": "eschreibe1"
}
},
{
"_id": "6790772b8d7df822f1fb4408",
"hidden": false,
"name": "Ales Kubicek",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:48:52.532Z",
"user": {
"_id": "64baf20d12d00c4589bb12f7",
"avatarUrl": "/avatars/2dbc10d369788c0ea048e1be97f0c5e6.svg",
"fullname": "Ales Kubicek",
"isPro": false,
"type": "user",
"user": "aleskubicek"
}
},
{
"_id": "6790772b8d7df822f1fb4409",
"hidden": false,
"name": "Afonso Catarino",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:49:00.804Z",
"user": {
"_id": "66fc32aa787008467cfe20cb",
"avatarUrl": "/avatars/b1c77ce8c7deaacf546a264467078673.svg",
"fullname": "Afonso Catarino",
"isPro": false,
"type": "user",
"user": "AfonsoC"
}
},
{
"_id": "6790772b8d7df822f1fb440a",
"hidden": false,
"name": "Robert Gerstenberger",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:49:11.846Z",
"user": {
"_id": "65a91420c46ce42ef5da96af",
"avatarUrl": "/avatars/a9fa3a7973d0030292b1e23172112a1e.svg",
"fullname": "Robert Gerstenberger",
"isPro": false,
"type": "user",
"user": "rgersten"
}
},
{
"_id": "6790772b8d7df822f1fb440b",
"hidden": false,
"name": "Piotr Nyczyk",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:49:18.866Z",
"user": {
"_id": "63f89b579f87cc3e645d96f9",
"avatarUrl": "/avatars/116be80348aa8f8be95b6ea774ecb65d.svg",
"fullname": "Piotr Nyczyk",
"isPro": false,
"type": "user",
"user": "pnyczyk"
}
},
{
"_id": "6790772b8d7df822f1fb440c",
"hidden": false,
"name": "Patrick Iff",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6790772b8d7df822f1fb440d",
"hidden": false,
"name": "Yueling Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:49:35.973Z",
"user": {
"_id": "641c07c4bbdbe642a79914df",
"avatarUrl": "/avatars/5c89ee5f5cdf93e1d5ead6e46ae6c774.svg",
"fullname": "Yueling Li",
"isPro": false,
"type": "user",
"user": "liy140"
}
},
{
"_id": "6790772b8d7df822f1fb440e",
"hidden": false,
"name": "Sam Houliston",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:49:43.387Z",
"user": {
"_id": "66af74ddd59c09785e02d1e0",
"avatarUrl": "/avatars/df0fec7228c35052b88cb5905bea809e.svg",
"fullname": "Sam Houliston",
"isPro": false,
"type": "user",
"user": "samhouliston"
}
},
{
"_id": "6790772b8d7df822f1fb440f",
"hidden": false,
"name": "Tomasz Sternal",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:49:51.528Z",
"user": {
"_id": "6535b041b66f4bf689267d91",
"avatarUrl": "/avatars/6ef4a183ff08ec3f3595f0866f3129ac.svg",
"fullname": "Tomasz Sternal",
"isPro": false,
"type": "user",
"user": "tsternal"
}
},
{
"_id": "6790772b8d7df822f1fb4410",
"hidden": false,
"name": "Marcin Copik",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6790772b8d7df822f1fb4411",
"hidden": false,
"name": "Grzegorz Kwaśniewski",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6790772b8d7df822f1fb4412",
"hidden": false,
"name": "Jürgen Müller",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6790772b8d7df822f1fb4413",
"hidden": false,
"name": "Łukasz Flis",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6790772b8d7df822f1fb4414",
"hidden": false,
"name": "Hannes Eberhard",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:50:47.492Z",
"user": {
"_id": "62f80cbc04de855c35e32fdb",
"avatarUrl": "/avatars/ab2600b96fe8b1787ad1eddaa45ad9ae.svg",
"fullname": "Hannes Eberhard",
"isPro": false,
"type": "user",
"user": "HannesE"
}
},
{
"_id": "6790772b8d7df822f1fb4415",
"hidden": false,
"name": "Hubert Niewiadomski",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6790772b8d7df822f1fb4416",
"hidden": false,
"name": "Torsten Hoefler",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-20T02:16:19 |
Reasoning Language Models: A Blueprint
|
Reasoning language models (RLMs), also known as Large Reasoning Models
(LRMs), such as OpenAI's o1 and o3, DeepSeek-V3, and Alibaba's QwQ, have
redefined AI's problem-solving capabilities by extending large language models
(LLMs) with advanced reasoning mechanisms. Yet, their high costs, proprietary
nature, and complex architectures - uniquely combining Reinforcement Learning
(RL), search heuristics, and LLMs - present accessibility and scalability
challenges. To address these, we propose a comprehensive blueprint that
organizes RLM components into a modular framework, based on a survey and
analysis of all RLM works. This blueprint incorporates diverse reasoning
structures (chains, trees, graphs, and nested forms), reasoning strategies
(e.g., Monte Carlo Tree Search, Beam Search), RL concepts (policy, value models
and others), and supervision schemes (Output-Based and Process-Based
Supervision). We also provide detailed mathematical formulations and
algorithmic specifications to simplify RLM implementation. By showing how
schemes like LLaMA-Berry, QwQ, Journey Learning, and Graph of Thoughts fit as
special cases, we demonstrate the blueprint's versatility and unifying
potential. To illustrate its utility, we introduce x1, a modular implementation
for rapid RLM prototyping and experimentation. Using x1 and a literature
review, we provide key insights, such as multi-phase training for policy and
value models, and the importance of familiar training distributions. Finally,
we outline how RLMs can integrate with a broader LLM ecosystem, including tools
and databases. Our work demystifies RLM construction, democratizes advanced
reasoning capabilities, and fosters innovation, aiming to mitigate the gap
between "rich AI" and "poor AI" by lowering barriers to RLM development and
experimentation.
| 32 |
6790772d8d7df822f1fb4493
| null | null |
|
2025-01-21T23:41:48.239000 |
GPS as a Control Signal for Image Generation
| 2 |
{
"_id": "645ab0b7c266796265baefa4",
"avatarUrl": "/avatars/bdac661996b63c4b2a56881707afa01f.svg",
"followerCount": null,
"fullname": "Chao Feng",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "chfeng",
"type": "user"
}
| true | null |
2501.12390
|
[
{
"_id": "67906d622ae55818ddfd0d93",
"hidden": false,
"name": "Chao Feng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-22T10:08:21.429Z",
"user": {
"_id": "645ab0b7c266796265baefa4",
"avatarUrl": "/avatars/bdac661996b63c4b2a56881707afa01f.svg",
"fullname": "Chao Feng",
"isPro": false,
"type": "user",
"user": "chfeng"
}
},
{
"_id": "67906d622ae55818ddfd0d94",
"hidden": false,
"name": "Ziyang Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67906d622ae55818ddfd0d95",
"hidden": false,
"name": "Aleksander Holynski",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T20:49:16.653Z",
"user": {
"_id": "62f6e2792e53c2efd33faa92",
"avatarUrl": "/avatars/85d94b4022577747b8d2d10a82c2f3c7.svg",
"fullname": "Aleksander Holynski",
"isPro": false,
"type": "user",
"user": "holynski"
}
},
{
"_id": "67906d622ae55818ddfd0d96",
"hidden": false,
"name": "Alexei A. Efros",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67906d622ae55818ddfd0d97",
"hidden": false,
"name": "Andrew Owens",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-21T18:59:46 |
GPS as a Control Signal for Image Generation
|
We show that the GPS tags contained in photo metadata provide a useful
control signal for image generation. We train GPS-to-image models and use them
for tasks that require a fine-grained understanding of how images vary within a
city. In particular, we train a diffusion model to generate images conditioned
on both GPS and text. The learned model generates images that capture the
distinctive appearance of different neighborhoods, parks, and landmarks. We
also extract 3D models from 2D GPS-to-image models through score distillation
sampling, using GPS conditioning to constrain the appearance of the
reconstruction from each viewpoint. Our evaluations suggest that our
GPS-conditioned models successfully learn to generate images that vary based on
location, and that GPS conditioning improves estimated 3D structure.
| 12 |
67906d682ae55818ddfd0f53
| null | null |
|
2025-01-21T23:27:52.660000 |
Demons in the Detail: On Implementing Load Balancing Loss for Training Specialized Mixture-of-Expert Models
| 2 |
{
"_id": "647ccbd6e07cf9bb2d485244",
"avatarUrl": "/avatars/e8915abaff04f6762247e196b7cf84df.svg",
"followerCount": 3,
"fullname": "Zihan Qiu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "QwQZh",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/647ccbd6e07cf9bb2d485244/ddUbQV_yVPwD6P0TSR5lu.png",
"https://cdn-uploads.huggingface.co/production/uploads/647ccbd6e07cf9bb2d485244/f7Q4QULppOygZlsYBUvY9.png",
"https://cdn-uploads.huggingface.co/production/uploads/647ccbd6e07cf9bb2d485244/9Jwx37bQkCjaWcccWbJ7b.png"
] |
2501.11873
|
[
{
"_id": "679071da11a3f67d8f498649",
"hidden": false,
"name": "Zihan Qiu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:32:39.315Z",
"user": {
"_id": "647ccbd6e07cf9bb2d485244",
"avatarUrl": "/avatars/e8915abaff04f6762247e196b7cf84df.svg",
"fullname": "Zihan Qiu",
"isPro": false,
"type": "user",
"user": "QwQZh"
}
},
{
"_id": "679071da11a3f67d8f49864a",
"hidden": false,
"name": "Zeyu Huang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-22T15:35:10.842Z",
"user": {
"_id": "64a5520dea3c5861303064b0",
"avatarUrl": "/avatars/7e618bcfc4d8d3291cf9bc8f2c4c1b15.svg",
"fullname": "Zeyu hUANG",
"isPro": false,
"type": "user",
"user": "zeroyhuang"
}
},
{
"_id": "679071da11a3f67d8f49864b",
"hidden": false,
"name": "Bo Zheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679071da11a3f67d8f49864c",
"hidden": false,
"name": "Kaiyue Wen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-22T20:49:51.765Z",
"user": {
"_id": "6690a44e76c0fa097f9f65c9",
"avatarUrl": "/avatars/0c6efa10889d1b6dd9a3f9985f4d2d97.svg",
"fullname": "Kaiyue Wen",
"isPro": false,
"type": "user",
"user": "KaiyueWen"
}
},
{
"_id": "679071da11a3f67d8f49864d",
"hidden": false,
"name": "Zekun Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:31:36.863Z",
"user": {
"_id": "656832dfbd65fd41ee7aa8cd",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/656832dfbd65fd41ee7aa8cd/HHkyetTqNq1wIBPipzjQA.jpeg",
"fullname": "Zekun Wang",
"isPro": false,
"type": "user",
"user": "kugwzk"
}
},
{
"_id": "679071da11a3f67d8f49864e",
"hidden": false,
"name": "Rui Men",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679071da11a3f67d8f49864f",
"hidden": false,
"name": "Ivan Titov",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:30:57.878Z",
"user": {
"_id": "666f6375623133f1ce79c021",
"avatarUrl": "/avatars/b57a457cff81d685fdd9eadee53445fc.svg",
"fullname": "Ivan Titov",
"isPro": false,
"type": "user",
"user": "Ivanchoo"
}
},
{
"_id": "679071da11a3f67d8f498650",
"hidden": false,
"name": "Dayiheng Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-22T10:08:15.815Z",
"user": {
"_id": "6434d4989bd5a84b5dd0b0f5",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6434d4989bd5a84b5dd0b0f5/0Elf9qbfG9Hkgypm9pTGm.jpeg",
"fullname": "Dayiheng Liu",
"isPro": false,
"type": "user",
"user": "Losin94"
}
},
{
"_id": "679071da11a3f67d8f498651",
"hidden": false,
"name": "Jingren Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:30:48.272Z",
"user": {
"_id": "602f88f5e8149a962412a667",
"avatarUrl": "/avatars/b78f0e583df8e5d5e3365934fe5f4900.svg",
"fullname": "Zhou",
"isPro": false,
"type": "user",
"user": "Jingren"
}
},
{
"_id": "679071da11a3f67d8f498652",
"hidden": false,
"name": "Junyang Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T15:30:37.427Z",
"user": {
"_id": "620760a26e3b7210c2ff1943",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/620760a26e3b7210c2ff1943/VC-rKqimF6yxGESNVlPoR.jpeg",
"fullname": "Junyang Lin",
"isPro": false,
"type": "user",
"user": "JustinLin610"
}
}
] | 2025-01-21T04:04:39 |
Demons in the Detail: On Implementing Load Balancing Loss for Training
Specialized Mixture-of-Expert Models
|
This paper revisits the implementation of
Load-balancing Loss (LBL) when training
Mixture-of-Experts (MoEs) models. Specifically, LBL for MoEs is defined as N_E
sum_{i=1}^{N_E} f_i p_i, where N_E is the total number of experts, f_i
represents the frequency of expert i being selected, and p_i denotes the
average gating score of the expert i. Existing MoE training frameworks
usually employ the parallel training strategy so that f_i and the LBL are
calculated within a micro-batch and then averaged across parallel
groups. In essence, a micro-batch for training billion-scale LLMs normally
contains very few sequences. So, the micro-batch LBL is almost at the sequence
level, and the router is pushed to distribute tokens evenly within each
sequence. Under this strict constraint, even tokens from a domain-specific
sequence (e.g., code) are uniformly routed to all experts, thereby
inhibiting expert specialization. In this work, we propose calculating the LBL
over a global-batch to relax this constraint. Since a
global-batch contains far more diverse sequences than a micro-batch, this
encourages load balancing at the corpus level. Specifically, we introduce an
extra communication step to synchronize f_i across micro-batches and then use
it to calculate the LBL. Through experiments on training MoEs-based LLMs (up to
42.8B total parameters and 400B tokens), we surprisingly
find that the global-batch LBL strategy yields excellent performance gains in
both pre-training perplexity and downstream tasks. Our analysis reveals that
the global-batch LBL also greatly improves the domain specialization of MoE
experts.
| 63 |
679071db11a3f67d8f498680
| null | null |
|
2025-01-21T23:19:52.256000 |
MMVU: Measuring Expert-Level Multi-Discipline Video Understanding
| 2 |
{
"_id": "62f662bcc58915315c4eccea",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62f662bcc58915315c4eccea/zOAQLONfMP88zr70sxHK-.jpeg",
"followerCount": 8,
"fullname": "Yilun",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "yilunzhao",
"type": "user"
}
| true | null |
2501.12380
|
[
{
"_id": "67906f432565fc5140d72dc3",
"hidden": false,
"name": "Yilun Zhao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-22T20:49:53.483Z",
"user": {
"_id": "62f662bcc58915315c4eccea",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62f662bcc58915315c4eccea/zOAQLONfMP88zr70sxHK-.jpeg",
"fullname": "Yilun",
"isPro": true,
"type": "user",
"user": "yilunzhao"
}
},
{
"_id": "67906f432565fc5140d72dc4",
"hidden": false,
"name": "Lujing Xie",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T14:06:45.496Z",
"user": {
"_id": "64ffd83b96ec8a52185dfb54",
"avatarUrl": "/avatars/a4fadc7e2f5c1125d5d455de4d5c9b8e.svg",
"fullname": "Lujing Xie",
"isPro": false,
"type": "user",
"user": "leeroylucas"
}
},
{
"_id": "67906f432565fc5140d72dc5",
"hidden": false,
"name": "Haowei Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-22T10:59:50.465Z",
"user": {
"_id": "637169557a5e5d8efdc3e58e",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1668515232215-637169557a5e5d8efdc3e58e.jpeg",
"fullname": "Haowei Zhang",
"isPro": false,
"type": "user",
"user": "freesky"
}
},
{
"_id": "67906f432565fc5140d72dc6",
"hidden": false,
"name": "Guo Gan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67906f432565fc5140d72dc7",
"hidden": false,
"name": "Yitao Long",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67906f432565fc5140d72dc8",
"hidden": true,
"name": "Zhiyuan Hu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T14:07:09.079Z",
"user": {
"_id": "65e9945f6bcbcae600b7e64f",
"avatarUrl": "/avatars/8aa986a6c0e35c55d5d1461d1dc11ac3.svg",
"fullname": "Zhiyuan Hu",
"isPro": false,
"type": "user",
"user": "zhiyhu"
}
},
{
"_id": "67906f432565fc5140d72dc9",
"hidden": false,
"name": "Tongyan Hu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-22T10:59:48.493Z",
"user": {
"_id": "66e83ec5deb449d8d856e78d",
"avatarUrl": "/avatars/c5e56be65fcacb3192ce10ba6d8f48e2.svg",
"fullname": "Tongyan Hu",
"isPro": false,
"type": "user",
"user": "entropyhu"
}
},
{
"_id": "67906f432565fc5140d72dca",
"hidden": false,
"name": "Weiyuan Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-22T14:04:09.444Z",
"user": {
"_id": "6652e84bff6ccc0ef5ac7055",
"avatarUrl": "/avatars/70a8b6a0bae5f9e954fa15f56ab2ddc3.svg",
"fullname": "Weiyuan Chen",
"isPro": false,
"type": "user",
"user": "RaidonShogun"
}
},
{
"_id": "67906f432565fc5140d72dcb",
"hidden": false,
"name": "Chuhan Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T14:07:16.582Z",
"user": {
"_id": "65415f1d5168c4f3487a2103",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/65415f1d5168c4f3487a2103/qJuzDpOGSDL4E1-L2lGWW.jpeg",
"fullname": "Chuhan Li",
"isPro": false,
"type": "user",
"user": "ChuhanLi"
}
},
{
"_id": "67906f432565fc5140d72dcc",
"hidden": false,
"name": "Junyang Song",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T15:34:00.857Z",
"user": {
"_id": "67909add9551780939a950f9",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/vS8OfOtpYr0nNXWixn1Er.png",
"fullname": "Junyang Song",
"isPro": false,
"type": "user",
"user": "UndetectedAtom"
}
},
{
"_id": "67906f432565fc5140d72dcd",
"hidden": false,
"name": "Zhijian Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67906f432565fc5140d72dce",
"hidden": false,
"name": "Chengye Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67906f432565fc5140d72dcf",
"hidden": false,
"name": "Weifeng Pan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-22T10:08:18.785Z",
"user": {
"_id": "67907fdb6146e0f96241dcc3",
"avatarUrl": "/avatars/ba16cb87dce1e3ccbbe8091a3fe553fc.svg",
"fullname": "Wf Pan",
"isPro": false,
"type": "user",
"user": "Phil-01"
}
},
{
"_id": "67906f432565fc5140d72dd0",
"hidden": false,
"name": "Ziyao Shangguan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T14:06:39.157Z",
"user": {
"_id": "65dea56779f827c045b1df96",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/Jtx4JcM5CNgoVyzxTC93S.jpeg",
"fullname": "Ziyao Shangguan",
"isPro": false,
"type": "user",
"user": "ziyaosg"
}
},
{
"_id": "67906f432565fc5140d72dd1",
"hidden": false,
"name": "Xiangru Tang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T14:07:56.034Z",
"user": {
"_id": "63357c608adfa81faf2ac180",
"avatarUrl": "/avatars/ae0314c644f882251baf59b9134fd36f.svg",
"fullname": "Xiangru Tang",
"isPro": false,
"type": "user",
"user": "RTT1"
}
},
{
"_id": "67906f432565fc5140d72dd2",
"hidden": false,
"name": "Zhenwen Liang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T14:08:01.981Z",
"user": {
"_id": "62ffa3f8311cad266f9af236",
"avatarUrl": "/avatars/4c88cb518e000a475f8381573f21aa7f.svg",
"fullname": "Zhenwen Liang",
"isPro": false,
"type": "user",
"user": "invokerliang"
}
},
{
"_id": "67906f432565fc5140d72dd3",
"hidden": false,
"name": "Yixin Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67906f432565fc5140d72dd4",
"hidden": false,
"name": "Chen Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67906f432565fc5140d72dd5",
"hidden": false,
"name": "Arman Cohan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T14:09:26.876Z",
"user": {
"_id": "5f5ba21188f57f65f951f255",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1599840760465-noauth.png",
"fullname": "Arman Cohan",
"isPro": false,
"type": "user",
"user": "armanc"
}
}
] | 2025-01-21T18:56:18 |
MMVU: Measuring Expert-Level Multi-Discipline Video Understanding
|
We introduce MMVU, a comprehensive expert-level, multi-discipline benchmark
for evaluating foundation models in video understanding. MMVU includes 3,000
expert-annotated questions spanning 27 subjects across four core disciplines:
Science, Healthcare, Humanities & Social Sciences, and Engineering. Compared to
prior benchmarks, MMVU features three key advancements. First, it challenges
models to apply domain-specific knowledge and perform expert-level reasoning to
analyze specialized-domain videos, moving beyond the basic visual perception
typically assessed in current video benchmarks. Second, each example is
annotated by human experts from scratch. We implement strict data quality
controls to ensure the high quality of the dataset. Finally, each example is
enriched with expert-annotated reasoning rationales and relevant domain
knowledge, facilitating in-depth analysis. We conduct an extensive evaluation
of 32 frontier multimodal foundation models on MMVU. The latest
System-2-capable models, o1 and Gemini 2.0 Flash Thinking, achieve the highest
performance among the tested models. However, they still fall short of matching
human expertise. Through in-depth error analyses and case studies, we offer
actionable insights for future advancements in expert-level,
knowledge-intensive video understanding for specialized domains.
| 83 |
67906f442565fc5140d72e4a
| null | null |
|
2025-01-21T22:56:36.701000 |
Condor: Enhance LLM Alignment with Knowledge-Driven Data Synthesis and Refinement
| 2 |
{
"_id": "630716d11801ecc7d2595021",
"avatarUrl": "/avatars/2d36a880ce4a3cf7efc5ff3987dbeaf3.svg",
"followerCount": 14,
"fullname": "Songyang Zhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "zsytony",
"type": "user"
}
| true | null |
2501.12273
|
[
{
"_id": "67906c674932687e24e0cc08",
"hidden": false,
"name": "Maosong Cao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67906c674932687e24e0cc09",
"hidden": false,
"name": "Taolin Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T16:00:36.088Z",
"user": {
"_id": "62a2b0f3575099d76bdc7259",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1654829516458-62a2b0f3575099d76bdc7259.png",
"fullname": "Taolin Zhang",
"isPro": false,
"type": "user",
"user": "ruxian"
}
},
{
"_id": "67906c674932687e24e0cc0a",
"hidden": false,
"name": "Mo Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T16:01:30.729Z",
"user": {
"_id": "674fe4239cad8e296c9f2d1e",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/iZ22kVJClIzEK35oxzsp3.png",
"fullname": "Mo Li",
"isPro": false,
"type": "user",
"user": "MoLi-DHU"
}
},
{
"_id": "67906c674932687e24e0cc0b",
"hidden": false,
"name": "Chuyu Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67906c674932687e24e0cc0c",
"hidden": false,
"name": "Yunxin Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67906c674932687e24e0cc0d",
"hidden": false,
"name": "Haodong Duan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-22T16:01:03.878Z",
"user": {
"_id": "63ee1379190ddd6214efd73a",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1676546883247-noauth.png",
"fullname": "HAODONG DUAN",
"isPro": false,
"type": "user",
"user": "KennyUTC"
}
},
{
"_id": "67906c674932687e24e0cc0e",
"hidden": false,
"name": "Songyang Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-22T10:08:23.652Z",
"user": {
"_id": "630716d11801ecc7d2595021",
"avatarUrl": "/avatars/2d36a880ce4a3cf7efc5ff3987dbeaf3.svg",
"fullname": "Songyang Zhang",
"isPro": false,
"type": "user",
"user": "zsytony"
}
},
{
"_id": "67906c674932687e24e0cc0f",
"hidden": false,
"name": "Kai Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-21T16:44:12 |
Condor: Enhance LLM Alignment with Knowledge-Driven Data Synthesis and
Refinement
|
The quality of Supervised Fine-Tuning (SFT) data plays a critical role in
enhancing the conversational capabilities of Large Language Models (LLMs).
However, as LLMs become more advanced, the availability of high-quality
human-annotated SFT data has become a significant bottleneck, necessitating a
greater reliance on synthetic training data. In this work, we introduce Condor,
a novel two-stage synthetic data generation framework that incorporates World
Knowledge Tree and Self-Reflection Refinement to produce high-quality SFT data
at scale. Our experimental results demonstrate that a base model fine-tuned on
only 20K Condor-generated samples achieves superior performance compared to
counterparts. The additional refinement stage in Condor further enables
iterative self-improvement for LLMs at various scales (up to 72B), validating
the effectiveness of our approach. Furthermore, our investigation into the
scaling for synthetic data in post-training reveals substantial unexplored
potential for performance improvements, opening promising avenues for future
research.
| 14 |
67906c684932687e24e0cc61
| null | null |
|
2025-01-21T07:45:12.825000 |
SEAL: Entangled White-box Watermarks on Low-Rank Adaptation
| 2 |
{
"_id": "63d93667255ef6add20f9272",
"avatarUrl": "/avatars/99a3aeadcc81ef85164cdfb6ab186b17.svg",
"followerCount": 2,
"fullname": "Giyeong Oh",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "BootsofLagrangian",
"type": "user"
}
| true | null |
2501.09284
|
[
{
"_id": "678dfb39f002f862857e90bf",
"hidden": false,
"name": "Giyeong Oh",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-20T09:28:57.015Z",
"user": {
"_id": "63d93667255ef6add20f9272",
"avatarUrl": "/avatars/99a3aeadcc81ef85164cdfb6ab186b17.svg",
"fullname": "Giyeong Oh",
"isPro": false,
"type": "user",
"user": "BootsofLagrangian"
}
},
{
"_id": "678dfb39f002f862857e90c0",
"hidden": false,
"name": "Saejin Kim",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-20T13:16:39.750Z",
"user": {
"_id": "646d9f60eb9268aeebc55b8b",
"avatarUrl": "/avatars/6bf4ff17c340a622a7a847dddfe40ff2.svg",
"fullname": "SJKIM",
"isPro": false,
"type": "user",
"user": "Steamout"
}
},
{
"_id": "678dfb39f002f862857e90c1",
"hidden": false,
"name": "Woohyun Cho",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dfb39f002f862857e90c2",
"hidden": false,
"name": "Sangkyu Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dfb39f002f862857e90c3",
"hidden": false,
"name": "Jiwan Chung",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-21T14:22:48.531Z",
"user": {
"_id": "60d74d1affe0328e0167dc5f",
"avatarUrl": "/avatars/9b1a2df9402e9c26e1eb7c818af9bae0.svg",
"fullname": "Jiwan Chung",
"isPro": false,
"type": "user",
"user": "jiwan-chung"
}
},
{
"_id": "678dfb39f002f862857e90c4",
"hidden": false,
"name": "Dokyung Song",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-21T14:22:42.282Z",
"user": {
"_id": "65ab767ca92a64ef5b9c8423",
"avatarUrl": "/avatars/00abf7a05fd86a08587f72cab5b3cff3.svg",
"fullname": "Dokyung Song",
"isPro": false,
"type": "user",
"user": "dokyungs"
}
},
{
"_id": "678dfb39f002f862857e90c5",
"hidden": false,
"name": "Youngjae Yu",
"status": "extracted_pending",
"statusLastChangedAt": "2025-01-20T07:28:58.615Z",
"user": {
"_id": "6504777fb1da3747a05160c4",
"avatarUrl": "/avatars/b777d98a5ff971ddb4c3e1060bb3e070.svg",
"fullname": "Youngjae Yu",
"isPro": false,
"type": "user",
"user": "yjyu"
}
}
] | 2025-01-16T04:17:56 |
SEAL: Entangled White-box Watermarks on Low-Rank Adaptation
|
Recently, LoRA and its variants have become the de facto strategy for
training and sharing task-specific versions of large pretrained models, thanks
to their efficiency and simplicity. However, the issue of copyright protection
for LoRA weights, especially through watermark-based techniques, remains
underexplored. To address this gap, we propose SEAL (SEcure wAtermarking on
LoRA weights), a universal white-box watermarking method for LoRA. SEAL embeds a
secret, non-trainable matrix between trainable LoRA weights, serving as a
passport to claim ownership. SEAL then entangles the passport with the LoRA
weights through training, without extra loss for entanglement, and distributes
the finetuned weights after hiding the passport. When applying SEAL, we
observed no performance degradation across commonsense reasoning,
textual/visual instruction tuning, and text-to-image synthesis tasks. We
demonstrate that SEAL is robust against a variety of known attacks: removal,
obfuscation, and ambiguity attacks.
| 10 |
678dfb3af002f862857e912e
| null | null |
|
2025-01-21T05:10:08.409000 |
VideoWorld: Exploring Knowledge Learning from Unlabeled Videos
| 2 |
{
"_id": "64e1cabf12a5504dda7e4948",
"avatarUrl": "/avatars/53851eddb4e1cae773f3e3607181094b.svg",
"followerCount": 5,
"fullname": "rzw",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "maverickrzw",
"type": "user"
}
| true | null |
2501.09781
|
[
{
"_id": "678f5a57bbe3bed7b802c477",
"hidden": false,
"name": "Zhongwei Ren",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-21T10:07:03.173Z",
"user": {
"_id": "64e1cabf12a5504dda7e4948",
"avatarUrl": "/avatars/53851eddb4e1cae773f3e3607181094b.svg",
"fullname": "rzw",
"isPro": false,
"type": "user",
"user": "maverickrzw"
}
},
{
"_id": "678f5a57bbe3bed7b802c478",
"hidden": false,
"name": "Yunchao Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678f5a57bbe3bed7b802c479",
"hidden": false,
"name": "Xun Guo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-21T14:20:14.976Z",
"user": {
"_id": "64ae7ddf407a5cae8579c171",
"avatarUrl": "/avatars/f78e8958db16f5f5603ece527951ac23.svg",
"fullname": "Xun Guo",
"isPro": false,
"type": "user",
"user": "xunguohf"
}
},
{
"_id": "678f5a57bbe3bed7b802c47a",
"hidden": false,
"name": "Yao Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678f5a57bbe3bed7b802c47b",
"hidden": false,
"name": "Bingyi Kang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-21T14:19:50.039Z",
"user": {
"_id": "647b5fef6a79fbf5e996c47c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/647b5fef6a79fbf5e996c47c/IkSMnDsCY_CyEFCiMDuxe.jpeg",
"fullname": "Bingyi Kang",
"isPro": false,
"type": "user",
"user": "bykang"
}
},
{
"_id": "678f5a57bbe3bed7b802c47c",
"hidden": false,
"name": "Jiashi Feng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-21T14:19:44.355Z",
"user": {
"_id": "67298e44017b96a1d0101dc4",
"avatarUrl": "/avatars/1f8ed1a3e911e6a3021087b9371d284c.svg",
"fullname": "Jiashi Feng",
"isPro": false,
"type": "user",
"user": "jshfeng"
}
},
{
"_id": "678f5a57bbe3bed7b802c47d",
"hidden": false,
"name": "Xiaojie Jin",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-16T18:59:10 |
VideoWorld: Exploring Knowledge Learning from Unlabeled Videos
|
This work explores whether a deep generative model can learn complex
knowledge solely from visual input, in contrast to the prevalent focus on
text-based models like large language models (LLMs). We develop VideoWorld, an
auto-regressive video generation model trained on unlabeled video data, and
test its knowledge acquisition abilities in video-based Go and robotic control
tasks. Our experiments reveal two key findings: (1) video-only training
provides sufficient information for learning knowledge, including rules,
reasoning and planning capabilities, and (2) the representation of visual
change is crucial for knowledge acquisition. To improve both the efficiency and
efficacy of this process, we introduce the Latent Dynamics Model (LDM) as a key
component of VideoWorld. Remarkably, VideoWorld reaches a 5-dan professional
level on Video-GoBench with just a 300-million-parameter model, without
relying on search algorithms or reward mechanisms typical in reinforcement
learning. In robotic tasks, VideoWorld effectively learns diverse control
operations and generalizes across environments, approaching the performance of
oracle models in CALVIN and RLBench. This study opens new avenues for knowledge
acquisition from visual data, with all code, data, and models open-sourced for
further research.
| 26 |
678f5a59bbe3bed7b802c4d6
| null | null |
|
2025-01-20T22:21:50.434000 |
GameFactory: Creating New Games with Generative Interactive Videos
| 3 |
{
"_id": "64105a6d14215c0775dfdd14",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64105a6d14215c0775dfdd14/-VX-cUYOLjHIg7QnWhRGG.jpeg",
"followerCount": 3,
"fullname": "Jiwen Yu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "VictorYuki",
"type": "user"
}
| true | null |
2501.08325
|
[
{
"_id": "678719ac5333dfbf8e206077",
"hidden": false,
"name": "Jiwen Yu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-15T08:49:01.033Z",
"user": {
"_id": "64105a6d14215c0775dfdd14",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64105a6d14215c0775dfdd14/-VX-cUYOLjHIg7QnWhRGG.jpeg",
"fullname": "Jiwen Yu",
"isPro": false,
"type": "user",
"user": "VictorYuki"
}
},
{
"_id": "678719ac5333dfbf8e206078",
"hidden": false,
"name": "Yiran Qin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678719ac5333dfbf8e206079",
"hidden": false,
"name": "Xintao Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-21T10:08:34.083Z",
"user": {
"_id": "60e272ca6c78a8c122b12127",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/60e272ca6c78a8c122b12127/xldEGBzGrU-bX6IwAw0Ie.jpeg",
"fullname": "Xintao Wang",
"isPro": false,
"type": "user",
"user": "Xintao"
}
},
{
"_id": "678719ac5333dfbf8e20607a",
"hidden": false,
"name": "Pengfei Wan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678719ac5333dfbf8e20607b",
"hidden": true,
"name": "Di Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-21T10:08:59.963Z",
"user": {
"_id": "64bce15bafd1e46c5504ad38",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64bce15bafd1e46c5504ad38/bQFX1iFbXEBXcQvUNL811.png",
"fullname": "Di Zhang",
"isPro": false,
"type": "user",
"user": "di-zhang-fdu"
}
},
{
"_id": "678719ac5333dfbf8e20607c",
"hidden": false,
"name": "Xihui Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-21T10:07:46.546Z",
"user": {
"_id": "65d5ec74cd05bc1eaa125040",
"avatarUrl": "/avatars/2de1b1539a86452c2c89570eeb02f5ab.svg",
"fullname": "Xihui Liu",
"isPro": false,
"type": "user",
"user": "XihuiLiu"
}
}
] | 2025-01-14T18:57:21 |
GameFactory: Creating New Games with Generative Interactive Videos
|
Generative game engines have the potential to revolutionize game development
by autonomously creating new content and reducing manual workload. However,
existing video-based game generation methods fail to address the critical
challenge of scene generalization, limiting their applicability to existing
games with fixed styles and scenes. In this paper, we present GameFactory, a
framework focused on exploring scene generalization in game video generation.
To enable the creation of entirely new and diverse games, we leverage
pre-trained video diffusion models trained on open-domain video data. To bridge
the domain gap between open-domain priors and the small-scale game dataset, we
propose a multi-phase training strategy that decouples game style learning from
action control, preserving open-domain generalization while achieving action
controllability. Using Minecraft as our data source, we release GF-Minecraft, a
high-quality, diverse, action-annotated video dataset for research.
Furthermore, we extend our framework to enable autoregressive
action-controllable game video generation, allowing the production of
unlimited-length interactive game videos. Experimental results demonstrate that
GameFactory effectively generates open-domain, diverse, and action-controllable
game videos, representing a significant step forward in AI-driven game
generation. Our dataset and project page are publicly available at
https://vvictoryuki.github.io/gamefactory/.
| 64 |
678719ae5333dfbf8e206106
| null | null |
|
2025-01-20T07:14:47.264000 |
Bridging Language Barriers in Healthcare: A Study on Arabic LLMs
| 2 |
{
"_id": "628e39f4b1596566033b8d7b",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/628e39f4b1596566033b8d7b/-Y807up1cgMmAQsczdOPn.jpeg",
"followerCount": 6,
"fullname": "Clément Christophe",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "cchristophe",
"type": "user"
}
| true | null |
2501.09825
|
[
{
"_id": "678e3e0c8aeb001443af5cb1",
"hidden": false,
"name": "Nada Saadi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:12:06.609Z",
"user": {
"_id": "66bb35988b09ede0b7b92313",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/66bb35988b09ede0b7b92313/M06mQ3ifyRwuladTNwMS2.png",
"fullname": "Nada Saadi",
"isPro": false,
"type": "user",
"user": "Nadas31"
}
},
{
"_id": "678e3e0c8aeb001443af5cb2",
"hidden": false,
"name": "Tathagata Raha",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:12:13.246Z",
"user": {
"_id": "5f5f6c113c67af20d9945afb",
"avatarUrl": "/avatars/06b2eb3a5d27864280d4d02e6d00d782.svg",
"fullname": "Tathagata Raha",
"isPro": false,
"type": "user",
"user": "tathagataraha"
}
},
{
"_id": "678e3e0c8aeb001443af5cb3",
"hidden": false,
"name": "Clément Christophe",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:12:19.413Z",
"user": {
"_id": "628e39f4b1596566033b8d7b",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/628e39f4b1596566033b8d7b/-Y807up1cgMmAQsczdOPn.jpeg",
"fullname": "Clément Christophe",
"isPro": false,
"type": "user",
"user": "cchristophe"
}
},
{
"_id": "678e3e0c8aeb001443af5cb4",
"hidden": false,
"name": "Marco AF Pimentel",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678e3e0c8aeb001443af5cb5",
"hidden": false,
"name": "Ronnie Rajan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:12:33.526Z",
"user": {
"_id": "65281d6ef61ca80b9c2ee707",
"avatarUrl": "/avatars/090ea7210a4bb6549b0f7fee71525625.svg",
"fullname": "Ronnie Rajan",
"isPro": false,
"type": "user",
"user": "ronnierajan"
}
},
{
"_id": "678e3e0c8aeb001443af5cb6",
"hidden": false,
"name": "Praveen K Kanithi",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-20T14:07:22.890Z",
"user": {
"_id": "65280984b794fe3d06544d77",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/65280984b794fe3d06544d77/tyrxbxtDG02On1uiRaVbL.jpeg",
"fullname": "Praveenkumar",
"isPro": false,
"type": "user",
"user": "pkanithi"
}
}
] | 2025-01-16T20:24:56 |
Bridging Language Barriers in Healthcare: A Study on Arabic LLMs
|
This paper investigates the challenges of developing large language models
(LLMs) proficient in both multilingual understanding and medical knowledge. We
demonstrate that simply translating medical data does not guarantee strong
performance on clinical tasks in the target language. Our experiments reveal
that the optimal language mix in training data varies significantly across
different medical tasks. We find that larger models with carefully calibrated
language ratios achieve superior performance on native-language clinical tasks.
Furthermore, our results suggest that relying solely on fine-tuning may not be
the most effective approach for incorporating new language knowledge into LLMs.
Instead, data and computationally intensive pretraining methods may still be
necessary to achieve optimal performance in multilingual medical settings.
These findings provide valuable guidance for building effective and inclusive
medical AI systems for diverse linguistic communities.
| 14 |
678e3e0d8aeb001443af5cf4
| null | null |
|
2025-01-20T04:45:24.921000 |
Multiple Choice Questions: Reasoning Makes Large Language Models (LLMs) More Self-Confident Even When They Are Wrong
| 2 |
{
"_id": "64f31365ed48e3bb9c487d5d",
"avatarUrl": "/avatars/979c1979eadbd4529c95b925bbb58d78.svg",
"followerCount": null,
"fullname": "Gonzalo",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "gonzmart",
"type": "user"
}
| true | null |
2501.09775
|
[
{
"_id": "678e1b151a99a49b98056e1c",
"hidden": false,
"name": "Tairan Fu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678e1b151a99a49b98056e1d",
"hidden": false,
"name": "Javier Conde",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-21T10:51:53.077Z",
"user": {
"_id": "66852115f8c3f32152346733",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/66852115f8c3f32152346733/EuApIooUvMesCSr27Auze.jpeg",
"fullname": "Javier Conde",
"isPro": false,
"type": "user",
"user": "javicond3"
}
},
{
"_id": "678e1b151a99a49b98056e1e",
"hidden": false,
"name": "Gonzalo Martínez",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T16:59:39.580Z",
"user": {
"_id": "64f31365ed48e3bb9c487d5d",
"avatarUrl": "/avatars/979c1979eadbd4529c95b925bbb58d78.svg",
"fullname": "Gonzalo",
"isPro": false,
"type": "user",
"user": "gonzmart"
}
},
{
"_id": "678e1b151a99a49b98056e1f",
"hidden": false,
"name": "María Grandury",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-01-20T10:35:14.566Z",
"user": {
"_id": "5f9c00a5777efc07d7f1e4be",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1665073337782-5f9c00a5777efc07d7f1e4be.png",
"fullname": "María Grandury",
"isPro": false,
"type": "user",
"user": "mariagrandury"
}
},
{
"_id": "678e1b151a99a49b98056e20",
"hidden": false,
"name": "Pedro Reviriego",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-20T13:16:45.528Z",
"user": {
"_id": "6574f66b06fdcd4ca9491299",
"avatarUrl": "/avatars/e31821949d75efb750ab2d9ebe12b9a8.svg",
"fullname": "pedro reviriego",
"isPro": false,
"type": "user",
"user": "reviriego"
}
}
] | 2025-01-16T10:27:51 |
Multiple Choice Questions: Reasoning Makes Large Language Models (LLMs)
More Self-Confident Even When They Are Wrong
|
One of the most widely used methods to evaluate LLMs are Multiple Choice
Question (MCQ) tests. MCQ benchmarks enable the testing of LLM knowledge on
almost any topic at scale as the results can be processed automatically. To
help the LLM answer, a few examples, known as few-shot examples, can be included in the
prompt. Moreover, the LLM can be asked to answer the question directly with the
selected option or to first provide the reasoning and then the selected answer,
which is known as chain of thought. In addition to checking whether the
selected answer is correct, the evaluation can look at the LLM-estimated
probability of its response as an indication of the confidence of the LLM in
the response. In this paper, we study how the LLM confidence in its answer
depends on whether the model has been asked to answer directly or to provide
the reasoning before answering. The results of the evaluation of questions on a
wide range of topics in seven different models show that LLMs are more
confident in their answers when they provide reasoning before the answer. This
occurs regardless of whether the selected answer is correct. Our hypothesis is
that this behavior is due to the reasoning that modifies the probability of the
selected answer, as the LLM predicts the answer based on the input question and
the reasoning that supports the selection made. Therefore, LLM estimated
probabilities seem to have intrinsic limitations that should be understood in
order to use them in evaluation procedures. Interestingly, the same behavior
has been observed in humans, for whom explaining an answer increases confidence
in its correctness.
| 29 |
678e1b161a99a49b98056e61
| null | null |
|
2025-01-20T00:21:54.930000 |
X-Dyna: Expressive Dynamic Human Image Animation
| 2 |
{
"_id": "64a5d8219f3b568c202b3137",
"avatarUrl": "/avatars/eef6fb7c70d272555a53183c0e50dbaf.svg",
"followerCount": 3,
"fullname": "Di Chang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Boese0601",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/64a5d8219f3b568c202b3137/3siRENiyngtaOyP2D65Fk.mp4"
] |
2501.10021
|
[
{
"_id": "678dc72ab3cda33f4a7e3b94",
"hidden": false,
"name": "Di Chang",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-01-20T04:33:59.756Z",
"user": {
"_id": "64a5d8219f3b568c202b3137",
"avatarUrl": "/avatars/eef6fb7c70d272555a53183c0e50dbaf.svg",
"fullname": "Di Chang",
"isPro": false,
"type": "user",
"user": "Boese0601"
}
},
{
"_id": "678dc72ab3cda33f4a7e3b95",
"hidden": false,
"name": "Hongyi Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dc72ab3cda33f4a7e3b96",
"hidden": false,
"name": "You Xie",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:20:36.757Z",
"user": {
"_id": "6408dfd4b6a334f53e24023c",
"avatarUrl": "/avatars/b7e3fa4fbec6313e94ff3384b74dabfc.svg",
"fullname": "You Xie",
"isPro": false,
"type": "user",
"user": "youxie"
}
},
{
"_id": "678dc72ab3cda33f4a7e3b97",
"hidden": false,
"name": "Yipeng Gao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:20:23.934Z",
"user": {
"_id": "67312401d433c6b122c38202",
"avatarUrl": "/avatars/599523d1412ab01671aa7f86c14d508a.svg",
"fullname": "Yipeng Gao",
"isPro": false,
"type": "user",
"user": "YipengGao"
}
},
{
"_id": "678dc72ab3cda33f4a7e3b98",
"hidden": false,
"name": "Zhengfei Kuang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:20:17.627Z",
"user": {
"_id": "673460e68f847a342114a00d",
"avatarUrl": "/avatars/a8bed17291f166a6e1dc79413f6ce80a.svg",
"fullname": "Zhengfei Kuang",
"isPro": false,
"type": "user",
"user": "zfkuang"
}
},
{
"_id": "678dc72ab3cda33f4a7e3b99",
"hidden": false,
"name": "Shengqu Cai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:20:11.259Z",
"user": {
"_id": "66a1da7cc9e703d2af5ad742",
"avatarUrl": "/avatars/f9cd9ae3407e249ab4569479200feb1f.svg",
"fullname": "Shengqu Cai",
"isPro": true,
"type": "user",
"user": "primecai"
}
},
{
"_id": "678dc72ab3cda33f4a7e3b9a",
"hidden": false,
"name": "Chenxu Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dc72ab3cda33f4a7e3b9b",
"hidden": false,
"name": "Guoxian Song",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:19:38.417Z",
"user": {
"_id": "63086a237dc1b1a54cc6c24d",
"avatarUrl": "/avatars/477b94134edc4c18c8f769ecbb7d8091.svg",
"fullname": "Song",
"isPro": false,
"type": "user",
"user": "guoxiansong"
}
},
{
"_id": "678dc72ab3cda33f4a7e3b9c",
"hidden": false,
"name": "Chao Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dc72ab3cda33f4a7e3b9d",
"hidden": false,
"name": "Yichun Shi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:19:30.964Z",
"user": {
"_id": "6309ad81b105f8675bd5a796",
"avatarUrl": "/avatars/01c2186924486da4606e128e83709164.svg",
"fullname": "Shi",
"isPro": true,
"type": "user",
"user": "Yichun"
}
},
{
"_id": "678dc72ab3cda33f4a7e3b9e",
"hidden": false,
"name": "Zeyuan Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dc72ab3cda33f4a7e3b9f",
"hidden": false,
"name": "Shijie Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:18:48.920Z",
"user": {
"_id": "642a276516d4d8293c9a47e8",
"avatarUrl": "/avatars/80e6db8bc2544f3486b11b57858a8692.svg",
"fullname": "Shijie Zhou",
"isPro": false,
"type": "user",
"user": "shijiezhou"
}
},
{
"_id": "678dc72ab3cda33f4a7e3ba0",
"hidden": false,
"name": "Linjie Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dc72ab3cda33f4a7e3ba1",
"hidden": false,
"name": "Gordon Wetzstein",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:17:49.662Z",
"user": {
"_id": "6694e583ac96ca2c17131505",
"avatarUrl": "/avatars/6e7a31f257e36cf301da6f879dc0a122.svg",
"fullname": "Gordon Wetzstein",
"isPro": false,
"type": "user",
"user": "wetzste1"
}
},
{
"_id": "678dc72ab3cda33f4a7e3ba2",
"hidden": false,
"name": "Mohammad Soleymani",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:17:43.486Z",
"user": {
"_id": "65fcb99d383d3f256c3a92d2",
"avatarUrl": "/avatars/b85d32f4d7a19816b8d499e05b173ad1.svg",
"fullname": "Mohammad Soleymani",
"isPro": false,
"type": "user",
"user": "msoleymani"
}
}
] | 2025-01-17T08:10:53 |
X-Dyna: Expressive Dynamic Human Image Animation
|
We introduce X-Dyna, a novel zero-shot, diffusion-based pipeline for
animating a single human image using facial expressions and body movements
derived from a driving video, generating realistic, context-aware dynamics
for both the subject and the surrounding environment. Building on prior
approaches centered on human pose control, X-Dyna addresses key shortcomings
causing the loss of dynamic details, enhancing the lifelike qualities of human
video animations. At the core of our approach is the Dynamics-Adapter, a
lightweight module that effectively integrates reference appearance context
into the spatial attentions of the diffusion backbone while preserving the
capacity of motion modules in synthesizing fluid and intricate dynamic details.
Beyond body pose control, we connect a local control module with our model to
capture identity-disentangled facial expressions, facilitating accurate
expression transfer for enhanced realism in animated scenes. Together, these
components form a unified framework capable of learning physical human motion
and natural scene dynamics from a diverse blend of human and scene videos.
Comprehensive qualitative and quantitative evaluations demonstrate that X-Dyna
outperforms state-of-the-art methods, creating highly lifelike and expressive
animations. The code is available at https://github.com/bytedance/X-Dyna.
| 14 |
678dc72bb3cda33f4a7e3c15
| null | null |
|
2025-01-19T23:37:31.452000 |
HiFi-SR: A Unified Generative Transformer-Convolutional Adversarial Network for High-Fidelity Speech Super-Resolution
| 3 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.10045
|
[
{
"_id": "678dd3044ce7abd7ef1f5345",
"hidden": false,
"name": "Shengkui Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dd3044ce7abd7ef1f5346",
"hidden": false,
"name": "Kun Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dd3044ce7abd7ef1f5347",
"hidden": false,
"name": "Zexu Pan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dd3044ce7abd7ef1f5348",
"hidden": false,
"name": "Yukun Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dd3044ce7abd7ef1f5349",
"hidden": false,
"name": "Chong Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dd3044ce7abd7ef1f534a",
"hidden": false,
"name": "Bin Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-17T09:04:38 |
HiFi-SR: A Unified Generative Transformer-Convolutional Adversarial
Network for High-Fidelity Speech Super-Resolution
|
The application of generative adversarial networks (GANs) has recently
advanced speech super-resolution (SR) based on intermediate representations
like mel-spectrograms. However, existing SR methods that typically rely on
independently trained and concatenated networks may lead to inconsistent
representations and poor speech quality, especially in out-of-domain scenarios.
In this work, we propose HiFi-SR, a unified network that leverages end-to-end
adversarial training to achieve high-fidelity speech super-resolution. Our
model features a unified transformer-convolutional generator designed to
seamlessly handle both the prediction of latent representations and their
conversion into time-domain waveforms. The transformer network serves as a
powerful encoder, converting low-resolution mel-spectrograms into latent space
representations, while the convolutional network upscales these representations
into high-resolution waveforms. To enhance high-frequency fidelity, we
incorporate a multi-band, multi-scale time-frequency discriminator, along with
a multi-scale mel-reconstruction loss in the adversarial training process.
HiFi-SR is versatile, capable of upscaling any input speech signal between 4
kHz and 32 kHz to a 48 kHz sampling rate. Experimental results demonstrate that
HiFi-SR significantly outperforms existing speech SR methods across both
objective metrics and ABX preference tests, for both in-domain and
out-of-domain scenarios (https://github.com/modelscope/ClearerVoice-Studio).
| 9 |
678dd3054ce7abd7ef1f53b8
| null | null |
|
2025-01-19T23:32:27.791000 |
Textoon: Generating Vivid 2D Cartoon Characters from Text Descriptions
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.10020
|
[
{
"_id": "678dd1d27e1f344cdc26b717",
"hidden": false,
"name": "Chao He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dd1d27e1f344cdc26b718",
"hidden": false,
"name": "Jianqiang Ren",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dd1d27e1f344cdc26b719",
"hidden": false,
"name": "Liefeng Bo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:07:42.307Z",
"user": {
"_id": "63d0cc736b985b0f25d0412c",
"avatarUrl": "/avatars/3eb8c79f9a7c4c819038ea7b04e323dd.svg",
"fullname": "Bo",
"isPro": false,
"type": "user",
"user": "Liefeng"
}
}
] | 2025-01-17T08:09:06 |
Textoon: Generating Vivid 2D Cartoon Characters from Text Descriptions
|
The 2D cartoon style is a prominent art form in digital character creation,
particularly popular among younger audiences. While advancements in digital
human technology have spurred extensive research into photorealistic digital
humans and 3D characters, interactive 2D cartoon characters have received
comparatively less attention. Unlike 3D counterparts, which require
sophisticated construction and resource-intensive rendering, Live2D, a
widely-used format for 2D cartoon characters, offers a more efficient
alternative that allows 2D characters to be animated in a manner that simulates
3D movement without the necessity of building a complete 3D model. Furthermore,
Live2D employs lightweight HTML5 (H5) rendering, improving both accessibility
and efficiency. In this technical report, we introduce Textoon, an innovative
method for generating diverse 2D cartoon characters in the Live2D format based
on text descriptions. Textoon leverages cutting-edge language and vision
models to comprehend textual intent and generate 2D appearances, and it can
create a wide variety of stunning and interactive 2D characters within one
minute. The project homepage is https://human3daigc.github.io/Textoon_webpage/.
| 22 |
678dd1d47e1f344cdc26b7b2
| null | null |
|
2025-01-19T23:15:32.498000 |
GaussianAvatar-Editor: Photorealistic Animatable Gaussian Head Avatar Editor
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.09978
|
[
{
"_id": "678dcddd1c0169c0bc73364e",
"hidden": false,
"name": "Xiangyue Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-21T13:10:10.397Z",
"user": {
"_id": "64edbc604f6a405a135ee506",
"avatarUrl": "/avatars/9f4976d8bc093cb8c5604b7235e980e4.svg",
"fullname": "Xiangyue Liu",
"isPro": false,
"type": "user",
"user": "Liuxy-cx"
}
},
{
"_id": "678dcddd1c0169c0bc73364f",
"hidden": false,
"name": "Kunming Luo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:37:28.443Z",
"user": {
"_id": "64baa2a530a1f0f0f03e758c",
"avatarUrl": "/avatars/100deaba962f49520a028c94b51720b1.svg",
"fullname": "Kunming Luo",
"isPro": false,
"type": "user",
"user": "coolbeam"
}
},
{
"_id": "678dcddd1c0169c0bc733650",
"hidden": false,
"name": "Heng Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dcddd1c0169c0bc733651",
"hidden": false,
"name": "Qi Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-21T10:07:10.035Z",
"user": {
"_id": "65001a3632d2159207e9af77",
"avatarUrl": "/avatars/4b7bca2bdc9216fc06857dc1c2183ae6.svg",
"fullname": "Qiqi Meng",
"isPro": false,
"type": "user",
"user": "mayushii"
}
},
{
"_id": "678dcddd1c0169c0bc733652",
"hidden": false,
"name": "Yuan Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dcddd1c0169c0bc733653",
"hidden": false,
"name": "Li Yi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dcddd1c0169c0bc733654",
"hidden": false,
"name": "Ping Tan",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-17T06:40:20 |
GaussianAvatar-Editor: Photorealistic Animatable Gaussian Head Avatar
Editor
|
We introduce GaussianAvatar-Editor, an innovative framework for text-driven
editing of animatable Gaussian head avatars that can be fully controlled in
expression, pose, and viewpoint. Unlike static 3D Gaussian editing, editing
animatable 4D Gaussian avatars presents challenges related to motion occlusion
and spatial-temporal inconsistency. To address these issues, we propose the
Weighted Alpha Blending Equation (WABE). This function enhances the blending
weight of visible Gaussians while suppressing the influence on non-visible
Gaussians, effectively handling motion occlusion during editing. Furthermore,
to improve editing quality and ensure 4D consistency, we incorporate
conditional adversarial learning into the editing process. This strategy helps
to refine the edited results and maintain consistency throughout the animation.
By integrating these methods, our GaussianAvatar-Editor achieves photorealistic
and consistent results in animatable 4D Gaussian editing. We conduct
comprehensive experiments across various subjects to validate the effectiveness
of our proposed techniques, which demonstrates the superiority of our approach
over existing methods. More results and code are available at: [Project
Link](https://xiangyueliu.github.io/GaussianAvatar-Editor/).
| 6 |
678dcde21c0169c0bc7337fb
| null | null |
|
2025-01-19T22:30:10.779000 |
Evolving Deeper LLM Thinking
| 5 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| true | null |
2501.09891
|
[
{
"_id": "678dc332281a0e32feb5fbfe",
"hidden": false,
"name": "Kuang-Huei Lee",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T13:16:59.574Z",
"user": {
"_id": "669fbf44200c01b737751dc5",
"avatarUrl": "/avatars/2023c01b2a8cc1625cafcd0b625871dc.svg",
"fullname": "Kuang-Huei Lee",
"isPro": false,
"type": "user",
"user": "khlee112"
}
},
{
"_id": "678dc332281a0e32feb5fbff",
"hidden": false,
"name": "Ian Fischer",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T13:18:01.647Z",
"user": {
"_id": "64ef4f9866f36326b3ec5b8c",
"avatarUrl": "/avatars/9eea179e0797f952664f33e1aef21e88.svg",
"fullname": "Ian Fischer",
"isPro": false,
"type": "user",
"user": "Ianfischer"
}
},
{
"_id": "678dc332281a0e32feb5fc00",
"hidden": false,
"name": "Yueh-Hua Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dc332281a0e32feb5fc01",
"hidden": false,
"name": "Dave Marwood",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dc332281a0e32feb5fc02",
"hidden": false,
"name": "Shumeet Baluja",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dc332281a0e32feb5fc03",
"hidden": false,
"name": "Dale Schuurmans",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dc332281a0e32feb5fc04",
"hidden": false,
"name": "Xinyun Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T13:18:37.820Z",
"user": {
"_id": "64d0268001931c60161e026a",
"avatarUrl": "/avatars/0ea48c47b0270b449fd6b97b495e64a6.svg",
"fullname": "Xinyun Chen",
"isPro": true,
"type": "user",
"user": "xinyunchen"
}
}
] | 2025-01-17T00:41:44 |
Evolving Deeper LLM Thinking
|
We explore an evolutionary search strategy for scaling inference time compute
in Large Language Models. The proposed approach, Mind Evolution, uses a
language model to generate, recombine and refine candidate responses. The
proposed approach avoids the need to formalize the underlying inference problem
whenever a solution evaluator is available. Controlling for inference cost, we
find that Mind Evolution significantly outperforms other inference strategies
such as Best-of-N and Sequential Revision in natural language planning tasks.
In the TravelPlanner and Natural Plan benchmarks, Mind Evolution solves more
than 98% of the problem instances using Gemini 1.5 Pro without the use of a
formal solver.
| 106 |
678dc333281a0e32feb5fc2c
| null | null |
|
2025-01-19T22:27:10.419000 |
PaSa: An LLM Agent for Comprehensive Academic Paper Search
| 10 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| true | null |
2501.10120
|
[
{
"_id": "678dc283d4a7a158a8e5cf08",
"hidden": false,
"name": "Yichen He",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T13:19:13.591Z",
"user": {
"_id": "60ea81771cc8dc259c58e905",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/60ea81771cc8dc259c58e905/kmGlaNvdS4EEHc_5qongT.jpeg",
"fullname": "yichen he",
"isPro": false,
"type": "user",
"user": "hyc2026"
}
},
{
"_id": "678dc283d4a7a158a8e5cf09",
"hidden": false,
"name": "Guanhua Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dc283d4a7a158a8e5cf0a",
"hidden": false,
"name": "Peiyuan Feng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T13:19:41.033Z",
"user": {
"_id": "662608797670fa2bc0d0fa0a",
"avatarUrl": "/avatars/eef336e6ba43fae56c90e17f60606f4d.svg",
"fullname": "fengpeiyuan",
"isPro": false,
"type": "user",
"user": "fpybytedance"
}
},
{
"_id": "678dc283d4a7a158a8e5cf0b",
"hidden": false,
"name": "Yuan Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dc283d4a7a158a8e5cf0c",
"hidden": false,
"name": "Yuchen Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dc283d4a7a158a8e5cf0d",
"hidden": false,
"name": "Hang Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678dc283d4a7a158a8e5cf0e",
"hidden": false,
"name": "Weinan E",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-17T11:12:28 |
PaSa: An LLM Agent for Comprehensive Academic Paper Search
|
We introduce PaSa, an advanced Paper Search agent powered by large language
models. PaSa can autonomously make a series of decisions, including invoking
search tools, reading papers, and selecting relevant references, to ultimately
obtain comprehensive and accurate results for complex scholarly queries. We
optimize PaSa using reinforcement learning with a synthetic dataset,
AutoScholarQuery, which includes 35k fine-grained academic queries and
corresponding papers sourced from top-tier AI conference publications.
Additionally, we develop RealScholarQuery, a benchmark collecting real-world
academic queries to assess PaSa's performance in more realistic scenarios.
Despite being trained on synthetic data, PaSa significantly outperforms
existing baselines on RealScholarQuery, including Google, Google Scholar,
Google with GPT-4 for paraphrased queries, chatGPT (search-enabled GPT-4o),
GPT-o1, and PaSa-GPT-4o (PaSa implemented by prompting GPT-4o). Notably,
PaSa-7B surpasses the best Google-based baseline, Google with GPT-4o, by 37.78%
in recall@20 and 39.90% in recall@50. It also exceeds PaSa-GPT-4o by 30.36% in
recall and 4.25% in precision. Model, datasets, and code are available at
https://github.com/bytedance/pasa.
| 43 |
678dc284d4a7a158a8e5cf48
| null | null |
|
2025-01-19T21:57:49.821000 |
ComplexFuncBench: Exploring Multi-Step and Constrained Function Calling under Long-Context Scenario
| 2 |
{
"_id": "60eff04e22ab0ac83b0fc9d8",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/60eff04e22ab0ac83b0fc9d8/pBjftNyFN1qB8Br3FZQmD.jpeg",
"followerCount": null,
"fullname": "lucen zhong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "anchorzhong",
"type": "user"
}
| true | null |
2501.10132
|
[
{
"_id": "678db8f76e06f11c16d15ff5",
"hidden": false,
"name": "Lucen Zhong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:32:47.710Z",
"user": {
"_id": "60eff04e22ab0ac83b0fc9d8",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/60eff04e22ab0ac83b0fc9d8/pBjftNyFN1qB8Br3FZQmD.jpeg",
"fullname": "lucen zhong",
"isPro": false,
"type": "user",
"user": "anchorzhong"
}
},
{
"_id": "678db8f76e06f11c16d15ff6",
"hidden": false,
"name": "Zhengxiao Du",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:32:53.512Z",
"user": {
"_id": "63033dc4e1e7f0e03a5e1a31",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1661157784937-63033dc4e1e7f0e03a5e1a31.jpeg",
"fullname": "Zhengxiao Du",
"isPro": false,
"type": "user",
"user": "zxdu20"
}
},
{
"_id": "678db8f76e06f11c16d15ff7",
"hidden": false,
"name": "Xiaohan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678db8f76e06f11c16d15ff8",
"hidden": false,
"name": "Haiyi Hu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:33:03.717Z",
"user": {
"_id": "66e40e556998c3d86c3e9263",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/3aZyGNr8bbnY0Ry0iObr4.jpeg",
"fullname": "Haiyi Hu",
"isPro": false,
"type": "user",
"user": "haithesea"
}
},
{
"_id": "678db8f76e06f11c16d15ff9",
"hidden": false,
"name": "Jie Tang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-20T14:33:09.675Z",
"user": {
"_id": "640dff05474aa6f89556677e",
"avatarUrl": "/avatars/1b4591c7322d649c797b3125148f1915.svg",
"fullname": "Jie Tang",
"isPro": false,
"type": "user",
"user": "jerytang"
}
}
] | 2025-01-17T11:41:53 |
ComplexFuncBench: Exploring Multi-Step and Constrained Function Calling
under Long-Context Scenario
|
Enhancing large language models (LLMs) with real-time APIs can help generate
more accurate and up-to-date responses. However, evaluating the function
calling abilities of LLMs in real-world scenarios remains under-explored due to
the complexity of data collection and evaluation. In this work, we introduce
ComplexFuncBench, a benchmark for complex function calling across five
real-world scenarios. Compared to existing benchmarks, ComplexFuncBench
encompasses multi-step and constrained function calling, which requires
long-parameter filling, parameter value reasoning, and 128k long contexts.
Additionally, we propose an automatic framework, ComplexEval, for
quantitatively evaluating complex function calling tasks. Through comprehensive
experiments, we demonstrate the deficiencies of state-of-the-art LLMs in
function calling and suggest future directions for optimizing these
capabilities. The data and code are available at
https://github.com/THUDM/ComplexFuncBench.
| 19 |
678db8f86e06f11c16d16040
| null | null |
|
2025-01-17T09:21:33.913000 |
The Heap: A Contamination-Free Multilingual Code Dataset for Evaluating Large Language Models
| 2 |
{
"_id": "60107b385ac3e86b3ea4fc34",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1627505688463-60107b385ac3e86b3ea4fc34.jpeg",
"followerCount": 570,
"fullname": "Daniel van Strien",
"isHf": true,
"isMod": false,
"isPro": true,
"name": "davanstrien",
"type": "user"
}
| false | null |
2501.09653
|
[
{
"_id": "678a632dd03f325c5a5ad954",
"hidden": false,
"name": "Jonathan Katzy",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-17T17:05:55.731Z",
"user": {
"_id": "643d13b04816d7cb420966dc",
"avatarUrl": "/avatars/4f4875e35fa57c2b958cdb5dd6b70c27.svg",
"fullname": "Jonathan katzy",
"isPro": false,
"type": "user",
"user": "Jkatzy"
}
},
{
"_id": "678a632dd03f325c5a5ad955",
"hidden": false,
"name": "Razvan Mihai Popescu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-17T17:05:53.530Z",
"user": {
"_id": "6446fea4bd9e0e0edf521d49",
"avatarUrl": "/avatars/493c16b52e1a9a940b4e02b6cd6c6cc3.svg",
"fullname": "Razvan Popescu",
"isPro": false,
"type": "user",
"user": "Razvan27"
}
},
{
"_id": "678a632dd03f325c5a5ad956",
"hidden": false,
"name": "Arie van Deursen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678a632dd03f325c5a5ad957",
"hidden": false,
"name": "Maliheh Izadi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T14:25:21.390Z",
"user": {
"_id": "63dd4b2af37111482526e0e9",
"avatarUrl": "/avatars/ffd2a07c61ea6272009df6184cb9dcef.svg",
"fullname": "Maliheh Izadi",
"isPro": false,
"type": "user",
"user": "MalihehIzadi"
}
}
] | 2025-01-16T16:48:41 |
The Heap: A Contamination-Free Multilingual Code Dataset for Evaluating
Large Language Models
|
The recent rise in the popularity of large language models has spurred the
development of extensive code datasets needed to train them. This has left
limited code available for collection and use in the downstream investigation
of specific behaviors, or evaluation of large language models without suffering
from data contamination. To address this problem, we release The Heap, a large
multilingual dataset covering 57 programming languages that has been
deduplicated with respect to other open datasets of code, enabling researchers
to conduct fair evaluations of large language models without significant data
cleaning overhead.
| 12 |
678a632dd03f325c5a5ad997
| null | null |
|
2025-01-17T04:30:36.847000 |
Do generative video models learn physical principles from watching videos?
| 3 |
{
"_id": "6475c37b04c82116f9bb2356",
"avatarUrl": "/avatars/6ec34eb3cfd091a38454ac3de72aaddc.svg",
"followerCount": null,
"fullname": "saman motamed",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "sam-motamed",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/6475c37b04c82116f9bb2356/V3qzZlvvmsm_9IssX9aVF.mp4"
] |
2501.09038
|
[
{
"_id": "678a20d55a84f1087bd61c82",
"hidden": false,
"name": "Saman Motamed",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:51:43.421Z",
"user": {
"_id": "6475c37b04c82116f9bb2356",
"avatarUrl": "/avatars/6ec34eb3cfd091a38454ac3de72aaddc.svg",
"fullname": "saman motamed",
"isPro": false,
"type": "user",
"user": "sam-motamed"
}
},
{
"_id": "678a20d55a84f1087bd61c83",
"hidden": false,
"name": "Laura Culp",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678a20d55a84f1087bd61c84",
"hidden": false,
"name": "Kevin Swersky",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:51:57.327Z",
"user": {
"_id": "630e6ef664f1f8d0c771b758",
"avatarUrl": "/avatars/1df2bfadb2b6fdf8307189936efc6ef0.svg",
"fullname": "Kevin Swersky",
"isPro": false,
"type": "user",
"user": "kswersky"
}
},
{
"_id": "678a20d55a84f1087bd61c85",
"hidden": false,
"name": "Priyank Jaini",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678a20d55a84f1087bd61c86",
"hidden": false,
"name": "Robert Geirhos",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:52:10.657Z",
"user": {
"_id": "673bbe0d7dfcdedd52619ec2",
"avatarUrl": "/avatars/531a44f05d0c738bbe3e028c76c2e948.svg",
"fullname": "Robert Geirhos",
"isPro": false,
"type": "user",
"user": "rgeirhos"
}
}
] | 2025-01-14T20:59:37 |
Do generative video models learn physical principles from watching
videos?
|
AI video generation is undergoing a revolution, with quality and realism
advancing rapidly. These advances have led to a passionate scientific debate:
Do video models learn ``world models'' that discover laws of physics -- or,
alternatively, are they merely sophisticated pixel predictors that achieve
visual realism without understanding the physical principles of reality? We
address this question by developing Physics-IQ, a comprehensive benchmark
dataset that can only be solved by acquiring a deep understanding of various
physical principles, like fluid dynamics, optics, solid mechanics, magnetism
and thermodynamics. We find that across a range of current models (Sora,
Runway, Pika, Lumiere, Stable Video Diffusion, and VideoPoet), physical
understanding is severely limited, and unrelated to visual realism. At the same
time, some test cases can already be successfully solved. This indicates that
acquiring certain physical principles from observation alone may be possible,
but significant challenges remain. While we expect rapid advances ahead, our
work demonstrates that visual realism does not imply physical understanding.
Our project page is at https://physics-iq.github.io; code at
https://github.com/google-deepmind/physics-IQ-benchmark.
| 32 |
678a20d95a84f1087bd61d65
| null | null |
|
2025-01-17T01:24:01.494000 |
OmniThink: Expanding Knowledge Boundaries in Machine Writing through Thinking
| 2 |
{
"_id": "645dbaa6f5760d1530d7580d",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/645dbaa6f5760d1530d7580d/Bqob8arLZoHIgMwNZpL9I.jpeg",
"followerCount": 31,
"fullname": "Simeon Emanuilov",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "s-emanuilov",
"type": "user"
}
| false | null |
2501.09751
|
[
{
"_id": "6789f776766dd160379b89fb",
"hidden": false,
"name": "Zekun Xi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:29:57.432Z",
"user": {
"_id": "647229256facfb01d8ae7b89",
"avatarUrl": "/avatars/2fc34d2739b28c1089b20e7a7fa40f0e.svg",
"fullname": "Xi Ze Kun",
"isPro": false,
"type": "user",
"user": "ZekunXi"
}
},
{
"_id": "6789f776766dd160379b89fc",
"hidden": false,
"name": "Wenbiao Yin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789f776766dd160379b89fd",
"hidden": false,
"name": "Jizhan Fang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:30:13.133Z",
"user": {
"_id": "669663472d25bd04e9af1d66",
"avatarUrl": "/avatars/8b11d5d79d1b8b205baa498a942f573c.svg",
"fullname": "Jizhan Fang",
"isPro": false,
"type": "user",
"user": "JizhanFang"
}
},
{
"_id": "6789f776766dd160379b89fe",
"hidden": false,
"name": "Jialong Wu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:31:40.494Z",
"user": {
"_id": "644a4fbc2166258fccc664bc",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/8k3b44MbhQiWuo6i8BnYl.jpeg",
"fullname": "Jialong Wu",
"isPro": false,
"type": "user",
"user": "callanwu"
}
},
{
"_id": "6789f776766dd160379b89ff",
"hidden": false,
"name": "Runnan Fang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:31:49.555Z",
"user": {
"_id": "63d32cd7b734eaa4d4fa410b",
"avatarUrl": "/avatars/68acb80f62bc6493e1ad26506999b6c4.svg",
"fullname": "Runnan Fang",
"isPro": false,
"type": "user",
"user": "Runnaning"
}
},
{
"_id": "6789f776766dd160379b8a00",
"hidden": false,
"name": "Ningyu Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-17T10:25:07.769Z",
"user": {
"_id": "620b3bbb0668e435407c8d0a",
"avatarUrl": "/avatars/e0fccbb2577d76088e09f054c35cffbc.svg",
"fullname": "Ningyu Zhang",
"isPro": false,
"type": "user",
"user": "Ningyu"
}
},
{
"_id": "6789f776766dd160379b8a01",
"hidden": false,
"name": "Jiang Yong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789f776766dd160379b8a02",
"hidden": false,
"name": "Pengjun Xie",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:32:07.000Z",
"user": {
"_id": "63a091e42fabbbb89991f5ce",
"avatarUrl": "/avatars/d55485b06461764c36c9edf9d6e8892c.svg",
"fullname": "pengjun xie",
"isPro": false,
"type": "user",
"user": "xpjandy"
}
},
{
"_id": "6789f776766dd160379b8a03",
"hidden": false,
"name": "Fei Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:32:22.107Z",
"user": {
"_id": "635b8b6a37c6a2c12e2cce00",
"avatarUrl": "/avatars/229fb72180529141515d1df797b33709.svg",
"fullname": "Fei Huang",
"isPro": false,
"type": "user",
"user": "hzhwcmhf"
}
},
{
"_id": "6789f776766dd160379b8a04",
"hidden": false,
"name": "Huajun Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:32:36.118Z",
"user": {
"_id": "64931296137833d7ec7689cd",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64931296137833d7ec7689cd/TBihNdp1ZwIWjhfAWjRr6.jpeg",
"fullname": "Huajun Chen",
"isPro": false,
"type": "user",
"user": "huajunsir"
}
}
] | 2025-01-16T18:58:06 |
OmniThink: Expanding Knowledge Boundaries in Machine Writing through
Thinking
|
Machine writing with large language models often relies on
retrieval-augmented generation. However, these approaches remain confined
within the boundaries of the model's predefined scope, limiting the generation
of content with rich information. Specifically, vanilla-retrieved information
tends to lack depth and utility and suffers from redundancy, which negatively
impacts the quality of generated articles, leading to shallow, repetitive, and
unoriginal outputs. To address these issues, we propose OmniThink, a machine
writing framework that emulates the human-like process of iterative expansion
and reflection. The core idea behind OmniThink is to simulate the cognitive
behavior of learners as they progressively deepen their knowledge of the
topics. Experimental results demonstrate that OmniThink improves the knowledge
density of generated articles without compromising metrics such as coherence
and depth. Human evaluations and expert feedback further highlight the
potential of OmniThink to address real-world challenges in the generation of
long-form articles.
| 47 |
6789f777766dd160379b8a39
| null | null |
|
2025-01-17T01:05:31.701000 |
Exploring the Inquiry-Diagnosis Relationship with Advanced Patient Simulators
| 4 |
{
"_id": "60ec4a33375c1280fb422704",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1633664276374-60ec4a33375c1280fb422704.png",
"followerCount": 5,
"fullname": "Wang Yulong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "wangyulong",
"type": "user"
}
| false | null |
2501.09484
|
[
{
"_id": "6789f2795a84f1087bc9274a",
"hidden": false,
"name": "Zhaocheng Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-17T10:25:21.951Z",
"user": {
"_id": "633e570be7d5ce7bfe037a53",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/633e570be7d5ce7bfe037a53/zV8ULv4Mu7YIGZ8D3JtmK.jpeg",
"fullname": "Zhaocheng Liu",
"isPro": false,
"type": "user",
"user": "zhaocheng"
}
},
{
"_id": "6789f2795a84f1087bc9274b",
"hidden": false,
"name": "Quan Tu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789f2795a84f1087bc9274c",
"hidden": false,
"name": "Wen Ye",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-20T09:29:03.948Z",
"user": {
"_id": "670bd33ec9c7e01d3dbaf59d",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/PCL5O0f3DXjKrc_EvdAgb.png",
"fullname": "yewen",
"isPro": false,
"type": "user",
"user": "yw271227"
}
},
{
"_id": "6789f2795a84f1087bc9274d",
"hidden": false,
"name": "Yu Xiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789f2795a84f1087bc9274e",
"hidden": false,
"name": "Zhishou Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789f2795a84f1087bc9274f",
"hidden": false,
"name": "Hengfu Cui",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789f2795a84f1087bc92750",
"hidden": false,
"name": "Yalun Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789f2795a84f1087bc92751",
"hidden": false,
"name": "Qiang Ju",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-17T10:25:20.100Z",
"user": {
"_id": "62dcdb86d36b2070f928a51e",
"avatarUrl": "/avatars/a341e4305217f8abd14cff97201a24aa.svg",
"fullname": "sdujq",
"isPro": false,
"type": "user",
"user": "sdujq"
}
},
{
"_id": "6789f2795a84f1087bc92752",
"hidden": false,
"name": "Shizheng Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:34:35.155Z",
"user": {
"_id": "64ab92c362b769f936bba203",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64ab92c362b769f936bba203/Kq3Nlnq3DTPwungx8r9G5.jpeg",
"fullname": "Shizheng Li",
"isPro": false,
"type": "user",
"user": "ShizhengLi"
}
},
{
"_id": "6789f2795a84f1087bc92753",
"hidden": true,
"name": "Jian Xie",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:34:48.855Z",
"user": {
"_id": "62d65139667051e0a29bffe7",
"avatarUrl": "/avatars/0252aa2bcd4cf1c8e4b87e5f164b6da5.svg",
"fullname": "Jian Xie",
"isPro": false,
"type": "user",
"user": "hsaest"
}
}
] | 2025-01-16T11:41:14 |
Exploring the Inquiry-Diagnosis Relationship with Advanced Patient
Simulators
|
Online medical consultation (OMC) restricts doctors to gathering patient
information solely through inquiries, making the already complex sequential
decision-making process of diagnosis even more challenging. Recently, the rapid
advancement of large language models has demonstrated a significant potential
to transform OMC. However, most studies have primarily focused on improving
diagnostic accuracy under conditions of relatively sufficient information,
while paying limited attention to the "inquiry" phase of the consultation
process. This lack of focus has left the relationship between "inquiry" and
"diagnosis" insufficiently explored. In this paper, we first extract real
patient interaction strategies from authentic doctor-patient conversations and
use these strategies to guide the training of a patient simulator that closely
mirrors real-world behavior. By inputting medical records into our patient
simulator to simulate patient responses, we conduct extensive experiments to
explore the relationship between "inquiry" and "diagnosis" in the consultation
process. Experimental results demonstrate that inquiry and diagnosis adhere to
Liebig's law: poor inquiry quality limits the effectiveness of diagnosis,
regardless of diagnostic capability, and vice versa. Furthermore, the
experiments reveal significant differences in the inquiry performance of
various models. To investigate this phenomenon, we categorize the inquiry
process into four types: (1) chief complaint inquiry; (2) specification of
known symptoms; (3) inquiry about accompanying symptoms; and (4) gathering
family or medical history. We analyze the distribution of inquiries across the
four types for different models to explore the reasons behind their significant
performance differences. We plan to open-source the weights and related code of
our patient simulator at https://github.com/LIO-H-ZEN/PatientSimulator.
| 19 |
6789f27a5a84f1087bc9279e
| null | null |
|
2025-01-17T00:25:22.582000 |
SynthLight: Portrait Relighting with Diffusion Model by Learning to Re-render Synthetic Faces
| 3 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| true | null |
2501.09756
|
[
{
"_id": "6789e9b78a27185f5084533a",
"hidden": false,
"name": "Sumit Chaturvedi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e9b78a27185f5084533b",
"hidden": false,
"name": "Mengwei Ren",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:35:43.879Z",
"user": {
"_id": "63b48ed5a50cfcefda9dbe67",
"avatarUrl": "/avatars/98a2f07ea6a7ce3792f250cf9fecf402.svg",
"fullname": "Mengwei Ren",
"isPro": false,
"type": "user",
"user": "mengweir"
}
},
{
"_id": "6789e9b78a27185f5084533c",
"hidden": false,
"name": "Yannick Hold-Geoffroy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e9b78a27185f5084533d",
"hidden": false,
"name": "Jingyuan Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e9b78a27185f5084533e",
"hidden": false,
"name": "Julie Dorsey",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e9b78a27185f5084533f",
"hidden": false,
"name": "Zhixin Shu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:36:05.684Z",
"user": {
"_id": "62a8efd508a7ea93ff18785a",
"avatarUrl": "/avatars/ff259233d437833a304329bb973a5a04.svg",
"fullname": "Zhixin Shu",
"isPro": false,
"type": "user",
"user": "zhixinshu"
}
}
] | 2025-01-16T18:59:48 |
SynthLight: Portrait Relighting with Diffusion Model by Learning to
Re-render Synthetic Faces
|
We introduce SynthLight, a diffusion model for portrait relighting. Our
approach frames image relighting as a re-rendering problem, where pixels are
transformed in response to changes in environmental lighting conditions. Using
a physically-based rendering engine, we synthesize a dataset to simulate this
lighting-conditioned transformation with 3D head assets under varying lighting.
We propose two training and inference strategies to bridge the gap between the
synthetic and real image domains: (1) multi-task training that takes advantage
of real human portraits without lighting labels; (2) an inference time
diffusion sampling procedure based on classifier-free guidance that leverages
the input portrait to better preserve details. Our method generalizes to
diverse real photographs and produces realistic illumination effects, including
specular highlights and cast shadows, while preserving the subject's identity.
Our quantitative experiments on Light Stage data demonstrate results comparable
to state-of-the-art relighting methods. Our qualitative results on in-the-wild
images showcase rich and unprecedented illumination effects. Project Page:
https://vrroom.github.io/synthlight/
| 19 |
6789e9bd8a27185f50845514
| null | null |
|
2025-01-17T00:22:48.005000 |
FAST: Efficient Action Tokenization for Vision-Language-Action Models
| 5 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.09747
|
[
{
"_id": "6789e918e1b3fda757de947f",
"hidden": false,
"name": "Karl Pertsch",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:36:46.947Z",
"user": {
"_id": "65d4c1ff29b4ac81c265e6e6",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/65d4c1ff29b4ac81c265e6e6/GXgs28okxGfpBdqhxov9-.png",
"fullname": "Karl Pertsch",
"isPro": false,
"type": "user",
"user": "KarlP"
}
},
{
"_id": "6789e918e1b3fda757de9480",
"hidden": false,
"name": "Kyle Stachowicz",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:36:59.830Z",
"user": {
"_id": "6307eabda670ed10f9d2571f",
"avatarUrl": "/avatars/b0811b25ed4fb7e48dd380898049c764.svg",
"fullname": "Kyle Stachowicz",
"isPro": false,
"type": "user",
"user": "kylestach"
}
},
{
"_id": "6789e918e1b3fda757de9481",
"hidden": false,
"name": "Brian Ichter",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:37:05.229Z",
"user": {
"_id": "633a26ab474cfeb1a864dc56",
"avatarUrl": "/avatars/cd40accd894fc78810fc0d5108f413e9.svg",
"fullname": "Brian I",
"isPro": false,
"type": "user",
"user": "brianichter"
}
},
{
"_id": "6789e918e1b3fda757de9482",
"hidden": false,
"name": "Danny Driess",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:37:10.829Z",
"user": {
"_id": "67225875b46c703941fa7967",
"avatarUrl": "/avatars/7c89fbdd9a135210209bcd0cbfe7988a.svg",
"fullname": "Danny Driess",
"isPro": false,
"type": "user",
"user": "dannydriess"
}
},
{
"_id": "6789e918e1b3fda757de9483",
"hidden": false,
"name": "Suraj Nair",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e918e1b3fda757de9484",
"hidden": false,
"name": "Quan Vuong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e918e1b3fda757de9485",
"hidden": false,
"name": "Oier Mees",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:37:30.435Z",
"user": {
"_id": "663a7190d9dee283e3f56150",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/663a7190d9dee283e3f56150/zoYBdFIQPGWTS0R7bpg07.jpeg",
"fullname": "Oier Mees",
"isPro": false,
"type": "user",
"user": "oier-mees"
}
},
{
"_id": "6789e918e1b3fda757de9486",
"hidden": false,
"name": "Chelsea Finn",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:37:36.079Z",
"user": {
"_id": "64ac22a9193f0a807deb673d",
"avatarUrl": "/avatars/fcac4912678ad3cb6e817d40bdee9aea.svg",
"fullname": "Chelsea Finn",
"isPro": false,
"type": "user",
"user": "cbfinn"
}
},
{
"_id": "6789e918e1b3fda757de9487",
"hidden": false,
"name": "Sergey Levine",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:37:41.213Z",
"user": {
"_id": "665ce54120a307a3754849dd",
"avatarUrl": "/avatars/e698726e9be61dd50ce2efe372ed5dac.svg",
"fullname": "Sergey Levine",
"isPro": false,
"type": "user",
"user": "svlevine"
}
}
] | 2025-01-16T18:57:04 |
FAST: Efficient Action Tokenization for Vision-Language-Action Models
|
Autoregressive sequence models, such as Transformer-based vision-language-action
(VLA) policies, can be tremendously effective for capturing complex and
generalizable robotic behaviors. However, such models require us to choose a
tokenization of our continuous action signals, which determines how the
discrete symbols predicted by the model map to continuous robot actions. We
find that current approaches for robot action tokenization, based on simple
per-dimension, per-timestep binning schemes, typically perform poorly when
learning dexterous skills from high-frequency robot data. To address this
challenge, we propose a new compression-based tokenization scheme for robot
actions, based on the discrete cosine transform. Our tokenization approach,
Frequency-space Action Sequence Tokenization (FAST), enables us to train
autoregressive VLAs for highly dexterous and high-frequency tasks where
standard discretization methods fail completely. Based on FAST, we release
FAST+, a universal robot action tokenizer, trained on 1M real robot action
trajectories. It can be used as a black-box tokenizer for a wide range of robot
action sequences, with diverse action spaces and control frequencies. Finally,
we show that, when combined with the pi0 VLA, our method can scale to training
on 10k hours of robot data and match the performance of diffusion VLAs, while
reducing training time by up to 5x.
| 23 |
6789e91ae1b3fda757de94d3
| null | null |
|
2025-01-17T00:18:47.834000 |
AnyStory: Towards Unified Single and Multiple Subject Personalization in Text-to-Image Generation
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.09503
|
[
{
"_id": "6789e8178a14e7d6ea183dd3",
"hidden": false,
"name": "Junjie He",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:15:25.079Z",
"user": {
"_id": "63b54949889aa6707f08bfd7",
"avatarUrl": "/avatars/52101bb4289ba1e0b33b925a1a9536c0.svg",
"fullname": "Junjie He",
"isPro": false,
"type": "user",
"user": "Junjie96"
}
},
{
"_id": "6789e8178a14e7d6ea183dd4",
"hidden": false,
"name": "Yuxiang Tuo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:47:02.023Z",
"user": {
"_id": "64130f4014c9a170ae873bd8",
"avatarUrl": "/avatars/97f4609bbfe81660b28a12f3b135cdc3.svg",
"fullname": "tuoyuxiang",
"isPro": false,
"type": "user",
"user": "tuoyuxiang"
}
},
{
"_id": "6789e8178a14e7d6ea183dd5",
"hidden": false,
"name": "Binghui Chen",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-28T03:05:45.786Z",
"user": {
"_id": "63b66df8889aa6707f167f5d",
"avatarUrl": "/avatars/62248a70698a95f74ce7267ca42cacef.svg",
"fullname": "ashui",
"isPro": false,
"type": "user",
"user": "ashui"
}
},
{
"_id": "6789e8178a14e7d6ea183dd6",
"hidden": false,
"name": "Chongyang Zhong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e8178a14e7d6ea183dd7",
"hidden": false,
"name": "Yifeng Geng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:47:21.325Z",
"user": {
"_id": "659caf6b0030e8faffb00c41",
"avatarUrl": "/avatars/dde52bcf694120e59307b4ab8a7eeb33.svg",
"fullname": "gengyifeng",
"isPro": false,
"type": "user",
"user": "gengyifeng"
}
},
{
"_id": "6789e8178a14e7d6ea183dd8",
"hidden": false,
"name": "Liefeng Bo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:47:29.725Z",
"user": {
"_id": "63d0cc736b985b0f25d0412c",
"avatarUrl": "/avatars/3eb8c79f9a7c4c819038ea7b04e323dd.svg",
"fullname": "Bo",
"isPro": false,
"type": "user",
"user": "Liefeng"
}
}
] | 2025-01-16T12:28:39 |
AnyStory: Towards Unified Single and Multiple Subject Personalization in
Text-to-Image Generation
|
Recently, large-scale generative models have demonstrated outstanding
text-to-image generation capabilities. However, generating high-fidelity
personalized images with specific subjects still presents challenges,
especially in cases involving multiple subjects. In this paper, we propose
AnyStory, a unified approach for personalized subject generation. AnyStory not
only achieves high-fidelity personalization for single subjects, but also for
multiple subjects, without sacrificing subject fidelity. Specifically, AnyStory
models the subject personalization problem in an "encode-then-route" manner. In
the encoding step, AnyStory utilizes a universal and powerful image encoder,
i.e., ReferenceNet, in conjunction with a CLIP vision encoder to achieve
high-fidelity encoding of subject features. In the routing step, AnyStory
utilizes a decoupled instance-aware subject router to accurately perceive and
predict the potential location of the corresponding subject in the latent
space, and guide the injection of subject conditions. Detailed experimental
results demonstrate the excellent performance of our method in retaining
subject details, aligning text descriptions, and personalizing for multiple
subjects. The project page is at https://aigcdesigngroup.github.io/AnyStory/ .
| 13 |
6789e81b8a14e7d6ea183f16
| null | null |
|
2025-01-17T00:14:48.500000 |
CaPa: Carve-n-Paint Synthesis for Efficient 4K Textured Mesh Generation
| 3 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.09433
|
[
{
"_id": "6789e73d7a7ce2b07045dd3e",
"hidden": false,
"name": "Hwan Heo",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-20T09:29:05.735Z",
"user": {
"_id": "65f9520fee1763d31af63615",
"avatarUrl": "/avatars/6f7ad9f02a0f19b4452de4dbcafc2715.svg",
"fullname": "Hwan Heo",
"isPro": false,
"type": "user",
"user": "hhhwan"
}
},
{
"_id": "6789e73d7a7ce2b07045dd3f",
"hidden": false,
"name": "Jangyeong Kim",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:48:03.141Z",
"user": {
"_id": "634c1f9bb6628cbe2861dcc2",
"avatarUrl": "/avatars/dd48dff0b639123c605b5c0ee10577d7.svg",
"fullname": "Jangyeong.Kim",
"isPro": false,
"type": "user",
"user": "longshiine"
}
},
{
"_id": "6789e73d7a7ce2b07045dd40",
"hidden": false,
"name": "Seongyeong Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e73d7a7ce2b07045dd41",
"hidden": false,
"name": "Jeong A Wi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e73d7a7ce2b07045dd42",
"hidden": false,
"name": "Junyoung Choi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e73d7a7ce2b07045dd43",
"hidden": false,
"name": "Sangjun Ahn",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-16T10:03:15 |
CaPa: Carve-n-Paint Synthesis for Efficient 4K Textured Mesh Generation
|
The synthesis of high-quality 3D assets from textual or visual inputs has
become a central objective in modern generative modeling. Despite the
proliferation of 3D generation algorithms, they frequently grapple with
challenges such as multi-view inconsistency, slow generation times, low
fidelity, and surface reconstruction problems. While prior studies have
addressed some of these issues, a comprehensive solution remains elusive. In
this paper, we introduce CaPa, a carve-and-paint framework that
generates high-fidelity 3D assets efficiently. CaPa employs a two-stage
process, decoupling geometry generation from texture synthesis. Initially, a 3D
latent diffusion model generates geometry guided by multi-view inputs, ensuring
structural consistency across perspectives. Subsequently, leveraging a novel,
model-agnostic Spatially Decoupled Attention, the framework synthesizes
high-resolution textures (up to 4K) for a given geometry. Furthermore, we
propose a 3D-aware occlusion inpainting algorithm that fills untextured
regions, resulting in cohesive results across the entire model. This pipeline
generates high-quality 3D assets in less than 30 seconds, providing
ready-to-use outputs for commercial applications. Experimental results
demonstrate that CaPa excels in both texture fidelity and geometric stability,
establishing a new standard for practical, scalable 3D asset generation.
| 18 |
6789e7437a7ce2b07045df3b
| null | null |
|
2025-01-16T23:54:42.823000 |
Learnings from Scaling Visual Tokenizers for Reconstruction and Generation
| 4 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| true | null |
2501.09755
|
[
{
"_id": "6789e275810f471d6aa3d2fa",
"hidden": false,
"name": "Philippe Hansen-Estruch",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e275810f471d6aa3d2fb",
"hidden": false,
"name": "David Yan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e275810f471d6aa3d2fc",
"hidden": false,
"name": "Ching-Yao Chung",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e275810f471d6aa3d2fd",
"hidden": false,
"name": "Orr Zohar",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:38:14.289Z",
"user": {
"_id": "648c9605565e3a44f3c9bb7b",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/648c9605565e3a44f3c9bb7b/W5chvk17Zol6-2QSWkFVR.jpeg",
"fullname": "Orr Zohar",
"isPro": true,
"type": "user",
"user": "orrzohar"
}
},
{
"_id": "6789e275810f471d6aa3d2fe",
"hidden": false,
"name": "Jialiang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e275810f471d6aa3d2ff",
"hidden": false,
"name": "Tingbo Hou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:38:38.454Z",
"user": {
"_id": "655846d7ed8df83128f5826a",
"avatarUrl": "/avatars/d7ce174d7d1b8614d5f6f071225c0057.svg",
"fullname": "Hou",
"isPro": false,
"type": "user",
"user": "Tingbo"
}
},
{
"_id": "6789e275810f471d6aa3d300",
"hidden": false,
"name": "Tao Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e275810f471d6aa3d301",
"hidden": false,
"name": "Sriram Vishwanath",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e275810f471d6aa3d302",
"hidden": false,
"name": "Peter Vajda",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e275810f471d6aa3d303",
"hidden": false,
"name": "Xinlei Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:38:56.604Z",
"user": {
"_id": "63e58e3a006a775275e59e41",
"avatarUrl": "/avatars/75262a35b27a2ae1939df9118120d99e.svg",
"fullname": "Xinlei Chen",
"isPro": false,
"type": "user",
"user": "endernewton"
}
}
] | 2025-01-16T18:59:04 |
Learnings from Scaling Visual Tokenizers for Reconstruction and
Generation
|
Visual tokenization via auto-encoding empowers state-of-the-art image and
video generative models by compressing pixels into a latent space. Although
scaling Transformer-based generators has been central to recent advances, the
tokenizer component itself is rarely scaled, leaving open questions about how
auto-encoder design choices influence both its objective of reconstruction and
downstream generative performance. Our work explores scaling in auto-encoders
to fill this gap. To facilitate this exploration,
we replace the typical convolutional backbone with an enhanced Vision
Transformer architecture for Tokenization (ViTok). We train ViTok on
large-scale image and video datasets far exceeding ImageNet-1K, removing data
constraints on tokenizer scaling. We first study how scaling the auto-encoder
bottleneck affects both reconstruction and generation -- and find that while it
is highly correlated with reconstruction, its relationship with generation is
more complex. We next explore the effect of separately scaling the
auto-encoders' encoder and decoder on reconstruction and generation
performance. Crucially, we find that scaling the encoder yields minimal gains
for either reconstruction or generation, while scaling the decoder boosts
reconstruction but the benefits for generation are mixed. Building on our
exploration, we design ViTok as a lightweight auto-encoder that achieves
competitive performance with state-of-the-art auto-encoders on ImageNet-1K and
COCO reconstruction tasks (256p and 512p) while outperforming existing
auto-encoders on 16-frame 128p video reconstruction for UCF-101, all with 2-5x
fewer FLOPs. When integrated with Diffusion Transformers, ViTok demonstrates
competitive performance on image generation for ImageNet-1K and sets new
state-of-the-art benchmarks for class-conditional video generation on UCF-101.
| 34 |
6789e27a810f471d6aa3d4e2
| null | null |
|
2025-01-16T23:52:15.279000 |
Inference-Time Scaling for Diffusion Models beyond Scaling Denoising Steps
| 4 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| true | null |
2501.09732
|
[
{
"_id": "6789e1dfaa9f64e4af498482",
"hidden": false,
"name": "Nanye Ma",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:32:46.832Z",
"user": {
"_id": "66c398fc5f4422f886b71a00",
"avatarUrl": "/avatars/9cd690d7857de1b926ddcdc2bccbfdfa.svg",
"fullname": "Nanye Ma",
"isPro": false,
"type": "user",
"user": "willllis"
}
},
{
"_id": "6789e1dfaa9f64e4af498483",
"hidden": false,
"name": "Shangyuan Tong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:32:52.717Z",
"user": {
"_id": "64d39e0f5ceebf9c30359082",
"avatarUrl": "/avatars/7c21f18874498a793dd2275277d4dafb.svg",
"fullname": "Shangyuan Tong",
"isPro": false,
"type": "user",
"user": "sytong"
}
},
{
"_id": "6789e1dfaa9f64e4af498484",
"hidden": false,
"name": "Haolin Jia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e1dfaa9f64e4af498485",
"hidden": false,
"name": "Hexiang Hu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:33:08.220Z",
"user": {
"_id": "643441ccd55dea2d0ec2c309",
"avatarUrl": "/avatars/82e99d445e2b513ad7270fa852adbcbb.svg",
"fullname": "Hexiang Hu",
"isPro": false,
"type": "user",
"user": "hexianghu"
}
},
{
"_id": "6789e1dfaa9f64e4af498486",
"hidden": false,
"name": "Yu-Chuan Su",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:33:14.078Z",
"user": {
"_id": "6729508649696b4e066b0506",
"avatarUrl": "/avatars/246e13b236edf039f2ca27e4f4051be8.svg",
"fullname": "Yu-Chuan Su",
"isPro": false,
"type": "user",
"user": "ycsu"
}
},
{
"_id": "6789e1dfaa9f64e4af498487",
"hidden": false,
"name": "Mingda Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:33:35.040Z",
"user": {
"_id": "65676b1711b2bbd6c2ab093a",
"avatarUrl": "/avatars/bdd4364057c6b9e54d7ec451ad1ffb64.svg",
"fullname": "mingdazhang",
"isPro": false,
"type": "user",
"user": "mingdazhang"
}
},
{
"_id": "6789e1dfaa9f64e4af498488",
"hidden": false,
"name": "Xuan Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e1dfaa9f64e4af498489",
"hidden": false,
"name": "Yandong Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:33:47.975Z",
"user": {
"_id": "6338914220fc636fd8b27fb8",
"avatarUrl": "/avatars/4c5a0c925c0f0296e02aa498218f339d.svg",
"fullname": "li",
"isPro": false,
"type": "user",
"user": "yandong"
}
},
{
"_id": "6789e1dfaa9f64e4af49848a",
"hidden": false,
"name": "Tommi Jaakkola",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e1dfaa9f64e4af49848b",
"hidden": false,
"name": "Xuhui Jia",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:33:58.031Z",
"user": {
"_id": "634ce7ce05f736dff37aae5f",
"avatarUrl": "/avatars/1405c76a38f7ea497e4439f0e4e786a8.svg",
"fullname": "Xuhui Jia",
"isPro": false,
"type": "user",
"user": "Jxh-cuit"
}
},
{
"_id": "6789e1dfaa9f64e4af49848c",
"hidden": false,
"name": "Saining Xie",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:34:03.297Z",
"user": {
"_id": "6596422646624a86ff3b3bda",
"avatarUrl": "/avatars/216e12b77e45ac5f1fa20932f5745411.svg",
"fullname": "Saining Xie",
"isPro": false,
"type": "user",
"user": "sainx"
}
}
] | 2025-01-16T18:30:37 |
Inference-Time Scaling for Diffusion Models beyond Scaling Denoising
Steps
|
Generative models have made significant impacts across various domains,
largely due to their ability to scale during training by increasing data,
computational resources, and model size, a phenomenon characterized by the
scaling laws. Recent research has begun to explore inference-time scaling
behavior in Large Language Models (LLMs), revealing how performance can further
improve with additional computation during inference. Unlike LLMs, diffusion
models inherently possess the flexibility to adjust inference-time computation
via the number of denoising steps, although the performance gains typically
flatten after a few dozen. In this work, we explore the inference-time scaling
behavior of diffusion models beyond increasing denoising steps and investigate
how the generation performance can further improve with increased computation.
Specifically, we consider a search problem aimed at identifying better noises
for the diffusion sampling process. We structure the design space along two
axes: the verifiers used to provide feedback, and the algorithms used to find
better noise candidates. Through extensive experiments on class-conditioned and
text-conditioned image generation benchmarks, our findings reveal that
increasing inference-time compute leads to substantial improvements in the
quality of samples generated by diffusion models, and, given the complex
nature of images, combinations of the components in the framework can be
chosen to suit different application scenarios.
| 70 |
6789e1e4aa9f64e4af498679
| null | null |
|
2025-01-16T23:45:20.697000 |
Towards Large Reasoning Models: A Survey of Reinforced Reasoning with Large Language Models
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.09686
|
[
{
"_id": "6789e04652739943c940af38",
"hidden": false,
"name": "Fengli Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e04652739943c940af39",
"hidden": false,
"name": "Qianyue Hao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:42:15.441Z",
"user": {
"_id": "6583e2b283a9e1460c6fb1e0",
"avatarUrl": "/avatars/a949165b1cec5e1d1d55f3af98182156.svg",
"fullname": "Qianyue Hao",
"isPro": false,
"type": "user",
"user": "haohao11"
}
},
{
"_id": "6789e04652739943c940af3a",
"hidden": false,
"name": "Zefang Zong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:42:26.065Z",
"user": {
"_id": "64feba7efa64465422ce3003",
"avatarUrl": "/avatars/abdc7a3748f6a4e15ebc6aa8d616c87d.svg",
"fullname": "zongzefang",
"isPro": false,
"type": "user",
"user": "zzfoutofspace"
}
},
{
"_id": "6789e04652739943c940af3b",
"hidden": false,
"name": "Jingwei Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-30T08:41:20.675Z",
"user": {
"_id": "66e59ff27d43e55cfd31bec9",
"avatarUrl": "/avatars/e046b51bf2a530255d3fb707991546a4.svg",
"fullname": "Chen Gao",
"isPro": false,
"type": "user",
"user": "chgao96"
}
},
{
"_id": "6789e04652739943c940af3c",
"hidden": false,
"name": "Yunke Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:43:05.430Z",
"user": {
"_id": "64071cbc2e309e65451f87b6",
"avatarUrl": "/avatars/2201244380d221b9db5661f20510d853.svg",
"fullname": "yunke zhang",
"isPro": false,
"type": "user",
"user": "berserkerko"
}
},
{
"_id": "6789e04652739943c940af3d",
"hidden": false,
"name": "Jingyi Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e04652739943c940af3e",
"hidden": false,
"name": "Xiaochong Lan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e04652739943c940af3f",
"hidden": false,
"name": "Jiahui Gong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:43:44.415Z",
"user": {
"_id": "6509a040d95f30b9dcdbf789",
"avatarUrl": "/avatars/4e4b441e22a1f2c3387e5b981ba6bbbe.svg",
"fullname": "Jiahui Gong",
"isPro": false,
"type": "user",
"user": "zhazhahui7"
}
},
{
"_id": "6789e04652739943c940af40",
"hidden": false,
"name": "Tianjian Ouyang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:43:50.599Z",
"user": {
"_id": "6566fcb5118497d0af91dc3b",
"avatarUrl": "/avatars/e081d7d1b1657245cd818e5417cdcb2e.svg",
"fullname": "TIANJIAN OUYANG",
"isPro": false,
"type": "user",
"user": "Ouyangtj"
}
},
{
"_id": "6789e04652739943c940af41",
"hidden": false,
"name": "Fanjin Meng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e04652739943c940af42",
"hidden": false,
"name": "Chenyang Shao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:44:14.869Z",
"user": {
"_id": "654dd731671d2c1ced0539e1",
"avatarUrl": "/avatars/7eaa5eee16aac9105cd2af7ee84841c3.svg",
"fullname": "shaochenyang",
"isPro": false,
"type": "user",
"user": "l-1-l"
}
},
{
"_id": "6789e04652739943c940af43",
"hidden": false,
"name": "Yuwei Yan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:44:24.670Z",
"user": {
"_id": "668e965959d9ffef7e8b035a",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/tqjvWWxXgTkNhYeNhoPJQ.jpeg",
"fullname": "Yuwei Yan",
"isPro": false,
"type": "user",
"user": "PinkGranite"
}
},
{
"_id": "6789e04652739943c940af44",
"hidden": false,
"name": "Qinglong Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:44:31.561Z",
"user": {
"_id": "663ef727f9c2d3c9da3b8d4b",
"avatarUrl": "/avatars/4bb431e410078f7ba2487597d1beb589.svg",
"fullname": "qinglong yang",
"isPro": false,
"type": "user",
"user": "m912218831"
}
},
{
"_id": "6789e04652739943c940af45",
"hidden": false,
"name": "Yiwen Song",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:44:57.348Z",
"user": {
"_id": "6517326e335b4c728b3406a4",
"avatarUrl": "/avatars/cde4a672778fc69c172e6bdeee2d540a.svg",
"fullname": "Yiwen Song",
"isPro": false,
"type": "user",
"user": "yiwen-song"
}
},
{
"_id": "6789e04652739943c940af46",
"hidden": false,
"name": "Sijian Ren",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e04652739943c940af47",
"hidden": false,
"name": "Xinyuan Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e04652739943c940af48",
"hidden": false,
"name": "Yu Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789e04652739943c940af49",
"hidden": false,
"name": "Jie Feng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-20T09:29:08.372Z",
"user": {
"_id": "6465d3bd63e7e09dd02e95c3",
"avatarUrl": "/avatars/b2798bd5f8368f956bf7fab79d9432f0.svg",
"fullname": "Jie",
"isPro": false,
"type": "user",
"user": "JJ-TMT"
}
},
{
"_id": "6789e04652739943c940af4a",
"hidden": false,
"name": "Chen Gao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:45:59.888Z",
"user": {
"_id": "65049f41c6ae3df8f28cfe96",
"avatarUrl": "/avatars/a653ed4a215c4bd34710a1cee9f4d9cd.svg",
"fullname": "Chen Gao",
"isPro": false,
"type": "user",
"user": "gaochen315"
}
},
{
"_id": "6789e04652739943c940af4b",
"hidden": false,
"name": "Yong Li",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-16T17:37:58 |
Towards Large Reasoning Models: A Survey of Reinforced Reasoning with
Large Language Models
|
Language has long been conceived as an essential tool for human reasoning.
The breakthrough of Large Language Models (LLMs) has sparked significant
research interest in leveraging these models to tackle complex reasoning tasks.
Researchers have moved beyond simple autoregressive token generation by
introducing the concept of "thought" -- a sequence of tokens representing
intermediate steps in the reasoning process. This innovative paradigm enables
LLMs to mimic complex human reasoning processes, such as tree search and
reflective thinking. Recently, an emerging trend of learning to reason has
applied reinforcement learning (RL) to train LLMs to master reasoning
processes. This approach enables the automatic generation of high-quality
reasoning trajectories through trial-and-error search algorithms, significantly
expanding LLMs' reasoning capacity by providing substantially more training
data. Furthermore, recent studies demonstrate that encouraging LLMs to "think"
with more tokens during test-time inference can further significantly boost
reasoning accuracy. Therefore, train-time and test-time scaling combine to
chart a new research frontier -- a path toward Large Reasoning Models. The
introduction of OpenAI's o1 series marks a significant milestone in this
research direction. In this survey, we present a comprehensive review of recent
progress in LLM reasoning. We begin by introducing the foundational background
of LLMs and then explore the key technical components driving the development
of large reasoning models, with a focus on automated data construction,
learning-to-reason techniques, and test-time scaling. We also analyze popular
open-source projects aimed at building large reasoning models, and conclude with open
challenges and future research directions.
| 37 |
6789e04752739943c940afa5
| null | null |
|
2025-01-16T23:16:25.298000 |
RLHS: Mitigating Misalignment in RLHF with Hindsight Simulation
| 2 |
{
"_id": "650237b5d1b0b0db4f29ae8a",
"avatarUrl": "/avatars/8f92cf8f3f1ddb45c2c58c4a59ce4633.svg",
"followerCount": null,
"fullname": "KAIQU LIANG",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "kaiquliang",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/650237b5d1b0b0db4f29ae8a/TdAsH0rQE1qFCphiura5C.png"
] |
2501.08617
|
[
{
"_id": "6789d842f16a4dec461a2040",
"hidden": false,
"name": "Kaiqu Liang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-17T10:39:14.928Z",
"user": {
"_id": "650237b5d1b0b0db4f29ae8a",
"avatarUrl": "/avatars/8f92cf8f3f1ddb45c2c58c4a59ce4633.svg",
"fullname": "KAIQU LIANG",
"isPro": false,
"type": "user",
"user": "kaiquliang"
}
},
{
"_id": "6789d842f16a4dec461a2041",
"hidden": false,
"name": "Haimin Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789d842f16a4dec461a2042",
"hidden": false,
"name": "Ryan Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789d842f16a4dec461a2043",
"hidden": false,
"name": "Thomas L. Griffiths",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6789d842f16a4dec461a2044",
"hidden": false,
"name": "Jaime Fernández Fisac",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-15T06:33:15 |
RLHS: Mitigating Misalignment in RLHF with Hindsight Simulation
|
Generative AI systems like foundation models (FMs) must align well with human
values to ensure their behavior is helpful and trustworthy. While Reinforcement
Learning from Human Feedback (RLHF) has shown promise for optimizing model
performance using human judgments, existing RLHF pipelines predominantly rely
on immediate feedback, which can fail to accurately reflect the downstream
impact of an interaction on users' utility. We demonstrate that feedback based
on evaluators' foresight estimates of downstream consequences systematically
induces Goodhart's Law dynamics, incentivizing misaligned behaviors like
sycophancy and deception and ultimately degrading user outcomes. To alleviate
this, we propose decoupling evaluation from prediction by refocusing RLHF on
hindsight feedback. Our theoretical analysis reveals that conditioning
evaluator feedback on downstream observations mitigates misalignment and
improves expected human utility, even when these observations are simulated by
the AI system itself. To leverage this insight in a practical alignment
algorithm, we introduce Reinforcement Learning from Hindsight Simulation
(RLHS), which first simulates plausible consequences and then elicits feedback
to assess what behaviors were genuinely beneficial in hindsight. We apply RLHS
to two widely-employed online and offline preference optimization methods --
Proximal Policy Optimization (PPO) and Direct Preference Optimization (DPO) --
and show empirically that misalignment is significantly reduced with both
methods. Through an online human user study, we show that RLHS consistently
outperforms RLHF in helping users achieve their goals and earns higher
satisfaction ratings, despite being trained solely with simulated hindsight
feedback. These results underscore the importance of focusing on long-term
consequences, even simulated ones, to mitigate misalignment in RLHF.
| 10 |
6789d844f16a4dec461a20dc
| null | null |
|
2025-01-16T12:01:52.392000 |
Beyond Sight: Finetuning Generalist Robot Policies with Heterogeneous Sensors via Language Grounding
| 2 |
{
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
}
| false | null |
2501.04693
|
[
{
"_id": "6786a3632631abf6966ce890",
"hidden": false,
"name": "Joshua Jones",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6786a3632631abf6966ce891",
"hidden": false,
"name": "Oier Mees",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6786a3632631abf6966ce892",
"hidden": false,
"name": "Carmelo Sferrazza",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6786a3632631abf6966ce893",
"hidden": false,
"name": "Kyle Stachowicz",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6786a3632631abf6966ce894",
"hidden": false,
"name": "Pieter Abbeel",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6786a3632631abf6966ce895",
"hidden": false,
"name": "Sergey Levine",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-08T18:57:33 |
Beyond Sight: Finetuning Generalist Robot Policies with Heterogeneous
Sensors via Language Grounding
|
Interacting with the world is a multi-sensory experience: achieving effective
general-purpose interaction requires making use of all available modalities --
including vision, touch, and audio -- to fill in gaps from partial observation.
For example, when vision is occluded while reaching into a bag, a robot should
rely on its senses of touch and sound. However, state-of-the-art generalist robot
policies are typically trained on large datasets to predict robot actions
solely from visual and proprioceptive observations. In this work, we propose
FuSe, a novel approach that enables finetuning visuomotor generalist policies
on heterogeneous sensor modalities for which large datasets are not readily
available by leveraging natural language as a common cross-modal grounding. We
combine a multimodal contrastive loss with a sensory-grounded language
generation loss to encode high-level semantics. In the context of robot
manipulation, we show that FuSe enables performing challenging tasks that
require reasoning jointly over modalities such as vision, touch, and sound in a
zero-shot setting, such as multimodal prompting, compositional cross-modal
prompting, and descriptions of objects it interacts with. We show that the same
recipe is applicable to widely different generalist policies, including both
diffusion-based generalist policies and large vision-language-action (VLA)
models. Extensive experiments in the real world show that FuSe is able to
increase success rates by over 20% compared to all considered baselines.
| 3 |
6786a3642631abf6966ce8de
| null | null |
|
2025-01-16T12:00:35.249000 |
MINIMA: Modality Invariant Image Matching
| 2 |
{
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
}
| false | null |
2412.19412
|
[
{
"_id": "67893aed91260c834a4d7cd9",
"hidden": false,
"name": "Xingyu Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67893aed91260c834a4d7cda",
"hidden": false,
"name": "Jiangwei Ren",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-17T17:06:00.673Z",
"user": {
"_id": "668e05b7410a13fa3dabb2c3",
"avatarUrl": "/avatars/c197ce4957404b2e3223d0ac3031581c.svg",
"fullname": "Jiangwei Ren",
"isPro": false,
"type": "user",
"user": "lsxi77777"
}
},
{
"_id": "67893aed91260c834a4d7cdb",
"hidden": false,
"name": "Zizhuo Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67893aed91260c834a4d7cdc",
"hidden": false,
"name": "Xin Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67893aed91260c834a4d7cdd",
"hidden": false,
"name": "Dingkang Liang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T22:13:25.294Z",
"user": {
"_id": "67467b5979406f42a14517e9",
"avatarUrl": "/avatars/a8c82dd45b7c94d48ad7fdf8a87bbe66.svg",
"fullname": "Dingkang Liang",
"isPro": false,
"type": "user",
"user": "dkliang"
}
},
{
"_id": "67893aed91260c834a4d7cde",
"hidden": false,
"name": "Xiang Bai",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2024-12-27T02:39:50 |
MINIMA: Modality Invariant Image Matching
|
Image matching for both cross-view and cross-modality plays a critical role
in multimodal perception. In practice, the modality gap caused by different
imaging systems/styles poses great challenges to the matching task. Existing
works try to extract invariant features for specific modalities and train on
limited datasets, showing poor generalization. In this paper, we present
MINIMA, a unified image matching framework for multiple cross-modal cases.
Without pursuing fancy modules, our MINIMA aims to enhance universal
performance from the perspective of data scaling up. For such purpose, we
propose a simple yet effective data engine that can freely produce a large
dataset containing multiple modalities, rich scenarios, and accurate matching
labels. Specifically, we scale up the modalities from cheap but rich RGB-only
matching data, by means of generative models. Under this setting, the matching
labels and rich diversity of the RGB dataset are well inherited by the
generated multimodal data. Benefiting from this, we construct MD-syn, a new
comprehensive dataset that fills the data gap for general multimodal image
matching. With MD-syn, we can directly train any advanced matching pipeline on
randomly selected modality pairs to obtain cross-modal ability. Extensive
experiments on in-domain and zero-shot matching tasks, including 19
cross-modal cases, demonstrate that our MINIMA can significantly outperform the
baselines and even surpass modality-specific methods. The dataset and code are
available at https://github.com/LSXI7/MINIMA .
| 4 |
67893af091260c834a4d7da8
| null | null |
|
2025-01-16T04:42:23.941000 |
Towards Best Practices for Open Datasets for LLM Training
| 3 |
{
"_id": "5e6a3d4ea9afd5125d9ec064",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1584020801691-noauth.jpeg",
"followerCount": 2307,
"fullname": "Stefan Schweter",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "stefan-it",
"type": "user"
}
| false | null |
2501.08365
|
[
{
"_id": "6788d4566cc82aa3a079f632",
"hidden": false,
"name": "Stefan Baack",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:43:49.638Z",
"user": {
"_id": "645954bafbf75ae1c71fb8aa",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/645954bafbf75ae1c71fb8aa/twyFXx2-M8SruwLwRBV1W.jpeg",
"fullname": "Stefan Baack",
"isPro": false,
"type": "user",
"user": "stefan-baack"
}
},
{
"_id": "6788d4566cc82aa3a079f633",
"hidden": false,
"name": "Stella Biderman",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:43:56.119Z",
"user": {
"_id": "60347d3660e3dd96631c9093",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/60347d3660e3dd96631c9093/B3fuZer5N04tZIAYrLnz4.jpeg",
"fullname": "Stella Biderman",
"isPro": false,
"type": "user",
"user": "stellaathena"
}
},
{
"_id": "6788d4566cc82aa3a079f634",
"hidden": false,
"name": "Kasia Odrozek",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d4566cc82aa3a079f635",
"hidden": false,
"name": "Aviya Skowron",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:44:07.420Z",
"user": {
"_id": "63c5dfc8d5a5cd2043e6f03c",
"avatarUrl": "/avatars/edcfcd9cfb03286d670e6c5743efef6a.svg",
"fullname": "Aviya Skowron",
"isPro": false,
"type": "user",
"user": "avi-skowron"
}
},
{
"_id": "6788d4566cc82aa3a079f636",
"hidden": false,
"name": "Ayah Bdeir",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:44:13.401Z",
"user": {
"_id": "66fe985ff3ba4a0ed6d2bc89",
"avatarUrl": "/avatars/7ded4065561a6bd571fa94a27f328c18.svg",
"fullname": "ayah bdeir",
"isPro": false,
"type": "user",
"user": "ayahbdeir"
}
},
{
"_id": "6788d4566cc82aa3a079f637",
"hidden": false,
"name": "Jillian Bommarito",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d4566cc82aa3a079f638",
"hidden": false,
"name": "Jennifer Ding",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:44:22.871Z",
"user": {
"_id": "62a0da842e30aaf94ebaaa12",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1679073278459-62a0da842e30aaf94ebaaa12.jpeg",
"fullname": "Jennifer Ding",
"isPro": false,
"type": "user",
"user": "jending12"
}
},
{
"_id": "6788d4566cc82aa3a079f639",
"hidden": false,
"name": "Maximilian Gahntz",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d4566cc82aa3a079f63a",
"hidden": false,
"name": "Paul Keller",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:44:50.535Z",
"user": {
"_id": "63692f631e9d04886d555da6",
"avatarUrl": "/avatars/13035d88679a570c20c74b7325d89542.svg",
"fullname": "Paul Keller",
"isPro": false,
"type": "user",
"user": "paulkeller"
}
},
{
"_id": "6788d4566cc82aa3a079f63b",
"hidden": false,
"name": "Pierre-Carl Langlais",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:44:59.405Z",
"user": {
"_id": "64ce091a9e9ca8123d7a42b0",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64ce091a9e9ca8123d7a42b0/OEPggp82RwigxNLL35LgT.jpeg",
"fullname": "Pierre-Carl Langlais",
"isPro": false,
"type": "user",
"user": "Pclanglais"
}
},
{
"_id": "6788d4566cc82aa3a079f63c",
"hidden": false,
"name": "Greg Lindahl",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:45:10.564Z",
"user": {
"_id": "656fbeae7734a829bbd16252",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/656fbeae7734a829bbd16252/s0_NEIevmFM3dncq0gHPn.jpeg",
"fullname": "Greg Lindahl",
"isPro": false,
"type": "user",
"user": "greglindahl"
}
},
{
"_id": "6788d4566cc82aa3a079f63d",
"hidden": false,
"name": "Sebastian Majstorovic",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:45:17.228Z",
"user": {
"_id": "636071759ddc44e710e0f5ce",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/636071759ddc44e710e0f5ce/-gmEhY5PidmSXIQPi2-QB.jpeg",
"fullname": "Sebastian Majstorovic",
"isPro": true,
"type": "user",
"user": "storytracer"
}
},
{
"_id": "6788d4566cc82aa3a079f63e",
"hidden": false,
"name": "Nik Marda",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d4566cc82aa3a079f63f",
"hidden": false,
"name": "Guilherme Penedo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:45:26.371Z",
"user": {
"_id": "62596f9e1c0a084224b93e00",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62596f9e1c0a084224b93e00/X2aLkJ0ofhkXwAg7lXvxD.jpeg",
"fullname": "Guilherme Penedo",
"isPro": false,
"type": "user",
"user": "guipenedo"
}
},
{
"_id": "6788d4566cc82aa3a079f640",
"hidden": false,
"name": "Maarten Van Segbroeck",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d4566cc82aa3a079f641",
"hidden": false,
"name": "Jennifer Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d4566cc82aa3a079f642",
"hidden": false,
"name": "Leandro von Werra",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:46:10.239Z",
"user": {
"_id": "5e48005437cb5b49818287a5",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/5e48005437cb5b49818287a5/4uCXGGui-9QifAT4qelxU.png",
"fullname": "Leandro von Werra",
"isPro": false,
"type": "user",
"user": "lvwerra"
}
},
{
"_id": "6788d4566cc82aa3a079f643",
"hidden": false,
"name": "Mitchell Baker",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:46:17.572Z",
"user": {
"_id": "63741e742b908db633716c80",
"avatarUrl": "/avatars/97fa9158afab29053e47ce3067714bee.svg",
"fullname": "Mitchell Baker",
"isPro": false,
"type": "user",
"user": "HOOisDead"
}
},
{
"_id": "6788d4566cc82aa3a079f644",
"hidden": false,
"name": "Julie Belião",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d4566cc82aa3a079f645",
"hidden": false,
"name": "Kasia Chmielinski",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d4566cc82aa3a079f646",
"hidden": false,
"name": "Marzieh Fadaee",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:46:34.586Z",
"user": {
"_id": "6441042d5d600fb0951a5f99",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6441042d5d600fb0951a5f99/4CbOaYcEz99BtVAQvnGTn.jpeg",
"fullname": "Marzieh Fadaee",
"isPro": false,
"type": "user",
"user": "MarziehFadaee"
}
},
{
"_id": "6788d4566cc82aa3a079f647",
"hidden": false,
"name": "Lisa Gutermuth",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d4566cc82aa3a079f648",
"hidden": false,
"name": "Hynek Kydlíček",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:46:45.941Z",
"user": {
"_id": "626ede24d2fa9e7d598c8709",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/626ede24d2fa9e7d598c8709/JKS8-Y2Jw87EgNQZBRswq.jpeg",
"fullname": "Hynek Kydlicek",
"isPro": true,
"type": "user",
"user": "hynky"
}
},
{
"_id": "6788d4566cc82aa3a079f649",
"hidden": false,
"name": "Greg Leppert",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:46:51.942Z",
"user": {
"_id": "623b6a04ae0ec315881b9c97",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/623b6a04ae0ec315881b9c97/IprOmck5cUmwKB6yoAU4L.jpeg",
"fullname": "Greg Leppert",
"isPro": false,
"type": "user",
"user": "leppert"
}
},
{
"_id": "6788d4566cc82aa3a079f64a",
"hidden": false,
"name": "EM Lewis-Jong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d4566cc82aa3a079f64b",
"hidden": false,
"name": "Solana Larsen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d4566cc82aa3a079f64c",
"hidden": false,
"name": "Shayne Longpre",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:47:15.078Z",
"user": {
"_id": "61f4283a81c4d30f58140242",
"avatarUrl": "/avatars/a1cf1ef1fd442c36ed65c68e51919fed.svg",
"fullname": "Shayne Longpre",
"isPro": false,
"type": "user",
"user": "Shayne"
}
},
{
"_id": "6788d4566cc82aa3a079f64d",
"hidden": false,
"name": "Angela Oduor Lungati",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d4566cc82aa3a079f64e",
"hidden": false,
"name": "Cullen Miller",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:47:27.114Z",
"user": {
"_id": "6571bd30e82edf86f269fac0",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6571bd30e82edf86f269fac0/310j0bc6evOUY6dmtiBq5.jpeg",
"fullname": "Cullen Miller",
"isPro": false,
"type": "user",
"user": "cullenmiller"
}
},
{
"_id": "6788d4566cc82aa3a079f64f",
"hidden": false,
"name": "Victor Miller",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:47:46.377Z",
"user": {
"_id": "638a93f1ed88cf97afd53e42",
"avatarUrl": "/avatars/147ed42f13847b5e4d534511ef5388a3.svg",
"fullname": "Victor Miller",
"isPro": false,
"type": "user",
"user": "victormiller"
}
},
{
"_id": "6788d4566cc82aa3a079f650",
"hidden": false,
"name": "Max Ryabinin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:47:53.155Z",
"user": {
"_id": "607d59fb921db717010c7ccc",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1625736058289-607d59fb921db717010c7ccc.png",
"fullname": "Max Ryabinin",
"isPro": false,
"type": "user",
"user": "mryab"
}
},
{
"_id": "6788d4566cc82aa3a079f651",
"hidden": false,
"name": "Kathleen Siminyu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d4566cc82aa3a079f652",
"hidden": false,
"name": "Andrew Strait",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d4566cc82aa3a079f653",
"hidden": false,
"name": "Mark Surman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d4566cc82aa3a079f654",
"hidden": false,
"name": "Anna Tumadóttir",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d4566cc82aa3a079f655",
"hidden": false,
"name": "Maurice Weber",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:48:17.851Z",
"user": {
"_id": "6329ee3dab49d487dd1439ec",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6329ee3dab49d487dd1439ec/vxGvdBK0XMZaCpc5dGOIa.jpeg",
"fullname": "Maurice Weber",
"isPro": false,
"type": "user",
"user": "mauriceweber"
}
},
{
"_id": "6788d4566cc82aa3a079f656",
"hidden": false,
"name": "Rebecca Weiss",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:48:24.368Z",
"user": {
"_id": "6430688e0d7e3248d0616a64",
"avatarUrl": "/avatars/6d3ff97af3dd0da6f4781523e8cb2778.svg",
"fullname": "Rebecca Weiss",
"isPro": false,
"type": "user",
"user": "rjweiss"
}
},
{
"_id": "6788d4566cc82aa3a079f657",
"hidden": false,
"name": "Lee White",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d4566cc82aa3a079f658",
"hidden": false,
"name": "Thomas Wolf",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:43:36.026Z",
"user": {
"_id": "5df7e9e5da6d0311fd3d53f9",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1583857746553-5df7e9e5da6d0311fd3d53f9.jpeg",
"fullname": "Thomas Wolf",
"isPro": true,
"type": "user",
"user": "thomwolf"
}
}
] | 2025-01-14T17:18:05 |
Towards Best Practices for Open Datasets for LLM Training
|
Many AI companies are training their large language models (LLMs) on data
without the permission of the copyright owners. The permissibility of doing so
varies by jurisdiction: in countries like the EU and Japan, this is allowed
under certain restrictions, while in the United States, the legal landscape is
more ambiguous. Regardless of the legal status, concerns from creative
producers have led to several high-profile copyright lawsuits, and the threat
of litigation is commonly cited as a reason for the recent trend towards
minimizing the information shared about training datasets by both corporate and
public interest actors. This trend in limiting data information causes harm by
hindering transparency, accountability, and innovation in the broader ecosystem
by denying researchers, auditors, and impacted individuals access to the
information needed to understand AI models.
While this could be mitigated by training language models on open access and
public domain data, at the time of writing, there are no such models (trained
at a meaningful scale) due to the substantial technical and sociological
challenges in assembling the necessary corpus. These challenges include
incomplete and unreliable metadata, the cost and complexity of digitizing
physical records, and the diverse set of legal and technical skills required to
ensure relevance and responsibility in a quickly changing landscape. Building
towards a future where AI systems can be trained on openly licensed data that
is responsibly curated and governed requires collaboration across legal,
technical, and policy domains, along with investments in metadata standards,
digitization, and fostering a culture of openness.
| 56 |
6788d4566cc82aa3a079f68d
| null | null |
|
2025-01-16T04:37:06.686000 |
Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography
| 2 |
{
"_id": "6475c2794766357252e69e9f",
"avatarUrl": "/avatars/db428715dfd2239df2aeaaff1282323f.svg",
"followerCount": null,
"fullname": "i",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "iliashum",
"type": "user"
}
| false | null |
2501.08970
|
[
{
"_id": "6788d316f2e4691811b58fd0",
"hidden": false,
"name": "Ilia Shumailov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d316f2e4691811b58fd1",
"hidden": false,
"name": "Daniel Ramage",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:41:48.984Z",
"user": {
"_id": "643c627626f177a3e41912e9",
"avatarUrl": "/avatars/472bbef2630458773cb76bf0d44f0028.svg",
"fullname": "Daniel Ramagem",
"isPro": false,
"type": "user",
"user": "danrama"
}
},
{
"_id": "6788d316f2e4691811b58fd2",
"hidden": false,
"name": "Sarah Meiklejohn",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d316f2e4691811b58fd3",
"hidden": false,
"name": "Peter Kairouz",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788d316f2e4691811b58fd4",
"hidden": false,
"name": "Florian Hartmann",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:42:04.838Z",
"user": {
"_id": "6728eb13efafaef60cff09a5",
"avatarUrl": "/avatars/2effe4ae7f0d0707accfcc308a11c3a3.svg",
"fullname": "Florian Hartmann",
"isPro": false,
"type": "user",
"user": "fhartmann"
}
},
{
"_id": "6788d316f2e4691811b58fd5",
"hidden": false,
"name": "Borja Balle",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:42:21.854Z",
"user": {
"_id": "668298c155ecd182e4b78afb",
"avatarUrl": "/avatars/f08100666c609ae1ef5fc50504feacf4.svg",
"fullname": "Borja Balle",
"isPro": false,
"type": "user",
"user": "bballe"
}
},
{
"_id": "6788d316f2e4691811b58fd6",
"hidden": false,
"name": "Eugene Bagdasarian",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-15T17:28:53 |
Trusted Machine Learning Models Unlock Private Inference for Problems
Currently Infeasible with Cryptography
|
We often interact with untrusted parties. Prioritization of privacy can limit
the effectiveness of these interactions, as achieving certain goals
necessitates sharing private data. Traditionally, addressing this challenge has
involved either seeking trusted intermediaries or constructing cryptographic
protocols that restrict how much data is revealed, such as multi-party
computations or zero-knowledge proofs. While significant advances have been
made in scaling cryptographic approaches, they remain limited in terms of the
size and complexity of applications they can be used for. In this paper, we
argue that capable machine learning models can fulfill the role of a trusted
third party, thus enabling secure computations for applications that were
previously infeasible. In particular, we describe Trusted Capable Model
Environments (TCMEs) as an alternative approach for scaling secure computation,
where capable machine learning model(s) interact under input/output
constraints, with explicit information flow control and explicit statelessness.
This approach aims to achieve a balance between privacy and computational
efficiency, enabling private inference where classical cryptographic solutions
are currently infeasible. We describe a number of use cases that are enabled by
TCME, and show that even some simple classic cryptographic problems can already
be solved with TCME. Finally, we outline current limitations and discuss the
path forward in implementing TCMEs.
| 6 |
6788d31df2e4691811b591f6
| null | null |
|
2025-01-16T03:47:27.618000 |
Parameter-Inverted Image Pyramid Networks for Visual Perception and Multimodal Understanding
| 2 |
{
"_id": "665d4b515fdfe8f923e347a7",
"avatarUrl": "/avatars/d114b24c02dadfca0a8aee104755a8ec.svg",
"followerCount": 3,
"fullname": "Zhaokai Wang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "wzk1015",
"type": "user"
}
| true | null |
2501.07783
|
[
{
"_id": "6788c7156bc665d65caf8b3c",
"hidden": false,
"name": "Zhaokai Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-16T08:49:11.408Z",
"user": {
"_id": "665d4b515fdfe8f923e347a7",
"avatarUrl": "/avatars/d114b24c02dadfca0a8aee104755a8ec.svg",
"fullname": "Zhaokai Wang",
"isPro": false,
"type": "user",
"user": "wzk1015"
}
},
{
"_id": "6788c7156bc665d65caf8b3d",
"hidden": false,
"name": "Xizhou Zhu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:51:26.043Z",
"user": {
"_id": "64ae2359179421d320b1694b",
"avatarUrl": "/avatars/c387a75191005bcaa473091de5383a10.svg",
"fullname": "Xizhou Zhu",
"isPro": false,
"type": "user",
"user": "Einsiedler"
}
},
{
"_id": "6788c7156bc665d65caf8b3e",
"hidden": false,
"name": "Xue Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788c7156bc665d65caf8b3f",
"hidden": false,
"name": "Gen Luo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:51:36.685Z",
"user": {
"_id": "650aac7c23196fb2d86a0b37",
"avatarUrl": "/avatars/418035a2e8f514118bc67d16ee41b6b0.svg",
"fullname": "Gen Luo",
"isPro": false,
"type": "user",
"user": "favor123"
}
},
{
"_id": "6788c7156bc665d65caf8b40",
"hidden": false,
"name": "Hao Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788c7156bc665d65caf8b41",
"hidden": false,
"name": "Changyao Tian",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:51:46.333Z",
"user": {
"_id": "64b7475efa7eabaae5f7ba94",
"avatarUrl": "/avatars/346e53b345ccd9e8557ab8d2ec17a8f3.svg",
"fullname": "Changyao Tian",
"isPro": false,
"type": "user",
"user": "Changyao"
}
},
{
"_id": "6788c7156bc665d65caf8b42",
"hidden": false,
"name": "Wenhan Dou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:51:57.729Z",
"user": {
"_id": "66efe658de163a536aa84178",
"avatarUrl": "/avatars/fddc42450cabf41ca1ab2f70b185f51c.svg",
"fullname": "dou wenhan",
"isPro": false,
"type": "user",
"user": "douwh"
}
},
{
"_id": "6788c7156bc665d65caf8b43",
"hidden": false,
"name": "Junqi Ge",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:52:04.125Z",
"user": {
"_id": "6695ee745e5e72a434fdfbc2",
"avatarUrl": "/avatars/01c0af2dac291ed4f52615e07094ea93.svg",
"fullname": "Junqi Ge",
"isPro": false,
"type": "user",
"user": "gejq16148"
}
},
{
"_id": "6788c7156bc665d65caf8b44",
"hidden": false,
"name": "Lewei Lu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:52:10.641Z",
"user": {
"_id": "65ead3ea908526a39082e641",
"avatarUrl": "/avatars/dcf870695fd56b06ca03d82f831e9019.svg",
"fullname": "Lewei Lu",
"isPro": false,
"type": "user",
"user": "luotto"
}
},
{
"_id": "6788c7156bc665d65caf8b45",
"hidden": false,
"name": "Yu Qiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788c7156bc665d65caf8b46",
"hidden": false,
"name": "Jifeng Dai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:52:18.008Z",
"user": {
"_id": "64686f7172d9180d4ac8b4e4",
"avatarUrl": "/avatars/db67dd6c4b2b41054ddcce5a18ade6f8.svg",
"fullname": "Jifeng Dai",
"isPro": false,
"type": "user",
"user": "daijifeng"
}
}
] | 2025-01-14T01:57:41 |
Parameter-Inverted Image Pyramid Networks for Visual Perception and
Multimodal Understanding
|
Image pyramids are widely adopted in top-performing methods to obtain
multi-scale features for precise visual perception and understanding. However,
current image pyramids use the same large-scale model to process multiple
resolutions of images, leading to significant computational cost. To address
this challenge, we propose a novel network architecture, called
Parameter-Inverted Image Pyramid Networks (PIIP). Specifically, PIIP uses
pretrained models (ViTs or CNNs) as branches to process multi-scale images,
where images of higher resolutions are processed by smaller network branches to
balance computational cost and performance. To integrate information from
different spatial scales, we further propose a novel cross-branch feature
interaction mechanism. To validate PIIP, we apply it to various perception
models and a representative multimodal large language model called LLaVA, and
conduct extensive experiments on various tasks such as object detection,
segmentation, image classification and multimodal understanding. PIIP achieves
superior performance compared to single-branch and existing multi-resolution
approaches with lower computational cost. When applied to InternViT-6B, a
large-scale vision foundation model, PIIP can improve its performance by 1%-2%
on detection and segmentation with only 40%-60% of the original computation,
finally achieving 60.0 box AP on MS COCO and 59.7 mIoU on ADE20K. For
multimodal understanding, our PIIP-LLaVA achieves 73.0% accuracy on TextVQA and
74.5% on MMBench with only 2.8M training data. Our code is released at
https://github.com/OpenGVLab/PIIP.
| 7 |
6788c71a6bc665d65caf8c91
| null | null |
|
2025-01-16T01:14:37.187000 |
Multimodal LLMs Can Reason about Aesthetics in Zero-Shot
| 2 |
{
"_id": "645dbaa6f5760d1530d7580d",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/645dbaa6f5760d1530d7580d/Bqob8arLZoHIgMwNZpL9I.jpeg",
"followerCount": 31,
"fullname": "Simeon Emanuilov",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "s-emanuilov",
"type": "user"
}
| false | null |
2501.09012
|
[
{
"_id": "6788a30ee9e04d1c80fb1d6d",
"hidden": false,
"name": "Ruixiang Jiang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:54:16.366Z",
"user": {
"_id": "6351382f40dffad651ef3fbd",
"avatarUrl": "/avatars/3ac2de7c49086bb37cc4f4bd29ed72f2.svg",
"fullname": "JIANG",
"isPro": false,
"type": "user",
"user": "Ruixiang"
}
},
{
"_id": "6788a30ee9e04d1c80fb1d6e",
"hidden": false,
"name": "Changwen Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:54:22.754Z",
"user": {
"_id": "64f95c12ec913b4f977ba028",
"avatarUrl": "/avatars/b128850ee2a3667d0b1972659cd5f0ae.svg",
"fullname": "Chang Wen Cheng",
"isPro": false,
"type": "user",
"user": "Vincentchang"
}
}
] | 2025-01-15T18:56:22 |
Multimodal LLMs Can Reason about Aesthetics in Zero-Shot
|
We present the first study on how Multimodal LLMs' (MLLMs) reasoning ability
can be elicited to evaluate the aesthetics of artworks. To facilitate this
investigation, we construct MM-StyleBench, a novel high-quality dataset for
benchmarking artistic stylization. We then develop a principled method for
human preference modeling and perform a systematic correlation analysis between
MLLMs' responses and human preference. Our experiments reveal an inherent
hallucination issue of MLLMs in art evaluation, associated with response
subjectivity. ArtCoT is proposed, demonstrating that art-specific task
decomposition and the use of concrete language boost MLLMs' reasoning ability
for aesthetics. Our findings offer valuable insights into MLLMs for art and can
benefit a wide range of downstream applications, such as style transfer and
artistic image generation. Code available at
https://github.com/songrise/MLLM4Art.
| 10 |
6788a30fe9e04d1c80fb1dc5
| null | null |
|
2025-01-16T00:24:15.441000 |
CityDreamer4D: Compositional Generative Model of Unbounded 4D Cities
| 2 |
{
"_id": "63f47b5321eb234ab739e91a",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63f47b5321eb234ab739e91a/vWfFNVtMkHl8gieha5PPd.jpeg",
"followerCount": 12,
"fullname": "Haozhe Xie",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "hzxie",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/63f47b5321eb234ab739e91a/oA3_MgZyDEpWX4ITsGSng.webp"
] |
2501.08983
|
[
{
"_id": "678897a4d42825b51c19d65a",
"hidden": false,
"name": "Haozhe Xie",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:50:10.101Z",
"user": {
"_id": "63f47b5321eb234ab739e91a",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63f47b5321eb234ab739e91a/vWfFNVtMkHl8gieha5PPd.jpeg",
"fullname": "Haozhe Xie",
"isPro": false,
"type": "user",
"user": "hzxie"
}
},
{
"_id": "678897a4d42825b51c19d65b",
"hidden": false,
"name": "Zhaoxi Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:50:16.000Z",
"user": {
"_id": "62fc8cf7ee999004b5a8b982",
"avatarUrl": "/avatars/6c5dda9e58747054a989f077a078f3dc.svg",
"fullname": "Zhaoxi Chen",
"isPro": false,
"type": "user",
"user": "FrozenBurning"
}
},
{
"_id": "678897a4d42825b51c19d65c",
"hidden": false,
"name": "Fangzhou Hong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:51:11.083Z",
"user": {
"_id": "623c530013a63ea865f96c8e",
"avatarUrl": "/avatars/164455a1a94f92b71733fc778c21bd89.svg",
"fullname": "Fangzhou Hong",
"isPro": false,
"type": "user",
"user": "hongfz16"
}
},
{
"_id": "678897a4d42825b51c19d65d",
"hidden": false,
"name": "Ziwei Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:50:34.647Z",
"user": {
"_id": "62ab1ac1d48b4d8b048a3473",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1656826685333-62ab1ac1d48b4d8b048a3473.png",
"fullname": "Ziwei Liu",
"isPro": false,
"type": "user",
"user": "liuziwei7"
}
}
] | 2025-01-15T17:59:56 |
CityDreamer4D: Compositional Generative Model of Unbounded 4D Cities
|
3D scene generation has garnered growing attention in recent years and has
made significant progress. Generating 4D cities is more challenging than 3D
scenes due to the presence of structurally complex, visually diverse objects
like buildings and vehicles, and heightened human sensitivity to distortions in
urban environments. To tackle these issues, we propose CityDreamer4D, a
compositional generative model specifically tailored for generating unbounded
4D cities. Our main insights are 1) 4D city generation should separate dynamic
objects (e.g., vehicles) from static scenes (e.g., buildings and roads), and 2)
all objects in the 4D scene should be composed of different types of neural
fields for buildings, vehicles, and background stuff. Specifically, we propose
Traffic Scenario Generator and Unbounded Layout Generator to produce dynamic
traffic scenarios and static city layouts using a highly compact BEV
representation. Objects in 4D cities are generated by combining stuff-oriented
and instance-oriented neural fields for background stuff, buildings, and
vehicles. To suit the distinct characteristics of background stuff and
instances, the neural fields employ customized generative hash grids and
periodic positional embeddings as scene parameterizations. Furthermore, we
offer a comprehensive suite of datasets for city generation, including OSM,
GoogleEarth, and CityTopia. The OSM dataset provides a variety of real-world
city layouts, while the Google Earth and CityTopia datasets deliver
large-scale, high-quality city imagery complete with 3D instance annotations.
Leveraging its compositional design, CityDreamer4D supports a range of
downstream applications, such as instance editing, city stylization, and urban
simulation, while delivering state-of-the-art performance in generating
realistic 4D cities.
| 20 |
678897a7d42825b51c19d702
|
https://haozhexie.com/project/city-dreamer-4d
|
https://github.com/hzxie/CityDreamer4D
|
|
2025-01-16T00:15:13.323000 |
MMDocIR: Benchmarking Multi-Modal Retrieval for Long Documents
| 2 |
{
"_id": "645dbaa6f5760d1530d7580d",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/645dbaa6f5760d1530d7580d/Bqob8arLZoHIgMwNZpL9I.jpeg",
"followerCount": 31,
"fullname": "Simeon Emanuilov",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "s-emanuilov",
"type": "user"
}
| false | null |
2501.08828
|
[
{
"_id": "67889537383254ec3f017a1d",
"hidden": false,
"name": "Kuicai Dong",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-16T09:41:42.249Z",
"user": {
"_id": "66337cb5bd8ef15a47e72ce0",
"avatarUrl": "/avatars/cc49056fcdc6bdabfe72a0d3de5c196d.svg",
"fullname": "DONG KUICAI",
"isPro": false,
"type": "user",
"user": "daviddongdong"
}
},
{
"_id": "67889537383254ec3f017a1e",
"hidden": false,
"name": "Yujing Chang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67889537383254ec3f017a1f",
"hidden": false,
"name": "Xin Deik Goh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67889537383254ec3f017a20",
"hidden": false,
"name": "Dexun Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67889537383254ec3f017a21",
"hidden": false,
"name": "Ruiming Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67889537383254ec3f017a22",
"hidden": false,
"name": "Yong Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-15T14:30:13 |
MMDocIR: Benchmarking Multi-Modal Retrieval for Long Documents
|
Multi-modal document retrieval is designed to identify and retrieve various
forms of multi-modal content, such as figures, tables, charts, and layout
information from extensive documents. Despite its significance, there is a
notable lack of a robust benchmark to effectively evaluate the performance of
systems in multi-modal document retrieval. To address this gap, this work
introduces a new benchmark, MMDocIR, encompassing two distinct tasks:
page-level and layout-level retrieval. The former focuses on localizing the
most relevant pages within a long document, while the latter targets the
detection of specific layouts, offering a more fine-grained granularity than
whole-page analysis. A layout can refer to a variety of elements such as
textual paragraphs, equations, figures, tables, or charts. The MMDocIR
benchmark comprises a rich dataset featuring expertly annotated labels for
1,685 questions and bootstrapped labels for 173,843 questions, making it a
pivotal resource for advancing multi-modal document retrieval for both training
and evaluation. Through rigorous experiments, we reveal that (i) visual
retrievers significantly outperform their text counterparts, (ii) MMDocIR train
set can effectively benefit the training process of multi-modal document
retrieval, and (iii) text retrievers leveraging VLM-text perform much better
than those using OCR-text. These findings underscore the potential advantages
of integrating visual elements for multi-modal document retrieval.
| 30 |
67889539383254ec3f017a72
| null | null |
|
2025-01-16T00:09:01.580000 |
RepVideo: Rethinking Cross-Layer Representation for Video Generation
| 3 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.08994
|
[
{
"_id": "6788945e2b5050a9154d939d",
"hidden": false,
"name": "Chenyang Si",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:49:14.615Z",
"user": {
"_id": "635f8ed47c05eb9f59963d3a",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/635f8ed47c05eb9f59963d3a/uQf4p9N9pSaFy87Wg9v4k.jpeg",
"fullname": "ChenyangSi",
"isPro": false,
"type": "user",
"user": "ChenyangSi"
}
},
{
"_id": "6788945e2b5050a9154d939e",
"hidden": false,
"name": "Weichen Fan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:49:38.866Z",
"user": {
"_id": "6481764e8af4675862efb22e",
"avatarUrl": "/avatars/fc2e076bc861693f598a528a068a696e.svg",
"fullname": "weichenfan",
"isPro": false,
"type": "user",
"user": "weepiess2383"
}
},
{
"_id": "6788945e2b5050a9154d939f",
"hidden": false,
"name": "Zhengyao Lv",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:49:45.309Z",
"user": {
"_id": "645aff5121ab438e732c47c1",
"avatarUrl": "/avatars/23b2a853139b0f2ae1fa88e2bd4e0056.svg",
"fullname": "Zhengyao Lv",
"isPro": false,
"type": "user",
"user": "cszy98"
}
},
{
"_id": "6788945e2b5050a9154d93a0",
"hidden": false,
"name": "Ziqi Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:49:51.123Z",
"user": {
"_id": "60efe7fa0d920bc7805cada5",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/60efe7fa0d920bc7805cada5/2LBrJBjSCOP5ilZIpWLHl.png",
"fullname": "Ziqi Huang",
"isPro": false,
"type": "user",
"user": "Ziqi"
}
},
{
"_id": "6788945e2b5050a9154d93a1",
"hidden": false,
"name": "Yu Qiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6788945e2b5050a9154d93a2",
"hidden": false,
"name": "Ziwei Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:49:58.339Z",
"user": {
"_id": "62ab1ac1d48b4d8b048a3473",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1656826685333-62ab1ac1d48b4d8b048a3473.png",
"fullname": "Ziwei Liu",
"isPro": false,
"type": "user",
"user": "liuziwei7"
}
}
] | 2025-01-15T18:20:37 |
RepVideo: Rethinking Cross-Layer Representation for Video Generation
|
Video generation has achieved remarkable progress with the introduction of
diffusion models, which have significantly improved the quality of generated
videos. However, recent research has primarily focused on scaling up model
training, while offering limited insights into the direct impact of
representations on the video generation process. In this paper, we initially
investigate the characteristics of features in intermediate layers, finding
substantial variations in attention maps across different layers. These
variations lead to unstable semantic representations and contribute to
cumulative differences between features, which ultimately reduce the similarity
between adjacent frames and negatively affect temporal coherence. To address
this, we propose RepVideo, an enhanced representation framework for
text-to-video diffusion models. By accumulating features from neighboring
layers to form enriched representations, this approach captures more stable
semantic information. These enhanced representations are then used as inputs to
the attention mechanism, thereby improving semantic expressiveness while
ensuring feature consistency across adjacent frames. Extensive experiments
demonstrate that our RepVideo not only significantly enhances the ability to
generate accurate spatial appearances, such as capturing complex spatial
relationships between multiple objects, but also improves temporal consistency
in video generation.
| 15 |
678894602b5050a9154d945b
| null | null |
|
2025-01-16T00:08:30.356000 |
Ouroboros-Diffusion: Exploring Consistent Content Generation in Tuning-free Long Video Diffusion
| 2 |
{
"_id": "645dbaa6f5760d1530d7580d",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/645dbaa6f5760d1530d7580d/Bqob8arLZoHIgMwNZpL9I.jpeg",
"followerCount": 31,
"fullname": "Simeon Emanuilov",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "s-emanuilov",
"type": "user"
}
| false | null |
2501.09019
|
[
{
"_id": "678893e40a465aa0613dc9b2",
"hidden": false,
"name": "Jingyuan Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678893e40a465aa0613dc9b3",
"hidden": false,
"name": "Fuchen Long",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-16T09:41:40.611Z",
"user": {
"_id": "6449f2dfeb7db8f70fb990f8",
"avatarUrl": "/avatars/72c02e754bb35dab05c4a5f1e69c95f1.svg",
"fullname": "Fuchen",
"isPro": false,
"type": "user",
"user": "FireCRT"
}
},
{
"_id": "678893e40a465aa0613dc9b4",
"hidden": false,
"name": "Jie An",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-17T10:25:29.353Z",
"user": {
"_id": "6267fb1387d7f9040e031d39",
"avatarUrl": "/avatars/c7170aa3d28d2cb851e075b98ca5f5b1.svg",
"fullname": "Jie An",
"isPro": false,
"type": "user",
"user": "pkuanjie"
}
},
{
"_id": "678893e40a465aa0613dc9b5",
"hidden": false,
"name": "Zhaofan Qiu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:54:51.062Z",
"user": {
"_id": "64ac099294ae6b609d0f0713",
"avatarUrl": "/avatars/7f50e3e1597fb5fd0dad9fe7fd2e26ec.svg",
"fullname": "Zhaofan Qiu",
"isPro": false,
"type": "user",
"user": "qiudavy"
}
},
{
"_id": "678893e40a465aa0613dc9b6",
"hidden": false,
"name": "Ting Yao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678893e40a465aa0613dc9b7",
"hidden": false,
"name": "Jiebo Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678893e40a465aa0613dc9b8",
"hidden": false,
"name": "Tao Mei",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T08:55:14.020Z",
"user": {
"_id": "66a8d386fbde2b9cb1120393",
"avatarUrl": "/avatars/ccfb6e72764e547be2b3528713151693.svg",
"fullname": "Tao Mei",
"isPro": false,
"type": "user",
"user": "GiantBision"
}
}
] | 2025-01-15T18:59:15 |
Ouroboros-Diffusion: Exploring Consistent Content Generation in
Tuning-free Long Video Diffusion
|
The first-in-first-out (FIFO) video diffusion, built on a pre-trained
text-to-video model, has recently emerged as an effective approach for
tuning-free long video generation. This technique maintains a queue of video
frames with progressively increasing noise, continuously producing clean frames
at the queue's head while Gaussian noise is enqueued at the tail. However,
FIFO-Diffusion often struggles to keep long-range temporal consistency in the
generated videos due to the lack of correspondence modeling across frames. In
this paper, we propose Ouroboros-Diffusion, a novel video denoising framework
designed to enhance structural and content (subject) consistency, enabling the
generation of consistent videos of arbitrary length. Specifically, we introduce
a new latent sampling technique at the queue tail to improve structural
consistency, ensuring perceptually smooth transitions among frames. To enhance
subject consistency, we devise a Subject-Aware Cross-Frame Attention (SACFA)
mechanism, which aligns subjects across frames within short segments to achieve
better visual coherence. Furthermore, we introduce self-recurrent guidance.
This technique leverages information from all previous cleaner frames at the
front of the queue to guide the denoising of noisier frames at the end,
fostering rich and contextual global information interaction. Extensive
experiments of long video generation on the VBench benchmark demonstrate the
superiority of our Ouroboros-Diffusion, particularly in terms of subject
consistency, motion smoothness, and temporal consistency.
| 12 |
678893e50a465aa0613dca2f
| null | null |
|
2025-01-16T00:05:01.556000 |
XMusic: Towards a Generalized and Controllable Symbolic Music Generation Framework
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.08809
|
[
{
"_id": "678893719b735715ac69debe",
"hidden": false,
"name": "Sida Tian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678893719b735715ac69debf",
"hidden": false,
"name": "Can Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678893719b735715ac69dec0",
"hidden": false,
"name": "Wei Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678893719b735715ac69dec1",
"hidden": false,
"name": "Wei Tan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678893719b735715ac69dec2",
"hidden": false,
"name": "Wenjie Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-15T14:08:44 |
XMusic: Towards a Generalized and Controllable Symbolic Music Generation
Framework
|
In recent years, remarkable advancements in artificial intelligence-generated
content (AIGC) have been achieved in the fields of image synthesis and text
generation, generating content comparable to that produced by humans. However,
the quality of AI-generated music has not yet reached this standard, primarily
due to the challenge of effectively controlling musical emotions and ensuring
high-quality outputs. This paper presents a generalized symbolic music
generation framework, XMusic, which supports flexible prompts (i.e., images,
videos, texts, tags, and humming) to generate emotionally controllable and
high-quality symbolic music. XMusic consists of two core components, XProjector
and XComposer. XProjector parses the prompts of various modalities into
symbolic music elements (i.e., emotions, genres, rhythms and notes) within the
projection space to generate matching music. XComposer contains a Generator and
a Selector. The Generator generates emotionally controllable and melodious
music based on our innovative symbolic music representation, whereas the
Selector identifies high-quality symbolic music by constructing a multi-task
learning scheme involving quality assessment, emotion recognition, and genre
recognition tasks. In addition, we build XMIDI, a large-scale symbolic music
dataset that contains 108,023 MIDI files annotated with precise emotion and
genre labels. Objective and subjective evaluations show that XMusic
significantly outperforms the current state-of-the-art methods with impressive
music quality. Our XMusic has been awarded as one of the nine Highlights of
Collectibles at WAIC 2023. The project homepage of XMusic is
https://xmusic-project.github.io.
| 10 |
678893729b735715ac69deed
| null | null |
|
2025-01-15T13:49:13.107000 |
MatchAnything: Universal Cross-Modality Image Matching with Large-Scale Pre-Training
| 3 |
{
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
}
| false | null |
2501.07556
|
[
{
"_id": "67880298f2b3aef29aea2777",
"hidden": false,
"name": "Xingyi He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67880298f2b3aef29aea2778",
"hidden": false,
"name": "Hao Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67880298f2b3aef29aea2779",
"hidden": false,
"name": "Sida Peng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67880298f2b3aef29aea277a",
"hidden": false,
"name": "Dongli Tan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67880298f2b3aef29aea277b",
"hidden": false,
"name": "Zehong Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67880298f2b3aef29aea277c",
"hidden": false,
"name": "Hujun Bao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67880298f2b3aef29aea277d",
"hidden": false,
"name": "Xiaowei Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-13T18:37:36 |
MatchAnything: Universal Cross-Modality Image Matching with Large-Scale
Pre-Training
|
Image matching, which aims to identify corresponding pixel locations between
images, is crucial in a wide range of scientific disciplines, aiding in image
registration, fusion, and analysis. In recent years, deep learning-based image
matching algorithms have dramatically outperformed humans in rapidly and
accurately finding large amounts of correspondences. However, when dealing with
images captured under different imaging modalities that result in significant
appearance changes, the performance of these algorithms often deteriorates due
to the scarcity of annotated cross-modal training data. This limitation hinders
applications in various fields that rely on multiple image modalities to obtain
complementary information. To address this challenge, we propose a large-scale
pre-training framework that utilizes synthetic cross-modal training signals,
incorporating diverse data from various sources, to train models to recognize
and match fundamental structures across images. This capability is transferable
to real-world, unseen cross-modality image matching tasks. Our key finding is
that the matching model trained with our framework achieves remarkable
generalizability across more than eight unseen cross-modality registration
tasks using the same network weights, substantially outperforming existing
methods, whether designed for generalization or tailored for specific tasks.
This advancement significantly enhances the applicability of image matching
technologies across various scientific disciplines and paves the way for new
applications in multi-modality human and artificial intelligence analysis and
beyond.
| 5 |
678802a5f2b3aef29aea2b03
| null | null |
|
2025-01-15T09:50:03.046000 |
3DIS-FLUX: simple and efficient multi-instance generation with DiT rendering
| 2 |
{
"_id": "64e99fc07e2ec711a7138262",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64e99fc07e2ec711a7138262/FmP3F8_UXgh9K-0gwS99A.jpeg",
"followerCount": 5,
"fullname": "谢集",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "sanaka87",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/64e99fc07e2ec711a7138262/qBkh68ldWX3LoBNZYW6NC.png",
"https://cdn-uploads.huggingface.co/production/uploads/64e99fc07e2ec711a7138262/-pwsPgiyVrqOVZlI4leB7.png",
"https://cdn-uploads.huggingface.co/production/uploads/64e99fc07e2ec711a7138262/d-54oe5r7u4dWtQ6_Swow.png"
] |
2501.05131
|
[
{
"_id": "678739a51f65db189f9e75c5",
"hidden": false,
"name": "Dewei Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:49:15.222Z",
"user": {
"_id": "65eaa1e2b11eeb516a973508",
"avatarUrl": "/avatars/beecd135bb940fdc02406f9063b3fa67.svg",
"fullname": "Dewei Zhou",
"isPro": false,
"type": "user",
"user": "limuloo1999"
}
},
{
"_id": "678739a51f65db189f9e75c6",
"hidden": false,
"name": "Ji Xie",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-15T08:48:22.289Z",
"user": {
"_id": "64e99fc07e2ec711a7138262",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64e99fc07e2ec711a7138262/FmP3F8_UXgh9K-0gwS99A.jpeg",
"fullname": "谢集",
"isPro": false,
"type": "user",
"user": "sanaka87"
}
},
{
"_id": "678739a51f65db189f9e75c7",
"hidden": false,
"name": "Zongxin Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:49:20.390Z",
"user": {
"_id": "619bf9b3cbedb87e1a92fb3b",
"avatarUrl": "/avatars/ee280db0232e21416c948ab9a9a2344e.svg",
"fullname": "Zongxin Yang",
"isPro": false,
"type": "user",
"user": "z-x-yang"
}
},
{
"_id": "678739a51f65db189f9e75c8",
"hidden": false,
"name": "Yi Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-09T10:34:00 |
3DIS-FLUX: simple and efficient multi-instance generation with DiT
rendering
|
The growing demand for controllable outputs in text-to-image generation has
driven significant advancements in multi-instance generation (MIG), enabling
users to define both instance layouts and attributes. Currently, the
state-of-the-art methods in MIG are primarily adapter-based. However, these
methods necessitate retraining a new adapter each time a more advanced model is
released, resulting in significant resource consumption. A methodology named
Depth-Driven Decoupled Instance Synthesis (3DIS) has been introduced, which
decouples MIG into two distinct phases: 1) depth-based scene construction and
2) detail rendering with widely pre-trained depth control models. The 3DIS
method requires adapter training solely during the scene construction phase,
while enabling various models to perform training-free detail rendering.
Initially, 3DIS focused on rendering techniques utilizing U-Net architectures
such as SD1.5, SD2, and SDXL, without exploring the potential of recent
DiT-based models like FLUX. In this paper, we present 3DIS-FLUX, an extension
of the 3DIS framework that integrates the FLUX model for enhanced rendering
capabilities. Specifically, we employ the FLUX.1-Depth-dev model for depth map
controlled image generation and introduce a detail renderer that manipulates
the Attention Mask in FLUX's Joint Attention mechanism based on layout
information. This approach allows for the precise rendering of fine-grained
attributes of each instance. Our experimental results indicate that 3DIS-FLUX,
leveraging the FLUX model, outperforms the original 3DIS method, which utilized
SD2 and SDXL, and surpasses current state-of-the-art adapter-based methods in
terms of both performance and image quality. Project Page:
https://limuloo.github.io/3DIS/.
| 34 |
678739a71f65db189f9e7629
| null | null |
|
2025-01-15T08:27:24.456000 |
In-situ graph reasoning and knowledge expansion using Graph-PReFLexOR
| 2 |
{
"_id": "623ce1c6b66fedf374859fe7",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/623ce1c6b66fedf374859fe7/lhbMLg6BxLCb9DD4rgjfx.jpeg",
"followerCount": 24,
"fullname": "Markus Buehler",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "mjbuehler",
"type": "user"
}
| true | null |
2501.08120
|
[
{
"_id": "6787b437bbb0287a08f3b32d",
"hidden": false,
"name": "Markus J. Buehler",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:16:02.561Z",
"user": {
"_id": "623ce1c6b66fedf374859fe7",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/623ce1c6b66fedf374859fe7/lhbMLg6BxLCb9DD4rgjfx.jpeg",
"fullname": "Markus Buehler",
"isPro": true,
"type": "user",
"user": "mjbuehler"
}
}
] | 2025-01-14T13:52:41 |
In-situ graph reasoning and knowledge expansion using Graph-PReFLexOR
|
The pursuit of automated scientific discovery has fueled progress from
symbolic logic to modern AI, forging new frontiers in reasoning and pattern
recognition. Transformers function as potential systems, where every possible
relationship remains latent potentiality until tasks impose constraints, akin
to measurement. Yet, refining their sampling requires more than probabilistic
selection: solutions must conform to specific structures or rules, ensuring
consistency and the invocation of general principles. We present
Graph-PReFLexOR (Graph-based Preference-based Recursive Language Modeling for
Exploratory Optimization of Reasoning), a framework that combines graph
reasoning with symbolic abstraction to dynamically expand domain knowledge.
Inspired by reinforcement learning, Graph-PReFLexOR defines reasoning as a
structured mapping, where tasks yield knowledge graphs, abstract patterns, and
ultimately, final answers. Inspired by category theory, it encodes concepts as
nodes and their relationships as edges, supporting hierarchical inference and
adaptive learning through isomorphic representations. Demonstrations include
hypothesis generation, materials design, and creative reasoning, such as
discovering relationships between mythological concepts like 'thin places' and
materials science. We propose a 'knowledge garden growth' strategy that
integrates insights across domains, promoting interdisciplinary connections.
Results with a 3-billion-parameter Graph-PReFLexOR model show superior
reasoning depth and adaptability, underscoring the potential for transparent,
multidisciplinary AI-driven discovery. It lays the groundwork for general
autonomous reasoning solutions.
| 5 |
6787b438bbb0287a08f3b370
| null | null |
|
2025-01-15T03:52:43.037000 |
Omni-RGPT: Unifying Image and Video Region-level Understanding via Token Marks
| 2 |
{
"_id": "64ae22dd1aee69ece065cdcd",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64ae22dd1aee69ece065cdcd/JG7QaHIrr4i2k4uwR4pZK.png",
"followerCount": 3,
"fullname": "Min-Hung Chen",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "cmhungsteve",
"type": "user"
}
| true | null |
2501.08326
|
[
{
"_id": "6787772516c02260b4e6592b",
"hidden": false,
"name": "Miran Heo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:55:26.369Z",
"user": {
"_id": "64b0640d99fb025c573d1bfc",
"avatarUrl": "/avatars/b022f9854af8cd165c6282174dc8bdc5.svg",
"fullname": "Miran Heo",
"isPro": false,
"type": "user",
"user": "miranheo"
}
},
{
"_id": "6787772516c02260b4e6592c",
"hidden": false,
"name": "Min-Hung Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-15T16:35:59.983Z",
"user": {
"_id": "64ae22dd1aee69ece065cdcd",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64ae22dd1aee69ece065cdcd/JG7QaHIrr4i2k4uwR4pZK.png",
"fullname": "Min-Hung Chen",
"isPro": false,
"type": "user",
"user": "cmhungsteve"
}
},
{
"_id": "6787772516c02260b4e6592d",
"hidden": false,
"name": "De-An Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:55:31.462Z",
"user": {
"_id": "641d1c5ec3983aa94915c162",
"avatarUrl": "/avatars/127985b837ecf61e43c835deee578b5e.svg",
"fullname": "De-An Huang",
"isPro": false,
"type": "user",
"user": "deahuang"
}
},
{
"_id": "6787772516c02260b4e6592e",
"hidden": false,
"name": "Sifei Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:55:45.921Z",
"user": {
"_id": "62fab69f8cd542e895bafd6e",
"avatarUrl": "/avatars/c553bff4bd52b9a4f79e9c76fa22e27e.svg",
"fullname": "Sifei Liu",
"isPro": false,
"type": "user",
"user": "zwrq"
}
},
{
"_id": "6787772516c02260b4e6592f",
"hidden": false,
"name": "Subhashree Radhakrishnan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:55:54.632Z",
"user": {
"_id": "63b609f50d5913eee489be7a",
"avatarUrl": "/avatars/2d1a6cdd11a76d5c03a97bbf8c52b7e5.svg",
"fullname": "Subhashree Radhakrishnan",
"isPro": false,
"type": "user",
"user": "subhashreer"
}
},
{
"_id": "6787772516c02260b4e65930",
"hidden": false,
"name": "Seon Joo Kim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6787772516c02260b4e65931",
"hidden": false,
"name": "Yu-Chiang Frank Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6787772516c02260b4e65932",
"hidden": false,
"name": "Ryo Hachiuma",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:56:10.401Z",
"user": {
"_id": "65b33e5f7cd0069ad648c4e8",
"avatarUrl": "/avatars/1a746ea535cffa92ea08006e05ea414a.svg",
"fullname": "Ryo Hachiuma",
"isPro": false,
"type": "user",
"user": "rhachiuma"
}
}
] | 2025-01-14T18:58:04 |
Omni-RGPT: Unifying Image and Video Region-level Understanding via Token
Marks
|
We present Omni-RGPT, a multimodal large language model designed to
facilitate region-level comprehension for both images and videos. To achieve
consistent region representation across spatio-temporal dimensions, we
introduce Token Mark, a set of tokens highlighting the target regions within
the visual feature space. These tokens are directly embedded into spatial
regions using region prompts (e.g., boxes or masks) and simultaneously
incorporated into the text prompt to specify the target, establishing a direct
connection between visual and text tokens. To further support robust video
understanding without requiring tracklets, we introduce an auxiliary task that
guides Token Mark by leveraging the consistency of the tokens, enabling stable
region interpretation across the video. Additionally, we introduce a
large-scale region-level video instruction dataset (RegVID-300k). Omni-RGPT
achieves state-of-the-art results on image and video-based commonsense
reasoning benchmarks while showing strong performance in captioning and
referring expression comprehension tasks.
| 32 |
6787772816c02260b4e65a22
| null | null |
|
2025-01-15T03:05:10.948000 |
Padding Tone: A Mechanistic Analysis of Padding Tokens in T2I Models
| 2 |
{
"_id": "61c865e3d3702a3bbf50bc04",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/61c865e3d3702a3bbf50bc04/eGgcyeUnOHz0hyap5vhvr.jpeg",
"followerCount": 1,
"fullname": "Michael Toker",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "tokeron",
"type": "user"
}
| true | null |
2501.06751
|
[
{
"_id": "67876c0783ec692b4570e0af",
"hidden": false,
"name": "Michael Toker",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:45:04.686Z",
"user": {
"_id": "61c865e3d3702a3bbf50bc04",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/61c865e3d3702a3bbf50bc04/eGgcyeUnOHz0hyap5vhvr.jpeg",
"fullname": "Michael Toker",
"isPro": false,
"type": "user",
"user": "tokeron"
}
},
{
"_id": "67876c0783ec692b4570e0b0",
"hidden": false,
"name": "Ido Galil",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:45:37.404Z",
"user": {
"_id": "66867535e44c9931f329ca43",
"avatarUrl": "/avatars/ec6fc05b9f596f1ce38ec061d3d1d230.svg",
"fullname": "Ido Galil",
"isPro": false,
"type": "user",
"user": "IdoGalilNvidia"
}
},
{
"_id": "67876c0783ec692b4570e0b1",
"hidden": false,
"name": "Hadas Orgad",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:45:44.092Z",
"user": {
"_id": "60f82853c53e95176a7c6d45",
"avatarUrl": "/avatars/16f6ea944a014af6ebe60499f3460784.svg",
"fullname": "Hadas Orgad",
"isPro": false,
"type": "user",
"user": "hadasor"
}
},
{
"_id": "67876c0783ec692b4570e0b2",
"hidden": false,
"name": "Rinon Gal",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:45:49.560Z",
"user": {
"_id": "627c1360f19c5eb46d55ba05",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1652712871747-627c1360f19c5eb46d55ba05.jpeg",
"fullname": "Rinon Gal",
"isPro": false,
"type": "user",
"user": "rinong"
}
},
{
"_id": "67876c0783ec692b4570e0b3",
"hidden": false,
"name": "Yoad Tewel",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:46:02.818Z",
"user": {
"_id": "6201838f305742efd5a6a0e3",
"avatarUrl": "/avatars/4e47ab9ef1a4b9f59756a6d8c90a970f.svg",
"fullname": "Yoad Tewel",
"isPro": false,
"type": "user",
"user": "YoadTew"
}
},
{
"_id": "67876c0783ec692b4570e0b4",
"hidden": false,
"name": "Gal Chechik",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:46:09.446Z",
"user": {
"_id": "6493393f357b252af72196c5",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6493393f357b252af72196c5/EWSy18XRcMRa_4XMM3Fu-.jpeg",
"fullname": "Gal Chechik",
"isPro": false,
"type": "user",
"user": "galchechik"
}
},
{
"_id": "67876c0783ec692b4570e0b5",
"hidden": false,
"name": "Yonatan Belinkov",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:46:15.360Z",
"user": {
"_id": "614c57f1ee44bcfe57b366d6",
"avatarUrl": "/avatars/186a9aed84681246f48ed2a012c50def.svg",
"fullname": "Yonatan Belinkov",
"isPro": false,
"type": "user",
"user": "belinkov"
}
}
] | 2025-01-12T08:36:38 |
Padding Tone: A Mechanistic Analysis of Padding Tokens in T2I Models
|
Text-to-image (T2I) diffusion models rely on encoded prompts to guide the
image generation process. Typically, these prompts are extended to a fixed
length by adding padding tokens before text encoding. Despite being a default
practice, the influence of padding tokens on the image generation process has
not been investigated. In this work, we conduct the first in-depth analysis of
the role padding tokens play in T2I models. We develop two causal techniques to
analyze how information is encoded in the representation of tokens across
different components of the T2I pipeline. Using these techniques, we
investigate when and how padding tokens impact the image generation process.
Our findings reveal three distinct scenarios: padding tokens may affect the
model's output during text encoding, during the diffusion process, or be
effectively ignored. Moreover, we identify key relationships between these
scenarios and the model's architecture (cross or self-attention) and its
training process (frozen or trained text encoder). These insights contribute to
a deeper understanding of the mechanisms of padding tokens, potentially
informing future model design and training practices in T2I systems.
| 31 |
67876c0d83ec692b4570e1ec
| null | null |
|
2025-01-15T02:16:22.721000 |
Enhancing Automated Interpretability with Output-Centric Feature Descriptions
| 2 |
{
"_id": "5e7749883d77a72421292d07",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1670231290373-5e7749883d77a72421292d07.jpeg",
"followerCount": 213,
"fullname": "Gabriele Sarti",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "gsarti",
"type": "user"
}
| false | null |
2501.08319
|
[
{
"_id": "67876067c377690c01ab4478",
"hidden": false,
"name": "Yoav Gur-Arieh",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-15T21:21:15.130Z",
"user": {
"_id": "67609a46525a7cf186ca8ca4",
"avatarUrl": "/avatars/f9027eca2181dee7dce899e7a590e803.svg",
"fullname": "Yoav Gur Arieh",
"isPro": false,
"type": "user",
"user": "yoavgurarieh"
}
},
{
"_id": "67876067c377690c01ab4479",
"hidden": false,
"name": "Roy Mayan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:59:28.079Z",
"user": {
"_id": "66640b931284bd77155ea6cf",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/66640b931284bd77155ea6cf/pvhrHFLbr0vUPfuYGQT-Q.jpeg",
"fullname": "Roy Mayan",
"isPro": false,
"type": "user",
"user": "roym44"
}
},
{
"_id": "67876067c377690c01ab447a",
"hidden": false,
"name": "Chen Agassy",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-16T09:00:03.349Z",
"user": {
"_id": "6788ca64f7306dbc1b757596",
"avatarUrl": "/avatars/5e0cee8de90372fbf8f61aecdc76d498.svg",
"fullname": "Chen Agassy",
"isPro": false,
"type": "user",
"user": "chenagassy"
}
},
{
"_id": "67876067c377690c01ab447b",
"hidden": false,
"name": "Atticus Geiger",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-01-15T15:36:25.790Z",
"user": {
"_id": "627b2d0527dc4650b62eef42",
"avatarUrl": "/avatars/e70381850f5657b54e90f5539f3d74eb.svg",
"fullname": "Atticus Geiger",
"isPro": false,
"type": "user",
"user": "atticusg"
}
},
{
"_id": "67876067c377690c01ab447c",
"hidden": false,
"name": "Mor Geva",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:59:15.469Z",
"user": {
"_id": "610b729f9da682cd54ad9adf",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1628140189042-noauth.jpeg",
"fullname": "Mor Geva",
"isPro": false,
"type": "user",
"user": "mega"
}
}
] | 2025-01-14T18:53:00 |
Enhancing Automated Interpretability with Output-Centric Feature
Descriptions
|
Automated interpretability pipelines generate natural language descriptions
for the concepts represented by features in large language models (LLMs), such
as plants or the first word in a sentence. These descriptions are derived using
inputs that activate the feature, which may be a dimension or a direction in
the model's representation space. However, identifying activating inputs is
costly, and the mechanistic role of a feature in model behavior is determined
both by how inputs cause a feature to activate and by how feature activation
affects outputs. Using steering evaluations, we reveal that current pipelines
provide descriptions that fail to capture the causal effect of the feature on
outputs. To fix this, we propose efficient, output-centric methods for
automatically generating feature descriptions. These methods use the tokens
weighted higher after feature stimulation or the highest weight tokens after
applying the vocabulary "unembedding" head directly to the feature. Our
output-centric descriptions better capture the causal effect of a feature on
model outputs than input-centric descriptions, but combining the two leads to
the best performance on both input and output evaluations. Lastly, we show that
output-centric descriptions can be used to find inputs that activate features
previously thought to be "dead".
| 10 |
67876068c377690c01ab44cb
| null | null |
|
2025-01-15T02:06:29.459000 |
AfriHate: A Multilingual Collection of Hate Speech and Abusive Language Datasets for African Languages
| 2 |
{
"_id": "5e6a3d4ea9afd5125d9ec064",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1584020801691-noauth.jpeg",
"followerCount": 2307,
"fullname": "Stefan Schweter",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "stefan-it",
"type": "user"
}
| false | null |
2501.08284
|
[
{
"_id": "67875e554d9e0e1baf29ab55",
"hidden": false,
"name": "Shamsuddeen Hassan Muhammad",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67875e554d9e0e1baf29ab56",
"hidden": false,
"name": "Idris Abdulmumin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:12:07.480Z",
"user": {
"_id": "6303c0845c70c21d0eaa13c6",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/rmUFmyfBM-0qAZrZZGMiA.png",
"fullname": "Idris Abdulmumin",
"isPro": false,
"type": "user",
"user": "abumafrim"
}
},
{
"_id": "67875e554d9e0e1baf29ab57",
"hidden": false,
"name": "Abinew Ali Ayele",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67875e554d9e0e1baf29ab58",
"hidden": false,
"name": "David Ifeoluwa Adelani",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67875e554d9e0e1baf29ab59",
"hidden": false,
"name": "Ibrahim Said Ahmad",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:12:26.895Z",
"user": {
"_id": "6471c2fa6facfb01d8ac3380",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6471c2fa6facfb01d8ac3380/UO1qtexgn1pLA6yHn79vz.png",
"fullname": "Ibrahim Said Ahmad",
"isPro": false,
"type": "user",
"user": "isab7070"
}
},
{
"_id": "67875e554d9e0e1baf29ab5a",
"hidden": false,
"name": "Saminu Mohammad Aliyu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:12:34.348Z",
"user": {
"_id": "6474e8d17d131daf633eacc8",
"avatarUrl": "/avatars/2b98d33a9ac9626c95ac1dce599a79a6.svg",
"fullname": "Saminu Mohammad Aliyu",
"isPro": false,
"type": "user",
"user": "Saminukiri"
}
},
{
"_id": "67875e554d9e0e1baf29ab5b",
"hidden": false,
"name": "Nelson Odhiambo Onyango",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67875e554d9e0e1baf29ab5c",
"hidden": false,
"name": "Lilian D. A. Wanzare",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67875e554d9e0e1baf29ab5d",
"hidden": false,
"name": "Samuel Rutunda",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:13:19.192Z",
"user": {
"_id": "60bb7e3129800c34660339e4",
"avatarUrl": "/avatars/48e3304f04be70ceaf6abe9bde4f2e91.svg",
"fullname": "Samuel Rutunda",
"isPro": false,
"type": "user",
"user": "rutsam"
}
},
{
"_id": "67875e554d9e0e1baf29ab5e",
"hidden": false,
"name": "Lukman Jibril Aliyu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:13:31.269Z",
"user": {
"_id": "64e88f2ff2f5545edaff5083",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64e88f2ff2f5545edaff5083/ckZTNArwRsH5RfJksL-YJ.jpeg",
"fullname": "Lukman Jibril Aliyu",
"isPro": false,
"type": "user",
"user": "lukmanaj"
}
},
{
"_id": "67875e554d9e0e1baf29ab5f",
"hidden": false,
"name": "Esubalew Alemneh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67875e554d9e0e1baf29ab60",
"hidden": false,
"name": "Oumaima Hourrane",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:13:47.918Z",
"user": {
"_id": "60530f2eab7edf24eae545c8",
"avatarUrl": "/avatars/d8f365a6876925745a0d1bfd7aff658e.svg",
"fullname": "Oumaima Hourrane",
"isPro": false,
"type": "user",
"user": "oumaimahourrane"
}
},
{
"_id": "67875e554d9e0e1baf29ab61",
"hidden": false,
"name": "Hagos Tesfahun Gebremichael",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67875e554d9e0e1baf29ab62",
"hidden": false,
"name": "Elyas Abdi Ismail",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67875e554d9e0e1baf29ab63",
"hidden": false,
"name": "Meriem Beloucif",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67875e554d9e0e1baf29ab64",
"hidden": false,
"name": "Ebrahim Chekol Jibril",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67875e554d9e0e1baf29ab65",
"hidden": false,
"name": "Andiswa Bukula",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67875e554d9e0e1baf29ab66",
"hidden": false,
"name": "Rooweither Mabuya",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:14:01.840Z",
"user": {
"_id": "64802f1bc7f87934d08203cb",
"avatarUrl": "/avatars/4d10442e228856380b0485e1cb131b1a.svg",
"fullname": "Rooweither /mabuya",
"isPro": false,
"type": "user",
"user": "roowym"
}
},
{
"_id": "67875e554d9e0e1baf29ab67",
"hidden": false,
"name": "Salomey Osei",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:14:11.230Z",
"user": {
"_id": "6053e3b83efc404ddc7c0c13",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1638538775320-6053e3b83efc404ddc7c0c13.jpeg",
"fullname": "Salomey Osei",
"isPro": false,
"type": "user",
"user": "salomey"
}
},
{
"_id": "67875e554d9e0e1baf29ab68",
"hidden": false,
"name": "Abigail Oppong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:14:19.061Z",
"user": {
"_id": "65a67519f1d4e7bccc2e82bf",
"avatarUrl": "/avatars/c4a0bdc7dc7110b8ce0a2f95524cbff1.svg",
"fullname": "Abigail Oppong",
"isPro": false,
"type": "user",
"user": "abioppong"
}
},
{
"_id": "67875e554d9e0e1baf29ab69",
"hidden": false,
"name": "Tadesse Destaw Belay",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:14:26.926Z",
"user": {
"_id": "646da72ef077f9ec2c615331",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/646da72ef077f9ec2c615331/LbeId4WSlZDz5QCdaghDV.jpeg",
"fullname": "Tadesse Destaw Belay",
"isPro": false,
"type": "user",
"user": "Tadesse"
}
},
{
"_id": "67875e554d9e0e1baf29ab6a",
"hidden": false,
"name": "Tadesse Kebede Guge",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:14:34.333Z",
"user": {
"_id": "646920e0dcbb937d56b7f6fd",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/NYeYDHoF29rvRrJytdTuk.jpeg",
"fullname": "Tadesse Kebede Guge",
"isPro": false,
"type": "user",
"user": "tadesse381"
}
},
{
"_id": "67875e554d9e0e1baf29ab6b",
"hidden": false,
"name": "Tesfa Tegegne Asfaw",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:14:41.243Z",
"user": {
"_id": "6607e0ae542f834b6981853c",
"avatarUrl": "/avatars/1acf7a1a1ebc6897df593661a4d7fcb5.svg",
"fullname": "Tesfa Tegegne Asfaw",
"isPro": false,
"type": "user",
"user": "Tesfat"
}
},
{
"_id": "67875e554d9e0e1baf29ab6c",
"hidden": false,
"name": "Chiamaka Ijeoma Chukwuneke",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67875e554d9e0e1baf29ab6d",
"hidden": false,
"name": "Paul Röttger",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:14:54.795Z",
"user": {
"_id": "602ce925374a0dbe5856eca1",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/602ce925374a0dbe5856eca1/lBvTn6hYOCF0ggD-rk_mf.jpeg",
"fullname": "Paul Röttger",
"isPro": false,
"type": "user",
"user": "Paul"
}
},
{
"_id": "67875e554d9e0e1baf29ab6e",
"hidden": false,
"name": "Seid Muhie Yimam",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67875e554d9e0e1baf29ab6f",
"hidden": false,
"name": "Nedjma Ousidhoum",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-14T18:00:07 |
AfriHate: A Multilingual Collection of Hate Speech and Abusive Language
Datasets for African Languages
|
Hate speech and abusive language are global phenomena that need
socio-cultural background knowledge to be understood, identified, and
moderated. However, in many regions of the Global South, there have been
several documented occurrences of (1) absence of moderation and (2) censorship
due to the reliance on keyword spotting out of context. Further, high-profile
individuals have frequently been at the center of the moderation process, while
large and targeted hate speech campaigns against minorities have been
overlooked. These limitations are mainly due to the lack of high-quality data
in the local languages and the failure to include local communities in the
collection, annotation, and moderation processes. To address this issue, we
present AfriHate: a multilingual collection of hate speech and abusive language
datasets in 15 African languages. Each instance in AfriHate is annotated by
native speakers familiar with the local culture. We report the challenges
related to the construction of the datasets and present various classification
baseline results with and without using LLMs. The datasets, individual
annotations, and hate speech and offensive language lexicons are available on
https://github.com/AfriHate/AfriHate
| 6 |
67875e574d9e0e1baf29abcc
| null | null |
|
2025-01-15T00:57:59.401000 |
OpenCSG Chinese Corpus: A Series of High-quality Chinese Datasets for LLM Training
| 2 |
{
"_id": "6374c494958cd71fa7ea0a9d",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6374c494958cd71fa7ea0a9d/2YCKv6tXCZXtsIOFIIXjs.png",
"followerCount": 41,
"fullname": "yuyijiong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "yuyijiong",
"type": "user"
}
| true | null |
2501.08197
|
[
{
"_id": "67873d8945c53fa98281bb59",
"hidden": false,
"name": "Yijiong Yu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-15T08:47:14.751Z",
"user": {
"_id": "6374c494958cd71fa7ea0a9d",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6374c494958cd71fa7ea0a9d/2YCKv6tXCZXtsIOFIIXjs.png",
"fullname": "yuyijiong",
"isPro": false,
"type": "user",
"user": "yuyijiong"
}
},
{
"_id": "67873d8945c53fa98281bb5a",
"hidden": false,
"name": "Ziyun Dai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:00:20.722Z",
"user": {
"_id": "667ec2ffa016ced375edfae9",
"avatarUrl": "/avatars/73f1298569fb0667194d1ac17fb508d6.svg",
"fullname": "Ziyun dai",
"isPro": false,
"type": "user",
"user": "Oliviadzy"
}
},
{
"_id": "67873d8945c53fa98281bb5b",
"hidden": true,
"name": "Zekun Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:00:34.066Z",
"user": {
"_id": "656832dfbd65fd41ee7aa8cd",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/656832dfbd65fd41ee7aa8cd/HHkyetTqNq1wIBPipzjQA.jpeg",
"fullname": "Zekun Wang",
"isPro": false,
"type": "user",
"user": "kugwzk"
}
},
{
"_id": "67873d8945c53fa98281bb5c",
"hidden": false,
"name": "Wei Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67873d8945c53fa98281bb5d",
"hidden": false,
"name": "Ran Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67873d8945c53fa98281bb5e",
"hidden": false,
"name": "Ji Pei",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-14T15:22:47 |
OpenCSG Chinese Corpus: A Series of High-quality Chinese Datasets for
LLM Training
|
Large language models (LLMs) have demonstrated remarkable capabilities, but
their success heavily relies on the quality of pretraining corpora. For Chinese
LLMs, the scarcity of high-quality Chinese datasets presents a significant
challenge, often limiting their performance. To address this issue, we propose
the OpenCSG Chinese Corpus, a series of high-quality datasets specifically
designed for LLM pretraining, post-training, and fine-tuning. This corpus
includes Fineweb-edu-chinese, Fineweb-edu-chinese-v2, Cosmopedia-chinese, and
Smoltalk-chinese, each with distinct characteristics: Fineweb-edu datasets
focus on filtered, high-quality content derived from diverse Chinese web
sources; Cosmopedia-chinese provides synthetic, textbook-style data for
knowledge-intensive training; and Smoltalk-chinese emphasizes stylistic and
diverse chat-format data. The OpenCSG Chinese Corpus is characterized by its
high-quality text, diverse coverage across domains, and scalable, reproducible
data curation processes. Additionally, we conducted extensive experimental
analyses, including evaluations on smaller parameter models, which demonstrated
significant performance improvements in tasks such as C-Eval, showcasing the
effectiveness of the corpus for training Chinese LLMs.
| 8 |
67873d8a45c53fa98281bba1
| null | null |
|
2025-01-15T00:40:26.270000 |
Potential and Perils of Large Language Models as Judges of Unstructured Textual Data
| 2 |
{
"_id": "63a4754927f1f64ed7238dac",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63a4754927f1f64ed7238dac/aH-eJF-31g4vof9jv2gmI.jpeg",
"followerCount": 3,
"fullname": "Aman Chadha",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "amanchadha",
"type": "user"
}
| true | null |
2501.08167
|
[
{
"_id": "678748ee95b9364769858656",
"hidden": false,
"name": "Rewina Bedemariam",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:01:26.170Z",
"user": {
"_id": "667482835a2cb9e96011aac1",
"avatarUrl": "/avatars/d6072736df0dbf7f3b8741bc7782b7fd.svg",
"fullname": "Rewina Bedemariam",
"isPro": false,
"type": "user",
"user": "RewyB"
}
},
{
"_id": "678748ee95b9364769858657",
"hidden": false,
"name": "Natalie Perez",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678748ee95b9364769858658",
"hidden": false,
"name": "Sreyoshi Bhaduri",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:01:37.500Z",
"user": {
"_id": "6760b279deb31f2c1e5b4f42",
"avatarUrl": "/avatars/b3863ab0d63d052198dc9b4261def623.svg",
"fullname": "sreyoshi bhaduri",
"isPro": false,
"type": "user",
"user": "sreyoshibhaduri"
}
},
{
"_id": "678748ee95b9364769858659",
"hidden": false,
"name": "Satya Kapoor",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678748ee95b936476985865a",
"hidden": false,
"name": "Alex Gil",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:02:07.216Z",
"user": {
"_id": "6660d475754f90a858038310",
"avatarUrl": "/avatars/a279a0ede35953f1d860d0fdfc053e57.svg",
"fullname": "Alex Gil",
"isPro": false,
"type": "user",
"user": "alexgil"
}
},
{
"_id": "678748ee95b936476985865b",
"hidden": false,
"name": "Elizabeth Conjar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678748ee95b936476985865c",
"hidden": false,
"name": "Ikkei Itoku",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:02:49.300Z",
"user": {
"_id": "6760b36ee4b55ba1b2b35103",
"avatarUrl": "/avatars/27df8f19ba16e8d9cce4d224e57853bb.svg",
"fullname": "Ikkei Itoku",
"isPro": false,
"type": "user",
"user": "ikkeiitoku"
}
},
{
"_id": "678748ee95b936476985865d",
"hidden": false,
"name": "David Theil",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678748ee95b936476985865e",
"hidden": false,
"name": "Aman Chadha",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-15T08:47:10.299Z",
"user": {
"_id": "63a4754927f1f64ed7238dac",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63a4754927f1f64ed7238dac/aH-eJF-31g4vof9jv2gmI.jpeg",
"fullname": "Aman Chadha",
"isPro": false,
"type": "user",
"user": "amanchadha"
}
},
{
"_id": "678748ee95b936476985865f",
"hidden": false,
"name": "Naumaan Nayyar",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-14T14:49:14 |
Potential and Perils of Large Language Models as Judges of Unstructured
Textual Data
|
Rapid advancements in large language models have unlocked remarkable
capabilities when it comes to processing and summarizing unstructured text
data. This has implications for the analysis of rich, open-ended datasets, such
as survey responses, where LLMs hold the promise of efficiently distilling key
themes and sentiments. However, as organizations increasingly turn to these
powerful AI systems to make sense of textual feedback, a critical question
arises: can we trust LLMs to accurately represent the perspectives contained
within these text-based datasets? While LLMs excel at generating human-like
summaries, there is a risk that their outputs may inadvertently diverge from
the true substance of the original responses. Discrepancies between the
LLM-generated outputs and the actual themes present in the data could lead to
flawed decision-making, with far-reaching consequences for organizations. This
research investigates the effectiveness of LLMs as judge models to evaluate the
thematic alignment of summaries generated by other LLMs. We utilized an
Anthropic Claude model to generate thematic summaries from open-ended survey
responses, with Amazon's Titan Express, Nova Pro, and Meta's Llama serving as
LLM judges. The LLM-as-judge approach was compared to human evaluations using
Cohen's kappa, Spearman's rho, and Krippendorff's alpha, validating a scalable
alternative to traditional human-centric evaluation methods. Our findings
reveal that while LLMs as judges offer a scalable solution comparable to human
raters, humans may still excel at detecting subtle, context-specific nuances.
This research contributes to the growing body of knowledge on AI assisted text
analysis. We discuss limitations and provide recommendations for future
research, emphasizing the need for careful consideration when generalizing LLM
judge models across various contexts and use cases.
| 6 |
678748ee95b9364769858684
| null | null |
|
2025-01-14T23:43:33.935000 |
Tarsier2: Advancing Large Vision-Language Models from Detailed Video Description to Comprehensive Video Understanding
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.07888
|
[
{
"_id": "67873cdd1f65db189f9f64d7",
"hidden": false,
"name": "Liping Yuan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:07:12.581Z",
"user": {
"_id": "6638cd255b81e56d337a4a98",
"avatarUrl": "/avatars/b00805982755e4838b5ce0b23e3357c2.svg",
"fullname": "Liping Yuan",
"isPro": false,
"type": "user",
"user": "yuanlp"
}
},
{
"_id": "67873cdd1f65db189f9f64d8",
"hidden": false,
"name": "Jiawei Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-15T21:21:17.147Z",
"user": {
"_id": "6788122301d5ba1d3eff23ba",
"avatarUrl": "/avatars/203dc8e1d542be55ea16eafcd7f396ff.svg",
"fullname": "Jiawei Wang",
"isPro": false,
"type": "user",
"user": "0nejiawei"
}
},
{
"_id": "67873cdd1f65db189f9f64d9",
"hidden": false,
"name": "Haomiao Sun",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:07:46.243Z",
"user": {
"_id": "668685161589885791b76c25",
"avatarUrl": "/avatars/ffb9339114b75205ab5840b70f59f4e5.svg",
"fullname": "sunhaomiao",
"isPro": false,
"type": "user",
"user": "sunhm15"
}
},
{
"_id": "67873cdd1f65db189f9f64da",
"hidden": false,
"name": "Yuchen Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67873cdd1f65db189f9f64db",
"hidden": false,
"name": "Yuan Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-14T06:54:39 |
Tarsier2: Advancing Large Vision-Language Models from Detailed Video
Description to Comprehensive Video Understanding
|
We introduce Tarsier2, a state-of-the-art large vision-language model (LVLM)
designed for generating detailed and accurate video descriptions, while also
exhibiting superior general video understanding capabilities. Tarsier2 achieves
significant advancements through three key upgrades: (1) Scaling pre-training
data from 11M to 40M video-text pairs, enriching both volume and diversity; (2)
Performing fine-grained temporal alignment during supervised fine-tuning; (3)
Using model-based sampling to automatically construct preference data and
applying DPO training for optimization. Extensive experiments show that
Tarsier2-7B consistently outperforms leading proprietary models, including
GPT-4o and Gemini 1.5 Pro, in detailed video description tasks. On the DREAM-1K
benchmark, Tarsier2-7B improves F1 by 2.8\% over GPT-4o and 5.8\% over
Gemini-1.5-Pro. In human side-by-side evaluations, Tarsier2-7B shows a +8.6\%
performance advantage over GPT-4o and +24.9\% over Gemini-1.5-Pro. Tarsier2-7B
also sets new state-of-the-art results across 15 public benchmarks, spanning
tasks such as video question-answering, video grounding, hallucination test,
and embodied question-answering, demonstrating its versatility as a robust
generalist vision-language model.
| 15 |
67873ce11f65db189f9f6615
| null | null |
|
2025-01-14T23:17:00.915000 |
PokerBench: Training Large Language Models to become Professional Poker Players
| 2 |
{
"_id": "64e8f4a24f3f7b0b84834315",
"avatarUrl": "/avatars/242bb68c7ccffe5061c2d1c229ea3b0b.svg",
"followerCount": 1,
"fullname": "Akshat Gupta",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "akshat57",
"type": "user"
}
| true | null |
2501.08328
|
[
{
"_id": "6787367e42691fc7dc545292",
"hidden": false,
"name": "Richard Zhuang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:49:44.178Z",
"user": {
"_id": "6596f6b992afb150dde380cb",
"avatarUrl": "/avatars/f069bd55f288e1f008a476e7b6e3f150.svg",
"fullname": "Richard Zhuang",
"isPro": true,
"type": "user",
"user": "RZ412"
}
},
{
"_id": "6787367e42691fc7dc545293",
"hidden": false,
"name": "Akshat Gupta",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-01-15T04:19:21.989Z",
"user": {
"_id": "64e8f4a24f3f7b0b84834315",
"avatarUrl": "/avatars/242bb68c7ccffe5061c2d1c229ea3b0b.svg",
"fullname": "Akshat Gupta",
"isPro": false,
"type": "user",
"user": "akshat57"
}
},
{
"_id": "6787367e42691fc7dc545294",
"hidden": false,
"name": "Richard Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:49:50.389Z",
"user": {
"_id": "63ff98b4b09f82a81a1f07c5",
"avatarUrl": "/avatars/691c6a5e6946c225d2f3ac6d97a8dd52.svg",
"fullname": "Yang",
"isPro": false,
"type": "user",
"user": "RichardYang"
}
},
{
"_id": "6787367e42691fc7dc545295",
"hidden": false,
"name": "Aniket Rahane",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:50:05.690Z",
"user": {
"_id": "65971d98260bcd5fb5225889",
"avatarUrl": "/avatars/494c10ebcc084d1bdfb894e160c6883a.svg",
"fullname": "Aniket Rahane",
"isPro": false,
"type": "user",
"user": "aniketarahane"
}
},
{
"_id": "6787367e42691fc7dc545296",
"hidden": false,
"name": "Zhengyu Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:50:11.928Z",
"user": {
"_id": "64807bfbf5be39206aef41b5",
"avatarUrl": "/avatars/9a8d037f117223326520c5b522180487.svg",
"fullname": "Zhengyu Li",
"isPro": false,
"type": "user",
"user": "LZY729"
}
},
{
"_id": "6787367e42691fc7dc545297",
"hidden": false,
"name": "Gopala Anumanchipalli",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:50:19.727Z",
"user": {
"_id": "60523e4aa7226b25aaeea2b8",
"avatarUrl": "/avatars/316ca348da91ebced86991f36150c959.svg",
"fullname": "Gopala Anumanchipalli",
"isPro": false,
"type": "user",
"user": "gopalakr"
}
}
] | 2025-01-14T18:59:03 |
PokerBench: Training Large Language Models to become Professional Poker
Players
|
We introduce PokerBench - a benchmark for evaluating the poker-playing
abilities of large language models (LLMs). As LLMs excel in traditional NLP
tasks, their application to complex, strategic games like poker poses a new
challenge. Poker, an incomplete information game, demands a multitude of skills
such as mathematics, reasoning, planning, strategy, and a deep understanding of
game theory and human psychology. This makes Poker the ideal next frontier for
large language models. PokerBench consists of a comprehensive compilation of
the 11,000 most important scenarios, split between pre-flop and post-flop play,
developed in collaboration with trained poker players. We evaluate prominent
models including GPT-4, ChatGPT 3.5, and various Llama and Gemma series models,
finding that all state-of-the-art LLMs underperform in playing optimal poker.
However, after fine-tuning, these models show marked improvements. We validate
PokerBench by having models with different scores compete with each other,
demonstrating that higher scores on PokerBench lead to higher win rates in
actual poker games. Through gameplay between our fine-tuned model and GPT-4, we
also identify limitations of simple supervised fine-tuning for learning optimal
playing strategy, suggesting the need for more advanced methodologies for
effectively training language models to excel in games. PokerBench thus
presents a unique benchmark for a quick and reliable evaluation of the
poker-playing ability of LLMs as well as a comprehensive benchmark to study the
progress of LLMs in complex game-playing scenarios. The dataset and code will
be made available at: https://github.com/pokerllm/pokerbench.
| 17 |
6787368042691fc7dc5452dd
| null | null |
|
2025-01-14T23:11:25.137000 |
Democratizing Text-to-Image Masked Generative Models with Compact Text-Aware One-Dimensional Tokens
| 3 |
{
"_id": "661c9059bcd78151e5c06ea1",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/661c9059bcd78151e5c06ea1/27bfNo1LZeZQ77vWuAa10.png",
"followerCount": 6,
"fullname": "Ju He",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "turkeyju",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/661c9059bcd78151e5c06ea1/-7KLvUVPYbrrljt6Nx6iS.png",
"https://cdn-uploads.huggingface.co/production/uploads/661c9059bcd78151e5c06ea1/SH9x-3yrSYBXH9Nkn9uNy.png",
"https://cdn-uploads.huggingface.co/production/uploads/661c9059bcd78151e5c06ea1/ITR8a3J2vHuYGmCWfX-xS.png"
] |
2501.07730
|
[
{
"_id": "678734168c1e7b6c4a6e5ff9",
"hidden": false,
"name": "Dongwon Kim",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-16T08:31:43.848Z",
"user": {
"_id": "64dc5208c38427829de81b16",
"avatarUrl": "/avatars/43a08e46a7a78f1e3d1f6645a9b1d26b.svg",
"fullname": "Dongwon",
"isPro": false,
"type": "user",
"user": "kdwon"
}
},
{
"_id": "678734168c1e7b6c4a6e5ffa",
"hidden": false,
"name": "Ju He",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-15T08:48:25.249Z",
"user": {
"_id": "661c9059bcd78151e5c06ea1",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/661c9059bcd78151e5c06ea1/27bfNo1LZeZQ77vWuAa10.png",
"fullname": "Ju He",
"isPro": false,
"type": "user",
"user": "turkeyju"
}
},
{
"_id": "678734168c1e7b6c4a6e5ffb",
"hidden": false,
"name": "Qihang Yu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:54:50.093Z",
"user": {
"_id": "677b60e17279b5c57354108b",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/677b60e17279b5c57354108b/YOwDhVf9DkeRjOCOLErb6.png",
"fullname": "QihangYu",
"isPro": false,
"type": "user",
"user": "QihangYu"
}
},
{
"_id": "678734168c1e7b6c4a6e5ffc",
"hidden": false,
"name": "Chenglin Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678734168c1e7b6c4a6e5ffd",
"hidden": false,
"name": "Xiaohui Shen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:55:07.851Z",
"user": {
"_id": "6430aa1b32a732121cd81f98",
"avatarUrl": "/avatars/5419f8d6d4d36fa5ac83e30667b9fd99.svg",
"fullname": "Xiaohui Shen",
"isPro": false,
"type": "user",
"user": "XiaohuiShen"
}
},
{
"_id": "678734168c1e7b6c4a6e5ffe",
"hidden": false,
"name": "Suha Kwak",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678734168c1e7b6c4a6e5fff",
"hidden": false,
"name": "Liang-Chieh Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-13T22:37:17 |
Democratizing Text-to-Image Masked Generative Models with Compact
Text-Aware One-Dimensional Tokens
|
Image tokenizers form the foundation of modern text-to-image generative
models but are notoriously difficult to train. Furthermore, most existing
text-to-image models rely on large-scale, high-quality private datasets, making
them challenging to replicate. In this work, we introduce Text-Aware
Transformer-based 1-Dimensional Tokenizer (TA-TiTok), an efficient and powerful
image tokenizer that can utilize either discrete or continuous 1-dimensional
tokens. TA-TiTok uniquely integrates textual information during the tokenizer
decoding stage (i.e., de-tokenization), accelerating convergence and enhancing
performance. TA-TiTok also benefits from a simplified, yet effective, one-stage
training process, eliminating the need for the complex two-stage distillation
used in previous 1-dimensional tokenizers. This design allows for seamless
scalability to large datasets. Building on this, we introduce a family of
text-to-image Masked Generative Models (MaskGen), trained exclusively on open
data while achieving comparable performance to models trained on private data.
We aim to release both the efficient, strong TA-TiTok tokenizers and the
open-data, open-weight MaskGen models to promote broader access and democratize
the field of text-to-image masked generative models.
| 16 |
678734178c1e7b6c4a6e6071
| null | null |
|
2025-01-14T23:01:53.467000 |
HALoGEN: Fantastic LLM Hallucinations and Where to Find Them
| 2 |
{
"_id": "645dbaa6f5760d1530d7580d",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/645dbaa6f5760d1530d7580d/Bqob8arLZoHIgMwNZpL9I.jpeg",
"followerCount": 31,
"fullname": "Simeon Emanuilov",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "s-emanuilov",
"type": "user"
}
| false | null |
2501.08292
|
[
{
"_id": "678731bbe24feaaa753768b5",
"hidden": false,
"name": "Abhilasha Ravichander",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-15T17:13:38.972Z",
"user": {
"_id": "6349886c429608888c42319a",
"avatarUrl": "/avatars/f84b5fe8b76172878274754e3399d6ec.svg",
"fullname": "Abhilasha Ravichander",
"isPro": false,
"type": "user",
"user": "lasha-nlp"
}
},
{
"_id": "678731bbe24feaaa753768b6",
"hidden": false,
"name": "Shrusti Ghela",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:06:55.911Z",
"user": {
"_id": "65020c3506a2c356286c1c47",
"avatarUrl": "/avatars/d445d361430bd205d858c2e9c78127bf.svg",
"fullname": "Shrusti Ghela",
"isPro": false,
"type": "user",
"user": "shrustighela"
}
},
{
"_id": "678731bbe24feaaa753768b7",
"hidden": false,
"name": "David Wadden",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:06:50.718Z",
"user": {
"_id": "5ff4e2a1463be69ae4bd42bd",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1609884433338-5ff4e2a1463be69ae4bd42bd.jpeg",
"fullname": "David Wadden",
"isPro": false,
"type": "user",
"user": "dwadden"
}
},
{
"_id": "678731bbe24feaaa753768b8",
"hidden": false,
"name": "Yejin Choi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T17:06:44.995Z",
"user": {
"_id": "64d42729f63b01b7f676b176",
"avatarUrl": "/avatars/52e54bdd6a1fb6c774a40cd70f3d7925.svg",
"fullname": "Yejin Choi",
"isPro": false,
"type": "user",
"user": "yejinchoinka"
}
}
] | 2025-01-14T18:13:08 |
HALoGEN: Fantastic LLM Hallucinations and Where to Find Them
|
Despite their impressive ability to generate high-quality and fluent text,
generative large language models (LLMs) also produce hallucinations: statements
that are misaligned with established world knowledge or provided input context.
However, measuring hallucination can be challenging, as having humans verify
model generations on-the-fly is both expensive and time-consuming. In this
work, we release HALoGEN, a comprehensive hallucination benchmark consisting
of: (1) 10,923 prompts for generative models spanning nine domains including
programming, scientific attribution, and summarization, and (2) automatic
high-precision verifiers for each use case that decompose LLM generations into
atomic units, and verify each unit against a high-quality knowledge source. We
use this framework to evaluate ~150,000 generations from 14 language models,
finding that even the best-performing models are riddled with hallucinations
(sometimes up to 86% of generated atomic facts depending on the domain). We
further define a novel error classification for LLM hallucinations based on
whether they likely stem from incorrect recollection of training data (Type A
errors), or incorrect knowledge in training data (Type B errors), or are
fabrication (Type C errors). We hope our framework provides a foundation to
enable the principled study of why generative models hallucinate, and advances
the development of trustworthy large language models.
| 17 |
678731bde24feaaa7537695b
| null | null |
|
2025-01-14T22:36:58.221000 |
FramePainter: Endowing Interactive Image Editing with Video Diffusion Priors
| 2 |
{
"_id": "62cd752ba9be5c19555c2b4c",
"avatarUrl": "/avatars/ed72684f0f2139516ccde24cd467cea6.svg",
"followerCount": 5,
"fullname": "YaboZhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Yabo",
"type": "user"
}
| true | null |
2501.08225
|
[
{
"_id": "67872d3659aa2a284c52ad5f",
"hidden": false,
"name": "Yabo Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:48:10.703Z",
"user": {
"_id": "62cd752ba9be5c19555c2b4c",
"avatarUrl": "/avatars/ed72684f0f2139516ccde24cd467cea6.svg",
"fullname": "YaboZhang",
"isPro": false,
"type": "user",
"user": "Yabo"
}
},
{
"_id": "67872d3659aa2a284c52ad60",
"hidden": false,
"name": "Xinpeng Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:48:17.107Z",
"user": {
"_id": "66a7044a315d9b5c32d2a399",
"avatarUrl": "/avatars/e2da8af4a4f758aa5de7bf34b085f764.svg",
"fullname": "zhou",
"isPro": false,
"type": "user",
"user": "xinpengzhou"
}
},
{
"_id": "67872d3659aa2a284c52ad61",
"hidden": false,
"name": "Yihan Zeng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:48:26.078Z",
"user": {
"_id": "65beb318358734fd090f742e",
"avatarUrl": "/avatars/962420a21b2aa8689dd0e1d4531fbf35.svg",
"fullname": "Yihan Zeng",
"isPro": false,
"type": "user",
"user": "vikyzeng2"
}
},
{
"_id": "67872d3659aa2a284c52ad62",
"hidden": false,
"name": "Hang Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67872d3659aa2a284c52ad63",
"hidden": false,
"name": "Hui Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67872d3659aa2a284c52ad64",
"hidden": false,
"name": "Wangmeng Zuo",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-14T16:09:16 |
FramePainter: Endowing Interactive Image Editing with Video Diffusion
Priors
|
Interactive image editing allows users to modify images through visual
interaction operations such as drawing, clicking, and dragging. Existing
methods construct such supervision signals from videos, as they capture how
objects change with various physical interactions. However, these models are
usually built upon text-to-image diffusion models, and thus necessitate (i) massive
training samples and (ii) an additional reference encoder to learn real-world
dynamics and visual consistency. In this paper, we reformulate this task as an
image-to-video generation problem, so as to inherit powerful video diffusion
priors, reducing training costs and ensuring temporal consistency. Specifically,
we introduce FramePainter as an efficient instantiation of this formulation.
Initialized with Stable Video Diffusion, it only uses a lightweight sparse
control encoder to inject editing signals. Considering the limitations of
temporal attention in handling large motion between two frames, we further
propose matching attention to enlarge the receptive field while encouraging
dense correspondence between edited and source image tokens. We highlight the
effectiveness and efficiency of FramePainter across various editing signals:
it dominantly outperforms previous state-of-the-art methods with far less
training data, achieving highly seamless and coherent editing of images, e.g.,
automatically adjusting the reflection of the cup. Moreover, FramePainter also
exhibits exceptional generalization in scenarios not present in real-world
videos, e.g., transforming the clownfish into a shark-like shape. Our code will be
available at https://github.com/YBYBZhang/FramePainter.
| 18 |
67872d3a59aa2a284c52aed0
| null | null |
|
2025-01-14T22:13:07.257000 |
MangaNinja: Line Art Colorization with Precise Reference Following
| 3 |
{
"_id": "6479925ab77e18dbf640bd67",
"avatarUrl": "/avatars/bb52ecd22ca4b49157f8668be35409e7.svg",
"followerCount": 6,
"fullname": "Zhiheng Liu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Johanan0528",
"type": "user"
}
| true | null |
2501.08332
|
[
{
"_id": "678727aadd2e5dbecdf08fe3",
"hidden": false,
"name": "Zhiheng Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:42:29.526Z",
"user": {
"_id": "6479925ab77e18dbf640bd67",
"avatarUrl": "/avatars/bb52ecd22ca4b49157f8668be35409e7.svg",
"fullname": "Zhiheng Liu",
"isPro": false,
"type": "user",
"user": "Johanan0528"
}
},
{
"_id": "678727aadd2e5dbecdf08fe4",
"hidden": false,
"name": "Ka Leong Cheng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:42:34.938Z",
"user": {
"_id": "64acd2ec39fcfebff8c79c00",
"avatarUrl": "/avatars/9419384846b92182f2c47ce2fbd0f8d3.svg",
"fullname": "Ka Leong Cheng",
"isPro": false,
"type": "user",
"user": "felixcheng97"
}
},
{
"_id": "678727aadd2e5dbecdf08fe5",
"hidden": false,
"name": "Xi Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:42:44.702Z",
"user": {
"_id": "644a1b6401e18bf93a6f45c1",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/644a1b6401e18bf93a6f45c1/P0i_CgCrIzOS2tYRlxoE9.png",
"fullname": "xichen",
"isPro": false,
"type": "user",
"user": "xichenhku"
}
},
{
"_id": "678727aadd2e5dbecdf08fe6",
"hidden": false,
"name": "Jie Xiao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:43:12.886Z",
"user": {
"_id": "646b2767ec9a61e8717a41a7",
"avatarUrl": "/avatars/5503aefdfd6cddd6f83bf9fbcced4c90.svg",
"fullname": "jiexiao",
"isPro": false,
"type": "user",
"user": "jiexiao"
}
},
{
"_id": "678727aadd2e5dbecdf08fe7",
"hidden": false,
"name": "Hao Ouyang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678727aadd2e5dbecdf08fe8",
"hidden": false,
"name": "Kai Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678727aadd2e5dbecdf08fe9",
"hidden": false,
"name": "Yu Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678727aadd2e5dbecdf08fea",
"hidden": false,
"name": "Yujun Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "678727aadd2e5dbecdf08feb",
"hidden": false,
"name": "Qifeng Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:43:50.764Z",
"user": {
"_id": "6467b121e7a6a374fd19b44b",
"avatarUrl": "/avatars/3f2874d58986d651aef55e3408b05700.svg",
"fullname": "Qifeng Chen",
"isPro": false,
"type": "user",
"user": "cqf"
}
},
{
"_id": "678727aadd2e5dbecdf08fec",
"hidden": false,
"name": "Ping Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-14T18:59:55 |
MangaNinja: Line Art Colorization with Precise Reference Following
|
Derived from diffusion models, MangaNinja specializes in the task of
reference-guided line art colorization. We incorporate two thoughtful designs
to ensure precise character detail transcription, including a patch shuffling
module to facilitate correspondence learning between the reference color image
and the target line art, and a point-driven control scheme to enable
fine-grained color matching. Experiments on a self-collected benchmark
demonstrate the superiority of our model over current solutions in terms of
precise colorization. We further showcase the potential of the proposed
interactive point control in handling challenging cases such as cross-character
colorization and multi-reference harmonization, which are beyond the reach of
existing algorithms.
| 57 |
678727acdd2e5dbecdf09097
| null | null |
|
2025-01-14T22:04:35.473000 |
Diffusion Adversarial Post-Training for One-Step Video Generation
| 4 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.08316
|
[
{
"_id": "678725b38c1e7b6c4a69f88a",
"hidden": false,
"name": "Shanchuan Lin",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-15T08:48:34.462Z",
"user": {
"_id": "645863f7dc18eb1a9b5d29df",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/645863f7dc18eb1a9b5d29df/t49Nnyl4tbkUn7CmQqKZh.jpeg",
"fullname": "Peter Lin",
"isPro": false,
"type": "user",
"user": "PeterL1n"
}
},
{
"_id": "678725b38c1e7b6c4a69f88b",
"hidden": false,
"name": "Xin Xia",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-17T10:26:06.599Z",
"user": {
"_id": "63089a78ff78e2aead8d10e7",
"avatarUrl": "/avatars/f326fd08abee7e31599a78923be30003.svg",
"fullname": "XinXia",
"isPro": false,
"type": "user",
"user": "XiaXin-Aloys"
}
},
{
"_id": "678725b38c1e7b6c4a69f88c",
"hidden": false,
"name": "Yuxi Ren",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:46:42.832Z",
"user": {
"_id": "6618d5e83b412cdc85334ca8",
"avatarUrl": "/avatars/5fe356d58c4c822a60370dbee8d78a69.svg",
"fullname": "renyuxi",
"isPro": false,
"type": "user",
"user": "renyuxi"
}
},
{
"_id": "678725b38c1e7b6c4a69f88d",
"hidden": false,
"name": "Ceyuan Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:46:55.730Z",
"user": {
"_id": "64accf3fc1da7c4dbc69392d",
"avatarUrl": "/avatars/ec8ad2498f8a49a7a00c77a7f70d34bb.svg",
"fullname": "Ceyuan",
"isPro": false,
"type": "user",
"user": "Ceyuan"
}
},
{
"_id": "678725b38c1e7b6c4a69f88e",
"hidden": false,
"name": "Xuefeng Xiao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:47:01.595Z",
"user": {
"_id": "646b7f71df2609a541c1ab9f",
"avatarUrl": "/avatars/48b82e5fd9b06f41ff825507c36816cd.svg",
"fullname": "Xuefeng Xiao",
"isPro": false,
"type": "user",
"user": "xiaoxuefeng"
}
},
{
"_id": "678725b38c1e7b6c4a69f88f",
"hidden": false,
"name": "Lu Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-14T18:51:48 |
Diffusion Adversarial Post-Training for One-Step Video Generation
|
Diffusion models are widely used for image and video generation, but
their iterative generation process is slow and expensive. While existing
distillation approaches have demonstrated the potential for one-step generation
in the image domain, they still suffer from significant quality degradation. In
this work, we propose Adversarial Post-Training (APT) against real data
following diffusion pre-training for one-step video generation. To improve the
training stability and quality, we introduce several improvements to the model
architecture and training procedures, along with an approximated R1
regularization objective. Empirically, our experiments show that our
adversarial post-trained model, Seaweed-APT, can generate 2-second, 1280x720,
24fps videos in real time using a single forward evaluation step. Additionally,
our model is capable of generating 1024px images in a single step, achieving
quality comparable to state-of-the-art methods.
| 33 |
678725b68c1e7b6c4a69f911
| null | null |
|
2025-01-14T21:58:03.573000 |
A Multi-Modal AI Copilot for Single-Cell Analysis with Instruction Following
| 2 |
{
"_id": "620b3bbb0668e435407c8d0a",
"avatarUrl": "/avatars/e0fccbb2577d76088e09f054c35cffbc.svg",
"followerCount": 19,
"fullname": "Ningyu Zhang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Ningyu",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/620b3bbb0668e435407c8d0a/Y3Br4s8MMP0E14TrwYF7E.png"
] |
2501.08187
|
[
{
"_id": "67871cb6ce7f3eb12692b222",
"hidden": false,
"name": "Yin Fang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-15T08:48:58.033Z",
"user": {
"_id": "63d660ae44f1d8fbe585d463",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674993743489-noauth.jpeg",
"fullname": "Yin Fang",
"isPro": false,
"type": "user",
"user": "Fangyinfff"
}
},
{
"_id": "67871cb6ce7f3eb12692b223",
"hidden": false,
"name": "Xinle Deng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:47:14.066Z",
"user": {
"_id": "65cad52fd6c974694fc20b8e",
"avatarUrl": "/avatars/8232a7c5db590ed26751a47c45d481b8.svg",
"fullname": "Xinle Deng",
"isPro": false,
"type": "user",
"user": "Linear-Matrix-Probability"
}
},
{
"_id": "67871cb6ce7f3eb12692b224",
"hidden": false,
"name": "Kangwei Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:47:20.167Z",
"user": {
"_id": "63e5fbbb2d2c508de9f77550",
"avatarUrl": "/avatars/f44d205bac9ea0e4f481771c0bdfbc96.svg",
"fullname": "kangwei liu",
"isPro": false,
"type": "user",
"user": "sadgaj"
}
},
{
"_id": "67871cb6ce7f3eb12692b225",
"hidden": false,
"name": "Ningyu Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-15T08:48:54.355Z",
"user": {
"_id": "620b3bbb0668e435407c8d0a",
"avatarUrl": "/avatars/e0fccbb2577d76088e09f054c35cffbc.svg",
"fullname": "Ningyu Zhang",
"isPro": false,
"type": "user",
"user": "Ningyu"
}
},
{
"_id": "67871cb6ce7f3eb12692b226",
"hidden": false,
"name": "Jingyang Qian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871cb6ce7f3eb12692b227",
"hidden": true,
"name": "Penghui Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:47:33.365Z",
"user": {
"_id": "6508463c423b46492eec64e2",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6508463c423b46492eec64e2/WSU7NSqjk92Pr2xUIWjCk.png",
"fullname": "Penghui Yang",
"isPro": false,
"type": "user",
"user": "phyang"
}
},
{
"_id": "67871cb6ce7f3eb12692b228",
"hidden": false,
"name": "Xiaohui Fan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871cb6ce7f3eb12692b229",
"hidden": false,
"name": "Huajun Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:47:53.920Z",
"user": {
"_id": "64931296137833d7ec7689cd",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64931296137833d7ec7689cd/TBihNdp1ZwIWjhfAWjRr6.jpeg",
"fullname": "Huajun Chen",
"isPro": false,
"type": "user",
"user": "huajunsir"
}
}
] | 2025-01-14T15:12:19 |
A Multi-Modal AI Copilot for Single-Cell Analysis with Instruction
Following
|
Large language models excel at interpreting complex natural language
instructions, enabling them to perform a wide range of tasks. In the life
sciences, single-cell RNA sequencing (scRNA-seq) data serves as the "language
of cellular biology", capturing intricate gene expression patterns at the
single-cell level. However, interacting with this "language" through
conventional tools is often inefficient and unintuitive, posing challenges for
researchers. To address these limitations, we present InstructCell, a
multi-modal AI copilot that leverages natural language as a medium for more
direct and flexible single-cell analysis. We construct a comprehensive
multi-modal instruction dataset that pairs text-based instructions with
scRNA-seq profiles from diverse tissues and species. Building on this, we
develop a multi-modal cell language architecture capable of simultaneously
interpreting and processing both modalities. InstructCell empowers researchers
to accomplish critical tasks-such as cell type annotation, conditional
pseudo-cell generation, and drug sensitivity prediction-using straightforward
natural language commands. Extensive evaluations demonstrate that InstructCell
consistently meets or exceeds the performance of existing single-cell
foundation models, while adapting to diverse experimental conditions. More
importantly, InstructCell provides an accessible and intuitive tool for
exploring complex single-cell data, lowering technical barriers and enabling
deeper biological insights.
| 24 |
67871cbcce7f3eb12692b37e
| null | null |
|
2025-01-14T21:48:31.318000 |
MiniMax-01: Scaling Foundation Models with Lightning Attention
| 6 |
{
"_id": "642e4d4d6748dd4f8eeb7732",
"avatarUrl": "/avatars/fd911e9143d1a7aedd21a7d611543fcc.svg",
"followerCount": 6,
"fullname": "Xuyang Shen",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Ryan1122",
"type": "user"
}
| true | null |
2501.08313
|
[
{
"_id": "67871e6ef492fb2235af8978",
"hidden": false,
"name": "MiniMax",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:38:32.547Z",
"user": {
"_id": "676e38ad04af5bec20bc9faf",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/676e38ad04af5bec20bc9faf/AG8Q9wAUzGtPWyjd5QO2l.jpeg",
"fullname": "MiniMax",
"isPro": false,
"type": "user",
"user": "MiniMax-AI"
}
},
{
"_id": "67871e6ef492fb2235af8979",
"hidden": false,
"name": "Aonian Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af897a",
"hidden": false,
"name": "Bangwei Gong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af897b",
"hidden": false,
"name": "Bo Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af897c",
"hidden": false,
"name": "Boji Shan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:37:05.624Z",
"user": {
"_id": "656617f5b0e94dc742d5a8ad",
"avatarUrl": "/avatars/ea43dbdfdc0e880efa695632367f5608.svg",
"fullname": "Boji Shan",
"isPro": false,
"type": "user",
"user": "kamuy-shennai"
}
},
{
"_id": "67871e6ef492fb2235af897d",
"hidden": false,
"name": "Chang Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:37:12.282Z",
"user": {
"_id": "6638574ec78619ba63879031",
"avatarUrl": "/avatars/c9c95f129abfe7614493e926e7a7e971.svg",
"fullname": "Chang Liu",
"isPro": false,
"type": "user",
"user": "changliu01"
}
},
{
"_id": "67871e6ef492fb2235af897e",
"hidden": false,
"name": "Cheng Zhu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:37:18.159Z",
"user": {
"_id": "644b2ae13a619fe72b1806f9",
"avatarUrl": "/avatars/ceee1779b26792ffd28c6e9a23d3439a.svg",
"fullname": "chengzhu",
"isPro": false,
"type": "user",
"user": "chengzhu"
}
},
{
"_id": "67871e6ef492fb2235af897f",
"hidden": false,
"name": "Chunhao Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-15T08:48:51.874Z",
"user": {
"_id": "642662fa22bddcea3d289f0a",
"avatarUrl": "/avatars/9b28e1325d866a24d33fdfafcaa85c4b.svg",
"fullname": "Enoch Zhang",
"isPro": false,
"type": "user",
"user": "enochzhang"
}
},
{
"_id": "67871e6ef492fb2235af8980",
"hidden": false,
"name": "Congchao Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af8981",
"hidden": false,
"name": "Da Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af8982",
"hidden": false,
"name": "Dong Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-16T08:31:45.832Z",
"user": {
"_id": "61cd713f78c1c6b489a231fc",
"avatarUrl": "/avatars/af261b85bbd10a1182372ffc459640b8.svg",
"fullname": "Dong Li",
"isPro": false,
"type": "user",
"user": "liddalidd"
}
},
{
"_id": "67871e6ef492fb2235af8983",
"hidden": false,
"name": "Enwei Jiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af8984",
"hidden": false,
"name": "Gengxin Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af8985",
"hidden": false,
"name": "Guojun Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:00:15.292Z",
"user": {
"_id": "6511d02684bedffb5aa9509e",
"avatarUrl": "/avatars/ed4f546f4fef206a71227513995d4d72.svg",
"fullname": "Guojun Zhang",
"isPro": false,
"type": "user",
"user": "gordon-z"
}
},
{
"_id": "67871e6ef492fb2235af8986",
"hidden": false,
"name": "Haohai Sun",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:03:56.423Z",
"user": {
"_id": "64b76660f92b20f7a37c3df7",
"avatarUrl": "/avatars/40158717bb9370f1e5d0ed156a6fed1f.svg",
"fullname": "HaohaiSun",
"isPro": false,
"type": "user",
"user": "HaohaiSun"
}
},
{
"_id": "67871e6ef492fb2235af8987",
"hidden": false,
"name": "Houze Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af8988",
"hidden": false,
"name": "Jiadai Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af8989",
"hidden": false,
"name": "Jiaqi Zhuang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af898a",
"hidden": false,
"name": "Jiayuan Song",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af898b",
"hidden": false,
"name": "Jin Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af898c",
"hidden": false,
"name": "Jingtao Han",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af898d",
"hidden": false,
"name": "Jingyang Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af898e",
"hidden": false,
"name": "Junbin Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af898f",
"hidden": false,
"name": "Junhao Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af8990",
"hidden": false,
"name": "Junjie Yan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:03:32.671Z",
"user": {
"_id": "63390ce41718795719635b1e",
"avatarUrl": "/avatars/ad03a2b349f01c1ac1fedfb95d02d43e.svg",
"fullname": "JunjieYan",
"isPro": false,
"type": "user",
"user": "JunjieYan"
}
},
{
"_id": "67871e6ef492fb2235af8991",
"hidden": false,
"name": "Kaishun Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af8992",
"hidden": false,
"name": "Kecheng Xiao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:02:25.327Z",
"user": {
"_id": "634788bc6f8773f2a28e0eb5",
"avatarUrl": "/avatars/463af54e7a9279ed93e58b403c6197ca.svg",
"fullname": "kecheng xiao",
"isPro": false,
"type": "user",
"user": "Drakos"
}
},
{
"_id": "67871e6ef492fb2235af8993",
"hidden": false,
"name": "Kexi Kang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af8994",
"hidden": false,
"name": "Le Han",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af8995",
"hidden": false,
"name": "Leyang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af8996",
"hidden": false,
"name": "Lianfei Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af8997",
"hidden": false,
"name": "Liheng Feng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af8998",
"hidden": false,
"name": "Lin Zheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af8999",
"hidden": false,
"name": "Linbo Chai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af899a",
"hidden": false,
"name": "Long Xing",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af899b",
"hidden": false,
"name": "Meizhi Ju",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af899c",
"hidden": false,
"name": "Mingyuan Chi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af899d",
"hidden": false,
"name": "Mozhi Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af899e",
"hidden": false,
"name": "Peikai Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af899f",
"hidden": false,
"name": "Pengcheng Niu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89a0",
"hidden": false,
"name": "Pengfei Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89a1",
"hidden": false,
"name": "Pengyu Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89a2",
"hidden": false,
"name": "Qi Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89a3",
"hidden": false,
"name": "Qidi Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89a4",
"hidden": false,
"name": "Qiexiang Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:01:58.949Z",
"user": {
"_id": "665897f1df3aa79d9bb271a4",
"avatarUrl": "/avatars/f78e7322e6d7e7f6de5c50f8c87819dc.svg",
"fullname": "Qiexiang Wang",
"isPro": false,
"type": "user",
"user": "wzyxwqx"
}
},
{
"_id": "67871e6ef492fb2235af89a5",
"hidden": false,
"name": "Qin Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89a6",
"hidden": false,
"name": "Qiuhui Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89a7",
"hidden": false,
"name": "Ruitao Leng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89a8",
"hidden": false,
"name": "Shengmin Shi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89a9",
"hidden": false,
"name": "Shuqi Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89aa",
"hidden": false,
"name": "Sichen Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89ab",
"hidden": false,
"name": "Songquan Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89ac",
"hidden": false,
"name": "Tao Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89ad",
"hidden": false,
"name": "Tianrun Liang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-15T16:38:47.564Z",
"user": {
"_id": "667646b9d5ac75b5915003b2",
"avatarUrl": "/avatars/44f8c3be66feae5fc828e3e23e52989e.svg",
"fullname": "liangtianrun",
"isPro": false,
"type": "user",
"user": "andachi"
}
},
{
"_id": "67871e6ef492fb2235af89ae",
"hidden": false,
"name": "Weigao Sun",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:26:18.454Z",
"user": {
"_id": "6246bb33da617c00b48e4d92",
"avatarUrl": "/avatars/0304a9f6eb7f5dee4d933d03222f94e9.svg",
"fullname": "Weigao Sun",
"isPro": false,
"type": "user",
"user": "weigao266"
}
},
{
"_id": "67871e6ef492fb2235af89af",
"hidden": false,
"name": "Weixuan Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89b0",
"hidden": false,
"name": "Weiyu Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89b1",
"hidden": false,
"name": "Wenkai Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89b2",
"hidden": false,
"name": "Xiangjun Song",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89b3",
"hidden": false,
"name": "Xiao Su",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89b4",
"hidden": false,
"name": "Xiaodong Han",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-15T08:48:47.808Z",
"user": {
"_id": "64638bddc615cbc12447f9f1",
"avatarUrl": "/avatars/52910d1717451ff983f745322e5850dd.svg",
"fullname": "Xiaodong Han",
"isPro": false,
"type": "user",
"user": "Hannnnnxd"
}
},
{
"_id": "67871e6ef492fb2235af89b5",
"hidden": false,
"name": "Xinjie Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89b6",
"hidden": false,
"name": "Xinzhu Hou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89b7",
"hidden": false,
"name": "Xu Min",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89b8",
"hidden": false,
"name": "Xun Zou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89b9",
"hidden": false,
"name": "Xuyang Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89ba",
"hidden": false,
"name": "Yan Gong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89bb",
"hidden": false,
"name": "Yingjie Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89bc",
"hidden": false,
"name": "Yipeng Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89bd",
"hidden": false,
"name": "Yiran Zhong",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-17T10:26:51.780Z",
"user": {
"_id": "64c525e4d68946edad6c7067",
"avatarUrl": "/avatars/1b108661634af602717a4ab4b66a151f.svg",
"fullname": "Yiran Zhong",
"isPro": false,
"type": "user",
"user": "IanZhong"
}
},
{
"_id": "67871e6ef492fb2235af89be",
"hidden": false,
"name": "Yongyi Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89bf",
"hidden": false,
"name": "Yuanxiang Fan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-16T09:00:47.428Z",
"user": {
"_id": "64813e4b3fb124fc98503a7e",
"avatarUrl": "/avatars/f871b7d84f04f827041d4a23cb1cdc9f.svg",
"fullname": "Yuanxiang Fan",
"isPro": false,
"type": "user",
"user": "ShiroFFF"
}
},
{
"_id": "67871e6ef492fb2235af89c0",
"hidden": false,
"name": "Yue Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89c1",
"hidden": false,
"name": "Yufeng Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89c2",
"hidden": false,
"name": "Yuhao Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89c3",
"hidden": false,
"name": "Yunan Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89c4",
"hidden": false,
"name": "Yunji Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89c5",
"hidden": false,
"name": "Yunpeng Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89c6",
"hidden": false,
"name": "Yunzhi Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89c7",
"hidden": false,
"name": "Yuxin Mao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89c8",
"hidden": false,
"name": "Zehan Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89c9",
"hidden": false,
"name": "Zekang Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89ca",
"hidden": false,
"name": "Zewei Tao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89cb",
"hidden": false,
"name": "Zewen Ying",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89cc",
"hidden": false,
"name": "Zhaoyang Cong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89cd",
"hidden": false,
"name": "Zhen Qin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89ce",
"hidden": false,
"name": "Zhenhua Fan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89cf",
"hidden": false,
"name": "Zhihang Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89d0",
"hidden": false,
"name": "Zhuo Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67871e6ef492fb2235af89d1",
"hidden": false,
"name": "Zijia Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-14T18:50:05 |
MiniMax-01: Scaling Foundation Models with Lightning Attention
|
We introduce the MiniMax-01 series, including MiniMax-Text-01 and MiniMax-VL-01,
which are comparable to top-tier models while offering superior capabilities in
processing longer contexts. The core innovation lies in lightning attention and
its efficient scaling. To maximize computational capacity, we integrate it with
Mixture of Experts (MoE), creating a model with 32 experts and 456 billion
total parameters, of which 45.9 billion are activated for each token. We
develop an optimized parallel strategy and highly efficient
computation-communication overlap techniques for MoE and lightning attention.
This approach enables us to conduct efficient training and inference on models
with hundreds of billions of parameters across contexts spanning millions of
tokens. The context window of MiniMax-Text-01 can reach up to 1 million tokens
during training and extrapolate to 4 million tokens during inference at an
affordable cost. Our vision-language model, MiniMax-VL-01, is built through
continued training with 512 billion vision-language tokens. Experiments on both
standard and in-house benchmarks show that our models match the performance of
state-of-the-art models like GPT-4o and Claude-3.5-Sonnet while offering a 20-32
times longer context window. We publicly release MiniMax-01 at
https://github.com/MiniMax-AI.
| 274 |
67871e6ff492fb2235af8a6a
| null | null |
|
2025-01-14T09:41:28.476000 |
Evaluating Sample Utility for Data Selection by Mimicking Model Weights
| 2 |
{
"_id": "659590c8994d0ef581417913",
"avatarUrl": "/avatars/cef10191f915a99583a69e9e0ac8a330.svg",
"followerCount": 2,
"fullname": "Manjot Bilkhu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "mbilkhu",
"type": "user"
}
| true | null |
2501.06708
|
[
{
"_id": "6786749192a268ed83174bfd",
"hidden": false,
"name": "Tzu-Heng Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6786749192a268ed83174bfe",
"hidden": false,
"name": "Manjot Bilkhu",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-01-14T14:45:33.269Z",
"user": {
"_id": "659590c8994d0ef581417913",
"avatarUrl": "/avatars/cef10191f915a99583a69e9e0ac8a330.svg",
"fullname": "Manjot Bilkhu",
"isPro": false,
"type": "user",
"user": "mbilkhu"
}
},
{
"_id": "6786749192a268ed83174bff",
"hidden": false,
"name": "Frederic Sala",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6786749192a268ed83174c00",
"hidden": false,
"name": "Javier Movellan",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-12T04:28:14 |
Evaluating Sample Utility for Data Selection by Mimicking Model Weights
|
Foundation models rely on large-scale web-crawled datasets, which frequently
contain noisy data, biases, and irrelevant content. Existing data selection
techniques typically use human heuristics, downstream evaluation datasets, or
specialized scoring models, and can overlook samples' utility in the training
process. Instead, we propose a new approach, Mimic Score, a data quality metric
that uses a pretrained reference model as a guide to assess the usefulness of
data samples for training a new model. It relies on the alignment between the
gradient of the new model parameters and the vector pointing toward the
reference model in weight space. Samples that misalign with this direction are
considered low-value and can be filtered out. Motivated by the Mimic Score, we
develop Grad-Mimic, a data selection framework that identifies and prioritizes
useful samples, automating the selection process to create effective filters.
Empirically, using Mimic scores to guide model training results in consistent
performance gains across six image datasets and enhances the performance of
CLIP models. Moreover, Mimic scores and their associated filters improve upon
existing filtering methods and offer accurate estimation of dataset quality.
| 5 |
6786749292a268ed83174c42
| null | null |