publishedAt: timestamp[ns]
title: string
thumbnail: string
numComments: int64
submittedBy: dict
isAuthorParticipating: bool
mediaUrls: list
paper_id: string
paper_authors: list
paper_publishedAt: timestamp[ns]
paper_title: string
paper_summary: string
paper_upvotes: int64
paper_discussionId: string
paper_projectPage: string
paper_githubRepo: string
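The schema above describes one row per featured paper on the daily-papers feed. A minimal sketch of working with such rows, assuming a hypothetical Parquet export named daily_papers.parquet:

```python
# Minimal sketch: load the daily-papers rows described by the schema above.
# Assumes a hypothetical Parquet export named "daily_papers.parquet".
import pandas as pd

df = pd.read_parquet("daily_papers.parquet")

# Columns mirror the schema: timestamps, strings, ints, and nested values.
df["publishedAt"] = pd.to_datetime(df["publishedAt"])

# Rank the day's papers by upvotes and show the basics.
top = df.sort_values("paper_upvotes", ascending=False)
print(top[["paper_id", "paper_title", "paper_upvotes", "numComments"]].head())

# Nested fields such as submittedBy (dict) and paper_authors (list) stay as
# Python objects; e.g., count authors per paper:
df["n_authors"] = df["paper_authors"].apply(len)
```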
2025-01-02T23:09:43.423000
VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM
https://cdn-thumbnails.h…s/2501.00599.png
2
{ "_id": "64a3fe3dde901eb01df12398", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64a3fe3dde901eb01df12398/Js2bEx4rxKuEKVt5z9I2D.jpeg", "followerCount": 3, "fullname": "YuqianYuan", "isHf": false, "isMod": false, "isPro": false, "name": "CircleRadon", "type": "user" }
false
[ "https://cdn-uploads.huggingface.co/production/uploads/64a3fe3dde901eb01df12398/FXY8u9gsbbaE2-0k8DH9r.mp4" ]
2501.00599
[ { "_id": "677761253c2cb54a3ac7918e", "hidden": false, "name": "Yuqian Yuan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677761253c2cb54a3ac7918f", "hidden": false, "name": "Hang Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677761253c2cb54a3ac79190", "hidden": false, "name": "Wentong Li", "status": "claimed_verified", "statusLastChangedAt": "2025-01-03T14:03:13.776Z", "user": { "_id": "64c48a78d07620bdc99777d4", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64c48a78d07620bdc99777d4/NJC4Ot0a7YSdU5RC6dgga.jpeg", "fullname": "LI WENTONG", "isPro": false, "type": "user", "user": "sunshine-lwt" } }, { "_id": "677761253c2cb54a3ac79191", "hidden": false, "name": "Zesen Cheng", "status": "admin_assigned", "statusLastChangedAt": "2025-01-06T14:02:16.139Z", "user": { "_id": "65b2529285b6c21448a10d65", "avatarUrl": "/avatars/1b09e2742aecce1bbdc57f0c4504cf38.svg", "fullname": "Zesen Cheng", "isPro": false, "type": "user", "user": "ClownRat" } }, { "_id": "677761253c2cb54a3ac79192", "hidden": false, "name": "Boqiang Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677761253c2cb54a3ac79193", "hidden": false, "name": "Long Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677761253c2cb54a3ac79194", "hidden": false, "name": "Xin Li", "status": "claimed_verified", "statusLastChangedAt": "2025-01-05T23:02:45.998Z", "user": { "_id": "63913b120cf6b11c487ca31d", "avatarUrl": "/avatars/aec44edd5470dd6e767e0a25efd6fb5d.svg", "fullname": "Xin Li", "isPro": true, "type": "user", "user": "lixin4ever" } }, { "_id": "677761253c2cb54a3ac79195", "hidden": false, "name": "Deli Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677761253c2cb54a3ac79196", "hidden": false, "name": "Wenqiao Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677761253c2cb54a3ac79197", "hidden": false, "name": "Yueting Zhuang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677761253c2cb54a3ac79198", "hidden": false, "name": "Jianke Zhu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677761253c2cb54a3ac79199", "hidden": false, "name": "Lidong Bing", "status": "claimed_verified", "statusLastChangedAt": "2025-01-05T23:02:48.302Z", "user": { "_id": "6454685a548f22be598414c4", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/eMjMWKJ-AouF7eY1-RzGF.jpeg", "fullname": "Lidong Bing", "isPro": false, "type": "user", "user": "LidongBing" } } ]
2024-12-31T18:56:46
VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM
Video Large Language Models (Video LLMs) have recently exhibited remarkable capabilities in general video understanding. However, they mainly focus on holistic comprehension and struggle with capturing fine-grained spatial and temporal details. Moreover, the lack of high-quality object-level video instruction data and of a comprehensive benchmark further hinders their advancement. To tackle these challenges, we introduce the VideoRefer Suite to empower Video LLMs with finer-level spatial-temporal video understanding, i.e., enabling perception and reasoning on any object throughout the video. Specifically, we develop the VideoRefer Suite across three essential aspects: dataset, model, and benchmark. First, we introduce a multi-agent data engine to meticulously curate a large-scale, high-quality object-level video instruction dataset, termed VideoRefer-700K. Next, we present the VideoRefer model, which is equipped with a versatile spatial-temporal object encoder to capture precise regional and sequential representations. Finally, we create VideoRefer-Bench to comprehensively assess the spatial-temporal understanding capability of a Video LLM across various aspects. Extensive experiments and analyses demonstrate that our VideoRefer model not only achieves promising performance on video referring benchmarks but also facilitates general video understanding capabilities.
41
677761283c2cb54a3ac79251
null
null
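The VideoRefer abstract above centers on a spatial-temporal object encoder that turns per-frame region features into object tokens. Below is a minimal sketch of one plausible reading (masked average pooling per frame, then temporal aggregation); the module name, shapes, and layer choices are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SpatialTemporalObjectEncoder(nn.Module):
    """Sketch: pool object-masked features per frame, then aggregate over time."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.temporal = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )

    def forward(self, feats: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
        # feats: (T, C, H, W) per-frame features; masks: (T, H, W) object masks.
        w = masks.unsqueeze(1)                                    # (T, 1, H, W)
        pooled = (feats * w).sum((2, 3)) / w.sum((2, 3)).clamp(min=1e-6)  # (T, C)
        return self.temporal(pooled.unsqueeze(0)).mean(dim=1)    # (1, C) object token

enc = SpatialTemporalObjectEncoder()
tok = enc(torch.randn(8, 256, 14, 14), torch.rand(8, 14, 14).round())
print(tok.shape)  # torch.Size([1, 256])
```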
2025-01-02T22:59:12.319000
Dynamic Scaling of Unit Tests for Code Reward Modeling
https://cdn-thumbnails.h…s/2501.01054.png
2
{ "_id": "6384c07fdfffab4824ff45fb", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1669644372381-noauth.jpeg", "followerCount": 2, "fullname": "Zeyao Ma", "isHf": false, "isMod": false, "isPro": false, "name": "KAKA22", "type": "user" }
true
null
2501.01054
[ { "_id": "67774afb5c6bccc41be7ba19", "hidden": false, "name": "Zeyao Ma", "status": "extracted_pending", "statusLastChangedAt": "2025-01-03T02:27:07.916Z", "user": { "_id": "6384c07fdfffab4824ff45fb", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1669644372381-noauth.jpeg", "fullname": "Zeyao Ma", "isPro": false, "type": "user", "user": "KAKA22" } }, { "_id": "67774afb5c6bccc41be7ba1a", "hidden": false, "name": "Xiaokang Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67774afb5c6bccc41be7ba1b", "hidden": false, "name": "Jing Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67774afb5c6bccc41be7ba1c", "hidden": false, "name": "Jifan Yu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67774afb5c6bccc41be7ba1d", "hidden": false, "name": "Sijia Luo", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67774afb5c6bccc41be7ba1e", "hidden": false, "name": "Jie Tang", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-01-02T04:33:31
Dynamic Scaling of Unit Tests for Code Reward Modeling
Current large language models (LLMs) often struggle to produce accurate responses on the first attempt for complex reasoning tasks like code generation. Prior research tackles this challenge by generating multiple candidate solutions and validating them with LLM-generated unit tests; the execution results of the unit tests serve as reward signals to identify correct solutions. However, since LLMs often make mistakes confidently, these unit tests are not reliable, which diminishes the quality of the reward signals. Motivated by the observation that scaling the number of solutions improves LLM performance, we explore the impact of scaling unit tests to enhance reward signal quality. Our pilot experiment reveals a positive correlation between the number of unit tests and reward signal quality, with greater benefits observed on more challenging problems. Based on these insights, we propose CodeRM-8B, a lightweight yet effective unit test generator that enables efficient, high-quality unit test scaling. Additionally, we implement a dynamic scaling mechanism that adapts the number of unit tests to problem difficulty, further improving efficiency. Experimental results show that our approach significantly improves performance across various models on three benchmarks (e.g., gains of 18.43% for Llama3-8B and 3.42% for GPT-4o-mini on HumanEval Plus).
17
67774afb5c6bccc41be7ba62
null
null
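The unit-test-as-reward idea in the abstract above reduces to a concrete scoring loop: run each candidate solution against the generated tests and reward it by its pass rate. A toy sketch, with hand-written candidates and tests standing in for LLM outputs; real systems sandbox the execution rather than calling exec() in-process.

```python
# Sketch of unit-test-based reward modeling: score each candidate solution by
# the fraction of (possibly noisy) generated unit tests it passes, then pick
# the best. Toy version; untrusted code should run in a sandbox, not exec().
candidates = [
    "def add(a, b): return a + b",       # correct
    "def add(a, b): return a - b",       # wrong
]
unit_tests = [
    "assert add(2, 3) == 5",
    "assert add(-1, 1) == 0",
    "assert add(0, 0) == 0",
]

def reward(solution: str) -> float:
    passed = 0
    for test in unit_tests:
        env: dict = {}
        try:
            exec(solution, env)          # define the candidate function
            exec(test, env)              # run one generated unit test
            passed += 1
        except Exception:
            pass
    return passed / len(unit_tests)

best = max(candidates, key=reward)
print(best, reward(best))  # the correct add() passes all 3 tests
```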
2025-01-02T22:41:28.381000
VideoAnydoor: High-fidelity Video Object Insertion with Precise Motion Control
https://cdn-thumbnails.h…s/2501.01427.png
3
{ "_id": "644a1b6401e18bf93a6f45c1", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/644a1b6401e18bf93a6f45c1/P0i_CgCrIzOS2tYRlxoE9.png", "followerCount": 41, "fullname": "xichen", "isHf": false, "isMod": false, "isPro": false, "name": "xichenhku", "type": "user" }
true
null
2501.01427
[ { "_id": "677752371ab3b33411033089", "hidden": false, "name": "Yuanpeng Tu", "status": "claimed_verified", "statusLastChangedAt": "2025-01-03T14:03:30.344Z", "user": { "_id": "65fd2da40e543c5a84586eb5", "avatarUrl": "/avatars/ebb4e4bda4b025f167fb9fb4099e4cfd.svg", "fullname": "yuanpeng", "isPro": false, "type": "user", "user": "Tuyuanpeng" } }, { "_id": "677752371ab3b3341103308a", "hidden": false, "name": "Hao Luo", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677752371ab3b3341103308b", "hidden": false, "name": "Xi Chen", "status": "claimed_verified", "statusLastChangedAt": "2025-01-05T23:02:56.258Z", "user": { "_id": "644a1b6401e18bf93a6f45c1", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/644a1b6401e18bf93a6f45c1/P0i_CgCrIzOS2tYRlxoE9.png", "fullname": "xichen", "isPro": false, "type": "user", "user": "xichenhku" } }, { "_id": "677752371ab3b3341103308c", "hidden": false, "name": "Sihui Ji", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677752371ab3b3341103308d", "hidden": false, "name": "Xiang Bai", "status": "admin_assigned", "statusLastChangedAt": "2025-01-06T14:01:17.737Z", "user": { "_id": "641790e2f1e86908935d82a0", "avatarUrl": "/avatars/ced7a137c6344c74b7ac0d5c84833fc8.svg", "fullname": "Xiang Bai", "isPro": false, "type": "user", "user": "baixianger" } }, { "_id": "677752371ab3b3341103308e", "hidden": false, "name": "Hengshuang Zhao", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-01-02T18:59:54
VideoAnydoor: High-fidelity Video Object Insertion with Precise Motion Control
Despite significant advancements in video generation, inserting a given object into videos remains a challenging task. The difficulty lies in preserving the appearance details of the reference object and accurately modeling coherent motions at the same time. In this paper, we propose VideoAnydoor, a zero-shot video object insertion framework with high-fidelity detail preservation and precise motion control. Starting from a text-to-video model, we utilize an ID extractor to inject the global identity and leverage a box sequence to control the overall motion. To preserve the detailed appearance and meanwhile support fine-grained motion control, we design a pixel warper. It takes the reference image with arbitrary key-points and the corresponding key-point trajectories as inputs. It warps the pixel details according to the trajectories and fuses the warped features with the diffusion U-Net, thus improving detail preservation and supporting users in manipulating the motion trajectories. In addition, we propose a training strategy involving both videos and static images with a reweight reconstruction loss to enhance insertion quality. VideoAnydoor demonstrates significant superiority over existing methods and naturally supports various downstream applications (e.g., talking head generation, video virtual try-on, multi-region editing) without task-specific fine-tuning.
51
6777523c1ab3b3341103325a
null
null
2025-01-02T22:28:45.077000
A3: Android Agent Arena for Mobile GUI Agents
https://cdn-thumbnails.h…s/2501.01149.png
3
{ "_id": "6458ce236fa580137af5aa95", "avatarUrl": "/avatars/db65a7332e375eb5daad5c1b076b1e3b.svg", "followerCount": 1, "fullname": "Yuxiang Chai", "isHf": false, "isMod": false, "isPro": false, "name": "Yuxiang007", "type": "user" }
true
null
2501.01149
[ { "_id": "6777587d7fcef9e7b225cbae", "hidden": false, "name": "Yuxiang Chai", "status": "claimed_verified", "statusLastChangedAt": "2025-01-03T14:03:27.243Z", "user": { "_id": "6458ce236fa580137af5aa95", "avatarUrl": "/avatars/db65a7332e375eb5daad5c1b076b1e3b.svg", "fullname": "Yuxiang Chai", "isPro": false, "type": "user", "user": "Yuxiang007" } }, { "_id": "6777587d7fcef9e7b225cbaf", "hidden": false, "name": "Hanhao Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6777587d7fcef9e7b225cbb0", "hidden": false, "name": "Jiayu Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6777587d7fcef9e7b225cbb1", "hidden": false, "name": "Liang Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6777587d7fcef9e7b225cbb2", "hidden": false, "name": "Guozhi Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6777587d7fcef9e7b225cbb3", "hidden": false, "name": "Shuai Ren", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6777587d7fcef9e7b225cbb4", "hidden": false, "name": "Siyuan Huang", "status": "claimed_verified", "statusLastChangedAt": "2025-01-05T23:02:52.150Z", "user": { "_id": "634e4120038b5879133552f5", "avatarUrl": "/avatars/34ec861b4bbf1aecf927a7d6e726c7a4.svg", "fullname": "Siyuan", "isPro": true, "type": "user", "user": "SiyuanH" } }, { "_id": "6777587d7fcef9e7b225cbb5", "hidden": false, "name": "Hongsheng Li", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-01-02T09:03:56
A3: Android Agent Arena for Mobile GUI Agents
AI agents have become increasingly prevalent in recent years, driven by significant advancements in the field of large language models (LLMs). Mobile GUI agents, a subset of AI agents, are designed to autonomously perform tasks on mobile devices. While numerous studies have introduced agents, datasets, and benchmarks to advance mobile GUI agent research, many existing datasets focus on static frame evaluations and fail to provide a comprehensive platform for assessing performance on real-world, in-the-wild tasks. To address this gap, we present Android Agent Arena (A3), a novel evaluation platform. Unlike existing in-the-wild systems, A3 offers: (1) meaningful and practical tasks, such as real-time online information retrieval and operational instructions; (2) a larger, more flexible action space, enabling compatibility with agents trained on any dataset; and (3) an automated, business-level, LLM-based evaluation process. A3 includes 21 widely used, general-purpose third-party apps and 201 tasks representative of common user scenarios, providing a robust foundation for evaluating mobile GUI agents in real-world situations, together with a new autonomous evaluation process that requires less human labor and coding expertise. The project is available at https://yuxiangchai.github.io/Android-Agent-Arena/.
22
677758817fcef9e7b225cf0a
null
null
2025-01-02T22:27:32.841000
MLLM-as-a-Judge for Image Safety without Human Labeling
https://cdn-thumbnails.h…s/2501.00192.png
2
{ "_id": "64dfcc62e8b6f3f3baa950e0", "avatarUrl": "/avatars/21bbff67d46c08044efe2406575aa77e.svg", "followerCount": null, "fullname": "Zhenting Wang", "isHf": false, "isMod": false, "isPro": false, "name": "ztwang", "type": "user" }
true
null
2501.00192
[ { "_id": "6777591138f9a731d4f575bb", "hidden": false, "name": "Zhenting Wang", "status": "claimed_verified", "statusLastChangedAt": "2025-01-03T14:03:15.648Z", "user": { "_id": "64dfcc62e8b6f3f3baa950e0", "avatarUrl": "/avatars/21bbff67d46c08044efe2406575aa77e.svg", "fullname": "Zhenting Wang", "isPro": false, "type": "user", "user": "ztwang" } }, { "_id": "6777591138f9a731d4f575bc", "hidden": false, "name": "Shuming Hu", "status": "admin_assigned", "statusLastChangedAt": "2025-01-06T14:53:03.995Z", "user": { "_id": "6563c6846ef2a1d0f989f11b", "avatarUrl": "/avatars/b03f6680ac3e718a1c38fec59a211b62.svg", "fullname": "Shuming Hu", "isPro": false, "type": "user", "user": "shumingh" } }, { "_id": "6777591138f9a731d4f575bd", "hidden": false, "name": "Shiyu Zhao", "status": "admin_assigned", "statusLastChangedAt": "2025-01-06T14:53:11.798Z", "user": { "_id": "667caf5baeba4a9f63860cf3", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/667caf5baeba4a9f63860cf3/p4D4IXN1JgHwI8RGe5p6F.jpeg", "fullname": "Shiyu Zhao", "isPro": false, "type": "user", "user": "xiaofeng-94" } }, { "_id": "6777591138f9a731d4f575be", "hidden": false, "name": "Xiaowen Lin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6777591138f9a731d4f575bf", "hidden": false, "name": "Felix Juefei-Xu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6777591138f9a731d4f575c0", "hidden": false, "name": "Zhuowei Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6777591138f9a731d4f575c1", "hidden": false, "name": "Ligong Han", "status": "admin_assigned", "statusLastChangedAt": "2025-01-06T14:54:08.096Z", "user": { "_id": "631e56ecf318ed8dfd3738cc", "avatarUrl": "/avatars/df4f71ac795cb8c251db7f30529dc2f8.svg", "fullname": "Ligong Han", "isPro": false, "type": "user", "user": "ligongh" } }, { "_id": "6777591138f9a731d4f575c2", "hidden": false, "name": "Harihar Subramanyam", "status": "admin_assigned", "statusLastChangedAt": "2025-01-06T14:54:14.540Z", "user": { "_id": "6303da377373aacccd868d0f", "avatarUrl": "/avatars/88813105f863a0b2125fbe8bfffdec0e.svg", "fullname": "Harihar Subramanyam", "isPro": false, "type": "user", "user": "Harihar" } }, { "_id": "6777591138f9a731d4f575c3", "hidden": false, "name": "Li Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6777591138f9a731d4f575c4", "hidden": false, "name": "Jianfa Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6777591138f9a731d4f575c5", "hidden": false, "name": "Nan Jiang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6777591138f9a731d4f575c6", "hidden": false, "name": "Lingjuan Lyu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6777591138f9a731d4f575c7", "hidden": false, "name": "Shiqing Ma", "status": "admin_assigned", "statusLastChangedAt": "2025-01-06T14:56:30.168Z", "user": { "_id": "644f91f317b6189cda55deea", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/gX6s1BFk_JRKq4_HSQIcU.png", "fullname": "Shiqing Ma", "isPro": false, "type": "user", "user": "Kuingsmile" } }, { "_id": "6777591138f9a731d4f575c8", "hidden": false, "name": "Dimitris N. Metaxas", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6777591138f9a731d4f575c9", "hidden": false, "name": "Ankit Jain", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-31T00:06:04
MLLM-as-a-Judge for Image Safety without Human Labeling
Image content safety has become a significant challenge with the rise of visual media on online platforms. Meanwhile, in the age of AI-generated content (AIGC), many image generation models are capable of producing harmful content, such as images containing sexual or violent material. It is therefore crucial to identify such unsafe images based on established safety rules. Pre-trained Multimodal Large Language Models (MLLMs) offer potential in this regard, given their strong pattern recognition abilities. Existing approaches typically fine-tune MLLMs with human-labeled datasets, which, however, brings a series of drawbacks. First, relying on human annotators to label data following intricate and detailed guidelines is both expensive and labor-intensive. Furthermore, users of safety judgment systems may need to update safety rules frequently, making fine-tuning on human annotations more challenging. This raises the research question: can we detect unsafe images by querying MLLMs in a zero-shot setting using a predefined safety constitution (a set of safety rules)? Our research shows that simply querying pre-trained MLLMs does not yield satisfactory results. This lack of effectiveness stems from factors such as the subjectivity of safety rules, the complexity of lengthy constitutions, and the inherent biases in the models. To address these challenges, we propose an MLLM-based method that includes objectifying safety rules, assessing the relevance between rules and images, making quick judgments based on debiased token probabilities with logically complete yet simplified precondition chains for safety rules, and conducting more in-depth reasoning with cascaded chain-of-thought processes when necessary. Experimental results demonstrate that our method is highly effective for zero-shot image safety judgment tasks.
25
6777591238f9a731d4f5762f
null
null
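The MLLM-as-a-Judge abstract mentions quick judgments based on debiased token probabilities. One plausible debiasing scheme, sketched below, subtracts the log-odds the model assigns on a content-free input from the log-odds on the real image; score_yes_no is a hypothetical stub standing in for an actual MLLM call.

```python
import math

def score_yes_no(image, rule) -> tuple[float, float]:
    """Hypothetical MLLM call: returns (p_yes, p_no) for 'Does this image
    violate the rule?'. Stubbed with fixed numbers for illustration."""
    return (0.7, 0.3) if image is not None else (0.6, 0.4)

def debiased_logit(image, rule) -> float:
    p_yes, p_no = score_yes_no(image, rule)
    logit = math.log(p_yes / p_no)
    # A content-free baseline estimates the model's prior bias toward "yes".
    b_yes, b_no = score_yes_no(None, rule)
    bias = math.log(b_yes / b_no)
    return logit - bias

# Judge unsafe iff the debiased log-odds favor "yes".
print(debiased_logit("img.png", "no depictions of violence") > 0.0)
```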
2025-01-02T22:04:45.023000
Reconstruction vs. Generation: Taming Optimization Dilemma in Latent Diffusion Models
https://cdn-thumbnails.h…s/2501.01423.png
2
{ "_id": "6375dfa7f9aafd41ce145254", "avatarUrl": "/avatars/19e7460dd7ea6c60e1c52d5707660cc8.svg", "followerCount": 1, "fullname": "Lunbin Zeng", "isHf": false, "isMod": false, "isPro": false, "name": "xiazhi", "type": "user" }
false
null
2501.01423
[ { "_id": "677753a28376dfe003a3fbd3", "hidden": false, "name": "Jingfeng Yao", "status": "admin_assigned", "statusLastChangedAt": "2025-01-06T14:17:09.787Z", "user": { "_id": "67756c9c846a267749304255", "avatarUrl": "/avatars/01f09805b561887c55d1b9ad4e96b461.svg", "fullname": "Jingfeng Yao", "isPro": false, "type": "user", "user": "MapleF9" } }, { "_id": "677753a28376dfe003a3fbd4", "hidden": false, "name": "Xinggang Wang", "status": "admin_assigned", "statusLastChangedAt": "2025-01-06T14:17:16.030Z", "user": { "_id": "62600de6d47e3dbae32ce1ce", "avatarUrl": "/avatars/a536417cfec6e10ac415091bd1829426.svg", "fullname": "Xinggang Wang", "isPro": false, "type": "user", "user": "xinggangw" } } ]
2025-01-02T18:59:40
Reconstruction vs. Generation: Taming Optimization Dilemma in Latent Diffusion Models
Latent diffusion models with Transformer architectures excel at generating high-fidelity images. However, recent studies reveal an optimization dilemma in this two-stage design: while increasing the per-token feature dimension in visual tokenizers improves reconstruction quality, it requires substantially larger diffusion models and more training iterations to achieve comparable generation performance. Consequently, existing systems often settle for sub-optimal solutions, either producing visual artifacts due to information loss within tokenizers or failing to converge fully due to expensive computation costs. We argue that this dilemma stems from the inherent difficulty in learning unconstrained high-dimensional latent spaces. To address this, we propose aligning the latent space with pre-trained vision foundation models when training the visual tokenizers. Our proposed VA-VAE (Vision foundation model Aligned Variational AutoEncoder) significantly expands the reconstruction-generation frontier of latent diffusion models, enabling faster convergence of Diffusion Transformers (DiT) in high-dimensional latent spaces. To exploit the full potential of VA-VAE, we build an enhanced DiT baseline with improved training strategies and architecture designs, termed LightningDiT. The integrated system achieves state-of-the-art (SOTA) performance on ImageNet 256x256 generation with an FID score of 1.35 while demonstrating remarkable training efficiency by reaching an FID score of 2.11 in just 64 epochs--representing an over 21 times convergence speedup compared to the original DiT. Models and codes are available at: https://github.com/hustvl/LightningDiT.
37
677753a38376dfe003a3fc2b
null
null
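The VA-VAE abstract proposes aligning the tokenizer's latent space with a pre-trained vision foundation model. A minimal sketch of one way such an alignment objective could look (projected cosine similarity against frozen features); the shapes, projection, and loss form are assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def alignment_loss(latents: torch.Tensor, vfm_feats: torch.Tensor,
                   proj: torch.nn.Linear) -> torch.Tensor:
    """Sketch: align tokenizer latents with frozen vision-foundation-model
    features via projected cosine similarity.
    latents:   (B, N, Cz) flattened VAE latents
    vfm_feats: (B, N, Cf) frozen features (e.g., from a DINO-style encoder)
    proj:      maps the latent channel dim Cz onto the VFM feature dim Cf
    """
    z = F.normalize(proj(latents), dim=-1)
    f = F.normalize(vfm_feats.detach(), dim=-1)   # foundation model is frozen
    return (1.0 - (z * f).sum(-1)).mean()          # 1 - cosine similarity

proj = torch.nn.Linear(32, 768)
loss = alignment_loss(torch.randn(2, 256, 32), torch.randn(2, 256, 768), proj)
print(loss.item())
```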
2025-01-02T21:58:41.673000
ProgCo: Program Helps Self-Correction of Large Language Models
https://cdn-thumbnails.h…s/2501.01264.png
2
{ "_id": "61cd4b833dd34ba1985e0753", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/61cd4b833dd34ba1985e0753/BfHfrwotoMESpXZOHiIe4.png", "followerCount": 17, "fullname": "KABI", "isHf": false, "isMod": false, "isPro": false, "name": "dongguanting", "type": "user" }
false
null
2501.01264
[ { "_id": "677751f23308d0c478a26abe", "hidden": false, "name": "Xiaoshuai Song", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677751f23308d0c478a26abf", "hidden": false, "name": "Yanan Wu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677751f23308d0c478a26ac0", "hidden": false, "name": "Weixun Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677751f23308d0c478a26ac1", "hidden": false, "name": "Jiaheng Liu", "status": "claimed_verified", "statusLastChangedAt": "2025-01-05T23:02:58.578Z", "user": { "_id": "65377c30e48353201e6fdda0", "avatarUrl": "/avatars/a8f803b6f2e598eaee9c52c0d2ddfc16.svg", "fullname": "Jiaheng Liu", "isPro": false, "type": "user", "user": "CheeryLJH" } }, { "_id": "677751f23308d0c478a26ac2", "hidden": false, "name": "Wenbo Su", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677751f23308d0c478a26ac3", "hidden": false, "name": "Bo Zheng", "status": null, "statusLastChangedAt": null, "user": null } ]
2025-01-02T13:59:20
ProgCo: Program Helps Self-Correction of Large Language Models
Self-correction aims to enable large language models (LLMs) to self-verify and self-refine their initial responses without external feedback. However, LLMs often fail to self-verify effectively and generate correct feedback, which further misleads refinement and leads to the failure of self-correction, especially on complex reasoning tasks. In this paper, we propose Program-driven Self-Correction (ProgCo). First, program-driven verification (ProgVe) achieves complex verification logic and extensive validation through self-generated, self-executing verification pseudo-programs. Then, program-driven refinement (ProgRe) receives feedback from ProgVe and conducts dual reflection and refinement on both responses and verification programs, mitigating the misleading effects of incorrect feedback on complex reasoning tasks. Experiments on three instruction-following and mathematical benchmarks indicate that ProgCo achieves effective self-correction and can further enhance performance when combined with real program tools.
25
677751f33308d0c478a26b14
null
null
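ProgVe, per the abstract above, replaces free-form self-critique with a verification program the model writes and executes over its own answer. The sketch below uses a hand-written Python checker for a toy instruction-following task to make the idea concrete; in ProgCo the verification pseudo-program is LLM-generated and LLM-executed.

```python
# Sketch of program-driven verification (ProgVe-style): the verifier is a
# small program over the response rather than a free-form critique. Here the
# checker is hand-written Python; in ProgCo it would be LLM-generated.
def verify(response: str) -> list[str]:
    """Check: 'Write exactly 3 bullet points, each under 10 words.'"""
    errors = []
    bullets = [l for l in response.splitlines() if l.strip().startswith("-")]
    if len(bullets) != 3:
        errors.append(f"expected 3 bullets, got {len(bullets)}")
    for b in bullets:
        if len(b.split()) > 10:
            errors.append(f"bullet too long: {b!r}")
    return errors

draft = "- fast\n- cheap\n- good\n- reliable"
feedback = verify(draft)
print(feedback)  # ['expected 3 bullets, got 4'] -> drives ProgRe-style refinement
```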
2025-01-02T20:53:29.611000
Are Vision-Language Models Truly Understanding Multi-vision Sensor?
https://cdn-thumbnails.h…s/2412.20750.png
2
{ "_id": "65a4bf8e90b5e87bcdff41c7", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/65a4bf8e90b5e87bcdff41c7/I6OSoZigV7Fl6OKRl_Yqy.jpeg", "followerCount": 5, "fullname": "Sangyun Chung", "isHf": false, "isMod": false, "isPro": false, "name": "topyun", "type": "user" }
true
null
2412.20750
[ { "_id": "67742e073adba6b02dd84e3e", "hidden": false, "name": "Sangyun Chung", "status": "claimed_verified", "statusLastChangedAt": "2025-01-03T14:05:03.795Z", "user": { "_id": "65a4bf8e90b5e87bcdff41c7", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/65a4bf8e90b5e87bcdff41c7/I6OSoZigV7Fl6OKRl_Yqy.jpeg", "fullname": "Sangyun Chung", "isPro": false, "type": "user", "user": "topyun" } }, { "_id": "67742e073adba6b02dd84e3f", "hidden": false, "name": "Youngjoon Yu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67742e073adba6b02dd84e40", "hidden": false, "name": "Youngchae Chee", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67742e073adba6b02dd84e41", "hidden": false, "name": "Se Yeon Kim", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67742e073adba6b02dd84e42", "hidden": false, "name": "Byung-Kwan Lee", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67742e073adba6b02dd84e43", "hidden": false, "name": "Yong Man Ro", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-30T06:44:25
Are Vision-Language Models Truly Understanding Multi-vision Sensor?
Large-scale Vision-Language Models (VLMs) have advanced by aligning vision inputs with text, significantly improving performance on computer vision tasks. Moreover, for VLMs to be utilized effectively in real-world applications, an understanding of diverse multi-vision sensor data, such as thermal, depth, and X-ray information, is essential. However, we find that current VLMs process multi-vision sensor images without a deep understanding of the sensor information, disregarding each sensor's unique physical properties. This limitation restricts their capacity to interpret and respond to complex questions requiring multi-vision sensor reasoning. To address this, we propose a novel Multi-vision Sensor Perception and Reasoning (MS-PR) benchmark, which assesses VLMs on their capacity for sensor-specific reasoning. Moreover, we introduce Diverse Negative Attributes (DNA) optimization to enable VLMs to perform deep reasoning on multi-vision sensor tasks, helping to bridge the core information gap between images and sensor data. Extensive experimental results validate that the proposed DNA method can significantly improve multi-vision sensor reasoning in VLMs.
20
67742e083adba6b02dd84e8b
null
null
2025-01-02T18:48:09.282000
VMix: Improving Text-to-Image Diffusion Model with Cross-Attention Mixing Control
https://cdn-thumbnails.h…s/2412.20800.png
2
{ "_id": "660114b38ae190912a61be5d", "avatarUrl": "/avatars/abc4ab10d6f9769d2b5e697ccbf3fb70.svg", "followerCount": 1, "fullname": "ShaojinWu", "isHf": false, "isMod": false, "isPro": false, "name": "fenfan", "type": "user" }
true
null
2412.20800
[ { "_id": "677365f3ed53dd3b007d7a97", "hidden": false, "name": "Shaojin Wu", "status": "admin_assigned", "statusLastChangedAt": "2025-01-02T19:45:00.373Z", "user": { "_id": "660114b38ae190912a61be5d", "avatarUrl": "/avatars/abc4ab10d6f9769d2b5e697ccbf3fb70.svg", "fullname": "ShaojinWu", "isPro": false, "type": "user", "user": "fenfan" } }, { "_id": "677365f3ed53dd3b007d7a98", "hidden": false, "name": "Fei Ding", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677365f3ed53dd3b007d7a99", "hidden": false, "name": "Mengqi Huang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677365f3ed53dd3b007d7a9a", "hidden": false, "name": "Wei Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677365f3ed53dd3b007d7a9b", "hidden": false, "name": "Qian He", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-30T08:47:25
VMix: Improving Text-to-Image Diffusion Model with Cross-Attention Mixing Control
While diffusion models show extraordinary talents in text-to-image generation, they may still fail to generate highly aesthetic images. More specifically, there is still a gap between the generated images and the real-world aesthetic images in finer-grained dimensions including color, lighting, composition, etc. In this paper, we propose Cross-Attention Value Mixing Control (VMix) Adapter, a plug-and-play aesthetics adapter, to upgrade the quality of generated images while maintaining generality across visual concepts by (1) disentangling the input text prompt into the content description and aesthetic description by the initialization of aesthetic embedding, and (2) integrating aesthetic conditions into the denoising process through value-mixed cross-attention, with the network connected by zero-initialized linear layers. Our key insight is to enhance the aesthetic presentation of existing diffusion models by designing a superior condition control method, all while preserving the image-text alignment. Through our meticulous design, VMix is flexible enough to be applied to community models for better visual performance without retraining. To validate the effectiveness of our method, we conducted extensive experiments, showing that VMix outperforms other state-of-the-art methods and is compatible with other community modules (e.g., LoRA, ControlNet, and IPAdapter) for image generation. The project page is https://vmix-diffusion.github.io/VMix/.
10
677365f6ed53dd3b007d7b13
null
null
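The VMix abstract describes value-mixed cross-attention with zero-initialized linear layers. A minimal sketch of how such a branch could be wired, so that at initialization the adapter is a no-op and the base model's behavior is preserved; dimensions and wiring are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ValueMixedCrossAttention(nn.Module):
    """Sketch of VMix-style value mixing: standard cross-attention on the
    content text, plus an aesthetic-condition value stream added through a
    zero-initialized linear layer. Shapes are illustrative assumptions."""
    def __init__(self, dim: int = 320):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.to_v_aes = nn.Linear(dim, dim)
        self.zero_out = nn.Linear(dim, dim)
        nn.init.zeros_(self.zero_out.weight)   # zero-init: no-op at step 0
        nn.init.zeros_(self.zero_out.bias)

    def forward(self, x, content_emb, aes_emb):
        q, k = self.to_q(x), self.to_k(content_emb)
        attn = F.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
        out = attn @ self.to_v(content_emb)          # ordinary content branch
        out_aes = attn @ self.to_v_aes(aes_emb)      # value-mixed aesthetic branch
        return out + self.zero_out(out_aes)

m = ValueMixedCrossAttention()
y = m(torch.randn(1, 64, 320), torch.randn(1, 77, 320), torch.randn(1, 77, 320))
print(y.shape)  # torch.Size([1, 64, 320])
```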
2025-01-02T11:26:22.995000
HUNYUANPROVER: A Scalable Data Synthesis Framework and Guided Tree Search for Automated Theorem Proving
https://cdn-thumbnails.h…s/2412.20735.png
2
{ "_id": "64c94eddcb2f1bf0e7db5a4d", "avatarUrl": "/avatars/f7e2532d3c85d5e5b5a02c579ea68c3a.svg", "followerCount": null, "fullname": "Linfeng Song", "isHf": false, "isMod": false, "isPro": false, "name": "freesunshine0316", "type": "user" }
false
null
2412.20735
[ { "_id": "6776bdde4f9262d263382cf3", "hidden": false, "name": "Yang Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6776bdde4f9262d263382cf4", "hidden": false, "name": "Dong Du", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6776bdde4f9262d263382cf5", "hidden": false, "name": "Linfeng Song", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6776bdde4f9262d263382cf6", "hidden": false, "name": "Chen Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6776bdde4f9262d263382cf7", "hidden": false, "name": "Weikang Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6776bdde4f9262d263382cf8", "hidden": false, "name": "Tao Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6776bdde4f9262d263382cf9", "hidden": false, "name": "Haitao Mi", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-30T06:18:33
HUNYUANPROVER: A Scalable Data Synthesis Framework and Guided Tree Search for Automated Theorem Proving
We introduce HunyuanProver, a language model fine-tuned from Hunyuan 7B for interactive automated theorem proving with LEAN4. To alleviate the data sparsity issue, we design a scalable framework to iteratively synthesize data at low cost. In addition, guided tree search algorithms are designed to enable effective "system 2 thinking" in the prover. HunyuanProver achieves state-of-the-art (SOTA) performance on major benchmarks. Specifically, it achieves a pass rate of 68.4% on miniF2F-test, compared to the previous SOTA of 65.9%. It proves 4 IMO statements (imo_1960_p2, imo_1962_p2, imo_1964_p2 and imo_1983_p6) in miniF2F-test. To benefit the community, we will open-source a dataset of 30k synthesized instances, where each instance contains the original question in natural language, the statement converted by autoformalization, and the proof by HunyuanProver.
11
6776bddf4f9262d263382d20
null
null
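The guided tree search mentioned in the HunyuanProver abstract can be pictured as critic-guided best-first search over proof states. A toy sketch with stubbed tactics and critic scores standing in for the LEAN4 environment and the learned models:

```python
# Sketch of critic-guided best-first proof search ("system 2 thinking"):
# expand the most promising proof state first, scored by a learned critic.
# apply_tactic and the scores are toy stubs standing in for LEAN4 + models.
import heapq, itertools

def apply_tactic(state: str, tactic: str) -> str | None:
    goals = {"goal": {"intro": "goal'", "simp": None}, "goal'": {"exact": "done"}}
    return goals.get(state, {}).get(tactic)

def critic(state: str) -> float:          # higher = closer to a full proof
    return {"goal": 0.3, "goal'": 0.8, "done": 1.0}.get(state, 0.0)

counter = itertools.count()               # tie-breaker for the heap
frontier = [(-critic("goal"), next(counter), "goal", [])]
while frontier:
    _, _, state, proof = heapq.heappop(frontier)
    if state == "done":
        print("proof found:", proof)       # ['intro', 'exact']
        break
    for tactic in ("intro", "simp", "exact"):   # policy proposals (stubbed)
        nxt = apply_tactic(state, tactic)
        if nxt is not None:
            heapq.heappush(frontier,
                           (-critic(nxt), next(counter), nxt, proof + [tactic]))
```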
2025-01-02T02:22:25.585000
OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis
https://cdn-thumbnails.h…s/2412.19723.png
3
{ "_id": "6064a0eeb1703ddba0d458b9", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1617207525789-noauth.png", "followerCount": 12, "fullname": "Qiushi", "isHf": false, "isMod": false, "isPro": false, "name": "QiushiSun", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/6064a0eeb1703ddba0d458b9/1GH_UkP9xJhaRBAnf4bk5.gif", "https://cdn-uploads.huggingface.co/production/uploads/6064a0eeb1703ddba0d458b9/Si3uV-4vAYqmmX7mDjhIv.gif" ]
2412.19723
[ { "_id": "67720d79b3163a95a653baaf", "hidden": false, "name": "Qiushi Sun", "status": "claimed_verified", "statusLastChangedAt": "2025-01-02T10:19:16.620Z", "user": { "_id": "6064a0eeb1703ddba0d458b9", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1617207525789-noauth.png", "fullname": "Qiushi", "isPro": false, "type": "user", "user": "QiushiSun" } }, { "_id": "67720d79b3163a95a653bab0", "hidden": false, "name": "Kanzhi Cheng", "status": "claimed_verified", "statusLastChangedAt": "2025-01-07T08:42:48.096Z", "user": { "_id": "63340dbbd92c5842ae71d1e9", "avatarUrl": "/avatars/3a3182996bd41b526dcbfa8687d91963.svg", "fullname": "Kanzhi Cheng", "isPro": false, "type": "user", "user": "cckevinn" } }, { "_id": "67720d79b3163a95a653bab1", "hidden": false, "name": "Zichen Ding", "status": "claimed_verified", "statusLastChangedAt": "2025-01-02T10:19:18.679Z", "user": { "_id": "642b9861bb77f8456634b048", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/642b9861bb77f8456634b048/ZT-oJrw5BsADC-gZT_i25.jpeg", "fullname": "Zichen Ding", "isPro": false, "type": "user", "user": "heroding77" } }, { "_id": "67720d79b3163a95a653bab2", "hidden": false, "name": "Chuanyang Jin", "status": "claimed_verified", "statusLastChangedAt": "2025-01-03T14:05:28.148Z", "user": { "_id": "64beb69801f1983a86a05de2", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64beb69801f1983a86a05de2/tFyCoqZ6gT8NWkZfuncID.jpeg", "fullname": "Chuanyang Jin", "isPro": false, "type": "user", "user": "Chuanyang-Jin" } }, { "_id": "67720d79b3163a95a653bab3", "hidden": false, "name": "Yian Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67720d79b3163a95a653bab4", "hidden": false, "name": "Fangzhi Xu", "status": "claimed_verified", "statusLastChangedAt": "2025-01-03T14:05:20.429Z", "user": { "_id": "64e6cf78ecce34cb442dc889", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64e6cf78ecce34cb442dc889/qVZFiUEpBpSkmH8SQeinm.jpeg", "fullname": "Fangzhi Xu", "isPro": false, "type": "user", "user": "xufangzhi" } }, { "_id": "67720d79b3163a95a653bab5", "hidden": false, "name": "Zhenyu Wu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67720d79b3163a95a653bab6", "hidden": false, "name": "Chengyou Jia", "status": "claimed_verified", "statusLastChangedAt": "2025-01-03T14:05:30.296Z", "user": { "_id": "6602548a68d519ed324b47c5", "avatarUrl": "/avatars/5ab411f87440cc2a98c7a1c6a3ed5548.svg", "fullname": "ChengyouJia", "isPro": false, "type": "user", "user": "ChengyouJia" } }, { "_id": "67720d79b3163a95a653bab7", "hidden": false, "name": "Liheng Chen", "status": "claimed_verified", "statusLastChangedAt": "2025-01-03T14:05:32.287Z", "user": { "_id": "6561824484a9fbe322b9abc3", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6561824484a9fbe322b9abc3/omS3D6PzGBD7kc4S7bOIO.png", "fullname": "LIHENG CHEN", "isPro": false, "type": "user", "user": "Lemaqwq" } }, { "_id": "67720d79b3163a95a653bab8", "hidden": false, "name": "Zhoumianze Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67720d79b3163a95a653bab9", "hidden": false, "name": "Ben Kao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67720d79b3163a95a653baba", "hidden": false, "name": "Guohao Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67720d79b3163a95a653babb", "hidden": false, "name": "Junxian He", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": 
"67720d79b3163a95a653babc", "hidden": false, "name": "Yu Qiao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67720d79b3163a95a653babd", "hidden": false, "name": "Zhiyong Wu", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-27T16:21:58
OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis
Graphical User Interface (GUI) agents powered by Vision-Language Models (VLMs) have demonstrated human-like computer control capability. Despite their utility in advancing digital automation, a critical bottleneck persists: collecting high-quality trajectory data for training. Common practices for collecting such data rely on human supervision or synthetic data generation through executing pre-defined tasks, which are either resource-intensive or unable to guarantee data quality. Moreover, these methods suffer from limited data diversity and significant gaps between synthetic data and real-world environments. To address these challenges, we propose OS-Genesis, a novel GUI data synthesis pipeline that reverses the conventional trajectory collection process. Instead of relying on pre-defined tasks, OS-Genesis enables agents first to perceive environments and perform step-wise interactions, then retrospectively derive high-quality tasks to enable trajectory-level exploration. A trajectory reward model is then employed to ensure the quality of the generated trajectories. We demonstrate that training GUI agents with OS-Genesis significantly improves their performance on highly challenging online benchmarks. In-depth analysis further validates OS-Genesis's efficiency and its superior data quality and diversity compared to existing synthesis methods. Our codes, data, and checkpoints are available at the OS-Genesis homepage: https://qiushisun.github.io/OS-Genesis-Home/.
82
67720d7bb3163a95a653bb18
https://qiushisun.github.io/OS-Genesis-Home/
https://github.com/OS-Copilot/OS-Genesis
2025-01-02T00:31:04.921000
Xmodel-2 Technical Report
https://cdn-thumbnails.h…s/2412.19638.png
4
{ "_id": "647adfd0e3a7d24c8e46d7d1", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/647adfd0e3a7d24c8e46d7d1/UqZegkj3P2XyPe4Lgnrt7.jpeg", "followerCount": 1, "fullname": "ValeriaWong", "isHf": false, "isMod": false, "isPro": false, "name": "valeriaWong", "type": "user" }
true
null
2412.19638
[ { "_id": "677223b635722632fcc63ff5", "hidden": false, "name": "Wang Qun", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:29:25.463Z", "user": { "_id": "647adfd0e3a7d24c8e46d7d1", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/647adfd0e3a7d24c8e46d7d1/UqZegkj3P2XyPe4Lgnrt7.jpeg", "fullname": "ValeriaWong", "isPro": false, "type": "user", "user": "valeriaWong" } }, { "_id": "677223b635722632fcc63ff6", "hidden": false, "name": "Liu Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677223b635722632fcc63ff7", "hidden": false, "name": "Lin Qingquan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677223b635722632fcc63ff8", "hidden": false, "name": "Qu Zhijiu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677223b635722632fcc63ff9", "hidden": false, "name": "Jiang Ling", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-27T13:32:10
Xmodel-2 Technical Report
Xmodel-2 is a 1.2-billion-parameter large language model designed specifically for reasoning tasks. Its architecture enables different model scales to share a unified set of hyperparameters, allowing for extensive experimentation on smaller models and seamless transfer of optimal configurations to larger models. To maximize training efficiency and stability, Xmodel-2 employs the WSD learning rate scheduler from MiniCPM. Pretrained on 1.5 trillion tokens from diverse sources, Xmodel-2 achieves state-of-the-art performance in complex reasoning and agent-based tasks, while maintaining low training costs. These results highlight the potential of efficient model design and training strategies in advancing reasoning capabilities. Model checkpoints and code are publicly available on GitHub at https://github.com/XiaoduoAILab/Xmodel-2
26
677223b735722632fcc64063
null
null
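The Xmodel-2 report credits its training stability to the WSD learning rate scheduler from MiniCPM: a linear warmup, a long stable plateau at the peak rate, then a final decay. A self-contained sketch, with the phase fractions chosen as illustrative assumptions:

```python
# Sketch of a Warmup-Stable-Decay (WSD) learning-rate schedule, the scheduler
# family the Xmodel-2 report credits to MiniCPM: linear warmup, a long flat
# plateau at peak LR, then a final decay. Fractions here are illustrative.
def wsd_lr(step: int, total: int, peak: float = 1e-3,
           warmup_frac: float = 0.01, decay_frac: float = 0.1) -> float:
    warmup = int(total * warmup_frac)
    decay_start = int(total * (1.0 - decay_frac))
    if step < warmup:                       # warmup: linear ramp to peak
        return peak * step / max(warmup, 1)
    if step < decay_start:                  # stable: hold the peak LR
        return peak
    # decay: linear anneal to (near) zero over the final fraction
    frac = (step - decay_start) / max(total - decay_start, 1)
    return peak * (1.0 - frac)

for s in (0, 500, 50_000, 99_999):
    print(s, wsd_lr(s, total=100_000))
```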
2024-12-31T08:54:02.196000
Do NOT Think That Much for 2+3=? On the Overthinking of o1-Like LLMs
https://cdn-thumbnails.h…s/2412.21187.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2412.21187
[ { "_id": "6773f75a23a7829936cb36bd", "hidden": false, "name": "Xingyu Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773f75a23a7829936cb36be", "hidden": false, "name": "Jiahao Xu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773f75a23a7829936cb36bf", "hidden": false, "name": "Tian Liang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773f75a23a7829936cb36c0", "hidden": false, "name": "Zhiwei He", "status": "claimed_verified", "statusLastChangedAt": "2025-01-01T20:12:33.739Z", "user": { "_id": "638439ca834d3558a398d035", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1669609868550-noauth.png", "fullname": "Zhiwei He", "isPro": false, "type": "user", "user": "zwhe99" } }, { "_id": "6773f75a23a7829936cb36c1", "hidden": false, "name": "Jianhui Pang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773f75a23a7829936cb36c2", "hidden": false, "name": "Dian Yu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773f75a23a7829936cb36c3", "hidden": false, "name": "Linfeng Song", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773f75a23a7829936cb36c4", "hidden": false, "name": "Qiuzhi Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773f75a23a7829936cb36c5", "hidden": false, "name": "Mengfei Zhou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773f75a23a7829936cb36c6", "hidden": false, "name": "Zhuosheng Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773f75a23a7829936cb36c7", "hidden": false, "name": "Rui Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773f75a23a7829936cb36c8", "hidden": false, "name": "Zhaopeng Tu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773f75a23a7829936cb36c9", "hidden": false, "name": "Haitao Mi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773f75a23a7829936cb36ca", "hidden": false, "name": "Dong Yu", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-30T18:55:12
Do NOT Think That Much for 2+3=? On the Overthinking of o1-Like LLMs
The remarkable performance of models like OpenAI o1 can be attributed to their ability to emulate human-like long-time thinking during inference. These models employ extended chain-of-thought (CoT) processes, exploring multiple strategies to enhance problem-solving capabilities. However, a critical question remains: how can computational resources be scaled intelligently and efficiently at test time? This paper presents the first comprehensive study on the prevalent issue of overthinking in these models, where excessive computational resources are allocated to simple problems with minimal benefit. We introduce novel efficiency metrics from both outcome and process perspectives to evaluate the rational use of computational resources by o1-like models. Using a self-training paradigm, we propose strategies to mitigate overthinking, streamlining reasoning processes without compromising accuracy. Experimental results show that our approach successfully reduces computational overhead while preserving model performance across a range of test sets with varying difficulty levels, such as GSM8K, MATH500, GPQA, and AIME.
40
6773f75b23a7829936cb3729
null
null
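The abstract above introduces efficiency metrics from outcome and process perspectives. One plausible reading of an outcome-side metric, sketched below, is the fraction of generated tokens spent up to the first correct solution; the paper's exact definitions may differ.

```python
# Sketch of an outcome-style efficiency metric in the spirit of the paper:
# what fraction of the generated tokens was actually needed to reach the
# first correct answer? (Illustrative reading, not the paper's exact metric.)
def outcome_efficiency(solution_spans: list[tuple[int, bool]]) -> float:
    """solution_spans: (token_count, is_correct) for each sequential
    solution attempt inside one long chain-of-thought response."""
    total = sum(n for n, _ in solution_spans)
    used = 0
    for n, correct in solution_spans:
        used += n
        if correct:                       # tokens up to first correct answer
            return used / total
    return 0.0                            # never correct: no useful outcome

# o1-like response to "2+3=?": correct in round 1, then 3 redundant re-checks.
print(outcome_efficiency([(40, True), (120, True), (150, True), (200, True)]))
# ~0.078 -> roughly 92% of the tokens brought no outcome benefit
```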
2024-12-31T08:45:57.560000
Slow Perception: Let's Perceive Geometric Figures Step-by-step
https://cdn-thumbnails.h…s/2412.20631.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2412.20631
[ { "_id": "6773f578fb77c86d80d4c9fc", "hidden": false, "name": "Haoran Wei", "status": "claimed_verified", "statusLastChangedAt": "2025-02-03T08:16:20.719Z", "user": { "_id": "6436618aeef1f55654a9f458", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6436618aeef1f55654a9f458/OvxGtuDg2GAFG9As-2hzW.jpeg", "fullname": "Haoran Wei", "isPro": false, "type": "user", "user": "HaoranWei" } }, { "_id": "6773f578fb77c86d80d4c9fd", "hidden": false, "name": "Youyang Yin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773f578fb77c86d80d4c9fe", "hidden": false, "name": "Yumeng Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773f578fb77c86d80d4c9ff", "hidden": false, "name": "Jia Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773f578fb77c86d80d4ca00", "hidden": false, "name": "Liang Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773f578fb77c86d80d4ca01", "hidden": false, "name": "Jianjian Sun", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773f578fb77c86d80d4ca02", "hidden": false, "name": "Zheng Ge", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773f578fb77c86d80d4ca03", "hidden": false, "name": "Xiangyu Zhang", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-30T00:40:35
Slow Perception: Let's Perceive Geometric Figures Step-by-step
Recently, "visual o1" began to enter people's vision, with expectations that this slow-thinking design can solve visual reasoning tasks, especially geometric math problems. However, the reality is that current LVLMs (Large Vision Language Models) can hardly even accurately copy a geometric figure, let alone truly understand the complex inherent logic and spatial relationships within geometric shapes. We believe accurate copying (strong perception) is the first step to visual o1. Accordingly, we introduce the concept of "slow perception" (SP), which guides the model to gradually perceive basic point-line combinations, as our humans, reconstruct complex geometric structures progressively. There are two-fold stages in SP: a) perception decomposition. Perception is not instantaneous. In this stage, complex geometric figures are broken down into basic simple units to unify geometry representation. b) perception flow, which acknowledges that accurately tracing a line is not an easy task. This stage aims to avoid "long visual jumps" in regressing line segments by using a proposed "perceptual ruler" to trace each line stroke-by-stroke. Surprisingly, such a human-like perception manner enjoys an inference time scaling law -- the slower, the better. Researchers strive to speed up the model's perception in the past, but we slow it down again, allowing the model to read the image step-by-step and carefully.
15
6773f579fb77c86d80d4ca4d
null
null
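The "perceptual ruler" in the Slow Perception abstract caps how far the model may trace in a single step, replacing one long visual jump with several short strokes. The geometry of that decomposition is easy to make concrete (the model-side stroke prediction is omitted):

```python
# Sketch of the "perceptual ruler" idea: instead of regressing a line segment
# end-to-end (one "long visual jump"), trace it stroke-by-stroke in steps of
# at most `ruler` length. Pure geometry; the model prediction step is omitted.
import math

def trace_segment(p0, p1, ruler: float):
    (x0, y0), (x1, y1) = p0, p1
    length = math.hypot(x1 - x0, y1 - y0)
    n = max(1, math.ceil(length / ruler))      # number of strokes
    points = [(x0 + (x1 - x0) * i / n, y0 + (y1 - y0) * i / n)
              for i in range(n + 1)]
    return list(zip(points[:-1], points[1:]))  # consecutive stroke endpoints

strokes = trace_segment((0, 0), (10, 0), ruler=4.0)
print(len(strokes), strokes)  # 3 strokes, each of length <= 4
```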
2024-12-31T08:39:13.913000
PERSE: Personalized 3D Generative Avatars from A Single Portrait
https://cdn-thumbnails.h…s/2412.21206.png
3
{ "_id": "62414e2b585605a4079c2f38", "avatarUrl": "/avatars/db1dc5dd2164b7ecbd789104329296bd.svg", "followerCount": null, "fullname": "Hyunsoo Cha", "isHf": false, "isMod": false, "isPro": false, "name": "HyunsooCha", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/62414e2b585605a4079c2f38/fvMRw70bYO-xBRQTYBFOO.mp4" ]
2412.21206
[ { "_id": "67735ba473932c3aa94fe0ac", "hidden": false, "name": "Hyunsoo Cha", "status": "claimed_verified", "statusLastChangedAt": "2025-01-01T20:13:11.033Z", "user": { "_id": "62414e2b585605a4079c2f38", "avatarUrl": "/avatars/db1dc5dd2164b7ecbd789104329296bd.svg", "fullname": "Hyunsoo Cha", "isPro": false, "type": "user", "user": "HyunsooCha" } }, { "_id": "67735ba473932c3aa94fe0ad", "hidden": false, "name": "Inhee Lee", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67735ba473932c3aa94fe0ae", "hidden": false, "name": "Hanbyul Joo", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-30T18:59:58
PERSE: Personalized 3D Generative Avatars from A Single Portrait
We present PERSE, a method for building an animatable personalized generative avatar from a reference portrait. Our avatar model enables facial attribute editing in a continuous and disentangled latent space to control each facial attribute, while preserving the individual's identity. To achieve this, our method begins by synthesizing large-scale synthetic 2D video datasets, where each video contains consistent changes in facial expression and viewpoint, combined with a variation in a specific facial attribute from the original input. We propose a novel pipeline to produce high-quality, photorealistic 2D videos with facial attribute editing. Leveraging this synthetic attribute dataset, we present a personalized avatar creation method based on 3D Gaussian Splatting, learning a continuous and disentangled latent space for intuitive facial attribute manipulation. To enforce smooth transitions in this latent space, we introduce a latent space regularization technique that uses interpolated 2D faces as supervision. Compared to previous approaches, we demonstrate that PERSE generates high-quality avatars with interpolated attributes while preserving the identity of the reference person.
18
67735ba873932c3aa94fe15b
null
null
2024-12-31T05:22:56.879000
Facilitating large language model Russian adaptation with Learned Embedding Propagation
https://cdn-thumbnails.h…s/2412.21140.png
2
{ "_id": "652cedbdf120598322ae358a", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/652cedbdf120598322ae358a/RrxrP0gtQus4SfNwfyAg_.jpeg", "followerCount": 65, "fullname": "Mikhail", "isHf": false, "isMod": false, "isPro": false, "name": "RefalMachine", "type": "user" }
true
null
2412.21140
[ { "_id": "6773c24cb272a4f186ec613c", "hidden": false, "name": "Mikhail Tikhomirov", "status": "extracted_pending", "statusLastChangedAt": "2024-12-31T10:07:09.658Z", "user": { "_id": "652cedbdf120598322ae358a", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/652cedbdf120598322ae358a/RrxrP0gtQus4SfNwfyAg_.jpeg", "fullname": "Mikhail", "isPro": false, "type": "user", "user": "RefalMachine" } }, { "_id": "6773c24cb272a4f186ec613d", "hidden": false, "name": "Daniil Chernyshev", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-30T18:15:45
Facilitating large language model Russian adaptation with Learned Embedding Propagation
Rapid advancements in large language model (LLM) technology have led to the introduction of powerful open-source instruction-tuned LLMs with the same text generation quality as state-of-the-art counterparts such as GPT-4. While the emergence of such models accelerates the adoption of LLM technologies in sensitive-information environments, their authors do not disclose the training data necessary to replicate the results, making the achievements model-exclusive. Since these open-source models are also multilingual, this in turn reduces the benefits of training language-specific LLMs, as improved inference efficiency becomes the only guaranteed advantage of such a costly procedure. More cost-efficient options, such as vocabulary extension and subsequent continued pre-training, are also inhibited by the lack of access to high-quality instruction-tuning data, which is the major factor behind the resulting LLM's task-solving capabilities. To address these limitations and cut the costs of the language adaptation pipeline, we propose Learned Embedding Propagation (LEP). Unlike existing approaches, our method requires less training data because it has minimal impact on existing LLM knowledge, which we reinforce with a novel ad-hoc embedding propagation procedure that skips the instruction-tuning step and instead implants the new language knowledge directly into any existing instruction-tuned variant. We evaluated four Russian vocabulary adaptations for LLaMa-3-8B and Mistral-7B, showing that LEP is competitive with traditional instruction-tuning methods, achieving performance comparable to OpenChat 3.5 and LLaMa-3-8B-Instruct, with further improvements via self-calibration and continued tuning enhancing task-solving capabilities.
18
6773c24db272a4f186ec6190
null
null
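The abstract above implants new-language knowledge through embedding propagation rather than renewed instruction tuning. A minimal sketch of one plausible mechanic (initializing each new token's embedding from the base embeddings of the old-tokenizer pieces it replaces, then carrying the extended matrix over to the instruction-tuned variant); the token ids and procedure details are illustrative, not the paper's exact LEP algorithm.

```python
import numpy as np

# Sketch of embedding propagation for vocabulary extension: each new token's
# embedding is the mean of the base model's embeddings for the old-tokenizer
# pieces it replaces; the extended matrix can then be implanted into an
# instruction-tuned checkpoint that shares the base transformer weights.
rng = np.random.default_rng(0)
base_emb = rng.normal(size=(100, 16))          # old vocab: 100 tokens, dim 16

new_tokens = {                                  # new token -> old-token ids
    "при": [11, 42],                            # (hypothetical pieces)
    "вет": [7, 3, 58],
}

new_rows = np.stack([base_emb[ids].mean(axis=0) for ids in new_tokens.values()])
extended_emb = np.concatenate([base_emb, new_rows], axis=0)   # (102, 16)

# "Implant": copy the extended embeddings into the instruct-tuned variant,
# leaving its transformer weights untouched.
print(extended_emb.shape)
```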
2024-12-31T01:26:05.226000
HumanEval Pro and MBPP Pro: Evaluating Large Language Models on Self-invoking Code Generation
https://cdn-thumbnails.h…s/2412.21199.png
3
{ "_id": "646f3443c261dc413383b8a4", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/646f3443c261dc413383b8a4/hEJd8wLyR5HTdMzApaloN.png", "followerCount": 6, "fullname": "Zhaojian Yu", "isHf": false, "isMod": false, "isPro": false, "name": "zjy2001", "type": "user" }
true
null
2412.21199
[ { "_id": "67738e22b8df35b9c86ace86", "hidden": false, "name": "Zhaojian Yu", "status": "claimed_verified", "statusLastChangedAt": "2025-01-01T20:12:35.965Z", "user": { "_id": "646f3443c261dc413383b8a4", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/646f3443c261dc413383b8a4/hEJd8wLyR5HTdMzApaloN.png", "fullname": "Zhaojian Yu", "isPro": false, "type": "user", "user": "zjy2001" } }, { "_id": "67738e22b8df35b9c86ace87", "hidden": false, "name": "Yilun Zhao", "status": "claimed_verified", "statusLastChangedAt": "2025-01-22T20:49:57.088Z", "user": { "_id": "62f662bcc58915315c4eccea", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62f662bcc58915315c4eccea/zOAQLONfMP88zr70sxHK-.jpeg", "fullname": "Yilun", "isPro": true, "type": "user", "user": "yilunzhao" } }, { "_id": "67738e22b8df35b9c86ace88", "hidden": false, "name": "Arman Cohan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67738e22b8df35b9c86ace89", "hidden": false, "name": "Xiao-Ping Zhang", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-30T18:58:58
HumanEval Pro and MBPP Pro: Evaluating Large Language Models on Self-invoking Code Generation
We introduce self-invoking code generation, a new task designed to evaluate the progressive reasoning and problem-solving capabilities of LLMs. In this task, models are presented with a base problem and a related, more complex problem. They must solve the base problem and then utilize its solution to address the more complex one. This work features three key contributions. First, we propose a general recipe for generating more challenging versions of existing benchmarks, resulting in three new benchmarks: HumanEval Pro, MBPP Pro, and BigCodeBench-Lite Pro, specifically designed to assess LLMs on self-invoking code generation. Second, from the analysis of experimental results over twenty LLMs on our benchmarks, we make two important observations: (i) Most LLMs excel in traditional code generation benchmarks like HumanEval and MBPP, but their performance declines on self-invoking tasks. For example, o1-mini achieves 96.2% pass@1 on HumanEval but only 76.2% on HumanEval Pro. (ii) On the self-invoking code generation task, instruction-tuned models demonstrate only marginal improvements over the base models. Third, we disclose the types of failure modes that exist in our evaluation results. All these results underscore the need for further advancements in self-invoking code generation tasks and provide a new direction for future research on enhancing LLMs' code reasoning capabilities.
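To make the task concrete, a hypothetical base/self-invoking problem pair in the spirit of the benchmark (not an actual HumanEval Pro item) might look like this:

def word_count(text):
    # Base problem: count the words in a string.
    return len(text.split())

def total_word_count(documents):
    # Self-invoking problem: the solution must call the base solution on
    # each document and aggregate the intermediate results.
    return sum(word_count(doc) for doc in documents)

assert total_word_count(["a b c", "d e"]) == 5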
14
67738e24b8df35b9c86aced8
null
null
2024-12-31T01:20:42.246000
Bringing Objects to Life: 4D generation from 3D objects
https://cdn-thumbnails.h…s/2412.20422.png
2
{ "_id": "63c59c3a6d132b995fedface", "avatarUrl": "/avatars/4e18b19e477cb683ce1ba3ae6ab77d8e.svg", "followerCount": null, "fullname": "Ohad rahamim", "isHf": false, "isMod": false, "isPro": false, "name": "ohad204", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/63c59c3a6d132b995fedface/ew_rUs_DefvjsYIilOYOY.gif", "https://cdn-uploads.huggingface.co/production/uploads/63c59c3a6d132b995fedface/34fPGLA17XCixLw3a_1xw.gif", "https://cdn-uploads.huggingface.co/production/uploads/63c59c3a6d132b995fedface/6zKQea1jeti_MkXzliDxp.gif" ]
2412.20422
[ { "_id": "67738c503be2064a656e7e70", "hidden": false, "name": "Ohad Rahamim", "status": "claimed_verified", "statusLastChangedAt": "2025-01-01T20:12:39.503Z", "user": { "_id": "63c59c3a6d132b995fedface", "avatarUrl": "/avatars/4e18b19e477cb683ce1ba3ae6ab77d8e.svg", "fullname": "Ohad rahamim", "isPro": false, "type": "user", "user": "ohad204" } }, { "_id": "67738c503be2064a656e7e71", "hidden": false, "name": "Ori Malca", "status": "claimed_verified", "statusLastChangedAt": "2025-01-01T20:12:41.929Z", "user": { "_id": "643ee7c8947f8be48fcb02c0", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/643ee7c8947f8be48fcb02c0/BOTz46LpyOZiC9BfFG_Pn.png", "fullname": "Ori Malca", "isPro": false, "type": "user", "user": "Orimalca" } }, { "_id": "67738c503be2064a656e7e72", "hidden": false, "name": "Dvir Samuel", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67738c503be2064a656e7e73", "hidden": false, "name": "Gal Chechik", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-29T10:12:01
Bringing Objects to Life: 4D generation from 3D objects
Recent advancements in generative modeling now enable the creation of 4D content (moving 3D objects) controlled with text prompts. 4D generation has large potential in applications like virtual worlds, media, and gaming, but existing methods provide limited control over the appearance and geometry of generated content. In this work, we introduce a method for animating user-provided 3D objects by conditioning on textual prompts to guide 4D generation, enabling custom animations while maintaining the identity of the original object. We first convert a 3D mesh into a "static" 4D Neural Radiance Field (NeRF) that preserves the visual attributes of the input object. Then, we animate the object using an Image-to-Video diffusion model driven by text. To improve motion realism, we introduce an incremental viewpoint selection protocol for sampling perspectives to promote lifelike movement and a masked Score Distillation Sampling (SDS) loss, which leverages attention maps to focus optimization on relevant regions. We evaluate our model in terms of temporal coherence, prompt adherence, and visual fidelity and find that our method outperforms baselines that are based on other approaches, achieving up to threefold improvements in identity preservation measured using LPIPS scores, and effectively balancing visual quality with dynamic content.
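The masked SDS loss lends itself to a compact sketch; the signature below is an assumed form (eps_pred from the frozen diffusion model, eps the injected noise, w_t the usual timestep weighting), not the authors' exact implementation:

import torch

def masked_sds_grad(eps_pred, eps, w_t, attn_mask):
    # Standard SDS gradient w_t * (eps_pred - eps), gated elementwise by
    # an attention-derived mask so optimization concentrates on regions
    # relevant to the motion prompt rather than the whole rendering.
    return w_t * (eps_pred - eps) * attn_mask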
36
67738c513be2064a656e7ebd
null
null
2024-12-30T23:56:15.734000
Training Software Engineering Agents and Verifiers with SWE-Gym
https://cdn-thumbnails.h…s/2412.21139.png
2
{ "_id": "61568f37272f2d87a99ba884", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/61568f37272f2d87a99ba884/lgvkl5f0rEyiQRVU5FE32.png", "followerCount": 36, "fullname": "Jiayi Pan", "isHf": false, "isMod": false, "isPro": false, "name": "Jiayi-Pan", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/61568f37272f2d87a99ba884/lscibgXrTDtXtAzDwsbCF.png" ]
2412.21139
[ { "_id": "67735961702f3c89046839f4", "hidden": false, "name": "Jiayi Pan", "status": "extracted_confirmed", "statusLastChangedAt": "2024-12-31T05:07:53.061Z", "user": { "_id": "61568f37272f2d87a99ba884", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/61568f37272f2d87a99ba884/lgvkl5f0rEyiQRVU5FE32.png", "fullname": "Jiayi Pan", "isPro": false, "type": "user", "user": "Jiayi-Pan" } }, { "_id": "67735961702f3c89046839f5", "hidden": false, "name": "Xingyao Wang", "status": "claimed_verified", "statusLastChangedAt": "2025-01-01T20:13:13.208Z", "user": { "_id": "62c63f31031996c36c848504", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62c63f31031996c36c848504/xZvH6jnozuFLXbtRSU-h1.jpeg", "fullname": "Xingyao Wang", "isPro": false, "type": "user", "user": "xingyaoww" } }, { "_id": "67735961702f3c89046839f6", "hidden": false, "name": "Graham Neubig", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67735961702f3c89046839f7", "hidden": false, "name": "Navdeep Jaitly", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67735961702f3c89046839f8", "hidden": false, "name": "Heng Ji", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67735961702f3c89046839f9", "hidden": false, "name": "Alane Suhr", "status": "extracted_confirmed", "statusLastChangedAt": "2025-01-02T00:34:15.047Z", "user": { "_id": "6611e6e1188ff298b0dd0b79", "avatarUrl": "/avatars/3a495283955ec9e06e1829c7eb2cd9a4.svg", "fullname": "Alane Suhr", "isPro": false, "type": "user", "user": "alsuhr" } }, { "_id": "67735961702f3c89046839fa", "hidden": false, "name": "Yizhe Zhang", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-30T18:15:39
Training Software Engineering Agents and Verifiers with SWE-Gym
We present SWE-Gym, the first environment for training real-world software engineering (SWE) agents. SWE-Gym contains 2,438 real-world Python task instances, each comprising a codebase with an executable runtime environment, unit tests, and a task specified in natural language. We use SWE-Gym to train language-model-based SWE agents, achieving up to 19% absolute gains in resolve rate on the popular SWE-Bench Verified and Lite test sets. We also experiment with inference-time scaling through verifiers trained on agent trajectories sampled from SWE-Gym. When combined with our fine-tuned SWE agents, we achieve 32.0% and 26.0% on SWE-Bench Verified and Lite, respectively, reflecting a new state-of-the-art for open-weight SWE agents. To facilitate further research, we publicly release SWE-Gym, models, and agent trajectories.
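Verifier-based inference-time scaling of this kind typically reduces to best-of-n reranking; a minimal sketch, where agent.rollout and verifier.score are hypothetical interfaces standing in for the fine-tuned agent and trained verifier:

def best_of_n(problem, agent, verifier, n=8):
    # Sample n candidate trajectories from the agent and keep the one the
    # learned verifier scores highest (inference-time scaling).
    trajectories = [agent.rollout(problem) for _ in range(n)]
    return max(trajectories, key=verifier.score)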
22
67735962702f3c8904683a22
null
null
2024-12-30T23:38:03.262000
Explanatory Instructions: Towards Unified Vision Tasks Understanding and Zero-shot Generalization
https://cdn-thumbnails.h…s/2412.18525.png
2
{ "_id": "66702b9cd8101e70bd8f70ec", "avatarUrl": "/avatars/2c20e4083ac314a4a42388b0ec4654e9.svg", "followerCount": 1, "fullname": "sheny", "isHf": false, "isMod": false, "isPro": false, "name": "axxkaya", "type": "user" }
true
null
2412.18525
[ { "_id": "67721383d565d51e49e7a90f", "hidden": false, "name": "Yang Shen", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:30:01.793Z", "user": { "_id": "66702b9cd8101e70bd8f70ec", "avatarUrl": "/avatars/2c20e4083ac314a4a42388b0ec4654e9.svg", "fullname": "sheny", "isPro": false, "type": "user", "user": "axxkaya" } }, { "_id": "67721383d565d51e49e7a910", "hidden": false, "name": "Xiu-Shen Wei", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67721383d565d51e49e7a911", "hidden": false, "name": "Yifan Sun", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67721383d565d51e49e7a912", "hidden": false, "name": "Yuxin Song", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67721383d565d51e49e7a913", "hidden": false, "name": "Tao Yuan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67721383d565d51e49e7a914", "hidden": false, "name": "Jian Jin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67721383d565d51e49e7a915", "hidden": false, "name": "Heyang Xu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67721383d565d51e49e7a916", "hidden": false, "name": "Yazhou Yao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67721383d565d51e49e7a917", "hidden": false, "name": "Errui Ding", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-24T16:08:25
Explanatory Instructions: Towards Unified Vision Tasks Understanding and Zero-shot Generalization
Computer Vision (CV) has yet to fully achieve the zero-shot task generalization observed in Natural Language Processing (NLP), despite following many of the milestones established in NLP, such as large transformer models, extensive pre-training, and the auto-regression paradigm, among others. In this paper, we explore the idea that CV adopts discrete and terminological task definitions (e.g., "image segmentation"), which may be a key barrier to zero-shot task generalization. Our hypothesis is that without truly understanding previously-seen tasks, due to these terminological definitions, deep models struggle to generalize to novel tasks. To verify this, we introduce Explanatory Instructions, which provide an intuitive way to define CV task objectives through detailed linguistic transformations from input images to outputs. We create a large-scale dataset comprising 12 million "image input to explanatory instruction to output" triplets, and train an auto-regressive-based vision-language model (AR-based VLM) that takes both images and explanatory instructions as input. By learning to follow these instructions, the AR-based VLM achieves instruction-level zero-shot capabilities for previously-seen tasks and demonstrates strong zero-shot generalization for unseen CV tasks. Code and dataset will be openly available on our GitHub repository.
75
67721386d565d51e49e7a9b7
null
null
2024-12-30T23:33:22.541000
Efficiently Serving LLM Reasoning Programs with Certaindex
https://cdn-thumbnails.h…s/2412.20993.png
2
{ "_id": "62ba66296501b0ff15ba1075", "avatarUrl": "/avatars/de468727cb1240fd4b1f24a19fb237a6.svg", "followerCount": 6, "fullname": "Yichao Fu", "isHf": false, "isMod": false, "isPro": true, "name": "Viol2000", "type": "user" }
true
null
2412.20993
[ { "_id": "6773560ad86f2a718772598b", "hidden": false, "name": "Yichao Fu", "status": "claimed_verified", "statusLastChangedAt": "2025-01-01T20:13:34.516Z", "user": { "_id": "62ba66296501b0ff15ba1075", "avatarUrl": "/avatars/de468727cb1240fd4b1f24a19fb237a6.svg", "fullname": "Yichao Fu", "isPro": true, "type": "user", "user": "Viol2000" } }, { "_id": "6773560ad86f2a718772598c", "hidden": false, "name": "Junda Chen", "status": "claimed_verified", "statusLastChangedAt": "2025-01-31T08:36:19.944Z", "user": { "_id": "643839d9581e6bf0fa9c835e", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/643839d9581e6bf0fa9c835e/JxlgR-zQhms-rfF0sDxD8.jpeg", "fullname": "Junda Chen", "isPro": false, "type": "user", "user": "GindaChen" } }, { "_id": "6773560ad86f2a718772598d", "hidden": false, "name": "Siqi Zhu", "status": "claimed_verified", "statusLastChangedAt": "2025-01-01T20:13:32.277Z", "user": { "_id": "65621fd68631d43d2baf33b2", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/ZrNK7489lqq_vLeQgzhcR.png", "fullname": "siqi zhu", "isPro": false, "type": "user", "user": "zsqzz" } }, { "_id": "6773560ad86f2a718772598e", "hidden": false, "name": "Zheyu Fu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773560ad86f2a718772598f", "hidden": false, "name": "Zhongdongming Dai", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773560ad86f2a7187725990", "hidden": false, "name": "Aurick Qiao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6773560ad86f2a7187725991", "hidden": false, "name": "Hao Zhang", "status": "claimed_verified", "statusLastChangedAt": "2025-01-01T20:13:29.981Z", "user": { "_id": "62d363143eebd640a4fa41fa", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62d363143eebd640a4fa41fa/pvPwXlJ5OOb-UIfmffv4E.jpeg", "fullname": "Hao Zhang", "isPro": false, "type": "user", "user": "zhisbug" } } ]
2024-12-30T14:57:53
Efficiently Serving LLM Reasoning Programs with Certaindex
The rapid evolution of large language models (LLMs) has unlocked their capabilities in advanced reasoning tasks like mathematical problem-solving, code generation, and legal analysis. Central to this progress are inference-time reasoning algorithms, which refine outputs by exploring multiple solution paths, at the cost of increasing compute demands and response latencies. Existing serving systems fail to adapt to the scaling behaviors of these algorithms or the varying difficulty of queries, leading to inefficient resource use and unmet latency targets. We present Dynasor, a system that optimizes inference-time compute for LLM reasoning queries. Unlike traditional engines, Dynasor tracks and schedules requests within reasoning queries and uses Certaindex, a proxy that measures statistical reasoning progress based on model certainty, to guide compute allocation dynamically. Dynasor co-adapts scheduling with reasoning progress: it allocates more compute to hard queries, reduces compute for simpler ones, and terminates unpromising queries early, balancing accuracy, latency, and cost. On diverse datasets and algorithms, Dynasor reduces compute by up to 50% in batch processing and sustains 3.3x higher query rates or 4.7x tighter latency SLOs in online serving.
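A certainty proxy like Certaindex can be illustrated with self-consistency sampling; the agreement measure and thresholding below are an assumed simplification of the paper's estimator, not its actual definition:

from collections import Counter

def certaindex(answers):
    # Proxy for reasoning progress: the share of sampled answers that
    # agree on the current majority candidate (higher = more certain).
    _, freq = Counter(answers).most_common(1)[0]
    return freq / len(answers)

def adaptive_self_consistency(sample_fn, batch=4, budget=32, threshold=0.9):
    answers = []
    while len(answers) < budget:
        answers.extend(sample_fn(batch))   # draw a few more reasoning paths
        if certaindex(answers) >= threshold:
            break                          # certain enough: release compute early
    return Counter(answers).most_common(1)[0][0]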
36
6773560bd86f2a71877259f6
null
null
2024-12-30T23:03:45.561000
TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching and Clap-Ranked Preference Optimization
https://cdn-thumbnails.h…s/2412.21037.png
4
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2412.21037
[ { "_id": "67735d5d4a4e0c546461a90d", "hidden": false, "name": "Chia-Yu Hung", "status": "claimed_verified", "statusLastChangedAt": "2025-01-01T20:13:06.951Z", "user": { "_id": "62864cdf011a13fe5f84d746", "avatarUrl": "/avatars/d01d3f2261572a625949e6b9dee2aa47.svg", "fullname": "Hung Chia Yu", "isPro": false, "type": "user", "user": "hungchiayu" } }, { "_id": "67735d5d4a4e0c546461a90e", "hidden": false, "name": "Navonil Majumder", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67735d5d4a4e0c546461a90f", "hidden": false, "name": "Zhifeng Kong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67735d5d4a4e0c546461a910", "hidden": false, "name": "Ambuj Mehrish", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67735d5d4a4e0c546461a911", "hidden": false, "name": "Rafael Valle", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67735d5d4a4e0c546461a912", "hidden": false, "name": "Bryan Catanzaro", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67735d5d4a4e0c546461a913", "hidden": false, "name": "Soujanya Poria", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-30T16:02:44
TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching and Clap-Ranked Preference Optimization
We introduce TangoFlux, an efficient Text-to-Audio (TTA) generative model with 515M parameters, capable of generating up to 30 seconds of 44.1kHz audio in just 3.7 seconds on a single A40 GPU. A key challenge in aligning TTA models lies in the difficulty of creating preference pairs, as TTA lacks structured mechanisms like verifiable rewards or gold-standard answers available for Large Language Models (LLMs). To address this, we propose CLAP-Ranked Preference Optimization (CRPO), a novel framework that iteratively generates and optimizes preference data to enhance TTA alignment. We demonstrate that the audio preference dataset generated using CRPO outperforms existing alternatives. With this framework, TangoFlux achieves state-of-the-art performance across both objective and subjective benchmarks. We open source all code and models to support further research in TTA generation.
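One CRPO iteration can be pictured as CLAP-based ranking of sampled audios into preference pairs; generate and clap_score below are hypothetical callables, and the best-vs-worst pairing is an assumed simplification of the framework:

def crpo_pair(prompt, generate, clap_score, k=4):
    # Sample k candidate audios for a prompt, rank them by CLAP
    # text-audio similarity, and keep the extremes as a preference pair
    # for DPO-style optimization.
    audios = [generate(prompt) for _ in range(k)]
    ranked = sorted(audios, key=lambda a: clap_score(prompt, a), reverse=True)
    return ranked[0], ranked[-1]   # (preferred, rejected)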
24
67735d5e4a4e0c546461a951
null
null
2024-12-30T22:51:09.824000
OneKE: A Dockerized Schema-Guided LLM Agent-based Knowledge Extraction System
https://cdn-thumbnails.h…s/2412.20005.png
2
{ "_id": "620b3bbb0668e435407c8d0a", "avatarUrl": "/avatars/e0fccbb2577d76088e09f054c35cffbc.svg", "followerCount": 19, "fullname": "Ningyu Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "Ningyu", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/620b3bbb0668e435407c8d0a/1spJ3Z9MTeBY4AMTJUTX4.png" ]
2412.20005
[ { "_id": "677368fbe182b937d5860758", "hidden": false, "name": "Yujie Luo", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677368fbe182b937d5860759", "hidden": false, "name": "Xiangyuan Ru", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677368fbe182b937d586075a", "hidden": false, "name": "Kangwei Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677368fbe182b937d586075b", "hidden": false, "name": "Lin Yuan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677368fbe182b937d586075c", "hidden": false, "name": "Mengshu Sun", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677368fbe182b937d586075d", "hidden": false, "name": "Ningyu Zhang", "status": "claimed_verified", "statusLastChangedAt": "2025-01-01T20:12:46.387Z", "user": { "_id": "620b3bbb0668e435407c8d0a", "avatarUrl": "/avatars/e0fccbb2577d76088e09f054c35cffbc.svg", "fullname": "Ningyu Zhang", "isPro": false, "type": "user", "user": "Ningyu" } }, { "_id": "677368fbe182b937d586075e", "hidden": false, "name": "Lei Liang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677368fbe182b937d586075f", "hidden": false, "name": "Zhiqiang Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677368fbe182b937d5860760", "hidden": false, "name": "Jun Zhou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677368fbe182b937d5860761", "hidden": false, "name": "Lanning Wei", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677368fbe182b937d5860762", "hidden": false, "name": "Da Zheng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677368fbe182b937d5860763", "hidden": false, "name": "Haofen Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677368fbe182b937d5860764", "hidden": false, "name": "Huajun Chen", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-28T04:01:30
OneKE: A Dockerized Schema-Guided LLM Agent-based Knowledge Extraction System
We introduce OneKE, a dockerized schema-guided knowledge extraction system, which can extract knowledge from the Web and raw PDF books, and supports various domains (science, news, etc.). Specifically, we design OneKE with multiple agents and a configurable knowledge base. Different agents perform their respective roles, enabling support for various extraction scenarios. The configurable knowledge base facilitates schema configuration, error-case debugging, and correction, further improving performance. Empirical evaluations on benchmark datasets demonstrate OneKE's efficacy, while case studies further elucidate its adaptability to diverse tasks across multiple domains, highlighting its potential for broad applications. We have open-sourced the code at https://github.com/zjunlp/OneKE and released a video at http://oneke.openkg.cn/demo.mp4.
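A schema entry for such a configurable knowledge base might look roughly as follows; every field name here is illustrative, not OneKE's actual configuration format:

# Hypothetical schema-guided extraction config (illustrative only).
schema = {
    "domain": "news",
    "source": "web",
    "entity_types": ["Person", "Organization", "Location"],
    "relation_types": [
        {"head": "Person", "relation": "works_for", "tail": "Organization"},
    ],
}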
18
677368fce182b937d58607c6
null
null
2024-12-30T22:29:50.947000
On the Compositional Generalization of Multimodal LLMs for Medical Imaging
https://cdn-thumbnails.h…s/2412.20070.png
4
{ "_id": "64f1a34f2c5c8b767916447e", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64f1a34f2c5c8b767916447e/uak2CsMAnxW8q4dwyAOBN.jpeg", "followerCount": 2, "fullname": "Zhenyang Cai", "isHf": false, "isMod": false, "isPro": false, "name": "Eric3200", "type": "user" }
true
null
2412.20070
[ { "_id": "67735f9f9cc5d33bf6af3cef", "hidden": false, "name": "Zhenyang Cai", "status": "claimed_verified", "statusLastChangedAt": "2025-01-01T20:13:04.716Z", "user": { "_id": "64f1a34f2c5c8b767916447e", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64f1a34f2c5c8b767916447e/uak2CsMAnxW8q4dwyAOBN.jpeg", "fullname": "Zhenyang Cai", "isPro": false, "type": "user", "user": "Eric3200" } }, { "_id": "67735f9f9cc5d33bf6af3cf0", "hidden": false, "name": "Junying Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67735f9f9cc5d33bf6af3cf1", "hidden": false, "name": "Rongsheng Wang", "status": "claimed_verified", "statusLastChangedAt": "2025-01-01T20:13:02.316Z", "user": { "_id": "63ca949b04c979828315389d", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63ca949b04c979828315389d/HS5xWNAYjjHeyAAwWJ11l.jpeg", "fullname": "wangrongsheng", "isPro": false, "type": "user", "user": "wangrongsheng" } }, { "_id": "67735f9f9cc5d33bf6af3cf2", "hidden": false, "name": "Weihong Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67735f9f9cc5d33bf6af3cf3", "hidden": false, "name": "Yonglin Deng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67735f9f9cc5d33bf6af3cf4", "hidden": false, "name": "Dingjie Song", "status": "claimed_verified", "statusLastChangedAt": "2025-01-01T20:12:59.736Z", "user": { "_id": "619f01b8cc04eadf54fa5d5d", "avatarUrl": "/avatars/928f3d1a6146e2e1ae4860445d929d5c.svg", "fullname": "Song Dingjie", "isPro": false, "type": "user", "user": "songdj" } }, { "_id": "67735f9f9cc5d33bf6af3cf5", "hidden": false, "name": "Yize Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67735f9f9cc5d33bf6af3cf6", "hidden": false, "name": "Zixu Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67735f9f9cc5d33bf6af3cf7", "hidden": false, "name": "Benyou Wang", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-28T07:50:00
On the Compositional Generalization of Multimodal LLMs for Medical Imaging
Multimodal large language models (MLLMs) hold significant potential in the medical field, but their capabilities are often limited by insufficient data in certain medical domains, highlighting the need for understanding what kinds of images can be used by MLLMs for generalization. Current research suggests that multi-task training outperforms single-task training, as different tasks can benefit each other, but it often overlooks the internal relationships within these tasks, providing limited guidance on selecting datasets to enhance specific tasks. To analyze this phenomenon, we employ compositional generalization (CG), the ability of models to understand novel combinations by recombining learned elements, as a guiding framework. Medical images can be precisely defined by Modality, Anatomical area, and Task, naturally providing an environment for exploring CG. We therefore assembled 106 medical datasets to create Med-MAT for comprehensive experiments. The experiments confirmed that MLLMs can use CG to understand unseen medical images and identified CG as one of the main drivers of the generalization observed in multi-task training. Additionally, further studies demonstrated that CG effectively supports datasets with limited data and delivers consistent performance across different backbones, highlighting its versatility and broad applicability. Med-MAT is publicly available at https://github.com/FreedomIntelligence/Med-MAT.
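The (Modality, Anatomical area, Task) factorization makes the CG criterion easy to state; the check below is illustrative, not Med-MAT's code:

# A target triplet tests compositional generalization when each of its
# elements was seen in training, but never in this exact combination.
seen = {("CT", "Lung", "Diagnosis"), ("MRI", "Brain", "Diagnosis"), ("X-ray", "Lung", "Detection")}
target = ("MRI", "Lung", "Diagnosis")
is_cg_target = target not in seen and all(
    any(t[i] == target[i] for t in seen) for i in range(3)
)
print(is_cg_target)  # True: MRI, Lung, and Diagnosis each appear, never together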
46
67735fa09cc5d33bf6af3d85
null
null
2024-12-30T22:14:42.572000
Edicho: Consistent Image Editing in the Wild
https://cdn-thumbnails.h…s/2412.21079.png
2
{ "_id": "64981bea09cea550852652af", "avatarUrl": "/avatars/df528e9008972c8e5ae4d278e617476c.svg", "followerCount": 3, "fullname": "Qiuyu Wang", "isHf": false, "isMod": false, "isPro": false, "name": "qiuyuu", "type": "user" }
false
null
2412.21079
[ { "_id": "67736096b272a4f186d161f9", "hidden": false, "name": "Qingyan Bai", "status": "claimed_verified", "statusLastChangedAt": "2025-01-01T20:12:57.160Z", "user": { "_id": "63f0baf66309c84d5f4a2226", "avatarUrl": "/avatars/a122f7d92441bd2feef7d4eda993fab7.svg", "fullname": "Qingyan Bai", "isPro": false, "type": "user", "user": "Meme145" } }, { "_id": "67736096b272a4f186d161fa", "hidden": false, "name": "Hao Ouyang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67736096b272a4f186d161fb", "hidden": false, "name": "Yinghao Xu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67736096b272a4f186d161fc", "hidden": false, "name": "Qiuyu Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67736096b272a4f186d161fd", "hidden": false, "name": "Ceyuan Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67736096b272a4f186d161fe", "hidden": false, "name": "Ka Leong Cheng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67736096b272a4f186d161ff", "hidden": false, "name": "Yujun Shen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67736096b272a4f186d16200", "hidden": false, "name": "Qifeng Chen", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-30T16:56:44
Edicho: Consistent Image Editing in the Wild
Though a verified need, consistent editing across in-the-wild images remains a technical challenge arising from various unmanageable factors, like object poses, lighting conditions, and photography environments. Edicho steps in with a training-free solution based on diffusion models, featuring a fundamental design principle of using explicit image correspondence to direct editing. Specifically, the key components include an attention manipulation module and a carefully refined classifier-free guidance (CFG) denoising strategy, both of which take into account the pre-estimated correspondence. Such an inference-time algorithm enjoys a plug-and-play nature and is compatible with most diffusion-based editing methods, such as ControlNet and BrushNet. Extensive results demonstrate the efficacy of Edicho in consistent cross-image editing under diverse settings. We will release the code to facilitate future studies.
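Directing attention through explicit correspondence can be sketched as gathering source keys and values before a standard attention step; the module below is an assumed form, not Edicho's exact design:

import torch
import torch.nn.functional as F

def corr_guided_attention(q_tgt, k_src, v_src, corr):
    # q_tgt: (N, d) target-image queries; k_src, v_src: (N, d) source
    # features; corr: (N,) index of the matched source token for each
    # target token, from pre-estimated correspondence.
    k = k_src[corr]                       # align source keys with target layout
    v = v_src[corr]                       # align source values likewise
    attn = F.softmax(q_tgt @ k.T / q_tgt.shape[-1] ** 0.5, dim=-1)
    return attn @ v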
23
6773609db272a4f186d16447
null
null
2024-12-30T14:51:31.064000
CypherBench: Towards Precise Retrieval over Full-scale Modern Knowledge Graphs in the LLM Era
https://cdn-thumbnails.h…s/2412.18702.png
2
{ "_id": "642f3e9ac953ca48ccd10927", "avatarUrl": "/avatars/eb6da3fce184aeb8cb9c2dfcd4235206.svg", "followerCount": 2, "fullname": "Yanlin Feng", "isHf": false, "isMod": false, "isPro": false, "name": "yanlinf", "type": "user" }
true
null
2412.18702
[ { "_id": "6771fca3117cc54ff8b99a7f", "hidden": false, "name": "Yanlin Feng", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T20:03:01.909Z", "user": { "_id": "642f3e9ac953ca48ccd10927", "avatarUrl": "/avatars/eb6da3fce184aeb8cb9c2dfcd4235206.svg", "fullname": "Yanlin Feng", "isPro": false, "type": "user", "user": "yanlinf" } }, { "_id": "6771fca3117cc54ff8b99a80", "hidden": false, "name": "Simone Papicchio", "status": "extracted_confirmed", "statusLastChangedAt": "2024-12-31T15:09:48.747Z", "user": { "_id": "62e10decd3a32f2d445056e7", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62e10decd3a32f2d445056e7/RgbtYjNbTnuh1KrtZqZhI.png", "fullname": "Simone PAPICCHIO", "isPro": false, "type": "user", "user": "simone-papicchio" } }, { "_id": "6771fca3117cc54ff8b99a81", "hidden": false, "name": "Sajjadur Rahman", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-24T23:22:04
CypherBench: Towards Precise Retrieval over Full-scale Modern Knowledge Graphs in the LLM Era
Retrieval from graph data is crucial for augmenting large language models (LLMs) with both open-domain knowledge and private enterprise data, and it is also a key component in the recent GraphRAG system (Edge et al., 2024). Despite decades of research on knowledge graphs and knowledge base question answering, leading LLM frameworks (e.g., LangChain and LlamaIndex) have only minimal support for retrieval from modern encyclopedic knowledge graphs like Wikidata. In this paper, we analyze the root cause and suggest that modern RDF knowledge graphs (e.g., Wikidata, Freebase) are less efficient for LLMs due to overly large schemas that far exceed the typical LLM context window, the use of resource identifiers, overlapping relation types, and a lack of normalization. As a solution, we propose property graph views on top of the underlying RDF graph that can be efficiently queried by LLMs using Cypher. We instantiated this idea on Wikidata and introduced CypherBench, the first benchmark with 11 large-scale, multi-domain property graphs with 7.8 million entities and over 10,000 questions. To achieve this, we tackled several key challenges, including developing an RDF-to-property graph conversion engine, creating a systematic pipeline for text-to-Cypher task generation, and designing new evaluation metrics.
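A text-to-Cypher instance over such a property graph view might look as follows; the node labels, relationship type, and property names are illustrative, not CypherBench's actual schema:

# Hypothetical question/query pair over a Wikidata-derived property graph.
question = "Which films directed by Christopher Nolan were released after 2010?"
cypher = """
MATCH (f:Film)-[:DIRECTED_BY]->(p:Person {name: 'Christopher Nolan'})
WHERE f.release_year > 2010
RETURN f.title
"""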
6
6771fca4117cc54ff8b99c5a
null
null
2024-12-30T12:24:56.133000
1.58-bit FLUX
https://cdn-thumbnails.h…s/2412.18653.png
6
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2412.18653
[ { "_id": "6772d6ff33efe31653f02b0b", "hidden": false, "name": "Chenglin Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6772d6ff33efe31653f02b0c", "hidden": false, "name": "Celong Liu", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:46:38.723Z", "user": { "_id": "64364ba31adb261e94e0e5b4", "avatarUrl": "/avatars/cb6ef2155d88b1e64c4ff36bd3a8f255.svg", "fullname": "Celong Liu", "isPro": false, "type": "user", "user": "goddice" } }, { "_id": "6772d6ff33efe31653f02b0d", "hidden": false, "name": "Xueqing Deng", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:46:44.300Z", "user": { "_id": "65ca9b1743207e438a95e90c", "avatarUrl": "/avatars/8f7bde1c44d8e665a29ee08ce7fedfa4.svg", "fullname": "Xueqing Deng", "isPro": true, "type": "user", "user": "xdeng77" } }, { "_id": "6772d6ff33efe31653f02b0e", "hidden": false, "name": "Dongwon Kim", "status": "claimed_verified", "statusLastChangedAt": "2025-01-16T08:31:52.344Z", "user": { "_id": "64dc5208c38427829de81b16", "avatarUrl": "/avatars/43a08e46a7a78f1e3d1f6645a9b1d26b.svg", "fullname": "Dongwon", "isPro": false, "type": "user", "user": "kdwon" } }, { "_id": "6772d6ff33efe31653f02b0f", "hidden": false, "name": "Xing Mei", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:47:17.287Z", "user": { "_id": "642e748858ba61373f2ea217", "avatarUrl": "/avatars/e9f002cd662f07bd213c31a45176d5a5.svg", "fullname": "Xing Mei", "isPro": false, "type": "user", "user": "xm2023" } }, { "_id": "6772d6ff33efe31653f02b10", "hidden": false, "name": "Xiaohui Shen", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:47:23.529Z", "user": { "_id": "6430aa1b32a732121cd81f98", "avatarUrl": "/avatars/5419f8d6d4d36fa5ac83e30667b9fd99.svg", "fullname": "Xiaohui Shen", "isPro": false, "type": "user", "user": "XiaohuiShen" } }, { "_id": "6772d6ff33efe31653f02b11", "hidden": false, "name": "Liang-Chieh Chen", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-24T19:00:02
1.58-bit FLUX
We present 1.58-bit FLUX, the first successful approach to quantizing the state-of-the-art text-to-image generation model, FLUX.1-dev, using 1.58-bit weights (i.e., values in {-1, 0, +1}) while maintaining comparable performance for generating 1024 x 1024 images. Notably, our quantization method operates without access to image data, relying solely on self-supervision from the FLUX.1-dev model. Additionally, we develop a custom kernel optimized for 1.58-bit operations, achieving a 7.7x reduction in model storage, a 5.1x reduction in inference memory, and improved inference latency. Extensive evaluations on the GenEval and T2I Compbench benchmarks demonstrate the effectiveness of 1.58-bit FLUX in maintaining generation quality while significantly enhancing computational efficiency.
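Ternary quantization to {-1, 0, +1} is commonly done BitNet-b1.58-style; the scheme below is an assumed form for illustration, since the abstract does not spell out the exact method:

import torch

def quantize_ternary(w, eps=1e-8):
    # Scale by the mean absolute weight, then round into {-1, 0, +1};
    # the per-tensor scale is kept for dequantization (w_q * scale).
    scale = w.abs().mean().clamp(min=eps)
    w_q = (w / scale).round().clamp(-1, 1)
    return w_q, scale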
79
6772d70133efe31653f02bde
null
null
2024-12-30T08:43:12.896000
Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey
https://cdn-thumbnails.h…s/2412.18619.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2412.18619
[ { "_id": "67720aa292c63806bde6d2be", "hidden": false, "name": "Liang Chen", "status": "extracted_pending", "statusLastChangedAt": "2024-12-30T02:51:16.924Z", "user": { "_id": "61b0a4ce1b3d95b3d1ed9251", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/Wwjr26vdudX5KYVTb8Q0a.png", "fullname": "Liang Chen", "isPro": true, "type": "user", "user": "leonardPKU" } }, { "_id": "67720aa292c63806bde6d2bf", "hidden": true, "name": "Zekun Wang", "status": "admin_assigned", "statusLastChangedAt": "2025-01-10T09:26:13.884Z", "user": { "_id": "656832dfbd65fd41ee7aa8cd", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/656832dfbd65fd41ee7aa8cd/HHkyetTqNq1wIBPipzjQA.jpeg", "fullname": "Zekun Wang", "isPro": false, "type": "user", "user": "kugwzk" } }, { "_id": "67720aa292c63806bde6d2c0", "hidden": false, "name": "Shuhuai Ren", "status": "admin_assigned", "statusLastChangedAt": "2025-01-10T09:26:21.168Z", "user": { "_id": "60d2e681b8448e1785bbda06", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1624434302056-noauth.jpeg", "fullname": "Shuhuai Ren", "isPro": false, "type": "user", "user": "ShuhuaiRen" } }, { "_id": "67720aa292c63806bde6d2c1", "hidden": false, "name": "Lei Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67720aa292c63806bde6d2c2", "hidden": false, "name": "Haozhe Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67720aa292c63806bde6d2c3", "hidden": false, "name": "Yunshui Li", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:30:44.113Z", "user": { "_id": "62e670d33651180f7d334ef3", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1661829952739-62e670d33651180f7d334ef3.jpeg", "fullname": "LiYunshui", "isPro": false, "type": "user", "user": "Wa2erGo" } }, { "_id": "67720aa292c63806bde6d2c4", "hidden": false, "name": "Zefan Cai", "status": "claimed_verified", "statusLastChangedAt": "2025-01-02T10:19:21.147Z", "user": { "_id": "66f2f84bdc54099111af9c59", "avatarUrl": "/avatars/91c998f82f2ee99cff9cfff859f9c76e.svg", "fullname": "Zefan Cai", "isPro": false, "type": "user", "user": "Zefan-Cai" } }, { "_id": "67720aa292c63806bde6d2c5", "hidden": false, "name": "Hongcheng Guo", "status": "admin_assigned", "statusLastChangedAt": "2025-01-10T09:26:44.644Z", "user": { "_id": "60ff653699f651283c017df7", "avatarUrl": "/avatars/075257e8d7fefc43be24738f978ea5c3.svg", "fullname": "Hongcheng Guo", "isPro": false, "type": "user", "user": "hongcheng" } }, { "_id": "67720aa292c63806bde6d2c6", "hidden": false, "name": "Lei Zhang", "status": "claimed_verified", "statusLastChangedAt": "2025-01-01T20:13:45.095Z", "user": { "_id": "64c38871f9cd765462fa1a17", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64c38871f9cd765462fa1a17/yuIlVcqeDlQVKsUF8uEl3.jpeg", "fullname": "Lei Zhang", "isPro": false, "type": "user", "user": "Lemoncoke" } }, { "_id": "67720aa292c63806bde6d2c7", "hidden": false, "name": "Yizhe Xiong", "status": "claimed_verified", "statusLastChangedAt": "2025-01-03T14:05:34.303Z", "user": { "_id": "642e916f6a378e41aa561516", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/Fsas7e23J8Fhexvi8LoSN.jpeg", "fullname": "Yizhe Xiong", "isPro": false, "type": "user", "user": "Bostoncake" } }, { "_id": "67720aa292c63806bde6d2c8", "hidden": true, "name": "Yichi Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-01-10T09:24:36.386Z", "user": { "_id": "6369b1d456d1f93498130a8a", "avatarUrl": 
"/avatars/8ec228aa6f171715652511f948765db9.svg", "fullname": "Yichi Zhang", "isPro": false, "type": "user", "user": "594zyc" } }, { "_id": "67720aa292c63806bde6d2c9", "hidden": false, "name": "Ruoyu Wu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67720aa292c63806bde6d2ca", "hidden": false, "name": "Qingxiu Dong", "status": "admin_assigned", "statusLastChangedAt": "2025-01-10T09:25:52.373Z", "user": { "_id": "670740744341dcee459fb990", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/66UkZvrAk7fQr5YCylEFk.png", "fullname": "Qingxiu Dong", "isPro": false, "type": "user", "user": "Rsy24" } }, { "_id": "67720aa292c63806bde6d2cb", "hidden": false, "name": "Ge Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-01-10T09:26:59.568Z", "user": { "_id": "638efcf4c67af472d316d424", "avatarUrl": "/avatars/97a57859d7d87a3a8f1bb41d32a72bc2.svg", "fullname": "Ge Zhang", "isPro": false, "type": "user", "user": "zhangysk" } }, { "_id": "67720aa292c63806bde6d2cc", "hidden": false, "name": "Jian Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67720aa292c63806bde6d2cd", "hidden": false, "name": "Lingwei Meng", "status": "admin_assigned", "statusLastChangedAt": "2025-01-10T09:25:39.412Z", "user": { "_id": "65d8d2d0182779056299f668", "avatarUrl": "/avatars/2f21723a6348ee55431336b740aac9d7.svg", "fullname": "Lingwei Meng", "isPro": false, "type": "user", "user": "LingweiMeng" } }, { "_id": "67720aa292c63806bde6d2ce", "hidden": false, "name": "Shujie Hu", "status": "admin_assigned", "statusLastChangedAt": "2025-01-10T09:25:46.895Z", "user": { "_id": "62a9ed2e097098dc397b5b82", "avatarUrl": "/avatars/cb6b05ec338e4f46187151527157781d.svg", "fullname": "Shujie Hu", "isPro": false, "type": "user", "user": "sjhu" } }, { "_id": "67720aa292c63806bde6d2cf", "hidden": false, "name": "Yulong Chen", "status": "admin_assigned", "statusLastChangedAt": "2025-01-10T09:27:19.601Z", "user": { "_id": "627e07f953494d140baea7fd", "avatarUrl": "/avatars/e6c4cc361f0cbcd5e27840d477cd06f3.svg", "fullname": "cyl", "isPro": false, "type": "user", "user": "yulongchen" } }, { "_id": "67720aa292c63806bde6d2d0", "hidden": false, "name": "Junyang Lin", "status": "admin_assigned", "statusLastChangedAt": "2025-01-10T09:22:56.705Z", "user": { "_id": "620760a26e3b7210c2ff1943", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/620760a26e3b7210c2ff1943/VC-rKqimF6yxGESNVlPoR.jpeg", "fullname": "Junyang Lin", "isPro": false, "type": "user", "user": "JustinLin610" } }, { "_id": "67720aa292c63806bde6d2d1", "hidden": false, "name": "Shuai Bai", "status": "admin_assigned", "statusLastChangedAt": "2025-01-10T09:27:34.048Z", "user": { "_id": "63451cf0a05b51f7ded25505", "avatarUrl": "/avatars/dec4bbee4a82b773fc58dfc2dce9dbeb.svg", "fullname": "shuai bai", "isPro": false, "type": "user", "user": "bluelike" } }, { "_id": "67720aa292c63806bde6d2d2", "hidden": false, "name": "Andreas Vlachos", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T19:36:05.328Z", "user": { "_id": "66a638a54166878166e4169d", "avatarUrl": "/avatars/5b7c5b1654a0a6e6bc746ef2c54f2ab0.svg", "fullname": "Andreas Vlachos", "isPro": false, "type": "user", "user": "andreasvlachos" } }, { "_id": "67720aa292c63806bde6d2d3", "hidden": false, "name": "Xu Tan", "status": "admin_assigned", "statusLastChangedAt": "2025-01-10T09:27:49.723Z", "user": { "_id": "5f1040b6e9d71719e3be71d2", "avatarUrl": "/avatars/a2f28940236ae625ed3810ad62e343ff.svg", "fullname": "Xu Tan", "isPro": 
false, "type": "user", "user": "xutan" } }, { "_id": "67720aa292c63806bde6d2d4", "hidden": false, "name": "Minjia Zhang", "status": "admin_assigned", "statusLastChangedAt": "2025-01-10T09:28:01.219Z", "user": { "_id": "6305b5e39d2531fabd195c5f", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6305b5e39d2531fabd195c5f/oGS5leXVrB8Ct_BF96VkR.jpeg", "fullname": "Zhang", "isPro": false, "type": "user", "user": "Minjia" } }, { "_id": "67720aa292c63806bde6d2d5", "hidden": false, "name": "Wen Xiao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67720aa292c63806bde6d2d6", "hidden": false, "name": "Aaron Yee", "status": "claimed_verified", "statusLastChangedAt": "2025-01-08T09:47:35.995Z", "user": { "_id": "65f970a362c5227d533fe56b", "avatarUrl": "/avatars/c54eb2b41bfe9005cfb6b98b5451d784.svg", "fullname": "Aaron Yee", "isPro": false, "type": "user", "user": "aaronyee" } }, { "_id": "67720aa292c63806bde6d2d7", "hidden": false, "name": "Tianyu Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67720aa292c63806bde6d2d8", "hidden": false, "name": "Baobao Chang", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-16T05:02:25
Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey
Building on the foundations of language modeling in natural language processing, Next Token Prediction (NTP) has evolved into a versatile training objective for machine learning tasks across various modalities, achieving considerable success. As Large Language Models (LLMs) have advanced to unify understanding and generation tasks within the textual modality, recent research has shown that tasks from different modalities can also be effectively encapsulated within the NTP framework, transforming multimodal information into tokens and predicting the next one given the context. This survey introduces a comprehensive taxonomy that unifies both understanding and generation within multimodal learning through the lens of NTP. The proposed taxonomy covers five key aspects: multimodal tokenization, MMNTP model architectures, unified task representation, datasets & evaluation, and open challenges. This new taxonomy aims to aid researchers in their exploration of multimodal intelligence. An associated GitHub repository collecting the latest papers and repos is available at https://github.com/LMM101/Awesome-Multimodal-Next-Token-Prediction
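The unifying observation is that once every modality is tokenized into a shared stream, training reduces to one shifted cross-entropy; a minimal sketch:

import torch.nn.functional as F

def ntp_loss(logits, tokens):
    # Next-token prediction over one mixed token stream: after tokenization,
    # text, image, and audio tokens share a vocabulary, so a single shifted
    # cross-entropy covers every modality.
    # logits: (B, T, V); tokens: (B, T) ids of the same sequence.
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        tokens[:, 1:].reshape(-1),
    )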
55
67720aa492c63806bde6d350
null
null
2024-12-30T05:58:47.315000
Safeguard Fine-Tuned LLMs Through Pre- and Post-Tuning Model Merging
https://cdn-thumbnails.h…s/2412.19512.png
2
{ "_id": "608abf1272b50b02c4b02865", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1619708309549-608abf1272b50b02c4b02865.jpeg", "followerCount": 2, "fullname": "Hsuan Su", "isHf": false, "isMod": false, "isPro": false, "name": "jacksukk", "type": "user" }
true
null
2412.19512
[ { "_id": "67727ca4986fbffa7a2208d4", "hidden": false, "name": "Hua Farn", "status": "extracted_pending", "statusLastChangedAt": "2024-12-30T10:57:42.126Z", "user": { "_id": "641a9cb1f9dd6391a2463477", "avatarUrl": "/avatars/73caf5389f88c7356f02ee9f73146faa.svg", "fullname": "Farn Hua", "isPro": false, "type": "user", "user": "farnhua" } }, { "_id": "67727ca4986fbffa7a2208d5", "hidden": false, "name": "Hsuan Su", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:29:21.989Z", "user": { "_id": "608abf1272b50b02c4b02865", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1619708309549-608abf1272b50b02c4b02865.jpeg", "fullname": "Hsuan Su", "isPro": false, "type": "user", "user": "jacksukk" } }, { "_id": "67727ca4986fbffa7a2208d6", "hidden": false, "name": "Shachi H Kumar", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67727ca4986fbffa7a2208d7", "hidden": false, "name": "Saurav Sahay", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:52:51.579Z", "user": { "_id": "626c5d06f451470f86252659", "avatarUrl": "/avatars/41078e5f99a7660b979c8cf7a63d71c6.svg", "fullname": "Saurav Sahay", "isPro": false, "type": "user", "user": "sauravsahay" } }, { "_id": "67727ca4986fbffa7a2208d8", "hidden": false, "name": "Shang-Tse Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67727ca4986fbffa7a2208d9", "hidden": false, "name": "Hung-yi Lee", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:53:02.019Z", "user": { "_id": "629b6eb3ad498ab19282ae6f", "avatarUrl": "/avatars/adade8c512122120b06e5c50724a178d.svg", "fullname": "Hung-yi Lee", "isPro": false, "type": "user", "user": "hungyilee" } } ]
2024-12-27T08:03:22
Safeguard Fine-Tuned LLMs Through Pre- and Post-Tuning Model Merging
Fine-tuning large language models (LLMs) for downstream tasks is a widely adopted approach, but it often leads to safety degradation in safety-aligned LLMs. Currently, many solutions address this issue by incorporating additional safety data, which can be impractical in many cases. In this paper, we address the question: How can we improve downstream task performance while preserving safety in LLMs without relying on additional safety data? We propose a simple and effective method that maintains the inherent safety of LLMs while enhancing their downstream task performance: merging the weights of pre- and post-fine-tuned safety-aligned models. Experimental results across various downstream tasks, models, and merging methods demonstrate that this approach effectively mitigates safety degradation while improving downstream task performance, offering a practical solution for adapting safety-aligned LLMs.
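The merge itself can be as simple as linear interpolation of state dicts; the function below is a sketch of one such merging method, with alpha=0.5 as an illustrative default rather than the paper's recommended setting:

def merge_state_dicts(pre_sd, post_sd, alpha=0.5):
    # Linear merge of the safety-aligned model before fine-tuning (pre)
    # and after fine-tuning (post); alpha trades downstream-task gains
    # against safety retention.
    return {k: (1 - alpha) * pre_sd[k] + alpha * post_sd[k] for k in pre_sd}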
8
67727ca6986fbffa7a220934
null
null
2024-12-30T03:07:39.186000
The Superposition of Diffusion Models Using the Itô Density Estimator
https://cdn-thumbnails.h…s/2412.17762.png
2
{ "_id": "65a49ef27ec6af0f956a5c61", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/65a49ef27ec6af0f956a5c61/OGJ1OVezooF7dUn1SiEKG.jpeg", "followerCount": 1, "fullname": "Marta Skreta", "isHf": false, "isMod": false, "isPro": false, "name": "mskrt", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/65a49ef27ec6af0f956a5c61/P2wF-TfD9U5L1rN3ezxCI.gif" ]
2412.17762
[ { "_id": "676d0a2e0076ad5ba195b88a", "hidden": false, "name": "Marta Skreta", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:32:04.997Z", "user": { "_id": "65a49ef27ec6af0f956a5c61", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/65a49ef27ec6af0f956a5c61/OGJ1OVezooF7dUn1SiEKG.jpeg", "fullname": "Marta Skreta", "isPro": false, "type": "user", "user": "mskrt" } }, { "_id": "676d0a2e0076ad5ba195b88b", "hidden": false, "name": "Lazar Atanackovic", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:32:02.707Z", "user": { "_id": "66cd0e523f6dbc62985b184b", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/66cd0e523f6dbc62985b184b/vlHTY9LKJ0jKeAniHdGil.jpeg", "fullname": "Lazar Atanackovic", "isPro": false, "type": "user", "user": "lazaratan" } }, { "_id": "676d0a2e0076ad5ba195b88c", "hidden": false, "name": "Avishek Joey Bose", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d0a2e0076ad5ba195b88d", "hidden": false, "name": "Alexander Tong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d0a2e0076ad5ba195b88e", "hidden": false, "name": "Kirill Neklyudov", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T20:44:53.406Z", "user": { "_id": "65039d7a93574a897179139a", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/65039d7a93574a897179139a/szQAWkF2M9XnYKykmX8Cg.jpeg", "fullname": "Kirill Neklyudov", "isPro": false, "type": "user", "user": "necludov" } } ]
2024-12-23T18:18:07
The Superposition of Diffusion Models Using the Itô Density Estimator
The Cambrian explosion of easily accessible pre-trained diffusion models suggests a demand for methods that combine multiple different pre-trained diffusion models without incurring the significant computational burden of re-training a larger combined model. In this paper, we cast the problem of combining multiple pre-trained diffusion models at the generation stage under a novel proposed framework termed superposition. Theoretically, we derive superposition from rigorous first principles stemming from the celebrated continuity equation and design two novel algorithms tailor-made for combining diffusion models in SuperDiff. SuperDiff leverages a new scalable Itô density estimator for the log likelihood of the diffusion SDE which incurs no additional overhead compared to the well-known Hutchinson's estimator needed for divergence calculations. We demonstrate that SuperDiff is scalable to large pre-trained diffusion models as superposition is performed solely through composition during inference, and also enjoys painless implementation as it combines different pre-trained vector fields through an automated re-weighting scheme. Notably, we show that SuperDiff is efficient during inference time, and mimics traditional composition operators such as the logical OR and the logical AND. We empirically demonstrate the utility of using SuperDiff for generating more diverse images on CIFAR-10, more faithful prompt conditioned image editing using Stable Diffusion, and improved unconditional de novo structure design of proteins. https://github.com/necludov/super-diffusion
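The OR-style combination can be sketched as density-weighted mixing of the pre-trained scores; the running Itô density estimates (log_densities, one scalar tensor per model) are assumed to be maintained elsewhere during sampling, and the softmax weighting is an assumed reading of the automated re-weighting scheme:

import torch

def superposed_score(x, t, models, log_densities):
    # Softmax the per-model density estimates into weights, then mix each
    # pre-trained score into one shared update for the current step.
    w = torch.softmax(torch.stack(log_densities), dim=0)   # (M,)
    scores = torch.stack([m(x, t) for m in models])        # (M, *x.shape)
    w = w.view(-1, *([1] * x.dim()))                       # broadcast over x
    return (w * scores).sum(dim=0)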
12
676d0a330076ad5ba195b97c
null
null
2024-12-30T01:27:40.574000
SBS Figures: Pre-training Figure QA from Stage-by-Stage Synthesized Images
https://cdn-thumbnails.h…s/2412.17606.png
2
{ "_id": "651a7684a1a5e5d617e28f84", "avatarUrl": "/avatars/f484ae7c8d980818cd2ba3ffa682b781.svg", "followerCount": 0, "fullname": "Risa Shinoda", "isHf": false, "isMod": false, "isPro": false, "name": "risashinoda", "type": "user" }
true
null
2412.17606
[ { "_id": "677204632f016f40c42e2180", "hidden": false, "name": "Risa Shinoda", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:31:04.074Z", "user": { "_id": "651a7684a1a5e5d617e28f84", "avatarUrl": "/avatars/f484ae7c8d980818cd2ba3ffa682b781.svg", "fullname": "Risa Shinoda", "isPro": false, "type": "user", "user": "risashinoda" } }, { "_id": "677204632f016f40c42e2181", "hidden": false, "name": "Kuniaki Saito", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:23:08.219Z", "user": { "_id": "649937d773736b1e8e61a5fa", "avatarUrl": "/avatars/5da5e43a5bfc9c44194805dc8fa97e9a.svg", "fullname": "Kuniaki Saito", "isPro": false, "type": "user", "user": "ksaito2omr" } }, { "_id": "677204632f016f40c42e2182", "hidden": false, "name": "Shohei Tanaka", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677204632f016f40c42e2183", "hidden": false, "name": "Tosho Hirasawa", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677204632f016f40c42e2184", "hidden": false, "name": "Yoshitaka Ushiku", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:41:42.211Z", "user": { "_id": "6268cab5dc80d589a9f8ea82", "avatarUrl": "/avatars/2897993439d641478aa4e797551fccfb.svg", "fullname": "Yoshitaka Ushiku", "isPro": false, "type": "user", "user": "yushiku" } } ]
2024-12-23T14:25:33
SBS Figures: Pre-training Figure QA from Stage-by-Stage Synthesized Images
Building a large-scale figure QA dataset requires a considerable amount of work, from gathering and selecting figures to extracting attributes like text, numbers, and colors, and generating QAs. Although recent developments in LLMs have led to efforts to synthesize figures, most of these focus primarily on QA generation. Additionally, creating figures directly using LLMs often encounters issues such as code errors, similar-looking figures, and repetitive content. To address these issues, we present SBSFigures (Stage-by-Stage Synthetic Figures), a dataset for pre-training figure QA. Our proposed pipeline enables the creation of chart figures with complete annotations of the visualized data and dense QA annotations without any manual annotation process. Our stage-by-stage pipeline makes it possible to efficiently create figures with diverse topics and appearances while minimizing code errors. Our SBSFigures dataset demonstrates a strong pre-training effect, making it possible to achieve efficient training with a limited amount of real-world chart data starting from our pre-trained weights.
5
677204652f016f40c42e2367
null
null
2024-12-30T00:04:19.275000
VideoMaker: Zero-shot Customized Video Generation with the Inherent Force of Video Diffusion Models
https://cdn-thumbnails.h…s/2412.19645.png
2
{ "_id": "63468720dd6d90d82ccf3450", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63468720dd6d90d82ccf3450/tVBFlmZNz8FRMkOrDaDID.jpeg", "followerCount": 32, "fullname": "YSH", "isHf": false, "isMod": false, "isPro": false, "name": "BestWishYsh", "type": "user" }
false
null
2412.19645
[ { "_id": "677229bf8103ad52cb7031b1", "hidden": false, "name": "Tao Wu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677229bf8103ad52cb7031b2", "hidden": false, "name": "Yong Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677229bf8103ad52cb7031b3", "hidden": false, "name": "Xiaodong Cun", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:42:51.507Z", "user": { "_id": "63184c517ca1b876d99b7e0e", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63184c517ca1b876d99b7e0e/b-qDExoeJuDXK0cJBZKnz.jpeg", "fullname": "Xiaodong Cun", "isPro": false, "type": "user", "user": "vinthony" } }, { "_id": "677229bf8103ad52cb7031b4", "hidden": false, "name": "Zhongang Qi", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:42:45.157Z", "user": { "_id": "660cc64f1b14f691990c0ea0", "avatarUrl": "/avatars/f172d3120b22d745a41cc3f2eb499ce6.svg", "fullname": "Zhongang Qi", "isPro": false, "type": "user", "user": "phoenixqza" } }, { "_id": "677229bf8103ad52cb7031b5", "hidden": false, "name": "Junfu Pu", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:42:39.399Z", "user": { "_id": "646c5fcc97819a8be938ee9b", "avatarUrl": "/avatars/27976d701e88aebf5855418c055ba50e.svg", "fullname": "Junfu Pu", "isPro": false, "type": "user", "user": "Jevin754" } }, { "_id": "677229bf8103ad52cb7031b6", "hidden": false, "name": "Huanzhang Dou", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:42:34.026Z", "user": { "_id": "64c8ad3a527d763655600f6f", "avatarUrl": "/avatars/6a4ab87fe040d666313f59d93712cf73.svg", "fullname": "Huanzhang Dou", "isPro": false, "type": "user", "user": "Huanzhang" } }, { "_id": "677229bf8103ad52cb7031b7", "hidden": false, "name": "Guangcong Zheng", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:42:21.249Z", "user": { "_id": "6400b8af330a45b036094a87", "avatarUrl": "/avatars/1985c994dab49c8d2b7ec07cbea8eb7e.svg", "fullname": "GuangcongZheng", "isPro": false, "type": "user", "user": "ZGCTroy" } }, { "_id": "677229bf8103ad52cb7031b8", "hidden": false, "name": "Ying Shan", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:42:27.994Z", "user": { "_id": "63ca3ddc04c979828310bfcb", "avatarUrl": "/avatars/615e0d8622950b4408b40d550f02a894.svg", "fullname": "Ying Shan", "isPro": false, "type": "user", "user": "yshan2u" } }, { "_id": "677229bf8103ad52cb7031b9", "hidden": false, "name": "Xi Li", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-27T13:49:25
VideoMaker: Zero-shot Customized Video Generation with the Inherent Force of Video Diffusion Models
Zero-shot customized video generation has gained significant attention due to its substantial application potential. Existing methods rely on additional models to extract and inject reference subject features, assuming that the Video Diffusion Model (VDM) alone is insufficient for zero-shot customized video generation. However, these methods often struggle to maintain consistent subject appearance due to suboptimal feature extraction and injection techniques. In this paper, we reveal that VDM inherently possesses the force to extract and inject subject features. Departing from previous heuristic approaches, we introduce a novel framework that leverages VDM's inherent force to enable high-quality zero-shot customized video generation. Specifically, for feature extraction, we directly input reference images into VDM and use its intrinsic feature extraction process, which not only provides fine-grained features but also aligns closely with VDM's pre-trained knowledge. For feature injection, we devise an innovative bidirectional interaction between subject features and generated content through spatial self-attention within VDM, ensuring that VDM achieves better subject fidelity while maintaining the diversity of the generated video. Experiments on both customized human and object video generation validate the effectiveness of our framework.
13
677229c48103ad52cb7032d5
null
null
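The VideoMaker abstract hinges on bidirectional interaction through spatial self-attention. A hedged toy sketch of that mechanism, with made-up dimensions standing in for the real VDM's feature maps:

```python
# Toy version of the injection idea: concatenate reference-image tokens
# with generated-frame tokens and run one self-attention pass, so each
# stream can attend to the other (bidirectional by construction).
import torch
import torch.nn as nn

class JointSpatialSelfAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, gen_tokens, ref_tokens):
        joint = torch.cat([gen_tokens, ref_tokens], dim=1)
        out, _ = self.attn(joint, joint, joint)
        n = gen_tokens.shape[1]
        return out[:, :n], out[:, n:]  # split back into the two streams

gen = torch.randn(2, 16, 64)  # B x N_gen x C generated-frame tokens
ref = torch.randn(2, 16, 64)  # B x N_ref x C reference-image tokens
gen_out, ref_out = JointSpatialSelfAttention()(gen, ref)
print(gen_out.shape, ref_out.shape)
```

In the paper's framing this attention already lives inside the VDM, which is why no extra feature-extraction network is needed.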
2024-12-29T23:57:59.186000
Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment
https://cdn-thumbnails.h…s/2412.19326.png
2
{ "_id": "62aafa49f29ff279b51f0182", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62aafa49f29ff279b51f0182/rQx8QFQGOY2qIhqJ8zSRj.jpeg", "followerCount": 10, "fullname": "yinanhe", "isHf": false, "isMod": false, "isPro": false, "name": "ynhe", "type": "user" }
false
null
2412.19326
[ { "_id": "67722817633a6043c33212aa", "hidden": false, "name": "Ziang Yan", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:44:33.925Z", "user": { "_id": "65499e5f2a292b3e2e5715a3", "avatarUrl": "/avatars/087b3e36dfb66e044265b856bab31657.svg", "fullname": "ziang yan", "isPro": false, "type": "user", "user": "Aurorana" } }, { "_id": "67722817633a6043c33212ab", "hidden": false, "name": "Zhilin Li", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:44:41.063Z", "user": { "_id": "64412af71a80f6d83cb03318", "avatarUrl": "/avatars/113ad7064e7230eff3ae86ad24c88818.svg", "fullname": "zhilin li", "isPro": false, "type": "user", "user": "white213" } }, { "_id": "67722817633a6043c33212ac", "hidden": false, "name": "Yinan He", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:44:47.500Z", "user": { "_id": "65b9d9961fe588f824fde191", "avatarUrl": "/avatars/a9245958cc998a4b4b870bf2490fdaee.svg", "fullname": "Yinan He", "isPro": false, "type": "user", "user": "yinanhe" } }, { "_id": "67722817633a6043c33212ad", "hidden": false, "name": "Chenting Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67722817633a6043c33212ae", "hidden": false, "name": "Kunchang Li", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:45:06.653Z", "user": { "_id": "61fb81006374891646732f37", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1643872995181-61fb81006374891646732f37.jpeg", "fullname": "Kunchang Li", "isPro": false, "type": "user", "user": "Andy1621" } }, { "_id": "67722817633a6043c33212af", "hidden": false, "name": "Xinhao Li", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:45:14.329Z", "user": { "_id": "672f8a28c53c174f39b08ac1", "avatarUrl": "/avatars/9d865f757667de14381d7c4d7ba7e4c4.svg", "fullname": "XINHAO LI", "isPro": false, "type": "user", "user": "xinhaoli" } }, { "_id": "67722817633a6043c33212b0", "hidden": false, "name": "Xiangyu Zeng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67722817633a6043c33212b1", "hidden": false, "name": "Zilei Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67722817633a6043c33212b2", "hidden": false, "name": "Yali Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67722817633a6043c33212b3", "hidden": false, "name": "Yu Qiao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67722817633a6043c33212b4", "hidden": false, "name": "Limin Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67722817633a6043c33212b5", "hidden": false, "name": "Yi Wang", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-26T18:56:05
Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment
Current multimodal large language models (MLLMs) struggle with fine-grained or precise understanding of visuals, even though they provide comprehensive perception and reasoning across a spectrum of vision applications. Recent studies either develop tool use or unify specific visual tasks into the autoregressive framework, often at the expense of overall multimodal performance. To address this issue and enhance MLLMs with visual tasks in a scalable fashion, we propose Task Preference Optimization (TPO), a novel method that utilizes differentiable task preferences derived from typical fine-grained visual tasks. TPO introduces learnable task tokens that establish connections between multiple task-specific heads and the MLLM. By leveraging rich visual labels during training, TPO significantly enhances the MLLM's multimodal capabilities and task-specific performance. Through multi-task co-training within TPO, we observe synergistic benefits that elevate individual task performance beyond what is achievable through single-task training methodologies. Our instantiation of this approach with VideoChat and LLaVA demonstrates an overall 14.6% improvement in multimodal performance compared to baseline models. Additionally, MLLM-TPO demonstrates robust zero-shot capabilities across various tasks, performing comparably to state-of-the-art supervised models. The code will be released at https://github.com/OpenGVLab/TPO
18
6772281a633a6043c3321365
null
null
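The TPO abstract's key structural idea is learnable task tokens that connect task-specific heads to the MLLM. A minimal sketch under that reading (task names, head shapes, and the bypassed MLLM forward pass are all illustrative assumptions):

```python
# Hypothetical "task token bridge": one learnable token per vision task is
# appended to the hidden sequence, and a per-task head reads it out. In
# the real method the joined sequence would pass through the MLLM; that
# step is skipped here to keep the sketch self-contained.
import torch
import torch.nn as nn

class TaskTokenBridge(nn.Module):
    def __init__(self, dim=64, tasks=("segment", "track", "ground")):
        super().__init__()
        self.task_tokens = nn.ParameterDict(
            {t: nn.Parameter(torch.randn(1, 1, dim)) for t in tasks})
        self.heads = nn.ModuleDict({t: nn.Linear(dim, 4) for t in tasks})

    def forward(self, hidden, task):
        b = hidden.shape[0]
        tok = self.task_tokens[task].expand(b, -1, -1)
        seq = torch.cat([hidden, tok], dim=1)  # MLLM would attend over this
        return self.heads[task](seq[:, -1])    # head reads the task token

bridge = TaskTokenBridge()
hidden = torch.randn(2, 10, 64)  # stand-in for MLLM hidden states
print(bridge(hidden, "segment").shape)  # per-task output, e.g. a box
```

Multi-task co-training then amounts to sampling tasks per batch and backpropagating each head's supervised loss through the shared tokens and backbone.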
2024-12-29T23:24:15.208000
From Elements to Design: A Layered Approach for Automatic Graphic Design Composition
https://cdn-thumbnails.h…s/2412.19712.png
2
{ "_id": "6440dda9cea37249a0f9b473", "avatarUrl": "/avatars/9747e1ca11ed5725b2b9968f028cac93.svg", "followerCount": 1, "fullname": "Jiawei Lin", "isHf": false, "isMod": false, "isPro": false, "name": "KyleLin", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/6440dda9cea37249a0f9b473/uCdgNCCTyvVF56mH0uPnc.png", "https://cdn-uploads.huggingface.co/production/uploads/6440dda9cea37249a0f9b473/ks1FHTdlYHN9WEYLBJYG1.png", "https://cdn-uploads.huggingface.co/production/uploads/6440dda9cea37249a0f9b473/q1JODwSII9FQVA0dee4EA.png" ]
2412.19712
[ { "_id": "67721e8cb9d0358385f6628d", "hidden": false, "name": "Jiawei Lin", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:54:15.360Z", "user": { "_id": "6440dda9cea37249a0f9b473", "avatarUrl": "/avatars/9747e1ca11ed5725b2b9968f028cac93.svg", "fullname": "Jiawei Lin", "isPro": false, "type": "user", "user": "KyleLin" } }, { "_id": "67721e8cb9d0358385f6628e", "hidden": false, "name": "Shizhao Sun", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:54:08.910Z", "user": { "_id": "63eb00a191a1b8ec4fbba2a9", "avatarUrl": "/avatars/0cc7cf9b6d05337603f700e0d592edf5.svg", "fullname": "ShizhaoSun", "isPro": false, "type": "user", "user": "ShizhaoSun" } }, { "_id": "67721e8cb9d0358385f6628f", "hidden": false, "name": "Danqing Huang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67721e8cb9d0358385f66290", "hidden": false, "name": "Ting Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67721e8cb9d0358385f66291", "hidden": false, "name": "Ji Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67721e8cb9d0358385f66292", "hidden": false, "name": "Jiang Bian", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:53:33.295Z", "user": { "_id": "63f253f8f4e30ffd2bd308fb", "avatarUrl": "/avatars/303f4c7ee588f638acf78a7966786e1e.svg", "fullname": "Jiang Bian", "isPro": false, "type": "user", "user": "bianjiang" } } ]
2024-12-27T16:13:08
From Elements to Design: A Layered Approach for Automatic Graphic Design Composition
In this work, we investigate automatic design composition from multimodal graphic elements. Although recent studies have developed various generative models for graphic design, they usually face the following limitations: they focus only on certain subtasks and fall far short of the full design composition task, and they do not consider the hierarchical information of graphic designs during the generation process. To tackle these issues, we introduce the layered design principle into Large Multimodal Models (LMMs) and propose a novel approach, called LaDeCo, to accomplish this challenging task. Specifically, LaDeCo first performs layer planning for a given element set, dividing the input elements into different semantic layers according to their contents. Based on the planning results, it subsequently predicts element attributes that control the design composition in a layer-wise manner, and incorporates the rendered image of previously generated layers into the context. With this design, LaDeCo decomposes the difficult task into smaller, manageable steps, making the generation process smoother and clearer. The experimental results demonstrate the effectiveness of LaDeCo in design composition. Furthermore, we show that LaDeCo enables some interesting applications in graphic design, such as resolution adjustment, element filling, design variation, etc. In addition, it even outperforms the specialized models in some design subtasks without any task-specific training.
15
67721e92b9d0358385f66457
null
null
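LaDeCo's plan-then-compose loop is easy to picture in code. A toy sketch, where `plan_layers` and `predict_attributes` are trivial stand-ins for the LMM calls the paper describes:

```python
# Layered composition skeleton: bucket elements into semantic layers,
# then predict attributes layer by layer while feeding the partial
# composition back as context. Layer names follow common design practice
# and are an assumption, not the paper's exact taxonomy.
LAYER_ORDER = ["background", "underlay", "image", "text", "embellishment"]

def plan_layers(elements):
    # Stand-in for LMM layer planning: bucket elements by declared type.
    return {layer: [e for e in elements if e["type"] == layer]
            for layer in LAYER_ORDER}

def predict_attributes(element, context):
    # Stand-in for layer-wise attribute prediction conditioned on the
    # render of everything placed so far (here just a running count).
    return {**element, "x": 10 * len(context), "y": 10 * len(context)}

def compose(elements):
    plan, canvas = plan_layers(elements), []
    for layer in LAYER_ORDER:
        for e in plan[layer]:
            canvas.append(predict_attributes(e, canvas))
    return canvas

demo = [{"type": "text", "id": 1}, {"type": "background", "id": 0}]
print(compose(demo))  # background is placed before text, as planned
```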
2024-12-29T22:31:54.173000
HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs
https://cdn-thumbnails.h…s/2412.18925.png
6
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
true
null
2412.18925
[ { "_id": "677209448e0ed7713b183674", "hidden": false, "name": "Junying Chen", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:04:26.351Z", "user": { "_id": "64097dd1b6a334f53e2b3e4c", "avatarUrl": "/avatars/18d036aab5e096054a8706bc78027126.svg", "fullname": "Junying Chen", "isPro": false, "type": "user", "user": "jymcc" } }, { "_id": "677209448e0ed7713b183675", "hidden": false, "name": "Zhenyang Cai", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T19:57:15.576Z", "user": { "_id": "64f1a34f2c5c8b767916447e", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64f1a34f2c5c8b767916447e/uak2CsMAnxW8q4dwyAOBN.jpeg", "fullname": "Zhenyang Cai", "isPro": false, "type": "user", "user": "Eric3200" } }, { "_id": "677209448e0ed7713b183676", "hidden": false, "name": "Ke Ji", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677209448e0ed7713b183677", "hidden": false, "name": "Xidong Wang", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T19:57:22.852Z", "user": { "_id": "640ed3e9f2d7c41a1e9a9fde", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1678693486092-640ed3e9f2d7c41a1e9a9fde.jpeg", "fullname": "Xidong Wang", "isPro": false, "type": "user", "user": "Xidong" } }, { "_id": "677209448e0ed7713b183678", "hidden": false, "name": "Wanlong Liu", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:03:23.956Z", "user": { "_id": "64eb333e6878d90b031fa5c5", "avatarUrl": "/avatars/a0d875b49d1c56be88f34854647306da.svg", "fullname": "Wanlong Liu", "isPro": false, "type": "user", "user": "lwl-uestc" } }, { "_id": "677209448e0ed7713b183679", "hidden": false, "name": "Rongsheng Wang", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:30:46.521Z", "user": { "_id": "63ca949b04c979828315389d", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63ca949b04c979828315389d/HS5xWNAYjjHeyAAwWJ11l.jpeg", "fullname": "wangrongsheng", "isPro": false, "type": "user", "user": "wangrongsheng" } }, { "_id": "677209448e0ed7713b18367a", "hidden": false, "name": "Jianye Hou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "677209448e0ed7713b18367b", "hidden": false, "name": "Benyou Wang", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:03:08.303Z", "user": { "_id": "637c6703ca8542a0ba900ccb", "avatarUrl": "/avatars/288ed63a1efa566c3f01e850c6ba5dd5.svg", "fullname": "Wang", "isPro": false, "type": "user", "user": "Benyou" } } ]
2024-12-25T15:12:34
HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs
The breakthrough of OpenAI o1 highlights the potential of enhancing reasoning to improve LLMs. Yet, most research in reasoning has focused on mathematical tasks, leaving domains like medicine underexplored. The medical domain, though distinct from mathematics, also demands robust reasoning to provide reliable answers, given the high standards of healthcare. However, verifying medical reasoning is challenging, unlike verifying mathematical reasoning. To address this, we propose verifiable medical problems with a medical verifier to check the correctness of model outputs. This verifiable nature enables advancements in medical reasoning through a two-stage approach: (1) using the verifier to guide the search for complex reasoning trajectories for fine-tuning LLMs, (2) applying reinforcement learning (RL) with verifier-based rewards to enhance complex reasoning further. Finally, we introduce HuatuoGPT-o1, a medical LLM capable of complex reasoning, which outperforms general and medical-specific baselines using only 40K verifiable problems. Experiments show that complex reasoning improves medical problem-solving and benefits further from RL. We hope our approach inspires advancements in reasoning across medical and other specialized domains.
97
677209448e0ed7713b1836cb
null
null
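The two-stage recipe in the HuatuoGPT-o1 abstract reduces to one reusable primitive: a verifier. A hedged sketch of both stages around a trivial stand-in model (the sampling, verifier, and retry budget are all placeholders):

```python
# Stage 1: search for verified reasoning trajectories to use as SFT data.
# Stage 2: reuse the same verifier check as a scalar RL reward.
import random

def model_sample(question):
    # Stand-in for sampling a reasoning trajectory from the LLM.
    answer = random.choice(["A", "B"])
    return {"steps": f"reasoning about {question}...", "answer": answer}

def verifier(answer, gold):
    return answer == gold  # a medical verifier would check correctness

def collect_sft_data(questions, gold, tries=8):
    data = []
    for q in questions:
        for _ in range(tries):  # search until a verified trajectory appears
            traj = model_sample(q)
            if verifier(traj["answer"], gold[q]):
                data.append((q, traj))
                break
    return data

def rl_reward(question, traj, gold):
    return 1.0 if verifier(traj["answer"], gold[question]) else 0.0

gold = {"q1": "A"}
print(collect_sft_data(["q1"], gold))
```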
2024-12-29T21:38:37.393000
Orient Anything: Learning Robust Object Orientation Estimation from Rendering 3D Models
https://cdn-thumbnails.h…s/2412.18605.png
4
{ "_id": "663b4d6aa55b0634634cd302", "avatarUrl": "/avatars/1191982568ad67895225f22844b6da99.svg", "followerCount": null, "fullname": "ZehanWang", "isHf": false, "isMod": false, "isPro": false, "name": "ZehanWang", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/663b4d6aa55b0634634cd302/otb9alc3ESHg68f_TBcFp.png" ]
2412.18605
[ { "_id": "676bb2c29063304d2d9ec676", "hidden": false, "name": "Zehan Wang", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:48:11.548Z", "user": { "_id": "663b4d6aa55b0634634cd302", "avatarUrl": "/avatars/1191982568ad67895225f22844b6da99.svg", "fullname": "ZehanWang", "isPro": false, "type": "user", "user": "ZehanWang" } }, { "_id": "676bb2c29063304d2d9ec677", "hidden": false, "name": "Ziang Zhang", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:48:02.295Z", "user": { "_id": "66850e08a1bb72ba2d1d25a7", "avatarUrl": "/avatars/67b29e82c06812f987d2a65907de50ad.svg", "fullname": "Ziang Zhang", "isPro": false, "type": "user", "user": "ziangzhang" } }, { "_id": "676bb2c29063304d2d9ec678", "hidden": false, "name": "Tianyu Pang", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T20:52:17.280Z", "user": { "_id": "63d91b6d255ef6add20e1b38", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1675921369867-63d91b6d255ef6add20e1b38.jpeg", "fullname": "Tianyu Pang", "isPro": false, "type": "user", "user": "P2333" } }, { "_id": "676bb2c29063304d2d9ec679", "hidden": false, "name": "Chao Du", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676bb2c29063304d2d9ec67a", "hidden": false, "name": "Hengshuang Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676bb2c29063304d2d9ec67b", "hidden": false, "name": "Zhou Zhao", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-24T18:58:43
Orient Anything: Learning Robust Object Orientation Estimation from Rendering 3D Models
Orientation is a key attribute of objects, crucial for understanding their spatial pose and arrangement in images. However, practical solutions for accurate orientation estimation from a single image remain underexplored. In this work, we introduce Orient Anything, the first expert and foundational model designed to estimate object orientation in single- and free-view images. Due to the scarcity of labeled data, we propose extracting knowledge from the 3D world. By developing a pipeline to annotate the front face of 3D objects and render images from random views, we collect 2M images with precise orientation annotations. To fully leverage the dataset, we design a robust training objective that models the 3D orientation as probability distributions of three angles and predicts the object orientation by fitting these distributions. In addition, we employ several strategies to improve synthetic-to-real transfer. Our model achieves state-of-the-art orientation estimation accuracy in both rendered and real images and exhibits impressive zero-shot ability in various scenarios. More importantly, our model enhances many applications, such as comprehension and generation of complex spatial concepts and 3D object pose adjustment.
20
676bb2c49063304d2d9ec7d0
null
null
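The abstract's training objective, modeling each angle as a probability distribution rather than a scalar, can be sketched directly. One common realization (an assumption here, since the paper's exact smoothing is not given in the abstract) is cross-entropy against a circularly smoothed target over angle bins:

```python
# One angle head as a distribution over discrete bins, fit to a circular
# Gaussian around the ground-truth angle. Bin count and sigma are invented.
import torch
import torch.nn.functional as F

N_BINS = 360  # one bin per degree of azimuth, for illustration

def soft_target(angle_deg, sigma=5.0):
    bins = torch.arange(N_BINS, dtype=torch.float32)
    d = torch.minimum((bins - angle_deg).abs(), 360 - (bins - angle_deg).abs())
    t = torch.exp(-0.5 * (d / sigma) ** 2)
    return t / t.sum()  # circular Gaussian over angle bins

def loss(logits, angle_deg):
    # Cross-entropy against the smoothed distribution.
    return -(soft_target(angle_deg) * F.log_softmax(logits, dim=-1)).sum()

logits = torch.randn(N_BINS, requires_grad=True)
print(loss(logits, angle_deg=90.0))
```

The same head is repeated for each of the three angles; smoothing makes near-miss bins cheap and distant bins expensive, which is the robustness the abstract claims.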
2024-12-27T07:29:26.502000
Molar: Multimodal LLMs with Collaborative Filtering Alignment for Enhanced Sequential Recommendation
https://cdn-thumbnails.h…s/2412.18176.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2412.18176
[ { "_id": "676e9d9d8126645611b73ecb", "hidden": false, "name": "Yucong Luo", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:11:39.638Z", "user": { "_id": "644679e3eda3580e81a0832b", "avatarUrl": "/avatars/bc7c67955644efe55244f9b8a68b8408.svg", "fullname": "Luo Yucong", "isPro": false, "type": "user", "user": "GodFire" } }, { "_id": "676e9d9d8126645611b73ecc", "hidden": false, "name": "Qitao Qin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676e9d9d8126645611b73ecd", "hidden": false, "name": "Hao Zhang", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:11:55.544Z", "user": { "_id": "6320ddd4a023aad6a768db54", "avatarUrl": "/avatars/96b449a5cf6b2c7a818d7e7dc8c2e821.svg", "fullname": "Hao Zhang", "isPro": false, "type": "user", "user": "haozhang" } }, { "_id": "676e9d9d8126645611b73ece", "hidden": false, "name": "Mingyue Cheng", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:12:00.921Z", "user": { "_id": "647f5222e9c81260ff87640d", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/647f5222e9c81260ff87640d/u7kmIrFEzaIG2Id6R2bAz.jpeg", "fullname": "Mingyue Cheng", "isPro": false, "type": "user", "user": "twigcheng" } }, { "_id": "676e9d9d8126645611b73ecf", "hidden": false, "name": "Ruiran Yan", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:12:07.779Z", "user": { "_id": "661ac5b53d7248a6f20080c1", "avatarUrl": "/avatars/26aef5944759c2e4366a71eb8c7fc50a.svg", "fullname": "Ruiran Yan", "isPro": false, "type": "user", "user": "Ruiran" } }, { "_id": "676e9d9d8126645611b73ed0", "hidden": false, "name": "Kefan Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676e9d9d8126645611b73ed1", "hidden": false, "name": "Jie Ouyang", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:12:23.124Z", "user": { "_id": "653780f5434f0b412aba51c2", "avatarUrl": "/avatars/1c416e5b178fcd04898a393cc2ef8d6c.svg", "fullname": "Jie Ouyang", "isPro": false, "type": "user", "user": "russwest404" } } ]
2024-12-24T05:23:13
Molar: Multimodal LLMs with Collaborative Filtering Alignment for Enhanced Sequential Recommendation
Sequential recommendation (SR) systems have evolved significantly over the past decade, transitioning from traditional collaborative filtering to deep learning approaches and, more recently, to large language models (LLMs). While the adoption of LLMs has driven substantial advancements, these models inherently lack collaborative filtering information, relying primarily on textual content data, neglecting other modalities, and thus failing to achieve optimal recommendation performance. To address this limitation, we propose Molar, a multimodal large language model (MLLM) sequential recommendation framework that integrates multiple content modalities with ID information to capture collaborative signals effectively. Molar employs an MLLM to generate unified item representations from both textual and non-textual data, facilitating comprehensive multimodal modeling and enriching item embeddings. Additionally, it incorporates collaborative filtering signals through a post-alignment mechanism, which aligns user representations from content-based and ID-based models, ensuring precise personalization and robust performance. By seamlessly combining multimodal content with collaborative filtering insights, Molar captures both user interests and contextual semantics, leading to superior recommendation accuracy. Extensive experiments validate that Molar significantly outperforms traditional and LLM-based baselines, highlighting its strength in utilizing multimodal data and collaborative signals for sequential recommendation tasks. The source code is available at https://anonymous.4open.science/r/Molar-8B06/.
15
676e9d9d8126645611b73f18
null
null
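Molar's "post-alignment mechanism" pairs each user's content-based representation with the same user's ID-based one. A plausible sketch of such an alignment as an in-batch contrastive (InfoNCE) loss, which is an assumption about the exact form, not the paper's stated loss:

```python
# In-batch contrastive alignment: row i of `content` and row i of `ids`
# describe the same user, so matching pairs are positives and all other
# rows in the batch are negatives.
import torch
import torch.nn.functional as F

def post_alignment_loss(content_u, id_u, tau=0.07):
    content_u = F.normalize(content_u, dim=-1)
    id_u = F.normalize(id_u, dim=-1)
    logits = content_u @ id_u.t() / tau         # B x B similarity matrix
    targets = torch.arange(content_u.shape[0])  # diagonal = positives
    return F.cross_entropy(logits, targets)

content = torch.randn(8, 64)  # from the MLLM over item content
ids = torch.randn(8, 64)      # from a conventional ID-based SR model
print(post_alignment_loss(content, ids))
```

The appeal of aligning *after* both models are trained is that the MLLM's content space absorbs collaborative signal without retraining the ID model.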
2024-12-27T07:26:55.759000
MMFactory: A Universal Solution Search Engine for Vision-Language Tasks
https://cdn-thumbnails.h…s/2412.18072.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2412.18072
[ { "_id": "676e9cfc11998b72ab00be60", "hidden": false, "name": "Wan-Cyuan Fan", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:13:13.288Z", "user": { "_id": "63357efea01bd734f7243c61", "avatarUrl": "/avatars/459d3e35bb6ae60f69a0c7bd77a39d58.svg", "fullname": "WanCyuan Fan", "isPro": false, "type": "user", "user": "ChrisFan" } }, { "_id": "676e9cfc11998b72ab00be61", "hidden": false, "name": "Tanzila Rahman", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:12:47.649Z", "user": { "_id": "64062c3ca577649430c1006b", "avatarUrl": "/avatars/870ac9e8dc15051e97a7a6efd46e8a36.svg", "fullname": "Tanzila Rahman", "isPro": false, "type": "user", "user": "trahman" } }, { "_id": "676e9cfc11998b72ab00be62", "hidden": false, "name": "Leonid Sigal", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-24T00:59:16
MMFactory: A Universal Solution Search Engine for Vision-Language Tasks
With advances in foundational and vision-language models, and effective fine-tuning techniques, a large number of both general and special-purpose models have been developed for a variety of visual tasks. Despite the flexibility and accessibility of these models, no single model is able to handle all tasks and/or applications that may be envisioned by potential users. Recent approaches, such as visual programming and multimodal LLMs with integrated tools, aim to tackle complex visual tasks by way of program synthesis. However, such approaches overlook user constraints (e.g., performance / computational needs), produce test-time sample-specific solutions that are difficult to deploy, and sometimes require low-level instructions that may be beyond the abilities of a naive user. To address these limitations, we introduce MMFactory, a universal framework that includes model and metrics routing components, acting like a solution search engine across various available models. Based on a task description, a few sample input-output pairs, and (optionally) resource and/or performance constraints, MMFactory can suggest a diverse pool of programmatic solutions by instantiating and combining visio-lingual tools from its model repository. In addition to synthesizing these solutions, MMFactory also proposes metrics and benchmarks performance / resource characteristics, allowing users to pick a solution that meets their unique design constraints. From the technical perspective, we also introduce a committee-based solution proposer that leverages multi-agent LLM conversation to generate executable, diverse, universal, and robust solutions for the user. Experimental results show that MMFactory outperforms existing methods by delivering state-of-the-art solutions tailored to user problem specifications. Project page is available at https://davidhalladay.github.io/mmfactory_demo.
18
676e9cfd11998b72ab00bfe8
null
null
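The "solution search engine" framing in the MMFactory abstract is essentially constraint-aware routing over a model repository. A toy sketch with an invented repository and metrics (none of these entries or numbers come from the paper):

```python
# Constraint-aware routing: drop candidates that violate the user's
# latency/memory constraints, then rank the survivors by a benchmarked
# quality metric. All repository entries here are fabricated examples.
REPO = [
    {"name": "big-vlm", "latency_ms": 900, "vram_gb": 40, "acc": 0.82},
    {"name": "small-vlm", "latency_ms": 120, "vram_gb": 8, "acc": 0.74},
    {"name": "detector+llm", "latency_ms": 300, "vram_gb": 12, "acc": 0.78},
]

def propose_solutions(task, max_latency_ms=None, max_vram_gb=None):
    pool = [m for m in REPO
            if (max_latency_ms is None or m["latency_ms"] <= max_latency_ms)
            and (max_vram_gb is None or m["vram_gb"] <= max_vram_gb)]
    return sorted(pool, key=lambda m: m["acc"], reverse=True)

print(propose_solutions("counting objects", max_latency_ms=400))
```

The committee-based proposer the abstract mentions would sit upstream of this, generating candidate programs; the routing step then scores and filters them.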
2024-12-27T02:39:50.862000
A Silver Bullet or a Compromise for Full Attention? A Comprehensive Study of Gist Token-based Context Compression
https://cdn-thumbnails.h…s/2412.17483.png
2
{ "_id": "654c99d6e82a71cb487c2ecd", "avatarUrl": "/avatars/480c883b587ed2e41e1e9661c844a938.svg", "followerCount": 1, "fullname": "ChenlongDeng", "isHf": false, "isMod": false, "isPro": false, "name": "ChenlongDeng", "type": "user" }
true
null
2412.17483
[ { "_id": "676a2a3ebce62ec5a02c4a66", "hidden": false, "name": "Chenlong Deng", "status": "admin_assigned", "statusLastChangedAt": "2024-12-26T18:49:33.308Z", "user": { "_id": "654c99d6e82a71cb487c2ecd", "avatarUrl": "/avatars/480c883b587ed2e41e1e9661c844a938.svg", "fullname": "ChenlongDeng", "isPro": false, "type": "user", "user": "ChenlongDeng" } }, { "_id": "676a2a3ebce62ec5a02c4a67", "hidden": false, "name": "Zhisong Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a2a3ebce62ec5a02c4a68", "hidden": false, "name": "Kelong Mao", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:35:30.975Z", "user": { "_id": "649a65605c74a2125c22bbc1", "avatarUrl": "/avatars/e7435d3aeeb59acc6f6f43b48d6982a0.svg", "fullname": "Mao", "isPro": false, "type": "user", "user": "kyriemao" } }, { "_id": "676a2a3ebce62ec5a02c4a69", "hidden": false, "name": "Shuaiyi Li", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:35:28.792Z", "user": { "_id": "64d708e45e5f05485ce5fbd8", "avatarUrl": "/avatars/3c5389a5ffecbc05583004c627b21b6c.svg", "fullname": "Li Shuaiyi", "isPro": false, "type": "user", "user": "Syon-Li" } }, { "_id": "676a2a3ebce62ec5a02c4a6a", "hidden": false, "name": "Xinting Huang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a2a3ebce62ec5a02c4a6b", "hidden": false, "name": "Dong Yu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a2a3ebce62ec5a02c4a6c", "hidden": false, "name": "Zhicheng Dou", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-23T11:24:04
A Silver Bullet or a Compromise for Full Attention? A Comprehensive Study of Gist Token-based Context Compression
In this work, we provide a thorough investigation of gist-based context compression methods to improve long-context processing in large language models. We focus on two key questions: (1) How well can these methods replace full attention models? and (2) What potential failure patterns arise due to compression? Through extensive experiments, we show that while gist-based compression can achieve near-lossless performance on tasks like retrieval-augmented generation and long-document QA, it faces challenges in tasks like synthetic recall. Furthermore, we identify three key failure patterns: lost by the boundary, lost if surprise, and lost along the way. To mitigate these issues, we propose two effective strategies: fine-grained autoencoding, which enhances the reconstruction of original token information, and segment-wise token importance estimation, which adjusts optimization based on token dependencies. Our work provides valuable insights into the understanding of gist token-based context compression and offers practical strategies for improving compression capabilities.
31
676a2a3fbce62ec5a02c4ace
null
null
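The gist-compression study above also proposes two mitigations, including fine-grained autoencoding. A hedged sketch of the basic setup, compressing a segment into a few gist-token states and adding a reconstruction loss over the original tokens (architecture sizes and the exact autoencoding head are assumptions):

```python
# Gist compression sketch: append k learnable gist tokens to a segment,
# keep only their hidden states as compressed context, and train
# positional queries to reconstruct every original token from the gists.
import torch
import torch.nn as nn

class GistCompressor(nn.Module):
    def __init__(self, vocab=1000, dim=64, k=4, max_len=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.gist = nn.Parameter(torch.randn(1, k, dim))
        layer = nn.TransformerEncoderLayer(dim, 4, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, num_layers=2)
        # Fine-grained autoencoding head: per-position queries read the
        # gists and try to reproduce the original token at each position.
        self.pos_q = nn.Parameter(torch.randn(1, max_len, dim))
        self.xattn = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.recon = nn.Linear(dim, vocab)

    def forward(self, token_ids):
        b, n = token_ids.shape
        x = torch.cat([self.emb(token_ids), self.gist.expand(b, -1, -1)], 1)
        gists = self.enc(x)[:, n:]          # compressed context states
        q = self.pos_q[:, :n].expand(b, -1, -1)
        rec, _ = self.xattn(q, gists, gists)
        ae_loss = nn.functional.cross_entropy(
            self.recon(rec).reshape(b * n, -1), token_ids.reshape(-1))
        return gists, ae_loss

gists, ae = GistCompressor()(torch.randint(0, 1000, (2, 32)))
print(gists.shape, float(ae))  # 32 tokens squeezed into 4 gist states
```

The autoencoding term directly targets the paper's "lost along the way" failure: if the gists can reconstruct every token, fine-grained information survives compression.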
2024-12-27T01:08:10.879000
YuLan-Mini: An Open Data-efficient Language Model
https://cdn-thumbnails.h…s/2412.17743.png
2
{ "_id": "6317419f3eb2544b62389a79", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6317419f3eb2544b62389a79/8oU90du902ATtBPYCcLFK.jpeg", "followerCount": 3, "fullname": "Ivan Hu", "isHf": false, "isMod": false, "isPro": false, "name": "IvanHU", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/6317419f3eb2544b62389a79/ruQQgcy_OzG_dx_7Z-tk-.png", "https://cdn-uploads.huggingface.co/production/uploads/6317419f3eb2544b62389a79/DEKzbv-CqVK7cw-pc7K_t.png", "https://cdn-uploads.huggingface.co/production/uploads/6317419f3eb2544b62389a79/e7mdk9SoId2AUjUpoH8vD.png", "https://cdn-uploads.huggingface.co/production/uploads/6317419f3eb2544b62389a79/UulrjfVDO1p_BSy8O-A5G.png" ]
2412.17743
[ { "_id": "676d2c63310ca4eb397415fe", "hidden": false, "name": "Yiwen Hu", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:31:52.455Z", "user": { "_id": "6317419f3eb2544b62389a79", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6317419f3eb2544b62389a79/8oU90du902ATtBPYCcLFK.jpeg", "fullname": "Ivan Hu", "isPro": false, "type": "user", "user": "IvanHU" } }, { "_id": "676d2c63310ca4eb397415ff", "hidden": false, "name": "Huatong Song", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:02:13.538Z", "user": { "_id": "66163dc8c7f45b3f893ff40b", "avatarUrl": "/avatars/801043dac0caae90bbca8c9d3e2e203b.svg", "fullname": "Song Huatong", "isPro": false, "type": "user", "user": "XXsongLALA" } }, { "_id": "676d2c63310ca4eb39741600", "hidden": false, "name": "Jia Deng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d2c63310ca4eb39741601", "hidden": false, "name": "Jiapeng Wang", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:01:54.842Z", "user": { "_id": "61482b08c3ca9182d68debe9", "avatarUrl": "/avatars/4319add48a092adfee3bc45930ac6d2a.svg", "fullname": "Jiapeng Wang", "isPro": false, "type": "user", "user": "jpWang" } }, { "_id": "676d2c63310ca4eb39741602", "hidden": false, "name": "Jie Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d2c63310ca4eb39741603", "hidden": false, "name": "Kun Zhou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d2c63310ca4eb39741604", "hidden": false, "name": "Yutao Zhu", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:01:41.536Z", "user": { "_id": "625e62452a7279d3c77b5c38", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/625e62452a7279d3c77b5c38/zJINew6U4_Gup4WTobb-0.jpeg", "fullname": "Yutao Zhu", "isPro": false, "type": "user", "user": "yutaozhu94" } }, { "_id": "676d2c63310ca4eb39741605", "hidden": false, "name": "Jinhao Jiang", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:01:35.432Z", "user": { "_id": "61b8405b516a20acdf3b85ff", "avatarUrl": "/avatars/3d2eae7c163a80b73260087b05a4230b.svg", "fullname": "Jinhao Jiang", "isPro": false, "type": "user", "user": "Boru" } }, { "_id": "676d2c63310ca4eb39741606", "hidden": false, "name": "Zican Dong", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:01:29.767Z", "user": { "_id": "65180fa499cf1a4e3913a31b", "avatarUrl": "/avatars/a321b308336bb881b11b396e210226cf.svg", "fullname": "zican dong", "isPro": false, "type": "user", "user": "cjgs20017" } }, { "_id": "676d2c63310ca4eb39741607", "hidden": false, "name": "Wayne Xin Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d2c63310ca4eb39741608", "hidden": false, "name": "Ji-Rong Wen", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:01:14.755Z", "user": { "_id": "64b8c89052b7353d8c6a1013", "avatarUrl": "/avatars/cd59fffe81f6b07b4519540b8ff3d95f.svg", "fullname": "Ji-Rong Wen", "isPro": false, "type": "user", "user": "jrwen" } } ]
2024-12-23T17:47:53
YuLan-Mini: An Open Data-efficient Language Model
Effective pre-training of large language models (LLMs) has been challenging due to the immense resource demands and the complexity of the technical processes involved. This paper presents a detailed technical report on YuLan-Mini, a highly capable base model with 2.42B parameters that achieves top-tier performance among models of similar parameter scale. Our pre-training approach focuses on enhancing training efficacy through three key technical contributions: an elaborate data pipeline that combines data cleaning with data scheduling strategies, a robust optimization method that mitigates training instability, and an effective annealing approach that incorporates targeted data selection and long-context training. Remarkably, YuLan-Mini, trained on 1.08T tokens, achieves performance comparable to industry-leading models that require significantly more data. To facilitate reproduction, we release the full details of the data composition for each training phase. Project details can be accessed at the following link: https://github.com/RUC-GSAI/YuLan-Mini.
65
676d2c65310ca4eb39741682
null
null
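The annealing approach named in the YuLan-Mini abstract couples a learning-rate decay with a shift in the data mixture. A generic sketch of that coupling, with all schedules, fractions, and mixture weights invented for illustration (the report's actual values differ):

```python
# Generic annealing phase: hold the peak LR through the stable phase,
# decay it in the final fraction of steps, and simultaneously upweight
# curated/long-context data. Numbers are placeholders, not YuLan-Mini's.
def lr_at(step, total, peak=3e-4, anneal_frac=0.1, floor=3e-5):
    anneal_start = int(total * (1 - anneal_frac))
    if step < anneal_start:
        return peak                   # stable phase
    t = (step - anneal_start) / (total - anneal_start)
    return peak + t * (floor - peak)  # linear decay to the floor

def data_mixture(step, total, anneal_frac=0.1):
    if step < int(total * (1 - anneal_frac)):
        return {"web": 0.7, "code": 0.2, "curated": 0.1}
    return {"web": 0.3, "code": 0.2, "curated": 0.5}  # targeted selection

for s in (0, 90_000, 99_000):
    print(s, lr_at(s, 100_000), data_mixture(s, 100_000))
```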
2024-12-26T21:16:18.231000
VidTwin: Video VAE with Decoupled Structure and Dynamics
https://cdn-thumbnails.h…s/2412.17726.png
3
{ "_id": "622842e296588dd1a2594746", "avatarUrl": "/avatars/b96d0b49e4cff25d83fefc67b7cd1076.svg", "followerCount": null, "fullname": "wangyuchi", "isHf": false, "isMod": false, "isPro": false, "name": "YuchiWang", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/622842e296588dd1a2594746/BpaGMvo-7LnqTnfodRYAz.gif" ]
2412.17726
[ { "_id": "676d52f991c1773322dcc757", "hidden": false, "name": "Yuchi Wang", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:31:48.752Z", "user": { "_id": "622842e296588dd1a2594746", "avatarUrl": "/avatars/b96d0b49e4cff25d83fefc67b7cd1076.svg", "fullname": "wangyuchi", "isPro": false, "type": "user", "user": "YuchiWang" } }, { "_id": "676d52f991c1773322dcc758", "hidden": false, "name": "Junliang Guo", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:17:48.498Z", "user": { "_id": "66868659ccb9539da85c4e14", "avatarUrl": "/avatars/515a49363872c23d57a6f75063606348.svg", "fullname": "Junliang Guo", "isPro": false, "type": "user", "user": "leo-guo" } }, { "_id": "676d52f991c1773322dcc759", "hidden": false, "name": "Xinyi Xie", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:17:54.976Z", "user": { "_id": "643cf31cb561f17fa4678293", "avatarUrl": "/avatars/dd02c667d0d00704fb41c7b7065dce0c.svg", "fullname": "Xinyi XIE", "isPro": false, "type": "user", "user": "Hyude" } }, { "_id": "676d52f991c1773322dcc75a", "hidden": false, "name": "Tianyu He", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d52f991c1773322dcc75b", "hidden": false, "name": "Xu Sun", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d52f991c1773322dcc75c", "hidden": false, "name": "Jiang Bian", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:18:06.605Z", "user": { "_id": "63f253f8f4e30ffd2bd308fb", "avatarUrl": "/avatars/303f4c7ee588f638acf78a7966786e1e.svg", "fullname": "Jiang Bian", "isPro": false, "type": "user", "user": "bianjiang" } } ]
2024-12-23T17:16:58
VidTwin: Video VAE with Decoupled Structure and Dynamics
Recent advancements in video autoencoders (Video AEs) have significantly improved the quality and efficiency of video generation. In this paper, we propose a novel and compact video autoencoder, VidTwin, that decouples video into two distinct latent spaces: Structure latent vectors, which capture overall content and global movement, and Dynamics latent vectors, which represent fine-grained details and rapid movements. Specifically, our approach leverages an Encoder-Decoder backbone, augmented with two submodules that extract these two latent spaces respectively. The first submodule employs a Q-Former to extract low-frequency motion trends, followed by downsampling blocks to remove redundant content details. The second averages the latent vectors along the spatial dimension to capture rapid motion. Extensive experiments show that VidTwin achieves a high compression rate of 0.20% with high reconstruction quality (PSNR of 28.14 on the MCL-JCV dataset), and performs efficiently and effectively in downstream generative tasks. Moreover, our model demonstrates explainability and scalability, paving the way for future research in video latent representation and generation. Our code has been released at https://github.com/microsoft/VidTok/tree/main/vidtwin.
8
676d52fb91c1773322dcc8b6
null
null
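The two branches in the VidTwin abstract map cleanly onto two tensor operations. A toy sketch with made-up dimensions (the Q-Former is reduced here to a single cross-attention layer, which is a simplifying assumption):

```python
# Two latent branches: learned queries distill a low-frequency Structure
# latent from all space-time tokens, while a spatial average keeps one
# compact Dynamics vector per frame.
import torch
import torch.nn as nn

class TwinLatents(nn.Module):
    def __init__(self, dim=64, n_queries=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(1, n_queries, dim))
        self.xattn = nn.MultiheadAttention(dim, 4, batch_first=True)

    def forward(self, feats):               # feats: (B, T, HW, C)
        b, t, hw, c = feats.shape
        tokens = feats.reshape(b, t * hw, c)
        q = self.queries.expand(b, -1, -1)
        # Structure branch: global content and slow movement.
        structure, _ = self.xattn(q, tokens, tokens)
        # Dynamics branch: spatial average, one vector per frame.
        dynamics = feats.mean(dim=2)
        return structure, dynamics

s, d = TwinLatents()(torch.randn(2, 4, 16, 64))
print(s.shape, d.shape)  # (2, 8, 64) structure, (2, 4, 64) dynamics
```

The asymmetry is the point: the query branch is small and time-pooled (structure), while the averaged branch keeps the temporal axis (dynamics).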
2024-12-26T15:44:06.575000
How "Real" is Your Real-Time Simultaneous Speech-to-Text Translation System?
https://cdn-thumbnails.h…s/2412.18495.png
2
{ "_id": "66309b3833ccd9e68c5d5171", "avatarUrl": "/avatars/bd2a3fb38820c828bdb1acb93b673cb4.svg", "followerCount": 8, "fullname": "Sara Papi", "isHf": false, "isMod": false, "isPro": false, "name": "spapi", "type": "user" }
true
null
2412.18495
[ { "_id": "676dbfe27fff9075b53888d4", "hidden": false, "name": "Sara Papi", "status": "extracted_confirmed", "statusLastChangedAt": "2024-12-26T20:48:59.504Z", "user": { "_id": "66309b3833ccd9e68c5d5171", "avatarUrl": "/avatars/bd2a3fb38820c828bdb1acb93b673cb4.svg", "fullname": "Sara Papi", "isPro": false, "type": "user", "user": "spapi" } }, { "_id": "676dbfe27fff9075b53888d5", "hidden": false, "name": "Peter Polak", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676dbfe27fff9075b53888d6", "hidden": false, "name": "Ondřej Bojar", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676dbfe27fff9075b53888d7", "hidden": false, "name": "Dominik Macháček", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-24T15:26:31
How "Real" is Your Real-Time Simultaneous Speech-to-Text Translation System?
Simultaneous speech-to-text translation (SimulST) translates source-language speech into target-language text concurrently with the speaker's speech, ensuring low latency for better user comprehension. Despite its intended application to unbounded speech, most research has focused on human pre-segmented speech, simplifying the task and overlooking significant challenges. This narrow focus, coupled with widespread terminological inconsistencies, is limiting the applicability of research outcomes to real-world applications, ultimately hindering progress in the field. Our extensive literature review of 110 papers not only reveals these critical issues in current research but also serves as the foundation for our key contributions. We 1) define the steps and core components of a SimulST system, proposing a standardized terminology and taxonomy; 2) conduct a thorough analysis of community trends, and 3) offer concrete recommendations and future directions to bridge the gaps in existing literature, from evaluation frameworks to system architectures, for advancing the field towards more realistic and effective SimulST solutions.
8
676dbfe37fff9075b5388918
null
null
2024-12-26T12:13:44.072000
WavePulse: Real-time Content Analytics of Radio Livestreams
https://cdn-thumbnails.h…s/2412.17998.png
4
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
true
null
2412.17998
[ { "_id": "676d8eb98771d55751f14218", "hidden": false, "name": "Govind Mittal", "status": "extracted_confirmed", "statusLastChangedAt": "2024-12-26T18:33:31.720Z", "user": { "_id": "639a2ad41cd404e3fa319591", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/639a2ad41cd404e3fa319591/kUjrCfhukgS6HYgEtzbbh.jpeg", "fullname": "Govind Mittal", "isPro": false, "type": "user", "user": "mittal" } }, { "_id": "676d8eb98771d55751f14219", "hidden": false, "name": "Sarthak Gupta", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d8eb98771d55751f1421a", "hidden": false, "name": "Shruti Wagle", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d8eb98771d55751f1421b", "hidden": false, "name": "Chirag Chopra", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:17:28.974Z", "user": { "_id": "65b830ea078c543033b25ecc", "avatarUrl": "/avatars/643934e94c1fc4a357dbcc2452a76ef0.svg", "fullname": "Chirag Chopra", "isPro": false, "type": "user", "user": "Chirag77" } }, { "_id": "676d8eb98771d55751f1421c", "hidden": false, "name": "Anthony J DeMattee", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d8eb98771d55751f1421d", "hidden": false, "name": "Nasir Memon", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d8eb98771d55751f1421e", "hidden": false, "name": "Mustaque Ahamad", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d8eb98771d55751f1421f", "hidden": false, "name": "Chinmay Hegde", "status": "extracted_confirmed", "statusLastChangedAt": "2025-03-03T11:50:24.204Z", "user": { "_id": "631620f6894404e25068856f", "avatarUrl": "/avatars/52c30caa0ee11347f82420a14ec19996.svg", "fullname": "Chinmay Hegde", "isPro": false, "type": "user", "user": "chegde" } } ]
2024-12-23T21:42:31
WavePulse: Real-time Content Analytics of Radio Livestreams
Radio remains a pervasive medium for mass information dissemination, with AM/FM stations reaching more Americans than either smartphone-based social networking or live television. Increasingly, radio broadcasts are also streamed online and accessed over the Internet. We present WavePulse, a framework that records, documents, and analyzes radio content in real-time. While our framework is generally applicable, we showcase the efficacy of WavePulse in a collaborative project with a team of political scientists focusing on the 2024 Presidential Elections. We use WavePulse to monitor livestreams of 396 news radio stations over a period of three months, processing close to 500,000 hours of audio streams. These streams were converted into time-stamped, diarized transcripts and analyzed to answer key political science questions at both the national and state levels. Our analysis revealed how local issues interacted with national trends, providing insights into information flow. Our results demonstrate WavePulse's efficacy in capturing and analyzing content from radio livestreams sourced from the Web. Code and dataset can be accessed at https://wave-pulse.io.
10
676d8ebc8771d55751f14311
null
null
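The WavePulse abstract describes a record-transcribe-diarize-analyze loop. A skeletal sketch of that pipeline shape, where every function body is a placeholder and none of it is the project's actual code:

```python
# Pipeline skeleton: capture a livestream chunk, turn it into a
# time-stamped diarized transcript, then run content analytics on it.
import datetime

def record_chunk(stream_url, seconds=60):
    return b"raw-audio"  # placeholder for the capture step

def transcribe_and_diarize(audio):
    # A real system would call an ASR + diarization model here.
    return [{"t": datetime.datetime.utcnow().isoformat(),
             "speaker": "spk0", "text": "..."}]

def analyze(segments, keywords=("election", "ballot")):
    return [s for s in segments
            if any(k in s["text"].lower() for k in keywords)]

def process(stream_url):
    audio = record_chunk(stream_url)
    segments = transcribe_and_diarize(audio)
    return analyze(segments)

print(process("https://example.com/stream"))
```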
2024-12-26T11:33:39.769000
Video-Panda: Parameter-efficient Alignment for Encoder-free Video-Language Models
https://cdn-thumbnails.h…s/2412.18609.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2412.18609
[ { "_id": "676d854b8771d55751ee0f4f", "hidden": false, "name": "Jinhui Yi", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:31:40.301Z", "user": { "_id": "667dfe6f684550b7893497c5", "avatarUrl": "/avatars/3f48b041a826fd8a8cc74c2d4ced1705.svg", "fullname": "Jinhui Yi", "isPro": false, "type": "user", "user": "jh-yi" } }, { "_id": "676d854b8771d55751ee0f50", "hidden": false, "name": "Syed Talal Wasim", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d854b8771d55751ee0f51", "hidden": false, "name": "Yanan Luo", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d854b8771d55751ee0f52", "hidden": false, "name": "Muzammal Naseer", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:17:02.313Z", "user": { "_id": "6495aae2e9b6d89a5c65cf15", "avatarUrl": "/avatars/c176d68af77698745c23da8b92424738.svg", "fullname": "Muzammal Naseer", "isPro": false, "type": "user", "user": "muzammal" } }, { "_id": "676d854b8771d55751ee0f53", "hidden": false, "name": "Juergen Gall", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-24T18:59:56
Video-Panda: Parameter-efficient Alignment for Encoder-free Video-Language Models
We present an efficient encoder-free approach for video-language understanding that achieves competitive performance while significantly reducing computational overhead. Current video-language models typically rely on heavyweight image encoders (300M-1.1B parameters) or video encoders (1B-1.4B parameters), creating a substantial computational burden when processing multi-frame videos. Our method introduces a novel Spatio-Temporal Alignment Block (STAB) that directly processes video inputs without requiring pre-trained encoders, while using only 45M parameters for visual processing, at least a 6.5× reduction compared to traditional approaches. The STAB architecture combines Local Spatio-Temporal Encoding for fine-grained feature extraction, efficient spatial downsampling through learned attention, and separate mechanisms for modeling frame-level and video-level relationships. Our model achieves comparable or superior performance to encoder-based approaches for open-ended video question answering on standard benchmarks. The fine-grained video question-answering evaluation demonstrates our model's effectiveness, outperforming the encoder-based approaches Video-ChatGPT and Video-LLaVA in key aspects like correctness and temporal understanding. Extensive ablation studies validate our architectural choices and demonstrate the effectiveness of our spatio-temporal modeling approach while achieving 3-4× faster processing speeds than previous methods. Code is available at https://github.com/jh-yi/Video-Panda.
17
676d854e8771d55751ee108f
null
null
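One STAB ingredient named in the abstract, spatial downsampling through learned attention, has a compact generic form. A hedged sketch with illustrative sizes (the paper's actual token counts and block layout may differ):

```python
# Learned-attention downsampling: a small set of learned queries pools
# each frame's patch tokens, shrinking the visual token count per frame.
import torch
import torch.nn as nn

class AttentionDownsample(nn.Module):
    def __init__(self, dim=64, out_tokens=4):
        super().__init__()
        self.q = nn.Parameter(torch.randn(1, out_tokens, dim))
        self.attn = nn.MultiheadAttention(dim, 4, batch_first=True)

    def forward(self, patches):             # (B*T, N_patches, C)
        q = self.q.expand(patches.shape[0], -1, -1)
        pooled, _ = self.attn(q, patches, patches)
        return pooled                       # (B*T, out_tokens, C)

frames = torch.randn(8, 196, 64)  # 8 frames of 14x14 patch embeddings
print(AttentionDownsample()(frames).shape)  # 196 tokens -> 4 per frame
```

Shrinking per-frame tokens this way is what keeps multi-frame input tractable without a heavyweight pre-trained encoder.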
2024-12-26T11:29:06.978000
Mulberry: Empowering MLLM with o1-like Reasoning and Reflection via Collective Monte Carlo Tree Search
https://cdn-thumbnails.h…s/2412.18319.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2412.18319
[ { "_id": "676d843d92c4a8fe49532793", "hidden": false, "name": "Huanjin Yao", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:16:04.725Z", "user": { "_id": "6590e03454f8826173ed5ee6", "avatarUrl": "/avatars/b2fbaaf444e1e53c5e914cd42a41389a.svg", "fullname": "Huanjin Yao", "isPro": false, "type": "user", "user": "HuanjinYao" } }, { "_id": "676d843d92c4a8fe49532794", "hidden": false, "name": "Jiaxing Huang", "status": "admin_assigned", "statusLastChangedAt": "2024-12-30T21:15:31.439Z", "user": { "_id": "65237910b80dc49ba03a96d9", "avatarUrl": "/avatars/9d81c4c8fb2d597079e8dd9d9b79a8d8.svg", "fullname": "jiaxing", "isPro": false, "type": "user", "user": "huangjiaxing" } }, { "_id": "676d843d92c4a8fe49532795", "hidden": false, "name": "Wenhao Wu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d843d92c4a8fe49532796", "hidden": false, "name": "Jingyi Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d843d92c4a8fe49532797", "hidden": false, "name": "Yibo Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d843d92c4a8fe49532798", "hidden": false, "name": "Shunyu Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d843d92c4a8fe49532799", "hidden": false, "name": "Yingjie Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d843d92c4a8fe4953279a", "hidden": false, "name": "Yuxin Song", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d843d92c4a8fe4953279b", "hidden": false, "name": "Haocheng Feng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d843d92c4a8fe4953279c", "hidden": false, "name": "Li Shen", "status": "claimed_verified", "statusLastChangedAt": "2025-01-26T11:38:47.131Z", "user": { "_id": "62de356ad89af8c07209e7d4", "avatarUrl": "/avatars/610629958726b270418368b8b7f61469.svg", "fullname": "Li Shen", "isPro": false, "type": "user", "user": "mathshenli" } }, { "_id": "676d843d92c4a8fe4953279d", "hidden": false, "name": "Dacheng Tao", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-24T10:07:51
Mulberry: Empowering MLLM with o1-like Reasoning and Reflection via Collective Monte Carlo Tree Search
In this work, we aim to develop an MLLM that understands and solves questions by learning to create each intermediate step of the reasoning involved until the final answer. To this end, we propose Collective Monte Carlo Tree Search (CoMCTS), a new learning-to-reason method for MLLMs, which introduces the concept of collective learning into "tree search" for effective and efficient reasoning-path searching and learning. The core idea of CoMCTS is to leverage collective knowledge from multiple models to collaboratively conjecture, search, and identify effective reasoning paths toward correct answers via four iterative operations: Expansion, Simulation and Error Positioning, Backpropagation, and Selection. Using CoMCTS, we construct Mulberry-260k, a multimodal dataset with a tree of rich, explicit and well-defined reasoning nodes for each question. With Mulberry-260k, we perform collective SFT to train our model, Mulberry, a series of MLLMs with o1-like step-by-step Reasoning and Reflection capabilities. Extensive experiments demonstrate the superiority of our proposed methods on various benchmarks. Code will be available at https://github.com/HJYao00/Mulberry
37
676d843e92c4a8fe495328d3
null
null
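The four CoMCTS operations can be illustrated with a heavily simplified search loop. Everything below is a stand-in: the "models" are string-appending lambdas and the scoring is random, so this shows the control flow only, not the paper's algorithm:

```python
# Simplified collective tree search: every model proposes the next step
# (Expansion), paths are scored and faulty steps located (Simulation and
# Error Positioning), values flow back to ancestors (Backpropagation),
# and the best node is picked next round (Selection).
import random

def expansion(node, models):
    return [node + [f"{name}:step{len(node)}"] for name in models]

def simulate(path):
    value = random.random()  # stand-in for joint scoring
    return value, None if value > 0.3 else len(path) - 1  # faulty index

def comcts(question, models=("model_a", "model_b"), iters=20):
    tree = {(question,): 0.0}
    for _ in range(iters):
        node = max(tree, key=tree.get)               # Selection
        for child in expansion(list(node), models):  # Expansion
            value, bad = simulate(child)             # Simulation
            if bad is not None:
                child = child[:bad]                  # Error Positioning
            key = tuple(child)
            tree[key] = max(tree.get(key, 0.0), value)
            for i in range(1, len(key)):             # Backpropagation
                anc = key[:i]
                tree[anc] = max(tree.get(anc, 0.0), value)
    return max(tree, key=tree.get)

print(comcts("What is shown in the image?"))
```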
2024-12-26T10:26:14.913000
PepTune: De Novo Generation of Therapeutic Peptides with Multi-Objective-Guided Discrete Diffusion
https://cdn-thumbnails.h…s/2412.17780.png
2
{ "_id": "64cd5b3f0494187a9e8b7c69", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/eo9HiGbvCxHQZ-QJB1Y_o.jpeg", "followerCount": 2, "fullname": "Pranam Chatterjee", "isHf": false, "isMod": false, "isPro": false, "name": "pranamanam", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/64cd5b3f0494187a9e8b7c69/_ILClx7w2rSaaqw5PjP2i.png" ]
2412.17780
[ { "_id": "676d74cd96a84bb36f9d04e1", "hidden": false, "name": "Sophia Tang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d74cd96a84bb36f9d04e2", "hidden": false, "name": "Yinuo Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676d74cd96a84bb36f9d04e3", "hidden": false, "name": "Pranam Chatterjee", "status": "extracted_confirmed", "statusLastChangedAt": "2024-12-26T16:27:18.389Z", "user": { "_id": "64cd5b3f0494187a9e8b7c69", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/eo9HiGbvCxHQZ-QJB1Y_o.jpeg", "fullname": "Pranam Chatterjee", "isPro": false, "type": "user", "user": "pranamanam" } } ]
2024-12-23T18:38:49
PepTune: De Novo Generation of Therapeutic Peptides with Multi-Objective-Guided Discrete Diffusion
Peptide therapeutics, a major class of medicines, have achieved remarkable success across diseases such as diabetes and cancer, with landmark examples such as GLP-1 receptor agonists revolutionizing the treatment of type-2 diabetes and obesity. Despite their success, designing peptides that satisfy multiple conflicting objectives, such as target binding affinity, solubility, and membrane permeability, remains a major challenge. Classical drug development and structure-based design are ineffective for such tasks, as they fail to optimize global functional properties critical for therapeutic efficacy. Existing generative frameworks are largely limited to continuous spaces, unconditioned outputs, or single-objective guidance, making them unsuitable for discrete sequence optimization across multiple properties. To address this, we present PepTune, a multi-objective discrete diffusion model for the simultaneous generation and optimization of therapeutic peptide SMILES. Built on the Masked Discrete Language Model (MDLM) framework, PepTune ensures valid peptide structures with state-dependent masking schedules and penalty-based objectives. To guide the diffusion process, we propose a Monte Carlo Tree Search (MCTS)-based strategy that balances exploration and exploitation to iteratively refine Pareto-optimal sequences. MCTS integrates classifier-based rewards with search-tree expansion, overcoming gradient estimation challenges and data sparsity inherent to discrete spaces. Using PepTune, we generate diverse, chemically-modified peptides optimized for multiple therapeutic properties, including target binding affinity, membrane permeability, solubility, hemolysis, and non-fouling characteristics on various disease-relevant targets. Altogether, our results demonstrate that MCTS-guided discrete diffusion is a powerful and modular approach for multi-objective sequence design in discrete state spaces.
4
676d74d396a84bb36f9d0660
null
null
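The guidance idea behind PepTune, steering masked discrete diffusion with classifier rewards, can be shown in a toy form. The sketch below reduces the tree search to one-step lookahead over branches, and the vocabulary, denoiser, and reward are all placeholders, not PepTune's objectives:

```python
# Toy reward-guided unmasking: at each step, branch several candidate
# unmaskings and keep the one a (fake) multi-objective reward prefers.
import random

VOCAB = list("ACDEFGHIKLMNPQRSTVWY")  # amino-acid letters as stand-ins
MASK = "_"

def denoise_step(seq):
    # Unmask one random position with a random token; a trained model
    # would instead sample from its learned distribution.
    i = random.choice([j for j, c in enumerate(seq) if c == MASK])
    out = list(seq)
    out[i] = random.choice(VOCAB)
    return "".join(out)

def reward(seq):
    # Placeholder multi-objective score (e.g. solubility + permeability).
    return seq.count("K") + 0.5 * seq.count("L") - seq.count(MASK)

def guided_generate(length=12, branches=8):
    seq = MASK * length
    while MASK in seq:
        candidates = [denoise_step(seq) for _ in range(branches)]
        seq = max(candidates, key=reward)  # keep the best-scoring branch
    return seq

print(guided_generate())
```

PepTune's full MCTS adds exploration bonuses and backpropagated values on top of this greedy skeleton, which is what lets it trade off multiple objectives rather than chase one.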
2024-12-25T22:21:50.545000
Token-Budget-Aware LLM Reasoning
https://cdn-thumbnails.h…s/2412.18547.png
2
{ "_id": "64dfcc62e8b6f3f3baa950e0", "avatarUrl": "/avatars/21bbff67d46c08044efe2406575aa77e.svg", "followerCount": null, "fullname": "Zhenting Wang", "isHf": false, "isMod": false, "isPro": false, "name": "ztwang", "type": "user" }
true
null
2412.18547
[ { "_id": "676c40d619a21c8b928d13c2", "hidden": false, "name": "Tingxu Han", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676c40d619a21c8b928d13c3", "hidden": false, "name": "Chunrong Fang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676c40d619a21c8b928d13c4", "hidden": false, "name": "Shiyu Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676c40d619a21c8b928d13c5", "hidden": false, "name": "Shiqing Ma", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676c40d619a21c8b928d13c6", "hidden": false, "name": "Zhenyu Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676c40d619a21c8b928d13c7", "hidden": false, "name": "Zhenting Wang", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:32:16.437Z", "user": { "_id": "64dfcc62e8b6f3f3baa950e0", "avatarUrl": "/avatars/21bbff67d46c08044efe2406575aa77e.svg", "fullname": "Zhenting Wang", "isPro": false, "type": "user", "user": "ztwang" } } ]
2024-12-24T16:55:45
Token-Budget-Aware LLM Reasoning
Reasoning is critical for large language models (LLMs) to excel in a wide range of tasks. While methods like Chain-of-Thought (CoT) reasoning enhance LLM performance by decomposing problems into intermediate steps, they also incur significant overhead in token usage, leading to increased costs. We find that the reasoning process of current LLMs is unnecessarily lengthy and can be compressed by including a reasonable token budget in the prompt, but that the choice of token budget plays a crucial role in the actual compression effectiveness. We then propose a token-budget-aware LLM reasoning framework, which dynamically estimates token budgets for different problems based on reasoning complexity and uses the estimated token budgets to guide the reasoning process. Experiments show that our method effectively reduces token costs in CoT reasoning with only a slight performance reduction, offering a practical solution to balance efficiency and accuracy in LLM reasoning. Code: https://github.com/GeniusHTX/TALE.
46
676c40d719a21c8b928d13ea
null
null
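The mechanism in the TALE abstract is simple enough to sketch end to end: estimate a budget per problem, then bake it into the prompt. The estimator below is a crude length heuristic of our own invention, not the paper's estimator:

```python
# Token-budget-aware prompting sketch: a per-problem budget estimate is
# injected into the instruction so the model reasons within it.
def estimate_budget(problem: str) -> int:
    # Harder-looking problems (longer, more numbers) get a larger budget.
    digits = sum(ch.isdigit() for ch in problem)
    return min(50 + 5 * digits + 2 * len(problem.split()), 300)

def budgeted_prompt(problem: str) -> str:
    budget = estimate_budget(problem)
    return (f"{problem}\n"
            f"Let's think step by step and use less than {budget} tokens.")

print(budgeted_prompt("If 3 pens cost $4.50, how much do 10 pens cost?"))
```

The paper's finding is that a *reasonable* budget compresses reasoning almost losslessly, while a badly chosen one hurts accuracy, hence the need for per-problem estimation rather than a fixed cap.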
2024-12-25T15:44:24.255000
Bridging the Data Provenance Gap Across Text, Speech and Video
https://cdn-thumbnails.h…s/2412.17847.png
2
{ "_id": "62645f88c39850dc093d6105", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1650745211725-noauth.png", "followerCount": 51, "fullname": "Mohammed Hamdy", "isHf": false, "isMod": false, "isPro": false, "name": "mmhamdy", "type": "user" }
true
null
2412.17847
[ { "_id": "676b6f6e1f5ca46174ac96c8", "hidden": false, "name": "Shayne Longpre", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96c9", "hidden": false, "name": "Nikhil Singh", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96ca", "hidden": false, "name": "Manuel Cherep", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96cb", "hidden": false, "name": "Kushagra Tiwary", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96cc", "hidden": false, "name": "Joanna Materzynska", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96cd", "hidden": false, "name": "William Brannon", "status": "claimed_verified", "statusLastChangedAt": "2025-02-06T14:15:29.452Z", "user": { "_id": "641bdd25a63c4e8062387b6a", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/641bdd25a63c4e8062387b6a/dFanBhG6_NVqpRNB-jjC4.png", "fullname": "William Brannon", "isPro": false, "type": "user", "user": "wwbrannon" } }, { "_id": "676b6f6e1f5ca46174ac96ce", "hidden": false, "name": "Robert Mahari", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96cf", "hidden": false, "name": "Manan Dey", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96d0", "hidden": false, "name": "Mohammed Hamdy", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:33:07.482Z", "user": { "_id": "62645f88c39850dc093d6105", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1650745211725-noauth.png", "fullname": "Mohammed Hamdy", "isPro": false, "type": "user", "user": "mmhamdy" } }, { "_id": "676b6f6e1f5ca46174ac96d1", "hidden": false, "name": "Nayan Saxena", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96d2", "hidden": false, "name": "Ahmad Mustafa Anis", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:33:00.926Z", "user": { "_id": "6246908d8031dcfa9ef6d80b", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1648791658636-noauth.jpeg", "fullname": "Ahmad Mustafa Anis", "isPro": false, "type": "user", "user": "AhmadMustafa" } }, { "_id": "676b6f6e1f5ca46174ac96d3", "hidden": false, "name": "Emad A. 
Alghamdi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96d4", "hidden": false, "name": "Vu Minh Chien", "status": "claimed_verified", "statusLastChangedAt": "2025-02-12T09:19:03.796Z", "user": { "_id": "60535c9d10aba34e3b6a2ef7", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1616075850230-noauth.jpeg", "fullname": "vumichien", "isPro": false, "type": "user", "user": "vumichien" } }, { "_id": "676b6f6e1f5ca46174ac96d5", "hidden": false, "name": "Naana Obeng-Marnu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96d6", "hidden": false, "name": "Da Yin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96d7", "hidden": false, "name": "Kun Qian", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96d8", "hidden": false, "name": "Yizhi Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96d9", "hidden": false, "name": "Minnie Liang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96da", "hidden": false, "name": "An Dinh", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96db", "hidden": false, "name": "Shrestha Mohanty", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96dc", "hidden": false, "name": "Deividas Mataciunas", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:33:05.129Z", "user": { "_id": "6040a00558b78f3a0047c23a", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6040a00558b78f3a0047c23a/_BsyJoaCBO3r6GnfSgnIA.jpeg", "fullname": "David Mataciunas", "isPro": false, "type": "user", "user": "DeividasM" } }, { "_id": "676b6f6e1f5ca46174ac96dd", "hidden": false, "name": "Tobin South", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96de", "hidden": false, "name": "Jianguo Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96df", "hidden": false, "name": "Ariel N. Lee", "status": "claimed_verified", "statusLastChangedAt": "2025-02-05T16:54:01.989Z", "user": { "_id": "638bcfa91987d67b340e6c1c", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/638bcfa91987d67b340e6c1c/3tHCB_J6c4-lsEZ_zJSlp.jpeg", "fullname": "Ariel N. Lee", "isPro": false, "type": "user", "user": "arielnlee" } }, { "_id": "676b6f6e1f5ca46174ac96e0", "hidden": false, "name": "Campbell S. 
Lund", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96e1", "hidden": false, "name": "Christopher Klamm", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96e2", "hidden": false, "name": "Damien Sileo", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96e3", "hidden": false, "name": "Diganta Misra", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96e4", "hidden": false, "name": "Enrico Shippole", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96e5", "hidden": false, "name": "Kevin Klyman", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96e6", "hidden": false, "name": "Lester JV Miranda", "status": "claimed_verified", "statusLastChangedAt": "2025-01-06T08:00:50.985Z", "user": { "_id": "634e20a0c1ce28f1de920cc4", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1666064515342-noauth.jpeg", "fullname": "Lj V. Miranda", "isPro": true, "type": "user", "user": "ljvmiranda921" } }, { "_id": "676b6f6e1f5ca46174ac96e7", "hidden": false, "name": "Niklas Muennighoff", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96e8", "hidden": false, "name": "Seonghyeon Ye", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96e9", "hidden": false, "name": "Seungone Kim", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:33:02.972Z", "user": { "_id": "6469949654873f0043b09c22", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6469949654873f0043b09c22/Lk7IJAR16Wa_sGJ2g81AQ.jpeg", "fullname": "Seungone Kim", "isPro": false, "type": "user", "user": "seungone" } }, { "_id": "676b6f6e1f5ca46174ac96ea", "hidden": false, "name": "Vipul Gupta", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96eb", "hidden": false, "name": "Vivek Sharma", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96ec", "hidden": false, "name": "Xuhui Zhou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96ed", "hidden": false, "name": "Caiming Xiong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96ee", "hidden": false, "name": "Luis Villa", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96ef", "hidden": false, "name": "Stella Biderman", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96f0", "hidden": false, "name": "Alex Pentland", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b6f6e1f5ca46174ac96f1", "hidden": false, "name": "Sara Hooker", "status": "claimed_verified", "statusLastChangedAt": "2025-01-20T09:30:34.082Z", "user": { "_id": "63434eb76f59b79da07dbddf", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63434eb76f59b79da07dbddf/BEwmVjqPNYlqmutXG0G6e.jpeg", "fullname": "Sara Hooker", "isPro": false, "type": "user", "user": "sarahooker" } }, { "_id": "676b6f6e1f5ca46174ac96f2", "hidden": false, "name": "Jad Kabbara", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-19T01:30:19
Bridging the Data Provenance Gap Across Text, Speech and Video
Progress in AI is driven largely by the scale and quality of training data. Despite this, there is a deficit of empirical analysis examining the attributes of well-established datasets beyond text. In this work we conduct the largest and first-of-its-kind longitudinal audit across modalities--popular text, speech, and video datasets--from their detailed sourcing trends and use restrictions to their geographical and linguistic representation. Our manual analysis covers nearly 4000 public datasets between 1990 and 2024, spanning 608 languages, 798 sources, 659 organizations, and 67 countries. We find that multimodal machine learning applications have overwhelmingly turned to web-crawled, synthetic, and social media platforms, such as YouTube, for their training sets, eclipsing all other sources since 2019. Secondly, tracing the chain of dataset derivations, we find that while less than 33% of datasets are restrictively licensed, over 80% of the source content in widely-used text, speech, and video datasets carries non-commercial restrictions. Finally, counter to the rising number of languages and geographies represented in public AI training datasets, our audit demonstrates that measures of relative geographical and multilingual representation have failed to significantly improve their coverage since 2013. We believe the breadth of our audit enables us to empirically examine trends in data sourcing, restrictions, and Western-centricity at an ecosystem level, and that visibility into these questions is essential to progress in responsible AI. As a contribution to ongoing improvements in dataset transparency and responsible use, we release our entire multimodal audit, allowing practitioners to trace data provenance across text, speech, and video.
9
676b6f6f1f5ca46174ac9777
null
null
2024-12-25T13:09:18.809000
MotiF: Making Text Count in Image Animation with Motion Focal Loss
https://cdn-thumbnails.h…s/2412.16153.png
2
{ "_id": "6438c702c04b3b996ea702fb", "avatarUrl": "/avatars/09a386c4e810193d2b1f0b7799bc8270.svg", "followerCount": null, "fullname": "Shijie Wang", "isHf": false, "isMod": false, "isPro": false, "name": "wang-sj16", "type": "user" }
true
null
2412.16153
[ { "_id": "676c4a04295f85d93ef6d6b6", "hidden": false, "name": "Shijie Wang", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:32:14.529Z", "user": { "_id": "6438c702c04b3b996ea702fb", "avatarUrl": "/avatars/09a386c4e810193d2b1f0b7799bc8270.svg", "fullname": "Shijie Wang", "isPro": false, "type": "user", "user": "wang-sj16" } }, { "_id": "676c4a04295f85d93ef6d6b7", "hidden": false, "name": "Samaneh Azadi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676c4a04295f85d93ef6d6b8", "hidden": false, "name": "Rohit Girdhar", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676c4a04295f85d93ef6d6b9", "hidden": false, "name": "Saketh Rambhatla", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676c4a04295f85d93ef6d6ba", "hidden": false, "name": "Chen Sun", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676c4a04295f85d93ef6d6bb", "hidden": false, "name": "Xi Yin", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-20T18:57:06
MotiF: Making Text Count in Image Animation with Motion Focal Loss
Text-Image-to-Video (TI2V) generation aims to generate a video from an image following a text description, which is also referred to as text-guided image animation. Most existing methods struggle to generate videos that align well with the text prompts, particularly when motion is specified. To overcome this limitation, we introduce MotiF, a simple yet effective approach that directs the model's learning to the regions with more motion, thereby improving text alignment and motion generation. We use optical flow to generate a motion heatmap and weight the loss according to the intensity of the motion. This modified objective leads to noticeable improvements and complements existing methods that utilize motion priors as model inputs. Additionally, due to the lack of a diverse benchmark for evaluating TI2V generation, we propose TI2V Bench, a dataset consisting of 320 image-text pairs for robust evaluation. We present a human evaluation protocol that asks annotators to select an overall preference between two videos, followed by their justifications. Through a comprehensive evaluation on TI2V Bench, MotiF outperforms nine open-source models, achieving an average preference of 72%. TI2V Bench is released at https://wang-sj16.github.io/motif/.
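The abstract's core mechanism, weighting the training loss by optical-flow magnitude, can be sketched directly. This is a minimal NumPy illustration under stated assumptions: the normalization, the weight floor, and plain MSE are placeholders for the paper's diffusion training objective.

```python
import numpy as np

def motion_heatmap(flow: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """flow: (H, W, 2) optical flow -> per-pixel weights in [0, 1]."""
    mag = np.linalg.norm(flow, axis=-1)
    return mag / (mag.max() + eps)

def motion_focal_loss(pred, target, flow, floor=0.1):
    per_pixel = (pred - target) ** 2                  # plain per-pixel MSE
    w = floor + (1.0 - floor) * motion_heatmap(flow)  # floor keeps static regions learning
    return (w * per_pixel).mean()

H, W = 32, 32
print(float(motion_focal_loss(np.random.rand(H, W), np.random.rand(H, W),
                              np.random.randn(H, W, 2))))
```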
6
676c4a08295f85d93ef6d7b3
null
null
2024-12-25T08:46:30.796000
Ensembling Large Language Models with Process Reward-Guided Tree Search for Better Complex Reasoning
https://cdn-thumbnails.h…s/2412.15797.png
3
{ "_id": "63fb6e281b4b1bd4e7ffc5be", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1677422062937-noauth.jpeg", "followerCount": 9, "fullname": "Xiao Liu", "isHf": false, "isMod": false, "isPro": false, "name": "lx865712528", "type": "user" }
true
null
2412.15797
[ { "_id": "676c0c92dd95830fd9d60010", "hidden": false, "name": "Sungjin Park", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676c0c92dd95830fd9d60011", "hidden": false, "name": "Xiao Liu", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:32:19.827Z", "user": { "_id": "63fb6e281b4b1bd4e7ffc5be", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1677422062937-noauth.jpeg", "fullname": "Xiao Liu", "isPro": false, "type": "user", "user": "lx865712528" } }, { "_id": "676c0c92dd95830fd9d60012", "hidden": false, "name": "Yeyun Gong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676c0c92dd95830fd9d60013", "hidden": false, "name": "Edward Choi", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-20T11:14:29
Ensembling Large Language Models with Process Reward-Guided Tree Search for Better Complex Reasoning
Despite recent advances in large language models, open-source models often struggle to consistently perform well on complex reasoning tasks. Existing ensemble methods, whether applied at the token or output levels, fail to address these challenges. In response, we present Language model Ensemble with Monte Carlo Tree Search (LE-MCTS), a novel framework for process-level ensembling of language models. LE-MCTS formulates step-by-step reasoning with an ensemble of language models as a Markov decision process. In this framework, states represent intermediate reasoning paths, while actions consist of generating the next reasoning step using one of the language models selected from a predefined pool. Guided by a process-based reward model, LE-MCTS performs a tree search over the reasoning steps generated by different language models, identifying the most accurate reasoning chain. Experimental results on five mathematical reasoning benchmarks demonstrate that our approach outperforms both single language model decoding algorithms and language model ensemble methods. Notably, LE-MCTS improves performance by 3.6% and 4.3% on the MATH and MQA datasets, respectively, highlighting its effectiveness in solving complex reasoning problems.
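To make the process-level ensembling concrete, here is a much-simplified greedy sketch: every model in a pool proposes a candidate next step, a process reward model scores each partial chain, and the best continuation is kept. The paper uses full Monte Carlo Tree Search; the greedy loop and the toy model/reward stubs below are assumptions for illustration only.

```python
import random

def toy_model(name):
    def propose(chain):  # a real system would call an LLM here
        return f"[{name}] step {len(chain) + 1}"
    return propose

def toy_reward(chain):   # a real system would call a process reward model
    return random.random()

def greedy_prg_search(models, max_steps=3):
    """Greedy process-reward-guided search over an ensemble of step proposers."""
    chain = []
    for _ in range(max_steps):
        candidates = [chain + [m(chain)] for m in models]
        chain = max(candidates, key=toy_reward)  # keep the best-scoring partial chain
    return chain

pool = [toy_model("llama"), toy_model("mistral"), toy_model("qwen")]
print(greedy_prg_search(pool))
```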
18
676c0c98dd95830fd9d60172
null
null
2024-12-25T04:33:08.560000
In Case You Missed It: ARC 'Challenge' Is Not That Challenging
https://cdn-thumbnails.h…s/2412.17758.png
2
{ "_id": "600b381d3cc3b87db94bc0ce", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/600b381d3cc3b87db94bc0ce/I3xpr4gzcG1uXawXBpWpD.jpeg", "followerCount": 3, "fullname": "Łukasz Borchmann", "isHf": false, "isMod": false, "isPro": false, "name": "Borchmann", "type": "user" }
true
null
2412.17758
[ { "_id": "676bd06524bd46fa1990dcec", "hidden": false, "name": "Łukasz Borchmann", "status": "extracted_pending", "statusLastChangedAt": "2024-12-25T09:29:11.195Z", "user": { "_id": "600b381d3cc3b87db94bc0ce", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/600b381d3cc3b87db94bc0ce/I3xpr4gzcG1uXawXBpWpD.jpeg", "fullname": "Łukasz Borchmann", "isPro": false, "type": "user", "user": "Borchmann" } } ]
2024-12-23T18:14:36
In Case You Missed It: ARC 'Challenge' Is Not That Challenging
ARC Challenge appears more difficult than ARC Easy for modern LLMs primarily due to an evaluation setup that prevents direct comparison of answer choices rather than inherent complexity. Although some researchers have quietly shifted to a more appropriate scheme over the last year, the implications of this change have yet to be widely acknowledged. We highlight this overlooked shift, show how similar evaluation practices falsely imply reasoning deficits in other benchmarks, and demonstrate that fairer methods dramatically reduce performance gaps (e.g. on SIQA) and even yield superhuman results (OpenBookQA). In doing so, we reveal how evaluation shapes perceived difficulty and offer guidelines to ensure that multiple-choice evaluations accurately reflect actual model capabilities.
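The evaluation distinction the abstract describes can be shown in a few lines: scoring each answer choice in isolation versus letting the model see all options and compare them directly. `loglikelihood` below is an assumed stand-in for a model scoring call, in the style of common evaluation harnesses.

```python
import random

def pick_separately(question, choices, loglikelihood):
    """Cloze-style: score each answer in isolation; no direct comparison."""
    return max(choices, key=lambda c: loglikelihood(question, " " + c))

def pick_with_options(question, choices, loglikelihood):
    """Options shown in the prompt: the model can compare answers directly."""
    letters = "ABCD"
    listing = "\n".join(f"{l}. {c}" for l, c in zip(letters, choices))
    prompt = f"{question}\n{listing}\nAnswer:"
    best = max(letters[: len(choices)], key=lambda l: loglikelihood(prompt, " " + l))
    return choices[letters.index(best)]

dummy_ll = lambda prompt, cont: random.random()  # placeholder scorer
print(pick_separately("2+2=?", ["3", "4", "5"], dummy_ll))
print(pick_with_options("2+2=?", ["3", "4", "5"], dummy_ll))
```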
16
676bd06724bd46fa1990dd63
null
null
2024-12-25T04:26:52.921000
3DGraphLLM: Combining Semantic Graphs and Large Language Models for 3D Scene Understanding
https://cdn-thumbnails.h…s/2412.18450.png
2
{ "_id": "6363767e572fd34304f49a67", "avatarUrl": "/avatars/a9fc92b6005d48adf71a45bebf812648.svg", "followerCount": 1, "fullname": "Tatiana Zemskova", "isHf": false, "isMod": false, "isPro": false, "name": "wingrune", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/6363767e572fd34304f49a67/W7uO-sU8egl0RxsbSGcPg.png" ]
2412.18450
[ { "_id": "676bbe579484d105b89dba3b", "hidden": false, "name": "Tatiana Zemskova", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:32:37.677Z", "user": { "_id": "6363767e572fd34304f49a67", "avatarUrl": "/avatars/a9fc92b6005d48adf71a45bebf812648.svg", "fullname": "Tatiana Zemskova", "isPro": false, "type": "user", "user": "wingrune" } }, { "_id": "676bbe579484d105b89dba3c", "hidden": false, "name": "Dmitry Yudin", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:32:35.428Z", "user": { "_id": "67585775ab7e701b5754649f", "avatarUrl": "/avatars/7ccaf915b6b409e49d35f5212b8293c3.svg", "fullname": "Dmitry Yudin", "isPro": false, "type": "user", "user": "yuddim" } } ]
2024-12-24T14:21:58
3DGraphLLM: Combining Semantic Graphs and Large Language Models for 3D Scene Understanding
A 3D scene graph represents a compact scene model, storing information about the objects and the semantic relationships between them, which makes it promising for robotic tasks. When interacting with a user, an embodied intelligent agent should be capable of responding to various queries about the scene formulated in natural language. Large Language Models (LLMs) are beneficial for user-robot interaction due to their natural language understanding and reasoning abilities. Recent methods for creating learnable representations of 3D scenes have demonstrated the potential to improve the quality of LLM responses by adapting to the 3D world. However, existing methods do not explicitly utilize information about the semantic relationships between objects, limiting themselves to information about object coordinates. In this work, we propose 3DGraphLLM, a method for constructing a learnable representation of a 3D scene graph. The learnable representation is used as input for LLMs to perform 3D vision-language tasks. In our experiments on the popular ScanRefer, RIORefer, Multi3DRefer, ScanQA, SQA3D, and Scan2Cap datasets, we demonstrate the advantage of this approach over baseline methods that do not use information about the semantic relationships between objects. The code is publicly available at https://github.com/CognitiveAISystems/3DGraphLLM.
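As a rough illustration of the input structure, a scene graph's (subject, relation, object) triplets can be serialized for an LLM. Note that this textual flattening is a simplified stand-in: 3DGraphLLM learns embeddings for graph elements rather than feeding plain text.

```python
# Toy scene graph: (subject, relation, object) triplets.
scene_graph = [
    ("chair_1", "next to", "table_1"),
    ("lamp_1", "on", "table_1"),
    ("table_1", "near", "window_1"),
]

def graph_to_prompt(triplets, question):
    """Flatten relationship triplets into a prompt for a 3D question."""
    facts = "\n".join(f"- {s} {r} {o}" for s, r, o in triplets)
    return f"Scene relationships:\n{facts}\n\nQuestion: {question}\nAnswer:"

print(graph_to_prompt(scene_graph, "What is on the table?"))
```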
34
676bbe599484d105b89dbac5
null
null
2024-12-25T01:53:21.705000
PartGen: Part-level 3D Generation and Reconstruction with Multi-View Diffusion Models
https://cdn-thumbnails.h…s/2412.18608.png
2
{ "_id": "636a3d8bf8d9af4aea18553f", "avatarUrl": "/avatars/028a86d088764fd66c36a2ddebf09f9a.svg", "followerCount": 4, "fullname": "MINGHAO CHEN", "isHf": false, "isMod": false, "isPro": true, "name": "silentchen", "type": "user" }
false
null
2412.18608
[ { "_id": "676baa15295f85d93eb48f24", "hidden": false, "name": "Minghao Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676baa15295f85d93eb48f25", "hidden": false, "name": "Roman Shapovalov", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676baa15295f85d93eb48f26", "hidden": false, "name": "Iro Laina", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676baa15295f85d93eb48f27", "hidden": false, "name": "Tom Monnier", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676baa15295f85d93eb48f28", "hidden": false, "name": "Jianyuan Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676baa15295f85d93eb48f29", "hidden": false, "name": "David Novotny", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676baa15295f85d93eb48f2a", "hidden": false, "name": "Andrea Vedaldi", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-24T18:59:43
PartGen: Part-level 3D Generation and Reconstruction with Multi-View Diffusion Models
Text- or image-to-3D generators and 3D scanners can now produce 3D assets with high-quality shapes and textures. These assets typically consist of a single, fused representation, like an implicit neural field, a Gaussian mixture, or a mesh, without any useful structure. However, most applications and creative workflows require assets to be made of several meaningful parts that can be manipulated independently. To address this gap, we introduce PartGen, a novel approach that generates 3D objects composed of meaningful parts starting from text, an image, or an unstructured 3D object. First, given multiple views of a 3D object, generated or rendered, a multi-view diffusion model extracts a set of plausible and view-consistent part segmentations, dividing the object into parts. Then, a second multi-view diffusion model takes each part separately, fills in the occlusions, and uses those completed views for 3D reconstruction by feeding them to a 3D reconstruction network. This completion process considers the context of the entire object to ensure that the parts integrate cohesively. The generative completion model can make up for the information missing due to occlusions; in extreme cases, it can hallucinate entirely invisible parts based on the input 3D asset. We evaluate our method on generated and real 3D assets and show that it outperforms segmentation and part-extraction baselines by a large margin. We also showcase downstream applications such as 3D part editing.
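The two-stage pipeline in the abstract can be summarized as orchestration pseudocode. All three stage functions below are hypothetical stubs standing in for the paper's multi-view segmentation diffusion model, completion diffusion model, and 3D reconstruction network.

```python
def segment_parts_multiview(views):
    """Stub for the multi-view diffusion model that yields part segmentations."""
    return [{"part": i} for i in range(2)]  # placeholder: two dummy part masks

def complete_part_views(views, mask):
    """Stub for the completion model that fills occlusions using object context."""
    return {"views": views, "mask": mask}

def reconstruct_3d(completed):
    """Stub for the 3D reconstruction network applied per completed part."""
    return {"mesh_for": completed["mask"]}

def partgen(views):
    parts_3d = []
    for mask in segment_parts_multiview(views):
        completed = complete_part_views(views, mask)
        parts_3d.append(reconstruct_3d(completed))
    return parts_3d

print(partgen(["view_0", "view_1", "view_2", "view_3"]))
```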
15
676baa1f295f85d93eb4928a
null
null
2024-12-25T00:47:02.418000
DiTCtrl: Exploring Attention Control in Multi-Modal Diffusion Transformer for Tuning-Free Multi-Prompt Longer Video Generation
https://cdn-thumbnails.h…s/2412.18597.png
2
{ "_id": "63184c517ca1b876d99b7e0e", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63184c517ca1b876d99b7e0e/b-qDExoeJuDXK0cJBZKnz.jpeg", "followerCount": 317, "fullname": "Xiaodong Cun", "isHf": false, "isMod": false, "isPro": false, "name": "vinthony", "type": "user" }
false
[ "https://cdn-uploads.huggingface.co/production/uploads/63184c517ca1b876d99b7e0e/GgE-qpq_L56Fc5JKfyhqL.gif" ]
2412.18597
[ { "_id": "676b9b876fb4876383b8591b", "hidden": false, "name": "Minghong Cai", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:32:49.066Z", "user": { "_id": "64f94370c3c12b377cc51086", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64f94370c3c12b377cc51086/6CXcHhqAoykqXcShqM8Rd.jpeg", "fullname": "Minghong Cai", "isPro": false, "type": "user", "user": "onevfall" } }, { "_id": "676b9b876fb4876383b8591c", "hidden": false, "name": "Xiaodong Cun", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b9b876fb4876383b8591d", "hidden": false, "name": "Xiaoyu Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b9b876fb4876383b8591e", "hidden": false, "name": "Wenze Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b9b876fb4876383b8591f", "hidden": false, "name": "Zhaoyang Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b9b876fb4876383b85920", "hidden": false, "name": "Yong Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b9b876fb4876383b85921", "hidden": false, "name": "Ying Shan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b9b876fb4876383b85922", "hidden": false, "name": "Xiangyu Yue", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-24T18:51:19
DiTCtrl: Exploring Attention Control in Multi-Modal Diffusion Transformer for Tuning-Free Multi-Prompt Longer Video Generation
Sora-like video generation models have achieved remarkable progress with the Multi-Modal Diffusion Transformer (MM-DiT) architecture. However, current video generation models predominantly focus on single prompts, struggling to generate coherent scenes with multiple sequential prompts that better reflect real-world dynamic scenarios. While some pioneering works have explored multi-prompt video generation, they face significant challenges, including strict training data requirements, weak prompt following, and unnatural transitions. To address these problems, we propose DiTCtrl, the first training-free multi-prompt video generation method for MM-DiT architectures. Our key idea is to treat the multi-prompt video generation task as temporal video editing with smooth transitions. To achieve this goal, we first analyze MM-DiT's attention mechanism, finding that the 3D full attention behaves similarly to the cross/self-attention blocks in UNet-like diffusion models, enabling mask-guided precise semantic control across different prompts with attention sharing for multi-prompt video generation. Based on our careful design, videos generated by DiTCtrl achieve smooth transitions and consistent object motion given multiple sequential prompts, without additional training. We also present MPVBench, a new benchmark specially designed for multi-prompt video generation to evaluate the performance of multi-prompt generation. Extensive experiments demonstrate that our method achieves state-of-the-art performance without additional training.
19
676b9b886fb4876383b8597d
null
null
2024-12-24T23:37:35.368000
Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization
https://cdn-thumbnails.h…s/2412.17739.png
26
{ "_id": "60bc94cd85a3ab33829b6211", "avatarUrl": "/avatars/b57d36c7577fbbb42ea5b963eef4144a.svg", "followerCount": 1, "fullname": "Kaiyan Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "iseesaw", "type": "user" }
true
null
2412.17739
[ { "_id": "676a6844bee647b8c004f469", "hidden": false, "name": "Ermo Hua", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:34:47.877Z", "user": { "_id": "6445fa2ffc22e309d78bef3e", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6445fa2ffc22e309d78bef3e/FQaINLd0PjgY9EnK_APRk.jpeg", "fullname": "Messi Hua", "isPro": false, "type": "user", "user": "Messi-Hua" } }, { "_id": "676a6844bee647b8c004f46a", "hidden": false, "name": "Che Jiang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a6844bee647b8c004f46b", "hidden": false, "name": "Xingtai Lv", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a6844bee647b8c004f46c", "hidden": false, "name": "Kaiyan Zhang", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:34:45.610Z", "user": { "_id": "60bc94cd85a3ab33829b6211", "avatarUrl": "/avatars/b57d36c7577fbbb42ea5b963eef4144a.svg", "fullname": "Kaiyan Zhang", "isPro": false, "type": "user", "user": "iseesaw" } }, { "_id": "676a6844bee647b8c004f46d", "hidden": false, "name": "Ning Ding", "status": "claimed_verified", "statusLastChangedAt": "2025-01-31T09:50:47.792Z", "user": { "_id": "60cf4bcb1ce3775ebb86e5d5", "avatarUrl": "/avatars/12bcd18d215abf91f297f93007733148.svg", "fullname": "Ning Ding", "isPro": false, "type": "user", "user": "stingning" } }, { "_id": "676a6844bee647b8c004f46e", "hidden": false, "name": "Youbang Sun", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a6844bee647b8c004f46f", "hidden": false, "name": "Biqing Qi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a6844bee647b8c004f470", "hidden": false, "name": "Yuchen Fan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a6844bee647b8c004f471", "hidden": false, "name": "Xue Kai Zhu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a6844bee647b8c004f472", "hidden": false, "name": "Bowen Zhou", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-23T17:44:01
Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization
Extending the context length of Language Models (LMs) by improving Rotary Position Embedding (RoPE) has become a trend. While existing works mainly address RoPE's limitations within the attention mechanism, this paper provides an analysis across nearly all parts of LMs, uncovering adverse effects on length generalization for RoPE-based attention. Using Discrete Signal Processing theory, we show that RoPE enables periodic attention by implicitly achieving a Non-Uniform Discrete Fourier Transform. However, this periodicity is undermined by spectral damage caused by: 1) linear layers and activation functions outside of attention; 2) insufficiently trained frequency components brought about by time-domain truncation. Building on our observations, we propose Fourier Position Embedding (FoPE), which enhances attention's frequency-domain properties to improve both its periodic extension and length generalization. FoPE constructs a Fourier series and zeroes out the destructive frequency components, increasing model robustness against spectral damage. Experiments across various model scales show that, within varying context windows, FoPE maintains a more stable perplexity and more consistent accuracy on a needle-in-a-haystack task compared to RoPE and ALiBi. Several analyses and ablations lend further support to our method and theoretical modeling.
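A minimal sketch of the frequency-clipping intuition: build standard RoPE frequencies, then zero the components whose period exceeds the training window, treating them as under-trained. The cutoff rule is an assumption for illustration; the paper additionally models each dimension as a Fourier series rather than a single frequency.

```python
import numpy as np

def rope_freqs(dim, base=10000.0):
    """Standard RoPE frequencies for a head dimension `dim`."""
    return base ** (-np.arange(0, dim, 2) / dim)

def fope_like_freqs(dim, train_len, base=10000.0):
    """Zero out frequencies whose period is longer than the training window."""
    freqs = rope_freqs(dim, base)
    cutoff = 2 * np.pi / train_len  # components slower than this never complete a cycle
    return np.where(freqs < cutoff, 0.0, freqs)

print(fope_like_freqs(dim=64, train_len=4096))
```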
41
676a6845bee647b8c004f51c
null
null
2024-12-24T22:42:12.778000
ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing
https://cdn-thumbnails.h…s/2412.14711.png
2
{ "_id": "66c0a08bac74db25de8427ec", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/66c0a08bac74db25de8427ec/9D-piDBZqSt6KNkHImmkv.jpeg", "followerCount": 3, "fullname": "Jintao Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "jt-zhang", "type": "user" }
false
null
2412.14711
[ { "_id": "676a25362d7ae887c4f20b6d", "hidden": false, "name": "Ziteng Wang", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:35:36.994Z", "user": { "_id": "6151815f33293feca12a6d44", "avatarUrl": "/avatars/e92c8de879c542d2f843b742cd204065.svg", "fullname": "Ziteng Wang", "isPro": false, "type": "user", "user": "thuwzt" } }, { "_id": "676a25362d7ae887c4f20b6e", "hidden": false, "name": "Jianfei Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a25362d7ae887c4f20b6f", "hidden": false, "name": "Jun Zhu", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-19T10:21:20
ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing
Sparsely activated Mixture-of-Experts (MoE) models are widely adopted to scale up model capacity without increasing the computation budget. However, vanilla TopK routers are trained in a discontinuous, non-differentiable way, limiting their performance and scalability. To address this issue, we propose ReMoE, a fully differentiable MoE architecture that offers a simple yet effective drop-in replacement for the conventional TopK+Softmax routing, utilizing ReLU as the router instead. We further propose methods to regulate the router's sparsity while balancing the load among experts. ReMoE's continuous nature enables efficient dynamic allocation of computation across tokens and layers, while also exhibiting domain specialization. Our experiments demonstrate that ReMoE consistently outperforms vanilla TopK-routed MoE across various model sizes, expert counts, and levels of granularity. Furthermore, ReMoE exhibits superior scalability with respect to the number of experts, surpassing traditional MoE architectures. The implementation based on Megatron-LM is available at https://github.com/thu-ml/ReMoE.
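The routing change is small enough to show side by side: vanilla MoE gates with a discontinuous TopK over softmax scores, while a ReLU router is zero for inactive experts yet fully differentiable. A minimal PyTorch sketch follows; ReMoE's sparsity and load-balancing regularizers are omitted.

```python
import torch
import torch.nn.functional as F

def topk_softmax_router(logits, k=2):
    """Vanilla MoE routing: hard, non-differentiable expert selection."""
    probs = F.softmax(logits, dim=-1)
    topv, topi = probs.topk(k, dim=-1)
    return torch.zeros_like(probs).scatter(-1, topi, topv)

def relu_router(logits):
    """ReLU routing: experts with non-positive logits get zero weight, continuously."""
    return F.relu(logits)

x = torch.randn(4, 8)  # 4 tokens, 8 experts
print(topk_softmax_router(x))
print(relu_router(x))
```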
16
676a25372d7ae887c4f20bb3
null
null
2024-12-24T22:37:30.445000
DepthLab: From Partial to Complete
https://cdn-thumbnails.h…s/2412.18153.png
2
{ "_id": "6479925ab77e18dbf640bd67", "avatarUrl": "/avatars/bb52ecd22ca4b49157f8668be35409e7.svg", "followerCount": 6, "fullname": "Zhiheng Liu", "isHf": false, "isMod": false, "isPro": false, "name": "Johanan0528", "type": "user" }
false
[ "https://cdn-uploads.huggingface.co/production/uploads/6479925ab77e18dbf640bd67/kJ_cJvqOflDjH6dllp3Xe.mp4" ]
2412.18153
[ { "_id": "676b7d07d886f8125a4fb855", "hidden": false, "name": "Zhiheng Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b7d07d886f8125a4fb856", "hidden": false, "name": "Ka Leong Cheng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b7d07d886f8125a4fb857", "hidden": false, "name": "Qiuyu Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b7d07d886f8125a4fb858", "hidden": false, "name": "Shuzhe Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b7d07d886f8125a4fb859", "hidden": false, "name": "Hao Ouyang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b7d07d886f8125a4fb85a", "hidden": false, "name": "Bin Tan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b7d07d886f8125a4fb85b", "hidden": false, "name": "Kai Zhu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b7d07d886f8125a4fb85c", "hidden": false, "name": "Yujun Shen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b7d07d886f8125a4fb85d", "hidden": false, "name": "Qifeng Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b7d07d886f8125a4fb85e", "hidden": false, "name": "Ping Luo", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-24T04:16:38
DepthLab: From Partial to Complete
Missing values remain a common challenge for depth data across its wide range of applications, stemming from various causes like incomplete data acquisition and perspective alteration. This work bridges this gap with DepthLab, a foundation depth inpainting model powered by image diffusion priors. Our model features two notable strengths: (1) it demonstrates resilience to depth-deficient regions, providing reliable completion for both continuous areas and isolated points, and (2) it faithfully preserves scale consistency with the conditioned known depth when filling in missing values. Drawing on these advantages, our approach proves its worth in various downstream tasks, including 3D scene inpainting, text-to-3D scene generation, sparse-view reconstruction with DUST3R, and LiDAR depth completion, exceeding current solutions in both numerical performance and visual quality. Our project page with source code is available at https://johanan528.github.io/depthlab_web/.
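One way to see the scale-consistency requirement in the abstract: after a model predicts dense depth, fit scale and shift on the observed pixels before filling the holes. The least-squares alignment below is a common trick used here as an assumption, not the paper's diffusion-based inpainting itself.

```python
import numpy as np

def align_and_fill(pred, known, mask):
    """pred: (H, W) dense prediction; known: (H, W) partial depth; mask: True where observed."""
    A = np.stack([pred[mask], np.ones(mask.sum())], axis=1)
    scale, shift = np.linalg.lstsq(A, known[mask], rcond=None)[0]
    out = known.copy()
    out[~mask] = scale * pred[~mask] + shift  # fill holes on the known depth's scale
    return out

# Toy check: `known` holds ground truth; we pretend only `mask` pixels are observed.
H, W = 16, 16
known = np.random.rand(H, W) * 5
mask = np.random.rand(H, W) > 0.5
pred = known / 5 + 0.1  # a mis-scaled but otherwise perfect prediction
print(np.abs(align_and_fill(pred, known, mask) - known)[~mask].mean())  # ~0
```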
34
676b7d0bd886f8125a4fb983
null
null
2024-12-24T22:25:12.161000
SKETCH: Structured Knowledge Enhanced Text Comprehension for Holistic Retrieval
https://cdn-thumbnails.h…s/2412.15443.png
2
{ "_id": "63a4754927f1f64ed7238dac", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63a4754927f1f64ed7238dac/aH-eJF-31g4vof9jv2gmI.jpeg", "followerCount": 3, "fullname": "Aman Chadha", "isHf": false, "isMod": false, "isPro": false, "name": "amanchadha", "type": "user" }
true
null
2412.15443
[ { "_id": "676b765f038795095f73b556", "hidden": false, "name": "Aakash Mahalingam", "status": "claimed_verified", "statusLastChangedAt": "2025-01-20T09:30:32.076Z", "user": { "_id": "653fb335165813113370ca70", "avatarUrl": "/avatars/12a857024e68f150c17162b6958ed30a.svg", "fullname": "Aakash Mahalingam", "isPro": false, "type": "user", "user": "ashgam" } }, { "_id": "676b765f038795095f73b557", "hidden": false, "name": "Vinesh Kumar Gande", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b765f038795095f73b558", "hidden": false, "name": "Aman Chadha", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:32:58.838Z", "user": { "_id": "63a4754927f1f64ed7238dac", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63a4754927f1f64ed7238dac/aH-eJF-31g4vof9jv2gmI.jpeg", "fullname": "Aman Chadha", "isPro": false, "type": "user", "user": "amanchadha" } }, { "_id": "676b765f038795095f73b559", "hidden": false, "name": "Vinija Jain", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676b765f038795095f73b55a", "hidden": false, "name": "Divya Chaudhary", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-19T22:51:56
SKETCH: Structured Knowledge Enhanced Text Comprehension for Holistic Retrieval
Retrieval-Augmented Generation (RAG) systems have become pivotal in leveraging vast corpora to generate informed and contextually relevant responses, notably reducing hallucinations in Large Language Models. Despite significant advancements, these systems struggle to efficiently process and retrieve information from large datasets while maintaining a comprehensive understanding of the context. This paper introduces SKETCH, a novel methodology that enhances the RAG retrieval process by integrating semantic text retrieval with knowledge graphs, thereby merging structured and unstructured data for more holistic comprehension. SKETCH demonstrates substantial improvements in retrieval performance and maintains superior context integrity compared to traditional methods. Evaluated across four diverse datasets (QuALITY, QASPER, NarrativeQA, and Italian Cuisine), SKETCH consistently outperforms baseline approaches on key RAGAS metrics such as answer_relevancy, faithfulness, context_precision and context_recall. Notably, on the Italian Cuisine dataset, SKETCH achieved an answer relevancy of 0.94 and a context precision of 0.99, the highest performance across all evaluated metrics. These results highlight SKETCH's capability to deliver more accurate and contextually relevant responses, setting new benchmarks for future retrieval systems.
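A toy sketch of the structured-plus-unstructured retrieval idea: blend an embedding-similarity score with knowledge-graph entity overlap. The scoring rule, weights, and data below are illustrative assumptions, not SKETCH's actual pipeline.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.sqrt(sum(x * x for x in a)), math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def hybrid_rank(query_vec, query_entities, chunks, alpha=0.7):
    """chunks: dicts with 'text', 'vec' (embedding), 'entities' (from a KG)."""
    def score(ch):
        semantic = cosine(query_vec, ch["vec"])
        overlap = len(query_entities & ch["entities"]) / max(len(query_entities), 1)
        return alpha * semantic + (1 - alpha) * overlap  # blend both signals
    return sorted(chunks, key=score, reverse=True)

chunks = [
    {"text": "Carbonara uses guanciale.", "vec": [0.9, 0.1], "entities": {"carbonara", "guanciale"}},
    {"text": "Pesto comes from Genoa.", "vec": [0.2, 0.8], "entities": {"pesto", "genoa"}},
]
print(hybrid_rank([1.0, 0.0], {"carbonara"}, chunks)[0]["text"])
```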
9
676b7660038795095f73b583
null
null
2024-12-24T07:42:01.751000
ResearchTown: Simulator of Human Research Community
https://cdn-thumbnails.h…s/2412.17767.png
2
{ "_id": "636453547cf2c0b4f0a3ee1e", "avatarUrl": "/avatars/29b4bf0e6abd3b70bb0dbd58188e4ac8.svg", "followerCount": 1, "fullname": "Haofei Yu", "isHf": false, "isMod": false, "isPro": false, "name": "lwaekfjlk", "type": "user" }
false
[ "https://cdn-uploads.huggingface.co/production/uploads/636453547cf2c0b4f0a3ee1e/SHXI2ZTH3G7RYb7lG8UhL.png" ]
2412.17767
[ { "_id": "676aaa45d4000ace4575110c", "hidden": false, "name": "Haofei Yu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676aaa45d4000ace4575110d", "hidden": false, "name": "Zhaochen Hong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676aaa45d4000ace4575110e", "hidden": false, "name": "Zirui Cheng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676aaa45d4000ace4575110f", "hidden": false, "name": "Kunlun Zhu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676aaa45d4000ace45751110", "hidden": false, "name": "Keyang Xuan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676aaa45d4000ace45751111", "hidden": false, "name": "Jinwei Yao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676aaa45d4000ace45751112", "hidden": false, "name": "Tao Feng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676aaa45d4000ace45751113", "hidden": false, "name": "Jiaxuan You", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-23T18:26:53
ResearchTown: Simulator of Human Research Community
Large Language Models (LLMs) have demonstrated remarkable potential in scientific domains, yet a fundamental question remains unanswered: Can we simulate human research communities with LLMs? Addressing this question can deepen our understanding of the processes behind idea brainstorming and inspire the automatic discovery of novel scientific insights. In this work, we propose ResearchTown, a multi-agent framework for research community simulation. Within this framework, the human research community is simplified and modeled as an agent-data graph, where researchers and papers are represented as agent-type and data-type nodes, respectively, and connected based on their collaboration relationships. We also introduce TextGNN, a text-based inference framework that models various research activities (e.g., paper reading, paper writing, and review writing) as special forms of a unified message-passing process on the agent-data graph. To evaluate the quality of the research simulation, we present ResearchBench, a benchmark that uses a node-masking prediction task for scalable and objective assessment based on similarity. Our experiments reveal three key findings: (1) ResearchTown can provide a realistic simulation of collaborative research activities, including paper writing and review writing; (2) ResearchTown can maintain robust simulation with multiple researchers and diverse papers; (3) ResearchTown can generate interdisciplinary research ideas that potentially inspire novel research directions.
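The agent-data graph can be sketched with plain dictionaries: researcher (agent) nodes link to paper (data) nodes, and an activity like paper writing becomes one message-passing step over a node's neighborhood. `llm_summarize` is a placeholder for a real LLM call; the graph and fields are toy assumptions.

```python
agents = {"alice": {"profile": "works on MoE"}, "bob": {"profile": "works on RAG"}}
papers = {"p1": {"text": "ReLU routing for MoE"}, "p2": {"text": "KG-augmented retrieval"}}
edges = [("alice", "p1"), ("bob", "p2"), ("alice", "p2")]  # collaboration/reading links

def llm_summarize(texts):
    return " + ".join(texts)  # stand-in for a real LLM aggregation call

def write_paper(author):
    """One message-passing step: aggregate the author's profile and linked papers."""
    neighborhood = [papers[p]["text"] for a, p in edges if a == author]
    return llm_summarize([agents[author]["profile"]] + neighborhood)

print(write_paper("alice"))
```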
14
676aaa46d4000ace457511b2
null
null
2024-12-24T05:04:23.303000
OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning
https://cdn-thumbnails.h…s/2412.16849.png
2
{ "_id": "6494457c6339264dd78bcb95", "avatarUrl": "/avatars/d87842251f1a43f50cc827f0e2a995ee.svg", "followerCount": 1, "fullname": "sdzy", "isHf": false, "isMod": false, "isPro": false, "name": "sdzy", "type": "user" }
false
null
2412.16849
[ { "_id": "676a85a619a646a97e21e8e5", "hidden": false, "name": "Yuxiang Zhang", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:34:36.448Z", "user": { "_id": "645b4a2978730bcc103dfe4d", "avatarUrl": "/avatars/de544de899897fd0a83506ff287123bc.svg", "fullname": "Yuxiang Zhang", "isPro": false, "type": "user", "user": "TokerZ" } }, { "_id": "676a85a619a646a97e21e8e6", "hidden": false, "name": "Yuqi Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a85a619a646a97e21e8e7", "hidden": false, "name": "Jiangming Shu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a85a619a646a97e21e8e8", "hidden": false, "name": "Yuhang Wang", "status": "claimed_verified", "statusLastChangedAt": "2025-01-14T08:30:16.733Z", "user": { "_id": "6492a388bb63613026a34ccd", "avatarUrl": "/avatars/83352981ef3858db3655826a869ba730.svg", "fullname": "Yuhang Wang", "isPro": false, "type": "user", "user": "Rykeryh" } }, { "_id": "676a85a619a646a97e21e8e9", "hidden": false, "name": "Jinlin Xiao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a85a619a646a97e21e8ea", "hidden": false, "name": "Jitao Sang", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-22T04:21:30
OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning
OpenAI's recent introduction of Reinforcement Fine-Tuning (RFT) showcases the potential of reasoning foundation models and offers a new paradigm for fine-tuning beyond simple pattern imitation. This technical report presents OpenRFT, our attempt to fine-tune generalist reasoning models for domain-specific tasks under the same settings as RFT. OpenRFT addresses two key challenges, the lack of reasoning-step data and the limited quantity of training samples, by leveraging the domain-specific samples in three ways: question augmentation, synthesizing reasoning-process data, and few-shot ICL. The evaluation is conducted on SciKnowEval, where OpenRFT achieves notable performance gains with only 100 domain-specific samples per task. More experimental results will be updated continuously in later versions. Source code, datasets, and models are disclosed at: https://github.com/ADaM-BJTU/OpenRFT
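The three data-leverage strategies read naturally as a data-prep pipeline. Below is a toy sketch with stand-ins: option shuffling for question augmentation, a stubbed LLM call for reasoning synthesis, and a few-shot prompt builder; every function body is an illustrative assumption rather than the paper's implementation.

```python
import random

def augment(sample):
    """Cheap question augmentation: permute the answer options."""
    opts = sample["options"][:]
    random.shuffle(opts)
    return {**sample, "options": opts}

def synthesize_reasoning(sample):
    """Stub for synthesizing a reasoning trace with a stronger model."""
    return {**sample, "reasoning": f"<thought about {sample['question']}>"}

def few_shot_prompt(demos, query):
    """Build a few-shot ICL prompt from domain samples."""
    shots = "\n\n".join(f"Q: {d['question']}\nA: {d['answer']}" for d in demos)
    return f"{shots}\n\nQ: {query}\nA:"

sample = {"question": "Which gas do plants absorb?", "options": ["O2", "CO2"], "answer": "CO2"}
print(few_shot_prompt([synthesize_reasoning(augment(sample))], "What is H2O?"))
```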
9
676a85a719a646a97e21e92d
null
null
2024-12-24T04:45:51.603000
PC Agent: While You Sleep, AI Works -- A Cognitive Journey into Digital World
https://cdn-thumbnails.h…s/2412.17589.png
2
{ "_id": "616bfc2b40e2f69baa1c7add", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/616bfc2b40e2f69baa1c7add/Os7_qgMei-2lRVelrOG7B.jpeg", "followerCount": 10, "fullname": "Run-Ze Fan", "isHf": false, "isMod": false, "isPro": false, "name": "Vfrz", "type": "user" }
true
null
2412.17589
[ { "_id": "676a8186b88f71d971444400", "hidden": false, "name": "Yanheng He", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:34:38.504Z", "user": { "_id": "661b9ac57cfb7bcb3057a578", "avatarUrl": "/avatars/f8afaa8eaad3a1e5963a4feebec3f7ab.svg", "fullname": "Yanheng He", "isPro": false, "type": "user", "user": "henryhe0123" } }, { "_id": "676a8186b88f71d971444401", "hidden": false, "name": "Jiahe Jin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a8186b88f71d971444402", "hidden": false, "name": "Shijie Xia", "status": "claimed_verified", "statusLastChangedAt": "2025-02-06T14:15:31.725Z", "user": { "_id": "65900d4ff5a209eeac08b463", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/65900d4ff5a209eeac08b463/PJNNBRJIk1qR24oaRLTex.jpeg", "fullname": "shijie xia", "isPro": false, "type": "user", "user": "seven-cat" } }, { "_id": "676a8186b88f71d971444403", "hidden": false, "name": "Jiadi Su", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a8186b88f71d971444404", "hidden": false, "name": "Runze Fan", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:34:41.541Z", "user": { "_id": "616bfc2b40e2f69baa1c7add", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/616bfc2b40e2f69baa1c7add/Os7_qgMei-2lRVelrOG7B.jpeg", "fullname": "Run-Ze Fan", "isPro": false, "type": "user", "user": "Vfrz" } }, { "_id": "676a8186b88f71d971444405", "hidden": false, "name": "Haoyang Zou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a8186b88f71d971444406", "hidden": false, "name": "Xiangkun Hu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a8186b88f71d971444407", "hidden": false, "name": "Pengfei Liu", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-23T14:02:12
PC Agent: While You Sleep, AI Works -- A Cognitive Journey into Digital World
Imagine a world where AI can handle your work while you sleep - organizing your research materials, drafting a report, or creating a presentation you need for tomorrow. However, while current digital agents can perform simple tasks, they are far from capable of handling the complex real-world work that humans routinely perform. We present PC Agent, an AI system that demonstrates a crucial step toward this vision through human cognition transfer. Our key insight is that the path from executing simple "tasks" to handling complex "work" lies in efficiently capturing and learning from human cognitive processes during computer use. To validate this hypothesis, we introduce three key innovations: (1) PC Tracker, a lightweight infrastructure that efficiently collects high-quality human-computer interaction trajectories with complete cognitive context; (2) a two-stage cognition completion pipeline that transforms raw interaction data into rich cognitive trajectories by completing action semantics and thought processes; and (3) a multi-agent system combining a planning agent for decision-making with a grounding agent for robust visual grounding. Our preliminary experiments in PowerPoint presentation creation reveal that complex digital work capabilities can be achieved with a small amount of high-quality cognitive data - PC Agent, trained on just 133 cognitive trajectories, can handle sophisticated work scenarios involving up to 50 steps across multiple applications. This demonstrates the data efficiency of our approach, highlighting that the key to training capable digital agents lies in collecting human cognitive data. By open-sourcing our complete framework, including the data collection infrastructure and cognition completion methods, we aim to lower the barriers for the research community to develop truly capable digital agents.
12
676a8188b88f71d971444477
null
null
2024-12-24T04:00:22.597000
Agent-SafetyBench: Evaluating the Safety of LLM Agents
https://cdn-thumbnails.h…s/2412.14470.png
2
{ "_id": "61b58aa0d65058ce70beb98c", "avatarUrl": "/avatars/aefd9271b891abc6dd2ded1a30eebca4.svg", "followerCount": 1, "fullname": "Zhexin Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "nonstopfor", "type": "user" }
true
null
2412.14470
[ { "_id": "676a77cf5d76485cb34417b3", "hidden": false, "name": "Zhexin Zhang", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:34:43.539Z", "user": { "_id": "61b58aa0d65058ce70beb98c", "avatarUrl": "/avatars/aefd9271b891abc6dd2ded1a30eebca4.svg", "fullname": "Zhexin Zhang", "isPro": false, "type": "user", "user": "nonstopfor" } }, { "_id": "676a77cf5d76485cb34417b4", "hidden": false, "name": "Shiyao Cui", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a77cf5d76485cb34417b5", "hidden": false, "name": "Yida Lu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a77cf5d76485cb34417b6", "hidden": false, "name": "Jingzhuo Zhou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a77cf5d76485cb34417b7", "hidden": false, "name": "Junxiao Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a77cf5d76485cb34417b8", "hidden": false, "name": "Hongning Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a77cf5d76485cb34417b9", "hidden": false, "name": "Minlie Huang", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-19T02:35:15
Agent-SafetyBench: Evaluating the Safety of LLM Agents
As large language models (LLMs) are increasingly deployed as agents, their integration into interactive environments and tool use introduce new safety challenges beyond those associated with the models themselves. However, the absence of comprehensive benchmarks for evaluating agent safety presents a significant barrier to effective assessment and further improvement. In this paper, we introduce Agent-SafetyBench, a comprehensive benchmark designed to evaluate the safety of LLM agents. Agent-SafetyBench encompasses 349 interaction environments and 2,000 test cases, evaluating 8 categories of safety risks and covering 10 common failure modes frequently encountered in unsafe interactions. Our evaluation of 16 popular LLM agents reveals a concerning result: none of the agents achieves a safety score above 60%. This highlights significant safety challenges in LLM agents and underscores the considerable need for improvement. Through quantitative analysis, we identify critical failure modes and summarize two fundamental safety defects in current LLM agents: lack of robustness and lack of risk awareness. Furthermore, our findings suggest that reliance on defense prompts alone is insufficient to address these safety issues, emphasizing the need for more advanced and robust strategies. We release Agent-SafetyBench at https://github.com/thu-coai/Agent-SafetyBench to facilitate further research and innovation in agent safety evaluation and improvement.
12
676a77d15d76485cb3441886
null
null
2024-12-24T03:56:19.269000
Friends-MMC: A Dataset for Multi-modal Multi-party Conversation Understanding
https://cdn-thumbnails.h…s/2412.17295.png
2
{ "_id": "60b9e6837946aff342f734ae", "avatarUrl": "/avatars/a711a6aa35757dfd7b78b26098a964fc.svg", "followerCount": 3, "fullname": "Yuxuan Wang", "isHf": false, "isMod": false, "isPro": false, "name": "ColorfulAI", "type": "user" }
true
null
2412.17295
[ { "_id": "676a3878eabbef01bb66f7f8", "hidden": false, "name": "Yueqian Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a3878eabbef01bb66f7f9", "hidden": false, "name": "Xiaojun Meng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a3878eabbef01bb66f7fa", "hidden": false, "name": "Yuxuan Wang", "status": "extracted_confirmed", "statusLastChangedAt": "2025-01-14T09:25:44.845Z", "user": { "_id": "60b9e6837946aff342f734ae", "avatarUrl": "/avatars/a711a6aa35757dfd7b78b26098a964fc.svg", "fullname": "Yuxuan Wang", "isPro": false, "type": "user", "user": "ColorfulAI" } }, { "_id": "676a3878eabbef01bb66f7fb", "hidden": false, "name": "Jianxin Liang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a3878eabbef01bb66f7fc", "hidden": false, "name": "Qun Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a3878eabbef01bb66f7fd", "hidden": false, "name": "Dongyan Zhao", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-23T05:32:48
Friends-MMC: A Dataset for Multi-modal Multi-party Conversation Understanding
Multi-modal multi-party conversation (MMC) is a less studied yet important research topic because it fits real-world scenarios well and thus has potentially wider applications. Compared with traditional multi-modal conversations, MMC requires stronger character-centered understanding abilities, as many interlocutors appear in both the visual and textual context. To facilitate the study of this problem, we present Friends-MMC, an MMC dataset that contains 24,000+ unique utterances paired with video context. To explore character-centered understanding of the dialogue, we also annotate the speaker of each utterance, as well as the names and bounding boxes of faces that appear in the video. Based on the Friends-MMC dataset, we further study two fundamental MMC tasks: conversation speaker identification and conversation response prediction, both of which have a multi-party nature with video or image as the visual context. For conversation speaker identification, we demonstrate the inefficiencies of existing methods such as pre-trained models, and propose a simple yet effective baseline that leverages an optimization solver to combine the context of the two modalities for better performance. For conversation response prediction, we fine-tune generative dialogue models on Friends-MMC and analyze the benefits of speaker information. The code and dataset are publicly available at https://github.com/yellow-binary-tree/Friends-MMC; we thus call for more attention to modeling speaker information when understanding conversations.
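To illustrate the solver-based baseline at a high level: given a score for each (utterance, visible face) pair, pick a globally consistent assignment with an optimization solver instead of independent per-utterance argmaxes. The sketch below uses the Hungarian algorithm with a one-to-one constraint as a toy simplification; the paper's actual solver, constraints, and two-modality score combination differ.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy score matrix: rows are utterances, columns are candidate speakers
# (higher = better). A real system would combine visual and textual model scores.
scores = np.random.rand(3, 3)

rows, cols = linear_sum_assignment(-scores)  # negate to maximize total score
for utt, spk in zip(rows, cols):
    print(f"utterance {utt} -> speaker {spk}")
```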
9
676a387eeabbef01bb66ff15
null
null
2024-12-24T02:25:15.795000
LearnLM: Improving Gemini for Learning
https://cdn-thumbnails.h…s/2412.16429.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2412.16429
[ { "_id": "676a61bda89cd26e3da31478", "hidden": false, "name": "LearnLM Team", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31479", "hidden": false, "name": "Abhinit Modi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da3147a", "hidden": false, "name": "Aditya Srikanth Veerubhotla", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da3147b", "hidden": false, "name": "Aliya Rysbek", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da3147c", "hidden": false, "name": "Andrea Huber", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da3147d", "hidden": false, "name": "Brett Wiltshire", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da3147e", "hidden": false, "name": "Brian Veprek", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da3147f", "hidden": false, "name": "Daniel Gillick", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31480", "hidden": false, "name": "Daniel Kasenberg", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31481", "hidden": false, "name": "Derek Ahmed", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31482", "hidden": false, "name": "Irina Jurenka", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31483", "hidden": false, "name": "James Cohan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31484", "hidden": false, "name": "Jennifer She", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31485", "hidden": false, "name": "Julia Wilkowski", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31486", "hidden": false, "name": "Kaiz Alarakyia", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31487", "hidden": false, "name": "Kevin McKee", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31488", "hidden": false, "name": "Lisa Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31489", "hidden": false, "name": "Markus Kunesch", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da3148a", "hidden": false, "name": "Mike Schaekermann", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da3148b", "hidden": false, "name": "Miruna Pîslar", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da3148c", "hidden": false, "name": "Nikhil Joshi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da3148d", "hidden": false, "name": "Parsa Mahmoudieh", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da3148e", "hidden": false, "name": "Paul Jhun", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da3148f", "hidden": false, "name": "Sara Wiltberger", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31490", "hidden": false, "name": "Shakir Mohamed", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31491", 
"hidden": false, "name": "Shashank Agarwal", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31492", "hidden": false, "name": "Shubham Milind Phal", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31493", "hidden": false, "name": "Sun Jae Lee", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31494", "hidden": false, "name": "Theofilos Strinopoulos", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31495", "hidden": false, "name": "Wei-Jen Ko", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31496", "hidden": false, "name": "Amy Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31497", "hidden": false, "name": "Ankit Anand", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31498", "hidden": false, "name": "Avishkar Bhoopchand", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da31499", "hidden": false, "name": "Dan Wild", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da3149a", "hidden": false, "name": "Divya Pandya", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da3149b", "hidden": false, "name": "Filip Bar", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da3149c", "hidden": false, "name": "Garth Graham", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da3149d", "hidden": false, "name": "Holger Winnemoeller", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da3149e", "hidden": false, "name": "Mahvish Nagda", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da3149f", "hidden": false, "name": "Prateek Kolhar", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da314a0", "hidden": false, "name": "Renee Schneider", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da314a1", "hidden": false, "name": "Shaojian Zhu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da314a2", "hidden": false, "name": "Stephanie Chan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da314a3", "hidden": false, "name": "Steve Yadlowsky", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da314a4", "hidden": false, "name": "Viknesh Sounderajah", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a61bda89cd26e3da314a5", "hidden": false, "name": "Yannis Assael", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-21T01:34:05
LearnLM: Improving Gemini for Learning
Today's generative AI systems are tuned to present information by default rather than engage users in service of learning as a human tutor would. To address the wide range of potential education use cases for these systems, we reframe the challenge of injecting pedagogical behavior as one of pedagogical instruction following, where training and evaluation examples include system-level instructions describing the specific pedagogy attributes present or desired in subsequent model turns. This framing avoids committing our models to any particular definition of pedagogy, and instead allows teachers or developers to specify desired model behavior. It also clears a path to improving Gemini models for learning -- by enabling the addition of our pedagogical data to post-training mixtures -- alongside their rapidly expanding set of capabilities. Both represent important changes from our initial tech report. We show how training with pedagogical instruction following produces a LearnLM model (available on Google AI Studio) that is preferred substantially by expert raters across a diverse set of learning scenarios, with average preference strengths of 31% over GPT-4o, 11% over Claude 3.5, and 13% over the Gemini 1.5 Pro model LearnLM was based on.
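The "pedagogical instruction following" framing is easiest to see as a data format. Below is a purely illustrative sketch of what one such training example might look like; the field names and attribute vocabulary are assumptions for illustration, not the paper's actual schema.

```python
# Hypothetical example of pedagogical instruction following: a system-level
# instruction states the desired pedagogy attributes, and the model turns
# that follow are judged against it. All field names here are illustrative
# assumptions, not LearnLM's real data schema.
example = {
    "system_instruction": (
        "You are a tutor. Do not reveal the final answer; ask one guiding "
        "question per turn and adapt to the student's level."
    ),
    "pedagogy_attributes": ["withhold_answer", "guide_with_questions"],
    "turns": [
        {"role": "student", "content": "Why is the sky blue?"},
        {"role": "tutor", "content": "What do you already know about how "
                                     "sunlight interacts with air?"},
    ],
}

# Such examples can be mixed into a post-training data mixture, letting
# teachers or developers specify behavior without a fixed definition of
# pedagogy baked into the model.
print(example["system_instruction"])
```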
22
676a61bfa89cd26e3da3150e
null
null
2024-12-24T02:18:55.342000
DRT-o1: Optimized Deep Reasoning Translation via Long Chain-of-Thought
https://cdn-thumbnails.h…s/2412.17498.png
4
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
true
null
2412.17498
[ { "_id": "676a1e8facedf3baab442be0", "hidden": false, "name": "Jiaan Wang", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:35:48.135Z", "user": { "_id": "6051e3f145db307eddc0c962", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1676443438507-6051e3f145db307eddc0c962.jpeg", "fullname": "Jiaan Wang", "isPro": false, "type": "user", "user": "Krystalan" } }, { "_id": "676a1e8facedf3baab442be1", "hidden": false, "name": "Fandong Meng", "status": "claimed_verified", "statusLastChangedAt": "2025-01-05T23:03:09.771Z", "user": { "_id": "64cb254871a7bbb60c17d5fa", "avatarUrl": "/avatars/5121fd5b7b55d275eba3947f3f4c034d.svg", "fullname": "Fandong Meng", "isPro": false, "type": "user", "user": "fandong" } }, { "_id": "676a1e8facedf3baab442be2", "hidden": false, "name": "Yunlong Liang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a1e8facedf3baab442be3", "hidden": false, "name": "Jie Zhou", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-23T11:55:33
DRT-o1: Optimized Deep Reasoning Translation via Long Chain-of-Thought
Recently, O1-like models have emerged, illustrating the effectiveness of long chain-of-thought (CoT) in reasoning tasks such as math and coding. In this paper, we introduce DRT-o1, an attempt to bring the success of long CoT to neural machine translation (MT). Specifically, literary texts often involve similes and metaphors, and translating them into a target language is difficult in practice due to cultural differences. In such cases, literal translation often fails to convey the intended meaning effectively. Even for professional human translators, considerable thought must be given to preserving semantics throughout the translation process. To simulate LLMs' long-thought ability in MT, we first mine sentences containing similes or metaphors from existing literary works, and then develop a multi-agent framework to translate these sentences via long thought. In the multi-agent framework, a translator iteratively translates the source sentence under suggestions provided by an advisor. To ensure the effectiveness of the long thoughts, an evaluator is also employed to judge whether the translation in the current round improves on the previous one. In this manner, we collect tens of thousands of long-thought MT examples, which are used to train our DRT-o1. Experimental results on literature translation demonstrate the effectiveness of DRT-o1. Using Qwen2.5-7B and Qwen2.5-14B as the backbones, DRT-o1 brings improvements of 7.33-8.26 BLEU and 1.66-3.36 CometScore. Moreover, DRT-o1-7B outperforms QwQ-32B-Preview by 7.82 BLEU and 1.46 CometScore, further showing its effectiveness. The project is available at https://github.com/krystalan/DRT-o1
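The translator-advisor-evaluator loop described above can be summarized in a few lines. The sketch below is a minimal reconstruction under stated assumptions: llm() is a hypothetical stand-in for any chat-model call, and the prompts are illustrative, not the paper's actual templates.

```python
# Minimal sketch of the multi-agent long-thought data synthesis loop:
# a translator drafts, an advisor critiques, and an evaluator accepts a
# revision only if it improves on the previous round.

def llm(prompt: str) -> str:
    # Hypothetical stand-in; plug in a real chat-model call here.
    raise NotImplementedError

def long_thought_translate(source: str, max_rounds: int = 5) -> list[str]:
    """Collect one long-thought trajectory for a simile/metaphor sentence."""
    draft = llm(f"Translate, preserving the figurative meaning:\n{source}")
    trajectory = [draft]
    for _ in range(max_rounds):
        advice = llm(f"Source: {source}\nDraft: {draft}\nSuggest improvements.")
        revised = llm(f"Source: {source}\nDraft: {draft}\n"
                      f"Advice: {advice}\nRevise the translation.")
        verdict = llm(f"Source: {source}\nOld: {draft}\nNew: {revised}\n"
                      "Is the new translation better? Answer yes or no.")
        if not verdict.strip().lower().startswith("yes"):
            break  # evaluator sees no further improvement
        draft = revised
        trajectory.append(draft)
    return trajectory  # serialized as long-CoT training data
```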
22
676a1e90acedf3baab442c22
null
null
2024-12-24T02:15:11.940000
OpenAI o1 System Card
https://cdn-thumbnails.h…s/2412.16720.png
2
{ "_id": "60f1abe7544c2adfd699860c", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg", "followerCount": 6280, "fullname": "AK", "isHf": true, "isMod": false, "isPro": false, "name": "akhaliq", "type": "user" }
false
null
2412.16720
[ { "_id": "676a5f1127c8341daf37c83b", "hidden": false, "name": "OpenAI", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c83d", "hidden": false, "name": "Aaron Jaech", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c83e", "hidden": false, "name": "Adam Kalai", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c83f", "hidden": false, "name": "Adam Lerer", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c840", "hidden": false, "name": "Adam Richardson", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c841", "hidden": false, "name": "Ahmed El-Kishky", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c842", "hidden": false, "name": "Aiden Low", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c843", "hidden": false, "name": "Alec Helyar", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c844", "hidden": false, "name": "Aleksander Madry", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c845", "hidden": false, "name": "Alex Beutel", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c846", "hidden": false, "name": "Alex Carney", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c847", "hidden": false, "name": "Alex Iftimie", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c848", "hidden": false, "name": "Alex Karpenko", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c849", "hidden": false, "name": "Alex Tachard Passos", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c84a", "hidden": false, "name": "Alexander Neitz", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c84b", "hidden": false, "name": "Alexander Prokofiev", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c84c", "hidden": false, "name": "Alexander Wei", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c84d", "hidden": false, "name": "Allison Tam", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c84e", "hidden": false, "name": "Ally Bennett", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c84f", "hidden": false, "name": "Ananya Kumar", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c850", "hidden": false, "name": "Andre Saraiva", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c851", "hidden": false, "name": "Andrea Vallone", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c852", "hidden": false, "name": "Andrew Duberstein", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c853", "hidden": false, "name": "Andrew Kondrich", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c854", "hidden": false, "name": "Andrey Mishchenko", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c855", "hidden": 
false, "name": "Andy Applebaum", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c856", "hidden": false, "name": "Angela Jiang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c857", "hidden": false, "name": "Ashvin Nair", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c858", "hidden": false, "name": "Barret Zoph", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c859", "hidden": false, "name": "Behrooz Ghorbani", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c85a", "hidden": false, "name": "Ben Rossen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c85b", "hidden": false, "name": "Benjamin Sokolowsky", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c85c", "hidden": false, "name": "Boaz Barak", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c85d", "hidden": false, "name": "Bob McGrew", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c85e", "hidden": false, "name": "Borys Minaiev", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c85f", "hidden": false, "name": "Botao Hao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c860", "hidden": false, "name": "Bowen Baker", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c861", "hidden": false, "name": "Brandon Houghton", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c862", "hidden": false, "name": "Brandon McKinzie", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c863", "hidden": false, "name": "Brydon Eastman", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c864", "hidden": false, "name": "Camillo Lugaresi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c865", "hidden": false, "name": "Cary Bassin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c866", "hidden": false, "name": "Cary Hudson", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c867", "hidden": false, "name": "Chak Ming Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c868", "hidden": false, "name": "Charles de Bourcy", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c869", "hidden": false, "name": "Chelsea Voss", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c86a", "hidden": false, "name": "Chen Shen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c86b", "hidden": false, "name": "Chong Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c86c", "hidden": false, "name": "Chris Koch", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c86d", "hidden": false, "name": "Chris Orsinger", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c86e", "hidden": false, "name": "Christopher Hesse", "status": null, 
"statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c86f", "hidden": false, "name": "Claudia Fischer", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c870", "hidden": false, "name": "Clive Chan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c871", "hidden": false, "name": "Dan Roberts", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c872", "hidden": false, "name": "Daniel Kappler", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c873", "hidden": false, "name": "Daniel Levy", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c874", "hidden": false, "name": "Daniel Selsam", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c875", "hidden": false, "name": "David Dohan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c876", "hidden": false, "name": "David Farhi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c877", "hidden": false, "name": "David Mely", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c878", "hidden": false, "name": "David Robinson", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c879", "hidden": false, "name": "Dimitris Tsipras", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c87a", "hidden": false, "name": "Doug Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c87b", "hidden": false, "name": "Dragos Oprica", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c87c", "hidden": false, "name": "Eben Freeman", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c87d", "hidden": false, "name": "Eddie Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c87e", "hidden": false, "name": "Edmund Wong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c87f", "hidden": false, "name": "Elizabeth Proehl", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c880", "hidden": false, "name": "Enoch Cheung", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c881", "hidden": false, "name": "Eric Mitchell", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c882", "hidden": false, "name": "Eric Wallace", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c883", "hidden": false, "name": "Erik Ritter", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c884", "hidden": false, "name": "Evan Mays", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c885", "hidden": false, "name": "Fan Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c886", "hidden": false, "name": "Felipe Petroski Such", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c887", "hidden": false, "name": "Filippo Raso", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": 
"676a5f1127c8341daf37c888", "hidden": false, "name": "Florencia Leoni", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c889", "hidden": false, "name": "Foivos Tsimpourlas", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c88a", "hidden": false, "name": "Francis Song", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c88b", "hidden": false, "name": "Fred von Lohmann", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c88c", "hidden": false, "name": "Freddie Sulit", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c88d", "hidden": false, "name": "Geoff Salmon", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c88e", "hidden": false, "name": "Giambattista Parascandolo", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c88f", "hidden": false, "name": "Gildas Chabot", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c890", "hidden": false, "name": "Grace Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c891", "hidden": false, "name": "Greg Brockman", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c892", "hidden": false, "name": "Guillaume Leclerc", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c893", "hidden": false, "name": "Hadi Salman", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c894", "hidden": false, "name": "Haiming Bao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c895", "hidden": false, "name": "Hao Sheng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c896", "hidden": false, "name": "Hart Andrin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c897", "hidden": false, "name": "Hessam Bagherinezhad", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c898", "hidden": false, "name": "Hongyu Ren", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c899", "hidden": false, "name": "Hunter Lightman", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c89a", "hidden": false, "name": "Hyung Won Chung", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c89b", "hidden": false, "name": "Ian Kivlichan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c89c", "hidden": false, "name": "Ian O'Connell", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c89d", "hidden": false, "name": "Ian Osband", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c89e", "hidden": false, "name": "Ignasi Clavera Gilaberte", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c89f", "hidden": false, "name": "Ilge Akkaya", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8a0", "hidden": false, "name": "Ilya Kostrikov", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8a1", 
"hidden": false, "name": "Ilya Sutskever", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8a2", "hidden": false, "name": "Irina Kofman", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8a3", "hidden": false, "name": "Jakub Pachocki", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8a4", "hidden": false, "name": "James Lennon", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8a5", "hidden": false, "name": "Jason Wei", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8a6", "hidden": false, "name": "Jean Harb", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8a7", "hidden": false, "name": "Jerry Twore", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8a8", "hidden": false, "name": "Jiacheng Feng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8a9", "hidden": false, "name": "Jiahui Yu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8aa", "hidden": false, "name": "Jiayi Weng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8ab", "hidden": false, "name": "Jie Tang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8ac", "hidden": false, "name": "Jieqi Yu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8ad", "hidden": false, "name": "Joaquin Quiñonero Candela", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8ae", "hidden": false, "name": "Joe Palermo", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8af", "hidden": false, "name": "Joel Parish", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8b0", "hidden": false, "name": "Johannes Heidecke", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8b1", "hidden": false, "name": "John Hallman", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8b2", "hidden": false, "name": "John Rizzo", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8b3", "hidden": false, "name": "Jonathan Gordon", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8b4", "hidden": false, "name": "Jonathan Uesato", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8b5", "hidden": false, "name": "Jonathan Uesato", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8b6", "hidden": false, "name": "Jonathan Ward", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8b7", "hidden": false, "name": "Joost Huizinga", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8b8", "hidden": false, "name": "Julie Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8b9", "hidden": false, "name": "Kai Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8ba", "hidden": false, "name": "Kai Xiao", "status": null, 
"statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8bb", "hidden": false, "name": "Karan Singhal", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8bc", "hidden": false, "name": "Karina Nguyen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8bd", "hidden": false, "name": "Karl Cobbe", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8be", "hidden": false, "name": "Katy Shi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8bf", "hidden": false, "name": "Kayla Wood", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8c0", "hidden": false, "name": "Kendra Rimbach", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8c1", "hidden": false, "name": "Keren Gu-Lemberg", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8c2", "hidden": false, "name": "Keren GuLemberg", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8c3", "hidden": false, "name": "Kevin Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8c4", "hidden": false, "name": "Kevin Lu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8c5", "hidden": false, "name": "Kevin Stone", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8c6", "hidden": false, "name": "Kevin Yu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8c7", "hidden": false, "name": "Lama Ahmad", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8c8", "hidden": false, "name": "Lauren Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8c9", "hidden": false, "name": "Leo Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8ca", "hidden": false, "name": "Leon Maksin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8cb", "hidden": false, "name": "Leyton Ho", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8cc", "hidden": false, "name": "Liam Fedus", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8cd", "hidden": false, "name": "Lilian Weng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8ce", "hidden": false, "name": "Linden Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8cf", "hidden": false, "name": "Lindsay McCallum", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8d0", "hidden": false, "name": "Lindsey Held", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8d1", "hidden": false, "name": "Lorenz Kuhn", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8d2", "hidden": false, "name": "Lukas Kondraciuk", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8d3", "hidden": false, "name": "Lukasz Kaiser", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8d4", "hidden": false, 
"name": "Luke Metz", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8d5", "hidden": false, "name": "Madelaine Boyd", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8d6", "hidden": false, "name": "Maja Trebacz", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8d7", "hidden": false, "name": "Manas Joglekar", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8d8", "hidden": false, "name": "Mark Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8d9", "hidden": false, "name": "Marko Tintor", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8da", "hidden": false, "name": "Mason Meyer", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8db", "hidden": false, "name": "Matt Jones", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8dc", "hidden": false, "name": "Matt Kaufer", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8dd", "hidden": false, "name": "Max Schwarzer", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8de", "hidden": false, "name": "Meghan Shah", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8df", "hidden": false, "name": "Mehmet Yatbaz", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8e0", "hidden": false, "name": "Melody Guan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8e1", "hidden": false, "name": "Mengyuan Xu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8e2", "hidden": false, "name": "Mengyuan Yan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8e3", "hidden": false, "name": "Mia Glaese", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8e4", "hidden": false, "name": "Mianna Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8e5", "hidden": false, "name": "Mianna Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8e6", "hidden": false, "name": "Michael Lampe", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8e7", "hidden": false, "name": "Michael Malek", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8e8", "hidden": false, "name": "Michele Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8e9", "hidden": false, "name": "Michelle Fradin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8ea", "hidden": false, "name": "Mike McClay", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8eb", "hidden": false, "name": "Mikhail Pavlov", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8ec", "hidden": false, "name": "Miles Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8ed", "hidden": false, "name": "Mingxuan Wang", "status": null, "statusLastChangedAt": null, "user": null }, 
{ "_id": "676a5f1127c8341daf37c8ee", "hidden": false, "name": "Mira Murati", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8ef", "hidden": false, "name": "Mo Bavarian", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8f0", "hidden": false, "name": "Mostafa Rohaninejad", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8f1", "hidden": false, "name": "Nat McAleese", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8f2", "hidden": false, "name": "Neil Chowdhury", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8f3", "hidden": false, "name": "Neil Chowdhury", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8f4", "hidden": false, "name": "Nick Ryder", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8f5", "hidden": false, "name": "Nikolas Tezak", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8f6", "hidden": false, "name": "Noam Brown", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8f7", "hidden": false, "name": "Ofir Nachum", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8f8", "hidden": false, "name": "Oleg Boiko", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8f9", "hidden": false, "name": "Oleg Murk", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8fa", "hidden": false, "name": "Olivia Watkins", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8fb", "hidden": false, "name": "Patrick Chao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8fc", "hidden": false, "name": "Paul Ashbourne", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8fd", "hidden": false, "name": "Pavel Izmailov", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8fe", "hidden": false, "name": "Peter Zhokhov", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c8ff", "hidden": false, "name": "Rachel Dias", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c900", "hidden": false, "name": "Rahul Arora", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c901", "hidden": false, "name": "Randall Lin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c902", "hidden": false, "name": "Rapha Gontijo Lopes", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c903", "hidden": false, "name": "Raz Gaon", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c904", "hidden": false, "name": "Reah Miyara", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c905", "hidden": false, "name": "Reimar Leike", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c906", "hidden": false, "name": "Renny Hwang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c907", "hidden": false, "name": "Rhythm 
Garg", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c908", "hidden": false, "name": "Robin Brown", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c909", "hidden": false, "name": "Roshan James", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c90a", "hidden": false, "name": "Rui Shu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c90b", "hidden": false, "name": "Ryan Cheu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c90c", "hidden": false, "name": "Ryan Greene", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c90d", "hidden": false, "name": "Saachi Jain", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c90e", "hidden": false, "name": "Sam Altman", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c90f", "hidden": false, "name": "Sam Toizer", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c910", "hidden": false, "name": "Sam Toyer", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c911", "hidden": false, "name": "Samuel Miserendino", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c912", "hidden": false, "name": "Sandhini Agarwal", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c913", "hidden": false, "name": "Santiago Hernandez", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c914", "hidden": false, "name": "Sasha Baker", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c915", "hidden": false, "name": "Scott McKinney", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c916", "hidden": false, "name": "Scottie Yan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c917", "hidden": false, "name": "Shengjia Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c918", "hidden": false, "name": "Shengli Hu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c919", "hidden": false, "name": "Shibani Santurkar", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c91a", "hidden": false, "name": "Shraman Ray Chaudhuri", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c91b", "hidden": false, "name": "Shuyuan Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c91c", "hidden": false, "name": "Siyuan Fu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c91d", "hidden": false, "name": "Spencer Papay", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c91e", "hidden": false, "name": "Steph Lin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c91f", "hidden": false, "name": "Suchir Balaji", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c920", "hidden": false, "name": "Suvansh Sanjeev", "status": null, "statusLastChangedAt": null, "user": null }, 
{ "_id": "676a5f1127c8341daf37c921", "hidden": false, "name": "Szymon Sidor", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c922", "hidden": false, "name": "Tal Broda", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c923", "hidden": false, "name": "Aidan Clark", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c924", "hidden": false, "name": "Tao Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c925", "hidden": false, "name": "Taylor Gordon", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c926", "hidden": false, "name": "Ted Sanders", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c927", "hidden": false, "name": "Tejal Patwardhan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c928", "hidden": false, "name": "Thibault Sottiaux", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c929", "hidden": false, "name": "Thomas Degry", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c92a", "hidden": false, "name": "Thomas Dimson", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c92b", "hidden": false, "name": "Tianhao Zheng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c92c", "hidden": false, "name": "Timur Garipov", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c92d", "hidden": false, "name": "Tom Stasi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c92e", "hidden": false, "name": "Trapit Bansal", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c92f", "hidden": false, "name": "Trevor Creech", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c930", "hidden": false, "name": "Troy Peterson", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c931", "hidden": false, "name": "Tyna Eloundou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c932", "hidden": false, "name": "Valerie Qi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c933", "hidden": false, "name": "Vineet Kosaraju", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c934", "hidden": false, "name": "Vinnie Monaco", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c935", "hidden": false, "name": "Vitchyr Pong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c936", "hidden": false, "name": "Vlad Fomenko", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c937", "hidden": false, "name": "Weiyi Zheng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c938", "hidden": false, "name": "Wenda Zhou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c939", "hidden": false, "name": "Wes McCabe", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c93a", "hidden": false, "name": "Wojciech 
Zaremba", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c93b", "hidden": false, "name": "Yann Dubois", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c93c", "hidden": false, "name": "Yinghai Lu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c93d", "hidden": false, "name": "Yining Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c93e", "hidden": false, "name": "Young Cha", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c93f", "hidden": false, "name": "Yu Bai", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c940", "hidden": false, "name": "Yuchen He", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c941", "hidden": false, "name": "Yuchen Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c942", "hidden": false, "name": "Yunyun Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c943", "hidden": false, "name": "Zheng Shao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5f1127c8341daf37c944", "hidden": false, "name": "Zhuohan Li", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-21T18:04:31
OpenAI o1 System Card
The o1 model series is trained with large-scale reinforcement learning to reason using chain of thought. These advanced reasoning capabilities provide new avenues for improving the safety and robustness of our models. In particular, our models can reason about our safety policies in context when responding to potentially unsafe prompts, through deliberative alignment. This leads to state-of-the-art performance on certain benchmarks for risks such as generating illicit advice, choosing stereotyped responses, and succumbing to known jailbreaks. Training models to incorporate a chain of thought before answering has the potential to unlock substantial benefits, while also increasing potential risks that stem from heightened intelligence. Our results underscore the need for building robust alignment methods, extensively stress-testing their efficacy, and maintaining meticulous risk management protocols. This report outlines the safety work carried out for the OpenAI o1 and OpenAI o1-mini models, including safety evaluations, external red teaming, and Preparedness Framework evaluations.
31
676a5f1427c8341daf37c9c1
null
null
2024-12-24T02:03:44.880000
Large Motion Video Autoencoding with Cross-modal Video VAE
https://cdn-thumbnails.h…s/2412.17805.png
3
{ "_id": "630231bf7e137e3d6b3b0645", "avatarUrl": "/avatars/dad52f8955110f0a2caeb613d6aa3ea2.svg", "followerCount": 3, "fullname": "He", "isHf": false, "isMod": false, "isPro": false, "name": "Yingqing", "type": "user" }
false
[ "https://cdn-uploads.huggingface.co/production/uploads/630231bf7e137e3d6b3b0645/TYdqz6KUOeiRZ2ZAYaiyq.mp4" ]
2412.17805
[ { "_id": "676a5396207235e4b020dfda", "hidden": false, "name": "Yazhou Xing", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5396207235e4b020dfdb", "hidden": false, "name": "Yang Fei", "status": "claimed_verified", "statusLastChangedAt": "2025-02-04T09:39:56.502Z", "user": { "_id": "648ffe95669191ccb6772a2e", "avatarUrl": "/avatars/025cac7dc64ea9ef2074754c92086baa.svg", "fullname": "Yang Fei", "isPro": false, "type": "user", "user": "sunfly" } }, { "_id": "676a5396207235e4b020dfdc", "hidden": false, "name": "Yingqing He", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5396207235e4b020dfdd", "hidden": false, "name": "Jingye Chen", "status": "claimed_verified", "statusLastChangedAt": "2025-01-03T14:05:38.574Z", "user": { "_id": "6478a982256b62e219917d67", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/PUJ-N2cQxgEmDGfyjajyA.jpeg", "fullname": "JingyeChen22", "isPro": false, "type": "user", "user": "JingyeChen22" } }, { "_id": "676a5396207235e4b020dfde", "hidden": false, "name": "Jiaxin Xie", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5396207235e4b020dfdf", "hidden": false, "name": "Xiaowei Chi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a5396207235e4b020dfe0", "hidden": false, "name": "Qifeng Chen", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-23T18:58:24
Large Motion Video Autoencoding with Cross-modal Video VAE
Learning a robust video Variational Autoencoder (VAE) is essential for reducing video redundancy and facilitating efficient video generation. Directly applying image VAEs to individual frames in isolation can result in temporal inconsistencies and suboptimal compression rates due to a lack of temporal compression. Existing Video VAEs have begun to address temporal compression; however, they often suffer from inadequate reconstruction performance. In this paper, we present a novel and powerful video autoencoder capable of high-fidelity video encoding. First, we observe that entangling spatial and temporal compression by merely extending the image VAE to a 3D VAE can introduce motion blur and detail distortion artifacts. Thus, we propose temporal-aware spatial compression to better encode and decode the spatial information. Additionally, we integrate a lightweight motion compression model for further temporal compression. Second, we propose to leverage the textual information inherent in text-to-video datasets and incorporate text guidance into our model. This significantly enhances reconstruction quality, particularly in terms of detail preservation and temporal stability. Third, we further improve the versatility of our model through joint training on both images and videos, which not only enhances reconstruction quality but also enables the model to perform both image and video autoencoding. Extensive evaluations against strong recent baselines demonstrate the superior performance of our method. The project website can be found at https://yzxing87.github.io/vae/.
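To make the two-stage compression concrete, here is a rough structural sketch, not the authors' code: a temporal-aware spatial encoder followed by a lightweight motion compressor, with all module shapes chosen arbitrarily for illustration (text guidance and the decoder are omitted for brevity).

```python
# Illustrative (2+1)D-style sketch of the pipeline: spatial compression that
# still sees neighboring frames, then lightweight temporal compression.
import torch
import torch.nn as nn

class TemporalAwareSpatialEncoder(nn.Module):
    """Spatial 4x downsampling with a temporal conv mixing nearby frames."""
    def __init__(self, c_in=3, c_lat=8):
        super().__init__()
        self.spatial = nn.Conv3d(c_in, 64, (1, 4, 4), stride=(1, 4, 4))
        self.temporal = nn.Conv3d(64, c_lat, (3, 1, 1), padding=(1, 0, 0))

    def forward(self, video):  # video: (B, C, T, H, W)
        return self.temporal(self.spatial(video))

class MotionCompressor(nn.Module):
    """Lightweight temporal 4x downsampling of the spatial latents."""
    def __init__(self, c_lat=8):
        super().__init__()
        self.down_t = nn.Conv3d(c_lat, c_lat, (4, 1, 1), stride=(4, 1, 1))

    def forward(self, lat):
        return self.down_t(lat)

video = torch.randn(1, 3, 16, 64, 64)  # B, C, T, H, W
z = MotionCompressor()(TemporalAwareSpatialEncoder()(video))
print(z.shape)  # torch.Size([1, 8, 4, 16, 16]): compressed in space and time
```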
24
676a5398207235e4b020e097
null
null
2024-12-24T00:41:11.109000
Outcome-Refining Process Supervision for Code Generation
https://cdn-thumbnails.h…s/2412.15118.png
2
{ "_id": "61e24808b31e7cc38eb84d37", "avatarUrl": "/avatars/65fbea940fad211462ecc5ad725e0c28.svg", "followerCount": 1, "fullname": "Zhuohao Yu", "isHf": false, "isMod": false, "isPro": false, "name": "zhuohaoyu", "type": "user" }
true
null
2412.15118
[ { "_id": "676a444d9315022860ac1f0e", "hidden": false, "name": "Zhuohao Yu", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:34:53.843Z", "user": { "_id": "61e24808b31e7cc38eb84d37", "avatarUrl": "/avatars/65fbea940fad211462ecc5ad725e0c28.svg", "fullname": "Zhuohao Yu", "isPro": false, "type": "user", "user": "zhuohaoyu" } }, { "_id": "676a444d9315022860ac1f0f", "hidden": false, "name": "Weizheng Gu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a444d9315022860ac1f10", "hidden": false, "name": "Yidong Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a444d9315022860ac1f11", "hidden": false, "name": "Zhengran Zeng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a444d9315022860ac1f12", "hidden": false, "name": "Jindong Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a444d9315022860ac1f13", "hidden": false, "name": "Wei Ye", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a444d9315022860ac1f14", "hidden": false, "name": "Shikun Zhang", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-19T17:59:42
Outcome-Refining Process Supervision for Code Generation
Large Language Models have demonstrated remarkable capabilities in code generation, yet they often struggle with complex programming tasks that require deep algorithmic reasoning. While process supervision through learned reward models shows promise in guiding reasoning steps, it requires expensive training data and suffers from unreliable evaluation. We propose Outcome-Refining Process Supervision, a novel paradigm that treats outcome refinement itself as the process to be supervised. Our framework leverages concrete execution signals to ground the supervision of reasoning steps, while using tree-structured exploration to maintain multiple solution trajectories simultaneously. Experiments demonstrate that our approach enables even smaller models to achieve high accuracy on competitive programming tasks and creates more reliable verification than traditional reward models, without requiring PRM training. Our approach achieves significant improvements across 5 models and 3 datasets: an average increase of 26.9% in correctness and 42.2% in efficiency. The results suggest that providing a structured reasoning space with concrete verification signals is crucial for solving complex programming tasks. We open-source all our code and data at: https://github.com/zhuohaoyu/ORPS
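As a rough illustration of the idea, the sketch below grounds candidate scoring in execution (fraction of tests passed) and keeps a small beam of solution trajectories alive while refining; generate_refinements is a hypothetical LLM call, and the harness is deliberately simplified (a real one would sandbox execution).

```python
# Hedged sketch of outcome-refining supervision: refine programs in a small
# tree/beam, scoring every candidate with concrete execution signals.
from typing import Callable

def run_tests(program: str, tests: list[tuple[str, str]]) -> float:
    """Execution-grounded score: fraction of (stdin, expected) tests passed."""
    passed = 0
    for stdin, expected in tests:
        try:
            scope: dict = {}
            exec(program, scope)  # assume the program defines solve(inp) -> str
            if scope["solve"](stdin).strip() == expected.strip():
                passed += 1
        except Exception:
            pass
    return passed / max(len(tests), 1)

def tree_refine(seed: str, tests: list[tuple[str, str]],
                generate_refinements: Callable[[str, float], list[str]],
                beam: int = 3, depth: int = 4) -> str:
    frontier = [(run_tests(seed, tests), seed)]
    for _ in range(depth):
        children = [(run_tests(c, tests), c)
                    for score, prog in frontier
                    for c in generate_refinements(prog, score)]
        # keep several solution trajectories alive simultaneously
        frontier = sorted(frontier + children, reverse=True)[:beam]
        if frontier[0][0] == 1.0:
            break  # all tests pass
    return frontier[0][1]
```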
19
676a444f9315022860ac1f70
null
null
2024-12-24T00:05:04.046000
Deliberation in Latent Space via Differentiable Cache Augmentation
https://cdn-thumbnails.h…s/2412.17747.png
5
{ "_id": "65d8fa4e16cebdd689fa5587", "avatarUrl": "/avatars/9646f22ae16653af47ea38af0839ae7b.svg", "followerCount": null, "fullname": "Luyang Liu", "isHf": false, "isMod": false, "isPro": false, "name": "luyangl", "type": "user" }
false
null
2412.17747
[ { "_id": "676a3ec0b91a321e164a8780", "hidden": false, "name": "Luyang Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a3ec0b91a321e164a8781", "hidden": false, "name": "Jonas Pfeiffer", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a3ec0b91a321e164a8782", "hidden": false, "name": "Jiaxing Wu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a3ec0b91a321e164a8783", "hidden": false, "name": "Jun Xie", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a3ec0b91a321e164a8784", "hidden": false, "name": "Arthur Szlam", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-23T18:02:25
Deliberation in Latent Space via Differentiable Cache Augmentation
Techniques enabling large language models (LLMs) to "think more" by generating and attending to intermediate reasoning steps have shown promise in solving complex problems. However, the standard approaches generate sequences of discrete tokens immediately before responding, and so they can incur significant latency costs and be challenging to optimize. In this work, we demonstrate that a frozen LLM can be augmented with an offline coprocessor that operates on the model's key-value (kv) cache. This coprocessor augments the cache with a set of latent embeddings designed to improve the fidelity of subsequent decoding. We train this coprocessor using the language modeling loss from the decoder on standard pretraining data, while keeping the decoder itself frozen. This approach enables the model to learn, in an end-to-end differentiable fashion, how to distill additional computation into its kv-cache. Because the decoder remains unchanged, the coprocessor can operate offline and asynchronously, and the language model can function normally if the coprocessor is unavailable or if a given cache is deemed not to require extra computation. We show experimentally that when a cache is augmented, the decoder achieves lower perplexity on numerous subsequent tokens. Furthermore, even without any task-specific training, our experiments demonstrate that cache augmentation consistently reduces perplexity and improves performance across a range of reasoning-intensive tasks.
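A minimal conceptual sketch of this setup, under simplifying assumptions, is below: a toy tensor stands in for the frozen decoder's kv-cache, and a small trainable coprocessor reads it and emits latent embeddings that are appended before further decoding. Only the coprocessor would receive gradients from the LM loss.

```python
# Sketch only: coprocessor that attends over a (stand-in) kv-cache and
# produces latent embeddings to augment it; the decoder itself stays frozen.
import torch
import torch.nn as nn

d_model, n_latents = 64, 8

class Coprocessor(nn.Module):
    def __init__(self):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_latents, d_model))
        self.attn = nn.MultiheadAttention(d_model, num_heads=4,
                                          batch_first=True)

    def forward(self, kv_cache):  # kv_cache: (B, T, d_model)
        q = self.queries.expand(kv_cache.size(0), -1, -1)
        latents, _ = self.attn(q, kv_cache, kv_cache)
        return latents  # (B, n_latents, d_model)

cache = torch.randn(2, 16, d_model)  # toy stand-in for real key/value states
augmented = torch.cat([cache, Coprocessor()(cache)], dim=1)
print(augmented.shape)  # torch.Size([2, 24, 64])

# Training would compute the frozen decoder's LM loss on the augmented
# cache and backpropagate into the coprocessor's parameters only; if the
# coprocessor is unavailable, decoding proceeds on the plain cache.
```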
30
676a3ec1b91a321e164a87ca
null
null
2024-12-23T23:23:30.988000
Distilled Decoding 1: One-step Sampling of Image Auto-regressive Models with Flow Matching
https://cdn-thumbnails.h…s/2412.17153.png
2
{ "_id": "64c832a8c547ed5243d29630", "avatarUrl": "/avatars/59d1975634e84095b69423c02441d453.svg", "followerCount": 3, "fullname": "Zinan Lin", "isHf": false, "isMod": false, "isPro": false, "name": "fjxmlzn", "type": "user" }
true
null
2412.17153
[ { "_id": "676a3070b1618113354d99fa", "hidden": false, "name": "Enshu Liu", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:35:07.539Z", "user": { "_id": "63c7ecbcaa2a7669c45ad49e", "avatarUrl": "/avatars/eba0250d328b0d850d3a6d3057bea583.svg", "fullname": "Enshu Liu", "isPro": false, "type": "user", "user": "jsttlgdkycy" } }, { "_id": "676a3070b1618113354d99fb", "hidden": false, "name": "Xuefei Ning", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:35:05.401Z", "user": { "_id": "641031b1a78453b8d96b8420", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1678782881444-noauth.jpeg", "fullname": "Xuefei Ning", "isPro": false, "type": "user", "user": "Foxfi" } }, { "_id": "676a3070b1618113354d99fc", "hidden": false, "name": "Yu Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a3070b1618113354d99fd", "hidden": false, "name": "Zinan Lin", "status": "extracted_confirmed", "statusLastChangedAt": "2024-12-24T03:54:58.649Z", "user": { "_id": "64c832a8c547ed5243d29630", "avatarUrl": "/avatars/59d1975634e84095b69423c02441d453.svg", "fullname": "Zinan Lin", "isPro": false, "type": "user", "user": "fjxmlzn" } } ]
2024-12-22T20:21:54
Distilled Decoding 1: One-step Sampling of Image Auto-regressive Models with Flow Matching
Autoregressive (AR) models have achieved state-of-the-art performance in text and image generation but suffer from slow generation due to the token-by-token process. We ask an ambitious question: can a pre-trained AR model be adapted to generate outputs in just one or two steps? If successful, this would significantly advance the development and deployment of AR models. We notice that existing works that try to speed up AR generation by generating multiple tokens at once fundamentally cannot capture the output distribution due to the conditional dependencies between tokens, limiting their effectiveness for few-step generation. To address this, we propose Distilled Decoding (DD), which uses flow matching to create a deterministic mapping from the Gaussian distribution to the output distribution of the pre-trained AR model. We then train a network to distill this mapping, enabling few-step generation. DD doesn't need the training data of the original AR model, making it more practical. We evaluate DD on state-of-the-art image AR models and present promising results on ImageNet-256. For VAR, which requires 10-step generation, DD enables one-step generation (6.3× speed-up), with an acceptable increase in FID from 4.19 to 9.96. For LlamaGen, DD reduces generation from 256 steps to 1, achieving a 217.8× speed-up with a comparable FID increase from 4.11 to 11.35. In both cases, baseline methods completely fail with FID > 100. DD also excels on text-to-image generation, reducing the generation from 256 steps to 2 for LlamaGen with a minimal FID increase from 25.70 to 28.95. As the first work to demonstrate the possibility of one-step generation for image AR models, DD challenges the prevailing notion that AR models are inherently slow, and opens up new opportunities for efficient AR generation. The project website is at https://imagination-research.github.io/distilled-decoding.
34
676a3072b1618113354d9aa1
null
null
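The Distilled Decoding abstract above describes regressing a deterministic multi-step noise-to-output mapping into a one-step student. Below is a highly simplified sketch of that distillation pattern under my own assumptions: the toy "teacher" velocity field is an invented stand-in for the flow-matching map built on a pretrained AR model, and everything runs in a continuous toy space rather than on image tokens.

```python
# Highly simplified sketch of the Distilled Decoding pattern (toy stand-ins).
import torch
import torch.nn as nn

D = 32

def teacher_map(z, steps=10):
    """Stand-in for the deterministic flow-matching mapping: integrate a
    (here: fixed, toy) velocity field from Gaussian noise z to an output."""
    x = z.clone()
    for _ in range(steps):
        x = x + (torch.tanh(x) - x) / steps    # toy deterministic update
    return x

student = nn.Sequential(nn.Linear(D, 128), nn.ReLU(), nn.Linear(128, D))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for _ in range(200):                            # distillation: match endpoints
    z = torch.randn(256, D)
    with torch.no_grad():
        target = teacher_map(z)                 # multi-step teacher output
    loss = ((student(z) - target) ** 2).mean()  # one-step student regression
    opt.zero_grad(); loss.backward(); opt.step()

sample = student(torch.randn(1, D))             # one-step "generation"
```

Note that only noise samples are needed to drive the regression, which mirrors the abstract's point that DD does not require the original AR model's training data.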
2024-12-23T22:18:25.419000
NILE: Internal Consistency Alignment in Large Language Models
https://cdn-thumbnails.h…s/2412.16686.png
2
{ "_id": "62a42f22c683d02f5b63320c", "avatarUrl": "/avatars/bc611abe9c4ef8d378123cb8ac9fdbf2.svg", "followerCount": null, "fullname": "Qiyuan Zhang", "isHf": false, "isMod": false, "isPro": false, "name": "DonJoey", "type": "user" }
true
null
2412.16686
[ { "_id": "676a20cbeabbef01bb5f825c", "hidden": false, "name": "Minda Hu", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:35:43.586Z", "user": { "_id": "65fd50dd78866973892e43eb", "avatarUrl": "/avatars/f4a103b93a496c76537f27a8e4c11e6e.svg", "fullname": "Minda Hu", "isPro": false, "type": "user", "user": "mindahu" } }, { "_id": "676a20cbeabbef01bb5f825d", "hidden": false, "name": "Qiyuan Zhang", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:35:45.781Z", "user": { "_id": "62a42f22c683d02f5b63320c", "avatarUrl": "/avatars/bc611abe9c4ef8d378123cb8ac9fdbf2.svg", "fullname": "Qiyuan Zhang", "isPro": false, "type": "user", "user": "DonJoey" } }, { "_id": "676a20cbeabbef01bb5f825e", "hidden": false, "name": "Yufei Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a20cbeabbef01bb5f825f", "hidden": false, "name": "Bowei He", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a20cbeabbef01bb5f8260", "hidden": false, "name": "Hongru Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a20cbeabbef01bb5f8261", "hidden": false, "name": "Jingyan Zhou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a20cbeabbef01bb5f8262", "hidden": false, "name": "Liangyou Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a20cbeabbef01bb5f8263", "hidden": false, "name": "Yasheng Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a20cbeabbef01bb5f8264", "hidden": false, "name": "Chen Ma", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a20cbeabbef01bb5f8265", "hidden": false, "name": "Irwin King", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-21T16:25:16
NILE: Internal Consistency Alignment in Large Language Models
As a crucial step to enhance LLM alignment with human intentions, Instruction Fine-Tuning (IFT) places high demands on dataset quality. However, existing IFT datasets often contain knowledge that is inconsistent with LLMs' internal knowledge learned from the pre-training phase, which can greatly affect the efficacy of IFT. To address this issue, we introduce the NILE (iNternal consIstency aLignmEnt) framework, aimed at optimizing IFT datasets to further unlock LLMs' capabilities. NILE operates by eliciting the target pre-trained LLM's internal knowledge corresponding to instruction data. The internal knowledge is leveraged to revise the answers in IFT datasets. Additionally, we propose a novel Internal Consistency Filtering (ICF) method to filter training samples, ensuring their high consistency with the LLM's internal knowledge. Our experiments demonstrate that NILE-aligned IFT datasets sharply boost LLM performance across multiple LLM ability evaluation datasets, achieving up to a 66.6% gain on Arena-Hard and 68.5% on Alpaca-Eval V2. Further analysis confirms that each component of the NILE framework contributes to these substantial performance improvements, and provides compelling evidence that dataset consistency with pre-trained internal knowledge is pivotal for maximizing LLM potential.
8
676a20cceabbef01bb5f82df
null
null
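The NILE abstract above outlines a three-stage pipeline: elicit the target LLM's internal knowledge, revise dataset answers against it, then filter by internal consistency (ICF). A minimal skeleton of that flow is sketched below; the prompts, the `ask_llm` handle, and the 0.7 threshold are all my illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of a NILE-style pipeline (hypothetical names and prompts).
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def revise_with_internal_knowledge(instruction: str, answer: str) -> tuple[str, str]:
    # 1) Elicit what the target pretrained LLM itself "knows" about the task.
    knowledge = ask_llm(f"State what you know that is relevant to: {instruction}")
    # 2) Revise the dataset answer so it agrees with that internal knowledge.
    revised = ask_llm(
        f"Instruction: {instruction}\nDraft answer: {answer}\n"
        f"Model's own knowledge: {knowledge}\n"
        "Rewrite the answer so it is consistent with the knowledge above."
    )
    return revised, knowledge

def internal_consistency_score(answer: str, knowledge: str) -> float:
    # 3) ICF-style filtering: keep samples judged consistent by the model.
    verdict = ask_llm(
        f"Answer: {answer}\nKnowledge: {knowledge}\n"
        "On a scale of 0-1, how consistent is the answer with the knowledge? "
        "Reply with a number only."
    )
    return float(verdict.strip())

def build_nile_dataset(samples, threshold=0.7):
    kept = []
    for inst, ans in samples:
        revised, knowledge = revise_with_internal_knowledge(inst, ans)
        if internal_consistency_score(revised, knowledge) >= threshold:
            kept.append((inst, revised))
    return kept
```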
2024-12-23T22:09:53.346000
Diving into Self-Evolving Training for Multimodal Reasoning
https://cdn-thumbnails.h…s/2412.17451.png
2
{ "_id": "6458af46f4d212d780bd7c68", "avatarUrl": "/avatars/832fd34bcc041b0b7b551873a459fc3c.svg", "followerCount": 8, "fullname": "Wei Liu", "isHf": false, "isMod": false, "isPro": false, "name": "PeterV09", "type": "user" }
false
null
2412.17451
[ { "_id": "676a25e38ffab02f2c91a99e", "hidden": false, "name": "Wei Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a25e38ffab02f2c91a99f", "hidden": false, "name": "Junlong Li", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:35:34.958Z", "user": { "_id": "621e40ac944c7e36aaec2369", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/621e40ac944c7e36aaec2369/Yj-FJRWps3rvsS_B2bnKo.jpeg", "fullname": "Junlong Li", "isPro": false, "type": "user", "user": "lockon" } }, { "_id": "676a25e38ffab02f2c91a9a0", "hidden": false, "name": "Xiwen Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a25e38ffab02f2c91a9a1", "hidden": false, "name": "Fan Zhou", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:35:32.979Z", "user": { "_id": "628f6e5ab90dde28ef57d293", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/628f6e5ab90dde28ef57d293/AxNzR2nvrND6Rf3RPkYMk.jpeg", "fullname": "Fan Zhou", "isPro": false, "type": "user", "user": "koalazf99" } }, { "_id": "676a25e38ffab02f2c91a9a2", "hidden": false, "name": "Yu Cheng", "status": "claimed_verified", "statusLastChangedAt": "2025-01-23T15:05:17.014Z", "user": { "_id": "67017abfe4d49b157ac534d9", "avatarUrl": "/avatars/997e1b9f54b27a7728a9d4abfee4ba91.svg", "fullname": "Yu Cheng", "isPro": false, "type": "user", "user": "ych133" } }, { "_id": "676a25e38ffab02f2c91a9a3", "hidden": false, "name": "Junxian He", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-23T10:18:41
Diving into Self-Evolving Training for Multimodal Reasoning
Reasoning ability is essential for Large Multimodal Models (LMMs). In the absence of multimodal chain-of-thought annotated data, self-evolving training, where the model learns from its own outputs, has emerged as an effective and scalable approach for enhancing reasoning abilities. Despite its growing usage, a comprehensive understanding of self-evolving training, particularly in the context of multimodal reasoning, remains limited. In this paper, we delve into the intricacies of self-evolving training for multimodal reasoning, pinpointing three key factors: Training Method, Reward Model, and Prompt Variation. We systematically examine each factor and explore how various configurations affect the training's effectiveness. Our analysis leads to a set of best practices for each factor, aimed at optimizing multimodal reasoning. Furthermore, we explore the Self-Evolution Dynamics during training and the impact of automatic balancing mechanisms in boosting performance. Following these investigations, we present a final recipe for self-evolving training in multimodal reasoning, encapsulating these design choices into a framework we call MSTaR (Multimodal Self-evolving Training for Reasoning), which is universally effective for models with different sizes on various benchmarks, e.g., surpassing the pre-evolved model significantly on 5 multimodal reasoning benchmarks without using additional human annotations, as demonstrated on MiniCPM-V-2.5 (8B), Phi-3.5-Vision (4B) and InternVL2 (2B). We believe this study fills a significant gap in the understanding of self-evolving training for multimodal reasoning and offers a robust framework for future research. Our policy and reward models, as well as the collected data, are released to facilitate further investigation in multimodal reasoning.
43
676a25e48ffab02f2c91a9e3
null
null
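The three factors named in the MSTaR abstract above (training method, reward model, prompt variation) all plug into one generate-score-filter-train loop. The skeleton below is my own paraphrase of such a round, with every component left as a placeholder; it is not the authors' code.

```python
# Skeleton of one self-evolving training round in the spirit of MSTaR
# (all components are placeholders parameterized by the paper's factors).
def self_evolve_round(policy, reward_model, prompts, train_step,
                      samples_per_prompt=8, keep_top=2):
    new_data = []
    for prompt in prompts:                       # prompt variation lives here
        candidates = [policy.generate(prompt) for _ in range(samples_per_prompt)]
        scored = sorted(candidates, key=reward_model.score, reverse=True)
        new_data += [(prompt, c) for c in scored[:keep_top]]  # reward filtering
    train_step(policy, new_data)                 # training method (SFT, DPO, ...)
    return policy
```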
2024-12-23T22:04:28.157000
B-STaR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners
https://cdn-thumbnails.h…s/2412.17256.png
2
{ "_id": "62751082b43ccfeef483424f", "avatarUrl": "/avatars/fec83e4478e7d1731ba6033328131852.svg", "followerCount": 2, "fullname": "WeihaoZeng", "isHf": false, "isMod": false, "isPro": false, "name": "AndrewZeng", "type": "user" }
true
null
2412.17256
[ { "_id": "676a23c19fc612bf4a3b93f6", "hidden": false, "name": "Weihao Zeng", "status": "claimed_verified", "statusLastChangedAt": "2025-01-15T08:49:25.881Z", "user": { "_id": "62751082b43ccfeef483424f", "avatarUrl": "/avatars/fec83e4478e7d1731ba6033328131852.svg", "fullname": "WeihaoZeng", "isPro": false, "type": "user", "user": "AndrewZeng" } }, { "_id": "676a23c19fc612bf4a3b93f7", "hidden": false, "name": "Yuzhen Huang", "status": "claimed_verified", "statusLastChangedAt": "2025-01-03T14:05:42.772Z", "user": { "_id": "6462def82a83863b97c0611e", "avatarUrl": "/avatars/c03e9cc7d75b0266fcc56ecb6ee62148.svg", "fullname": "Yuzhen Huang", "isPro": false, "type": "user", "user": "yuzhen17" } }, { "_id": "676a23c19fc612bf4a3b93f8", "hidden": false, "name": "Lulu Zhao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a23c19fc612bf4a3b93f9", "hidden": false, "name": "Yijun Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a23c19fc612bf4a3b93fa", "hidden": false, "name": "Zifei Shan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a23c19fc612bf4a3b93fb", "hidden": false, "name": "Junxian He", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-23T03:58:34
B-STaR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners
In the absence of extensive human-annotated data for complex reasoning tasks, self-improvement -- where models are trained on their own outputs -- has emerged as a primary method for enhancing performance. However, the critical factors underlying the mechanism of these iterative self-improving methods remain poorly understood, such as the conditions under which self-improvement is effective and the bottlenecks in current iterations. In this work, we identify and propose methods to monitor two pivotal factors in this iterative process: (1) the model's ability to generate sufficiently diverse responses (exploration); and (2) the effectiveness of external rewards in distinguishing high-quality candidates from lower-quality ones (exploitation). Using mathematical reasoning as a case study, we begin with a quantitative analysis to track the dynamics of exploration and exploitation, discovering that a model's exploratory capabilities rapidly deteriorate over iterations, and the effectiveness of exploiting external rewards diminishes as well. Motivated by these findings, we introduce B-STaR, a Self-Taught Reasoning framework that autonomously adjusts configurations across iterations to Balance exploration and exploitation, thereby optimizing the self-improving effectiveness based on the current policy model and available rewards. Our experiments on mathematical reasoning, coding, and commonsense reasoning demonstrate that B-STaR not only enhances the model's exploratory capabilities throughout training but also achieves a more effective balance between exploration and exploitation, leading to superior performance.
46
676a23c29fc612bf4a3b943b
null
null
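The B-STaR abstract above hinges on tracking two quantities (exploration and exploitation) and adjusting sampling configurations to balance them. Below is a toy sketch of what such monitoring and balancing could look like; the concrete diversity proxy, reward-gap proxy, thresholds, and temperature rule are illustrative assumptions, not the paper's exact metrics.

```python
# Sketch of B-STaR-style monitoring plus a toy balancing rule (assumptions).
import statistics

def exploration(responses):
    """Diversity proxy: fraction of unique responses among samples."""
    return len(set(responses)) / len(responses)

def exploitation(rewards):
    """Reward-discrimination proxy: gap between best and mean reward."""
    return max(rewards) - statistics.mean(rewards)

def adjust_config(temperature, responses, rewards,
                  min_explore=0.5, min_exploit=0.1):
    if exploration(responses) < min_explore:
        temperature *= 1.1    # sample more diversely next iteration
    elif exploitation(rewards) < min_exploit:
        temperature *= 0.9    # rewards can't discriminate; sharpen sampling
    return temperature
```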
2024-12-23T22:03:04.208000
RobustFT: Robust Supervised Fine-tuning for Large Language Models under Noisy Response
https://cdn-thumbnails.h…s/2412.14922.png
2
{ "_id": "642da1cd99f3110ac27caca5", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/642da1cd99f3110ac27caca5/C1QJY3R_ZdaeANG1y8iW7.jpeg", "followerCount": 5, "fullname": "junyu", "isHf": false, "isMod": false, "isPro": false, "name": "luojunyu", "type": "user" }
true
null
2412.14922
[ { "_id": "676a2354463437b5e1217e15", "hidden": false, "name": "Junyu Luo", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:35:39.138Z", "user": { "_id": "642da1cd99f3110ac27caca5", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/642da1cd99f3110ac27caca5/C1QJY3R_ZdaeANG1y8iW7.jpeg", "fullname": "junyu", "isPro": false, "type": "user", "user": "luojunyu" } }, { "_id": "676a2354463437b5e1217e16", "hidden": false, "name": "Xiao Luo", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a2354463437b5e1217e17", "hidden": false, "name": "Kaize Ding", "status": "extracted_pending", "statusLastChangedAt": "2024-12-24T02:58:28.896Z", "user": { "_id": "665e2f9301ca1c80a0a311d2", "avatarUrl": "/avatars/67c88b55b580e6db74df4d0091197cea.svg", "fullname": "Kaize Ding", "isPro": false, "type": "user", "user": "kaize0409" } }, { "_id": "676a2354463437b5e1217e18", "hidden": false, "name": "Jingyang Yuan", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a2354463437b5e1217e19", "hidden": false, "name": "Zhiping Xiao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a2354463437b5e1217e1a", "hidden": false, "name": "Ming Zhang", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-19T15:00:18
RobustFT: Robust Supervised Fine-tuning for Large Language Models under Noisy Response
Supervised fine-tuning (SFT) plays a crucial role in adapting large language models (LLMs) to specific domains or tasks. However, as demonstrated by empirical experiments, the collected data inevitably contains noise in practical applications, which poses significant challenges to model performance on downstream tasks. Therefore, there is an urgent need for a noise-robust SFT framework to enhance model capabilities in downstream tasks. To address this challenge, we introduce a robust SFT framework (RobustFT) that performs noise detection and relabeling on downstream task data. For noise identification, our approach employs a multi-expert collaborative system with inference-enhanced models to achieve superior noise detection. In the denoising phase, we utilize a context-enhanced strategy, which incorporates the most relevant and confident knowledge followed by careful assessment to generate reliable annotations. Additionally, we introduce an effective data selection mechanism based on response entropy, ensuring only high-quality samples are retained for fine-tuning. Extensive experiments conducted on multiple LLMs across five datasets demonstrate RobustFT's exceptional performance in noisy scenarios.
86
676a2354463437b5e1217e51
null
null
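The RobustFT abstract above ends with a data-selection mechanism based on response entropy. A minimal sketch of that final step is below (the multi-expert noise detection and context-enhanced relabeling stages are elided); the entropy cutoff and data layout are my assumptions.

```python
# Sketch of response-entropy data selection as described for RobustFT:
# keep only low-entropy, i.e. high-confidence, samples for fine-tuning.
import math

def token_entropy(dist):
    """Entropy of one per-token output distribution (a list of probs)."""
    return -sum(p * math.log(p) for p in dist if p > 0)

def response_entropy(dists):
    """Mean entropy of the per-token distributions for one answer."""
    return sum(token_entropy(d) for d in dists) / len(dists)

def select_low_entropy(samples, max_entropy=1.0):
    # samples: list of (prompt, answer, per_token_distributions) triples
    return [(p, a) for p, a, dists in samples
            if response_entropy(dists) <= max_entropy]
```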
2024-12-23T21:55:42.534000
Revisiting In-Context Learning with Long Context Language Models
https://cdn-thumbnails.h…s/2412.16926.png
2
{ "_id": "63036b6c5c70c21d0ea79d48", "avatarUrl": "/avatars/a7eb03f5cbd4eaa09fe807bbed8bc0f7.svg", "followerCount": 7, "fullname": "Jinheon Baek", "isHf": false, "isMod": false, "isPro": false, "name": "jinheon", "type": "user" }
true
null
2412.16926
[ { "_id": "676a211eb16181133547681b", "hidden": false, "name": "Jinheon Baek", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:35:41.247Z", "user": { "_id": "63036b6c5c70c21d0ea79d48", "avatarUrl": "/avatars/a7eb03f5cbd4eaa09fe807bbed8bc0f7.svg", "fullname": "Jinheon Baek", "isPro": false, "type": "user", "user": "jinheon" } }, { "_id": "676a211eb16181133547681c", "hidden": false, "name": "Sun Jae Lee", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a211eb16181133547681d", "hidden": false, "name": "Prakhar Gupta", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a211eb16181133547681e", "hidden": false, "name": "Geunseob", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a211eb16181133547681f", "hidden": false, "name": "Oh", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a211eb161811335476820", "hidden": false, "name": "Siddharth Dalmia", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676a211eb161811335476821", "hidden": false, "name": "Prateek Kolhar", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-22T08:55:19
Revisiting In-Context Learning with Long Context Language Models
In-Context Learning (ICL) is a technique by which language models make predictions based on examples provided in their input context. Previously, their context window size imposed a limit on the number of examples that could be shown, making example selection techniques crucial for identifying the maximally effective set of examples. However, the recent advent of Long Context Language Models (LCLMs) has significantly increased the number of examples that can be included in context, raising an important question of whether ICL performance in a many-shot regime is still sensitive to the method of sample selection. To answer this, we revisit these approaches in the context of LCLMs through extensive experiments on 18 datasets spanning 4 tasks. Surprisingly, we observe that sophisticated example selection techniques do not yield significant improvements over a simple random sample selection method. Instead, we find that the advent of LCLMs has fundamentally shifted the challenge of ICL from that of selecting the most effective examples to that of collecting sufficient examples to fill the context window. Specifically, in certain datasets, including all available examples does not fully utilize the context window; however, by augmenting the examples in context with a simple data augmentation approach, we substantially improve ICL performance by 5%.
30
676a211fb161811335476846
null
null
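The headline recipe of the abstract above (skip clever retrieval, randomly sample, and keep packing examples until the context window is full) is simple enough to sketch directly; the crude whitespace token counter and budget below are stand-ins for a real tokenizer and model limit.

```python
# Sketch of many-shot ICL prompt packing with random example selection.
import random

def count_tokens(text: str) -> int:
    return len(text.split())          # crude stand-in for a real tokenizer

def build_many_shot_prompt(examples, query, context_budget=128_000):
    random.shuffle(examples)          # random selection performs on par with
    parts, used = [], 0               # sophisticated selection in this regime
    for ex in examples:
        block = f"Input: {ex['input']}\nOutput: {ex['output']}\n\n"
        if used + count_tokens(block) > context_budget:
            break
        parts.append(block)
        used += count_tokens(block)
    return "".join(parts) + f"Input: {query}\nOutput:"
```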
2024-12-23T20:23:03.222000
Toward Robust Hyper-Detailed Image Captioning: A Multiagent Approach and Dual Evaluation Metrics for Factuality and Coverage
https://cdn-thumbnails.h…s/2412.15484.png
2
{ "_id": "6768bf111d652acb0c60b938", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/5j0JkrslB8zGf4Wy8tKFc.png", "followerCount": null, "fullname": "Saehyung Lee", "isHf": false, "isMod": false, "isPro": false, "name": "saehyungl", "type": "user" }
true
null
2412.15484
[ { "_id": "6768c08c75d8e8d042beda80", "hidden": false, "name": "Saehyung Lee", "status": "claimed_verified", "statusLastChangedAt": "2024-12-23T11:10:04.150Z", "user": { "_id": "6768bf111d652acb0c60b938", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/5j0JkrslB8zGf4Wy8tKFc.png", "fullname": "Saehyung Lee", "isPro": false, "type": "user", "user": "saehyungl" } }, { "_id": "6768c08c75d8e8d042beda81", "hidden": false, "name": "Seunghyun Yoon", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6768c08c75d8e8d042beda82", "hidden": false, "name": "Trung Bui", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6768c08c75d8e8d042beda83", "hidden": false, "name": "Jing Shi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6768c08c75d8e8d042beda84", "hidden": false, "name": "Sungroh Yoon", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-20T01:37:22
Toward Robust Hyper-Detailed Image Captioning: A Multiagent Approach and Dual Evaluation Metrics for Factuality and Coverage
Multimodal large language models (MLLMs) excel at generating highly detailed captions but often produce hallucinations. Our analysis reveals that existing hallucination detection methods struggle with detailed captions. We attribute this to the increasing reliance of MLLMs on their generated text, rather than the input image, as the sequence length grows. To address this issue, we propose a multiagent approach that leverages LLM-MLLM collaboration to correct given captions. Additionally, we introduce an evaluation framework and a benchmark dataset to facilitate the systematic analysis of detailed captions. Our experiments demonstrate that our proposed evaluation method better aligns with human judgments of factuality than existing metrics and that existing approaches to improve the MLLM factuality may fall short in hyper-detailed image captioning tasks. In contrast, our proposed method significantly enhances the factual accuracy of captions, even improving those generated by GPT-4V. Finally, we highlight a limitation of VQA-centric benchmarking by demonstrating that an MLLM's performance on VQA benchmarks may not correlate with its ability to generate detailed image captions.
15
6768c08d75d8e8d042bedab9
null
null
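The multiagent caption-correction idea in the abstract above (an LLM and an MLLM collaborating to remove hallucinated details) can be sketched as a verify-and-revise loop. The decomposition-into-claims step, the prompts, and the two model handles below are hypothetical stand-ins, not the paper's agents.

```python
# Sketch of an LLM-MLLM caption-correction loop (hypothetical agents/prompts).
def correct_caption(image, caption, mllm, llm, rounds=2):
    for _ in range(rounds):
        # The LLM decomposes the caption into atomic claims to verify.
        claims = llm(f"List the atomic factual claims in:\n{caption}").splitlines()
        # The MLLM, which sees the image, checks each claim against it.
        verdicts = [mllm(image, f"Is this true of the image? {c}") for c in claims]
        wrong = [c for c, v in zip(claims, verdicts) if v.strip().lower() == "no"]
        if not wrong:
            break
        caption = llm(
            f"Caption: {caption}\nRemove or fix these unsupported claims: {wrong}"
        )
    return caption
```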
2024-12-23T14:18:33.125000
TRecViT: A Recurrent Video Transformer
https://cdn-thumbnails.h…s/2412.14294.png
3
{ "_id": "6267cdd3f91d1c1633c08bbf", "avatarUrl": "/avatars/51d7961098f52f39ee406009a12982b8.svg", "followerCount": null, "fullname": "Artem Zholus", "isHf": false, "isMod": false, "isPro": false, "name": "artemZholus", "type": "user" }
false
null
2412.14294
[ { "_id": "6769b72361c7635a1e1e5dc0", "hidden": false, "name": "Viorica Pătrăucean", "status": "extracted_confirmed", "statusLastChangedAt": "2025-01-06T10:47:22.333Z", "user": { "_id": "640b53d991d9f65c58a85d4a", "avatarUrl": "/avatars/bea43ffca19cb0705caea3b22f652adc.svg", "fullname": "Vio Patr", "isPro": false, "type": "user", "user": "viorik" } }, { "_id": "6769b72361c7635a1e1e5dc1", "hidden": false, "name": "Xu Owen He", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6769b72361c7635a1e1e5dc2", "hidden": false, "name": "Joseph Heyward", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6769b72361c7635a1e1e5dc3", "hidden": false, "name": "Chuhan Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6769b72361c7635a1e1e5dc4", "hidden": false, "name": "Mehdi S. M. Sajjadi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6769b72361c7635a1e1e5dc5", "hidden": false, "name": "George-Cristian Muraru", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6769b72361c7635a1e1e5dc6", "hidden": false, "name": "Artem Zholus", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6769b72361c7635a1e1e5dc7", "hidden": false, "name": "Mahdi Karami", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6769b72361c7635a1e1e5dc8", "hidden": false, "name": "Ross Goroshin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6769b72361c7635a1e1e5dc9", "hidden": false, "name": "Yutian Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6769b72361c7635a1e1e5dca", "hidden": false, "name": "Simon Osindero", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6769b72361c7635a1e1e5dcb", "hidden": false, "name": "João Carreira", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6769b72361c7635a1e1e5dcc", "hidden": false, "name": "Razvan Pascanu", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-18T19:44:30
TRecViT: A Recurrent Video Transformer
We propose a novel block for video modelling. It relies on a time-space-channel factorisation with dedicated blocks for each dimension: gated linear recurrent units (LRUs) perform information mixing over time, self-attention layers perform mixing over space, and MLPs over channels. The resulting architecture TRecViT performs well on sparse and dense tasks, trained in supervised or self-supervised regimes. Notably, our model is causal and outperforms or is on par with a pure attention model ViViT-L on large scale video datasets (SSv2, Kinetics400), while having 3× fewer parameters, a 12× smaller memory footprint, and a 5× lower FLOPs count. Code and checkpoints will be made available online at https://github.com/google-deepmind/trecvit.
13
6769b72661c7635a1e1e5e85
null
null
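The time-space-channel factorisation in the TRecViT abstract above maps naturally onto three sub-layers. The sketch below is a minimal illustration under my own assumptions: the simplified gated recurrence stands in for the paper's LRU, and dimensions, norms, and residual placement are illustrative.

```python
# Minimal sketch of a TRecViT-style factorised block (illustrative shapes).
import torch
import torch.nn as nn

class GatedLinearRecurrence(nn.Module):
    """Per-token linear RNN over time: h_t = a_t * h_{t-1} + (1 - a_t) * x_t."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (B, T, N, D)
        h, out = torch.zeros_like(x[:, 0]), []
        for t in range(x.shape[1]):            # causal scan over frames
            a = torch.sigmoid(self.gate(x[:, t]))
            h = a * h + (1 - a) * x[:, t]
            out.append(h)
        return torch.stack(out, dim=1)

class TRecViTBlock(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.time_mix = GatedLinearRecurrence(dim)             # mixing over time
        self.space_mix = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.channel_mix = nn.Sequential(nn.Linear(dim, 4 * dim),
                                         nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):                      # x: (B, T, N, D) video tokens
        B, T, N, D = x.shape
        x = x + self.time_mix(x)                                # causal in time
        s = x.reshape(B * T, N, D)                              # attend per frame
        x = x + self.space_mix(s, s, s)[0].reshape(B, T, N, D)  # mixing over space
        return x + self.channel_mix(x)                          # mixing over channels

tokens = torch.randn(2, 8, 49, 64)             # batch, frames, patches, dim
print(TRecViTBlock()(tokens).shape)            # -> torch.Size([2, 8, 49, 64])
```

The causality claimed in the abstract comes from the time sub-layer alone: attention only ever mixes tokens within a single frame.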
2024-12-23T10:41:23.825000
Multi-LLM Text Summarization
https://cdn-thumbnails.h…s/2412.15487.png
2
{ "_id": "62c5947524171688a9feb992", "avatarUrl": "/avatars/5a151713b9eae8dc566f5957acee3475.svg", "followerCount": 8, "fullname": "Franck Dernoncourt", "isHf": false, "isMod": false, "isPro": false, "name": "Franck-Dernoncourt", "type": "user" }
false
null
2412.15487
[ { "_id": "676984a1edea1efd81017959", "hidden": false, "name": "Jiangnan Fang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676984a1edea1efd8101795a", "hidden": false, "name": "Cheng-Tse Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676984a1edea1efd8101795b", "hidden": false, "name": "Jieun Kim", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676984a1edea1efd8101795c", "hidden": false, "name": "Yash Bhedaru", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676984a1edea1efd8101795d", "hidden": false, "name": "Ethan Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676984a1edea1efd8101795e", "hidden": false, "name": "Nikhil Singh", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676984a1edea1efd8101795f", "hidden": false, "name": "Nedim Lipka", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676984a1edea1efd81017960", "hidden": false, "name": "Puneet Mathur", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676984a1edea1efd81017961", "hidden": false, "name": "Nesreen K. Ahmed", "status": "claimed_verified", "statusLastChangedAt": "2025-01-07T08:42:50.165Z", "user": { "_id": "663e588ccca86c8371a913d9", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/663e588ccca86c8371a913d9/wgVGw5U77cqwtjovhsXyR.jpeg", "fullname": "Nesreen Ahmed", "isPro": false, "type": "user", "user": "nkahmed" } }, { "_id": "676984a1edea1efd81017962", "hidden": false, "name": "Franck Dernoncourt", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:36:09.059Z", "user": { "_id": "62c5947524171688a9feb992", "avatarUrl": "/avatars/5a151713b9eae8dc566f5957acee3475.svg", "fullname": "Franck Dernoncourt", "isPro": false, "type": "user", "user": "Franck-Dernoncourt" } }, { "_id": "676984a1edea1efd81017963", "hidden": false, "name": "Ryan A. Rossi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676984a1edea1efd81017964", "hidden": false, "name": "Hanieh Deilamsalehy", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-20T01:55:26
Multi-LLM Text Summarization
In this work, we propose a Multi-LLM summarization framework, and investigate two different multi-LLM strategies including centralized and decentralized. Our multi-LLM summarization framework has two fundamentally important steps at each round of conversation: generation and evaluation. These steps differ depending on whether the decentralized or the centralized strategy is used. In both our multi-LLM decentralized and centralized strategies, we have k different LLMs that generate diverse summaries of the text. However, during evaluation, our multi-LLM centralized summarization approach leverages a single LLM to evaluate the summaries and select the best one whereas k LLMs are used for decentralized multi-LLM summarization. Overall, we find that our multi-LLM summarization approaches significantly outperform the baselines that leverage only a single LLM by up to 3x. These results indicate the effectiveness of multi-LLM approaches for summarization.
6
676984a2edea1efd810179ae
null
null
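The centralized strategy from the abstract above (k models draft, one judge selects) is the simpler of the two variants to sketch; the prompt wording and the callable model handles below are hypothetical.

```python
# Sketch of the centralized multi-LLM strategy (hypothetical model handles).
def summarize_centralized(text, generators, judge):
    drafts = [g(f"Summarize:\n{text}") for g in generators]   # k diverse drafts
    numbered = "\n".join(f"[{i}] {d}" for i, d in enumerate(drafts))
    choice = judge(
        f"Text:\n{text}\n\nCandidate summaries:\n{numbered}\n"
        "Reply with the index of the best summary only."
    )
    return drafts[int(choice.strip())]
```

The decentralized variant would replace the single `judge` with a vote over all k models; generation is identical in both cases.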
2024-12-23T08:39:28.353000
IDOL: Instant Photorealistic 3D Human Creation from a Single Image
https://cdn-thumbnails.h…s/2412.14963.png
2
{ "_id": "65f83ad63545cc30502642ff", "avatarUrl": "/avatars/07838b6b8a5e5f04a623c61548418252.svg", "followerCount": null, "fullname": "yiyuzhuang", "isHf": false, "isMod": false, "isPro": false, "name": "yiyuzhuang", "type": "user" }
true
null
2412.14963
[ { "_id": "67690222fbd79d33cf57d51d", "hidden": false, "name": "Yiyu Zhuang", "status": "claimed_verified", "statusLastChangedAt": "2024-12-23T11:09:48.780Z", "user": { "_id": "65f83ad63545cc30502642ff", "avatarUrl": "/avatars/07838b6b8a5e5f04a623c61548418252.svg", "fullname": "yiyuzhuang", "isPro": false, "type": "user", "user": "yiyuzhuang" } }, { "_id": "67690222fbd79d33cf57d51e", "hidden": false, "name": "Jiaxi Lv", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67690222fbd79d33cf57d51f", "hidden": false, "name": "Hao Wen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67690222fbd79d33cf57d520", "hidden": false, "name": "Qing Shuai", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67690222fbd79d33cf57d521", "hidden": false, "name": "Ailing Zeng", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67690222fbd79d33cf57d522", "hidden": false, "name": "Hao Zhu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67690222fbd79d33cf57d523", "hidden": false, "name": "Shifeng Chen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67690222fbd79d33cf57d524", "hidden": false, "name": "Yujiu Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67690222fbd79d33cf57d525", "hidden": false, "name": "Xun Cao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "67690222fbd79d33cf57d526", "hidden": false, "name": "Wei Liu", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-19T15:43:05
IDOL: Instant Photorealistic 3D Human Creation from a Single Image
Creating a high-fidelity, animatable 3D full-body avatar from a single image is a challenging task due to the diverse appearance and poses of humans and the limited availability of high-quality training data. To achieve fast and high-quality human reconstruction, this work rethinks the task from the perspectives of dataset, model, and representation. First, we introduce a large-scale HUman-centric GEnerated dataset, HuGe100K, consisting of 100K diverse, photorealistic sets of human images. Each set contains 24-view frames in specific human poses, generated using a pose-controllable image-to-multi-view model. Next, leveraging the diversity in views, poses, and appearances within HuGe100K, we develop a scalable feed-forward transformer model to predict a 3D human Gaussian representation in a uniform space from a given human image. This model is trained to disentangle human pose, body shape, clothing geometry, and texture. The estimated Gaussians can be animated without post-processing. We conduct comprehensive experiments to validate the effectiveness of the proposed dataset and method. Our model can instantly reconstruct photorealistic humans at 1K resolution from a single input image on a single GPU. Additionally, it seamlessly supports various applications, as well as shape and texture editing tasks.
6
67690226fbd79d33cf57d65a
null
null
2024-12-23T04:48:38.485000
LLMs Lost in Translation: M-ALERT uncovers Cross-Linguistic Safety Gaps
https://cdn-thumbnails.h…s/2412.15035.png
3
{ "_id": "61b85aa99ba538c73a7dc78b", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/61b85aa99ba538c73a7dc78b/gWxtQAvOYn7cXgE_nAy0p.jpeg", "followerCount": 32, "fullname": "Simone Tedeschi", "isHf": false, "isMod": false, "isPro": false, "name": "sted97", "type": "user" }
true
null
2412.15035
[ { "_id": "676931b8bc5af30a79d40a76", "hidden": false, "name": "Felix Friedrich", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:35:28.332Z", "user": { "_id": "62e7dd4036a8e8a82700041c", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62e7dd4036a8e8a82700041c/Dgk9mXYLVd4LpiNLWjn-q.jpeg", "fullname": "Felix Friedrich", "isPro": false, "type": "user", "user": "felfri" } }, { "_id": "676931b8bc5af30a79d40a77", "hidden": false, "name": "Simone Tedeschi", "status": "claimed_verified", "statusLastChangedAt": "2024-12-23T11:09:30.612Z", "user": { "_id": "61b85aa99ba538c73a7dc78b", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/61b85aa99ba538c73a7dc78b/gWxtQAvOYn7cXgE_nAy0p.jpeg", "fullname": "Simone Tedeschi", "isPro": false, "type": "user", "user": "sted97" } }, { "_id": "676931b8bc5af30a79d40a78", "hidden": false, "name": "Patrick Schramowski", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:35:22.391Z", "user": { "_id": "62d021a3dd7bdfc5e5c61c5c", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62d021a3dd7bdfc5e5c61c5c/bnQW2SqirfGaQmI84HW_c.jpeg", "fullname": "Patrick Schramowski", "isPro": false, "type": "user", "user": "PSaiml" } }, { "_id": "676931b8bc5af30a79d40a79", "hidden": false, "name": "Manuel Brack", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:35:34.064Z", "user": { "_id": "62fa1d95e8c9c532aa75331c", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62fa1d95e8c9c532aa75331c/WFfk_n8gOj845pSkfdazA.jpeg", "fullname": "Manuel Brack", "isPro": false, "type": "user", "user": "mbrack" } }, { "_id": "676931b8bc5af30a79d40a7a", "hidden": false, "name": "Roberto Navigli", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:35:40.306Z", "user": { "_id": "63845c7c99292a80134bc784", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63845c7c99292a80134bc784/eKOPJMjm8F-JUKT_mi7Dv.png", "fullname": "Roberto Navigli", "isPro": false, "type": "user", "user": "navigli" } }, { "_id": "676931b8bc5af30a79d40a7b", "hidden": false, "name": "Huu Nguyen", "status": "claimed_verified", "statusLastChangedAt": "2025-03-02T20:18:58.316Z", "user": { "_id": "5fc6879e1c5ee87b1164876d", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/5fc6879e1c5ee87b1164876d/Tjnm_lv0Bq0gPbFOTDH6E.jpeg", "fullname": "Huu Nguyen", "isPro": false, "type": "user", "user": "huu-ontocord" } }, { "_id": "676931b8bc5af30a79d40a7c", "hidden": false, "name": "Bo Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676931b8bc5af30a79d40a7d", "hidden": false, "name": "Kristian Kersting", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-19T16:46:54
LLMs Lost in Translation: M-ALERT uncovers Cross-Linguistic Safety Gaps
Building safe Large Language Models (LLMs) across multiple languages is essential for ensuring both safe access and linguistic diversity. To this end, we introduce M-ALERT, a multilingual benchmark that evaluates the safety of LLMs in five languages: English, French, German, Italian, and Spanish. M-ALERT includes 15k high-quality prompts per language, totaling 75k, following the detailed ALERT taxonomy. Our extensive experiments on 10 state-of-the-art LLMs highlight the importance of language-specific safety analysis, revealing that models often exhibit significant inconsistencies in safety across languages and categories. For instance, Llama3.2 shows high unsafety in the category crime_tax for Italian but remains safe in other languages. Similar differences can be observed across all models. In contrast, certain categories, such as substance_cannabis and crime_propaganda, consistently trigger unsafe responses across models and languages. These findings underscore the need for robust multilingual safety practices in LLMs to ensure safe and responsible usage across diverse user communities.
4
676931b9bc5af30a79d40ad2
null
null
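The per-language, per-category breakdown that drives M-ALERT's findings can be expressed as a simple evaluation loop; the dataset access and the safety judge below are stubs, and only the language/category axes follow the abstract.

```python
# Sketch of an M-ALERT-style evaluation loop (stubbed data and judge).
LANGS = ["en", "fr", "de", "it", "es"]

def evaluate(model, prompts_by_lang, is_safe):
    """prompts_by_lang: {lang: [(category, prompt), ...]}; is_safe judges
    one response. Returns per-(lang, category) safety rates."""
    rates = {}
    for lang in LANGS:
        for category, prompt in prompts_by_lang[lang]:
            ok = is_safe(model(prompt))
            safe, total = rates.get((lang, category), (0, 0))
            rates[(lang, category)] = (safe + ok, total + 1)
    return {k: safe / total for k, (safe, total) in rates.items()}
```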
2024-12-23T04:21:04.641000
Sequence Matters: Harnessing Video Models in 3D Super-Resolution
https://cdn-thumbnails.h…s/2412.11525.png
2
{ "_id": "6742e770459000b812f3a276", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/nsEgzIm_-mPQXgYXXh05k.png", "followerCount": 1, "fullname": "Lani Ko", "isHf": false, "isMod": false, "isPro": false, "name": "lanikoisgod", "type": "user" }
true
null
2412.11525
[ { "_id": "676687cb5b17ac358c9ff22b", "hidden": false, "name": "Hyun-kyu Ko", "status": "claimed_verified", "statusLastChangedAt": "2024-12-21T15:19:33.358Z", "user": { "_id": "6742e770459000b812f3a276", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/nsEgzIm_-mPQXgYXXh05k.png", "fullname": "Lani Ko", "isPro": false, "type": "user", "user": "lanikoisgod" } }, { "_id": "676687cb5b17ac358c9ff22c", "hidden": false, "name": "Dongheok Park", "status": "claimed_verified", "statusLastChangedAt": "2024-12-21T15:19:35.720Z", "user": { "_id": "645c7f5a7848314a460e0b69", "avatarUrl": "/avatars/41eb5620dcfdbacf48b1345a987f8d8f.svg", "fullname": "Dongheok Park", "isPro": false, "type": "user", "user": "HEOK" } }, { "_id": "676687cb5b17ac358c9ff22d", "hidden": false, "name": "Youngin Park", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:34:45.160Z", "user": { "_id": "64816a98cb64a2b85a460aa3", "avatarUrl": "/avatars/7a36f54647c9424e37579b56dc6b35d1.svg", "fullname": "Youngin Park", "isPro": false, "type": "user", "user": "yi0109-park" } }, { "_id": "676687cb5b17ac358c9ff22e", "hidden": false, "name": "Byeonghyeon Lee", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:34:51.729Z", "user": { "_id": "614c218e7e1c37ffc55c7510", "avatarUrl": "/avatars/f86636fceb97480a73b7ca7b5e20d8f5.svg", "fullname": "Byeonghyeon Lee", "isPro": false, "type": "user", "user": "blee" } }, { "_id": "676687cb5b17ac358c9ff22f", "hidden": false, "name": "Juhee Han", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:34:57.023Z", "user": { "_id": "66036d0530394deb4ae1defd", "avatarUrl": "/avatars/0037ccccb1dccb03d634e3651952fe3e.svg", "fullname": "Juhee Han", "isPro": false, "type": "user", "user": "juxhee" } }, { "_id": "676687cb5b17ac358c9ff230", "hidden": false, "name": "Eunbyung Park", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:35:02.518Z", "user": { "_id": "655e0141d36a195f663ee4b0", "avatarUrl": "/avatars/64609d6fd1581e6cf3e057ee569d69a1.svg", "fullname": "Eunbyung Park", "isPro": false, "type": "user", "user": "epark" } } ]
2024-12-16T08:00:50
Sequence Matters: Harnessing Video Models in 3D Super-Resolution
3D super-resolution aims to reconstruct high-fidelity 3D models from low-resolution (LR) multi-view images. Early studies primarily focused on single-image super-resolution (SISR) models to upsample LR images into high-resolution images. However, these methods often lack view consistency because they operate independently on each image. Although various post-processing techniques have been extensively explored to mitigate these inconsistencies, they have yet to fully resolve the issues. In this paper, we perform a comprehensive study of 3D super-resolution by leveraging video super-resolution (VSR) models. By utilizing VSR models, we ensure a higher degree of spatial consistency and can reference surrounding spatial information, leading to more accurate and detailed reconstructions. Our findings reveal that VSR models can perform remarkably well even on sequences that lack precise spatial alignment. Given this observation, we propose a simple yet practical approach to align LR images, without fine-tuning or generating a 'smooth' trajectory from 3D models trained over the LR images. The experimental results show that these surprisingly simple algorithms achieve state-of-the-art results on 3D super-resolution tasks on standard benchmark datasets, such as the NeRF-synthetic and MipNeRF-360 datasets. Project page: https://ko-lani.github.io/Sequence-Matters
10
676687cc5b17ac358c9ff2b9
null
null
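The "simple yet practical" alignment in the abstract above amounts to ordering unposed multi-view LR images into a video-like sequence a VSR model can consume. The greedy nearest-neighbour chaining below is my own guess at what such an ordering could look like, not the paper's exact algorithm.

```python
# Plausible greedy ordering of multi-view LR images into a video-like
# sequence for a VSR model, by pixel similarity (an assumption, not the
# paper's exact method).
import numpy as np

def order_views(images):
    """images: list of HxWx3 arrays. Greedy nearest-neighbour chaining."""
    remaining = list(range(len(images)))
    seq = [remaining.pop(0)]
    while remaining:
        last = images[seq[-1]].astype(np.float32)
        dists = [np.mean((images[i].astype(np.float32) - last) ** 2)
                 for i in remaining]
        seq.append(remaining.pop(int(np.argmin(dists))))
    return seq  # feed images in this order to the video super-resolution model
```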
2024-12-23T02:30:57.045000
Fietje: An open, efficient LLM for Dutch
https://cdn-thumbnails.h…s/2412.15450.png
3
{ "_id": "5e6a3d4ea9afd5125d9ec064", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1584020801691-noauth.jpeg", "followerCount": 2307, "fullname": "Stefan Schweter", "isHf": false, "isMod": false, "isPro": true, "name": "stefan-it", "type": "user" }
true
null
2412.15450
[ { "_id": "676911953727573d69c04ad6", "hidden": false, "name": "Bram Vanroy", "status": "extracted_confirmed", "statusLastChangedAt": "2024-12-23T09:14:08.298Z", "user": { "_id": "5e1e17b6fcf41d740b6996a8", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1594192845975-5e1e17b6fcf41d740b6996a8.jpeg", "fullname": "Bram Vanroy", "isPro": true, "type": "user", "user": "BramVanroy" } } ]
2024-12-19T23:06:01
Fietje: An open, efficient LLM for Dutch
This paper introduces Fietje, a family of small language models (SLMs) specifically designed for the Dutch language. The model is based on Phi 2, an English-centric model of 2.7 billion parameters. Upon its release, Fietje demonstrated results competitive with larger language models. A core emphasis of this work is transparency and reproducibility: Fietje is fully open-source, with model weights, datasets, training, and evaluation code all publicly accessible. The paper discusses the performance of Fietje and many other models on an extensive evaluation suite of benchmarks on reasoning, sentiment analysis, world knowledge, linguistic acceptability, and word sense disambiguation. Evaluation results illustrate the rapid progress in the field of LLMs, where recent small models outperform older, larger models that were fine-tuned for Dutch. This trend signals an exciting future for Dutch language processing, suggesting that even compact LLMs are becoming increasingly capable. Furthermore, ongoing and future efforts to adapt LLMs to Dutch are poised to enhance these models even further, broadening their applicability and accessibility. Fietje is only an intermediate step in improving accessibility to language technology for users of the Dutch language.
4
676911963727573d69c04b01
null
null
2024-12-23T00:09:09.595000
Offline Reinforcement Learning for LLM Multi-Step Reasoning
https://cdn-thumbnails.h…s/2412.16145.png
6
{ "_id": "660ee5df35d092e3fc2a3685", "avatarUrl": "/avatars/a7e0472fb7ea49973f74e3eea13dc964.svg", "followerCount": 3, "fullname": "Shibo Hao", "isHf": false, "isMod": false, "isPro": false, "name": "Shibo-UCSD", "type": "user" }
true
null
2412.16145
[ { "_id": "6768f050bf7c0f8d9a17c4f2", "hidden": false, "name": "Huaijie Wang", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:31:56.207Z", "user": { "_id": "6310664063b70252b4779150", "avatarUrl": "/avatars/d514270c57b6ac494e0a419d792a72e5.svg", "fullname": "Huaijie Wang", "isPro": false, "type": "user", "user": "jwhj" } }, { "_id": "6768f050bf7c0f8d9a17c4f3", "hidden": false, "name": "Shibo Hao", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:32:02.176Z", "user": { "_id": "660ee5df35d092e3fc2a3685", "avatarUrl": "/avatars/a7e0472fb7ea49973f74e3eea13dc964.svg", "fullname": "Shibo Hao", "isPro": false, "type": "user", "user": "Shibo-UCSD" } }, { "_id": "6768f050bf7c0f8d9a17c4f4", "hidden": false, "name": "Hanze Dong", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:32:07.779Z", "user": { "_id": "63a3ff69f91ad3ea5703841d", "avatarUrl": "/avatars/69227c4bce01d33747c1377b6f9672db.svg", "fullname": "Hanze Dong", "isPro": false, "type": "user", "user": "hendrydong" } }, { "_id": "6768f050bf7c0f8d9a17c4f5", "hidden": false, "name": "Shenao Zhang", "status": "claimed_verified", "statusLastChangedAt": "2024-12-23T11:09:50.829Z", "user": { "_id": "661213f894e0b3bff3e80c69", "avatarUrl": "/avatars/d8febbb081825bf91e487aa8bad3a391.svg", "fullname": "Shenao Zhang", "isPro": false, "type": "user", "user": "ZhangShenao" } }, { "_id": "6768f050bf7c0f8d9a17c4f6", "hidden": false, "name": "Yilin Bao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6768f050bf7c0f8d9a17c4f7", "hidden": false, "name": "Ziran Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6768f050bf7c0f8d9a17c4f8", "hidden": false, "name": "Yi Wu", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:32:27.882Z", "user": { "_id": "62c88b04ab9c23f5c459ed90", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62c88b04ab9c23f5c459ed90/tEaeuKpXdXwqK-zq1H-8a.png", "fullname": "Yi Wu", "isPro": false, "type": "user", "user": "yiwu" } } ]
2024-12-20T18:49:45
Offline Reinforcement Learning for LLM Multi-Step Reasoning
Improving the multi-step reasoning ability of large language models (LLMs) with offline reinforcement learning (RL) is essential for quickly adapting them to complex tasks. While Direct Preference Optimization (DPO) has shown promise in aligning LLMs with human preferences, it is less suitable for multi-step reasoning tasks because (1) DPO relies on paired preference data, which is not readily available for multi-step reasoning tasks, and (2) it treats all tokens uniformly, making it ineffective for credit assignment in multi-step reasoning tasks, which often come with sparse rewards. In this work, we propose OREO (Offline Reasoning Optimization), an offline RL method for enhancing LLM multi-step reasoning. Building on insights from previous work on maximum entropy reinforcement learning, it jointly learns a policy model and value function by optimizing the soft Bellman equation. We show in principle that it reduces the need to collect pairwise data and enables better credit assignment. Empirically, OREO surpasses existing offline learning methods on multi-step reasoning benchmarks, including mathematical reasoning tasks (GSM8K, MATH) and embodied agent control (ALFWorld). The approach can be extended to a multi-iteration framework when additional resources are available. Furthermore, the learned value function can be leveraged to guide the tree search for free, which can further boost performance during test time.
38
6768f051bf7c0f8d9a17c53a
null
null
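Joint policy/value training on the soft Bellman equation, as the OREO abstract above describes, is closely related to path-consistency objectives from maximum-entropy RL: the residual r_t + V(s_{t+1}) - V(s_t) - tau * log pi(a_t | s_t) should vanish at optimality. The toy loss below is my paraphrase of that idea, not the authors' code; tensor shapes and the shared squared-residual objective are assumptions.

```python
# Toy sketch of an OREO-style soft-Bellman objective (my paraphrase).
import torch

def oreo_loss(logp_actions, values, rewards, tau=1.0):
    """logp_actions: (B, T) policy log-probs of taken tokens/actions;
    values: (B, T+1) value estimates per step; rewards: (B, T), often
    sparse (all zero except the final step). Gradients flow into both
    the policy (via logp_actions) and the value function (via values)."""
    residual = rewards + values[:, 1:] - values[:, :-1] - tau * logp_actions
    return (residual ** 2).mean()

B, T = 4, 16
loss = oreo_loss(torch.randn(B, T), torch.randn(B, T + 1), torch.zeros(B, T))
```

Because the value function is trained explicitly, it is available afterwards to score partial solutions, which is what lets it guide tree search "for free" at test time.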
2024-12-22T23:39:08.449000
MixLLM: LLM Quantization with Global Mixed-precision between Output-features and Highly-efficient System Design
https://cdn-thumbnails.h…s/2412.14590.png
5
{ "_id": "65373b2c89dd48faca859d02", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/65373b2c89dd48faca859d02/42HukqnvMykvaoTxnQJjk.jpeg", "followerCount": 1, "fullname": "Zhen Zheng", "isHf": false, "isMod": false, "isPro": false, "name": "JamesTheZ", "type": "user" }
true
null
2412.14590
[ { "_id": "676827d8bc5af30a79857669", "hidden": false, "name": "Zhen Zheng", "status": "claimed_verified", "statusLastChangedAt": "2024-12-23T11:10:47.069Z", "user": { "_id": "65373b2c89dd48faca859d02", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/65373b2c89dd48faca859d02/42HukqnvMykvaoTxnQJjk.jpeg", "fullname": "Zhen Zheng", "isPro": false, "type": "user", "user": "JamesTheZ" } }, { "_id": "676827d8bc5af30a7985766a", "hidden": false, "name": "Xiaonan Song", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:34:23.206Z", "user": { "_id": "66839c29d88415938469b483", "avatarUrl": "/avatars/38af0856922ea53f607a7158e640923c.svg", "fullname": "Xiaonan Song", "isPro": false, "type": "user", "user": "xiaonans" } }, { "_id": "676827d8bc5af30a7985766b", "hidden": false, "name": "Chuanjie Liu", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:34:28.717Z", "user": { "_id": "6528b76d0b44378dcc54ec60", "avatarUrl": "/avatars/0943b4c7def02c1751e163aa758ccec5.svg", "fullname": "Chuanjie Liu", "isPro": false, "type": "user", "user": "chuanjieliu" } } ]
2024-12-19T07:15:15
MixLLM: LLM Quantization with Global Mixed-precision between Output-features and Highly-efficient System Design
Quantization has become one of the most effective methodologies to compress LLMs into smaller size. However, the existing quantization solutions still show limitations of either non-negligible accuracy drop or system inefficiency. In this paper, we make a comprehensive analysis of the general quantization principles and their effect on the triangle of accuracy, memory consumption, and system efficiency. We propose MixLLM, which explores the new optimization space of mixed-precision quantization between output features, based on the insight that different output features matter differently in the model. MixLLM identifies the output features with high salience in the global view rather than within each single layer, effectively assigning the larger bit-width to the output features that need it most to achieve good accuracy with low memory consumption. We present the sweet spot of quantization configuration of algorithm-system co-design that leads to high accuracy and system efficiency. To address the system challenge, we design a two-step dequantization that makes easy use of the int8 Tensor Core, a fast data-type conversion that significantly reduces dequantization overhead, and a software pipeline that optimally overlaps memory access, dequantization, and the MatMul. Extensive experiments show that with only 10% more bits, the PPL increase can be reduced from about 0.5 in SOTA to within 0.2 for Llama 3.1 70B, while on average MMLU-Pro improves by 0.93 over the SOTA of three popular models. In addition to its superior accuracy, MixLLM also achieves state-of-the-art system efficiency.
14
676827d9bc5af30a798576c2
null
null
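The distinctive step in the MixLLM abstract above is choosing bit-widths per output feature with a single global salience threshold rather than a per-layer one. The sketch below illustrates that global assignment; the mean-absolute-weight salience proxy, the 10% split, and the 8/4-bit choices are illustrative assumptions.

```python
# Sketch of global mixed-precision bit assignment over output features.
import torch

def assign_bits(weight_matrices, high_frac=0.10):
    """weight_matrices: list of (out_features, in_features) tensors.
    Returns one bit-width tensor per layer, indexed by output feature."""
    salience = [w.abs().mean(dim=1) for w in weight_matrices]  # per out-feature
    all_scores = torch.cat(salience)                # pool across ALL layers
    k = int(high_frac * all_scores.numel())
    threshold = all_scores.topk(k).values.min()     # global, not per-layer
    return [4 + 4 * (s >= threshold).int() for s in salience]  # 8 or 4 bits

bits = assign_bits([torch.randn(256, 512), torch.randn(512, 512)])
```

A per-layer threshold would spend the extra bits uniformly; the global view lets layers whose output features are unusually salient claim more of the budget.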
2024-12-22T22:25:16.875000
CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up
https://cdn-thumbnails.h…s/2412.16112.png
5
{ "_id": "64b929308b53fb5dbd059ce3", "avatarUrl": "/avatars/564c44cf45db794747a96f79f30ecd91.svg", "followerCount": 8, "fullname": "Liu Songhua", "isHf": false, "isMod": false, "isPro": true, "name": "Huage001", "type": "user" }
true
null
2412.16112
[ { "_id": "6768d7f797a8f966b3362aa6", "hidden": false, "name": "Songhua Liu", "status": "claimed_verified", "statusLastChangedAt": "2025-02-18T16:41:34.617Z", "user": { "_id": "64b929308b53fb5dbd059ce3", "avatarUrl": "/avatars/564c44cf45db794747a96f79f30ecd91.svg", "fullname": "Liu Songhua", "isPro": true, "type": "user", "user": "Huage001" } }, { "_id": "6768d7f797a8f966b3362aa7", "hidden": false, "name": "Zhenxiong Tan", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:32:44.224Z", "user": { "_id": "674e743be91289226ef9e857", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/FnLwJTTeItGbAUg4IPr7l.jpeg", "fullname": "唐振雄", "isPro": false, "type": "user", "user": "ZhenxiongTang" } }, { "_id": "6768d7f797a8f966b3362aa8", "hidden": false, "name": "Xinchao Wang", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:33:21.470Z", "user": { "_id": "63fc03a50aab060792ffef39", "avatarUrl": "/avatars/9d5b1bb2a41928e08176b703935133ab.svg", "fullname": "Wangxinchao", "isPro": false, "type": "user", "user": "wxcTest" } } ]
2024-12-20T17:57:09
CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up
Diffusion Transformers (DiT) have become a leading architecture in image generation. However, the quadratic complexity of attention mechanisms, which are responsible for modeling token-wise relationships, results in significant latency when generating high-resolution images. To address this issue, in this paper we aim for a linear attention mechanism that reduces the complexity of pre-trained DiTs to linear. We begin our exploration with a comprehensive summary of existing efficient attention mechanisms and identify four key factors crucial for successful linearization of pre-trained DiTs: locality, formulation consistency, high-rank attention maps, and feature integrity. Based on these insights, we introduce a convolution-like local attention strategy termed CLEAR, which limits feature interactions to a local window around each query token, and thus achieves linear complexity. Our experiments indicate that, by fine-tuning the attention layer on merely 10K self-generated samples for 10K iterations, we can effectively transfer knowledge from a pre-trained DiT to a student model with linear complexity, yielding results comparable to the teacher model. Simultaneously, it reduces attention computations by 99.5% and accelerates generation by 6.3 times for generating 8K-resolution images. Furthermore, we investigate favorable properties in the distilled attention layers, such as zero-shot generalization across various models and plugins, and improved support for multi-GPU parallel inference. Models and codes are available here: https://github.com/Huage001/CLEAR.
22
6768d7fa97a8f966b3362bcf
null
null
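The core of CLEAR, per the abstract above, is restricting each query to a local window of keys. The sketch below shows that restriction in a 1-D simplification (the paper operates on 2-D image tokens); for clarity it builds a dense mask, which still costs O(T^2) memory, whereas a real linear-complexity implementation would gather only the windowed keys per query.

```python
# Sketch of CLEAR-style convolution-like local attention (1-D simplification).
import torch

def local_attention(q, k, v, window=8):
    """q, k, v: (B, T, D). Each position attends only to [i-window, i+window]."""
    B, T, D = q.shape
    scores = q @ k.transpose(1, 2) / D**0.5              # (B, T, T)
    idx = torch.arange(T)
    mask = (idx[None, :] - idx[:, None]).abs() > window  # True = outside window
    scores = scores.masked_fill(mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

out = local_attention(torch.randn(1, 64, 32), torch.randn(1, 64, 32),
                      torch.randn(1, 64, 32))
```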
2024-12-22T22:11:03.096000
Parallelized Autoregressive Visual Generation
https://cdn-thumbnails.h…s/2412.15119.png
2
{ "_id": "63ea23b9dedfeebe54d02bdf", "avatarUrl": "/avatars/4d9f9a546aa8c63e277161ea700075c4.svg", "followerCount": 1, "fullname": "Yuqing Wang", "isHf": false, "isMod": false, "isPro": false, "name": "Epiphqny", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/63ea23b9dedfeebe54d02bdf/VcujHexGXpqTL8TZjz9kv.mp4" ]
2412.15119
[ { "_id": "6764fd9b10330426aecdded7", "hidden": false, "name": "Yuqing Wang", "status": "claimed_verified", "statusLastChangedAt": "2024-12-23T11:11:21.169Z", "user": { "_id": "63ea23b9dedfeebe54d02bdf", "avatarUrl": "/avatars/4d9f9a546aa8c63e277161ea700075c4.svg", "fullname": "Yuqing Wang", "isPro": false, "type": "user", "user": "Epiphqny" } }, { "_id": "6764fd9b10330426aecdded8", "hidden": false, "name": "Shuhuai Ren", "status": "claimed_verified", "statusLastChangedAt": "2024-12-23T11:11:23.352Z", "user": { "_id": "60d2e681b8448e1785bbda06", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1624434302056-noauth.jpeg", "fullname": "Shuhuai Ren", "isPro": false, "type": "user", "user": "ShuhuaiRen" } }, { "_id": "6764fd9b10330426aecdded9", "hidden": false, "name": "Zhijie Lin", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:30:34.250Z", "user": { "_id": "64415957bd0c9726529802f6", "avatarUrl": "/avatars/1132d1ee68fb58ec635d57c8175caacd.svg", "fullname": "Zhijie Lin", "isPro": false, "type": "user", "user": "Ikuinen" } }, { "_id": "6764fd9b10330426aecddeda", "hidden": false, "name": "Yujin Han", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6764fd9b10330426aecddedb", "hidden": false, "name": "Haoyuan Guo", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6764fd9b10330426aecddedc", "hidden": false, "name": "Zhenheng Yang", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:24:14.038Z", "user": { "_id": "6421183b69a2c2933882d652", "avatarUrl": "/avatars/66813a8fa22915087cccd4dbfb945ca7.svg", "fullname": "Zhenheng Yang", "isPro": false, "type": "user", "user": "zhenheny" } }, { "_id": "6764fd9b10330426aecddedd", "hidden": false, "name": "Difan Zou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6764fd9b10330426aecddede", "hidden": false, "name": "Jiashi Feng", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:23:58.910Z", "user": { "_id": "67298e44017b96a1d0101dc4", "avatarUrl": "/avatars/1f8ed1a3e911e6a3021087b9371d284c.svg", "fullname": "Jiashi Feng", "isPro": false, "type": "user", "user": "jshfeng" } }, { "_id": "6764fd9b10330426aecddedf", "hidden": false, "name": "Xihui Liu", "status": "claimed_verified", "statusLastChangedAt": "2024-12-23T11:11:19.211Z", "user": { "_id": "65d5ec74cd05bc1eaa125040", "avatarUrl": "/avatars/2de1b1539a86452c2c89570eeb02f5ab.svg", "fullname": "Xihui Liu", "isPro": false, "type": "user", "user": "XihuiLiu" } } ]
2024-12-19T17:59:54
Parallelized Autoregressive Visual Generation
Autoregressive models have emerged as a powerful approach for visual generation but suffer from slow inference speed due to their sequential token-by-token prediction process. In this paper, we propose a simple yet effective approach for parallelized autoregressive visual generation that improves generation efficiency while preserving the advantages of autoregressive modeling. Our key insight is that parallel generation depends on visual token dependencies: tokens with weak dependencies can be generated in parallel, while strongly dependent adjacent tokens are difficult to generate together, as their independent sampling may lead to inconsistencies. Based on this observation, we develop a parallel generation strategy that generates distant tokens with weak dependencies in parallel while maintaining sequential generation for strongly dependent local tokens. Our approach can be seamlessly integrated into standard autoregressive models without modifying the architecture or tokenizer. Experiments on ImageNet and UCF-101 demonstrate that our method achieves a 3.6x speedup with comparable quality and up to 9.5x speedup with minimal quality degradation across both image and video generation tasks. We hope this work will inspire future research in efficient visual generation and unified autoregressive modeling. Project page: https://epiphqny.github.io/PAR-project.
51
6764fda210330426aecde36f
null
null
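To make the schedule concrete, here is a minimal sketch of one plausible reading of the strategy in the Parallelized Autoregressive Visual Generation abstract above: the token grid is partitioned into blocks, each step decodes one token from every block in parallel (distant, weakly dependent positions), and positions within a block are visited sequentially (adjacent, strongly dependent positions). The grid and block sizes are illustrative assumptions, not the paper's settings.

```python
def parallel_schedule(grid: int = 8, region: int = 4) -> list[list[int]]:
    """Each step decodes one token from every (region x region) block in
    parallel; within a block, positions are visited sequentially, so adjacent
    (strongly dependent) tokens never share a step."""
    steps = []
    for ry in range(region):                    # position inside a block: sequential
        for rx in range(region):
            step = []
            for by in range(0, grid, region):   # one token per block: parallel
                for bx in range(0, grid, region):
                    step.append((by + ry) * grid + (bx + rx))
            steps.append(step)
    return steps

schedule = parallel_schedule()
assert sorted(i for s in schedule for i in s) == list(range(64))
print(f"{len(schedule)} steps of {len(schedule[0])} parallel tokens, not 64 sequential ones")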
2024-12-22T21:59:13.541000
Taming Multimodal Joint Training for High-Quality Video-to-Audio Synthesis
https://cdn-thumbnails.h…s/2412.15322.png
2
{ "_id": "63041b541dd5d3c62486c294", "avatarUrl": "/avatars/a5286d562f7b9082730f760e66c3bf29.svg", "followerCount": 25, "fullname": "Ho Kei Cheng", "isHf": false, "isMod": false, "isPro": true, "name": "hkchengrex", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/63041b541dd5d3c62486c294/Isw6SS_03NoIJoDxGgUmd.mp4" ]
2412.15322
[ { "_id": "6768d18826eb881162077014", "hidden": false, "name": "Ho Kei Cheng", "status": "claimed_verified", "statusLastChangedAt": "2024-12-23T11:09:58.995Z", "user": { "_id": "63041b541dd5d3c62486c294", "avatarUrl": "/avatars/a5286d562f7b9082730f760e66c3bf29.svg", "fullname": "Ho Kei Cheng", "isPro": true, "type": "user", "user": "hkchengrex" } }, { "_id": "6768d18826eb881162077015", "hidden": false, "name": "Masato Ishii", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:33:45.656Z", "user": { "_id": "674545617e0ea169b0c471d1", "avatarUrl": "/avatars/a422e3efa5fb1c3f2c6c0997c412b088.svg", "fullname": "Masato Ishii", "isPro": false, "type": "user", "user": "mi141" } }, { "_id": "6768d18826eb881162077016", "hidden": false, "name": "Akio Hayakawa", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:36:17.816Z", "user": { "_id": "66d54244b4396d43c356ea2e", "avatarUrl": "/avatars/c90dde293cbda625db73b35d8f191328.svg", "fullname": "Akio Hayakawa", "isPro": false, "type": "user", "user": "AkHyk" } }, { "_id": "6768d18826eb881162077017", "hidden": false, "name": "Takashi Shibuya", "status": "claimed_verified", "statusLastChangedAt": "2024-12-23T11:09:56.433Z", "user": { "_id": "6650773ca6acfdd2aba7d486", "avatarUrl": "/avatars/d297886ea60dbff98a043caf825820ed.svg", "fullname": "Takashi Shibuya", "isPro": false, "type": "user", "user": "TakashiShibuyaSony" } }, { "_id": "6768d18826eb881162077018", "hidden": false, "name": "Alexander Schwing", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6768d18826eb881162077019", "hidden": false, "name": "Yuki Mitsufuji", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:34:10.186Z", "user": { "_id": "665e32384ecc8a7181634f6d", "avatarUrl": "/avatars/8752f952010540d14f45eac849e91371.svg", "fullname": "Yuki Mitsufuji", "isPro": false, "type": "user", "user": "mittu1204" } } ]
2024-12-19T18:59:55
Taming Multimodal Joint Training for High-Quality Video-to-Audio Synthesis
We propose to synthesize high-quality and synchronized audio, given video and optional text conditions, using a novel multimodal joint training framework MMAudio. In contrast to single-modality training conditioned on (limited) video data only, MMAudio is jointly trained with larger-scale, readily available text-audio data to learn to generate semantically aligned high-quality audio samples. Additionally, we improve audio-visual synchrony with a conditional synchronization module that aligns video conditions with audio latents at the frame level. Trained with a flow matching objective, MMAudio achieves new video-to-audio state-of-the-art among public models in terms of audio quality, semantic alignment, and audio-visual synchronization, while having a low inference time (1.23s to generate an 8s clip) and just 157M parameters. MMAudio also achieves surprisingly competitive performance in text-to-audio generation, showing that joint training does not hinder single-modality performance. Code and demo are available at: https://hkchengrex.github.io/MMAudio
18
6768d18926eb881162077079
null
null
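MMAudio is trained with a flow matching objective; below is a minimal, hedged sketch of such a training step under the common rectified-flow formulation. The `model` interface and latent shapes are assumptions, not MMAudio's actual API.

```python
import torch
import torch.nn.functional as F

def flow_matching_loss(model, audio_latents, video_cond, text_cond):
    """Rectified-flow objective: regress the constant velocity that carries
    noise to data along the straight path x_t = (1 - t) * noise + t * data."""
    noise = torch.randn_like(audio_latents)
    t = torch.rand(audio_latents.shape[0], device=audio_latents.device)
    t_ = t.view(-1, *([1] * (audio_latents.dim() - 1)))  # broadcastable t
    x_t = (1 - t_) * noise + t_ * audio_latents          # point on the path
    velocity = audio_latents - noise                     # regression target
    pred = model(x_t, t, video_cond, text_cond)          # conditional net
    return F.mse_loss(pred, velocity)
```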
2024-12-22T21:39:56.930000
SCOPE: Optimizing Key-Value Cache Compression in Long-context Generation
https://cdn-thumbnails.h…s/2412.13649.png
3
{ "_id": "644a4fbc2166258fccc664bc", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/8k3b44MbhQiWuo6i8BnYl.jpeg", "followerCount": 6, "fullname": "Jialong Wu", "isHf": false, "isMod": false, "isPro": false, "name": "callanwu", "type": "user" }
true
null
2412.13649
[ { "_id": "6768cd61aa9027defefa2ad4", "hidden": false, "name": "Jialong Wu", "status": "claimed_verified", "statusLastChangedAt": "2024-12-23T11:10:00.992Z", "user": { "_id": "644a4fbc2166258fccc664bc", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/8k3b44MbhQiWuo6i8BnYl.jpeg", "fullname": "Jialong Wu", "isPro": false, "type": "user", "user": "callanwu" } }, { "_id": "6768cd61aa9027defefa2ad5", "hidden": false, "name": "Zhenglin Wang", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:36:19.930Z", "user": { "_id": "6643261b8876db14227eeb19", "avatarUrl": "/avatars/67428c9e37a2273697c0547e1783ec6b.svg", "fullname": "Zhenglin Wang", "isPro": false, "type": "user", "user": "wzl0228" } }, { "_id": "6768cd61aa9027defefa2ad6", "hidden": false, "name": "Linhai Zhang", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:30:58.318Z", "user": { "_id": "66596d64ce1b2838888f4401", "avatarUrl": "/avatars/d8d0d116a3198571c7e86f09871c2d76.svg", "fullname": "Linhai Zhang", "isPro": false, "type": "user", "user": "lzhang472" } }, { "_id": "6768cd61aa9027defefa2ad7", "hidden": false, "name": "Yilong Lai", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6768cd61aa9027defefa2ad8", "hidden": false, "name": "Yulan He", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6768cd61aa9027defefa2ad9", "hidden": false, "name": "Deyu Zhou", "status": "admin_assigned", "statusLastChangedAt": "2024-12-23T11:31:24.902Z", "user": { "_id": "64e821f2bddc5b1072b15c2e", "avatarUrl": "/avatars/618b5a48f2fa62daff4e1922a9aa9e8b.svg", "fullname": "zhoudeyu", "isPro": false, "type": "user", "user": "zhoudeyu" } } ]
2024-12-18T09:27:33
SCOPE: Optimizing Key-Value Cache Compression in Long-context Generation
The Key-Value (KV) cache has become a bottleneck for LLMs in long-context generation. Despite numerous efforts in this area, optimization of the decoding phase is generally ignored. We argue that such optimization is crucial, especially for long-output generation tasks, based on two observations: (i) excessive compression during the prefill phase, which requires the full context for specific tasks, impairs comprehension of the reasoning task; (ii) heavy hitters deviate over the course of reasoning tasks with long outputs. We therefore introduce SCOPE, a simple yet efficient framework that performs KV cache optimization separately for the prefill and decoding phases. Specifically, the KV cache from the prefill phase is preserved to maintain the essential information, while a novel sliding-based strategy selects essential heavy hitters for the decoding phase. Memory usage and memory transfer are further optimized with adaptive and discontinuous strategies. Extensive experiments on LongGenBench show the effectiveness and generalization of SCOPE, as well as its compatibility as a plug-in to other prefill-only KV compression methods.
20
6768cd62aa9027defefa2b1b
null
null
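As a rough illustration of SCOPE's phase separation, the sketch below keeps the prefill cache intact and budgets the decode-phase cache with a recent window plus the top-k "heavy hitters" by accumulated attention. The function name, budgets, and scoring input are illustrative; the paper's adaptive and discontinuous variants are not reproduced.

```python
import torch

def select_decode_cache(attn_mass, prefill_len, window=64, heavy_k=32):
    """attn_mass: [seq_len] accumulated attention received by each cached
    token. Returns sorted indices of tokens to keep in the KV cache."""
    seq_len = attn_mass.shape[0]
    keep = set(range(prefill_len))                                    # prefill: preserved
    keep |= set(range(max(prefill_len, seq_len - window), seq_len))   # recency window
    candidates = attn_mass[prefill_len:max(prefill_len, seq_len - window)]
    if candidates.numel() > 0:                                        # sliding heavy hitters
        top = torch.topk(candidates, min(heavy_k, candidates.numel())).indices
        keep |= {prefill_len + int(i) for i in top}
    return sorted(keep)
```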
2024-12-20T15:56:56.140000
AV-Link: Temporally-Aligned Diffusion Features for Cross-Modal Audio-Video Generation
https://cdn-thumbnails.h…s/2412.15191.png
2
{ "_id": "64276311eb9a0ed86180715b", "avatarUrl": "/avatars/76f933cd549f10e5e2db379de235d304.svg", "followerCount": 1, "fullname": "Aliaksandr Siarohin", "isHf": false, "isMod": false, "isPro": false, "name": "aliaksandr-siarohin", "type": "user" }
false
null
2412.15191
[ { "_id": "6765d9f5bde4bc579f5b0603", "hidden": false, "name": "Moayed Haji-Ali", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6765d9f5bde4bc579f5b0604", "hidden": false, "name": "Willi Menapace", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6765d9f5bde4bc579f5b0605", "hidden": false, "name": "Aliaksandr Siarohin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6765d9f5bde4bc579f5b0606", "hidden": false, "name": "Ivan Skorokhodov", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6765d9f5bde4bc579f5b0607", "hidden": false, "name": "Alper Canberk", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6765d9f5bde4bc579f5b0608", "hidden": false, "name": "Kwot Sin Lee", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6765d9f5bde4bc579f5b0609", "hidden": false, "name": "Vicente Ordonez", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6765d9f5bde4bc579f5b060a", "hidden": false, "name": "Sergey Tulyakov", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-19T18:57:21
AV-Link: Temporally-Aligned Diffusion Features for Cross-Modal Audio-Video Generation
We propose AV-Link, a unified framework for Video-to-Audio and Audio-to-Video generation that leverages the activations of frozen video and audio diffusion models for temporally-aligned cross-modal conditioning. The key to our framework is a Fusion Block that enables bidirectional information exchange between our backbone video and audio diffusion models through a temporally-aligned self-attention operation. Unlike prior work that uses feature extractors pretrained for other tasks for the conditioning signal, AV-Link can directly leverage features obtained by the complementary modality in a single framework, i.e., video features to generate audio, or audio features to generate video. We extensively evaluate our design choices and demonstrate the ability of our method to achieve synchronized and high-quality audiovisual content, showcasing its potential for applications in immersive media generation. Project Page: snap-research.github.io/AVLink/
5
6765d9f9bde4bc579f5b078b
null
null
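A minimal sketch of a Fusion Block in the spirit of the AV-Link abstract above: project video and audio activations to a shared width and mix them with one joint self-attention pass so information flows in both directions. All dimensions are assumptions, and the temporal alignment machinery (e.g., shared time embeddings) is omitted for brevity.

```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    def __init__(self, video_dim=1024, audio_dim=768, dim=512, heads=8):
        super().__init__()
        self.to_v = nn.Linear(video_dim, dim)
        self.to_a = nn.Linear(audio_dim, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, video_feats, audio_feats):
        """video_feats: [B, Tv, video_dim]; audio_feats: [B, Ta, audio_dim].
        Returns refined (video, audio) features after joint self-attention."""
        v, a = self.to_v(video_feats), self.to_a(audio_feats)
        x = self.norm(torch.cat([v, a], dim=1))    # one shared token sequence
        y, _ = self.attn(x, x, x)                  # bidirectional exchange
        x = x + y
        return x[:, : v.shape[1]], x[:, v.shape[1]:]
```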
2024-12-20T09:51:46.571000
PixelMan: Consistent Object Editing with Diffusion Models via Pixel Manipulation and Generation
https://cdn-thumbnails.h…s/2412.14283.png
4
{ "_id": "64ac49ccb7d86b40fd60a8dd", "avatarUrl": "/avatars/e9f5482cffdd1d5917523a496a3805f0.svg", "followerCount": 1, "fullname": "Liyao Jiang", "isHf": false, "isMod": false, "isPro": false, "name": "LiyaoJiang", "type": "user" }
true
null
2412.14283
[ { "_id": "6765152de07adde9c961fabb", "hidden": false, "name": "Liyao Jiang", "status": "claimed_verified", "statusLastChangedAt": "2024-12-20T08:32:26.476Z", "user": { "_id": "64ac49ccb7d86b40fd60a8dd", "avatarUrl": "/avatars/e9f5482cffdd1d5917523a496a3805f0.svg", "fullname": "Liyao Jiang", "isPro": false, "type": "user", "user": "LiyaoJiang" } }, { "_id": "6765152de07adde9c961fabc", "hidden": false, "name": "Negar Hassanpour", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6765152de07adde9c961fabd", "hidden": false, "name": "Mohammad Salameh", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6765152de07adde9c961fabe", "hidden": false, "name": "Mohammadreza Samadi", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6765152de07adde9c961fabf", "hidden": false, "name": "Jiao He", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6765152de07adde9c961fac0", "hidden": false, "name": "Fengyu Sun", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6765152de07adde9c961fac1", "hidden": false, "name": "Di Niu", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-18T19:24:15
PixelMan: Consistent Object Editing with Diffusion Models via Pixel Manipulation and Generation
Recent research explores the potential of Diffusion Models (DMs) for consistent object editing, which aims to modify object position, size, and composition, etc., while preserving the consistency of objects and background without changing their texture and attributes. Current inference-time methods often rely on DDIM inversion, which inherently compromises efficiency and the achievable consistency of edited images. Recent methods also utilize energy guidance, which iteratively updates the predicted noise and can drive the latents away from the original image, resulting in distortions. In this paper, we propose PixelMan, an inversion-free and training-free method for achieving consistent object editing via Pixel Manipulation and generation. We directly create a duplicate copy of the source object at the target location in pixel space, then introduce an efficient sampling approach to iteratively harmonize the manipulated object into the target location and inpaint its original location, while ensuring image consistency by anchoring the edited image to the pixel-manipulated image and by introducing various consistency-preserving optimization techniques during inference. Experimental evaluations on benchmark datasets, along with extensive visual comparisons, show that in as few as 16 inference steps, PixelMan outperforms a range of state-of-the-art training-based and training-free methods (usually requiring 50 steps) on multiple consistent object editing tasks.
3
67651533e07adde9c961fce3
null
null
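The "pixel manipulation" starting point of PixelMan can be illustrated in a few lines of NumPy: duplicate the masked source object at a target offset directly in pixel space. The subsequent harmonization and inpainting sampling loop, which is the paper's contribution, is not reproduced here.

```python
import numpy as np

def duplicate_object(image, mask, dy, dx):
    """image: [H, W, 3] uint8; mask: [H, W] bool object mask;
    (dy, dx): offset of the target location in pixels."""
    out = image.copy()
    ys, xs = np.nonzero(mask)
    ty, tx = ys + dy, xs + dx
    ok = (ty >= 0) & (ty < image.shape[0]) & (tx >= 0) & (tx < image.shape[1])
    out[ty[ok], tx[ok]] = image[ys[ok], xs[ok]]   # paste object pixels in place
    return out
```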
2024-12-20T08:57:11.189000
DateLogicQA: Benchmarking Temporal Biases in Large Language Models
https://cdn-thumbnails.h…s/2412.13377.png
2
{ "_id": "60394599033b61166496163b", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1614366097007-noauth.jpeg", "followerCount": 21, "fullname": "Gagan Bhatia", "isHf": false, "isMod": false, "isPro": false, "name": "gagan3012", "type": "user" }
true
null
2412.13377
[ { "_id": "676577a8abcd70b404ad67ca", "hidden": false, "name": "Gagan Bhatia", "status": "claimed_verified", "statusLastChangedAt": "2024-12-20T15:42:23.951Z", "user": { "_id": "60394599033b61166496163b", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1614366097007-noauth.jpeg", "fullname": "Gagan Bhatia", "isPro": false, "type": "user", "user": "gagan3012" } }, { "_id": "676577a8abcd70b404ad67cb", "hidden": false, "name": "MingZe Tang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676577a8abcd70b404ad67cc", "hidden": false, "name": "Cristina Mahanta", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "676577a8abcd70b404ad67cd", "hidden": false, "name": "Madiha Kazi", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-17T23:25:47
DateLogicQA: Benchmarking Temporal Biases in Large Language Models
This paper introduces DateLogicQA, a benchmark with 190 questions covering diverse date formats, temporal contexts, and reasoning types. We propose the Semantic Integrity Metric to assess tokenization quality and analyse two biases: Representation-Level Bias, affecting embeddings, and Logical-Level Bias, influencing reasoning outputs. Our findings provide a comprehensive evaluation of LLMs' capabilities and limitations in temporal reasoning, highlighting key challenges in handling temporal data accurately. The GitHub repository for our work is available at https://github.com/gagan3012/EAIS-Temporal-Bias
2
676577aaabcd70b404ad687b
null
null
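In the spirit of DateLogicQA's tokenization analysis, a quick probe with a standard tokenizer shows how differently the same date fragments across formats; this is a simplification for illustration, not the paper's Semantic Integrity Metric.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
for date in ["2024-12-19", "19/12/2024", "December 19, 2024"]:
    print(f"{date!r:>20} -> {tok.tokenize(date)}")
# Heavily fragmented digit pieces are one symptom of representation-level bias.
```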
2024-12-20T05:35:43.828000
Move-in-2D: 2D-Conditioned Human Motion Generation
https://cdn-thumbnails.h…s/2412.13185.png
2
{ "_id": "667b2ee8e005e1dbcc76e2e2", "avatarUrl": "/avatars/8b2f5f997f0ed5ae9f4f274941933c40.svg", "followerCount": null, "fullname": "Hsin-Ping Huang", "isHf": false, "isMod": false, "isPro": false, "name": "hsinh", "type": "user" }
true
null
2412.13185
[ { "_id": "6762c4d82faaf11234a44936", "hidden": false, "name": "Hsin-Ping Huang", "status": "claimed_verified", "statusLastChangedAt": "2024-12-19T18:02:01.287Z", "user": { "_id": "667b2ee8e005e1dbcc76e2e2", "avatarUrl": "/avatars/8b2f5f997f0ed5ae9f4f274941933c40.svg", "fullname": "Hsin-Ping Huang", "isPro": false, "type": "user", "user": "hsinh" } }, { "_id": "6762c4d82faaf11234a44937", "hidden": false, "name": "Yang Zhou", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6762c4d82faaf11234a44938", "hidden": false, "name": "Jui-Hsien Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6762c4d82faaf11234a44939", "hidden": false, "name": "Difan Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6762c4d82faaf11234a4493a", "hidden": false, "name": "Feng Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6762c4d82faaf11234a4493b", "hidden": false, "name": "Ming-Hsuan Yang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6762c4d82faaf11234a4493c", "hidden": false, "name": "Zhan Xu", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-17T18:58:07
Move-in-2D: 2D-Conditioned Human Motion Generation
Generating realistic human videos remains a challenging task, with the most effective methods currently relying on a human motion sequence as a control signal. Existing approaches often reuse motion extracted from other videos, which restricts applications to specific motion types and global scene matching. We propose Move-in-2D, a novel approach to generate human motion sequences conditioned on a scene image, allowing for diverse motion that adapts to different scenes. Our approach utilizes a diffusion model that accepts both a scene image and a text prompt as inputs, producing a motion sequence tailored to the scene. To train this model, we collect a large-scale video dataset featuring single-human activities, annotating each video with the corresponding human motion as the target output. Experiments demonstrate that our method effectively predicts human motion that aligns with the scene image after projection. Furthermore, we show that the generated motion sequence improves human motion quality in video synthesis tasks.
2
6762c4d92faaf11234a449a9
null
null
2024-12-19T23:42:38.162000
LeviTor: 3D Trajectory Oriented Image-to-Video Synthesis
https://cdn-thumbnails.h…s/2412.15214.png
3
{ "_id": "64981bea09cea550852652af", "avatarUrl": "/avatars/df528e9008972c8e5ae4d278e617476c.svg", "followerCount": 3, "fullname": "Qiuyu Wang", "isHf": false, "isMod": false, "isPro": false, "name": "qiuyuu", "type": "user" }
true
null
2412.15214
[ { "_id": "6764f548cee1fdbd9765e9bc", "hidden": false, "name": "Hanlin Wang", "status": "claimed_verified", "statusLastChangedAt": "2024-12-20T08:32:36.964Z", "user": { "_id": "665f059a8947302aa2c63afe", "avatarUrl": "/avatars/50f560285946532321a0bd526494148d.svg", "fullname": "hanlin wang", "isPro": false, "type": "user", "user": "hlwang06" } }, { "_id": "6764f548cee1fdbd9765e9bd", "hidden": false, "name": "Hao Ouyang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6764f548cee1fdbd9765e9be", "hidden": false, "name": "Qiuyu Wang", "status": "claimed_verified", "statusLastChangedAt": "2024-12-20T08:32:28.937Z", "user": { "_id": "64981bea09cea550852652af", "avatarUrl": "/avatars/df528e9008972c8e5ae4d278e617476c.svg", "fullname": "Qiuyu Wang", "isPro": false, "type": "user", "user": "qiuyuu" } }, { "_id": "6764f548cee1fdbd9765e9bf", "hidden": false, "name": "Wen Wang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6764f548cee1fdbd9765e9c0", "hidden": false, "name": "Ka Leong Cheng", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:26:43.630Z", "user": { "_id": "64acd2ec39fcfebff8c79c00", "avatarUrl": "/avatars/9419384846b92182f2c47ce2fbd0f8d3.svg", "fullname": "Ka Leong Cheng", "isPro": false, "type": "user", "user": "felixcheng97" } }, { "_id": "6764f548cee1fdbd9765e9c1", "hidden": false, "name": "Qifeng Chen", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:26:37.950Z", "user": { "_id": "6467b121e7a6a374fd19b44b", "avatarUrl": "/avatars/3f2874d58986d651aef55e3408b05700.svg", "fullname": "Qifeng Chen", "isPro": false, "type": "user", "user": "cqf" } }, { "_id": "6764f548cee1fdbd9765e9c2", "hidden": false, "name": "Yujun Shen", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6764f548cee1fdbd9765e9c3", "hidden": false, "name": "Limin Wang", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:26:23.360Z", "user": { "_id": "62c77f4352d8ae531f5511f9", "avatarUrl": "/avatars/50198ccb02ccd286975a4613fbabee28.svg", "fullname": "Limin Wang", "isPro": false, "type": "user", "user": "lmwang" } } ]
2024-12-19T18:59:56
LeviTor: 3D Trajectory Oriented Image-to-Video Synthesis
The intuitive nature of drag-based interaction has led to its growing adoption for controlling object trajectories in image-to-video synthesis. Still, existing methods that perform dragging in the 2D space usually face ambiguity when handling out-of-plane movements. In this work, we augment the interaction with a new dimension, i.e., the depth dimension, such that users are allowed to assign a relative depth to each point on the trajectory. That way, our new interaction paradigm not only inherits the convenience of 2D dragging but also facilitates trajectory control in the 3D space, broadening the scope of creativity. We propose a pioneering method for 3D trajectory control in image-to-video synthesis by abstracting object masks into a few cluster points. These points, accompanied by the depth information and the instance information, are finally fed into a video diffusion model as the control signal. Extensive experiments validate the effectiveness of our approach, dubbed LeviTor, in precisely manipulating object movements when producing photo-realistic videos from static images. Project page: https://ppetrichor.github.io/levitor.github.io/
15
6764f549cee1fdbd9765ea31
null
null
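LeviTor's control signal abstracts an object mask into a few cluster points; below is a hedged sketch of that step using k-means over mask pixel coordinates. The function name and the choice of k are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def mask_to_control_points(mask: np.ndarray, k: int = 8) -> np.ndarray:
    """mask: [H, W] bool object mask. Returns up to k (y, x) cluster centers,
    to be paired with depth and instance info as the diffusion control signal."""
    coords = np.stack(np.nonzero(mask), axis=1).astype(np.float32)  # [N, 2]
    assert len(coords) > 0, "empty mask"
    k = min(k, len(coords))
    return KMeans(n_clusters=k, n_init=10).fit(coords).cluster_centers_
```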
2024-12-19T22:27:39.645000
AceMath: Advancing Frontier Math Reasoning with Post-Training and Reward Modeling
https://cdn-thumbnails.h…s/2412.15084.png
2
{ "_id": "62bc9d90e81dfd65cced9316", "avatarUrl": "/avatars/05df14cd1fdbc7d6a80d2960a05a94f0.svg", "followerCount": 3, "fullname": "Yang Chen", "isHf": false, "isMod": false, "isPro": false, "name": "ychenNLP", "type": "user" }
true
null
2412.15084
[ { "_id": "6764e3e30afbb34519fd2018", "hidden": false, "name": "Zihan Liu", "status": "extracted_pending", "statusLastChangedAt": "2024-12-20T03:26:28.621Z", "user": { "_id": "65f33b1c9f7970ccc0234cbf", "avatarUrl": "/avatars/99fbab303912e3674663251c04279907.svg", "fullname": "Zihan Liu", "isPro": false, "type": "user", "user": "zihanliu" } }, { "_id": "6764e3e30afbb34519fd2019", "hidden": false, "name": "Yang Chen", "status": "claimed_verified", "statusLastChangedAt": "2024-12-20T08:32:40.034Z", "user": { "_id": "62bc9d90e81dfd65cced9316", "avatarUrl": "/avatars/05df14cd1fdbc7d6a80d2960a05a94f0.svg", "fullname": "Yang Chen", "isPro": false, "type": "user", "user": "ychenNLP" } }, { "_id": "6764e3e30afbb34519fd201a", "hidden": false, "name": "Mohammad Shoeybi", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:27:29.660Z", "user": { "_id": "6641544c695975af2cbd0da6", "avatarUrl": "/avatars/0ad3c18dcba585259b064fe9b00a07ce.svg", "fullname": "Mohammad Shoeybi", "isPro": false, "type": "user", "user": "shoeybi" } }, { "_id": "6764e3e30afbb34519fd201b", "hidden": false, "name": "Bryan Catanzaro", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:27:23.912Z", "user": { "_id": "6311021788942700629e6247", "avatarUrl": "/avatars/e7adc1632b76e80e7e4a590033d1c20a.svg", "fullname": "Bryan Catanzaro", "isPro": false, "type": "user", "user": "ctnzr" } }, { "_id": "6764e3e30afbb34519fd201c", "hidden": false, "name": "Wei Ping", "status": "extracted_pending", "statusLastChangedAt": "2024-12-20T03:26:28.621Z", "user": { "_id": "663ee43bfeeb49803537da98", "avatarUrl": "/avatars/17c3e9c435cc36fb04b4589e6176a243.svg", "fullname": "Wei Ping", "isPro": false, "type": "user", "user": "wping" } } ]
2024-12-19T17:29:44
AceMath: Advancing Frontier Math Reasoning with Post-Training and Reward Modeling
In this paper, we introduce AceMath, a suite of frontier math models that excel in solving complex math problems, along with highly effective reward models capable of evaluating generated solutions and reliably identifying the correct ones. To develop the instruction-tuned math models, we propose a supervised fine-tuning (SFT) process that first achieves competitive performance across general domains, followed by targeted fine-tuning for the math domain using a carefully curated set of prompts and synthetically generated responses. The resulting model, AceMath-72B-Instruct, greatly outperforms Qwen2.5-Math-72B-Instruct, GPT-4o, and Claude-3.5 Sonnet. To develop a math-specialized reward model, we first construct AceMath-RewardBench, a comprehensive and robust benchmark for evaluating math reward models across diverse problems and difficulty levels. We then present a systematic approach to building our math reward models. The resulting model, AceMath-72B-RM, consistently outperforms state-of-the-art reward models. Furthermore, when combining AceMath-72B-Instruct with AceMath-72B-RM, we achieve the highest average rm@8 score across the math reasoning benchmarks. We will release model weights, training data, and evaluation benchmarks at: https://research.nvidia.com/labs/adlr/acemath
13
6764e3e40afbb34519fd206d
null
null
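The rm@8 metric mentioned in the AceMath abstract scores n sampled solutions with the reward model and keeps the best one; a minimal sketch follows, with `generate` and `reward` as assumed interfaces.

```python
def best_of_n(problem, generate, reward, n=8):
    """Sample n candidate solutions, score each with the reward model, and
    return the top-scoring one; rm@n is the accuracy of this pick averaged
    over problems."""
    candidates = [generate(problem) for _ in range(n)]
    scores = [reward(problem, c) for c in candidates]
    return candidates[max(range(n), key=scores.__getitem__)]
```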
2024-12-19T22:27:13.562000
Descriptive Caption Enhancement with Visual Specialists for Multimodal Perception
https://cdn-thumbnails.h…s/2412.14233.png
2
{ "_id": "64297212e5f33939cf3a3d9b", "avatarUrl": "/avatars/bd21759ab5d7e526b99fcb7ed813ffb3.svg", "followerCount": null, "fullname": "yanpeng_sun", "isHf": false, "isMod": false, "isPro": false, "name": "syp115", "type": "user" }
true
null
2412.14233
[ { "_id": "6764e3c22086097d58dc7fc4", "hidden": false, "name": "Yanpeng Sun", "status": "claimed_verified", "statusLastChangedAt": "2024-12-20T08:32:42.316Z", "user": { "_id": "64297212e5f33939cf3a3d9b", "avatarUrl": "/avatars/bd21759ab5d7e526b99fcb7ed813ffb3.svg", "fullname": "yanpeng_sun", "isPro": false, "type": "user", "user": "syp115" } }, { "_id": "6764e3c22086097d58dc7fc5", "hidden": false, "name": "Jing Hao", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6764e3c22086097d58dc7fc6", "hidden": false, "name": "Ke Zhu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6764e3c22086097d58dc7fc7", "hidden": false, "name": "Jiang-Jiang Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6764e3c22086097d58dc7fc8", "hidden": false, "name": "Yuxiang Zhao", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:33:04.173Z", "user": { "_id": "641966f6e8a6183a8caa3145", "avatarUrl": "/avatars/a7d7096fed7e49fcc04f2ef494e9c381.svg", "fullname": "yuxiang zhao", "isPro": false, "type": "user", "user": "cloud913" } }, { "_id": "6764e3c22086097d58dc7fc9", "hidden": true, "name": "Xiaofan Li", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:33:18.741Z", "user": { "_id": "65f425f9bfb2f1451f237827", "avatarUrl": "/avatars/36b0bce9364d88f10081847befd29787.svg", "fullname": "Xiaofan Li", "isPro": false, "type": "user", "user": "FuNz" } }, { "_id": "6764e3c22086097d58dc7fca", "hidden": false, "name": "Gang Zhang", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6764e3c22086097d58dc7fcb", "hidden": false, "name": "Zechao Li", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6764e3c22086097d58dc7fcc", "hidden": false, "name": "Jingdong Wang", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-18T18:45:43
Descriptive Caption Enhancement with Visual Specialists for Multimodal Perception
Training Large Multimodality Models (LMMs) relies on descriptive image captions that connect images and language. Existing methods either distill captions from LMMs or construct them from internet images or human annotation. We propose to leverage off-the-shelf visual specialists, which were originally trained on annotated images for tasks other than image captioning, to enhance image captions. Our approach, named DCE, explores object low-level and fine-grained attributes (e.g., depth, emotion, and fine-grained categories) and object relations (e.g., relative location and human-object interaction (HOI)), and combines these attributes into the descriptive caption. Experiments demonstrate that such visual specialists are able to improve performance on visual understanding tasks as well as on reasoning that benefits from more accurate visual understanding. We will release the source code and the pipeline so that other visual specialists can easily be incorporated into it. The complete source code of the DCE pipeline and datasets will be available at https://github.com/syp2ysy/DCE.
6
6764e3c32086097d58dc8000
null
null
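At a high level, the DCE recipe runs visual specialists and folds their outputs into the caption; the toy sketch below illustrates that flow with placeholder specialist interfaces, not the released pipeline.

```python
def enhance_caption(image, base_captioner, specialists):
    """specialists: dict mapping a name (e.g. 'depth', 'emotion', 'HOI') to a
    callable(image) that returns a short textual finding."""
    parts = [base_captioner(image)]
    parts += [f"{name}: {run(image)}" for name, run in specialists.items()]
    return " ".join(parts)
```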
2024-12-19T22:24:46.171000
DI-PCG: Diffusion-based Efficient Inverse Procedural Content Generation for High-quality 3D Asset Creation
https://cdn-thumbnails.h…s/2412.15200.png
2
{ "_id": "66054d519af76210c2ee4b8e", "avatarUrl": "/avatars/c1854765b1e45ec33602b2cb9443f82a.svg", "followerCount": 1, "fullname": "Wang Zhao", "isHf": false, "isMod": false, "isPro": true, "name": "thuzhaowang", "type": "user" }
true
null
2412.15200
[ { "_id": "6764da71bdc5692a8d6bccf6", "hidden": false, "name": "Wang Zhao", "status": "claimed_verified", "statusLastChangedAt": "2024-12-20T08:32:57.404Z", "user": { "_id": "66054d519af76210c2ee4b8e", "avatarUrl": "/avatars/c1854765b1e45ec33602b2cb9443f82a.svg", "fullname": "Wang Zhao", "isPro": true, "type": "user", "user": "thuzhaowang" } }, { "_id": "6764da71bdc5692a8d6bccf7", "hidden": false, "name": "Yan-Pei Cao", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:27:44.668Z", "user": { "_id": "638066faf022c8a5803f7eb8", "avatarUrl": "/avatars/4cfd699c3f6c5461b12b7dc5e3fe183d.svg", "fullname": "Yanpei Cao", "isPro": false, "type": "user", "user": "pookiefoof" } }, { "_id": "6764da71bdc5692a8d6bccf8", "hidden": false, "name": "Jiale Xu", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:28:27.765Z", "user": { "_id": "62c695829db11473f08af1cd", "avatarUrl": "/avatars/cacb54077892a44aef81454dc107df4f.svg", "fullname": "Jiale Xu", "isPro": true, "type": "user", "user": "bluestyle97" } }, { "_id": "6764da71bdc5692a8d6bccf9", "hidden": false, "name": "Yuejiang Dong", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6764da71bdc5692a8d6bccfa", "hidden": false, "name": "Ying Shan", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:28:06.210Z", "user": { "_id": "63ca3ddc04c979828310bfcb", "avatarUrl": "/avatars/615e0d8622950b4408b40d550f02a894.svg", "fullname": "Ying Shan", "isPro": false, "type": "user", "user": "yshan2u" } } ]
2024-12-19T18:58:46
DI-PCG: Diffusion-based Efficient Inverse Procedural Content Generation for High-quality 3D Asset Creation
Procedural Content Generation (PCG) is powerful in creating high-quality 3D content, yet controlling it to produce desired shapes is difficult and often requires extensive parameter tuning. Inverse Procedural Content Generation aims to automatically find the best parameters under the input condition. However, existing sampling-based and neural network-based methods still require numerous sample iterations or offer limited controllability. In this work, we present DI-PCG, a novel and efficient method for Inverse PCG from general image conditions. At its core is a lightweight diffusion transformer model, where PCG parameters are directly treated as the denoising target and the observed images serve as conditions to control parameter generation. DI-PCG is efficient and effective. With only 7.6M network parameters and 30 GPU hours to train, it demonstrates superior performance in recovering parameters accurately and generalizing well to in-the-wild images. Quantitative and qualitative experimental results validate the effectiveness of DI-PCG in inverse PCG and image-to-3D generation tasks. DI-PCG offers a promising approach for efficient inverse PCG and represents a valuable exploration step towards a 3D generation path that models how to construct a 3D asset using parametric models.
9
6764da76bdc5692a8d6bcedf
null
null
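DI-PCG treats PCG parameters as the denoising target of an image-conditioned diffusion model; the sketch below shows that framing with a toy MLP denoiser. Widths, the conditioning encoder, and the noise schedule are all assumptions; the paper uses a diffusion transformer.

```python
import torch
import torch.nn as nn

class ParamDenoiser(nn.Module):
    """Predicts the noise added to a PCG parameter vector, conditioned on
    image features and the diffusion timestep."""
    def __init__(self, n_params=32, cond_dim=768, width=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params + cond_dim + 1, width), nn.SiLU(),
            nn.Linear(width, width), nn.SiLU(),
            nn.Linear(width, n_params),
        )

    def forward(self, noisy_params, t, image_cond):
        # noisy_params: [B, n_params]; t: [B]; image_cond: [B, cond_dim]
        x = torch.cat([noisy_params, image_cond, t[:, None]], dim=-1)
        return self.net(x)
```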
2024-12-19T22:23:57.229000
How to Synthesize Text Data without Model Collapse?
https://cdn-thumbnails.h…s/2412.14689.png
4
{ "_id": "649e6761f9134a06ed1e0cea", "avatarUrl": "/avatars/00b5dcb744c54a4aa18fe08efd70d6ff.svg", "followerCount": 6, "fullname": "Daixuan Cheng", "isHf": false, "isMod": false, "isPro": false, "name": "daixuancheng", "type": "user" }
true
null
2412.14689
[ { "_id": "6764e1dfc51db09f8c3cd75b", "hidden": false, "name": "Xuekai Zhu", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:20:24.860Z", "user": { "_id": "647ffddeb82adfa7cc1a10d9", "avatarUrl": "/avatars/26aa168d6b2068298ebb16584aa52b6c.svg", "fullname": "zhu", "isPro": false, "type": "user", "user": "xuekai" } }, { "_id": "6764e1dfc51db09f8c3cd75c", "hidden": false, "name": "Daixuan Cheng", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:20:32.479Z", "user": { "_id": "649e6761f9134a06ed1e0cea", "avatarUrl": "/avatars/00b5dcb744c54a4aa18fe08efd70d6ff.svg", "fullname": "Daixuan Cheng", "isPro": false, "type": "user", "user": "daixuancheng" } }, { "_id": "6764e1dfc51db09f8c3cd75d", "hidden": false, "name": "Hengli Li", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:20:41.812Z", "user": { "_id": "63256836ff539edeea8a8660", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1663395861151-noauth.png", "fullname": "Li Hengli", "isPro": false, "type": "user", "user": "Hengli" } }, { "_id": "6764e1dfc51db09f8c3cd75e", "hidden": false, "name": "Kaiyan Zhang", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:20:57.241Z", "user": { "_id": "60bc94cd85a3ab33829b6211", "avatarUrl": "/avatars/b57d36c7577fbbb42ea5b963eef4144a.svg", "fullname": "Kaiyan Zhang", "isPro": false, "type": "user", "user": "iseesaw" } }, { "_id": "6764e1dfc51db09f8c3cd75f", "hidden": true, "name": "Ermo Hua", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:36:29.222Z", "user": { "_id": "6445fa2ffc22e309d78bef3e", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6445fa2ffc22e309d78bef3e/FQaINLd0PjgY9EnK_APRk.jpeg", "fullname": "Messi Hua", "isPro": false, "type": "user", "user": "Messi-Hua" } }, { "_id": "6764e1dfc51db09f8c3cd760", "hidden": false, "name": "Xingtai Lv", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:21:08.043Z", "user": { "_id": "663f07d029be04778ba97871", "avatarUrl": "/avatars/fb7c9d4a2c537d918a3267e7cbc03f04.svg", "fullname": "Xingtai Lv", "isPro": false, "type": "user", "user": "XingtaiHF" } }, { "_id": "6764e1dfc51db09f8c3cd761", "hidden": false, "name": "Ning Ding", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:21:13.963Z", "user": { "_id": "60cf4bcb1ce3775ebb86e5d5", "avatarUrl": "/avatars/12bcd18d215abf91f297f93007733148.svg", "fullname": "Ning Ding", "isPro": false, "type": "user", "user": "stingning" } }, { "_id": "6764e1dfc51db09f8c3cd762", "hidden": false, "name": "Zhouhan Lin", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6764e1dfc51db09f8c3cd763", "hidden": false, "name": "Zilong Zheng", "status": "extracted_pending", "statusLastChangedAt": "2024-12-20T03:17:52.795Z", "user": { "_id": "63a95a6a7930fa8c7dd63d4e", "avatarUrl": "/avatars/d9d0420f7ddfe2f3a7e029fb05f1c89f.svg", "fullname": "Zilong Zheng", "isPro": false, "type": "user", "user": "zlzheng" } }, { "_id": "6764e1dfc51db09f8c3cd764", "hidden": false, "name": "Bowen Zhou", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:21:41.799Z", "user": { "_id": "669f614b59adf5b56e05bce3", "avatarUrl": "/avatars/ffd4189efbceb0e63a03db273065a44b.svg", "fullname": "BowenZhou", "isPro": false, "type": "user", "user": "bowenZhou" } } ]
2024-12-19T09:43:39
How to Synthesize Text Data without Model Collapse?
Model collapse in synthetic data indicates that iterative training on self-generated data leads to a gradual decline in performance. With the proliferation of AI models, synthetic data will fundamentally reshape the web data ecosystem. Future GPT-{n} models will inevitably be trained on a blend of synthetic and human-produced data. In this paper, we focus on two questions: what is the impact of synthetic data on language model training, and how can we synthesize data without model collapse? We first pre-train language models across different proportions of synthetic data, revealing a negative correlation between the proportion of synthetic data and model performance. We further conduct statistical analysis on synthetic data to uncover the distributional shift phenomenon and the over-concentration of n-gram features. Inspired by these findings, we propose token editing on human-produced data to obtain semi-synthetic data. As a proof of concept, we theoretically demonstrate that token-level editing can prevent model collapse, as the test error is constrained by a finite upper bound. We conduct extensive experiments on pre-training from scratch, continual pre-training, and supervised fine-tuning. The results validate our theoretical proof that token-level editing improves data quality and enhances model performance.
51
6764e1e0c51db09f8c3cd793
null
null
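Token-level editing, as proposed above, keeps human text mostly intact and resamples only positions where the model's distribution is over-concentrated; below is a hedged sketch of one such rule. The threshold and the resample-with-argmax choice are illustrative assumptions, not the paper's exact procedure.

```python
import torch

@torch.no_grad()
def token_edit(model, input_ids, threshold=0.99):
    """input_ids: [1, T] human-written token ids. Returns semi-synthetic ids
    where only over-concentrated positions are replaced."""
    logits = model(input_ids).logits[:, :-1]     # logits[t] predicts token t+1
    probs = logits.softmax(-1)
    conf, resample = probs.max(-1)               # model's top guess per position
    edited = input_ids.clone()
    over = conf[0] > threshold                   # over-concentrated positions
    edited[0, 1:][over] = resample[0][over]      # edit only those tokens
    return edited
```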
2024-12-19T22:20:23.883000
Affordance-Aware Object Insertion via Mask-Aware Dual Diffusion
https://cdn-thumbnails.h…s/2412.14462.png
2
{ "_id": "658bb7e47459b6e471b9d2e6", "avatarUrl": "/avatars/efd8051b468b4dbcb5d149479de67c58.svg", "followerCount": null, "fullname": "Wanhua Li", "isHf": false, "isMod": false, "isPro": false, "name": "EthanTaylor", "type": "user" }
true
[ "https://cdn-uploads.huggingface.co/production/uploads/658bb7e47459b6e471b9d2e6/BeBx0G4iyjtUIO5RHxGdL.qt" ]
2412.14462
[ { "_id": "6764e0339aeafaa0d8405e74", "hidden": false, "name": "Jixuan He", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:24:47.947Z", "user": { "_id": "64a5bda1733731f38378fdd3", "avatarUrl": "/avatars/3ec7c553f92ced4ae0420ce16f7e71b9.svg", "fullname": "Jixuan HE", "isPro": false, "type": "user", "user": "Kakituken" } }, { "_id": "6764e0339aeafaa0d8405e75", "hidden": false, "name": "Wanhua Li", "status": "claimed_verified", "statusLastChangedAt": "2024-12-20T08:32:44.737Z", "user": { "_id": "658bb7e47459b6e471b9d2e6", "avatarUrl": "/avatars/efd8051b468b4dbcb5d149479de67c58.svg", "fullname": "Wanhua Li", "isPro": false, "type": "user", "user": "EthanTaylor" } }, { "_id": "6764e0339aeafaa0d8405e76", "hidden": false, "name": "Ye Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6764e0339aeafaa0d8405e77", "hidden": false, "name": "Junsik Kim", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6764e0339aeafaa0d8405e78", "hidden": false, "name": "Donglai Wei", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:25:00.011Z", "user": { "_id": "628c2c8ab80bb09700d6cb1d", "avatarUrl": "/avatars/384f75778fd5f07f249f2815a3039dca.svg", "fullname": "donglai wei", "isPro": false, "type": "user", "user": "dwei" } }, { "_id": "6764e0339aeafaa0d8405e79", "hidden": false, "name": "Hanspeter Pfister", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:24:54.200Z", "user": { "_id": "62acc69e36f7c7b7f65fccca", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62acc69e36f7c7b7f65fccca/S8o0XE6TaQwLU8q3QPkct.png", "fullname": "Hanspeter Pfister", "isPro": false, "type": "user", "user": "hpfister" } } ]
2024-12-19T02:23:13
Affordance-Aware Object Insertion via Mask-Aware Dual Diffusion
As a common image editing operation, image composition involves integrating foreground objects into background scenes. In this paper, we expand the application of the concept of Affordance from human-centered image composition tasks to a more general object-scene composition framework, addressing the complex interplay between foreground objects and background scenes. Following the principle of Affordance, we define the affordance-aware object insertion task, which aims to seamlessly insert any object into any scene with various position prompts. To support this task and address the issue of limited data, we constructed the SAM-FB dataset, which contains over 3 million examples across more than 3,000 object categories. Furthermore, we propose the Mask-Aware Dual Diffusion (MADD) model, which utilizes a dual-stream architecture to simultaneously denoise the RGB image and the insertion mask. By explicitly modeling the insertion mask in the diffusion process, MADD effectively facilitates the notion of affordance. Extensive experimental results show that our method outperforms state-of-the-art methods and exhibits strong generalization performance on in-the-wild images. Please refer to our code at https://github.com/KaKituken/affordance-aware-any.
15
6764e0389aeafaa0d8405f9e
null
null
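The joint denoising in MADD can be caricatured by carrying the RGB image and the insertion mask through one diffusion step together; the sketch below concatenates them along channels purely for illustration, whereas the paper uses a genuine dual-stream architecture.

```python
import torch

def madd_step(denoiser, rgb_t, mask_t, t, obj_cond, pos_prompt):
    """rgb_t: [B, 3, H, W] noisy image; mask_t: [B, 1, H, W] noisy mask.
    Predicts and splits the noise for both targets in one call."""
    x = torch.cat([rgb_t, mask_t], dim=1)          # joint [B, 4, H, W] input
    eps = denoiser(x, t, obj_cond, pos_prompt)     # predicts noise for both
    return eps[:, :3], eps[:, 3:]                  # RGB noise, mask noise
```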
2024-12-19T22:04:22.958000
UIP2P: Unsupervised Instruction-based Image Editing via Cycle Edit Consistency
https://cdn-thumbnails.h…s/2412.15216.png
3
{ "_id": "63412f2add8853dc7e306a4f", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/tnPbjO1jAvycUkeooUxHD.png", "followerCount": 1, "fullname": "Enis Simsar", "isHf": false, "isMod": false, "isPro": true, "name": "enisimsar", "type": "user" }
true
null
2412.15216
[ { "_id": "6764deae8ae9bee011733928", "hidden": false, "name": "Enis Simsar", "status": "claimed_verified", "statusLastChangedAt": "2024-12-20T08:32:47.269Z", "user": { "_id": "63412f2add8853dc7e306a4f", "avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/tnPbjO1jAvycUkeooUxHD.png", "fullname": "Enis Simsar", "isPro": true, "type": "user", "user": "enisimsar" } }, { "_id": "6764deae8ae9bee011733929", "hidden": false, "name": "Alessio Tonioni", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:28:56.892Z", "user": { "_id": "63402b30e670ff9cf63d8caa", "avatarUrl": "/avatars/0aee84d132a78d4ec71663836a57a245.svg", "fullname": "Alessio Tonioni", "isPro": false, "type": "user", "user": "Alessiot" } }, { "_id": "6764deae8ae9bee01173392a", "hidden": false, "name": "Yongqin Xian", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6764deae8ae9bee01173392b", "hidden": false, "name": "Thomas Hofmann", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:28:42.685Z", "user": { "_id": "6630831ce888d89069e6276a", "avatarUrl": "/avatars/b40d00ac8978405dd2ae66166ac969ba.svg", "fullname": "Thomas Hofmann", "isPro": false, "type": "user", "user": "thofmann" } }, { "_id": "6764deae8ae9bee01173392c", "hidden": false, "name": "Federico Tombari", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-19T18:59:58
UIP2P: Unsupervised Instruction-based Image Editing via Cycle Edit Consistency
We propose an unsupervised model for instruction-based image editing that eliminates the need for ground-truth edited images during training. Existing supervised methods depend on datasets containing triplets of input image, edited image, and edit instruction. These are generated either by existing editing methods or by human annotation, which introduces biases and limits their generalization ability. Our method addresses these challenges by introducing a novel editing mechanism called Cycle Edit Consistency (CEC), which applies forward and backward edits in one training step and enforces consistency in image and attention spaces. This allows us to bypass the need for ground-truth edited images and unlock training, for the first time, on datasets comprising either real image-caption pairs or image-caption-edit triplets. We empirically show that our unsupervised technique performs better across a broader range of edits with high fidelity and precision. By eliminating the need for pre-existing datasets of triplets, reducing the biases associated with supervised methods, and proposing CEC, our work represents a significant advancement in unlocking the scaling of instruction-based image editing.
5
6764deaf8ae9bee0117339a6
null
null
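Cycle Edit Consistency applies a forward and a backward edit in one training step and penalizes deviation from the input; a minimal sketch of the image-space term follows. The `edit_model` interface is assumed, and the paper's attention-space consistency term is omitted.

```python
import torch.nn.functional as F

def cec_loss(edit_model, image, instruction, reverse_instruction):
    """Forward edit followed by its reverse in one step; the reconstruction
    should land back on the input image."""
    edited = edit_model(image, instruction)              # forward edit
    recon = edit_model(edited, reverse_instruction)      # backward edit
    return F.mse_loss(recon, image)                      # image-space CEC
```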
2024-12-19T22:03:19.323000
TOMG-Bench: Evaluating LLMs on Text-based Open Molecule Generation
https://cdn-thumbnails.h…s/2412.14642.png
2
{ "_id": "6438e55cb2ea24b52ebc45ec", "avatarUrl": "/avatars/e13c7398f77e7e0bd5eed03102aa5c36.svg", "followerCount": 3, "fullname": "Jiatong LI", "isHf": false, "isMod": false, "isPro": false, "name": "phenixace", "type": "user" }
true
null
2412.14642
[ { "_id": "6764de0ca246952fabef4309", "hidden": false, "name": "Jiatong Li", "status": "claimed_verified", "statusLastChangedAt": "2024-12-30T19:36:31.168Z", "user": { "_id": "6438e55cb2ea24b52ebc45ec", "avatarUrl": "/avatars/e13c7398f77e7e0bd5eed03102aa5c36.svg", "fullname": "Jiatong LI", "isPro": false, "type": "user", "user": "phenixace" } }, { "_id": "6764de0ca246952fabef430a", "hidden": false, "name": "Junxian Li", "status": "claimed_verified", "statusLastChangedAt": "2024-12-20T08:32:48.973Z", "user": { "_id": "656ae4088fb1ddf0d5ec9ac5", "avatarUrl": "/avatars/e38468d2c0274f3c0f5732f30a2e3436.svg", "fullname": "Junxian Li", "isPro": false, "type": "user", "user": "Duke-de-Artois" } }, { "_id": "6764de0ca246952fabef430b", "hidden": false, "name": "Yunqing Liu", "status": null, "statusLastChangedAt": null, "user": null }, { "_id": "6764de0ca246952fabef430c", "hidden": false, "name": "Dongzhan Zhou", "status": "admin_assigned", "statusLastChangedAt": "2024-12-20T09:30:10.820Z", "user": { "_id": "6538b861613fe158bd581e35", "avatarUrl": "/avatars/6817dbfe903675721fd227058b0a91ac.svg", "fullname": "Dongzhan Zhou", "isPro": false, "type": "user", "user": "schrodingers-tiger" } }, { "_id": "6764de0ca246952fabef430d", "hidden": false, "name": "Qing Li", "status": null, "statusLastChangedAt": null, "user": null } ]
2024-12-19T08:51:16
TOMG-Bench: Evaluating LLMs on Text-based Open Molecule Generation
In this paper, we propose the Text-based Open Molecule Generation Benchmark (TOMG-Bench), the first benchmark to evaluate the open-domain molecule generation capability of LLMs. TOMG-Bench encompasses a dataset of three major tasks: molecule editing (MolEdit), molecule optimization (MolOpt), and customized molecule generation (MolCustom). Each task further contains three subtasks, with each subtask comprising 5,000 test samples. Given the inherent complexity of open molecule generation, we have also developed an automated evaluation system that helps measure both the quality and the accuracy of the generated molecules. Our comprehensive benchmarking of 25 LLMs reveals the current limitations and potential areas for improvement in text-guided molecule discovery. Furthermore, with the assistance of OpenMolIns, a specialized instruction tuning dataset proposed for solving the challenges raised by TOMG-Bench, Llama3.1-8B outperforms all open-source general LLMs, even surpassing GPT-3.5-turbo by 46.5% on TOMG-Bench. Our code and datasets are available at https://github.com/phenixace/TOMG-Bench.
4
6764de0da246952fabef4389
null
null
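A tiny example of the kind of automated check an evaluation like TOMG-Bench's relies on: validating a generated SMILES string with RDKit. The benchmark's full quality and accuracy metrics go well beyond this.

```python
from rdkit import Chem

def is_valid_smiles(smiles: str) -> bool:
    """A generated molecule counts only if RDKit can parse and sanitize it."""
    return Chem.MolFromSmiles(smiles) is not None

print(is_valid_smiles("CCO"))             # True: ethanol
print(is_valid_smiles("C(C)(C)(C)(C)C"))  # False: pentavalent carbon
```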