publishedAt
timestamp[ns] | title
string | thumbnail
string | numComments
int64 | submittedBy
dict | isAuthorParticipating
bool | mediaUrls
list | paper_id
string | paper_authors
list | paper_publishedAt
timestamp[ns] | paper_title
string | paper_summary
string | paper_upvotes
int64 | paper_discussionId
string | paper_projectPage
string | paper_githubRepo
string |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2025-02-04T23:04:25.888000 |
Can LLMs Maintain Fundamental Abilities under KV Cache Compression?
| 2 |
{
"_id": "63024676056ec3a2a8714b24",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1661093436322-noauth.jpeg",
"followerCount": 5,
"fullname": "Xiang Liu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Dominic789654",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/63024676056ec3a2a8714b24/XcgjmhpXd3dH6LnFZGupJ.png",
"https://cdn-uploads.huggingface.co/production/uploads/63024676056ec3a2a8714b24/hxWz1iVOUcE76E_K5z-B0.png"
] |
2502.01941
|
[
{
"_id": "67a2e2a02dd2adbc88755a47",
"hidden": false,
"name": "Xiang Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-05T10:12:48.427Z",
"user": {
"_id": "63024676056ec3a2a8714b24",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1661093436322-noauth.jpeg",
"fullname": "Xiang Liu",
"isPro": false,
"type": "user",
"user": "Dominic789654"
}
},
{
"_id": "67a2e2a02dd2adbc88755a48",
"hidden": false,
"name": "Zhenheng Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2e2a02dd2adbc88755a49",
"hidden": false,
"name": "Hong Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2e2a02dd2adbc88755a4a",
"hidden": false,
"name": "Peijie Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2e2a02dd2adbc88755a4b",
"hidden": false,
"name": "Zeyu Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2e2a02dd2adbc88755a4c",
"hidden": false,
"name": "Xiuze Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2e2a02dd2adbc88755a4d",
"hidden": false,
"name": "Bo Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2e2a02dd2adbc88755a4e",
"hidden": false,
"name": "Xuming Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2e2a02dd2adbc88755a4f",
"hidden": false,
"name": "Xiaowen Chu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-04T02:23:06 |
Can LLMs Maintain Fundamental Abilities under KV Cache Compression?
|
This paper investigates an under-explored challenge in large language models
(LLMs): the impact of KV cache compression methods on LLMs' fundamental
capabilities. While existing methods achieve impressive compression ratios on
long-context benchmarks, their effects on core model capabilities remain
understudied. We present a comprehensive empirical study evaluating prominent
KV cache compression methods across diverse tasks, spanning world knowledge,
commonsense reasoning, arithmetic reasoning, code generation, safety, and
long-context understanding and generation.Our analysis reveals that KV cache
compression methods exhibit task-specific performance degradation. Arithmetic
reasoning tasks prove particularly sensitive to aggressive compression, with
different methods showing performance drops of 17.4%-43.3%. Notably, the
DeepSeek R1 Distill model exhibits more robust compression tolerance compared
to instruction-tuned models, showing only 9.67%-25.53% performance
degradation. Based on our analysis of attention patterns and cross-task
compression performance, we propose ShotKV, a novel compression approach that
distinctly handles prefill and decoding phases while maintaining shot-level
semantic coherence. Empirical results show that ShotKV achieves 9%-18%
performance improvements on long-context generation tasks under aggressive
compression ratios.
| 15 |
67a2e2a22dd2adbc88755ab4
| null | null |
|
2025-02-04T22:23:07.858000 |
ACECODER: Acing Coder RL via Automated Test-Case Synthesis
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2502.01718
|
[
{
"_id": "67a2d995c97974764a8c294c",
"hidden": false,
"name": "Huaye Zeng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2d995c97974764a8c294d",
"hidden": false,
"name": "Dongfu Jiang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-05T10:14:14.136Z",
"user": {
"_id": "62567c86d444a9b5a0ec51c1",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62567c86d444a9b5a0ec51c1/1vXJf2uGztPcXpkwyTBr6.png",
"fullname": "Dongfu Jiang",
"isPro": false,
"type": "user",
"user": "DongfuJiang"
}
},
{
"_id": "67a2d995c97974764a8c294e",
"hidden": false,
"name": "Haozhe Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2d995c97974764a8c294f",
"hidden": false,
"name": "Ping Nie",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-05T10:18:51.897Z",
"user": {
"_id": "65358802a920f38780b3248a",
"avatarUrl": "/avatars/9415510b598079973c2b0436ad12db9c.svg",
"fullname": "Ping Nie",
"isPro": false,
"type": "user",
"user": "pingnieuk"
}
},
{
"_id": "67a2d995c97974764a8c2950",
"hidden": false,
"name": "Xiaotong Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2d995c97974764a8c2951",
"hidden": false,
"name": "Wenhu Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-03T18:46:04 |
ACECODER: Acing Coder RL via Automated Test-Case Synthesis
|
Most progress in recent coder models has been driven by supervised
fine-tuning (SFT), while the potential of reinforcement learning (RL) remains
largely unexplored, primarily due to the lack of reliable reward data/model in
the code domain. In this paper, we address this challenge by leveraging
automated large-scale test-case synthesis to enhance code model training.
Specifically, we design a pipeline that generates extensive (question,
test-cases) pairs from existing code data. Using these test cases, we construct
preference pairs based on pass rates over sampled programs to train reward
models with Bradley-Terry loss. It shows an average of 10-point improvement for
Llama-3.1-8B-Ins and 5-point improvement for Qwen2.5-Coder-7B-Ins through
best-of-32 sampling, making the 7B model on par with 236B DeepSeek-V2.5.
Furthermore, we conduct reinforcement learning with both reward models and
test-case pass rewards, leading to consistent improvements across HumanEval,
MBPP, BigCodeBench, and LiveCodeBench (V4). Notably, we follow the R1-style
training to start from Qwen2.5-Coder-base directly and show that our RL
training can improve model on HumanEval-plus by over 25\% and MBPP-plus by 6\%
for merely 80 optimization steps. We believe our results highlight the huge
potential of reinforcement learning in coder models.
| 29 |
67a2d996c97974764a8c29a1
| null | null |
|
2025-02-04T22:08:25.652000 |
QLASS: Boosting Language Agent Inference via Q-Guided Stepwise Search
| 2 |
{
"_id": "634e4670a51d5df8c2d92fce",
"avatarUrl": "/avatars/c52d7150b4de6a2eb2d83b345d35cbc2.svg",
"followerCount": 1,
"fullname": "Da Yin",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "DaYin",
"type": "user"
}
| false | null |
2502.02584
|
[
{
"_id": "67a2d59fd5ad3369a66ff394",
"hidden": false,
"name": "Zongyu Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2d59fd5ad3369a66ff395",
"hidden": false,
"name": "Yao Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2d59fd5ad3369a66ff396",
"hidden": false,
"name": "Xingcheng Yao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2d59fd5ad3369a66ff397",
"hidden": false,
"name": "Da Yin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2d59fd5ad3369a66ff398",
"hidden": false,
"name": "Ziniu Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2d59fd5ad3369a66ff399",
"hidden": false,
"name": "Yizhou Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2d59fd5ad3369a66ff39a",
"hidden": false,
"name": "Kai-Wei Chang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-04T18:58:31 |
QLASS: Boosting Language Agent Inference via Q-Guided Stepwise Search
|
Language agents have become a promising solution to complex interactive
tasks. One of the key ingredients to the success of language agents is the
reward model on the trajectory of the agentic workflow, which provides valuable
guidance during training or inference. However, due to the lack of annotations
of intermediate interactions, most existing works use an outcome reward model
to optimize policies across entire trajectories. This may lead to sub-optimal
policies and hinder the overall performance. To address this, we propose QLASS
(Q-guided Language Agent Stepwise Search), to automatically generate
annotations by estimating Q-values in a stepwise manner for open language
agents. By introducing a reasoning tree and performing process reward modeling,
QLASS provides effective intermediate guidance for each step. With the stepwise
guidance, we propose a Q-guided generation strategy to enable language agents
to better adapt to long-term value, resulting in significant performance
improvement during model inference on complex interactive agent tasks. Notably,
even with almost half the annotated data, QLASS retains strong performance,
demonstrating its efficiency in handling limited supervision. We also
empirically demonstrate that QLASS can lead to more effective decision making
through qualitative analysis. We will release our code and data.
| 17 |
67a2d5a0d5ad3369a66ff3d4
| null | null |
|
2025-02-04T21:55:09.693000 |
Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search
| 2 |
{
"_id": "60ad0de755f970745d4ec28d",
"avatarUrl": "/avatars/b0de0222b8ed5fdac8dc7cb0336d2ec7.svg",
"followerCount": 11,
"fullname": "GtZeng",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "chaoscodes",
"type": "user"
}
| false | null |
2502.02508
|
[
{
"_id": "67a2d1f9bc9d072d9459e857",
"hidden": false,
"name": "Maohao Shen",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-05T03:00:33.470Z",
"user": {
"_id": "6553c985a7aded0380b5f928",
"avatarUrl": "/avatars/36109d6f536d2b34d98822b88eac9608.svg",
"fullname": "Maohao Shen",
"isPro": false,
"type": "user",
"user": "maohaos2"
}
},
{
"_id": "67a2d1f9bc9d072d9459e858",
"hidden": false,
"name": "Guangtao Zeng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2d1f9bc9d072d9459e859",
"hidden": false,
"name": "Zhenting Qi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2d1f9bc9d072d9459e85a",
"hidden": false,
"name": "Zhang-Wei Hong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2d1f9bc9d072d9459e85b",
"hidden": false,
"name": "Zhenfang Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2d1f9bc9d072d9459e85c",
"hidden": false,
"name": "Wei Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2d1f9bc9d072d9459e85d",
"hidden": false,
"name": "Gregory Wornell",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2d1f9bc9d072d9459e85e",
"hidden": false,
"name": "Subhro Das",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2d1f9bc9d072d9459e85f",
"hidden": false,
"name": "David Cox",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2d1f9bc9d072d9459e860",
"hidden": false,
"name": "Chuang Gan",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-04T17:26:58 |
Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM
Reasoning via Autoregressive Search
|
Large language models (LLMs) have demonstrated remarkable reasoning
capabilities across diverse domains. Recent studies have shown that increasing
test-time computation enhances LLMs' reasoning capabilities. This typically
involves extensive sampling at inference time guided by an external LLM
verifier, resulting in a two-player system. Despite external guidance, the
effectiveness of this system demonstrates the potential of a single LLM to
tackle complex tasks. Thus, we pose a new research problem: Can we internalize
the searching capabilities to fundamentally enhance the reasoning abilities of
a single LLM? This work explores an orthogonal direction focusing on
post-training LLMs for autoregressive searching (i.e., an extended reasoning
process with self-reflection and self-exploration of new strategies). To
achieve this, we propose the Chain-of-Action-Thought (COAT) reasoning and a
two-stage training paradigm: 1) a small-scale format tuning stage to
internalize the COAT reasoning format and 2) a large-scale self-improvement
stage leveraging reinforcement learning. Our approach results in Satori, a 7B
LLM trained on open-source models and data. Extensive empirical evaluations
demonstrate that Satori achieves state-of-the-art performance on mathematical
reasoning benchmarks while exhibits strong generalization to out-of-domain
tasks. Code, data, and models will be fully open-sourced.
| 23 |
67a2d1fcbc9d072d9459e91b
| null | null |
|
2025-02-04T21:09:41.016000 |
LongDPO: Unlock Better Long-form Generation Abilities for LLMs via Critique-augmented Stepwise Information
| 2 |
{
"_id": "64d99f6cd7e30889c6c477b4",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64d99f6cd7e30889c6c477b4/CEIWG22tJqAX3ItFlvV7W.jpeg",
"followerCount": 1,
"fullname": "Ping",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Bowen232",
"type": "user"
}
| false | null |
2502.02095
|
[
{
"_id": "67a2c810ca39d45e49b9a07d",
"hidden": false,
"name": "Bowen Ping",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2c810ca39d45e49b9a07e",
"hidden": false,
"name": "Jiali Zeng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2c810ca39d45e49b9a07f",
"hidden": false,
"name": "Fandong Meng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2c810ca39d45e49b9a080",
"hidden": false,
"name": "Shuo Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2c810ca39d45e49b9a081",
"hidden": false,
"name": "Jie Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a2c810ca39d45e49b9a082",
"hidden": false,
"name": "Shanghang Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-04T08:25:17 |
LongDPO: Unlock Better Long-form Generation Abilities for LLMs via
Critique-augmented Stepwise Information
|
Long-form generation is crucial for academic writing papers and repo-level
code generation. Despite this, current models, including GPT-4o, still exhibit
unsatisfactory performance. Existing methods that utilize preference learning
with outcome supervision often fail to provide detailed feedback for extended
contexts. This shortcoming can lead to content that does not fully satisfy
query requirements, resulting in issues like length deviations, and diminished
quality. In this paper, we propose enhancing long-form generation by
incorporating process supervision. We employ Monte Carlo Tree Search to gather
stepwise preference pairs, utilizing a global memory pool to maintain
consistency. To address the issue of suboptimal candidate selection, we
integrate external critiques to refine and improve the quality of the
preference pairs. Finally, we apply step-level DPO using the collected stepwise
preference pairs. Experimental results show that our method improves length and
quality on long-form generation benchmarks, with almost lossless performance on
general benchmarks across various model backbones.
| 4 |
67a2c811ca39d45e49b9a0a3
| null | null |
|
2025-02-04T20:45:38.696000 |
MakeAnything: Harnessing Diffusion Transformers for Multi-Domain Procedural Sequence Generation
| 2 |
{
"_id": "64311a95034ecbefddd141ef",
"avatarUrl": "/avatars/b6dc5ca373bedbaa368208517954c375.svg",
"followerCount": 4,
"fullname": "Yiren Song",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "yiren98",
"type": "user"
}
| true | null |
2502.01572
|
[
{
"_id": "67a1dc124fc394b2aa6338d5",
"hidden": false,
"name": "Yiren Song",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T09:38:41.538Z",
"user": {
"_id": "64311a95034ecbefddd141ef",
"avatarUrl": "/avatars/b6dc5ca373bedbaa368208517954c375.svg",
"fullname": "Yiren Song",
"isPro": true,
"type": "user",
"user": "yiren98"
}
},
{
"_id": "67a1dc124fc394b2aa6338d6",
"hidden": false,
"name": "Cheng Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T09:58:24.249Z",
"user": {
"_id": "6534924de778506c5b1c614c",
"avatarUrl": "/avatars/55417e6e8a561df06836a4ad0912080e.svg",
"fullname": "Cheng Liu",
"isPro": false,
"type": "user",
"user": "lc03lc"
}
},
{
"_id": "67a1dc124fc394b2aa6338d7",
"hidden": false,
"name": "Mike Zheng Shou",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-04T09:21:27.413Z",
"user": {
"_id": "63a55320ce5763e06f78519c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1671779060549-noauth.jpeg",
"fullname": "Mike Shou",
"isPro": false,
"type": "user",
"user": "mikeshou"
}
}
] | 2025-02-03T17:55:30 |
MakeAnything: Harnessing Diffusion Transformers for Multi-Domain
Procedural Sequence Generation
|
A hallmark of human intelligence is the ability to create complex artifacts
through structured multi-step processes. Generating procedural tutorials with
AI is a longstanding but challenging goal, facing three key obstacles: (1)
scarcity of multi-task procedural datasets, (2) maintaining logical continuity
and visual consistency between steps, and (3) generalizing across multiple
domains. To address these challenges, we propose a multi-domain dataset
covering 21 tasks with over 24,000 procedural sequences. Building upon this
foundation, we introduce MakeAnything, a framework based on the diffusion
transformer (DIT), which leverages fine-tuning to activate the in-context
capabilities of DIT for generating consistent procedural sequences. We
introduce asymmetric low-rank adaptation (LoRA) for image generation, which
balances generalization capabilities and task-specific performance by freezing
encoder parameters while adaptively tuning decoder layers. Additionally, our
ReCraft model enables image-to-process generation through spatiotemporal
consistency constraints, allowing static images to be decomposed into plausible
creation sequences. Extensive experiments demonstrate that MakeAnything
surpasses existing methods, setting new performance benchmarks for procedural
generation tasks.
| 20 |
67a1dc174fc394b2aa6339f7
| null | null |
|
2025-02-04T20:34:37.638000 |
RandLoRA: Full-rank parameter-efficient fine-tuning of large models
| 3 |
{
"_id": "65f26ed18404ac0e4cfe7d83",
"avatarUrl": "/avatars/9c72744836f86a8e355a45700b10e393.svg",
"followerCount": 1,
"fullname": "Paul Albert",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "PAlbert31",
"type": "user"
}
| false |
[
"https://cdn-uploads.huggingface.co/production/uploads/65f26ed18404ac0e4cfe7d83/E72WPhEmfC8brtLv3Ez5X.png"
] |
2502.00987
|
[
{
"_id": "67a1a4278b6584b24ff98eaf",
"hidden": false,
"name": "Paul Albert",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1a4278b6584b24ff98eb0",
"hidden": false,
"name": "Frederic Z. Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-05T10:15:29.195Z",
"user": {
"_id": "668ca1b3ed63008dfa687990",
"avatarUrl": "/avatars/b3ae0e7ff60c60db605288c9cf5ae6f3.svg",
"fullname": "Fred Zhang",
"isPro": false,
"type": "user",
"user": "fredzzhang"
}
},
{
"_id": "67a1a4278b6584b24ff98eb1",
"hidden": false,
"name": "Hemanth Saratchandran",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1a4278b6584b24ff98eb2",
"hidden": false,
"name": "Cristian Rodriguez-Opazo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1a4278b6584b24ff98eb3",
"hidden": false,
"name": "Anton van den Hengel",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1a4278b6584b24ff98eb4",
"hidden": false,
"name": "Ehsan Abbasnejad",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-03T01:59:45 |
RandLoRA: Full-rank parameter-efficient fine-tuning of large models
|
Low-Rank Adaptation (LoRA) and its variants have shown impressive results in
reducing the number of trainable parameters and memory requirements of large
transformer networks while maintaining fine-tuning performance. However, the
low-rank nature of the weight update inherently limits the representation power
of fine-tuned models, potentially compromising performance on complex tasks.
This raises a critical question: when a performance gap between LoRA and
standard fine-tuning is observed, is it due to the reduced number of trainable
parameters or the rank deficiency? This paper aims to answer this question by
introducing RandLoRA, a parameter-efficient method that performs full-rank
updates using a learned linear combinations of low-rank, non-trainable random
matrices. Our method limits the number of trainable parameters by restricting
optimization to diagonal scaling matrices applied to the fixed random matrices.
This allows us to effectively overcome the low-rank limitations while
maintaining parameter and memory efficiency during training. Through extensive
experimentation across vision, language, and vision-language benchmarks, we
systematically evaluate the limitations of LoRA and existing random basis
methods. Our findings reveal that full-rank updates are beneficial across
vision and language tasks individually, and even more so for vision-language
tasks, where RandLoRA significantly reduces -- and sometimes eliminates -- the
performance gap between standard fine-tuning and LoRA, demonstrating its
efficacy.
| 9 |
67a1a4288b6584b24ff98ee8
| null | null |
|
2025-02-04T16:10:18.388000 |
Learning to Generate Unit Tests for Automated Debugging
| 2 |
{
"_id": "607aeae5d2cd8c150e6ae074",
"avatarUrl": "/avatars/a087743b98b6fe2181283a9610db4ec4.svg",
"followerCount": null,
"fullname": "Archiki Prasad",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "archiki",
"type": "user"
}
| true | null |
2502.01619
|
[
{
"_id": "67a280644fdf4d9187507d74",
"hidden": false,
"name": "Archiki Prasad",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:11:22.710Z",
"user": {
"_id": "607aeae5d2cd8c150e6ae074",
"avatarUrl": "/avatars/a087743b98b6fe2181283a9610db4ec4.svg",
"fullname": "Archiki Prasad",
"isPro": false,
"type": "user",
"user": "archiki"
}
},
{
"_id": "67a280644fdf4d9187507d75",
"hidden": false,
"name": "Elias Stengel-Eskin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:11:36.824Z",
"user": {
"_id": "61781c4caf41befe8ff060e8",
"avatarUrl": "/avatars/8871d7b046fc28cbc8638228da8e9737.svg",
"fullname": "Elias Stengel-Eskin",
"isPro": false,
"type": "user",
"user": "esteng"
}
},
{
"_id": "67a280644fdf4d9187507d76",
"hidden": false,
"name": "Justin Chih-Yao Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a280644fdf4d9187507d77",
"hidden": false,
"name": "Zaid Khan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:39:28.111Z",
"user": {
"_id": "6301c3e0a123c93a5fb295ff",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1661060051926-noauth.jpeg",
"fullname": "Zaid Khan",
"isPro": false,
"type": "user",
"user": "codezakh"
}
},
{
"_id": "67a280644fdf4d9187507d78",
"hidden": false,
"name": "Mohit Bansal",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:12:17.347Z",
"user": {
"_id": "665d9d3a057f7c508f98c625",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/665d9d3a057f7c508f98c625/u1R9P9sJoAl4zEIcetbPy.jpeg",
"fullname": "Mohit Bansal",
"isPro": false,
"type": "user",
"user": "mohitbansal"
}
}
] | 2025-02-03T18:51:43 |
Learning to Generate Unit Tests for Automated Debugging
|
Unit tests (UTs) play an instrumental role in assessing code correctness as
well as providing feedback to a large language model (LLM) as it iteratively
debugs faulty code, motivating automated test generation. However, we uncover a
trade-off between generating unit test inputs that reveal errors when given a
faulty code and correctly predicting the unit test output without access to the
gold solution. To address this trade-off, we propose UTGen, which teaches LLMs
to generate unit test inputs that reveal errors along with their correct
expected outputs based on task descriptions and candidate code. We integrate
UTGen into UTDebug, a robust debugging pipeline that uses generated tests to
help LLMs debug effectively. Since model-generated tests can provide noisy
signals (e.g., from incorrectly predicted outputs), UTDebug (i) scales UTGen
via test-time compute to improve UT output prediction, and (ii) validates and
back-tracks edits based on multiple generated UTs to avoid overfitting. We show
that UTGen outperforms UT generation baselines by 7.59% based on a metric
measuring the presence of both error-revealing UT inputs and correct UT
outputs. When used with UTDebug, we find that feedback from UTGen's unit tests
improves pass@1 accuracy of Qwen-2.5 7B on HumanEvalFix and our own harder
debugging split of MBPP+ by over 3% and 12.35% (respectively) over other
LLM-based UT generation baselines.
| 4 |
67a280654fdf4d9187507dd2
| null | null |
|
2025-02-04T15:56:40.599000 |
Language Models Prefer What They Know: Relative Confidence Estimation via Confidence Preferences
| 2 |
{
"_id": "63b75a016fc56e43c3c15980",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1672960383634-noauth.jpeg",
"followerCount": 1,
"fullname": "Vaishnavi Shrivastava",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "vshrivas",
"type": "user"
}
| true | null |
2502.01126
|
[
{
"_id": "67a27eb5d6a1524a1a6f048c",
"hidden": false,
"name": "Vaishnavi Shrivastava",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:14:18.633Z",
"user": {
"_id": "63b75a016fc56e43c3c15980",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1672960383634-noauth.jpeg",
"fullname": "Vaishnavi Shrivastava",
"isPro": false,
"type": "user",
"user": "vshrivas"
}
},
{
"_id": "67a27eb5d6a1524a1a6f048d",
"hidden": false,
"name": "Ananya Kumar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a27eb5d6a1524a1a6f048e",
"hidden": false,
"name": "Percy Liang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:14:47.149Z",
"user": {
"_id": "6409651b9e9f790c905b2335",
"avatarUrl": "/avatars/1fb8c80b60f21f65a0a027319101f236.svg",
"fullname": "Percy Liang",
"isPro": false,
"type": "user",
"user": "percyliang"
}
}
] | 2025-02-03T07:43:27 |
Language Models Prefer What They Know: Relative Confidence Estimation
via Confidence Preferences
|
Language models (LMs) should provide reliable confidence estimates to help
users detect mistakes in their outputs and defer to human experts when
necessary. Asking a language model to assess its confidence ("Score your
confidence from 0-1.") is a natural way of evaluating its uncertainty. However,
models struggle to provide absolute assessments of confidence (i.e. judging
confidence in answering a question independent of other questions) and the
coarse-grained scores they produce are not useful for evaluating the
correctness of their answers. We propose relative confidence estimation, where
we match up questions against each other and ask the model to make relative
judgments of confidence ("Which question are you more confident in answering
correctly?"). Treating each question as a "player" in a series of matchups
against other questions and the model's preferences as match outcomes, we can
use rank aggregation methods like Elo rating and Bradley-Terry to translate the
model's confidence preferences into confidence scores. We evaluate relative
confidence estimation against absolute confidence estimation and
self-consistency confidence methods on five state-of-the-art LMs -- GPT-4,
GPT-4o, Gemini 1.5 Pro, Claude 3.5 Sonnet, and Llama 3.1 405B -- across 14
challenging STEM, social science, and commonsense reasoning question answering
tasks. Our results demonstrate that relative confidence estimation consistently
provides more reliable confidence scores than absolute confidence estimation,
with average gains of 3.5% in selective classification AUC over direct absolute
confidence estimation methods and 1.7% over self-consistency approaches across
all models and datasets.
| 4 |
67a27eb9d6a1524a1a6f0591
| null | null |
|
2025-02-04T10:51:54.103000 |
AlignVLM: Bridging Vision and Language Latent Spaces for Multimodal Understanding
| 2 |
{
"_id": "63efd75a5c2ceb16fc6e98fc",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63efd75a5c2ceb16fc6e98fc/qoA4LKuLTEr7hx90i90UK.jpeg",
"followerCount": 65,
"fullname": "Ahmed Masry",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "ahmed-masry",
"type": "user"
}
| true | null |
2502.01341
|
[
{
"_id": "67a236ba5f63ce00e8402d56",
"hidden": false,
"name": "Ahmed Masry",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T16:54:31.623Z",
"user": {
"_id": "63efd75a5c2ceb16fc6e98fc",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63efd75a5c2ceb16fc6e98fc/qoA4LKuLTEr7hx90i90UK.jpeg",
"fullname": "Ahmed Masry",
"isPro": true,
"type": "user",
"user": "ahmed-masry"
}
},
{
"_id": "67a236ba5f63ce00e8402d57",
"hidden": false,
"name": "Juan A. Rodriguez",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:15:38.988Z",
"user": {
"_id": "63507c18aef7e7f6cf476017",
"avatarUrl": "/avatars/183a74624b9daec613a57d405fa577bf.svg",
"fullname": "Juan A. Rodriguez",
"isPro": false,
"type": "user",
"user": "joanrod"
}
},
{
"_id": "67a236ba5f63ce00e8402d58",
"hidden": false,
"name": "Tianyu Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:15:45.137Z",
"user": {
"_id": "6452d79149b6b9a2383b5775",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/T28lP0kE7PZIGzJjhSpSx.jpeg",
"fullname": "Tianyu Zhang",
"isPro": false,
"type": "user",
"user": "TianyuZhang"
}
},
{
"_id": "67a236ba5f63ce00e8402d59",
"hidden": false,
"name": "Suyuchen Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T16:54:29.585Z",
"user": {
"_id": "62bb1e0f3ff437e49a3088e5",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62bb1e0f3ff437e49a3088e5/bcUQmH8tKfI6DIWH9IcYp.jpeg",
"fullname": "Suyuchen Wang",
"isPro": false,
"type": "user",
"user": "sheryc"
}
},
{
"_id": "67a236ba5f63ce00e8402d5a",
"hidden": false,
"name": "Chao Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T09:58:22.513Z",
"user": {
"_id": "65826e30d73d6402f7ac515e",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/65826e30d73d6402f7ac515e/NjUQbyfMCWNjb5tVXTxKk.jpeg",
"fullname": "Chao Wang",
"isPro": false,
"type": "user",
"user": "erikchwang"
}
},
{
"_id": "67a236ba5f63ce00e8402d5b",
"hidden": false,
"name": "Aarash Feizi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:33:25.590Z",
"user": {
"_id": "6752203d99b478caa1e85a79",
"avatarUrl": "/avatars/29284c6cb11d45a640bf3871954007ed.svg",
"fullname": "Aarash Feizi",
"isPro": false,
"type": "user",
"user": "feiziaarash"
}
},
{
"_id": "67a236ba5f63ce00e8402d5c",
"hidden": false,
"name": "Akshay Kalkunte Suresh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a236ba5f63ce00e8402d5d",
"hidden": false,
"name": "Abhay Puri",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:33:52.015Z",
"user": {
"_id": "65830af2a1707aa10effcc32",
"avatarUrl": "/avatars/0626454399b711fca7fb2b66fcecaca8.svg",
"fullname": "Abhay Puri",
"isPro": false,
"type": "user",
"user": "abhaypuri"
}
},
{
"_id": "67a236ba5f63ce00e8402d5e",
"hidden": false,
"name": "Xiangru Jian",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:33:57.470Z",
"user": {
"_id": "66155491f1214e73d69074b5",
"avatarUrl": "/avatars/00572ab695a4188422e8ee38fc87680b.svg",
"fullname": "Xiangru Jian",
"isPro": false,
"type": "user",
"user": "EdwardXJ"
}
},
{
"_id": "67a236ba5f63ce00e8402d5f",
"hidden": false,
"name": "Pierre-André Noël",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:34:03.702Z",
"user": {
"_id": "646cc16b94eb019a96e1fb2e",
"avatarUrl": "/avatars/31f168a6c1ec45eb0c784d9119c1b9bf.svg",
"fullname": "Pierre-Andre Noel",
"isPro": false,
"type": "user",
"user": "PierreAndreNoel"
}
},
{
"_id": "67a236ba5f63ce00e8402d60",
"hidden": false,
"name": "Sathwik Tejaswi Madhusudhan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:34:10.257Z",
"user": {
"_id": "63d3095c2727d7888cbb54e2",
"avatarUrl": "/avatars/51fd37f4216eec309cd439e56626d6ad.svg",
"fullname": "Sathwik Tejaswi Madhusudhan",
"isPro": false,
"type": "user",
"user": "stm4"
}
},
{
"_id": "67a236ba5f63ce00e8402d61",
"hidden": false,
"name": "Marco Pedersoli",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:34:16.472Z",
"user": {
"_id": "64b829042fccad9f5ff20cc7",
"avatarUrl": "/avatars/92cca111ddd0db5d998615c2257a0894.svg",
"fullname": "Marco Pedersoli",
"isPro": false,
"type": "user",
"user": "Marcopede"
}
},
{
"_id": "67a236ba5f63ce00e8402d62",
"hidden": false,
"name": "Bang Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T20:36:11.104Z",
"user": {
"_id": "654a97282d2fcd6bf2851173",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/654a97282d2fcd6bf2851173/9zXf940gr4WNt4e-oOt4k.png",
"fullname": "Bang Liu",
"isPro": false,
"type": "user",
"user": "Bang-UdeM-Mila"
}
},
{
"_id": "67a236ba5f63ce00e8402d63",
"hidden": false,
"name": "Nicolas Chapados",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:34:23.259Z",
"user": {
"_id": "631f54aa5ba8c026340b13cf",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/631f54aa5ba8c026340b13cf/2jI0VUDG5cKkdf2C5KJuy.png",
"fullname": "Nicolas Chapados",
"isPro": false,
"type": "user",
"user": "nicolaschapados"
}
},
{
"_id": "67a236ba5f63ce00e8402d64",
"hidden": false,
"name": "Yoshua Bengio",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a236ba5f63ce00e8402d65",
"hidden": false,
"name": "Enamul Hoque",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:34:32.793Z",
"user": {
"_id": "6706c57876c98ec236f2f090",
"avatarUrl": "/avatars/d45543b65b70f03a71dcd378a6ce931b.svg",
"fullname": "Enamul Hoque",
"isPro": false,
"type": "user",
"user": "enamulhoque1"
}
},
{
"_id": "67a236ba5f63ce00e8402d66",
"hidden": false,
"name": "Christopher Pal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a236ba5f63ce00e8402d67",
"hidden": false,
"name": "Issam H. Laradji",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:34:48.589Z",
"user": {
"_id": "64062855692855e65ae31688",
"avatarUrl": "/avatars/e35d22a037b8b35422d3ee982f133076.svg",
"fullname": "Issam Laradji",
"isPro": false,
"type": "user",
"user": "issamlaradji"
}
},
{
"_id": "67a236ba5f63ce00e8402d68",
"hidden": false,
"name": "David Vazquez",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:35:01.844Z",
"user": {
"_id": "646edffecb6ea6e6b6e1be4c",
"avatarUrl": "/avatars/8e1b0312c935ff1338c9fb74046fce02.svg",
"fullname": "David Vazquez",
"isPro": false,
"type": "user",
"user": "DavidVazquez"
}
},
{
"_id": "67a236ba5f63ce00e8402d69",
"hidden": false,
"name": "Perouz Taslakian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a236ba5f63ce00e8402d6a",
"hidden": false,
"name": "Spandana Gella",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:35:14.497Z",
"user": {
"_id": "65ca8745d64d82c92fa7c71f",
"avatarUrl": "/avatars/4f475609f1573cd671e82122c7097f45.svg",
"fullname": "G",
"isPro": false,
"type": "user",
"user": "spandanagella"
}
},
{
"_id": "67a236ba5f63ce00e8402d6b",
"hidden": false,
"name": "Sai Rajeswar",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-03T13:34:51 |
AlignVLM: Bridging Vision and Language Latent Spaces for Multimodal
Understanding
|
Aligning visual features with language embeddings is a key challenge in
vision-language models (VLMs). The performance of such models hinges on having
a good connector that maps visual features generated by a vision encoder to a
shared embedding space with the LLM while preserving semantic similarity.
Existing connectors, such as multilayer perceptrons (MLPs), often produce
out-of-distribution or noisy inputs, leading to misalignment between the
modalities. In this work, we propose a novel vision-text alignment method,
AlignVLM, that maps visual features to a weighted average of LLM text
embeddings. Our approach leverages the linguistic priors encoded by the LLM to
ensure that visual features are mapped to regions of the space that the LLM can
effectively interpret. AlignVLM is particularly effective for document
understanding tasks, where scanned document images must be accurately mapped to
their textual content. Our extensive experiments show that AlignVLM achieves
state-of-the-art performance compared to prior alignment methods. We provide
further analysis demonstrating improved vision-text feature alignment and
robustness to noise.
| 36 |
67a236bb5f63ce00e8402ddc
| null | null |
|
2025-02-04T07:50:53.886000 |
MM-IQ: Benchmarking Human-Like Abstraction and Reasoning in Multimodal Models
| 2 |
{
"_id": "633b99cfc9b44f5c6ac8fe03",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/633b99cfc9b44f5c6ac8fe03/sFmpPlWwo07ttcWWuV1Iw.jpeg",
"followerCount": 2,
"fullname": "huanqiacai",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "huanqia",
"type": "user"
}
| true | null |
2502.00698
|
[
{
"_id": "67a1b8afe03dbbbbb51bb5c1",
"hidden": false,
"name": "Huanqia Cai",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T09:39:06.531Z",
"user": {
"_id": "633b99cfc9b44f5c6ac8fe03",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/633b99cfc9b44f5c6ac8fe03/sFmpPlWwo07ttcWWuV1Iw.jpeg",
"fullname": "huanqiacai",
"isPro": false,
"type": "user",
"user": "huanqia"
}
},
{
"_id": "67a1b8afe03dbbbbb51bb5c2",
"hidden": false,
"name": "Yijun Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T14:48:36.068Z",
"user": {
"_id": "645e553c3b6d85c65e8b0e54",
"avatarUrl": "/avatars/1fffc6499b9d65b21a895ca96f03b781.svg",
"fullname": "Steven",
"isPro": false,
"type": "user",
"user": "yijunyang"
}
},
{
"_id": "67a1b8afe03dbbbbb51bb5c3",
"hidden": false,
"name": "Winston Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-02T07:12:03 |
MM-IQ: Benchmarking Human-Like Abstraction and Reasoning in Multimodal
Models
|
IQ testing has served as a foundational methodology for evaluating human
cognitive capabilities, deliberately decoupling assessment from linguistic
background, language proficiency, or domain-specific knowledge to isolate core
competencies in abstraction and reasoning. Yet, artificial intelligence
research currently lacks systematic benchmarks to quantify these critical
cognitive dimensions in multimodal systems. To address this critical gap, we
propose MM-IQ, a comprehensive evaluation framework comprising 2,710
meticulously curated test items spanning 8 distinct reasoning paradigms.
Through systematic evaluation of leading open-source and proprietary
multimodal models, our benchmark reveals striking limitations: even
state-of-the-art architectures achieve only marginally superior performance to
random chance (27.49% vs. 25% baseline accuracy). This substantial performance
chasm highlights the inadequacy of current multimodal systems in approximating
fundamental human reasoning capacities, underscoring the need for
paradigm-shifting advancements to bridge this cognitive divide.
| 24 |
67a1b8b1e03dbbbbb51bb613
| null | null |
|
2025-02-04T07:40:38.331000 |
SliderSpace: Decomposing the Visual Capabilities of Diffusion Models
| 8 |
{
"_id": "636daf1b56c0762cfda074b5",
"avatarUrl": "/avatars/f44be5eb110acfa2efbd09de6b416239.svg",
"followerCount": 7,
"fullname": "Rohit Gandikota",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "RohitGandikota",
"type": "user"
}
| true | null |
2502.01639
|
[
{
"_id": "67a20a822cf1b98052d941d1",
"hidden": false,
"name": "Rohit Gandikota",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T17:11:21.938Z",
"user": {
"_id": "636daf1b56c0762cfda074b5",
"avatarUrl": "/avatars/f44be5eb110acfa2efbd09de6b416239.svg",
"fullname": "Rohit Gandikota",
"isPro": false,
"type": "user",
"user": "RohitGandikota"
}
},
{
"_id": "67a20a822cf1b98052d941d2",
"hidden": false,
"name": "Zongze Wu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:38:03.547Z",
"user": {
"_id": "62f923adebd15ad7b5b22141",
"avatarUrl": "/avatars/3454aa0bbbe4f119c551f7e9b522afa8.svg",
"fullname": "Zongze Wu",
"isPro": false,
"type": "user",
"user": "ZongzeWu"
}
},
{
"_id": "67a20a822cf1b98052d941d3",
"hidden": false,
"name": "Richard Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a20a822cf1b98052d941d4",
"hidden": false,
"name": "David Bau",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:38:13.109Z",
"user": {
"_id": "6214d6c01e35c843d42d1f77",
"avatarUrl": "/avatars/ac208cd180b4f3ed1ec367e581facfcf.svg",
"fullname": "David Bau",
"isPro": false,
"type": "user",
"user": "davidbau"
}
},
{
"_id": "67a20a822cf1b98052d941d5",
"hidden": false,
"name": "Eli Shechtman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a20a822cf1b98052d941d6",
"hidden": false,
"name": "Nick Kolkin",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-03T18:59:55 |
SliderSpace: Decomposing the Visual Capabilities of Diffusion Models
|
We present SliderSpace, a framework for automatically decomposing the visual
capabilities of diffusion models into controllable and human-understandable
directions. Unlike existing control methods that require a user to specify
attributes for each edit direction individually, SliderSpace discovers multiple
interpretable and diverse directions simultaneously from a single text prompt.
Each direction is trained as a low-rank adaptor, enabling compositional control
and the discovery of surprising possibilities in the model's latent space.
Through extensive experiments on state-of-the-art diffusion models, we
demonstrate SliderSpace's effectiveness across three applications: concept
decomposition, artistic style exploration, and diversity enhancement. Our
quantitative evaluation shows that SliderSpace-discovered directions decompose
the visual structure of model's knowledge effectively, offering insights into
the latent capabilities encoded within diffusion models. User studies further
validate that our method produces more diverse and useful variations compared
to baselines. Our code, data and trained weights are available at
https://sliderspace.baulab.info
| 25 |
67a20a892cf1b98052d943dd
| null | null |
|
2025-02-04T05:09:45.473000 |
Almost Surely Safe Alignment of Large Language Models at Inference-Time
| 2 |
{
"_id": "631c375768f7da9ad2496bf6",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/631c375768f7da9ad2496bf6/1sDOoecA6e1v_hn_VAgUq.jpeg",
"followerCount": 15,
"fullname": "Haitham Bou Ammar",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "hba123",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/631c375768f7da9ad2496bf6/jINeBtMoT2NBNmd9kSK9g.png"
] |
2502.01208
|
[
{
"_id": "67a1e729a6c7d65cad72b3d7",
"hidden": false,
"name": "Xiaotong Ji",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:40:16.111Z",
"user": {
"_id": "6682989afc0e69f80acf2845",
"avatarUrl": "/avatars/35d48738965fe21fdd79198a17d6c8cc.svg",
"fullname": "jixiaotong",
"isPro": false,
"type": "user",
"user": "xiaotong9515"
}
},
{
"_id": "67a1e729a6c7d65cad72b3d8",
"hidden": false,
"name": "Shyam Sundhar Ramesh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1e729a6c7d65cad72b3d9",
"hidden": false,
"name": "Matthieu Zimmer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1e729a6c7d65cad72b3da",
"hidden": false,
"name": "Ilija Bogunovic",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:39:45.148Z",
"user": {
"_id": "65fd58a41a58717c800d3650",
"avatarUrl": "/avatars/9dce361a0417465116c816abdf53e916.svg",
"fullname": "Bogunovic",
"isPro": false,
"type": "user",
"user": "ilijabogunovic"
}
},
{
"_id": "67a1e729a6c7d65cad72b3db",
"hidden": false,
"name": "Jun Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1e729a6c7d65cad72b3dc",
"hidden": false,
"name": "Haitham Bou Ammar",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:39:59.356Z",
"user": {
"_id": "631c375768f7da9ad2496bf6",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/631c375768f7da9ad2496bf6/1sDOoecA6e1v_hn_VAgUq.jpeg",
"fullname": "Haitham Bou Ammar",
"isPro": false,
"type": "user",
"user": "hba123"
}
}
] | 2025-02-03T09:59:32 |
Almost Surely Safe Alignment of Large Language Models at Inference-Time
|
Even highly capable large language models (LLMs) can produce biased or unsafe
responses, and alignment techniques, such as RLHF, aimed at mitigating this
issue, are expensive and prone to overfitting as they retrain the LLM. This
paper introduces a novel inference-time alignment approach that ensures LLMs
generate safe responses almost surely, i.e., with a probability approaching
one. We achieve this by framing the safe generation of inference-time responses
as a constrained Markov decision process within the LLM's latent space.
Crucially, we augment a safety state that tracks the evolution of safety
constraints and enables us to demonstrate formal safety guarantees upon solving
the MDP in the latent space. Building on this foundation, we propose
InferenceGuard, a practical implementation that safely aligns LLMs without
modifying the model weights. Empirically, we demonstrate InferenceGuard
effectively balances safety and task performance, outperforming existing
inference-time alignment methods in generating safe and aligned responses.
| 11 |
67a1e72aa6c7d65cad72b40f
| null | null |
|
2025-02-04T05:06:50.415000 |
PhD Knowledge Not Required: A Reasoning Challenge for Large Language Models
| 6 |
{
"_id": "62d8315bad693a1a962864b3",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1664332914111-62d8315bad693a1a962864b3.png",
"followerCount": 13,
"fullname": "Arjun Guha",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "arjunguha",
"type": "user"
}
| true | null |
2502.01584
|
[
{
"_id": "67a1e658a68ad21bcdffead6",
"hidden": false,
"name": "Carolyn Jane Anderson",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:09:21.263Z",
"user": {
"_id": "6243199444c9c3b21be74c50",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6243199444c9c3b21be74c50/uxnj03NBcV_TeC3eV2U-Q.jpeg",
"fullname": "Carolyn Anderson",
"isPro": false,
"type": "user",
"user": "canders1"
}
},
{
"_id": "67a1e658a68ad21bcdffead7",
"hidden": false,
"name": "Joydeep Biswas",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:09:28.567Z",
"user": {
"_id": "641e5fd25f274a0a92c30b7a",
"avatarUrl": "/avatars/192ef72795a032f3c73950143a13f6b9.svg",
"fullname": "Joydeep Biswas",
"isPro": false,
"type": "user",
"user": "joydeep-b"
}
},
{
"_id": "67a1e658a68ad21bcdffead8",
"hidden": false,
"name": "Aleksander Boruch-Gruszecki",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:09:34.032Z",
"user": {
"_id": "67508d980b12dc1a51ea5e59",
"avatarUrl": "/avatars/ec29de1d231a93af18c279fcd2ebbd0b.svg",
"fullname": "Aleksander Boruch-Gruszecki",
"isPro": false,
"type": "user",
"user": "abgruszecki"
}
},
{
"_id": "67a1e658a68ad21bcdffead9",
"hidden": false,
"name": "Federico Cassano",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:09:39.635Z",
"user": {
"_id": "642ca13cc3684e5a4e806661",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/642ca13cc3684e5a4e806661/ELhdKsK429zi4wZkVkx9Y.jpeg",
"fullname": "Federico Cassano",
"isPro": false,
"type": "user",
"user": "cassanof"
}
},
{
"_id": "67a1e658a68ad21bcdffeada",
"hidden": false,
"name": "Molly Q Feldman",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:09:48.494Z",
"user": {
"_id": "644c34858c51ddbe0ea78cb9",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/644c34858c51ddbe0ea78cb9/7BREj4M1LR4LAGk4-WS4G.jpeg",
"fullname": "Molly Feldman",
"isPro": false,
"type": "user",
"user": "feldmanmolly"
}
},
{
"_id": "67a1e658a68ad21bcdffeadb",
"hidden": false,
"name": "Arjun Guha",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:09:54.609Z",
"user": {
"_id": "62d8315bad693a1a962864b3",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1664332914111-62d8315bad693a1a962864b3.png",
"fullname": "Arjun Guha",
"isPro": false,
"type": "user",
"user": "arjunguha"
}
},
{
"_id": "67a1e658a68ad21bcdffeadc",
"hidden": false,
"name": "Francesca Lucchetti",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:10:03.253Z",
"user": {
"_id": "64025d945caf6d21d67cdad2",
"avatarUrl": "/avatars/17227ea5bd07c820f4f3fd29ffa5853e.svg",
"fullname": "Francesca Lucchetti",
"isPro": false,
"type": "user",
"user": "franlucc"
}
},
{
"_id": "67a1e658a68ad21bcdffeadd",
"hidden": false,
"name": "Zixuan Wu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:10:34.894Z",
"user": {
"_id": "6634fdc967cbac200e103bd7",
"avatarUrl": "/avatars/47bbdbc066055f25736d7e1cca928b1f.svg",
"fullname": "Zixuan Wu",
"isPro": false,
"type": "user",
"user": "AryaWu"
}
}
] | 2025-02-03T18:10:38 |
PhD Knowledge Not Required: A Reasoning Challenge for Large Language
Models
|
Existing benchmarks for frontier models often test specialized, ``PhD-level''
knowledge that is difficult for non-experts to grasp. In contrast, we present a
benchmark based on the NPR Sunday Puzzle Challenge that requires only general
knowledge. Our benchmark is challenging for both humans and models, however
correct solutions are easy to verify, and models' mistakes are easy to spot.
Our work reveals capability gaps that are not evident in existing benchmarks:
OpenAI o1 significantly outperforms other reasoning models that are on par on
benchmarks that test specialized knowledge. Furthermore, our analysis of
reasoning outputs uncovers new kinds of failures. DeepSeek R1, for instance,
often concedes with ``I give up'' before providing an answer that it knows is
wrong. R1 can also be remarkably ``uncertain'' in its output and in rare cases,
it does not ``finish thinking,'' which suggests the need for an inference-time
technique to ``wrap up'' before the context window limit is reached. We also
quantify the effectiveness of reasoning longer with R1 and Gemini Thinking to
identify the point beyond which more reasoning is unlikely to improve accuracy
on our benchmark.
| 9 |
67a1e659a68ad21bcdffeb04
| null | null |
|
2025-02-04T04:59:22.696000 |
Current Pathology Foundation Models are unrobust to Medical Center Differences
| 2 |
{
"_id": "67225dd94201755d88e104c4",
"avatarUrl": "/avatars/6da69788ce0cd41c86f9dd0bf8d092aa.svg",
"followerCount": null,
"fullname": "Edwin D. de Jong",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "EdwinDdeJong",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/67225dd94201755d88e104c4/oD8gcxl4D9G3FPXWGVGiz.png",
"https://cdn-uploads.huggingface.co/production/uploads/67225dd94201755d88e104c4/_jrPyZDKwbr3K9-Q4_sCH.png"
] |
2501.18055
|
[
{
"_id": "67a197099b2f48315e74dcde",
"hidden": false,
"name": "Edwin D. de Jong",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T09:39:28.120Z",
"user": {
"_id": "67225dd94201755d88e104c4",
"avatarUrl": "/avatars/6da69788ce0cd41c86f9dd0bf8d092aa.svg",
"fullname": "Edwin D. de Jong",
"isPro": false,
"type": "user",
"user": "EdwinDdeJong"
}
},
{
"_id": "67a197099b2f48315e74dcdf",
"hidden": false,
"name": "Eric Marcus",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a197099b2f48315e74dce0",
"hidden": false,
"name": "Jonas Teuwen",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-29T23:38:14 |
Current Pathology Foundation Models are unrobust to Medical Center
Differences
|
Pathology Foundation Models (FMs) hold great promise for healthcare. Before
they can be used in clinical practice, it is essential to ensure they are
robust to variations between medical centers. We measure whether pathology FMs
focus on biological features like tissue and cancer type, or on the well known
confounding medical center signatures introduced by staining procedure and
other differences. We introduce the Robustness Index. This novel robustness
metric reflects to what degree biological features dominate confounding
features. Ten current publicly available pathology FMs are evaluated. We find
that all current pathology foundation models evaluated represent the medical
center to a strong degree. Significant differences in the robustness index are
observed. Only one model so far has a robustness index greater than one,
meaning biological features dominate confounding features, but only slightly. A
quantitative approach to measure the influence of medical center differences on
FM-based prediction performance is described. We analyze the impact of
unrobustness on classification performance of downstream models, and find that
cancer-type classification errors are not random, but specifically attributable
to same-center confounders: images of other classes from the same medical
center. We visualize FM embedding spaces, and find these are more strongly
organized by medical centers than by biological factors. As a consequence, the
medical center of origin is predicted more accurately than the tissue source
and cancer type. The robustness index introduced here is provided with the aim
of advancing progress towards clinical adoption of robust and reliable
pathology FMs.
| 2 |
67a1970b9b2f48315e74dd5d
| null | null |
|
2025-02-04T04:35:57.149000 |
DeepRAG: Thinking to Retrieval Step by Step for Large Language Models
| 2 |
{
"_id": "643407dd4b34368fdb0149e8",
"avatarUrl": "/avatars/9477b9267d5692a4fe59e30590e9639d.svg",
"followerCount": 1,
"fullname": "Xinyan Guan",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "xinyan233333",
"type": "user"
}
| true | null |
2502.01142
|
[
{
"_id": "67a1b4630e9634919de9bc52",
"hidden": false,
"name": "Xinyan Guan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T09:39:08.849Z",
"user": {
"_id": "643407dd4b34368fdb0149e8",
"avatarUrl": "/avatars/9477b9267d5692a4fe59e30590e9639d.svg",
"fullname": "Xinyan Guan",
"isPro": false,
"type": "user",
"user": "xinyan233333"
}
},
{
"_id": "67a1b4630e9634919de9bc53",
"hidden": false,
"name": "Jiali Zeng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:36:59.186Z",
"user": {
"_id": "657bef0fff16eeb2ee40ed9c",
"avatarUrl": "/avatars/2a436b1d9b04c611a795f10363150aca.svg",
"fullname": "zeng",
"isPro": false,
"type": "user",
"user": "zengjiali"
}
},
{
"_id": "67a1b4630e9634919de9bc54",
"hidden": false,
"name": "Fandong Meng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:36:44.284Z",
"user": {
"_id": "64cb254871a7bbb60c17d5fa",
"avatarUrl": "/avatars/5121fd5b7b55d275eba3947f3f4c034d.svg",
"fullname": "Fandong Meng",
"isPro": false,
"type": "user",
"user": "fandong"
}
},
{
"_id": "67a1b4630e9634919de9bc55",
"hidden": false,
"name": "Chunlei Xin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:37:04.877Z",
"user": {
"_id": "667b74b9279199a7c610687f",
"avatarUrl": "/avatars/9834e9971579655a4c387a306c610f57.svg",
"fullname": "Chunlei Xin",
"isPro": false,
"type": "user",
"user": "meow77"
}
},
{
"_id": "67a1b4630e9634919de9bc56",
"hidden": false,
"name": "Yaojie Lu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:37:11.297Z",
"user": {
"_id": "6216496a9b34d2fb49144599",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6216496a9b34d2fb49144599/41CKA_h1Ffj3RzVabSAkm.jpeg",
"fullname": "Yaojie Lu",
"isPro": false,
"type": "user",
"user": "luyaojie"
}
},
{
"_id": "67a1b4630e9634919de9bc57",
"hidden": false,
"name": "Hongyu Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:37:20.338Z",
"user": {
"_id": "6711c702f858a456b4b9f3a4",
"avatarUrl": "/avatars/178e9567c3111ab22717c3c0dd003a6a.svg",
"fullname": "Hongyu Lin",
"isPro": false,
"type": "user",
"user": "sanmusunrise"
}
},
{
"_id": "67a1b4630e9634919de9bc58",
"hidden": false,
"name": "Xianpei Han",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:37:26.589Z",
"user": {
"_id": "65e99a77e71555ed193609cf",
"avatarUrl": "/avatars/38ceb127883944677665da967d17dd18.svg",
"fullname": "Xianpei Han",
"isPro": false,
"type": "user",
"user": "xphan"
}
},
{
"_id": "67a1b4630e9634919de9bc59",
"hidden": false,
"name": "Le Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1b4630e9634919de9bc5a",
"hidden": false,
"name": "Jie Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-03T08:22:45 |
DeepRAG: Thinking to Retrieval Step by Step for Large Language Models
|
Large Language Models (LLMs) have shown remarkable potential in reasoning
while they still suffer from severe factual hallucinations due to timeliness,
accuracy, and coverage of parametric knowledge. Meanwhile, integrating
reasoning with retrieval-augmented generation (RAG) remains challenging due to
ineffective task decomposition and redundant retrieval, which can introduce
noise and degrade response quality. In this paper, we propose DeepRAG, a
framework that models retrieval-augmented reasoning as a Markov Decision
Process (MDP), enabling strategic and adaptive retrieval. By iteratively
decomposing queries, DeepRAG dynamically determines whether to retrieve
external knowledge or rely on parametric reasoning at each step. Experiments
show that DeepRAG improves retrieval efficiency while improving answer accuracy
by 21.99%, demonstrating its effectiveness in optimizing retrieval-augmented
reasoning.
| 24 |
67a1b4640e9634919de9bc8b
| null | null |
|
2025-02-04T03:38:34.899000 |
A Study on the Performance of U-Net Modifications in Retroperitoneal Tumor Segmentation
| 3 |
{
"_id": "61ba19bf6122a4fd29049371",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1639586194527-noauth.jpeg",
"followerCount": 2,
"fullname": "Moein Heidari",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "moein99",
"type": "user"
}
| true | null |
2502.00314
|
[
{
"_id": "67a1d1ca167bea74d520eb59",
"hidden": false,
"name": "Moein Heidari",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:12:31.574Z",
"user": {
"_id": "61ba19bf6122a4fd29049371",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1639586194527-noauth.jpeg",
"fullname": "Moein Heidari",
"isPro": false,
"type": "user",
"user": "moein99"
}
},
{
"_id": "67a1d1ca167bea74d520eb5a",
"hidden": false,
"name": "Ehsan Khodapanah Aghdam",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1d1ca167bea74d520eb5b",
"hidden": false,
"name": "Alexander Manzella",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1d1ca167bea74d520eb5c",
"hidden": false,
"name": "Daniel Hsu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1d1ca167bea74d520eb5d",
"hidden": false,
"name": "Rebecca Scalabrino",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1d1ca167bea74d520eb5e",
"hidden": false,
"name": "Wenjin Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:13:15.403Z",
"user": {
"_id": "6793160813ed9a38f3c214ef",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/qVy6X8N_ckK5Z-y2ger_2.png",
"fullname": "wenjin chen",
"isPro": false,
"type": "user",
"user": "cwjbks"
}
},
{
"_id": "67a1d1ca167bea74d520eb5f",
"hidden": true,
"name": "David J. Foran",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1d1ca167bea74d520eb60",
"hidden": false,
"name": "Ilker Hacihaliloglu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-01T04:25:28 |
A Study on the Performance of U-Net Modifications in Retroperitoneal
Tumor Segmentation
|
The retroperitoneum hosts a variety of tumors, including rare benign and
malignant types, which pose diagnostic and treatment challenges due to their
infrequency and proximity to vital structures. Estimating tumor volume is
difficult due to their irregular shapes, and manual segmentation is
time-consuming. Automatic segmentation using U-Net and its variants,
incorporating Vision Transformer (ViT) elements, has shown promising results
but struggles with high computational demands. To address this, architectures
like the Mamba State Space Model (SSM) and Extended Long-Short Term Memory
(xLSTM) offer efficient solutions by handling long-range dependencies with
lower resource consumption. This study evaluates U-Net enhancements, including
CNN, ViT, Mamba, and xLSTM, on a new in-house CT dataset and a public organ
segmentation dataset. The proposed ViLU-Net model integrates Vi-blocks for
improved segmentation. Results highlight xLSTM's efficiency in the U-Net
framework. The code is publicly accessible on GitHub.
| 3 |
67a1d1cd167bea74d520ebf6
| null | null |
|
2025-02-04T03:22:06.520000 |
SafeRAG: Benchmarking Security in Retrieval-Augmented Generation of Large Language Model
| 5 |
{
"_id": "62a155e615eeab266b2f2243",
"avatarUrl": "/avatars/e89ef156e73af028e3ce3664e6cb4e62.svg",
"followerCount": 4,
"fullname": "Zhiyu Li",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "jimi888",
"type": "user"
}
| true | null |
2501.18636
|
[
{
"_id": "67a1bfc314cba2eba6da4b2b",
"hidden": false,
"name": "Xun Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1bfc314cba2eba6da4b2c",
"hidden": false,
"name": "Simin Niu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:11:40.648Z",
"user": {
"_id": "66daea8776dbaaa372eabec5",
"avatarUrl": "/avatars/1e5fbe4ff06bb6121c7029253b76b79f.svg",
"fullname": "siminniu",
"isPro": false,
"type": "user",
"user": "siminniu"
}
},
{
"_id": "67a1bfc314cba2eba6da4b2d",
"hidden": false,
"name": "Zhiyu Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:11:52.391Z",
"user": {
"_id": "661ef43ff65a7cf84e2291e1",
"avatarUrl": "/avatars/cf90dba5934763693c800b3708ce4771.svg",
"fullname": "Zhiyu (Drew) Li",
"isPro": false,
"type": "user",
"user": "zhiyuli"
}
},
{
"_id": "67a1bfc314cba2eba6da4b2e",
"hidden": false,
"name": "Sensen Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1bfc314cba2eba6da4b2f",
"hidden": false,
"name": "Hanyu Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T09:39:04.452Z",
"user": {
"_id": "669e0b93c7cb0568dac6e92e",
"avatarUrl": "/avatars/a39ea77d7391f164af8a80f94f85f2ca.svg",
"fullname": "hanyu Wang",
"isPro": false,
"type": "user",
"user": "UglyToilet"
}
},
{
"_id": "67a1bfc314cba2eba6da4b30",
"hidden": false,
"name": "Feiyu Xiong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1bfc314cba2eba6da4b31",
"hidden": false,
"name": "Jason Zhaoxin Fan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1bfc314cba2eba6da4b32",
"hidden": false,
"name": "Bo Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1bfc314cba2eba6da4b33",
"hidden": false,
"name": "Shichao Song",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1bfc314cba2eba6da4b34",
"hidden": false,
"name": "Mengwei Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1bfc314cba2eba6da4b35",
"hidden": false,
"name": "Jiawei Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-21T10:00:17.081Z",
"user": {
"_id": "669e60ee8580d17cb60f8347",
"avatarUrl": "/avatars/37963b833228afe39cc24854c9326670.svg",
"fullname": "yang jiawei",
"isPro": false,
"type": "user",
"user": "Dany-0"
}
}
] | 2025-01-28T17:01:31 |
SafeRAG: Benchmarking Security in Retrieval-Augmented Generation of
Large Language Model
|
The indexing-retrieval-generation paradigm of retrieval-augmented generation
(RAG) has been highly successful in solving knowledge-intensive tasks by
integrating external knowledge into large language models (LLMs). However, the
incorporation of external and unverified knowledge increases the vulnerability
of LLMs because attackers can perform attack tasks by manipulating knowledge.
In this paper, we introduce a benchmark named SafeRAG designed to evaluate the
RAG security. First, we classify attack tasks into silver noise, inter-context
conflict, soft ad, and white Denial-of-Service. Next, we construct RAG security
evaluation dataset (i.e., SafeRAG dataset) primarily manually for each task. We
then utilize the SafeRAG dataset to simulate various attack scenarios that RAG
may encounter. Experiments conducted on 14 representative RAG components
demonstrate that RAG exhibits significant vulnerability to all attack tasks and
even the most apparent attack task can easily bypass existing retrievers,
filters, or advanced LLMs, resulting in the degradation of RAG service quality.
Code is available at: https://github.com/IAAR-Shanghai/SafeRAG.
| 29 |
67a1bfc414cba2eba6da4b63
| null | null |
|
2025-02-04T03:10:49.348000 |
The Differences Between Direct Alignment Algorithms are a Blur
| 1 |
{
"_id": "62897fce5d9e25c10e4f319d",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62897fce5d9e25c10e4f319d/bMlfAyzkNNZlkQ5mCW6Vc.jpeg",
"followerCount": 8,
"fullname": "Alexey Gorbatovski",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Myashka",
"type": "user"
}
| false |
[
"https://cdn-uploads.huggingface.co/production/uploads/62897fce5d9e25c10e4f319d/ndKErkZSfT5LvqKfIrC7f.png"
] |
2502.01237
|
[
{
"_id": "67a1c1428747511e7b9a1965",
"hidden": false,
"name": "Alexey Gorbatovski",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T09:39:00.767Z",
"user": {
"_id": "62897fce5d9e25c10e4f319d",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62897fce5d9e25c10e4f319d/bMlfAyzkNNZlkQ5mCW6Vc.jpeg",
"fullname": "Alexey Gorbatovski",
"isPro": false,
"type": "user",
"user": "Myashka"
}
},
{
"_id": "67a1c1428747511e7b9a1966",
"hidden": false,
"name": "Boris Shaposhnikov",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T14:48:32.720Z",
"user": {
"_id": "637dd11dcbad6e62a5e39743",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/637dd11dcbad6e62a5e39743/DG3rM8cy8inqbCoG4qizO.jpeg",
"fullname": "Boris Shaposhnikov",
"isPro": false,
"type": "user",
"user": "borisshapa"
}
},
{
"_id": "67a1c1428747511e7b9a1967",
"hidden": false,
"name": "Viacheslav Sinii",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T09:38:52.039Z",
"user": {
"_id": "6416272d986557e8cac64ece",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6416272d986557e8cac64ece/s3CLjNN_pGj-vJDcENFD2.jpeg",
"fullname": "Viacheslav",
"isPro": false,
"type": "user",
"user": "ummagumm-a"
}
},
{
"_id": "67a1c1428747511e7b9a1968",
"hidden": false,
"name": "Alexey Malakhov",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T09:38:54.121Z",
"user": {
"_id": "636e71b2b0ebc04888157b71",
"avatarUrl": "/avatars/957ba705d470e3a01792741d7f0ff038.svg",
"fullname": "Alexey Malakhov",
"isPro": false,
"type": "user",
"user": "ZeL1k7"
}
},
{
"_id": "67a1c1428747511e7b9a1969",
"hidden": false,
"name": "Daniil Gavrilov",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T09:38:57.087Z",
"user": {
"_id": "62a9c8edc19f92ae443ab37f",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1669110208492-62a9c8edc19f92ae443ab37f.png",
"fullname": "Daniil Gavrilov",
"isPro": false,
"type": "user",
"user": "kefirski"
}
}
] | 2025-02-03T10:54:14 |
The Differences Between Direct Alignment Algorithms are a Blur
|
Direct Alignment Algorithms (DAAs) simplify language model alignment by
replacing reinforcement learning (RL) and reward modeling (RM) in Reinforcement
Learning from Human Feedback (RLHF) with direct policy optimization. DAAs can
be classified by their ranking losses (pairwise vs. pointwise), by the rewards
used in those losses (e.g., likelihood ratios of policy and reference policy,
or odds ratios), or by whether a Supervised Fine-Tuning (SFT) phase is required
(two-stage vs. one-stage). We first show that one-stage methods underperform
two-stage methods. To address this, we incorporate an explicit SFT phase and
introduce the beta parameter, controlling the strength of preference
optimization, into single-stage ORPO and ASFT. These modifications improve
their performance in Alpaca Eval 2 by +3.46 (ORPO) and +8.27 (ASFT),
matching two-stage methods like DPO. Further analysis reveals that the key
factor is whether the approach uses pairwise or pointwise objectives, rather
than the specific implicit reward or loss function. These results highlight the
importance of careful evaluation to avoid premature claims of performance gains
or overall superiority in alignment algorithms.
| 112 |
67a1c1438747511e7b9a19ae
| null | null |
|
2025-02-04T01:04:33.630000 |
Preference Leakage: A Contamination Problem in LLM-as-a-judge
| 5 |
{
"_id": "6474e1afb68461d5cf7c41cc",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6474e1afb68461d5cf7c41cc/bcoiD_qPrjHUBlB259djg.png",
"followerCount": 1,
"fullname": "Dawei Li",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "wjldw",
"type": "user"
}
| true | null |
2502.01534
|
[
{
"_id": "67a1ad77d797fac51fa80770",
"hidden": false,
"name": "Dawei Li",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:34:59.780Z",
"user": {
"_id": "6474e1afb68461d5cf7c41cc",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6474e1afb68461d5cf7c41cc/bcoiD_qPrjHUBlB259djg.png",
"fullname": "Dawei Li",
"isPro": false,
"type": "user",
"user": "wjldw"
}
},
{
"_id": "67a1ad77d797fac51fa80771",
"hidden": false,
"name": "Renliang Sun",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T09:39:11.035Z",
"user": {
"_id": "653a195b0da86d726c9c580c",
"avatarUrl": "/avatars/61649e1d600fdc1edc50ead0dfa99fdd.svg",
"fullname": "Renliang Sun",
"isPro": false,
"type": "user",
"user": "RLSNLP"
}
},
{
"_id": "67a1ad77d797fac51fa80772",
"hidden": false,
"name": "Yue Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1ad77d797fac51fa80773",
"hidden": false,
"name": "Ming Zhong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T14:54:39.212Z",
"user": {
"_id": "61d53df2062444ea769d3b79",
"avatarUrl": "/avatars/fa771202368b6b2626a8fdf1c4369239.svg",
"fullname": "Ming Zhong",
"isPro": false,
"type": "user",
"user": "MingZhong"
}
},
{
"_id": "67a1ad77d797fac51fa80774",
"hidden": false,
"name": "Bohan Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1ad77d797fac51fa80775",
"hidden": false,
"name": "Jiawei Han",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1ad77d797fac51fa80776",
"hidden": false,
"name": "Xiangliang Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:14:47.621Z",
"user": {
"_id": "605a97d9b54d35bc67a4ff12",
"avatarUrl": "/avatars/7a48a2dac4e6ebb9e775022e15ddc5a7.svg",
"fullname": "zhangxiangliang",
"isPro": false,
"type": "user",
"user": "ZhangXiangliang"
}
},
{
"_id": "67a1ad77d797fac51fa80777",
"hidden": false,
"name": "Wei Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:14:54.615Z",
"user": {
"_id": "62fa0ffe0697d224219a0cb7",
"avatarUrl": "/avatars/f0ef59e1c0cf4ab4fe5cee08d488bd03.svg",
"fullname": "Wei Wang",
"isPro": false,
"type": "user",
"user": "WeiWang"
}
},
{
"_id": "67a1ad77d797fac51fa80778",
"hidden": false,
"name": "Huan Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-03T17:13:03 |
Preference Leakage: A Contamination Problem in LLM-as-a-judge
|
Large Language Models (LLMs) as judges and LLM-based data synthesis have
emerged as two fundamental LLM-driven data annotation methods in model
development. While their combination significantly enhances the efficiency of
model training and evaluation, little attention has been given to the potential
contamination brought by this new model development paradigm. In this work, we
expose preference leakage, a contamination problem in LLM-as-a-judge caused by
the relatedness between the synthetic data generators and LLM-based evaluators.
To study this issue, we first define three common relatednesses between data
generator LLM and judge LLM: being the same model, having an inheritance
relationship, and belonging to the same model family. Through extensive
experiments, we empirically confirm the bias of judges towards their related
student models caused by preference leakage across multiple LLM baselines and
benchmarks. Further analysis suggests that preference leakage is a pervasive
issue that is harder to detect compared to previously identified biases in
LLM-as-a-judge scenarios. All of these findings imply that preference leakage
is a widespread and challenging problem in the area of LLM-as-a-judge. We
release all codes and data at:
https://github.com/David-Li0406/Preference-Leakage.
| 39 |
67a1ad78d797fac51fa807c1
| null | null |
|
2025-02-04T00:50:46.370000 |
Lifelong Sequential Knowledge Editing without Model Degradation
| 2 |
{
"_id": "64e8f4a24f3f7b0b84834315",
"avatarUrl": "/avatars/242bb68c7ccffe5061c2d1c229ea3b0b.svg",
"followerCount": 1,
"fullname": "Akshat Gupta",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "akshat57",
"type": "user"
}
| true | null |
2502.01636
|
[
{
"_id": "67a1aa5dc7fa0ccf0a32ceb1",
"hidden": false,
"name": "Akshat Gupta",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-04T05:53:11.213Z",
"user": {
"_id": "64e8f4a24f3f7b0b84834315",
"avatarUrl": "/avatars/242bb68c7ccffe5061c2d1c229ea3b0b.svg",
"fullname": "Akshat Gupta",
"isPro": false,
"type": "user",
"user": "akshat57"
}
},
{
"_id": "67a1aa5dc7fa0ccf0a32ceb2",
"hidden": false,
"name": "Phudish Prateepamornkul",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1aa5dc7fa0ccf0a32ceb3",
"hidden": false,
"name": "Maochuan Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1aa5dc7fa0ccf0a32ceb4",
"hidden": false,
"name": "Ahmed Alaa",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-07T09:58:25.997Z",
"user": {
"_id": "6353668fc0b9d81cd2668a2c",
"avatarUrl": "/avatars/4f6e54e702945d7d58b933ba4115a3e0.svg",
"fullname": "Ahmed Alaa",
"isPro": false,
"type": "user",
"user": "amalaa"
}
},
{
"_id": "67a1aa5dc7fa0ccf0a32ceb5",
"hidden": false,
"name": "Thomas Hartvigsen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1aa5dc7fa0ccf0a32ceb6",
"hidden": false,
"name": "Gopala Anumanchipalli",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:11:09.416Z",
"user": {
"_id": "60523e4aa7226b25aaeea2b8",
"avatarUrl": "/avatars/316ca348da91ebced86991f36150c959.svg",
"fullname": "Gopala Anumanchipalli",
"isPro": false,
"type": "user",
"user": "gopalakr"
}
}
] | 2025-02-03T18:59:14 |
Lifelong Sequential Knowledge Editing without Model Degradation
|
Prior work in parameter-modifying knowledge editing has shown that
large-scale sequential editing leads to significant model degradation. In this
paper, we study the reasons behind this and scale sequential knowledge editing
to 10,000 sequential edits, while maintaining the downstream performance of the
original model. We first show that locate-then-edit knowledge editing methods
lead to overfitting on the edited facts. We also show that continuous knowledge
editing using these methods leads to disproportionate growth in the norm of the
edited matrix. We then provide a crucial insight into the inner workings of
locate-then-edit methods. We show that norm-growth is a hidden trick employed
by these methods that gives larger importance to the output activations
produced from the edited layers. With this "importance hacking", the edited
layers provide a much larger contributions to the model's output. To mitigate
these issues, we present ENCORE - Early stopping and Norm-Constrained Robust
knowledge Editing. ENCORE controls for overfitting and the disproportionate
norm-growth to enable long-term sequential editing, where we are able to
perform up to 10,000 sequential edits without loss of downstream performance.
ENCORE is also 61% faster than MEMIT and 64% faster than AlphaEdit on
Llama3-8B.
| 5 |
67a1aa5fc7fa0ccf0a32cf90
| null | null |
|
2025-02-04T00:45:45.545000 |
FastKV: KV Cache Compression for Fast Long-Context Processing with Token-Selective Propagation
| 2 |
{
"_id": "639ffbc6beb95d698de9640d",
"avatarUrl": "/avatars/7ef1aaadd5b378d00e17dc548e42cb7e.svg",
"followerCount": 2,
"fullname": "Dongwon Jo",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "dongwonjo",
"type": "user"
}
| true | null |
2502.01068
|
[
{
"_id": "67a1a75f6aa8429da4945eeb",
"hidden": false,
"name": "Dongwon Jo",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T09:39:16.125Z",
"user": {
"_id": "639ffbc6beb95d698de9640d",
"avatarUrl": "/avatars/7ef1aaadd5b378d00e17dc548e42cb7e.svg",
"fullname": "Dongwon Jo",
"isPro": false,
"type": "user",
"user": "dongwonjo"
}
},
{
"_id": "67a1a75f6aa8429da4945eec",
"hidden": false,
"name": "Jiwon Song",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T09:39:14.253Z",
"user": {
"_id": "662672eaebdfec5cfdf1d034",
"avatarUrl": "/avatars/61bc7add693c555e29ad3c1112215684.svg",
"fullname": "Jiwon Song",
"isPro": false,
"type": "user",
"user": "jiwonsong"
}
},
{
"_id": "67a1a75f6aa8429da4945eed",
"hidden": false,
"name": "Yulhwa Kim",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:35:45.706Z",
"user": {
"_id": "6566ddb96af53c602f80b1e2",
"avatarUrl": "/avatars/403c8e486115920e50867b6462ddfd99.svg",
"fullname": "Yulhwa Kim",
"isPro": false,
"type": "user",
"user": "YulhwaKim"
}
},
{
"_id": "67a1a75f6aa8429da4945eee",
"hidden": false,
"name": "Jae-Joon Kim",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-03T05:25:09 |
FastKV: KV Cache Compression for Fast Long-Context Processing with
Token-Selective Propagation
|
While large language models (LLMs) excel at handling long-context sequences,
they require substantial key-value (KV) caches to store contextual information,
which can heavily burden computational efficiency and memory usage. Previous
efforts to compress these KV caches primarily focused on reducing memory
demands but were limited in enhancing latency. To address this issue, we
introduce FastKV, a KV cache compression method designed to enhance latency for
long-context sequences. To enhance processing speeds while maintaining
accuracy, FastKV adopts a novel Token-Selective Propagation (TSP) approach that
retains the full context information in the initial layers of LLMs and
selectively propagates only a portion of this information in deeper layers even
in the prefill stage. Additionally, FastKV incorporates grouped-query attention
(GQA)-aware KV cache compression to exploit the advantages of GQA in both
memory and computational efficiency. Our experimental results show that FastKV
achieves 2.00times and 1.40times improvements in time-to-first-token
(TTFT) and throughput, respectively, compared to HeadKV, the state-of-the-art
KV cache compression method. Moreover, FastKV successfully maintains accuracy
on long-context benchmarks at levels comparable to the baselines. Our code is
available at https://github.com/dongwonjo/FastKV.
| 16 |
67a1a7616aa8429da4945f95
| null | null |
|
2025-02-04T00:37:57.949000 |
OmniHuman-1: Rethinking the Scaling-Up of One-Stage Conditioned Human Animation Models
| 19 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| true | null |
2502.01061
|
[
{
"_id": "67a1a7a166a8a88726963ef4",
"hidden": false,
"name": "Gaojie Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T14:49:22.604Z",
"user": {
"_id": "64802fcdcc9e514b3b031244",
"avatarUrl": "/avatars/cc5979008bdb21a2be9575865dce909b.svg",
"fullname": "Gaojie Lin",
"isPro": false,
"type": "user",
"user": "lingaojie"
}
},
{
"_id": "67a1a7a166a8a88726963ef5",
"hidden": false,
"name": "Jianwen Jiang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-05T10:15:17.797Z",
"user": {
"_id": "643410509bd5a84b5dca1345",
"avatarUrl": "/avatars/597823ce61c7b1a5da77e178820824f6.svg",
"fullname": "Jianwen Jiang",
"isPro": false,
"type": "user",
"user": "JianwenJ"
}
},
{
"_id": "67a1a7a166a8a88726963ef6",
"hidden": false,
"name": "Jiaqi Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-14T08:01:37.757Z",
"user": {
"_id": "6527603743db8c626c467726",
"avatarUrl": "/avatars/216df0374b37355d57d2c76e6f08e4d6.svg",
"fullname": "yang",
"isPro": false,
"type": "user",
"user": "jiaqi78"
}
},
{
"_id": "67a1a7a166a8a88726963ef7",
"hidden": false,
"name": "Zerong Zheng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T14:50:10.474Z",
"user": {
"_id": "65b0b6d00648e8d10b609066",
"avatarUrl": "/avatars/071d2a99f6d7a4e37d338e58d46c4bc2.svg",
"fullname": "ZZerong",
"isPro": false,
"type": "user",
"user": "zerong2"
}
},
{
"_id": "67a1a7a166a8a88726963ef8",
"hidden": false,
"name": "Chao Liang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-05T10:15:22.815Z",
"user": {
"_id": "66d7ca4c0429a62c38c46bf5",
"avatarUrl": "/avatars/45c7859627ac4be70c40f5b1fd02b18a.svg",
"fullname": "liangdebugger",
"isPro": false,
"type": "user",
"user": "chao0412"
}
}
] | 2025-02-03T05:17:32 |
OmniHuman-1: Rethinking the Scaling-Up of One-Stage Conditioned Human
Animation Models
|
End-to-end human animation, such as audio-driven talking human generation,
has undergone notable advancements in the recent few years. However, existing
methods still struggle to scale up as large general video generation models,
limiting their potential in real applications. In this paper, we propose
OmniHuman, a Diffusion Transformer-based framework that scales up data by
mixing motion-related conditions into the training phase. To this end, we
introduce two training principles for these mixed conditions, along with the
corresponding model architecture and inference strategy. These designs enable
OmniHuman to fully leverage data-driven motion generation, ultimately achieving
highly realistic human video generation. More importantly, OmniHuman supports
various portrait contents (face close-up, portrait, half-body, full-body),
supports both talking and singing, handles human-object interactions and
challenging body poses, and accommodates different image styles. Compared to
existing end-to-end audio-driven methods, OmniHuman not only produces more
realistic videos, but also offers greater flexibility in inputs. It also
supports multiple driving modalities (audio-driven, video-driven and combined
driving signals). Video samples are provided on the ttfamily project page
(https://omnihuman-lab.github.io)
| 184 |
67a1a7a466a8a88726963f90
| null | null |
|
2025-02-04T00:32:03.929000 |
ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2502.01100
|
[
{
"_id": "67a1a649f4aecd0dfc96ebf4",
"hidden": false,
"name": "Bill Yuchen Lin",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T09:39:17.972Z",
"user": {
"_id": "607f666a4ad99100d63ce35c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/607f666a4ad99100d63ce35c/QxhxnvfeV6efkxwUFHwjI.png",
"fullname": "Bill Yuchen Lin",
"isPro": false,
"type": "user",
"user": "yuchenlin"
}
},
{
"_id": "67a1a649f4aecd0dfc96ebf5",
"hidden": false,
"name": "Ronan Le Bras",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-04T05:31:56.722Z",
"user": {
"_id": "635049104e753c9940fefd71",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/635049104e753c9940fefd71/HgR43XIFw3dneY5ufrAE8.jpeg",
"fullname": "Ronan Le Bras",
"isPro": false,
"type": "user",
"user": "ronanlb"
}
},
{
"_id": "67a1a649f4aecd0dfc96ebf6",
"hidden": false,
"name": "Kyle Richardson",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T09:18:11.166Z",
"user": {
"_id": "659c6f5e1d398a238152227d",
"avatarUrl": "/avatars/dba97fe8cb825102f1eae97104a71f64.svg",
"fullname": "Kyle Richardson",
"isPro": false,
"type": "user",
"user": "yakazimir"
}
},
{
"_id": "67a1a649f4aecd0dfc96ebf7",
"hidden": false,
"name": "Ashish Sabharwal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1a649f4aecd0dfc96ebf8",
"hidden": false,
"name": "Radha Poovendran",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1a649f4aecd0dfc96ebf9",
"hidden": false,
"name": "Peter Clark",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:38:40.973Z",
"user": {
"_id": "64d265cfbe712cda5ab7cc3f",
"avatarUrl": "/avatars/caab6fa5764a0271552ae589d352b592.svg",
"fullname": "Peter Clarke",
"isPro": false,
"type": "user",
"user": "PeterClarke"
}
},
{
"_id": "67a1a649f4aecd0dfc96ebfa",
"hidden": false,
"name": "Yejin Choi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:38:53.950Z",
"user": {
"_id": "64d42729f63b01b7f676b176",
"avatarUrl": "/avatars/52e54bdd6a1fb6c774a40cd70f3d7925.svg",
"fullname": "Yejin Choi",
"isPro": false,
"type": "user",
"user": "yejinchoinka"
}
}
] | 2025-02-03T06:44:49 |
ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning
|
We investigate the logical reasoning capabilities of large language models
(LLMs) and their scalability in complex non-monotonic reasoning. To this end,
we introduce ZebraLogic, a comprehensive evaluation framework for assessing LLM
reasoning performance on logic grid puzzles derived from constraint
satisfaction problems (CSPs). ZebraLogic enables the generation of puzzles with
controllable and quantifiable complexity, facilitating a systematic study of
the scaling limits of models such as Llama, o1 models, and DeepSeek-R1. By
encompassing a broad range of search space complexities and diverse logical
constraints, ZebraLogic provides a structured environment to evaluate reasoning
under increasing difficulty.
Our results reveal a significant decline in accuracy as problem complexity
grows -- a phenomenon we term the curse of complexity. This limitation persists
even with larger models and increased inference-time computation, suggesting
inherent constraints in current LLM reasoning capabilities. Additionally, we
explore strategies to enhance logical reasoning, including Best-of-N sampling,
backtracking mechanisms, and self-verification prompts. Our findings offer
critical insights into the scalability of LLM reasoning, highlight fundamental
limitations, and outline potential directions for improvement.
| 17 |
67a1a64cf4aecd0dfc96ecb8
| null | null |
|
2025-02-04T00:28:35.436000 |
The Jumping Reasoning Curve? Tracking the Evolution of Reasoning Performance in GPT-[n] and o-[n] Models on Multimodal Puzzles
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2502.01081
|
[
{
"_id": "67a1a56d83c3565727d22f0c",
"hidden": false,
"name": "Vernon Y. H. Toh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1a56d83c3565727d22f0d",
"hidden": false,
"name": "Yew Ken Chia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1a56d83c3565727d22f0e",
"hidden": false,
"name": "Deepanway Ghosal",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:40:36.537Z",
"user": {
"_id": "62eced5e7e89e8d34df1a1ea",
"avatarUrl": "/avatars/99c9d8ba7e7722b7524d5d687cf96a25.svg",
"fullname": "Deepanway Ghosal",
"isPro": false,
"type": "user",
"user": "dghosal"
}
},
{
"_id": "67a1a56d83c3565727d22f0f",
"hidden": false,
"name": "Soujanya Poria",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:40:30.882Z",
"user": {
"_id": "626b626405fe1cb65725aca1",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/626b626405fe1cb65725aca1/aa-Lata46I3fXOmMetvXH.jpeg",
"fullname": "Soujanya Poria",
"isPro": false,
"type": "user",
"user": "soujanyaporia"
}
}
] | 2025-02-03T05:47:04 |
The Jumping Reasoning Curve? Tracking the Evolution of Reasoning
Performance in GPT-[n] and o-[n] Models on Multimodal Puzzles
|
The releases of OpenAI's o1 and o3 mark a significant paradigm shift in Large
Language Models towards advanced reasoning capabilities. Notably, o3
outperformed humans in novel problem-solving and skill acquisition on the
Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI).
However, this benchmark is limited to symbolic patterns, whereas humans often
perceive and reason about multimodal scenarios involving both vision and
language data. Thus, there is an urgent need to investigate advanced reasoning
capabilities in multimodal tasks. To this end, we track the evolution of the
GPT-[n] and o-[n] series models on challenging multimodal puzzles, requiring
fine-grained visual perception with abstract or algorithmic reasoning. The
superior performance of o1 comes at nearly 750 times the computational cost of
GPT-4o, raising concerns about its efficiency. Our results reveal a clear
upward trend in reasoning capabilities across model iterations, with notable
performance jumps across GPT-series models and subsequently to o1. Nonetheless,
we observe that the o1 model still struggles with simple multimodal puzzles
requiring abstract reasoning. Furthermore, its performance in algorithmic
puzzles remains poor. We plan to continuously track new models in the series
and update our results in this paper accordingly. All resources used in this
evaluation are openly available https://github.com/declare-lab/LLM-PuzzleTest.
| 14 |
67a1a57083c3565727d22fc6
| null | null |
|
2025-02-04T00:27:13.960000 |
Scaling Embedding Layers in Language Models
| 4 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| true | null |
2502.01637
|
[
{
"_id": "67a1a51e6aa8429da493d0b5",
"hidden": false,
"name": "Da Yu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-05T10:15:26.901Z",
"user": {
"_id": "64b785384df206a3ed142dc0",
"avatarUrl": "/avatars/501a90b2c80d9b3a2e0d1819a4211f84.svg",
"fullname": "Da Yu",
"isPro": false,
"type": "user",
"user": "Jellyfish0538"
}
},
{
"_id": "67a1a51e6aa8429da493d0b6",
"hidden": false,
"name": "Edith Cohen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1a51e6aa8429da493d0b7",
"hidden": false,
"name": "Badih Ghazi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1a51e6aa8429da493d0b8",
"hidden": false,
"name": "Yangsibo Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:04:05.965Z",
"user": {
"_id": "645949e7ecf89b6a375ff129",
"avatarUrl": "/avatars/93f167fe70c43328b95ed597b6dfa51b.svg",
"fullname": "Yangsibo Huang",
"isPro": true,
"type": "user",
"user": "yangsibo"
}
},
{
"_id": "67a1a51e6aa8429da493d0b9",
"hidden": false,
"name": "Pritish Kamath",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:04:12.118Z",
"user": {
"_id": "64c0d2f8d76592ba8996036c",
"avatarUrl": "/avatars/7b7720ad2060ac5c36651fee8d43ba69.svg",
"fullname": "Pritish Kamath",
"isPro": false,
"type": "user",
"user": "pritishkamath"
}
},
{
"_id": "67a1a51e6aa8429da493d0ba",
"hidden": false,
"name": "Ravi Kumar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1a51e6aa8429da493d0bb",
"hidden": false,
"name": "Daogao Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T20:58:14.948Z",
"user": {
"_id": "667a19869f78b2d01bc402b1",
"avatarUrl": "/avatars/b7fb00826a62e70d2dae5f978b7366f3.svg",
"fullname": "Daogao Liu",
"isPro": false,
"type": "user",
"user": "ShyShowmaker"
}
},
{
"_id": "67a1a51e6aa8429da493d0bc",
"hidden": false,
"name": "Chiyuan Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T20:58:08.569Z",
"user": {
"_id": "65c65a2ef2d7e1eb92256d1f",
"avatarUrl": "/avatars/b02979bc1549a16515c880ce836c3023.svg",
"fullname": "Chiyuan Zhang",
"isPro": false,
"type": "user",
"user": "pluspluskid"
}
}
] | 2025-02-03T18:59:32 |
Scaling Embedding Layers in Language Models
|
We propose SCONE (Scalable, Contextualized,
Offloaded, N-gram Embedding), a method for
extending input embedding layers to enhance language model performance as layer
size scales. To avoid increased decoding costs, SCONE retains the original
vocabulary while introducing embeddings for a set of frequent n-grams. These
embeddings provide contextualized representation for each input token and are
learned with a separate model during training. During inference, they are
precomputed and stored in off-accelerator memory with minimal impact on
inference speed. SCONE enables two new scaling strategies: increasing the
number of cached n-gram embeddings and scaling the model used to learn them,
all while maintaining fixed inference-time FLOPS. We show that scaling both
aspects allows SCONE to outperform a 1.9B parameter baseline across diverse
corpora, while using only half the inference-time FLOPS.
| 24 |
67a1a51e6aa8429da493d0d5
| null | null |
|
2025-02-04T00:25:52.071000 |
Improving Transformer World Models for Data-Efficient RL
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2502.01591
|
[
{
"_id": "67a1a4b72bf092a7612b36eb",
"hidden": false,
"name": "Antoine Dedieu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1a4b72bf092a7612b36ec",
"hidden": false,
"name": "Joseph Ortiz",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1a4b72bf092a7612b36ed",
"hidden": false,
"name": "Xinghua Lou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:08:45.474Z",
"user": {
"_id": "665c8e13a1ff38df9706379e",
"avatarUrl": "/avatars/e0aecfac58ff98c628fd57afee53f791.svg",
"fullname": "xinghua Lou",
"isPro": false,
"type": "user",
"user": "nickname-xingxing"
}
},
{
"_id": "67a1a4b72bf092a7612b36ee",
"hidden": false,
"name": "Carter Wendelken",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1a4b72bf092a7612b36ef",
"hidden": false,
"name": "Wolfgang Lehrach",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T21:08:36.618Z",
"user": {
"_id": "6310ec5d64939fabc00aea54",
"avatarUrl": "/avatars/d3fde9392fd30a6d80e2e7989ed4db17.svg",
"fullname": "Wolfgang Lehrach",
"isPro": false,
"type": "user",
"user": "wpl"
}
},
{
"_id": "67a1a4b72bf092a7612b36f0",
"hidden": false,
"name": "J Swaroop Guntupalli",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1a4b72bf092a7612b36f1",
"hidden": false,
"name": "Miguel Lazaro-Gredilla",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1a4b72bf092a7612b36f2",
"hidden": false,
"name": "Kevin Patrick Murphy",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-03T18:25:17 |
Improving Transformer World Models for Data-Efficient RL
|
We present an approach to model-based RL that achieves a new state of the art
performance on the challenging Craftax-classic benchmark, an open-world 2D
survival game that requires agents to exhibit a wide range of general abilities
-- such as strong generalization, deep exploration, and long-term reasoning.
With a series of careful design choices aimed at improving sample efficiency,
our MBRL algorithm achieves a reward of 67.4% after only 1M environment steps,
significantly outperforming DreamerV3, which achieves 53.2%, and, for the first
time, exceeds human performance of 65.0%. Our method starts by constructing a
SOTA model-free baseline, using a novel policy architecture that combines CNNs
and RNNs. We then add three improvements to the standard MBRL setup: (a) "Dyna
with warmup", which trains the policy on real and imaginary data, (b) "nearest
neighbor tokenizer" on image patches, which improves the scheme to create the
transformer world model (TWM) inputs, and (c) "block teacher forcing", which
allows the TWM to reason jointly about the future tokens of the next timestep.
| 9 |
67a1a4b82bf092a7612b371b
| null | null |
|
2025-02-04T00:02:39.922000 |
Process Reinforcement through Implicit Rewards
| 2 |
{
"_id": "6321152b8c0da827c72c7c16",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1678783813705-6321152b8c0da827c72c7c16.jpeg",
"followerCount": 13,
"fullname": "Hanbin Wang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "hanbin",
"type": "user"
}
| true | null |
2502.01456
|
[
{
"_id": "67a19d705efa4fab15497775",
"hidden": false,
"name": "Ganqu Cui",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T09:39:23.889Z",
"user": {
"_id": "650eba9555dc1e841746f132",
"avatarUrl": "/avatars/af6f5ee78f161d25ec0afc45d2def8eb.svg",
"fullname": "Ganqu Cui",
"isPro": false,
"type": "user",
"user": "ganqu"
}
},
{
"_id": "67a19d705efa4fab15497776",
"hidden": false,
"name": "Lifan Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a19d705efa4fab15497777",
"hidden": false,
"name": "Zefan Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a19d705efa4fab15497778",
"hidden": false,
"name": "Hanbin Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T09:39:25.869Z",
"user": {
"_id": "6321152b8c0da827c72c7c16",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1678783813705-6321152b8c0da827c72c7c16.jpeg",
"fullname": "Hanbin Wang",
"isPro": false,
"type": "user",
"user": "hanbin"
}
},
{
"_id": "67a19d705efa4fab15497779",
"hidden": false,
"name": "Wendi Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T14:51:24.261Z",
"user": {
"_id": "671bfaa29e5e675c7f5c4307",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/PwDA6OSSAmg6k4LliEQkZ.png",
"fullname": "Wendi Li",
"isPro": false,
"type": "user",
"user": "wendili"
}
},
{
"_id": "67a19d705efa4fab1549777a",
"hidden": false,
"name": "Bingxiang He",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T14:51:31.090Z",
"user": {
"_id": "64c5e944979493279b700cb2",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/vjFuPWw8Vl7b7gXB19Sk-.jpeg",
"fullname": "Bingxiang He",
"isPro": false,
"type": "user",
"user": "hbx"
}
},
{
"_id": "67a19d705efa4fab1549777b",
"hidden": false,
"name": "Yuchen Fan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T14:51:56.597Z",
"user": {
"_id": "672c2d7816766a76a747b7b5",
"avatarUrl": "/avatars/12c7b26d2b81721ccac3a5c71e32a1a1.svg",
"fullname": "Yuchen Fan",
"isPro": false,
"type": "user",
"user": "yuchenFan"
}
},
{
"_id": "67a19d705efa4fab1549777c",
"hidden": false,
"name": "Tianyu Yu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T14:52:26.615Z",
"user": {
"_id": "64abc4aa6cadc7aca585dddf",
"avatarUrl": "/avatars/736afea979cd0021c7a37f68731524ea.svg",
"fullname": "Tianyu Yu",
"isPro": false,
"type": "user",
"user": "Yirany"
}
},
{
"_id": "67a19d705efa4fab1549777d",
"hidden": false,
"name": "Qixin Xu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T14:15:22.453Z",
"user": {
"_id": "6680f0b20b72be136708af26",
"avatarUrl": "/avatars/5d8fd5be0cf94e246b46abb9d3cc8f5c.svg",
"fullname": "XuQixin",
"isPro": false,
"type": "user",
"user": "Racktic"
}
},
{
"_id": "67a19d705efa4fab1549777e",
"hidden": false,
"name": "Weize Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T14:52:46.343Z",
"user": {
"_id": "648312243b7fe59c876c0dca",
"avatarUrl": "/avatars/c26ad76cd213529e4670bb599b8199bb.svg",
"fullname": "weize",
"isPro": false,
"type": "user",
"user": "weizechen"
}
},
{
"_id": "67a19d705efa4fab1549777f",
"hidden": false,
"name": "Jiarui Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a19d705efa4fab15497780",
"hidden": false,
"name": "Huayu Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T14:53:06.620Z",
"user": {
"_id": "6630f87ee53fcb71c3887df0",
"avatarUrl": "/avatars/50191a3d45bebf90cf08df09477e95db.svg",
"fullname": "HuayuChen",
"isPro": false,
"type": "user",
"user": "HuayuChen"
}
},
{
"_id": "67a19d705efa4fab15497781",
"hidden": false,
"name": "Kaiyan Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T15:36:03.829Z",
"user": {
"_id": "60bc94cd85a3ab33829b6211",
"avatarUrl": "/avatars/b57d36c7577fbbb42ea5b963eef4144a.svg",
"fullname": "Kaiyan Zhang",
"isPro": false,
"type": "user",
"user": "iseesaw"
}
},
{
"_id": "67a19d705efa4fab15497782",
"hidden": false,
"name": "Xingtai Lv",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T14:53:21.172Z",
"user": {
"_id": "663f07d029be04778ba97871",
"avatarUrl": "/avatars/fb7c9d4a2c537d918a3267e7cbc03f04.svg",
"fullname": "Xingtai Lv",
"isPro": false,
"type": "user",
"user": "XingtaiHF"
}
},
{
"_id": "67a19d705efa4fab15497783",
"hidden": false,
"name": "Shuo Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a19d705efa4fab15497784",
"hidden": false,
"name": "Yuan Yao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a19d705efa4fab15497785",
"hidden": false,
"name": "Xu Han",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a19d705efa4fab15497786",
"hidden": false,
"name": "Hao Peng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a19d705efa4fab15497787",
"hidden": false,
"name": "Yu Cheng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T14:48:37.956Z",
"user": {
"_id": "67017abfe4d49b157ac534d9",
"avatarUrl": "/avatars/997e1b9f54b27a7728a9d4abfee4ba91.svg",
"fullname": "Yu Cheng",
"isPro": false,
"type": "user",
"user": "ych133"
}
},
{
"_id": "67a19d705efa4fab15497788",
"hidden": false,
"name": "Zhiyuan Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T14:53:42.497Z",
"user": {
"_id": "6310a3cd531cc21f9e06de6a",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6310a3cd531cc21f9e06de6a/aTGMx3O41lUARK9s3dAik.jpeg",
"fullname": "Zhiyuan Liu",
"isPro": false,
"type": "user",
"user": "acharkq"
}
},
{
"_id": "67a19d705efa4fab15497789",
"hidden": false,
"name": "Maosong Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a19d705efa4fab1549778a",
"hidden": false,
"name": "Bowen Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a19d705efa4fab1549778b",
"hidden": false,
"name": "Ning Ding",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-03T15:43:48 |
Process Reinforcement through Implicit Rewards
|
Dense process rewards have proven a more effective alternative to the sparse
outcome-level rewards in the inference-time scaling of large language models
(LLMs), particularly in tasks requiring complex multi-step reasoning. While
dense rewards also offer an appealing choice for the reinforcement learning
(RL) of LLMs since their fine-grained rewards have the potential to address
some inherent issues of outcome rewards, such as training efficiency and credit
assignment, this potential remains largely unrealized. This can be primarily
attributed to the challenges of training process reward models (PRMs) online,
where collecting high-quality process labels is prohibitively expensive, making
them particularly vulnerable to reward hacking. To address these challenges, we
propose PRIME (Process Reinforcement through IMplicit rEwards), which enables
online PRM updates using only policy rollouts and outcome labels through
implict process rewards. PRIME combines well with various advantage functions
and forgoes the dedicated reward model training phrase that existing approaches
require, substantially reducing the development overhead. We demonstrate
PRIME's effectiveness on competitional math and coding. Starting from
Qwen2.5-Math-7B-Base, PRIME achieves a 15.1% average improvement across several
key reasoning benchmarks over the SFT model. Notably, our resulting model,
Eurus-2-7B-PRIME, surpasses Qwen2.5-Math-7B-Instruct on seven reasoning
benchmarks with 10% of its training data.
| 55 |
67a19d705efa4fab154977d0
| null | null |
|
2025-02-03T22:32:23.956000 |
Improved Training Technique for Latent Consistency Models
| 2 |
{
"_id": "63e083e6f351dc0745745d17",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63e083e6f351dc0745745d17/N0GE4uLrkm14blAQMnm2E.jpeg",
"followerCount": 1,
"fullname": "Quan Dao",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "quandao10",
"type": "user"
}
| true | null |
2502.01441
|
[
{
"_id": "67a189e8fbbab3ce03462fb3",
"hidden": false,
"name": "Quan Dao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T09:39:30.529Z",
"user": {
"_id": "63e083e6f351dc0745745d17",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63e083e6f351dc0745745d17/N0GE4uLrkm14blAQMnm2E.jpeg",
"fullname": "Quan Dao",
"isPro": false,
"type": "user",
"user": "quandao10"
}
},
{
"_id": "67a189e8fbbab3ce03462fb4",
"hidden": false,
"name": "Khanh Doan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a189e8fbbab3ce03462fb5",
"hidden": false,
"name": "Di Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a189e8fbbab3ce03462fb6",
"hidden": false,
"name": "Trung Le",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-04T03:30:50.175Z",
"user": {
"_id": "66db7db231e772c5ec4c5576",
"avatarUrl": "/avatars/aa0eb054bd6c881054431a22daf1aea1.svg",
"fullname": "Trung Le",
"isPro": false,
"type": "user",
"user": "trungleuc"
}
},
{
"_id": "67a189e8fbbab3ce03462fb7",
"hidden": false,
"name": "Dimitris Metaxas",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-03T15:25:58 |
Improved Training Technique for Latent Consistency Models
|
Consistency models are a new family of generative models capable of producing
high-quality samples in either a single step or multiple steps. Recently,
consistency models have demonstrated impressive performance, achieving results
on par with diffusion models in the pixel space. However, the success of
scaling consistency training to large-scale datasets, particularly for
text-to-image and video generation tasks, is determined by performance in the
latent space. In this work, we analyze the statistical differences between
pixel and latent spaces, discovering that latent data often contains highly
impulsive outliers, which significantly degrade the performance of iCT in the
latent space. To address this, we replace Pseudo-Huber losses with Cauchy
losses, effectively mitigating the impact of outliers. Additionally, we
introduce a diffusion loss at early timesteps and employ optimal transport (OT)
coupling to further enhance performance. Lastly, we introduce the adaptive
scaling-c scheduler to manage the robust training process and adopt
Non-scaling LayerNorm in the architecture to better capture the statistics of
the features and reduce outlier impact. With these strategies, we successfully
train latent consistency models capable of high-quality sampling with one or
two steps, significantly narrowing the performance gap between latent
consistency and diffusion models. The implementation is released here:
https://github.com/quandao10/sLCT/
| 8 |
67a189eafbbab3ce0346300b
| null | null |
|
2025-02-03T22:22:44.375000 |
AIN: The Arabic INclusive Large Multimodal Model
| 2 |
{
"_id": "656864e12d73834278a8dea7",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/656864e12d73834278a8dea7/sfAWS2eyPtFHb_2GZIypp.jpeg",
"followerCount": 27,
"fullname": "Ahmed Heakl",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "ahmedheakl",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/656864e12d73834278a8dea7/mmf9V_8rdsi9hN-QdFZV8.png",
"https://cdn-uploads.huggingface.co/production/uploads/656864e12d73834278a8dea7/uLq0E1qq75-P4P1KV4xWF.png",
"https://cdn-uploads.huggingface.co/production/uploads/656864e12d73834278a8dea7/1eixiKjHGNVm6RaJpdWeq.png",
"https://cdn-uploads.huggingface.co/production/uploads/656864e12d73834278a8dea7/XVJSPAgIQcQn8Zi4gUVwi.png"
] |
2502.00094
|
[
{
"_id": "67a185ab908f4534beb94b8c",
"hidden": false,
"name": "Ahmed Heakl",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T09:39:32.712Z",
"user": {
"_id": "656864e12d73834278a8dea7",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/656864e12d73834278a8dea7/sfAWS2eyPtFHb_2GZIypp.jpeg",
"fullname": "Ahmed Heakl",
"isPro": true,
"type": "user",
"user": "ahmedheakl"
}
},
{
"_id": "67a185ab908f4534beb94b8d",
"hidden": false,
"name": "Sara Ghaboura",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-06T16:06:32.048Z",
"user": {
"_id": "66d559103c5bc37ee0dfa61b",
"avatarUrl": "/avatars/8310fc0b01e6d8873aec37ba9ef27c5b.svg",
"fullname": "SaraG",
"isPro": false,
"type": "user",
"user": "SLMLAH"
}
},
{
"_id": "67a185ab908f4534beb94b8e",
"hidden": false,
"name": "Omkar Thawkar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a185ab908f4534beb94b8f",
"hidden": false,
"name": "Fahad Shahbaz Khan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a185ab908f4534beb94b90",
"hidden": false,
"name": "Hisham Cholakkal",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:36:17.368Z",
"user": {
"_id": "654a5f4f9b8bd6406d45bb46",
"avatarUrl": "/avatars/ac0d7eef62cd98a280b162cf7896b1a2.svg",
"fullname": "Hisham Cholakkal",
"isPro": false,
"type": "user",
"user": "hishamcholakkal"
}
},
{
"_id": "67a185ab908f4534beb94b91",
"hidden": false,
"name": "Rao Muhammad Anwer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a185ab908f4534beb94b92",
"hidden": false,
"name": "Salman Khan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-04T17:36:30.190Z",
"user": {
"_id": "65337cfbbadc49780755d1d1",
"avatarUrl": "/avatars/527f456a6a95e0b3a143be130b9b9258.svg",
"fullname": "Salman Khan",
"isPro": false,
"type": "user",
"user": "SalmanKhan"
}
}
] | 2025-01-31T18:58:20 |
AIN: The Arabic INclusive Large Multimodal Model
|
Amid the swift progress of large language models (LLMs) and their evolution
into large multimodal models (LMMs), significant strides have been made in
high-resource languages such as English and Chinese. While Arabic LLMs have
seen notable progress, Arabic LMMs remain largely unexplored, often narrowly
focusing on a few specific aspects of the language and visual understanding. To
bridge this gap, we introduce AIN-the Arabic Inclusive Multimodal
Model-designed to excel across diverse domains. AIN is an English-Arabic
bilingual LMM designed to excel in English and Arabic, leveraging carefully
constructed 3.6 million high-quality Arabic-English multimodal data samples.
AIN demonstrates state-of-the-art Arabic performance, while also possessing
strong English-language visual capabilities. On the recent CAMEL-Bench
benchmark comprising 38 sub-domains including, multi-image understanding,
complex visual perception, handwritten document understanding, video
understanding, medical imaging, plant diseases, and remote sensing-based land
use understanding, our AIN demonstrates strong performance with the 7B model
outperforming GPT-4o by an absolute gain of 3.4% averaged over eight domains
and 38 sub-domains. AIN's superior capabilities position it as a significant
step toward empowering Arabic speakers with advanced multimodal generative AI
tools across diverse applications.
| 17 |
67a185b0908f4534beb94c49
| null | null |
|
2025-02-03T21:13:03.001000 |
ChunkKV: Semantic-Preserving KV Cache Compression for Efficient Long-Context LLM Inference
| 2 |
{
"_id": "63024676056ec3a2a8714b24",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1661093436322-noauth.jpeg",
"followerCount": 5,
"fullname": "Xiang Liu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Dominic789654",
"type": "user"
}
| true | null |
2502.00299
|
[
{
"_id": "67a1779d5f583199ce7921ad",
"hidden": false,
"name": "Xiang Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T14:48:39.990Z",
"user": {
"_id": "63024676056ec3a2a8714b24",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1661093436322-noauth.jpeg",
"fullname": "Xiang Liu",
"isPro": false,
"type": "user",
"user": "Dominic789654"
}
},
{
"_id": "67a1779d5f583199ce7921ae",
"hidden": false,
"name": "Zhenheng Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1779d5f583199ce7921af",
"hidden": false,
"name": "Peijie Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1779d5f583199ce7921b0",
"hidden": false,
"name": "Zeyu Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1779d5f583199ce7921b1",
"hidden": false,
"name": "Bo Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1779d5f583199ce7921b2",
"hidden": false,
"name": "Xuming Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a1779d5f583199ce7921b3",
"hidden": false,
"name": "Xiaowen Chu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-02-01T03:49:47 |
ChunkKV: Semantic-Preserving KV Cache Compression for Efficient
Long-Context LLM Inference
|
To reduce memory costs in long-context inference with Large Language Models
(LLMs), many recent works focus on compressing the key-value (KV) cache of
different tokens. However, we identify that the previous KV cache compression
methods measure token importance individually, neglecting the dependency
between different tokens in the real-world language characterics. In light of
this, we introduce ChunkKV, grouping the tokens in a chunk as a basic
compressing unit, and retaining the most informative semantic chunks while
discarding the less important ones. Furthermore, observing that ChunkKV
exhibits higher similarity in the preserved indices across different layers, we
propose layer-wise index reuse to further reduce computational overhead. We
evaluated ChunkKV on cutting-edge long-context benchmarks including LongBench
and Needle-In-A-HayStack, as well as the GSM8K and JailbreakV in-context
learning benchmark. Our experiments with instruction tuning and multi-step
reasoning (O1 and R1) LLMs, achieve up to 10\% performance improvement under
aggressive compression ratios compared to existing methods.
| 3 |
67a1779e5f583199ce7921db
| null | null |
|
2025-02-03T15:09:16.653000 |
SAeUron: Interpretable Concept Unlearning in Diffusion Models with Sparse Autoencoders
| 2 |
{
"_id": "6422f416a73327caad9d1d86",
"avatarUrl": "/avatars/aa3639277cd1732504402fc64a57eff8.svg",
"followerCount": null,
"fullname": "Bartosz Cywiński",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "bcywinski",
"type": "user"
}
| true | null |
2501.18052
|
[
{
"_id": "67a07bbb8e344720ae1a6008",
"hidden": false,
"name": "Bartosz Cywiński",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-03T11:07:48.609Z",
"user": {
"_id": "6422f416a73327caad9d1d86",
"avatarUrl": "/avatars/aa3639277cd1732504402fc64a57eff8.svg",
"fullname": "Bartosz Cywiński",
"isPro": false,
"type": "user",
"user": "bcywinski"
}
},
{
"_id": "67a07bbb8e344720ae1a6009",
"hidden": false,
"name": "Kamil Deja",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-29T23:29:47 |
SAeUron: Interpretable Concept Unlearning in Diffusion Models with
Sparse Autoencoders
|
Diffusion models, while powerful, can inadvertently generate harmful or
undesirable content, raising significant ethical and safety concerns. Recent
machine unlearning approaches offer potential solutions but often lack
transparency, making it difficult to understand the changes they introduce to
the base model. In this work, we introduce SAeUron, a novel method leveraging
features learned by sparse autoencoders (SAEs) to remove unwanted concepts in
text-to-image diffusion models. First, we demonstrate that SAEs, trained in an
unsupervised manner on activations from multiple denoising timesteps of the
diffusion model, capture sparse and interpretable features corresponding to
specific concepts. Building on this, we propose a feature selection method that
enables precise interventions on model activations to block targeted content
while preserving overall performance. Evaluation with the competitive
UnlearnCanvas benchmark on object and style unlearning highlights SAeUron's
state-of-the-art performance. Moreover, we show that with a single SAE, we can
remove multiple concepts simultaneously and that in contrast to other methods,
SAeUron mitigates the possibility of generating unwanted content, even under
adversarial attack. Code and checkpoints are available at:
https://github.com/cywinski/SAeUron.
| 6 |
67a07bc08e344720ae1a60e9
| null | null |
|
2025-02-03T13:32:45.792000 |
Zero-Shot Novel View and Depth Synthesis with Multi-View Geometric Diffusion
| 2 |
{
"_id": "62e458d33051028b542be2a0",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1659131510466-noauth.jpeg",
"followerCount": 2,
"fullname": "Zubair Irshad",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "mirshad7",
"type": "user"
}
| false | null |
2501.18804
|
[
{
"_id": "67a10b8e83c3565727b0cd68",
"hidden": false,
"name": "Vitor Guizilini",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a10b8e83c3565727b0cd69",
"hidden": false,
"name": "Muhammad Zubair Irshad",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a10b8e83c3565727b0cd6a",
"hidden": false,
"name": "Dian Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a10b8e83c3565727b0cd6b",
"hidden": false,
"name": "Greg Shakhnarovich",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a10b8e83c3565727b0cd6c",
"hidden": false,
"name": "Rares Ambrus",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-30T23:43:06 |
Zero-Shot Novel View and Depth Synthesis with Multi-View Geometric
Diffusion
|
Current methods for 3D scene reconstruction from sparse posed images employ
intermediate 3D representations such as neural fields, voxel grids, or 3D
Gaussians, to achieve multi-view consistent scene appearance and geometry. In
this paper we introduce MVGD, a diffusion-based architecture capable of direct
pixel-level generation of images and depth maps from novel viewpoints, given an
arbitrary number of input views. Our method uses raymap conditioning to both
augment visual features with spatial information from different viewpoints, as
well as to guide the generation of images and depth maps from novel views. A
key aspect of our approach is the multi-task generation of images and depth
maps, using learnable task embeddings to guide the diffusion process towards
specific modalities. We train this model on a collection of more than 60
million multi-view samples from publicly available datasets, and propose
techniques to enable efficient and consistent learning in such diverse
conditions. We also propose a novel strategy that enables the efficient
training of larger models by incrementally fine-tuning smaller ones, with
promising scaling behavior. Through extensive experiments, we report
state-of-the-art results in multiple novel view synthesis benchmarks, as well
as multi-view stereo and video depth estimation.
| 5 |
67a10b9183c3565727b0cdef
| null | null |
|
2025-02-03T13:15:59.743000 |
MatAnyone: Stable Video Matting with Consistent Memory Propagation
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.14677
|
[
{
"_id": "679d5057ca02e3270aaada16",
"hidden": false,
"name": "Peiqing Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T16:54:33.485Z",
"user": {
"_id": "6513aae6330c55fdc5462ca8",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/EDhpiTqCBMNPmMGrOKcvY.jpeg",
"fullname": "pq-yang",
"isPro": false,
"type": "user",
"user": "PeiqingYang"
}
},
{
"_id": "679d5057ca02e3270aaada17",
"hidden": false,
"name": "Shangchen Zhou",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T16:54:35.247Z",
"user": {
"_id": "62e57662ae9d3f10acbb1b1b",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62e57662ae9d3f10acbb1b1b/lg58jdbNyv6LGH2LFnZDF.png",
"fullname": "Shangchen Zhou",
"isPro": false,
"type": "user",
"user": "sczhou"
}
},
{
"_id": "679d5057ca02e3270aaada18",
"hidden": false,
"name": "Jixin Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d5057ca02e3270aaada19",
"hidden": false,
"name": "Qingyi Tao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d5057ca02e3270aaada1a",
"hidden": false,
"name": "Chen Change Loy",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-24T17:56:24 |
MatAnyone: Stable Video Matting with Consistent Memory Propagation
|
Auxiliary-free human video matting methods, which rely solely on input
frames, often struggle with complex or ambiguous backgrounds. To address this,
we propose MatAnyone, a robust framework tailored for target-assigned video
matting. Specifically, building on a memory-based paradigm, we introduce a
consistent memory propagation module via region-adaptive memory fusion, which
adaptively integrates memory from the previous frame. This ensures semantic
stability in core regions while preserving fine-grained details along object
boundaries. For robust training, we present a larger, high-quality, and diverse
dataset for video matting. Additionally, we incorporate a novel training
strategy that efficiently leverages large-scale segmentation data, boosting
matting stability. With this new network design, dataset, and training
strategy, MatAnyone delivers robust and accurate video matting results in
diverse real-world scenarios, outperforming existing methods.
| 31 |
679d505cca02e3270aaadaf6
| null | null |
|
2025-02-03T13:01:15.923000 |
Scalable-Softmax Is Superior for Attention
| 3 |
{
"_id": "60eeedbf50b60c406afc1291",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1649111275459-60eeedbf50b60c406afc1291.png",
"followerCount": 2,
"fullname": "Samuel Arcadinho",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "SSamDav",
"type": "user"
}
| false | null |
2501.19399
|
[
{
"_id": "67a0e9707ddf31accd7b2510",
"hidden": false,
"name": "Ken M. Nakanishi",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-31T18:55:35 |
Scalable-Softmax Is Superior for Attention
|
The maximum element of the vector output by the Softmax function approaches
zero as the input vector size increases. Transformer-based language models rely
on Softmax to compute attention scores, causing the attention distribution to
flatten as the context size grows. This reduces the model's ability to
prioritize key information effectively and potentially limits its length
generalization. To address this problem, we propose Scalable-Softmax (SSMax),
which replaces Softmax in scenarios where the input vector size varies. SSMax
can be seamlessly integrated into existing Transformer-based architectures.
Experimental results in language modeling show that models using SSMax not only
achieve faster loss reduction during pretraining but also significantly improve
performance in long contexts and key information retrieval. Furthermore, an
analysis of attention scores reveals that SSMax enables the model to focus
attention on key information even in long contexts. Additionally, although
models that use SSMax from the beginning of pretraining achieve better length
generalization, those that have already started pretraining can still gain some
of this ability by replacing Softmax in the attention layers with SSMax, either
during or after pretraining.
| 21 |
67a0e9707ddf31accd7b254a
| null | null |
|
2025-02-03T10:59:24.249000 |
The Surprising Agreement Between Convex Optimization Theory and Learning-Rate Scheduling for Large Model Training
| 3 |
{
"_id": "64a2b68da0696e0a29739349",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64a2b68da0696e0a29739349/wUtx3yCcSiN7SZP4nMSPK.png",
"followerCount": 3,
"fullname": "Fabian S",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "fabian-sp",
"type": "user"
}
| false | null |
2501.18965
|
[
{
"_id": "67a0e794042d0e5936db83cf",
"hidden": false,
"name": "Fabian Schaipp",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-03T16:00:56.014Z",
"user": {
"_id": "64a2b68da0696e0a29739349",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64a2b68da0696e0a29739349/wUtx3yCcSiN7SZP4nMSPK.png",
"fullname": "Fabian S",
"isPro": false,
"type": "user",
"user": "fabian-sp"
}
},
{
"_id": "67a0e794042d0e5936db83d0",
"hidden": false,
"name": "Alexander Hägele",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T16:01:03.085Z",
"user": {
"_id": "65785d22dddc2360b01702e1",
"avatarUrl": "/avatars/8e3ddf25b9c423f57484fddef4f0aafd.svg",
"fullname": "Alexander Hägele",
"isPro": false,
"type": "user",
"user": "haeggee"
}
},
{
"_id": "67a0e794042d0e5936db83d1",
"hidden": false,
"name": "Adrien Taylor",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a0e794042d0e5936db83d2",
"hidden": false,
"name": "Umut Simsekli",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a0e794042d0e5936db83d3",
"hidden": false,
"name": "Francis Bach",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-31T08:55:56 |
The Surprising Agreement Between Convex Optimization Theory and
Learning-Rate Scheduling for Large Model Training
|
We show that learning-rate schedules for large model training behave
surprisingly similar to a performance bound from non-smooth convex optimization
theory. We provide a bound for the constant schedule with linear cooldown; in
particular, the practical benefit of cooldown is reflected in the bound due to
the absence of logarithmic terms. Further, we show that this surprisingly close
match between optimization theory and practice can be exploited for
learning-rate tuning: we achieve noticeable improvements for training 124M and
210M Llama-type models by (i) extending the schedule for continued training
with optimal learning-rate, and (ii) transferring the optimal learning-rate
across schedules.
| 7 |
67a0e79e042d0e5936db858d
| null | null |
|
2025-02-03T10:59:18.508000 |
PixelWorld: Towards Perceiving Everything as Pixels
| 2 |
{
"_id": "6313a86154e6e5d9f0f94e04",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1662232951344-6313a86154e6e5d9f0f94e04.jpeg",
"followerCount": 33,
"fullname": "Wenhu Chen",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "wenhu",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/6313a86154e6e5d9f0f94e04/NnyW-XW-vW8IqQdK1pG5e.png"
] |
2501.19339
|
[
{
"_id": "67a044d1af1b65169565354c",
"hidden": false,
"name": "Zhiheng Lyu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a044d1af1b65169565354d",
"hidden": false,
"name": "Xueguang Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a044d1af1b65169565354e",
"hidden": false,
"name": "Wenhu Chen",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-03T04:23:47.571Z",
"user": {
"_id": "6313a86154e6e5d9f0f94e04",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1662232951344-6313a86154e6e5d9f0f94e04.jpeg",
"fullname": "Wenhu Chen",
"isPro": false,
"type": "user",
"user": "wenhu"
}
}
] | 2025-01-31T17:39:21 |
PixelWorld: Towards Perceiving Everything as Pixels
|
Existing foundation models typically process visual input as pixels and
textual input as tokens, a paradigm that contrasts with human perception, where
both modalities are processed in a unified manner. With the rise of embodied
and agentic AI, where inputs primarily come from camera pixels, the need for a
unified perception framework becomes increasingly evident. In this paper, we
propose to unify all modalities (text, tables, code, diagrams, images, etc) as
pixel inputs, i.e. "Perceive Everything as Pixels" (PEAP). We introduce
PixelWorld, a novel evaluation suite that unifies all the mentioned modalities
into pixel space to gauge the existing models' performance. Our findings show
that (1) PEAP outperforms baseline with token-based input in multimodal
datasets, benefiting from unified input for better disambiguation, (2)
significant declines in reasoning and coding capabilities across all models
when processing pixel-based input, underscoring the need to enhance foundation
models' perceptual abilities, (3) larger models can maintain strong performance
on non-reasoning tasks under PEAP, while smaller models like Phi-3.5-V suffer
significant performance degradation, (4) the attention pattern of PEAP is
highly aligned with text token input, (5) PEAP can be accelerated significantly
by exploiting the spatial sparsity. We conclude that the existing frontier
models are competent in pixel perception, however, there is still headroom for
improvement. Our code, dataset will be released upon acceptance.
| 17 |
67a044d3af1b6516956535b6
| null | null |
|
2025-02-03T06:06:33.957000 |
Self-supervised Quantized Representation for Seamlessly Integrating Knowledge Graphs with Large Language Models
| 3 |
{
"_id": "66ac77011cfb12c087605acb",
"avatarUrl": "/avatars/54c06bd1c4c9d491470ed4162c2301ae.svg",
"followerCount": 5,
"fullname": "Lin",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Qika",
"type": "user"
}
| true | null |
2501.18119
|
[
{
"_id": "67a0a3201d9fadf4470cb07a",
"hidden": false,
"name": "Qika Lin",
"status": "extracted_pending",
"statusLastChangedAt": "2025-02-03T11:06:10.149Z",
"user": {
"_id": "66ac77011cfb12c087605acb",
"avatarUrl": "/avatars/54c06bd1c4c9d491470ed4162c2301ae.svg",
"fullname": "Lin",
"isPro": false,
"type": "user",
"user": "Qika"
}
},
{
"_id": "67a0a3201d9fadf4470cb07b",
"hidden": false,
"name": "Tianzhe Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a0a3201d9fadf4470cb07c",
"hidden": false,
"name": "Kai He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a0a3201d9fadf4470cb07d",
"hidden": false,
"name": "Zhen Peng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a0a3201d9fadf4470cb07e",
"hidden": false,
"name": "Fangzhi Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a0a3201d9fadf4470cb07f",
"hidden": false,
"name": "Ling Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a0a3201d9fadf4470cb080",
"hidden": false,
"name": "Jingying Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a0a3201d9fadf4470cb081",
"hidden": false,
"name": "Mengling Feng",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-30T03:40:20 |
Self-supervised Quantized Representation for Seamlessly Integrating
Knowledge Graphs with Large Language Models
|
Due to the presence of the natural gap between Knowledge Graph (KG)
structures and the natural language, the effective integration of holistic
structural information of KGs with Large Language Models (LLMs) has emerged as
a significant question. To this end, we propose a two-stage framework to learn
and apply quantized codes for each entity, aiming for the seamless integration
of KGs with LLMs. Firstly, a self-supervised quantized representation (SSQR)
method is proposed to compress both KG structural and semantic knowledge into
discrete codes (\ie, tokens) that align the format of language sentences. We
further design KG instruction-following data by viewing these learned codes as
features to directly input to LLMs, thereby achieving seamless integration. The
experiment results demonstrate that SSQR outperforms existing unsupervised
quantized methods, producing more distinguishable codes. Further, the
fine-tuned LLaMA2 and LLaMA3.1 also have superior performance on KG link
prediction and triple classification tasks, utilizing only 16 tokens per entity
instead of thousands in conventional prompting methods.
| 25 |
67a0a3221d9fadf4470cb0f8
| null | null |
|
2025-02-03T05:59:36.106000 |
INT: Instance-Specific Negative Mining for Task-Generic Promptable Segmentation
| 2 |
{
"_id": "65e1b6e9501590df0173cbd3",
"avatarUrl": "/avatars/a73e2139700e23eff455734c99cef5ba.svg",
"followerCount": null,
"fullname": "Jian Hu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "lwpyh",
"type": "user"
}
| true | null |
2501.18753
|
[
{
"_id": "67a0a09da2d6613d77a7d10e",
"hidden": false,
"name": "Jian Hu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T12:59:15.739Z",
"user": {
"_id": "65e1b6e9501590df0173cbd3",
"avatarUrl": "/avatars/a73e2139700e23eff455734c99cef5ba.svg",
"fullname": "Jian Hu",
"isPro": false,
"type": "user",
"user": "lwpyh"
}
},
{
"_id": "67a0a09da2d6613d77a7d10f",
"hidden": false,
"name": "Zixu Cheng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T12:59:22.704Z",
"user": {
"_id": "667ee096b0fad0fdee319ed4",
"avatarUrl": "/avatars/d9df687e8522d47f7fcefe40fd9b575b.svg",
"fullname": "Zixu Cheng",
"isPro": false,
"type": "user",
"user": "Cade921"
}
},
{
"_id": "67a0a09da2d6613d77a7d110",
"hidden": false,
"name": "Shaogang Gong",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-30T21:07:14 |
INT: Instance-Specific Negative Mining for Task-Generic Promptable
Segmentation
|
Task-generic promptable image segmentation aims to achieve segmentation of
diverse samples under a single task description by utilizing only one
task-generic prompt. Current methods leverage the generalization capabilities
of Vision-Language Models (VLMs) to infer instance-specific prompts from these
task-generic prompts in order to guide the segmentation process. However, when
VLMs struggle to generalise to some image instances, predicting
instance-specific prompts becomes poor. To solve this problem, we introduce
Instance-specific Negative Mining for Task-Generic
Promptable Segmentation (INT). The key idea of INT is to adaptively
reduce the influence of irrelevant (negative) prior knowledge whilst to
increase the use the most plausible prior knowledge, selected by negative
mining with higher contrast, in order to optimise instance-specific prompts
generation. Specifically, INT consists of two components: (1) instance-specific
prompt generation, which progressively fliters out incorrect information in
prompt generation; (2) semantic mask generation, which ensures each image
instance segmentation matches correctly the semantics of the instance-specific
prompts. INT is validated on six datasets, including camouflaged objects and
medical images, demonstrating its effectiveness, robustness and scalability.
| 3 |
67a0a09fa2d6613d77a7d174
| null | null |
|
2025-02-03T04:01:13.509000 |
Unraveling the Capabilities of Language Models in News Summarization
| 3 |
{
"_id": "647d79a736e109abce419102",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/647d79a736e109abce419102/S8Hby6eO4WdPQrct0Ix3c.png",
"followerCount": 1,
"fullname": "Abdurrahman Odabaşı",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "odabashi",
"type": "user"
}
| true | null |
2501.18128
|
[
{
"_id": "679e04b792d873dfa23d0ba6",
"hidden": false,
"name": "Abdurrahman Odabaşı",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-03T08:14:51.873Z",
"user": {
"_id": "647d79a736e109abce419102",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/647d79a736e109abce419102/S8Hby6eO4WdPQrct0Ix3c.png",
"fullname": "Abdurrahman Odabaşı",
"isPro": false,
"type": "user",
"user": "odabashi"
}
},
{
"_id": "679e04b792d873dfa23d0ba7",
"hidden": false,
"name": "Göksel Biricik",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-30T04:20:16 |
Unraveling the Capabilities of Language Models in News Summarization
|
Given the recent introduction of multiple language models and the ongoing
demand for improved Natural Language Processing tasks, particularly
summarization, this work provides a comprehensive benchmarking of 20 recent
language models, focusing on smaller ones for the news summarization task. In
this work, we systematically test the capabilities and effectiveness of these
models in summarizing news article texts which are written in different styles
and presented in three distinct datasets. Specifically, we focus in this study
on zero-shot and few-shot learning settings and we apply a robust evaluation
methodology that combines different evaluation concepts including automatic
metrics, human evaluation, and LLM-as-a-judge. Interestingly, including
demonstration examples in the few-shot learning setting did not enhance models'
performance and, in some cases, even led to worse quality of the generated
summaries. This issue arises mainly due to the poor quality of the gold
summaries that have been used as reference summaries, which negatively impacts
the models' performance. Furthermore, our study's results highlight the
exceptional performance of GPT-3.5-Turbo and GPT-4, which generally dominate
due to their advanced capabilities. However, among the public models evaluated,
certain models such as Qwen1.5-7B, SOLAR-10.7B-Instruct-v1.0, Meta-Llama-3-8B
and Zephyr-7B-Beta demonstrated promising results. These models showed
significant potential, positioning them as competitive alternatives to large
models for the task of news summarization.
| 4 |
679e04b892d873dfa23d0bd3
| null | null |
|
2025-02-03T03:12:19.292000 |
Fast Encoder-Based 3D from Casual Videos via Point Track Processing
| 2 |
{
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
}
| false | null |
2404.07097
|
[
{
"_id": "67a07a4b605a6c919dea84ec",
"hidden": false,
"name": "Yoni Kasten",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T12:59:44.973Z",
"user": {
"_id": "642294c112b4b51aae368d30",
"avatarUrl": "/avatars/017c19910ba01d6bd9cd864132652448.svg",
"fullname": "Yoni Kasten",
"isPro": false,
"type": "user",
"user": "ykasten"
}
},
{
"_id": "67a07a4b605a6c919dea84ed",
"hidden": false,
"name": "Wuyue Lu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T12:59:51.676Z",
"user": {
"_id": "65fbd280b0068def429d426f",
"avatarUrl": "/avatars/1c9caccbb08ce3c9fa3bd60fecab10b5.svg",
"fullname": "Wuyue Lu",
"isPro": false,
"type": "user",
"user": "Woo-wy"
}
},
{
"_id": "67a07a4b605a6c919dea84ee",
"hidden": false,
"name": "Haggai Maron",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2024-04-10T15:37:00 |
Fast Encoder-Based 3D from Casual Videos via Point Track Processing
|
This paper addresses the long-standing challenge of reconstructing 3D
structures from videos with dynamic content. Current approaches to this problem
were not designed to operate on casual videos recorded by standard cameras or
require a long optimization time.
Aiming to significantly improve the efficiency of previous approaches, we
present TracksTo4D, a learning-based approach that enables inferring 3D
structure and camera positions from dynamic content originating from casual
videos using a single efficient feed-forward pass. To achieve this, we propose
operating directly over 2D point tracks as input and designing an architecture
tailored for processing 2D point tracks. Our proposed architecture is designed
with two key principles in mind: (1) it takes into account the inherent
symmetries present in the input point tracks data, and (2) it assumes that the
movement patterns can be effectively represented using a low-rank
approximation. TracksTo4D is trained in an unsupervised way on a dataset of
casual videos utilizing only the 2D point tracks extracted from the videos,
without any 3D supervision. Our experiments show that TracksTo4D can
reconstruct a temporal point cloud and camera positions of the underlying video
with accuracy comparable to state-of-the-art methods, while drastically
reducing runtime by up to 95\%. We further show that TracksTo4D generalizes
well to unseen videos of unseen semantic categories at inference time.
| 4 |
67a07a4d605a6c919dea8555
| null | null |
|
2025-02-03T03:10:08.761000 |
DINO-WM: World Models on Pre-trained Visual Features enable Zero-shot Planning
| 2 |
{
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
}
| false | null |
2411.04983
|
[
{
"_id": "67a0783a1b24595484396c4d",
"hidden": false,
"name": "Gaoyue Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T12:58:14.740Z",
"user": {
"_id": "63486560e0bf88ccd36fe568",
"avatarUrl": "/avatars/934cffbd9f5c699abad20dcf86745382.svg",
"fullname": "Gaoyue Zhou",
"isPro": false,
"type": "user",
"user": "gaoyuezhou"
}
},
{
"_id": "67a0783a1b24595484396c4e",
"hidden": false,
"name": "Hengkai Pan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T12:58:20.238Z",
"user": {
"_id": "634236fc8d8089ebaefb8180",
"avatarUrl": "/avatars/e40518ff3f0d0a58ba4f46048c84640d.svg",
"fullname": "Hengkai Pan",
"isPro": false,
"type": "user",
"user": "garyphk"
}
},
{
"_id": "67a0783a1b24595484396c4f",
"hidden": false,
"name": "Yann LeCun",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T12:58:26.526Z",
"user": {
"_id": "64ed0b8c2203a126eb1a5b9a",
"avatarUrl": "/avatars/9156dc406ed3f9ee62b73657ac20f5ed.svg",
"fullname": "Yann LeCun",
"isPro": false,
"type": "user",
"user": "ylecun"
}
},
{
"_id": "67a0783a1b24595484396c50",
"hidden": false,
"name": "Lerrel Pinto",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T12:58:41.008Z",
"user": {
"_id": "66fffa0766bc54f4e532e3d2",
"avatarUrl": "/avatars/9db8b2183097bcaddded06d1b800cf77.svg",
"fullname": "Lerrel Pinto",
"isPro": false,
"type": "user",
"user": "LerrelPinto"
}
}
] | 2024-11-07T18:54:37 |
DINO-WM: World Models on Pre-trained Visual Features enable Zero-shot
Planning
|
The ability to predict future outcomes given control actions is fundamental
for physical reasoning. However, such predictive models, often called world
models, have proven challenging to learn and are typically developed for
task-specific solutions with online policy learning. We argue that the true
potential of world models lies in their ability to reason and plan across
diverse problems using only passive data. Concretely, we require world models
to have the following three properties: 1) be trainable on offline,
pre-collected trajectories, 2) support test-time behavior optimization, and 3)
facilitate task-agnostic reasoning. To realize this, we present DINO World
Model (DINO-WM), a new method to model visual dynamics without reconstructing
the visual world. DINO-WM leverages spatial patch features pre-trained with
DINOv2, enabling it to learn from offline behavioral trajectories by predicting
future patch features. This design allows DINO-WM to achieve observational
goals through action sequence optimization, facilitating task-agnostic behavior
planning by treating desired goal patch features as prediction targets. We
evaluate DINO-WM across various domains, including maze navigation, tabletop
pushing, and particle manipulation. Our experiments demonstrate that DINO-WM
can generate zero-shot behavioral solutions at test time without relying on
expert demonstrations, reward modeling, or pre-learned inverse models. Notably,
DINO-WM exhibits strong generalization capabilities compared to prior
state-of-the-art work, adapting to diverse task families such as arbitrarily
configured mazes, push manipulation with varied object shapes, and
multi-particle scenarios.
| 12 |
67a0783d1b24595484396cca
| null | null |
|
2025-02-03T00:05:21.087000 |
Constitutional Classifiers: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming
| 5 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.18837
|
[
{
"_id": "67a04e7ab6fd93f91c65457b",
"hidden": false,
"name": "Mrinank Sharma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c65457c",
"hidden": false,
"name": "Meg Tong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:12:10.128Z",
"user": {
"_id": "63272a638624baac667c8bdb",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63272a638624baac667c8bdb/ylZ-FNT9PLhn8sBCD1wQm.png",
"fullname": "Meg Tong",
"isPro": false,
"type": "user",
"user": "meg-tong"
}
},
{
"_id": "67a04e7ab6fd93f91c65457d",
"hidden": false,
"name": "Jesse Mu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:12:02.638Z",
"user": {
"_id": "62301010174feb5439c42e23",
"avatarUrl": "/avatars/cd3c8a97823e3cbc176fef245113624f.svg",
"fullname": "Jesse Mu",
"isPro": false,
"type": "user",
"user": "jayelm"
}
},
{
"_id": "67a04e7ab6fd93f91c65457e",
"hidden": false,
"name": "Jerry Wei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c65457f",
"hidden": false,
"name": "Jorrit Kruthoff",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c654580",
"hidden": false,
"name": "Scott Goodfriend",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:11:51.464Z",
"user": {
"_id": "60ef38cdd36c6e3f5e270b5c",
"avatarUrl": "/avatars/a6c89092322364f35eb6051178f3fbcc.svg",
"fullname": "Scott Goodfriend",
"isPro": false,
"type": "user",
"user": "sgoodfriend"
}
},
{
"_id": "67a04e7ab6fd93f91c654581",
"hidden": false,
"name": "Euan Ong",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-03T11:07:50.780Z",
"user": {
"_id": "64f2218f0d19f5ae05f6a807",
"avatarUrl": "/avatars/855d9f57b075855418e2db33a110ffed.svg",
"fullname": "Euan Ong",
"isPro": false,
"type": "user",
"user": "euanong"
}
},
{
"_id": "67a04e7ab6fd93f91c654582",
"hidden": false,
"name": "Alwin Peng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:12:18.423Z",
"user": {
"_id": "660ed54c6923ed21e630820d",
"avatarUrl": "/avatars/5c613fbff6d4d36eeaeae92296c88d2c.svg",
"fullname": "Alwin Peng",
"isPro": false,
"type": "user",
"user": "Primusa"
}
},
{
"_id": "67a04e7ab6fd93f91c654583",
"hidden": false,
"name": "Raj Agarwal",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:12:26.895Z",
"user": {
"_id": "676158c7bedb5ba8dd41cad5",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/yLEBoauB-32KlAi7arfAE.png",
"fullname": "Raj Agarwal",
"isPro": false,
"type": "user",
"user": "Raj32123"
}
},
{
"_id": "67a04e7ab6fd93f91c654584",
"hidden": false,
"name": "Cem Anil",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:12:35.275Z",
"user": {
"_id": "62f6b27d05ca68c0e0008549",
"avatarUrl": "/avatars/14d0075aa1b578cd7ee5f9e68d12e2f0.svg",
"fullname": "Cem Anil",
"isPro": false,
"type": "user",
"user": "anilcem"
}
},
{
"_id": "67a04e7ab6fd93f91c654585",
"hidden": false,
"name": "Amanda Askell",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:12:44.345Z",
"user": {
"_id": "6764dc0c7fa1ef387f891893",
"avatarUrl": "/avatars/eb060a92b65877ae90c1106cfa7c4314.svg",
"fullname": "Amanda askell",
"isPro": false,
"type": "user",
"user": "askeii"
}
},
{
"_id": "67a04e7ab6fd93f91c654586",
"hidden": false,
"name": "Nathan Bailey",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:12:50.549Z",
"user": {
"_id": "665991b987aedd2a572042e1",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/665991b987aedd2a572042e1/nWbp1n_Ps_MPTmpWJcWoz.jpeg",
"fullname": "Nathan Bailey",
"isPro": false,
"type": "user",
"user": "nathanbaileyw"
}
},
{
"_id": "67a04e7ab6fd93f91c654587",
"hidden": false,
"name": "Joe Benton",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:12:58.193Z",
"user": {
"_id": "63124bb3f568fb0098f617c7",
"avatarUrl": "/avatars/6109b5c05452322246843b29b4662051.svg",
"fullname": "Joe Benton",
"isPro": false,
"type": "user",
"user": "JoeJBenton"
}
},
{
"_id": "67a04e7ab6fd93f91c654588",
"hidden": false,
"name": "Emma Bluemke",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c654589",
"hidden": false,
"name": "Samuel R. Bowman",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:13:11.959Z",
"user": {
"_id": "65b7ceed0e2951626572e25d",
"avatarUrl": "/avatars/5d6085ca4260d663f0ddbe632c9e746c.svg",
"fullname": "Samuel Bowman",
"isPro": false,
"type": "user",
"user": "samuelpbowman"
}
},
{
"_id": "67a04e7ab6fd93f91c65458a",
"hidden": false,
"name": "Eric Christiansen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:13:19.490Z",
"user": {
"_id": "64dd22e1c29ed0b051d1a5c4",
"avatarUrl": "/avatars/c8511325f6d8cb485382c0de40975b65.svg",
"fullname": "Eric Christiansen",
"isPro": false,
"type": "user",
"user": "emchristiansen"
}
},
{
"_id": "67a04e7ab6fd93f91c65458b",
"hidden": false,
"name": "Hoagy Cunningham",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:13:24.996Z",
"user": {
"_id": "636146e7472131c3bc538bd8",
"avatarUrl": "/avatars/9db880163cc0eea796165d8bf5e2a91f.svg",
"fullname": "Hoagy Cunningham",
"isPro": false,
"type": "user",
"user": "HoagyC"
}
},
{
"_id": "67a04e7ab6fd93f91c65458c",
"hidden": false,
"name": "Andy Dau",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:13:38.325Z",
"user": {
"_id": "656f669dec366e93ca16cf98",
"avatarUrl": "/avatars/84e266ede3fe45d24666e1d8e03dd94d.svg",
"fullname": "Andy Dau",
"isPro": false,
"type": "user",
"user": "atadau"
}
},
{
"_id": "67a04e7ab6fd93f91c65458d",
"hidden": false,
"name": "Anjali Gopal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c65458e",
"hidden": false,
"name": "Rob Gilson",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c65458f",
"hidden": false,
"name": "Logan Graham",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c654590",
"hidden": false,
"name": "Logan Howard",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c654591",
"hidden": false,
"name": "Nimit Kalra",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-03T08:14:42.317Z",
"user": {
"_id": "66fc4c692408eb3bdeba876f",
"avatarUrl": "/avatars/66ba18ccb95d150e66d7b6930d4eb938.svg",
"fullname": "Nimit Kalra",
"isPro": false,
"type": "user",
"user": "nimitkalra"
}
},
{
"_id": "67a04e7ab6fd93f91c654592",
"hidden": false,
"name": "Taesung Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c654593",
"hidden": false,
"name": "Kevin Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c654594",
"hidden": false,
"name": "Peter Lofgren",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c654595",
"hidden": false,
"name": "Francesco Mosconi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c654596",
"hidden": false,
"name": "Clare O'Hara",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c654597",
"hidden": false,
"name": "Catherine Olsson",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c654598",
"hidden": false,
"name": "Linda Petrini",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c654599",
"hidden": false,
"name": "Samir Rajani",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c65459a",
"hidden": false,
"name": "Nikhil Saxena",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c65459b",
"hidden": false,
"name": "Alex Silverstein",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c65459c",
"hidden": false,
"name": "Tanya Singh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c65459d",
"hidden": false,
"name": "Theodore Sumers",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c65459e",
"hidden": false,
"name": "Leonard Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c65459f",
"hidden": false,
"name": "Kevin K. Troy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c6545a0",
"hidden": false,
"name": "Constantin Weisser",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c6545a1",
"hidden": false,
"name": "Ruiqi Zhong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c6545a2",
"hidden": false,
"name": "Giulio Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c6545a3",
"hidden": false,
"name": "Jan Leike",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c6545a4",
"hidden": false,
"name": "Jared Kaplan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04e7ab6fd93f91c6545a5",
"hidden": false,
"name": "Ethan Perez",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-31T01:09:32 |
Constitutional Classifiers: Defending against Universal Jailbreaks
across Thousands of Hours of Red Teaming
|
Large language models (LLMs) are vulnerable to universal jailbreaks-prompting
strategies that systematically bypass model safeguards and enable users to
carry out harmful processes that require many model interactions, like
manufacturing illegal substances at scale. To defend against these attacks, we
introduce Constitutional Classifiers: safeguards trained on synthetic data,
generated by prompting LLMs with natural language rules (i.e., a constitution)
specifying permitted and restricted content. In over 3,000 estimated hours of
red teaming, no red teamer found a universal jailbreak that could extract
information from an early classifier-guarded LLM at a similar level of detail
to an unguarded model across most target queries. On automated evaluations,
enhanced classifiers demonstrated robust defense against held-out
domain-specific jailbreaks. These classifiers also maintain deployment
viability, with an absolute 0.38% increase in production-traffic refusals and a
23.7% inference overhead. Our work demonstrates that defending against
universal jailbreaks while maintaining practical deployment viability is
tractable.
| 10 |
67a04e7bb6fd93f91c6545bc
| null | null |
|
2025-02-02T23:10:16.068000 |
Reward-Guided Speculative Decoding for Efficient LLM Reasoning
| 4 |
{
"_id": "6602869253a0518b2a98cafd",
"avatarUrl": "/avatars/c14b5953a716f42c83ad28147f8308ae.svg",
"followerCount": 2,
"fullname": "Yuhui Xu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "yuhuixu",
"type": "user"
}
| true | null |
2501.19324
|
[
{
"_id": "67a04151dd7b3a4aba880589",
"hidden": false,
"name": "Baohao Liao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:10:48.379Z",
"user": {
"_id": "62c414354ce7250560a1f67f",
"avatarUrl": "/avatars/28fd73973d1703c84f4f59644fef8a80.svg",
"fullname": "Baohao Liao",
"isPro": false,
"type": "user",
"user": "baohao"
}
},
{
"_id": "67a04151dd7b3a4aba88058a",
"hidden": false,
"name": "Yuhui Xu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-03T11:07:52.570Z",
"user": {
"_id": "6602869253a0518b2a98cafd",
"avatarUrl": "/avatars/c14b5953a716f42c83ad28147f8308ae.svg",
"fullname": "Yuhui Xu",
"isPro": false,
"type": "user",
"user": "yuhuixu"
}
},
{
"_id": "67a04151dd7b3a4aba88058b",
"hidden": false,
"name": "Hanze Dong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:10:54.880Z",
"user": {
"_id": "63a3ff69f91ad3ea5703841d",
"avatarUrl": "/avatars/69227c4bce01d33747c1377b6f9672db.svg",
"fullname": "Hanze Dong",
"isPro": false,
"type": "user",
"user": "hendrydong"
}
},
{
"_id": "67a04151dd7b3a4aba88058c",
"hidden": false,
"name": "Junnan Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:11:08.170Z",
"user": {
"_id": "61f9d3b54ac99e8a1bae85f4",
"avatarUrl": "/avatars/ac47d13204dd22452e4bc46e280842d5.svg",
"fullname": "JunnanLi",
"isPro": false,
"type": "user",
"user": "JunnanLi"
}
},
{
"_id": "67a04151dd7b3a4aba88058d",
"hidden": false,
"name": "Christof Monz",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04151dd7b3a4aba88058e",
"hidden": false,
"name": "Silvio Savarese",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a04151dd7b3a4aba88058f",
"hidden": false,
"name": "Doyen Sahoo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:11:23.345Z",
"user": {
"_id": "65f84fd980481173afd91233",
"avatarUrl": "/avatars/6ac7bd6beba24d1476c5179b88c9e3fa.svg",
"fullname": "Doyen",
"isPro": false,
"type": "user",
"user": "doyensahoo"
}
},
{
"_id": "67a04151dd7b3a4aba880590",
"hidden": false,
"name": "Caiming Xiong",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:11:29.821Z",
"user": {
"_id": "649dbcc4e0fff1ed099dc80a",
"avatarUrl": "/avatars/c87c273ca628dbcddccbf1ee19b2ce33.svg",
"fullname": "Caiming Xiong",
"isPro": false,
"type": "user",
"user": "cxiong"
}
}
] | 2025-01-31T17:19:57 |
Reward-Guided Speculative Decoding for Efficient LLM Reasoning
|
We introduce Reward-Guided Speculative Decoding (RSD), a novel framework
aimed at improving the efficiency of inference in large language models (LLMs).
RSD synergistically combines a lightweight draft model with a more powerful
target model, incorporating a controlled bias to prioritize high-reward
outputs, in contrast to existing speculative decoding methods that enforce
strict unbiasedness. RSD employs a process reward model to evaluate
intermediate decoding steps and dynamically decide whether to invoke the target
model, optimizing the trade-off between computational cost and output quality.
We theoretically demonstrate that a threshold-based mixture strategy achieves
an optimal balance between resource utilization and performance. Extensive
evaluations on challenging reasoning benchmarks, including Olympiad-level
tasks, show that RSD delivers significant efficiency gains against decoding
with the target model only (up to 4.4x fewer FLOPs), while achieving
significant better accuracy than parallel decoding method on average (up to
+3.5). These results highlight RSD as a robust and cost-effective approach for
deploying LLMs in resource-intensive scenarios.
| 38 |
67a04152dd7b3a4aba8805c0
| null | null |
|
2025-02-02T21:45:49.841000 |
s1: Simple test-time scaling
| 11 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| true | null |
2501.19393
|
[
{
"_id": "67a02dd80e751b0476a1bcc6",
"hidden": false,
"name": "Niklas Muennighoff",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:07:41.451Z",
"user": {
"_id": "5f1eb362eec0ad2a071ad6e2",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/5f1eb362eec0ad2a071ad6e2/IXMYkYKuTwn6kBdWnQeeY.png",
"fullname": "Niklas Muennighoff",
"isPro": false,
"type": "user",
"user": "Muennighoff"
}
},
{
"_id": "67a02dd80e751b0476a1bcc7",
"hidden": false,
"name": "Zitong Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:09:05.373Z",
"user": {
"_id": "65a5b721f6cfc4b24a75732b",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/65a5b721f6cfc4b24a75732b/Hr2AZi3uC6nVl_x3er2Hn.png",
"fullname": "Zitong Yang",
"isPro": false,
"type": "user",
"user": "zitongyang"
}
},
{
"_id": "67a02dd80e751b0476a1bcc8",
"hidden": false,
"name": "Weijia Shi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:09:20.064Z",
"user": {
"_id": "6400f2ed568dbe30c9161e47",
"avatarUrl": "/avatars/c55938df5bce82b5d96e592a1ec36a8b.svg",
"fullname": "Weijia Shi",
"isPro": false,
"type": "user",
"user": "swj0419"
}
},
{
"_id": "67a02dd80e751b0476a1bcc9",
"hidden": false,
"name": "Xiang Lisa Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a02dd80e751b0476a1bcca",
"hidden": false,
"name": "Li Fei-Fei",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a02dd80e751b0476a1bccb",
"hidden": false,
"name": "Hannaneh Hajishirzi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a02dd80e751b0476a1bccc",
"hidden": false,
"name": "Luke Zettlemoyer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a02dd80e751b0476a1bccd",
"hidden": false,
"name": "Percy Liang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T11:10:17.521Z",
"user": {
"_id": "6409651b9e9f790c905b2335",
"avatarUrl": "/avatars/1fb8c80b60f21f65a0a027319101f236.svg",
"fullname": "Percy Liang",
"isPro": false,
"type": "user",
"user": "percyliang"
}
},
{
"_id": "67a02dd80e751b0476a1bcce",
"hidden": false,
"name": "Emmanuel Candès",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a02dd80e751b0476a1bccf",
"hidden": false,
"name": "Tatsunori Hashimoto",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-31T18:48:08 |
s1: Simple test-time scaling
|
Test-time scaling is a promising new approach to language modeling that uses
extra test-time compute to improve performance. Recently, OpenAI's o1 model
showed this capability but did not publicly share its methodology, leading to
many replication efforts. We seek the simplest approach to achieve test-time
scaling and strong reasoning performance. First, we curate a small dataset s1K
of 1,000 questions paired with reasoning traces relying on three criteria we
validate through ablations: difficulty, diversity, and quality. Second, we
develop budget forcing to control test-time compute by forcefully terminating
the model's thinking process or lengthening it by appending "Wait" multiple
times to the model's generation when it tries to end. This can lead the model
to double-check its answer, often fixing incorrect reasoning steps. After
supervised finetuning the Qwen2.5-32B-Instruct language model on s1K and
equipping it with budget forcing, our model s1 exceeds o1-preview on
competition math questions by up to 27% (MATH and AIME24). Further, scaling s1
with budget forcing allows extrapolating beyond its performance without
test-time intervention: from 50% to 57% on AIME24. Our model, data, and code
are open-source at https://github.com/simplescaling/s1.
| 108 |
67a02dd90e751b0476a1bd02
| null | null |
|
2025-02-02T21:40:11.158000 |
Trading Inference-Time Compute for Adversarial Robustness
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.18841
|
[
{
"_id": "67a02c75221b701e4c04da7f",
"hidden": false,
"name": "Wojciech Zaremba",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a02c75221b701e4c04da80",
"hidden": false,
"name": "Evgenia Nitishinskaya",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T12:56:51.252Z",
"user": {
"_id": "6792b8359967b7f195447e43",
"avatarUrl": "/avatars/0fcad468c7062d003902e78975daf6ea.svg",
"fullname": "Evgenia Nitishinskaya",
"isPro": false,
"type": "user",
"user": "gadzin1203"
}
},
{
"_id": "67a02c75221b701e4c04da81",
"hidden": false,
"name": "Boaz Barak",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a02c75221b701e4c04da82",
"hidden": false,
"name": "Stephanie Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a02c75221b701e4c04da83",
"hidden": false,
"name": "Sam Toyer",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T12:57:23.000Z",
"user": {
"_id": "62952a811e87ffbe5c06e3d4",
"avatarUrl": "/avatars/8140d3a6cbb0f85c284a1fd388915cb2.svg",
"fullname": "Sam Toyer",
"isPro": false,
"type": "user",
"user": "qxcv"
}
},
{
"_id": "67a02c75221b701e4c04da84",
"hidden": false,
"name": "Yaodong Yu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T12:57:28.373Z",
"user": {
"_id": "6100e69a393be1b5c4c83867",
"avatarUrl": "/avatars/1b87098cffb9c50345789808daea4f68.svg",
"fullname": "Yaodong Yu",
"isPro": false,
"type": "user",
"user": "yaodongyu"
}
},
{
"_id": "67a02c75221b701e4c04da85",
"hidden": false,
"name": "Rachel Dias",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a02c75221b701e4c04da86",
"hidden": false,
"name": "Eric Wallace",
"status": "admin_assigned",
"statusLastChangedAt": "2025-02-03T12:57:48.651Z",
"user": {
"_id": "63112d2431257261d20d5754",
"avatarUrl": "/avatars/502a68c0fb2f0c6989fe2869d0a7e3f4.svg",
"fullname": "Eric Wallace",
"isPro": false,
"type": "user",
"user": "EricWallace"
}
},
{
"_id": "67a02c75221b701e4c04da87",
"hidden": false,
"name": "Kai Xiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a02c75221b701e4c04da88",
"hidden": false,
"name": "Johannes Heidecke",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67a02c75221b701e4c04da89",
"hidden": false,
"name": "Amelia Glaese",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-31T01:20:44 |
Trading Inference-Time Compute for Adversarial Robustness
|
We conduct experiments on the impact of increasing inference-time compute in
reasoning models (specifically OpenAI o1-preview and o1-mini) on their
robustness to adversarial attacks. We find that across a variety of attacks,
increased inference-time compute leads to improved robustness. In many cases
(with important exceptions), the fraction of model samples where the attack
succeeds tends to zero as the amount of test-time compute grows. We perform no
adversarial training for the tasks we study, and we increase inference-time
compute by simply allowing the models to spend more compute on reasoning,
independently of the form of attack. Our results suggest that inference-time
compute has the potential to improve adversarial robustness for Large Language
Models. We also explore new attacks directed at reasoning models, as well as
settings where inference-time compute does not improve reliability, and
speculate on the reasons for these as well as ways to address them.
| 4 |
67a02c76221b701e4c04daf5
| null | null |
|
2025-01-31T20:58:14.538000 |
SANA 1.5: Efficient Scaling of Training-Time and Inference-Time Compute in Linear Diffusion Transformer
| 2 |
{
"_id": "64638bd36c27a7e33b26654b",
"avatarUrl": "/avatars/2ef5aeb94ef7016082975b4cc201873e.svg",
"followerCount": 0,
"fullname": "Yuyang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Yuyang-z",
"type": "user"
}
| false | null |
2501.18427
|
[
{
"_id": "679d7fae1f4b90cfa7b74d0b",
"hidden": false,
"name": "Enze Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d7fae1f4b90cfa7b74d0c",
"hidden": false,
"name": "Junsong Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d7fae1f4b90cfa7b74d0d",
"hidden": false,
"name": "Yuyang Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d7fae1f4b90cfa7b74d0e",
"hidden": false,
"name": "Jincheng Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d7fae1f4b90cfa7b74d0f",
"hidden": false,
"name": "Ligeng Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d7fae1f4b90cfa7b74d10",
"hidden": false,
"name": "Yujun Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d7fae1f4b90cfa7b74d11",
"hidden": false,
"name": "Zhekai Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d7fae1f4b90cfa7b74d12",
"hidden": false,
"name": "Muyang Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d7fae1f4b90cfa7b74d13",
"hidden": false,
"name": "Junyu Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d7fae1f4b90cfa7b74d14",
"hidden": false,
"name": "Han Cai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d7fae1f4b90cfa7b74d15",
"hidden": false,
"name": "Bingchen Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d7fae1f4b90cfa7b74d16",
"hidden": false,
"name": "Daquan Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d7fae1f4b90cfa7b74d17",
"hidden": false,
"name": "Song Han",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-30T15:31:48 |
SANA 1.5: Efficient Scaling of Training-Time and Inference-Time Compute
in Linear Diffusion Transformer
|
This paper presents SANA-1.5, a linear Diffusion Transformer for efficient
scaling in text-to-image generation. Building upon SANA-1.0, we introduce three
key innovations: (1) Efficient Training Scaling: A depth-growth paradigm that
enables scaling from 1.6B to 4.8B parameters with significantly reduced
computational resources, combined with a memory-efficient 8-bit optimizer. (2)
Model Depth Pruning: A block importance analysis technique for efficient model
compression to arbitrary sizes with minimal quality loss. (3) Inference-time
Scaling: A repeated sampling strategy that trades computation for model
capacity, enabling smaller models to match larger model quality at inference
time. Through these strategies, SANA-1.5 achieves a text-image alignment score
of 0.72 on GenEval, which can be further improved to 0.80 through inference
scaling, establishing a new SoTA on GenEval benchmark. These innovations enable
efficient model scaling across different compute budgets while maintaining high
quality, making high-quality image generation more accessible.
| 17 |
679d7fb11f4b90cfa7b74dbe
| null | null |
|
2025-01-31T13:33:38.548000 |
CowPilot: A Framework for Autonomous and Human-Agent Collaborative Web Navigation
| 2 |
{
"_id": "638450f2834d3558a39939f4",
"avatarUrl": "/avatars/ab8efebd3aa50b31429046b60d8aa3c2.svg",
"followerCount": 1,
"fullname": "Faria Huq",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "oaishi",
"type": "user"
}
| false | null |
2501.16609
|
[
{
"_id": "679d17693f3f5f82f3541388",
"hidden": false,
"name": "Faria Huq",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d17693f3f5f82f3541389",
"hidden": false,
"name": "Zora Zhiruo Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d17693f3f5f82f354138a",
"hidden": false,
"name": "Frank F. Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d17693f3f5f82f354138b",
"hidden": false,
"name": "Tianyue Ou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d17693f3f5f82f354138c",
"hidden": false,
"name": "Shuyan Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d17693f3f5f82f354138d",
"hidden": false,
"name": "Jeffrey P. Bigham",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679d17693f3f5f82f354138e",
"hidden": false,
"name": "Graham Neubig",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-28T00:56:53 |
CowPilot: A Framework for Autonomous and Human-Agent Collaborative Web
Navigation
|
While much work on web agents emphasizes the promise of autonomously
performing tasks on behalf of users, in reality, agents often fall short on
complex tasks in real-world contexts and modeling user preference. This
presents an opportunity for humans to collaborate with the agent and leverage
the agent's capabilities effectively. We propose CowPilot, a framework
supporting autonomous as well as human-agent collaborative web navigation, and
evaluation across task success and task efficiency. CowPilot reduces the number
of steps humans need to perform by allowing agents to propose next steps, while
users are able to pause, reject, or take alternative actions. During execution,
users can interleave their actions with the agent by overriding suggestions or
resuming agent control when needed. We conducted case studies on five common
websites and found that the human-agent collaborative mode achieves the highest
success rate of 95% while requiring humans to perform only 15.2% of the total
steps. Even with human interventions during task execution, the agent
successfully drives up to half of task success on its own. CowPilot can serve
as a useful tool for data collection and agent evaluation across websites,
which we believe will enable research in how users and agents can work
together. Video demonstrations are available at
https://oaishi.github.io/cowpilot.html
| 6 |
679d176b3f3f5f82f3541408
| null | null |
|
2025-01-31T05:07:14.120000 |
Streaming DiLoCo with overlapping communication: Towards a Distributed Free Lunch
| 7 |
{
"_id": "622792366303bf1dc304f49f",
"avatarUrl": "/avatars/975c1cc3eb2f97cf8e848162056d5bea.svg",
"followerCount": 4,
"fullname": "Arthur Douillard",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ArthurDouillard",
"type": "user"
}
| false | null |
2501.18512
|
[
{
"_id": "679ca01ecad2402cec0a939a",
"hidden": false,
"name": "Arthur Douillard",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ca01ecad2402cec0a939b",
"hidden": false,
"name": "Yanislav Donchev",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ca01ecad2402cec0a939c",
"hidden": false,
"name": "Keith Rush",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ca01ecad2402cec0a939d",
"hidden": false,
"name": "Satyen Kale",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ca01ecad2402cec0a939e",
"hidden": false,
"name": "Zachary Charles",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ca01ecad2402cec0a939f",
"hidden": false,
"name": "Zachary Garrett",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ca01ecad2402cec0a93a0",
"hidden": false,
"name": "Gabriel Teston",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ca01ecad2402cec0a93a1",
"hidden": false,
"name": "Dave Lacey",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ca01ecad2402cec0a93a2",
"hidden": false,
"name": "Ross McIlroy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ca01ecad2402cec0a93a3",
"hidden": false,
"name": "Jiajun Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ca01ecad2402cec0a93a4",
"hidden": false,
"name": "Alexandre Ramé",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ca01ecad2402cec0a93a5",
"hidden": false,
"name": "Arthur Szlam",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ca01ecad2402cec0a93a6",
"hidden": false,
"name": "Marc'Aurelio Ranzato",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ca01ecad2402cec0a93a7",
"hidden": false,
"name": "Paul Barham",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-30T17:23:50 |
Streaming DiLoCo with overlapping communication: Towards a Distributed
Free Lunch
|
Training of large language models (LLMs) is typically distributed across a
large number of accelerators to reduce training time. Since internal states and
parameter gradients need to be exchanged at each and every single gradient
step, all devices need to be co-located using low-latency high-bandwidth
communication links to support the required high volume of exchanged bits.
Recently, distributed algorithms like DiLoCo have relaxed such co-location
constraint: accelerators can be grouped into ``workers'', where
synchronizations between workers only occur infrequently. This in turn means
that workers can afford being connected by lower bandwidth communication links
without affecting learning quality. However, in these methods, communication
across workers still requires the same peak bandwidth as before, as the
synchronizations require all parameters to be exchanged across all workers. In
this paper, we improve DiLoCo in three ways. First, we synchronize only subsets
of parameters in sequence, rather than all at once, which greatly reduces peak
bandwidth. Second, we allow workers to continue training while synchronizing,
which decreases wall clock time. Third, we quantize the data exchanged by
workers, which further reduces bandwidth across workers. By properly combining
these modifications, we show experimentally that we can distribute training of
billion-scale parameters and reach similar quality as before, but reducing
required bandwidth by two orders of magnitude.
| 27 |
679ca01fcad2402cec0a9404
| null | null |
|
2025-01-31T04:14:53.856000 |
MedXpertQA: Benchmarking Expert-Level Medical Reasoning and Understanding
| 2 |
{
"_id": "65597738deee83130a1301d5",
"avatarUrl": "/avatars/9bcc40aebe4db079927675d95c00463c.svg",
"followerCount": 1,
"fullname": "Shang (Lindsay) Qu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "lindsay-qu",
"type": "user"
}
| true | null |
2501.18362
|
[
{
"_id": "679c5b0034f5df4416915177",
"hidden": false,
"name": "Yuxin Zuo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679c5b0034f5df4416915178",
"hidden": false,
"name": "Shang Qu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-31T08:35:48.269Z",
"user": {
"_id": "65597738deee83130a1301d5",
"avatarUrl": "/avatars/9bcc40aebe4db079927675d95c00463c.svg",
"fullname": "Shang (Lindsay) Qu",
"isPro": false,
"type": "user",
"user": "lindsay-qu"
}
},
{
"_id": "679c5b0034f5df4416915179",
"hidden": false,
"name": "Yifei Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679c5b0034f5df441691517a",
"hidden": false,
"name": "Zhangren Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679c5b0034f5df441691517b",
"hidden": false,
"name": "Xuekai Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679c5b0034f5df441691517c",
"hidden": false,
"name": "Ermo Hua",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679c5b0034f5df441691517d",
"hidden": false,
"name": "Kaiyan Zhang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T15:33:58.392Z",
"user": {
"_id": "60bc94cd85a3ab33829b6211",
"avatarUrl": "/avatars/b57d36c7577fbbb42ea5b963eef4144a.svg",
"fullname": "Kaiyan Zhang",
"isPro": false,
"type": "user",
"user": "iseesaw"
}
},
{
"_id": "679c5b0034f5df441691517e",
"hidden": false,
"name": "Ning Ding",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-31T09:50:45.999Z",
"user": {
"_id": "60cf4bcb1ce3775ebb86e5d5",
"avatarUrl": "/avatars/12bcd18d215abf91f297f93007733148.svg",
"fullname": "Ning Ding",
"isPro": false,
"type": "user",
"user": "stingning"
}
},
{
"_id": "679c5b0034f5df441691517f",
"hidden": false,
"name": "Bowen Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-30T14:07:56 |
MedXpertQA: Benchmarking Expert-Level Medical Reasoning and
Understanding
|
We introduce MedXpertQA, a highly challenging and comprehensive benchmark to
evaluate expert-level medical knowledge and advanced reasoning. MedXpertQA
includes 4,460 questions spanning 17 specialties and 11 body systems. It
includes two subsets, Text for text evaluation and MM for multimodal
evaluation. Notably, MM introduces expert-level exam questions with diverse
images and rich clinical information, including patient records and examination
results, setting it apart from traditional medical multimodal benchmarks with
simple QA pairs generated from image captions. MedXpertQA applies rigorous
filtering and augmentation to address the insufficient difficulty of existing
benchmarks like MedQA, and incorporates specialty board questions to improve
clinical relevance and comprehensiveness. We perform data synthesis to mitigate
data leakage risk and conduct multiple rounds of expert reviews to ensure
accuracy and reliability. We evaluate 16 leading models on MedXpertQA.
Moreover, medicine is deeply connected to real-world decision-making, providing
a rich and representative setting for assessing reasoning abilities beyond
mathematics and code. To this end, we develop a reasoning-oriented subset to
facilitate the assessment of o1-like models.
| 21 |
679c5b0234f5df44169151e9
| null | null |
|
2025-01-31T04:13:28.061000 |
WILDCHAT-50M: A Deep Dive Into the Role of Synthetic Data in Post-Training
| 4 |
{
"_id": "60107b385ac3e86b3ea4fc34",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1627505688463-60107b385ac3e86b3ea4fc34.jpeg",
"followerCount": 570,
"fullname": "Daniel van Strien",
"isHf": true,
"isMod": false,
"isPro": true,
"name": "davanstrien",
"type": "user"
}
| false | null |
2501.18511
|
[
{
"_id": "679c9419a01fd6df443d5729",
"hidden": false,
"name": "Benjamin Feuer",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T09:35:53.653Z",
"user": {
"_id": "62f7f4efe7c1c9bf10c81465",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62f7f4efe7c1c9bf10c81465/AYlOg0fkP1o4GAP-8Y3xt.jpeg",
"fullname": "Benjamin Feuer",
"isPro": true,
"type": "user",
"user": "penfever"
}
},
{
"_id": "679c9419a01fd6df443d572a",
"hidden": false,
"name": "Chinmay Hegde",
"status": "claimed_verified",
"statusLastChangedAt": "2025-03-03T16:07:36.155Z",
"user": {
"_id": "631620f6894404e25068856f",
"avatarUrl": "/avatars/52c30caa0ee11347f82420a14ec19996.svg",
"fullname": "Chinmay Hegde",
"isPro": false,
"type": "user",
"user": "chegde"
}
}
] | 2025-01-30T17:21:44 |
WILDCHAT-50M: A Deep Dive Into the Role of Synthetic Data in
Post-Training
|
Language model (LLM) post-training, from DPO to distillation, can refine
behaviors and unlock new skills, but the open science supporting these
post-training techniques is still in its infancy. One limiting factor has been
the difficulty of conducting large-scale comparative analyses of synthetic data
generating models and LLM judges. To close this gap, we introduce WILDCHAT-50M,
the largest public chat dataset to date. We extend the existing WildChat
dataset to include responses not only from GPT, but from over 50 different
open-weight models, ranging in size from 0.5B to 104B parameters. We conduct an
extensive comparative analysis and demonstrate the potential of this dataset by
creating RE-WILD, our own public SFT mix, which outperforms the recent Tulu-3
SFT mixture from Allen AI with only 40% as many samples. Our dataset, samples
and code are available at https://github.com/penfever/wildchat-50m.
| 19 |
679c941da01fd6df443d5907
| null | null |
|
2025-01-31T02:35:40.107000 |
o3-mini vs DeepSeek-R1: Which One is Safer?
| 3 |
{
"_id": "65001514f322f9156663f096",
"avatarUrl": "/avatars/e8712f60d4e8b7c70ac02c532ad547ef.svg",
"followerCount": null,
"fullname": "Pablo Valle",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "pablovalle",
"type": "user"
}
| true | null |
2501.18438
|
[
{
"_id": "679c7d0ebd893fb2b7159aa3",
"hidden": false,
"name": "Aitor Arrieta",
"status": "extracted_pending",
"statusLastChangedAt": "2025-01-31T07:34:38.875Z",
"user": {
"_id": "657b3a44de028a439ea2ed9d",
"avatarUrl": "/avatars/9f05e8eb6809a0ce1b50cd1fc9b5a044.svg",
"fullname": "Aitor Arrieta",
"isPro": false,
"type": "user",
"user": "aitorarrieta"
}
},
{
"_id": "679c7d0ebd893fb2b7159aa4",
"hidden": false,
"name": "Miriam Ugarte",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679c7d0ebd893fb2b7159aa5",
"hidden": false,
"name": "Pablo Valle",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-31T08:35:44.931Z",
"user": {
"_id": "65001514f322f9156663f096",
"avatarUrl": "/avatars/e8712f60d4e8b7c70ac02c532ad547ef.svg",
"fullname": "Pablo Valle",
"isPro": false,
"type": "user",
"user": "pablovalle"
}
},
{
"_id": "679c7d0ebd893fb2b7159aa6",
"hidden": false,
"name": "José Antonio Parejo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T08:49:45.440Z",
"user": {
"_id": "63527de67e4cc3135fd16651",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63527de67e4cc3135fd16651/bkeQlJEwsPs3E4EsvmmLB.jpeg",
"fullname": "José Antonio Parejo Maestre",
"isPro": false,
"type": "user",
"user": "japarejo"
}
},
{
"_id": "679c7d0ebd893fb2b7159aa7",
"hidden": false,
"name": "Sergio Segura",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-02T16:41:28.081Z",
"user": {
"_id": "6790d642a1863df579840ae3",
"avatarUrl": "/avatars/a10a6f4af327c1bb67513c56d7f84820.svg",
"fullname": "Sergio Segura",
"isPro": false,
"type": "user",
"user": "ssegura"
}
}
] | 2025-01-30T15:45:56 |
o3-mini vs DeepSeek-R1: Which One is Safer?
|
The irruption of DeepSeek-R1 constitutes a turning point for the AI industry
in general and the LLMs in particular. Its capabilities have demonstrated
outstanding performance in several tasks, including creative thinking, code
generation, maths and automated program repair, at apparently lower execution
cost. However, LLMs must adhere to an important qualitative property, i.e.,
their alignment with safety and human values. A clear competitor of DeepSeek-R1
is its American counterpart, OpenAI's o3-mini model, which is expected to set
high standards in terms of performance, safety and cost. In this paper we
conduct a systematic assessment of the safety level of both, DeepSeek-R1 (70b
version) and OpenAI's o3-mini (beta version). To this end, we make use of our
recently released automated safety testing tool, named ASTRAL. By leveraging
this tool, we automatically and systematically generate and execute a total of
1260 unsafe test inputs on both models. After conducting a semi-automated
assessment of the outcomes provided by both LLMs, the results indicate that
DeepSeek-R1 is highly unsafe as compared to OpenAI's o3-mini. Based on our
evaluation, DeepSeek-R1 answered unsafely to 11.98% of the executed prompts
whereas o3-mini only to 1.19%.
| 22 |
679c7d0ebd893fb2b7159af5
| null | null |
|
2025-01-31T00:16:36.453000 |
Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs
| 11 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| true | null |
2501.18585
|
[
{
"_id": "679c5ca666c379e215bc9e74",
"hidden": false,
"name": "Yue Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679c5ca666c379e215bc9e75",
"hidden": false,
"name": "Qiuzhi Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T08:45:37.562Z",
"user": {
"_id": "63e60ff62d704152abac8af8",
"avatarUrl": "/avatars/a54c34fb87a7ed5aeba792852747de92.svg",
"fullname": "Qiuzhi Liu",
"isPro": false,
"type": "user",
"user": "Dennis364"
}
},
{
"_id": "679c5ca666c379e215bc9e76",
"hidden": false,
"name": "Jiahao Xu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T08:45:31.807Z",
"user": {
"_id": "660399710f1fc2f16de18072",
"avatarUrl": "/avatars/c22a749cc45db693c2d9ea877c7cace4.svg",
"fullname": "Jiahao Xu",
"isPro": false,
"type": "user",
"user": "Jiahao004"
}
},
{
"_id": "679c5ca666c379e215bc9e77",
"hidden": false,
"name": "Tian Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679c5ca666c379e215bc9e78",
"hidden": false,
"name": "Xingyu Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679c5ca666c379e215bc9e79",
"hidden": false,
"name": "Zhiwei He",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T08:44:45.300Z",
"user": {
"_id": "638439ca834d3558a398d035",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1669609868550-noauth.png",
"fullname": "Zhiwei He",
"isPro": false,
"type": "user",
"user": "zwhe99"
}
},
{
"_id": "679c5ca666c379e215bc9e7a",
"hidden": false,
"name": "Linfeng Song",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T08:44:29.221Z",
"user": {
"_id": "64c94eddcb2f1bf0e7db5a4d",
"avatarUrl": "/avatars/f7e2532d3c85d5e5b5a02c579ea68c3a.svg",
"fullname": "Linfeng Song",
"isPro": false,
"type": "user",
"user": "freesunshine0316"
}
},
{
"_id": "679c5ca666c379e215bc9e7b",
"hidden": false,
"name": "Dian Yu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T08:44:23.114Z",
"user": {
"_id": "62d58fd53bf5e059f7cc3245",
"avatarUrl": "/avatars/7a4f3ee4a37245f67efd26749d66a706.svg",
"fullname": "Dian Yu",
"isPro": false,
"type": "user",
"user": "yudian"
}
},
{
"_id": "679c5ca666c379e215bc9e7c",
"hidden": false,
"name": "Juntao Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T08:44:12.069Z",
"user": {
"_id": "6670e285b0c03c4e9d6e0985",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/uCZHm4gKSHZ2b0hpHWgZv.jpeg",
"fullname": "Juntao Li",
"isPro": false,
"type": "user",
"user": "douvleplus"
}
},
{
"_id": "679c5ca666c379e215bc9e7d",
"hidden": false,
"name": "Zhuosheng Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T08:44:05.749Z",
"user": {
"_id": "5f82f9f7f0801648bf8844b2",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1669627733134-5f82f9f7f0801648bf8844b2.jpeg",
"fullname": "Zhuosheng Zhang",
"isPro": false,
"type": "user",
"user": "cooelf"
}
},
{
"_id": "679c5ca666c379e215bc9e7e",
"hidden": false,
"name": "Rui Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679c5ca666c379e215bc9e7f",
"hidden": false,
"name": "Zhaopeng Tu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T08:43:27.683Z",
"user": {
"_id": "67485743561b1e6f9579389f",
"avatarUrl": "/avatars/8a4cc63bd7be388010bc329bb74582a1.svg",
"fullname": "Zhaopeng Tu",
"isPro": false,
"type": "user",
"user": "zptu"
}
},
{
"_id": "679c5ca666c379e215bc9e80",
"hidden": false,
"name": "Haitao Mi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T08:43:21.871Z",
"user": {
"_id": "65147a1426fbd558dbd08f1b",
"avatarUrl": "/avatars/86574ee2d5c22e940be1c4e50be88675.svg",
"fullname": "Haitao Mi",
"isPro": false,
"type": "user",
"user": "haitaominlp"
}
},
{
"_id": "679c5ca666c379e215bc9e81",
"hidden": false,
"name": "Dong Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-30T18:58:18 |
Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs
|
Large language models (LLMs) such as OpenAI's o1 have demonstrated remarkable
abilities in complex reasoning tasks by scaling test-time compute and
exhibiting human-like deep thinking. However, we identify a phenomenon we term
underthinking, where o1-like LLMs frequently switch between different reasoning
thoughts without sufficiently exploring promising paths to reach a correct
solution. This behavior leads to inadequate depth of reasoning and decreased
performance, particularly on challenging mathematical problems. To
systematically analyze this issue, we conduct experiments on three challenging
test sets and two representative open-source o1-like models, revealing that
frequent thought switching correlates with incorrect responses. We introduce a
novel metric to quantify underthinking by measuring token efficiency in
incorrect answers. To address underthinking, we propose a decoding strategy
with thought switching penalty TIP that discourages premature transitions
between thoughts, encouraging deeper exploration of each reasoning path.
Experimental results demonstrate that our approach improves accuracy across
challenging datasets without requiring model fine-tuning. Our findings
contribute to understanding reasoning inefficiencies in o1-like LLMs and offer
a practical solution to enhance their problem-solving capabilities.
| 56 |
679c5ca766c379e215bc9eb1
| null | null |
|
2025-01-31T00:09:40.077000 |
Large Language Models Think Too Fast To Explore Effectively
| 3 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.18009
|
[
{
"_id": "679c5b0259e9218a222ab742",
"hidden": false,
"name": "Lan Pan",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-01-31T06:33:49.785Z",
"user": {
"_id": "6689f7fb8c440fe1955a51b5",
"avatarUrl": "/avatars/9b23ee2f05f55615c6174a678436b30d.svg",
"fullname": "Lan Pan",
"isPro": false,
"type": "user",
"user": "louanna"
}
},
{
"_id": "679c5b0259e9218a222ab743",
"hidden": false,
"name": "Hanbo Xie",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T08:51:40.573Z",
"user": {
"_id": "63fd543a3c880680af459cad",
"avatarUrl": "/avatars/d7ca320380cc98918c8aaa33790babec.svg",
"fullname": "Hanbo Xie",
"isPro": false,
"type": "user",
"user": "xhb120633"
}
},
{
"_id": "679c5b0259e9218a222ab744",
"hidden": false,
"name": "Robert C. Wilson",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-29T21:51:17 |
Large Language Models Think Too Fast To Explore Effectively
|
Large Language Models have emerged many intellectual capacities. While
numerous benchmarks assess their intelligence, limited attention has been given
to their ability to explore, an essential capacity for discovering new
information and adapting to novel environments in both natural and artificial
systems. The extent to which LLMs can effectively explore, particularly in
open-ended tasks, remains unclear. This study investigates whether LLMs can
surpass humans in exploration during an open-ended task, using Little Alchemy 2
as a paradigm, where agents combine elements to discover new ones. Results show
most LLMs underperform compared to humans, except for the o1 model, with those
traditional LLMs relying primarily on uncertainty driven strategies, unlike
humans who balance uncertainty and empowerment. Representational analysis of
the models with Sparse Autoencoders revealed that uncertainty and choices are
represented at earlier transformer blocks, while empowerment values are
processed later, causing LLMs to think too fast and make premature decisions,
hindering effective exploration. These findings shed light on the limitations
of LLM exploration and suggest directions for improving their adaptability.
| 23 |
679c5b0359e9218a222ab76f
| null | null |
|
2025-01-30T23:19:24.751000 |
PhysBench: Benchmarking and Enhancing Vision-Language Models for Physical World Understanding
| 3 |
{
"_id": "644b71ddb2e7823a76abcf91",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/644b71ddb2e7823a76abcf91/JPF7Eqeq2jx8i79nQ962K.jpeg",
"followerCount": 4,
"fullname": "zhou wei",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "WeiChow",
"type": "user"
}
| true | null |
2501.16411
|
[
{
"_id": "679c4f344061a1ab60ebe6fa",
"hidden": false,
"name": "Wei Chow",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-31T08:35:49.674Z",
"user": {
"_id": "644b71ddb2e7823a76abcf91",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/644b71ddb2e7823a76abcf91/JPF7Eqeq2jx8i79nQ962K.jpeg",
"fullname": "zhou wei",
"isPro": false,
"type": "user",
"user": "WeiChow"
}
},
{
"_id": "679c4f344061a1ab60ebe6fb",
"hidden": false,
"name": "Jiageng Mao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679c4f344061a1ab60ebe6fc",
"hidden": false,
"name": "Boyi Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T08:46:09.305Z",
"user": {
"_id": "620dd3888528f797e88cb9b5",
"avatarUrl": "/avatars/af04728788d78fe7d6375e19e32a535e.svg",
"fullname": "Boyi Li",
"isPro": false,
"type": "user",
"user": "Boyiliee"
}
},
{
"_id": "679c4f344061a1ab60ebe6fd",
"hidden": false,
"name": "Daniel Seita",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679c4f344061a1ab60ebe6fe",
"hidden": false,
"name": "Vitor Guizilini",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679c4f344061a1ab60ebe6ff",
"hidden": false,
"name": "Yue Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-27T18:59:58 |
PhysBench: Benchmarking and Enhancing Vision-Language Models for
Physical World Understanding
|
Understanding the physical world is a fundamental challenge in embodied AI,
critical for enabling agents to perform complex tasks and operate safely in
real-world environments. While Vision-Language Models (VLMs) have shown great
promise in reasoning and task planning for embodied agents, their ability to
comprehend physical phenomena remains extremely limited. To close this gap, we
introduce PhysBench, a comprehensive benchmark designed to evaluate VLMs'
physical world understanding capability across a diverse set of tasks.
PhysBench contains 10,002 entries of interleaved video-image-text data,
categorized into four major domains: physical object properties, physical
object relationships, physical scene understanding, and physics-based dynamics,
further divided into 19 subclasses and 8 distinct capability dimensions. Our
extensive experiments, conducted on 75 representative VLMs, reveal that while
these models excel in common-sense reasoning, they struggle with understanding
the physical world -- likely due to the absence of physical knowledge in their
training data and the lack of embedded physical priors. To tackle the
shortfall, we introduce PhysAgent, a novel framework that combines the
generalization strengths of VLMs with the specialized expertise of vision
models, significantly enhancing VLMs' physical understanding across a variety
of tasks, including an 18.4\% improvement on GPT-4o. Furthermore, our results
demonstrate that enhancing VLMs' physical world understanding capabilities can
help embodied agents such as MOKA. We believe that PhysBench and PhysAgent
offer valuable insights and contribute to bridging the gap between VLMs and
physical world understanding.
| 18 |
679c4f394061a1ab60ebe7f0
| null | null |
|
2025-01-30T23:01:47.466000 |
GuardReasoner: Towards Reasoning-based LLM Safeguards
| 3 |
{
"_id": "6650c77a74664a42ddfb9187",
"avatarUrl": "/avatars/92001bbe0ae9b14309730316b639cede.svg",
"followerCount": 3,
"fullname": "yueliu1999",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "yueliu1999",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/6650c77a74664a42ddfb9187/Kza1q-PVKsgu_6SaQ9Oze.png",
"https://cdn-uploads.huggingface.co/production/uploads/6650c77a74664a42ddfb9187/rqViZgnFQQJcAfgC1a17n.png",
"https://cdn-uploads.huggingface.co/production/uploads/6650c77a74664a42ddfb9187/5Dk0HJkhOCoSXoWdVUzBo.png",
"https://cdn-uploads.huggingface.co/production/uploads/6650c77a74664a42ddfb9187/DWg1wTHDx939H4bZPVj1W.png"
] |
2501.18492
|
[
{
"_id": "679c4ac5e2c0dbf282597d35",
"hidden": false,
"name": "Yue Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-03T12:37:21.603Z",
"user": {
"_id": "6650c77a74664a42ddfb9187",
"avatarUrl": "/avatars/92001bbe0ae9b14309730316b639cede.svg",
"fullname": "yueliu1999",
"isPro": false,
"type": "user",
"user": "yueliu1999"
}
},
{
"_id": "679c4ac5e2c0dbf282597d36",
"hidden": false,
"name": "Hongcheng Gao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-31T08:35:51.645Z",
"user": {
"_id": "62728f4f6253fe2068da1021",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/62728f4f6253fe2068da1021/KZ65X0EH98AF3zXemPiap.jpeg",
"fullname": "Hongcheng Gao",
"isPro": false,
"type": "user",
"user": "HongchengGao"
}
},
{
"_id": "679c4ac5e2c0dbf282597d37",
"hidden": false,
"name": "Shengfang Zhai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T08:41:32.474Z",
"user": {
"_id": "6366429195204b4649c658b8",
"avatarUrl": "/avatars/5d80e9ebe0b57fd815f36796b9187248.svg",
"fullname": "Shengfang Zhai",
"isPro": false,
"type": "user",
"user": "zsf"
}
},
{
"_id": "679c4ac5e2c0dbf282597d38",
"hidden": false,
"name": "Jun Xia",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T08:41:53.366Z",
"user": {
"_id": "679c68bbfc30f43de85206f5",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/IJWda9ZYtjzlhr2ehsLHu.jpeg",
"fullname": "Jun Xia",
"isPro": false,
"type": "user",
"user": "JunXia97"
}
},
{
"_id": "679c4ac5e2c0dbf282597d39",
"hidden": false,
"name": "Tianyi Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679c4ac5e2c0dbf282597d3a",
"hidden": false,
"name": "Zhiwei Xue",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T08:42:30.842Z",
"user": {
"_id": "63f42ca3520c1461892ee929",
"avatarUrl": "/avatars/095241acfe7c783d2406abf63ff81f65.svg",
"fullname": "xuezhiwei",
"isPro": false,
"type": "user",
"user": "lakxtxue"
}
},
{
"_id": "679c4ac5e2c0dbf282597d3b",
"hidden": false,
"name": "Yulin Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T08:42:41.013Z",
"user": {
"_id": "65efc25828426de60f977dfc",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/u8ZcIoo58JPLdnjm-jZeo.png",
"fullname": "Yulin Chen",
"isPro": false,
"type": "user",
"user": "CallMeChen"
}
},
{
"_id": "679c4ac5e2c0dbf282597d3c",
"hidden": false,
"name": "Kenji Kawaguchi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679c4ac5e2c0dbf282597d3d",
"hidden": false,
"name": "Jiaheng Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T08:37:04.493Z",
"user": {
"_id": "669e19e5dac1eb34c0f5f505",
"avatarUrl": "/avatars/bec7d1d1dac2ad6570844d1f00e7df0a.svg",
"fullname": "Jiaheng Zhang",
"isPro": false,
"type": "user",
"user": "jiaheng233"
}
},
{
"_id": "679c4ac5e2c0dbf282597d3e",
"hidden": false,
"name": "Bryan Hooi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-31T08:42:50.273Z",
"user": {
"_id": "651d8032c50012d33e914f2f",
"avatarUrl": "/avatars/0a44c9f51fc50ce86582e328c361ea00.svg",
"fullname": "Bryan Hooi",
"isPro": false,
"type": "user",
"user": "bhooi"
}
}
] | 2025-01-30T17:06:06 |
GuardReasoner: Towards Reasoning-based LLM Safeguards
|
As LLMs increasingly impact safety-critical applications, ensuring their
safety using guardrails remains a key challenge. This paper proposes
GuardReasoner, a new safeguard for LLMs, by guiding the guard model to learn to
reason. Concretely, we first create the GuardReasonerTrain dataset, which
consists of 127K samples with 460K detailed reasoning steps. Then, we introduce
reasoning SFT to unlock the reasoning capability of guard models. In addition,
we present hard sample DPO to further strengthen their reasoning ability. In
this manner, GuardReasoner achieves better performance, explainability, and
generalizability. Extensive experiments and analyses on 13 benchmarks of 3
guardrail tasks demonstrate its superiority. Remarkably, GuardReasoner 8B
surpasses GPT-4o+CoT by 5.74% and LLaMA Guard 3 8B by 20.84% F1 score on
average. We release the training data, code, and models with different scales
(1B, 3B, 8B) of GuardReasoner : https://github.com/yueliu1999/GuardReasoner/.
| 82 |
679c4ac6e2c0dbf282597d80
| null | null |
|
2025-01-30T20:14:03.298000 |
Any2AnyTryon: Leveraging Adaptive Position Embeddings for Versatile Virtual Clothing Tasks
| 3 |
{
"_id": "6671214c92412fd4640714eb",
"avatarUrl": "/avatars/48fa84e7bc3bb92ad0192aa26b32de10.svg",
"followerCount": 2,
"fullname": "bohan zeng",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "zbhpku",
"type": "user"
}
| true | null |
2501.15891
|
[
{
"_id": "679c23c74ca5036d02b91927",
"hidden": false,
"name": "Hailong Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679c23c74ca5036d02b91928",
"hidden": false,
"name": "Bohan Zeng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-31T08:35:59.690Z",
"user": {
"_id": "6671214c92412fd4640714eb",
"avatarUrl": "/avatars/48fa84e7bc3bb92ad0192aa26b32de10.svg",
"fullname": "bohan zeng",
"isPro": false,
"type": "user",
"user": "zbhpku"
}
},
{
"_id": "679c23c74ca5036d02b91929",
"hidden": false,
"name": "Yiren Song",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T09:39:43.380Z",
"user": {
"_id": "64311a95034ecbefddd141ef",
"avatarUrl": "/avatars/b6dc5ca373bedbaa368208517954c375.svg",
"fullname": "Yiren Song",
"isPro": true,
"type": "user",
"user": "yiren98"
}
},
{
"_id": "679c23c74ca5036d02b9192a",
"hidden": false,
"name": "Wentao Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679c23c74ca5036d02b9192b",
"hidden": false,
"name": "Chuang Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679c23c74ca5036d02b9192c",
"hidden": false,
"name": "Jiaming Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-31T08:35:56.805Z",
"user": {
"_id": "637c941588699fba70e29f70",
"avatarUrl": "/avatars/1b6ce2faf0f8c98b987d8efc9b92cc31.svg",
"fullname": "LIU JIAMING",
"isPro": true,
"type": "user",
"user": "jamesliu1217"
}
}
] | 2025-01-27T09:33:23 |
Any2AnyTryon: Leveraging Adaptive Position Embeddings for Versatile
Virtual Clothing Tasks
|
Image-based virtual try-on (VTON) aims to generate a virtual try-on result by
transferring an input garment onto a target person's image. However, the
scarcity of paired garment-model data makes it challenging for existing methods
to achieve high generalization and quality in VTON. Also, it limits the ability
to generate mask-free try-ons. To tackle the data scarcity problem, approaches
such as Stable Garment and MMTryon use a synthetic data strategy, effectively
increasing the amount of paired data on the model side. However, existing
methods are typically limited to performing specific try-on tasks and lack
user-friendliness. To enhance the generalization and controllability of VTON
generation, we propose Any2AnyTryon, which can generate try-on results based on
different textual instructions and model garment images to meet various needs,
eliminating the reliance on masks, poses, or other conditions. Specifically, we
first construct the virtual try-on dataset LAION-Garment, the largest known
open-source garment try-on dataset. Then, we introduce adaptive position
embedding, which enables the model to generate satisfactory outfitted model
images or garment images based on input images of different sizes and
categories, significantly enhancing the generalization and controllability of
VTON generation. In our experiments, we demonstrate the effectiveness of our
Any2AnyTryon and compare it with existing methods. The results show that
Any2AnyTryon enables flexible, controllable, and high-quality image-based
virtual try-on generation.https://logn-2024.github.io/Any2anyTryonProjectPage/
| 14 |
679c23cd4ca5036d02b91afd
| null | null |
|
2025-01-30T09:31:27.980000 |
People who frequently use ChatGPT for writing tasks are accurate and robust detectors of AI-generated text
| 2 |
{
"_id": "637519809ef4e3b87a5e88fb",
"avatarUrl": "/avatars/4465c9194c17f9b5e5a5a5e88d4a4656.svg",
"followerCount": null,
"fullname": "Jenna Russell",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "jjrussell10",
"type": "user"
}
| true | null |
2501.15654
|
[
{
"_id": "679a7a7ea3ffd2887d76a1e7",
"hidden": false,
"name": "Jenna Russell",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-29T21:06:15.282Z",
"user": {
"_id": "637519809ef4e3b87a5e88fb",
"avatarUrl": "/avatars/4465c9194c17f9b5e5a5a5e88d4a4656.svg",
"fullname": "Jenna Russell",
"isPro": false,
"type": "user",
"user": "jjrussell10"
}
},
{
"_id": "679a7a7ea3ffd2887d76a1e8",
"hidden": false,
"name": "Marzena Karpinska",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-01-31T03:42:57.813Z",
"user": {
"_id": "62293b03acd5bef90e55c4ae",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1667593978543-62293b03acd5bef90e55c4ae.png",
"fullname": "Marzena Karpinska",
"isPro": false,
"type": "user",
"user": "marzena"
}
},
{
"_id": "679a7a7ea3ffd2887d76a1e9",
"hidden": false,
"name": "Mohit Iyyer",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-26T19:31:34 |
People who frequently use ChatGPT for writing tasks are accurate and
robust detectors of AI-generated text
|
In this paper, we study how well humans can detect text generated by
commercial LLMs (GPT-4o, Claude, o1). We hire annotators to read 300
non-fiction English articles, label them as either human-written or
AI-generated, and provide paragraph-length explanations for their decisions.
Our experiments show that annotators who frequently use LLMs for writing tasks
excel at detecting AI-generated text, even without any specialized training or
feedback. In fact, the majority vote among five such "expert" annotators
misclassifies only 1 of 300 articles, significantly outperforming most
commercial and open-source detectors we evaluated even in the presence of
evasion tactics like paraphrasing and humanization. Qualitative analysis of the
experts' free-form explanations shows that while they rely heavily on specific
lexical clues ('AI vocabulary'), they also pick up on more complex phenomena
within the text (e.g., formality, originality, clarity) that are challenging to
assess for automatic detectors. We release our annotated dataset and code to
spur future research into both human and automated detection of AI-generated
text.
| 12 |
679a7a82a3ffd2887d76a32d
| null | null |
|
2025-01-30T03:05:08.789000 |
Exploring the sustainable scaling of AI dilemma: A projective study of corporations' AI environmental impacts
| 3 |
{
"_id": "644156da1a80f6d83cb1667c",
"avatarUrl": "/avatars/106d30a576b0fb58118ac4333b17260b.svg",
"followerCount": 3,
"fullname": "Clement Desroches",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "clementdesroches",
"type": "user"
}
| true | null |
2501.14334
|
[
{
"_id": "679a7546805383520ce065af",
"hidden": false,
"name": "Clément Desroches",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-29T21:06:17.418Z",
"user": {
"_id": "644156da1a80f6d83cb1667c",
"avatarUrl": "/avatars/106d30a576b0fb58118ac4333b17260b.svg",
"fullname": "Clement Desroches",
"isPro": false,
"type": "user",
"user": "clementdesroches"
}
},
{
"_id": "679a7546805383520ce065b0",
"hidden": false,
"name": "Martin Chauvin",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-30T09:38:17.235Z",
"user": {
"_id": "66221f6295e8f09a668f07f0",
"avatarUrl": "/avatars/f7c943996c814630ab5dcfaaaba01a83.svg",
"fullname": "Martin Chauvin",
"isPro": false,
"type": "user",
"user": "Neyri56"
}
},
{
"_id": "679a7546805383520ce065b1",
"hidden": false,
"name": "Louis Ladan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679a7546805383520ce065b2",
"hidden": false,
"name": "Caroline Vateau",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679a7546805383520ce065b3",
"hidden": false,
"name": "Simon Gosset",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679a7546805383520ce065b4",
"hidden": false,
"name": "Philippe Cordier",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-24T08:58:49 |
Exploring the sustainable scaling of AI dilemma: A projective study of
corporations' AI environmental impacts
|
The rapid growth of artificial intelligence (AI), particularly Large Language
Models (LLMs), has raised concerns regarding its global environmental impact
that extends beyond greenhouse gas emissions to include consideration of
hardware fabrication and end-of-life processes. The opacity from major
providers hinders companies' abilities to evaluate their AI-related
environmental impacts and achieve net-zero targets.
In this paper, we propose a methodology to estimate the environmental impact
of a company's AI portfolio, providing actionable insights without
necessitating extensive AI and Life-Cycle Assessment (LCA) expertise. Results
confirm that large generative AI models consume up to 4600x more energy than
traditional models. Our modelling approach, which accounts for increased AI
usage, hardware computing efficiency, and changes in electricity mix in line
with IPCC scenarios, forecasts AI electricity use up to 2030. Under a high
adoption scenario, driven by widespread Generative AI and agents adoption
associated to increasingly complex models and frameworks, AI electricity use is
projected to rise by a factor of 24.4.
Mitigating the environmental impact of Generative AI by 2030 requires
coordinated efforts across the AI value chain. Isolated measures in hardware
efficiency, model efficiency, or grid improvements alone are insufficient. We
advocate for standardized environmental assessment frameworks, greater
transparency from the all actors of the value chain and the introduction of a
"Return on Environment" metric to align AI development with net-zero goals.
| 20 |
679a7548805383520ce065f5
| null | null |
|
2025-01-30T01:30:18.013000 |
Virus: Harmful Fine-tuning Attack for Large Language Models Bypassing Guardrail Moderation
| 3 |
{
"_id": "67325283b318faa97f7ae5f7",
"avatarUrl": "/avatars/2f83452768148b323c540c43ad695ee6.svg",
"followerCount": 1,
"fullname": "TianshengHuang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "TianshengHuang",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/67325283b318faa97f7ae5f7/1hJo5gEfGEXAwYB5a6yWY.png",
"https://cdn-uploads.huggingface.co/production/uploads/67325283b318faa97f7ae5f7/8SaMXA1izw5vcfwtU2Nhj.png"
] |
2501.17433
|
[
{
"_id": "679b1319f87b99a2a7c41e36",
"hidden": false,
"name": "Tiansheng Huang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-30T08:39:47.548Z",
"user": {
"_id": "67325283b318faa97f7ae5f7",
"avatarUrl": "/avatars/2f83452768148b323c540c43ad695ee6.svg",
"fullname": "TianshengHuang",
"isPro": false,
"type": "user",
"user": "TianshengHuang"
}
},
{
"_id": "679b1319f87b99a2a7c41e37",
"hidden": false,
"name": "Sihao Hu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-30T09:39:58.723Z",
"user": {
"_id": "6539cab119c3ef6679794706",
"avatarUrl": "/avatars/a88691ff5a547c7a1384edcc615c8209.svg",
"fullname": "Sihao Hu",
"isPro": false,
"type": "user",
"user": "SihaoHu"
}
},
{
"_id": "679b1319f87b99a2a7c41e38",
"hidden": false,
"name": "Fatih Ilhan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-30T09:40:06.004Z",
"user": {
"_id": "647615b995a4dc98e58c24f2",
"avatarUrl": "/avatars/7f73999246526c1aef4d019d5f5595ad.svg",
"fullname": "Fatih Ilhan",
"isPro": false,
"type": "user",
"user": "tawreos"
}
},
{
"_id": "679b1319f87b99a2a7c41e39",
"hidden": false,
"name": "Selim Furkan Tekin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-30T09:40:16.339Z",
"user": {
"_id": "65aae89948c718a57434db6f",
"avatarUrl": "/avatars/6c0fae8dafad9b9265098a9bc3bfc102.svg",
"fullname": "selim tekin",
"isPro": false,
"type": "user",
"user": "sftekin25"
}
},
{
"_id": "679b1319f87b99a2a7c41e3a",
"hidden": false,
"name": "Ling Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-30T09:40:37.075Z",
"user": {
"_id": "65c998005e17dbeaf147db84",
"avatarUrl": "/avatars/6fb47b1e095971b93ff7dcd10369f926.svg",
"fullname": "Ling Liu",
"isPro": false,
"type": "user",
"user": "ling1119"
}
}
] | 2025-01-29T06:24:58 |
Virus: Harmful Fine-tuning Attack for Large Language Models Bypassing
Guardrail Moderation
|
Recent research shows that Large Language Models (LLMs) are vulnerable to
harmful fine-tuning attacks -- models lose their safety alignment ability after
fine-tuning on a few harmful samples. For risk mitigation, a guardrail is
typically used to filter out harmful samples before fine-tuning. By designing a
new red-teaming method, we in this paper show that purely relying on the
moderation guardrail for data filtration is not reliable. Our proposed attack
method, dubbed Virus, easily bypasses the guardrail moderation by slightly
modifying the harmful data. Experimental results show that the harmful data
optimized by Virus is not detectable by the guardrail with up to 100\% leakage
ratio, and can simultaneously achieve superior attack performance. Finally, the
key message we want to convey through this paper is that: it is
reckless to consider guardrail moderation as a clutch at straws towards harmful
fine-tuning attack, as it cannot solve the inherent safety issue of the
pre-trained LLMs. Our code is available at https://github.com/git-disl/Virus
| 9 |
679b131bf87b99a2a7c41ede
| null | null |
|
2025-01-29T21:51:11.227000 |
Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate
| 6 |
{
"_id": "636a35eff8d9af4aea181608",
"avatarUrl": "/avatars/d9c5cf3491243d1f2b1c5df1873ee8e7.svg",
"followerCount": 4,
"fullname": "yubo",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ubowang",
"type": "user"
}
| true | null |
2501.17703
|
[
{
"_id": "679ae76cf211c66bd702f5d5",
"hidden": false,
"name": "Yubo Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-30T08:39:49.375Z",
"user": {
"_id": "636a35eff8d9af4aea181608",
"avatarUrl": "/avatars/d9c5cf3491243d1f2b1c5df1873ee8e7.svg",
"fullname": "yubo",
"isPro": false,
"type": "user",
"user": "ubowang"
}
},
{
"_id": "679ae76cf211c66bd702f5d6",
"hidden": false,
"name": "Xiang Yue",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-30T15:17:01.780Z",
"user": {
"_id": "6230d750d93e84e233882dbc",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6230d750d93e84e233882dbc/4MGEekLW3oWzqeFWDWvIK.jpeg",
"fullname": "Xiang Yue",
"isPro": false,
"type": "user",
"user": "yuexiang96"
}
},
{
"_id": "679ae76cf211c66bd702f5d7",
"hidden": false,
"name": "Wenhu Chen",
"status": "extracted_pending",
"statusLastChangedAt": "2025-01-30T02:43:59.302Z",
"user": {
"_id": "6313a86154e6e5d9f0f94e04",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1662232951344-6313a86154e6e5d9f0f94e04.jpeg",
"fullname": "Wenhu Chen",
"isPro": false,
"type": "user",
"user": "wenhu"
}
}
] | 2025-01-29T15:20:30 |
Critique Fine-Tuning: Learning to Critique is More Effective than
Learning to Imitate
|
Supervised Fine-Tuning (SFT) is commonly used to train language models to
imitate annotated responses for given instructions. In this paper, we challenge
this paradigm and propose Critique Fine-Tuning (CFT), a strategy where models
learn to critique noisy responses rather than simply imitate correct ones.
Inspired by human learning processes that emphasize critical thinking, CFT
encourages deeper analysis and nuanced understanding-traits often overlooked by
standard SFT. To validate the effectiveness of CFT, we construct a 50K-sample
dataset from WebInstruct, using GPT-4o as the teacher to generate critiques in
the form of (input=[query; noisy response], output=critique). CFT on this
dataset yields a consistent 4-10% improvement over SFT on six math benchmarks
with different base models like Qwen2.5, Qwen2.5-Math and DeepSeek-Math. We
further expand to MetaMath and NuminaMath datasets and observe similar gains
over SFT. Notably, our Qwen2.5-Math-CFT model-trained on just 50K
samples-matches or outperforms competitive models such as AceMath and
Qwen2.5-Math-Instruct on most benchmarks, both of which use over 2M samples.
Ablation studies show that CFT is robust to the source of noisy response and
teacher critique model. Through these findings, we argue that critique-based
training offers a more effective alternative to advance the reasoning of
language models.
| 55 |
679ae770f211c66bd702f697
| null | null |
|
2025-01-29T21:44:37.041000 |
Atla Selene Mini: A General Purpose Evaluation Model
| 4 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| true | null |
2501.17195
|
[
{
"_id": "679ae7655c55250b48483742",
"hidden": false,
"name": "Andrei Alexandru",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-30T15:17:07.941Z",
"user": {
"_id": "62571e9e0e0c97db812e3afb",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1662586587273-62571e9e0e0c97db812e3afb.jpeg",
"fullname": "Andrei Alexandru",
"isPro": false,
"type": "user",
"user": "inwaves"
}
},
{
"_id": "679ae7655c55250b48483743",
"hidden": false,
"name": "Antonia Calvi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-30T09:40:54.827Z",
"user": {
"_id": "66e184e86048d62cd8fb4e52",
"avatarUrl": "/avatars/dc459c692fe9fce0911fa1229df0aeee.svg",
"fullname": "Antonia Calvi",
"isPro": false,
"type": "user",
"user": "NinaCalvi"
}
},
{
"_id": "679ae7655c55250b48483744",
"hidden": false,
"name": "Henry Broomfield",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae7655c55250b48483745",
"hidden": false,
"name": "Jackson Golden",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae7655c55250b48483746",
"hidden": false,
"name": "Kyle Dai",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-30T15:17:03.347Z",
"user": {
"_id": "659fc8832cb13cede03047bb",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/659fc8832cb13cede03047bb/Wo_LjryGEJFnxXrOcokfE.jpeg",
"fullname": "kyle",
"isPro": true,
"type": "user",
"user": "kaikaidai"
}
},
{
"_id": "679ae7655c55250b48483747",
"hidden": false,
"name": "Mathias Leys",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae7655c55250b48483748",
"hidden": false,
"name": "Maurice Burger",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-30T15:17:05.147Z",
"user": {
"_id": "66d08d5c952f5e4e64bd6be0",
"avatarUrl": "/avatars/2fc3a6e3813718f0c001fb26337dab45.svg",
"fullname": "Maurice",
"isPro": false,
"type": "user",
"user": "MauriceBurg"
}
},
{
"_id": "679ae7655c55250b48483749",
"hidden": false,
"name": "Max Bartolo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae7655c55250b4848374a",
"hidden": false,
"name": "Roman Engeler",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae7655c55250b4848374b",
"hidden": false,
"name": "Sashank Pisupati",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-30T15:17:06.579Z",
"user": {
"_id": "633c4fb100732349209f2aad",
"avatarUrl": "/avatars/b44ccae4fb097284730291e4fcc47a24.svg",
"fullname": "Sashank Pisupati",
"isPro": false,
"type": "user",
"user": "spisupat"
}
},
{
"_id": "679ae7655c55250b4848374c",
"hidden": false,
"name": "Toby Drane",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae7655c55250b4848374d",
"hidden": false,
"name": "Young Sun Park",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-27T15:09:08 |
Atla Selene Mini: A General Purpose Evaluation Model
|
We introduce Atla Selene Mini, a state-of-the-art small language
model-as-a-judge (SLMJ). Selene Mini is a general-purpose evaluator that
outperforms the best SLMJs and GPT-4o-mini on overall performance across 11
out-of-distribution benchmarks, spanning absolute scoring, classification, and
pairwise preference tasks. It is the highest-scoring 8B generative model on
RewardBench, surpassing strong baselines like GPT-4o and specialized judges. To
achieve this, we develop a principled data curation strategy that augments
public datasets with synthetically generated critiques and ensures high quality
through filtering and dataset ablations. We train our model on a combined
direct preference optimization (DPO) and supervised fine-tuning (SFT) loss, and
produce a highly promptable evaluator that excels in real-world scenarios.
Selene Mini shows dramatically improved zero-shot agreement with human expert
evaluations on financial and medical industry datasets. It is also robust to
variations in prompt format. Preliminary results indicate that Selene Mini is
the top-ranking evaluator in a live, community-driven Judge Arena. We release
the model weights on HuggingFace
(https://hf.co/AtlaAI/Selene-1-Mini-Llama-3.1-8B) and Ollama to encourage
widespread community adoption.
| 33 |
679ae76b5c55250b484838e0
| null | null |
|
2025-01-29T21:38:42.464000 |
Early External Safety Testing of OpenAI's o3-mini: Insights from the Pre-Deployment Evaluation
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.17749
|
[
{
"_id": "679ae5eab898ac90bf4480b6",
"hidden": false,
"name": "Aitor Arrieta",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-01-30T08:45:20.561Z",
"user": {
"_id": "657b3a44de028a439ea2ed9d",
"avatarUrl": "/avatars/9f05e8eb6809a0ce1b50cd1fc9b5a044.svg",
"fullname": "Aitor Arrieta",
"isPro": false,
"type": "user",
"user": "aitorarrieta"
}
},
{
"_id": "679ae5eab898ac90bf4480b7",
"hidden": false,
"name": "Miriam Ugarte",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae5eab898ac90bf4480b8",
"hidden": false,
"name": "Pablo Valle",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-30T09:39:30.629Z",
"user": {
"_id": "65001514f322f9156663f096",
"avatarUrl": "/avatars/e8712f60d4e8b7c70ac02c532ad547ef.svg",
"fullname": "Pablo Valle",
"isPro": false,
"type": "user",
"user": "pablovalle"
}
},
{
"_id": "679ae5eab898ac90bf4480b9",
"hidden": false,
"name": "José Antonio Parejo",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-30T09:39:06.958Z",
"user": {
"_id": "63527de67e4cc3135fd16651",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63527de67e4cc3135fd16651/bkeQlJEwsPs3E4EsvmmLB.jpeg",
"fullname": "José Antonio Parejo Maestre",
"isPro": false,
"type": "user",
"user": "japarejo"
}
},
{
"_id": "679ae5eab898ac90bf4480ba",
"hidden": false,
"name": "Sergio Segura",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-02T16:41:27.270Z",
"user": {
"_id": "6790d642a1863df579840ae3",
"avatarUrl": "/avatars/a10a6f4af327c1bb67513c56d7f84820.svg",
"fullname": "Sergio Segura",
"isPro": false,
"type": "user",
"user": "ssegura"
}
}
] | 2025-01-29T16:36:53 |
Early External Safety Testing of OpenAI's o3-mini: Insights from the
Pre-Deployment Evaluation
|
Large Language Models (LLMs) have become an integral part of our daily lives.
However, they impose certain risks, including those that can harm individuals'
privacy, perpetuate biases and spread misinformation. These risks highlight the
need for robust safety mechanisms, ethical guidelines, and thorough testing to
ensure their responsible deployment. Safety of LLMs is a key property that
needs to be thoroughly tested prior the model to be deployed and accessible to
the general users. This paper reports the external safety testing experience
conducted by researchers from Mondragon University and University of Seville on
OpenAI's new o3-mini LLM as part of OpenAI's early access for safety testing
program. In particular, we apply our tool, ASTRAL, to automatically and
systematically generate up to date unsafe test inputs (i.e., prompts) that
helps us test and assess different safety categories of LLMs. We automatically
generate and execute a total of 10,080 unsafe test input on a early o3-mini
beta version. After manually verifying the test cases classified as unsafe by
ASTRAL, we identify a total of 87 actual instances of unsafe LLM behavior. We
highlight key insights and findings uncovered during the pre-deployment
external testing phase of OpenAI's latest LLM.
| 13 |
679ae5f0b898ac90bf44826c
| null | null |
|
2025-01-29T21:18:54.916000 |
DeepFlow: Serverless Large Language Model Serving at Scale
| 2 |
{
"_id": "6457885a75f8f7d26aa5bc44",
"avatarUrl": "/avatars/8ce57c4d60a1f1b5afa2c592207a8335.svg",
"followerCount": 1,
"fullname": "allthingsdisaggregated",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "lastweek",
"type": "user"
}
| false | null |
2501.14417
|
[
{
"_id": "679ae0db7b24dd74c70f243e",
"hidden": false,
"name": "Junhao Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f243f",
"hidden": false,
"name": "Jiang Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f2440",
"hidden": false,
"name": "Zhixia Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f2441",
"hidden": false,
"name": "Yulong He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f2442",
"hidden": false,
"name": "Yuetao Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f2443",
"hidden": false,
"name": "Hao Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f2444",
"hidden": false,
"name": "Jiang Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f2445",
"hidden": false,
"name": "Baoquan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f2446",
"hidden": false,
"name": "Shining Wan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f2447",
"hidden": false,
"name": "Gengyuan Dan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f2448",
"hidden": false,
"name": "Zhiyu Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f2449",
"hidden": false,
"name": "Zhihao Ren",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f244a",
"hidden": false,
"name": "Jie Meng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f244b",
"hidden": false,
"name": "Chao He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f244c",
"hidden": false,
"name": "Changhong Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f244d",
"hidden": false,
"name": "Tao Xie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f244e",
"hidden": false,
"name": "Dayun Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f244f",
"hidden": false,
"name": "Qin Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f2450",
"hidden": false,
"name": "Yue Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f2451",
"hidden": false,
"name": "Hao Feng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f2452",
"hidden": false,
"name": "Xusheng Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679ae0db7b24dd74c70f2453",
"hidden": false,
"name": "Yizhou Shan",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-24T11:34:13 |
DeepFlow: Serverless Large Language Model Serving at Scale
|
This paper introduces DeepFlow, a scalable and serverless AI platform
designed to efficiently serve large language models (LLMs) at scale in cloud
environments. DeepFlow addresses key challenges such as resource allocation,
serving efficiency, and cold start latencies through four main design
components. First, it uses a simple serverless abstraction called the
request-job-task model, which helps manage AI workloads across post-training
and model serving tasks. Second, it builds an in-house serving engine FlowServe
using a microkernel-inspired design, NPU-centric execution, and SPMD-based
parallelism to optimize LLM serving. The system also includes novel scheduling
policies tailored for both PD-disaggregated and PD-colocated configurations.
With optimizations like pre-warmed pods, DRAM pre-loading, and NPU-fork,
DeepFlow can scale up to 64 instances in seconds. DeepFlow has been in
production for over a year, operating on a large Ascend NPU cluster and
providing industrystandard APIs for fine-tuning, agent serving, and model
serving to our customers.
| 3 |
679ae0e47b24dd74c70f27bf
| null | null |
|
2025-01-29T20:48:38.815000 |
TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models
| 4 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| true | null |
2501.16937
|
[
{
"_id": "679990fa121155210e3ac4e0",
"hidden": false,
"name": "Makoto Shing",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-30T08:40:33.171Z",
"user": {
"_id": "60c2e7747a42b2edc5d2ccf7",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/60c2e7747a42b2edc5d2ccf7/5GdUBW1HYEy17orItOqfV.png",
"fullname": "Makoto Shing",
"isPro": false,
"type": "user",
"user": "mkshing"
}
},
{
"_id": "679990fa121155210e3ac4e1",
"hidden": false,
"name": "Kou Misaki",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-30T09:36:28.498Z",
"user": {
"_id": "66601725a7167714d56e1f28",
"avatarUrl": "/avatars/752324a350685027d125f95ef2eea665.svg",
"fullname": "Kou Misaki",
"isPro": false,
"type": "user",
"user": "takkyu2"
}
},
{
"_id": "679990fa121155210e3ac4e2",
"hidden": false,
"name": "Han Bao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679990fa121155210e3ac4e3",
"hidden": false,
"name": "Sho Yokoi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-30T09:36:06.697Z",
"user": {
"_id": "66c5d59e55c196259b54e383",
"avatarUrl": "/avatars/708f54d2888885f07d6db1505712a173.svg",
"fullname": "Sho Yokoi",
"isPro": false,
"type": "user",
"user": "eumesy"
}
},
{
"_id": "679990fa121155210e3ac4e4",
"hidden": false,
"name": "Takuya Akiba",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-30T09:36:00.611Z",
"user": {
"_id": "6482810dba6c556892f6f257",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6482810dba6c556892f6f257/c7-wiVKenXiRtwnRpnjZN.jpeg",
"fullname": "Takuya Akiba",
"isPro": false,
"type": "user",
"user": "iwiwi"
}
}
] | 2025-01-28T13:31:18 |
TAID: Temporally Adaptive Interpolated Distillation for Efficient
Knowledge Transfer in Language Models
|
Causal language models have demonstrated remarkable capabilities, but their
size poses significant challenges for deployment in resource-constrained
environments. Knowledge distillation, a widely-used technique for transferring
knowledge from a large teacher model to a small student model, presents a
promising approach for model compression. A significant remaining issue lies in
the major differences between teacher and student models, namely the
substantial capacity gap, mode averaging, and mode collapse, which pose
barriers during distillation. To address these issues, we introduce
Temporally Adaptive Interpolated Distillation (TAID), a novel
knowledge distillation approach that dynamically interpolates student and
teacher distributions through an adaptive intermediate distribution, gradually
shifting from the student's initial distribution towards the teacher's
distribution. We provide a theoretical analysis demonstrating TAID's ability to
prevent mode collapse and empirically show its effectiveness in addressing the
capacity gap while balancing mode averaging and mode collapse. Our
comprehensive experiments demonstrate TAID's superior performance across
various model sizes and architectures in both instruction tuning and
pre-training scenarios. Furthermore, we showcase TAID's practical impact by
developing two state-of-the-art compact foundation models:
TAID-LLM-1.5B for language tasks and TAID-VLM-2B for
vision-language tasks. These results demonstrate TAID's effectiveness in
creating high-performing and efficient models, advancing the development of
more accessible AI technologies.
| 5 |
679990fb121155210e3ac519
| null | null |
|
2025-01-29T03:32:09.927000 |
Histoires Morales: A French Dataset for Assessing Moral Alignment
| 2 |
{
"_id": "629a3dbcd496c6dcdebf41cc",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1655113762275-629a3dbcd496c6dcdebf41cc.jpeg",
"followerCount": 2,
"fullname": "Irina Proskurina",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "iproskurina",
"type": "user"
}
| true | null |
2501.17117
|
[
{
"_id": "6799e5f9121155210e4fa48c",
"hidden": false,
"name": "Thibaud Leteno",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:29:00.764Z",
"user": {
"_id": "6239d3fa32af073244f7740d",
"avatarUrl": "/avatars/36d4c6f7a650a5606613efaf9f0bc71e.svg",
"fullname": "Thibaud Leteno",
"isPro": false,
"type": "user",
"user": "thibaudltn"
}
},
{
"_id": "6799e5f9121155210e4fa48d",
"hidden": false,
"name": "Irina Proskurina",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-29T17:16:01.526Z",
"user": {
"_id": "629a3dbcd496c6dcdebf41cc",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1655113762275-629a3dbcd496c6dcdebf41cc.jpeg",
"fullname": "Irina Proskurina",
"isPro": false,
"type": "user",
"user": "iproskurina"
}
},
{
"_id": "6799e5f9121155210e4fa48e",
"hidden": false,
"name": "Antoine Gourru",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:29:06.818Z",
"user": {
"_id": "64e475d0ba1c93ea6d23cf78",
"avatarUrl": "/avatars/217833f8932d494465e4e383ee03c384.svg",
"fullname": "Antoine Gourru",
"isPro": false,
"type": "user",
"user": "AntoineGourru"
}
},
{
"_id": "6799e5f9121155210e4fa48f",
"hidden": false,
"name": "Julien Velcin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:29:13.336Z",
"user": {
"_id": "638f42767879bae278a73482",
"avatarUrl": "/avatars/2fadaf4e4457aab35a18a129fffaf3eb.svg",
"fullname": "Julien Velcin",
"isPro": false,
"type": "user",
"user": "jvelcin"
}
},
{
"_id": "6799e5f9121155210e4fa490",
"hidden": false,
"name": "Charlotte Laclau",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799e5f9121155210e4fa491",
"hidden": false,
"name": "Guillaume Metzler",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799e5f9121155210e4fa492",
"hidden": false,
"name": "Christophe Gravier",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-28T18:07:30 |
Histoires Morales: A French Dataset for Assessing Moral Alignment
|
Aligning language models with human values is crucial, especially as they
become more integrated into everyday life. While models are often adapted to
user preferences, it is equally important to ensure they align with moral norms
and behaviours in real-world social situations. Despite significant progress in
languages like English and Chinese, French has seen little attention in this
area, leaving a gap in understanding how LLMs handle moral reasoning in this
language. To address this gap, we introduce Histoires Morales, a French dataset
derived from Moral Stories, created through translation and subsequently
refined with the assistance of native speakers to guarantee grammatical
accuracy and adaptation to the French cultural context. We also rely on
annotations of the moral values within the dataset to ensure their alignment
with French norms. Histoires Morales covers a wide range of social situations,
including differences in tipping practices, expressions of honesty in
relationships, and responsibilities toward animals. To foster future research,
we also conduct preliminary experiments on the alignment of multilingual models
on French and English data and the robustness of the alignment. We find that
while LLMs are generally aligned with human moral norms by default, they can be
easily influenced with user-preference optimization for both moral and immoral
data.
| 3 |
6799e5fb121155210e4fa500
| null | null |
|
2025-01-29T01:12:02.839000 |
DiffSplat: Repurposing Image Diffusion Models for Scalable Gaussian Splat Generation
| 3 |
{
"_id": "654866e8cd0a5621395f8287",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/654866e8cd0a5621395f8287/4Bccwd1ehn-Ee4T1rId5S.jpeg",
"followerCount": 6,
"fullname": "Panwang Pan",
"isHf": false,
"isMod": false,
"isPro": true,
"name": "paulpanwang",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/654866e8cd0a5621395f8287/TFJMeKzXxMLOnq8NH8ltZ.mp4",
"https://cdn-uploads.huggingface.co/production/uploads/654866e8cd0a5621395f8287/6kn1RLEUsUV-W6S0Taylo.png"
] |
2501.16764
|
[
{
"_id": "6799aa5a311dbfe3c96724cd",
"hidden": false,
"name": "Chenguo Lin",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-29T17:16:14.090Z",
"user": {
"_id": "62e18206926f4892a4c782bd",
"avatarUrl": "/avatars/0f89091a5eb72165d2e860d15b339539.svg",
"fullname": "Chenguo Lin",
"isPro": false,
"type": "user",
"user": "chenguolin"
}
},
{
"_id": "6799aa5a311dbfe3c96724ce",
"hidden": false,
"name": "Panwang Pan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:26:28.217Z",
"user": {
"_id": "654866e8cd0a5621395f8287",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/654866e8cd0a5621395f8287/4Bccwd1ehn-Ee4T1rId5S.jpeg",
"fullname": "Panwang Pan",
"isPro": true,
"type": "user",
"user": "paulpanwang"
}
},
{
"_id": "6799aa5a311dbfe3c96724cf",
"hidden": false,
"name": "Bangbang Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799aa5a311dbfe3c96724d0",
"hidden": false,
"name": "Zeming Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:26:44.370Z",
"user": {
"_id": "65c04adbc63d6a8d7f217c4d",
"avatarUrl": "/avatars/3407dd73d1f5cf41c56cee7542858f93.svg",
"fullname": "Zeming Li",
"isPro": false,
"type": "user",
"user": "ZemingLi"
}
},
{
"_id": "6799aa5a311dbfe3c96724d1",
"hidden": false,
"name": "Yadong Mu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-28T07:38:59 |
DiffSplat: Repurposing Image Diffusion Models for Scalable Gaussian
Splat Generation
|
Recent advancements in 3D content generation from text or a single image
struggle with limited high-quality 3D datasets and inconsistency from 2D
multi-view generation. We introduce DiffSplat, a novel 3D generative framework
that natively generates 3D Gaussian splats by taming large-scale text-to-image
diffusion models. It differs from previous 3D generative models by effectively
utilizing web-scale 2D priors while maintaining 3D consistency in a unified
model. To bootstrap the training, a lightweight reconstruction model is
proposed to instantly produce multi-view Gaussian splat grids for scalable
dataset curation. In conjunction with the regular diffusion loss on these
grids, a 3D rendering loss is introduced to facilitate 3D coherence across
arbitrary views. The compatibility with image diffusion models enables seamless
adaptions of numerous techniques for image generation to the 3D realm.
Extensive experiments reveal the superiority of DiffSplat in text- and
image-conditioned generation tasks and downstream applications. Thorough
ablation studies validate the efficacy of each critical design choice and
provide insights into the underlying mechanism.
| 22 |
6799aa5c311dbfe3c9672542
| null | null |
|
2025-01-28T23:50:56.664000 |
SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training
| 6 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.17161
|
[
{
"_id": "6799b39b15f4661561c22968",
"hidden": false,
"name": "Tianzhe Chu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-29T17:16:03.469Z",
"user": {
"_id": "65127ec162b7d28b1eaab17a",
"avatarUrl": "/avatars/0bcfeb68a405be4efb6e8a29738a5598.svg",
"fullname": "Tianzhe",
"isPro": false,
"type": "user",
"user": "tianzhechu"
}
},
{
"_id": "6799b39b15f4661561c22969",
"hidden": false,
"name": "Yuexiang Zhai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799b39b15f4661561c2296a",
"hidden": false,
"name": "Jihan Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:18:57.418Z",
"user": {
"_id": "6304baf041387c7f1177a5d2",
"avatarUrl": "/avatars/795c63f2394080eec78ca7981d4a1f78.svg",
"fullname": "Jihan Yang",
"isPro": false,
"type": "user",
"user": "jihanyang"
}
},
{
"_id": "6799b39b15f4661561c2296b",
"hidden": false,
"name": "Shengbang Tong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799b39b15f4661561c2296c",
"hidden": false,
"name": "Saining Xie",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:19:29.194Z",
"user": {
"_id": "6596422646624a86ff3b3bda",
"avatarUrl": "/avatars/216e12b77e45ac5f1fa20932f5745411.svg",
"fullname": "Saining Xie",
"isPro": false,
"type": "user",
"user": "sainx"
}
},
{
"_id": "6799b39b15f4661561c2296d",
"hidden": false,
"name": "Dale Schuurmans",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799b39b15f4661561c2296e",
"hidden": false,
"name": "Quoc V. Le",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799b39b15f4661561c2296f",
"hidden": false,
"name": "Sergey Levine",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:19:47.824Z",
"user": {
"_id": "665ce54120a307a3754849dd",
"avatarUrl": "/avatars/e698726e9be61dd50ce2efe372ed5dac.svg",
"fullname": "Sergey Levine",
"isPro": false,
"type": "user",
"user": "svlevine"
}
},
{
"_id": "6799b39b15f4661561c22970",
"hidden": false,
"name": "Yi Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-28T18:59:44 |
SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model
Post-training
|
Supervised fine-tuning (SFT) and reinforcement learning (RL) are widely used
post-training techniques for foundation models. However, their roles in
enhancing model generalization capabilities remain unclear. This paper studies
the difference between SFT and RL on generalization and memorization, focusing
on text-based rule variants and visual variants. We introduce GeneralPoints, an
arithmetic reasoning card game, and adopt V-IRL, a real-world navigation
environment, to assess how models trained with SFT and RL generalize to unseen
variants in both textual and visual domains. We show that RL, especially when
trained with an outcome-based reward, generalizes across both rule-based
textual and visual variants. SFT, in contrast, tends to memorize training data
and struggles to generalize out-of-distribution scenarios. Further analysis
reveals that RL improves the model's underlying visual recognition
capabilities, contributing to its enhanced generalization in the visual domain.
Despite RL's superior generalization, we show that SFT remains essential for
effective RL training; SFT stabilizes the model's output format, enabling
subsequent RL to achieve its performance gains. These findings demonstrates the
capability of RL for acquiring generalizable knowledge in complex, multi-modal
tasks.
| 108 |
6799b39d15f4661561c229e6
| null | null |
|
2025-01-28T23:50:12.472000 |
Optimizing Large Language Model Training Using FP4 Quantization
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.17116
|
[
{
"_id": "6799b367d30dc065a2d51592",
"hidden": false,
"name": "Ruizhe Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-30T08:40:31.349Z",
"user": {
"_id": "63203d4e260e691cfc19fcb1",
"avatarUrl": "/avatars/72437259c73cc4a950a2e84141097310.svg",
"fullname": "Ruizhe Wang",
"isPro": false,
"type": "user",
"user": "Mr-Philo"
}
},
{
"_id": "6799b367d30dc065a2d51593",
"hidden": false,
"name": "Yeyun Gong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799b367d30dc065a2d51594",
"hidden": false,
"name": "Xiao Liu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-29T17:16:05.045Z",
"user": {
"_id": "63fb6e281b4b1bd4e7ffc5be",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1677422062937-noauth.jpeg",
"fullname": "Xiao Liu",
"isPro": false,
"type": "user",
"user": "lx865712528"
}
},
{
"_id": "6799b367d30dc065a2d51595",
"hidden": false,
"name": "Guoshuai Zhao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:20:36.800Z",
"user": {
"_id": "663de80ca920d195191807da",
"avatarUrl": "/avatars/2437ce3fa073a07b971d370c26c7ab65.svg",
"fullname": "Guoshuai Zhao",
"isPro": false,
"type": "user",
"user": "crayonshine"
}
},
{
"_id": "6799b367d30dc065a2d51596",
"hidden": false,
"name": "Ziyue Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:20:42.213Z",
"user": {
"_id": "62f6a9add3bdacb7eec0d4f5",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1660332390183-noauth.jpeg",
"fullname": "Ziyue Yang",
"isPro": false,
"type": "user",
"user": "ziyueyang37"
}
},
{
"_id": "6799b367d30dc065a2d51597",
"hidden": false,
"name": "Baining Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799b367d30dc065a2d51598",
"hidden": false,
"name": "Zhengjun Zha",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799b367d30dc065a2d51599",
"hidden": false,
"name": "Peng Cheng",
"status": "extracted_pending",
"statusLastChangedAt": "2025-01-29T04:49:44.372Z",
"user": {
"_id": "653feb7ccf1f9c88f4928910",
"avatarUrl": "/avatars/23a6a6818116683ea9485e1470a0062f.svg",
"fullname": "Peng Cheng",
"isPro": false,
"type": "user",
"user": "cp5555"
}
}
] | 2025-01-28T18:04:50 |
Optimizing Large Language Model Training Using FP4 Quantization
|
The growing computational demands of training large language models (LLMs)
necessitate more efficient methods. Quantized training presents a promising
solution by enabling low-bit arithmetic operations to reduce these costs. While
FP8 precision has demonstrated feasibility, leveraging FP4 remains a challenge
due to significant quantization errors and limited representational capacity.
This work introduces the first FP4 training framework for LLMs, addressing
these challenges with two key innovations: a differentiable quantization
estimator for precise weight updates and an outlier clamping and compensation
strategy to prevent activation collapse. To ensure stability, the framework
integrates a mixed-precision training scheme and vector-wise quantization.
Experimental results demonstrate that our FP4 framework achieves accuracy
comparable to BF16 and FP8, with minimal degradation, scaling effectively to
13B-parameter LLMs trained on up to 100B tokens. With the emergence of
next-generation hardware supporting FP4, our framework sets a foundation for
efficient ultra-low precision training.
| 36 |
6799b368d30dc065a2d515bf
| null | null |
|
2025-01-28T23:49:26.959000 |
Over-Tokenized Transformer: Vocabulary is Generally Worth Scaling
| 4 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| true | null |
2501.16975
|
[
{
"_id": "6799b345a66ae6b357bef986",
"hidden": false,
"name": "Hongzhi Huang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-29T17:16:08.642Z",
"user": {
"_id": "66fbbed8a0154e0b6498293d",
"avatarUrl": "/avatars/5c4202f13cc5af6424154fae293fad52.svg",
"fullname": "Huang Hongzhi",
"isPro": false,
"type": "user",
"user": "xyzed"
}
},
{
"_id": "6799b345a66ae6b357bef987",
"hidden": false,
"name": "Defa Zhu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-29T17:16:06.916Z",
"user": {
"_id": "667505f4361b960c79e35486",
"avatarUrl": "/avatars/d352639c520075220f6abaae23c39376.svg",
"fullname": "Defa Zhu",
"isPro": false,
"type": "user",
"user": "mathfinder"
}
},
{
"_id": "6799b345a66ae6b357bef988",
"hidden": false,
"name": "Banggu Wu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:21:12.737Z",
"user": {
"_id": "64a3b0fa565496b629879293",
"avatarUrl": "/avatars/9d5de6cc4a01052ea971701f72bd3489.svg",
"fullname": "wubanggu",
"isPro": false,
"type": "user",
"user": "banggu"
}
},
{
"_id": "6799b345a66ae6b357bef989",
"hidden": false,
"name": "Yutao Zeng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-29T17:16:10.495Z",
"user": {
"_id": "6371128eafbe42caa5a5222b",
"avatarUrl": "/avatars/c3b2ab35949c38aa3dfb2657a1300aac.svg",
"fullname": "Yutao Zeng",
"isPro": false,
"type": "user",
"user": "Taoer"
}
},
{
"_id": "6799b345a66ae6b357bef98a",
"hidden": false,
"name": "Ya Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799b345a66ae6b357bef98b",
"hidden": false,
"name": "Qiyang Min",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:21:23.469Z",
"user": {
"_id": "645507b89d37c3fb33279fe3",
"avatarUrl": "/avatars/ec1122137c49204ab968182d1f726c35.svg",
"fullname": "min",
"isPro": false,
"type": "user",
"user": "qiyang-attn"
}
},
{
"_id": "6799b345a66ae6b357bef98c",
"hidden": false,
"name": "Xun Zhou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:21:37.188Z",
"user": {
"_id": "62533db4a06ec75172eeabe7",
"avatarUrl": "/avatars/b1a4dad90afae5c00df97233a97777db.svg",
"fullname": "xunzhou",
"isPro": false,
"type": "user",
"user": "xunzhou"
}
}
] | 2025-01-28T14:15:42 |
Over-Tokenized Transformer: Vocabulary is Generally Worth Scaling
|
Tokenization is a fundamental component of large language models (LLMs), yet
its influence on model scaling and performance is not fully explored. In this
paper, we introduce Over-Tokenized Transformers, a novel framework that
decouples input and output vocabularies to improve language modeling
performance. Specifically, our approach scales up input vocabularies to
leverage multi-gram tokens. Through extensive experiments, we uncover a
log-linear relationship between input vocabulary size and training loss,
demonstrating that larger input vocabularies consistently enhance model
performance, regardless of model size. Using a large input vocabulary, we
achieve performance comparable to double-sized baselines with no additional
cost. Our findings highlight the importance of tokenization in scaling laws and
provide practical insight for tokenizer design, paving the way for more
efficient and powerful LLMs.
| 26 |
6799b346a66ae6b357bef9e3
| null | null |
|
2025-01-28T23:48:30.888000 |
Open Problems in Mechanistic Interpretability
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.16496
|
[
{
"_id": "6799b2fbfe3c29ec219d7d99",
"hidden": false,
"name": "Lee Sharkey",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799b2fbfe3c29ec219d7d9a",
"hidden": false,
"name": "Bilal Chughtai",
"status": "extracted_pending",
"statusLastChangedAt": "2025-01-29T04:47:56.702Z",
"user": {
"_id": "64ad563f4beffa272de6efac",
"avatarUrl": "/avatars/f1a4902a95830cc3936058449626f8e4.svg",
"fullname": "Bilal Chughtai",
"isPro": false,
"type": "user",
"user": "bilalchughtai"
}
},
{
"_id": "6799b2fbfe3c29ec219d7d9b",
"hidden": false,
"name": "Joshua Batson",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:21:53.141Z",
"user": {
"_id": "6397aa4e41e44ea30597cd69",
"avatarUrl": "/avatars/5644feab569e5dbd0c4be9bd9c4646ce.svg",
"fullname": "Joshua Batson",
"isPro": false,
"type": "user",
"user": "thebasepoint"
}
},
{
"_id": "6799b2fbfe3c29ec219d7d9c",
"hidden": false,
"name": "Jack Lindsey",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:22:00.071Z",
"user": {
"_id": "65f244830d81c637f9cd78c8",
"avatarUrl": "/avatars/c3c196b89223a66b3a6b3a0ba2350f9f.svg",
"fullname": "Jack Lindsey",
"isPro": false,
"type": "user",
"user": "BV29"
}
},
{
"_id": "6799b2fbfe3c29ec219d7d9d",
"hidden": false,
"name": "Jeff Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799b2fbfe3c29ec219d7d9e",
"hidden": false,
"name": "Lucius Bushnaq",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799b2fbfe3c29ec219d7d9f",
"hidden": false,
"name": "Nicholas Goldowsky-Dill",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799b2fbfe3c29ec219d7da0",
"hidden": false,
"name": "Stefan Heimersheim",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:22:54.589Z",
"user": {
"_id": "66b392b2269c7cd48bef2f99",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/66b392b2269c7cd48bef2f99/7sOvaB4yAc_omhlYy9Jiz.jpeg",
"fullname": "Stefan Heimersheim",
"isPro": false,
"type": "user",
"user": "stefanhex-apollo"
}
},
{
"_id": "6799b2fbfe3c29ec219d7da1",
"hidden": false,
"name": "Alejandro Ortega",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799b2fbfe3c29ec219d7da2",
"hidden": false,
"name": "Joseph Bloom",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:23:26.417Z",
"user": {
"_id": "6313dffa0b24eab4746ff6a4",
"avatarUrl": "/avatars/ccccd6c832572723602019415992b6ac.svg",
"fullname": "Joseph Bloom",
"isPro": false,
"type": "user",
"user": "jbloom"
}
},
{
"_id": "6799b2fbfe3c29ec219d7da3",
"hidden": false,
"name": "Stella Biderman",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:23:32.126Z",
"user": {
"_id": "60347d3660e3dd96631c9093",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/60347d3660e3dd96631c9093/B3fuZer5N04tZIAYrLnz4.jpeg",
"fullname": "Stella Biderman",
"isPro": false,
"type": "user",
"user": "stellaathena"
}
},
{
"_id": "6799b2fbfe3c29ec219d7da4",
"hidden": false,
"name": "Adria Garriga-Alonso",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:23:39.165Z",
"user": {
"_id": "645ecd18f0f92653b9f33d4e",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/645ecd18f0f92653b9f33d4e/nHDMWtM9ZHrji0c4Y4XW1.jpeg",
"fullname": "Adrià Garriga-Alonso",
"isPro": false,
"type": "user",
"user": "agaralon"
}
},
{
"_id": "6799b2fbfe3c29ec219d7da5",
"hidden": false,
"name": "Arthur Conmy",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:23:55.013Z",
"user": {
"_id": "631ecc58daa9591e522e1494",
"avatarUrl": "/avatars/4f808fae966e808105e89712c97d90d2.svg",
"fullname": "VConm",
"isPro": false,
"type": "user",
"user": "ArthurConmy"
}
},
{
"_id": "6799b2fbfe3c29ec219d7da6",
"hidden": false,
"name": "Neel Nanda",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:24:02.966Z",
"user": {
"_id": "62669380c8bc5cf80ca97350",
"avatarUrl": "/avatars/6d5cd2261163308b82341c1ce28984d1.svg",
"fullname": "Neel Nanda",
"isPro": false,
"type": "user",
"user": "NeelNanda"
}
},
{
"_id": "6799b2fbfe3c29ec219d7da7",
"hidden": false,
"name": "Jessica Rumbelow",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:24:13.466Z",
"user": {
"_id": "64faffff304b8cb412aca2c6",
"avatarUrl": "/avatars/1d3c97f338cf9eae7b786b202da99092.svg",
"fullname": "Jessica Rumbelow",
"isPro": false,
"type": "user",
"user": "J-RUM"
}
},
{
"_id": "6799b2fbfe3c29ec219d7da8",
"hidden": false,
"name": "Martin Wattenberg",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:24:23.067Z",
"user": {
"_id": "6303e1cc0907b9a115c4b047",
"avatarUrl": "/avatars/0e9bd14e28ead268b2f0cf40f39b53c2.svg",
"fullname": "Martin Wattenberg",
"isPro": false,
"type": "user",
"user": "wattenberg"
}
},
{
"_id": "6799b2fbfe3c29ec219d7da9",
"hidden": false,
"name": "Nandi Schoots",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799b2fbfe3c29ec219d7daa",
"hidden": false,
"name": "Joseph Miller",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799b2fbfe3c29ec219d7dab",
"hidden": false,
"name": "Eric J. Michaud",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:24:59.946Z",
"user": {
"_id": "6303e59c7b50dd9d0a367375",
"avatarUrl": "/avatars/353aaaae6db1c7dc89895927a65fa9b1.svg",
"fullname": "Eric Michaud",
"isPro": false,
"type": "user",
"user": "ericjm"
}
},
{
"_id": "6799b2fbfe3c29ec219d7dac",
"hidden": false,
"name": "Stephen Casper",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:25:19.810Z",
"user": {
"_id": "6466a046326128fd2c6c59c2",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6466a046326128fd2c6c59c2/kYwPcxupelOBvKFB0y8Me.png",
"fullname": "Stephen Casper",
"isPro": false,
"type": "user",
"user": "stecas"
}
},
{
"_id": "6799b2fbfe3c29ec219d7dad",
"hidden": false,
"name": "Max Tegmark",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799b2fbfe3c29ec219d7dae",
"hidden": false,
"name": "William Saunders",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:25:30.107Z",
"user": {
"_id": "6689b044b424c825f7244cab",
"avatarUrl": "/avatars/d6d519689012a840f186397f5cd24c66.svg",
"fullname": "William Saunders",
"isPro": false,
"type": "user",
"user": "william-r-s"
}
},
{
"_id": "6799b2fbfe3c29ec219d7daf",
"hidden": false,
"name": "David Bau",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:25:43.370Z",
"user": {
"_id": "6214d6c01e35c843d42d1f77",
"avatarUrl": "/avatars/ac208cd180b4f3ed1ec367e581facfcf.svg",
"fullname": "David Bau",
"isPro": false,
"type": "user",
"user": "davidbau"
}
},
{
"_id": "6799b2fbfe3c29ec219d7db0",
"hidden": false,
"name": "Eric Todd",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T17:32:56.846Z",
"user": {
"_id": "645a5a01c35da9c7afd5cdc3",
"avatarUrl": "/avatars/bb216e9194514faaf195cc4ab525a6ed.svg",
"fullname": "Eric Todd",
"isPro": false,
"type": "user",
"user": "ericwtodd"
}
},
{
"_id": "6799b2fbfe3c29ec219d7db1",
"hidden": false,
"name": "Atticus Geiger",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:25:56.041Z",
"user": {
"_id": "627b2d0527dc4650b62eef42",
"avatarUrl": "/avatars/e70381850f5657b54e90f5539f3d74eb.svg",
"fullname": "Atticus Geiger",
"isPro": false,
"type": "user",
"user": "atticusg"
}
},
{
"_id": "6799b2fbfe3c29ec219d7db2",
"hidden": false,
"name": "Mor Geva",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:26:02.930Z",
"user": {
"_id": "610b729f9da682cd54ad9adf",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1628140189042-noauth.jpeg",
"fullname": "Mor Geva",
"isPro": false,
"type": "user",
"user": "mega"
}
},
{
"_id": "6799b2fbfe3c29ec219d7db3",
"hidden": false,
"name": "Jesse Hoogland",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:26:09.318Z",
"user": {
"_id": "630f0804236215d0b705996a",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1661929457771-noauth.png",
"fullname": "Jesse Hoogland",
"isPro": false,
"type": "user",
"user": "jqhoogland"
}
},
{
"_id": "6799b2fbfe3c29ec219d7db4",
"hidden": false,
"name": "Daniel Murfet",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6799b2fbfe3c29ec219d7db5",
"hidden": false,
"name": "Tom McGrath",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-27T20:57:18 |
Open Problems in Mechanistic Interpretability
|
Mechanistic interpretability aims to understand the computational mechanisms
underlying neural networks' capabilities in order to accomplish concrete
scientific and engineering goals. Progress in this field thus promises to
provide greater assurance over AI system behavior and shed light on exciting
scientific questions about the nature of intelligence. Despite recent progress
toward these goals, there are many open problems in the field that require
solutions before many scientific and practical benefits can be realized: Our
methods require both conceptual and practical improvements to reveal deeper
insights; we must figure out how best to apply our methods in pursuit of
specific goals; and the field must grapple with socio-technical challenges that
influence and are influenced by our work. This forward-facing review discusses
the current frontier of mechanistic interpretability and the open problems that
the field may benefit from prioritizing.
| 19 |
6799b2fcfe3c29ec219d7dca
| null | null |
|
2025-01-28T22:11:04.472000 |
Low-Rank Adapters Meet Neural Architecture Search for LLM Compression
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.16372
|
[
{
"_id": "67999c3dc1e34886f90320ee",
"hidden": false,
"name": "J. Pablo Muñoz",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:27:08.761Z",
"user": {
"_id": "63dece97e742e86dc92169b9",
"avatarUrl": "/avatars/b408736f5089200ffd2898cd00132f0a.svg",
"fullname": "J. Pablo Munoz",
"isPro": false,
"type": "user",
"user": "jpablomch"
}
},
{
"_id": "67999c3dc1e34886f90320ef",
"hidden": false,
"name": "Jinjie Yuan",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:27:14.404Z",
"user": {
"_id": "6541d41ae2e170b6a8de2f78",
"avatarUrl": "/avatars/f7b524d17910b0e93548d08089d24f60.svg",
"fullname": "Jinjie Yuan",
"isPro": false,
"type": "user",
"user": "jinjieyuan"
}
},
{
"_id": "67999c3dc1e34886f90320f0",
"hidden": false,
"name": "Nilesh Jain",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:27:20.569Z",
"user": {
"_id": "62d28f0c8d8206fd4d84ceca",
"avatarUrl": "/avatars/bfcaa0ba84e80710964f4161e0aa56b7.svg",
"fullname": "Nilesh Jain",
"isPro": false,
"type": "user",
"user": "dalle2"
}
}
] | 2025-01-23T02:14:08 |
Low-Rank Adapters Meet Neural Architecture Search for LLM Compression
|
The rapid expansion of Large Language Models (LLMs) has posed significant
challenges regarding the computational resources required for fine-tuning and
deployment. Recent advancements in low-rank adapters have demonstrated their
efficacy in parameter-efficient fine-tuning (PEFT) of these models. This
retrospective paper comprehensively discusses innovative approaches that
synergize low-rank representations with Neural Architecture Search (NAS)
techniques, particularly weight-sharing super-networks. Robust solutions for
compressing and fine-tuning large pre-trained models are developed by
integrating these methodologies. Our analysis highlights the potential of these
combined strategies to democratize the use of LLMs, making them more accessible
for deployment in resource-constrained environments. The resulting models
exhibit reduced memory footprints and faster inference times, paving the way
for more practical and scalable applications of LLMs. Models and code are
available at
https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning.
| 9 |
67999c3dc1e34886f9032140
| null | null |
|
2025-01-28T21:38:17.182000 |
IndicMMLU-Pro: Benchmarking Indic Large Language Models on Multi-Task Language Understanding
| 2 |
{
"_id": "63a4754927f1f64ed7238dac",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63a4754927f1f64ed7238dac/aH-eJF-31g4vof9jv2gmI.jpeg",
"followerCount": 3,
"fullname": "Aman Chadha",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "amanchadha",
"type": "user"
}
| true | null |
2501.15747
|
[
{
"_id": "6799946c18cb282841d42639",
"hidden": false,
"name": "Sankalp KJ",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:27:40.565Z",
"user": {
"_id": "664e218ea10258b399d5c358",
"avatarUrl": "/avatars/77f2ee0949560504e1643263fd7084da.svg",
"fullname": "Sankalp KJ",
"isPro": false,
"type": "user",
"user": "SankalpKJ"
}
},
{
"_id": "6799946c18cb282841d4263a",
"hidden": false,
"name": "Ashutosh Kumar",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-29T21:06:22.798Z",
"user": {
"_id": "65c7fa3a242e3ee0c656927d",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/65c7fa3a242e3ee0c656927d/yulc-eFl2hntzfbRJWiox.jpeg",
"fullname": "Ashutosh Kumar",
"isPro": false,
"type": "user",
"user": "ashu-1069"
}
},
{
"_id": "6799946c18cb282841d4263b",
"hidden": false,
"name": "Laxmaan Balaji",
"status": "extracted_pending",
"statusLastChangedAt": "2025-01-29T02:37:34.240Z",
"user": {
"_id": "66707b60405252abeefd4c50",
"avatarUrl": "/avatars/ee2728f115376e234e96820b8b376849.svg",
"fullname": "Laxmaan Balaji",
"isPro": false,
"type": "user",
"user": "laxmaanb"
}
},
{
"_id": "6799946c18cb282841d4263c",
"hidden": false,
"name": "Nikunj Kotecha",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:28:09.074Z",
"user": {
"_id": "66c607f85eae54107879aee9",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/66c607f85eae54107879aee9/m2rRKcN9H6baABGzN-Fzr.png",
"fullname": "Nikunj Kotecha",
"isPro": false,
"type": "user",
"user": "nikunjkotecha"
}
},
{
"_id": "6799946c18cb282841d4263d",
"hidden": false,
"name": "Vinija Jain",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:28:16.742Z",
"user": {
"_id": "6517bfecce732bf33a29d04b",
"avatarUrl": "/avatars/b6534a540fa10199df8f9acc497083d5.svg",
"fullname": "Vinija Jain",
"isPro": false,
"type": "user",
"user": "Vinija"
}
},
{
"_id": "6799946c18cb282841d4263e",
"hidden": false,
"name": "Aman Chadha",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-29T08:55:14.312Z",
"user": {
"_id": "63a4754927f1f64ed7238dac",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63a4754927f1f64ed7238dac/aH-eJF-31g4vof9jv2gmI.jpeg",
"fullname": "Aman Chadha",
"isPro": false,
"type": "user",
"user": "amanchadha"
}
},
{
"_id": "6799946c18cb282841d4263f",
"hidden": false,
"name": "Sreyoshi Bhaduri",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-29T17:28:22.871Z",
"user": {
"_id": "6760b279deb31f2c1e5b4f42",
"avatarUrl": "/avatars/b3863ab0d63d052198dc9b4261def623.svg",
"fullname": "sreyoshi bhaduri",
"isPro": false,
"type": "user",
"user": "sreyoshibhaduri"
}
}
] | 2025-01-27T03:19:03 |
IndicMMLU-Pro: Benchmarking Indic Large Language Models on Multi-Task
Language Understanding
|
Known by more than 1.5 billion people in the Indian subcontinent, Indic
languages present unique challenges and opportunities for natural language
processing (NLP) research due to their rich cultural heritage, linguistic
diversity, and complex structures. IndicMMLU-Pro is a comprehensive benchmark
designed to evaluate Large Language Models (LLMs) across Indic languages,
building upon the MMLU Pro (Massive Multitask Language Understanding)
framework. Covering major languages such as Hindi, Bengali, Gujarati, Marathi,
Kannada, Punjabi, Tamil, Telugu, and Urdu, our benchmark addresses the unique
challenges and opportunities presented by the linguistic diversity of the
Indian subcontinent. This benchmark encompasses a wide range of tasks in
language comprehension, reasoning, and generation, meticulously crafted to
capture the intricacies of Indian languages. IndicMMLU-Pro provides a
standardized evaluation framework to push the research boundaries in Indic
language AI, facilitating the development of more accurate, efficient, and
culturally sensitive models. This paper outlines the benchmarks' design
principles, task taxonomy, and data collection methodology, and presents
baseline results from state-of-the-art multilingual models.
| 7 |
6799946e18cb282841d426d6
| null | null |
|
2025-01-28T15:00:51.189000 |
CodeMonkeys: Scaling Test-Time Compute for Software Engineering
| 2 |
{
"_id": "60799bed489fc71534e91bf3",
"avatarUrl": "/avatars/0f57ee357b29fed39f253f28e39abf6b.svg",
"followerCount": null,
"fullname": "Brown",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "Bradley",
"type": "user"
}
| true |
[
"https://cdn-uploads.huggingface.co/production/uploads/60799bed489fc71534e91bf3/6odkURJDAikOXG2gFhzC0.png"
] |
2501.14723
|
[
{
"_id": "67991502dc9404d4424ce38c",
"hidden": false,
"name": "Ryan Ehrlich",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67991502dc9404d4424ce38d",
"hidden": false,
"name": "Bradley Brown",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67991502dc9404d4424ce38e",
"hidden": false,
"name": "Jordan Juravsky",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67991502dc9404d4424ce38f",
"hidden": false,
"name": "Ronald Clark",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67991502dc9404d4424ce390",
"hidden": false,
"name": "Christopher Ré",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67991502dc9404d4424ce391",
"hidden": false,
"name": "Azalia Mirhoseini",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-02T01:37:18.386Z",
"user": {
"_id": "66ba4e72c9b2ab14b3707be0",
"avatarUrl": "/avatars/97ef14d683ed2d0115a9c4694b9763dc.svg",
"fullname": "Azalia Mirhoseini",
"isPro": false,
"type": "user",
"user": "am34"
}
}
] | 2025-01-24T18:58:40 |
CodeMonkeys: Scaling Test-Time Compute for Software Engineering
|
Scaling test-time compute is a promising axis for improving LLM capabilities.
However, test-time compute can be scaled in a variety of ways, and effectively
combining different approaches remains an active area of research. Here, we
explore this problem in the context of solving real-world GitHub issues from
the SWE-bench dataset. Our system, named CodeMonkeys, allows models to
iteratively edit a codebase by jointly generating and running a testing script
alongside their draft edit. We sample many of these multi-turn trajectories for
every issue to generate a collection of candidate edits. This approach lets us
scale "serial" test-time compute by increasing the number of iterations per
trajectory and "parallel" test-time compute by increasing the number of
trajectories per problem. With parallel scaling, we can amortize up-front costs
across multiple downstream samples, allowing us to identify relevant codebase
context using the simple method of letting an LLM read every file. In order to
select between candidate edits, we combine voting using model-generated tests
with a final multi-turn trajectory dedicated to selection. Overall, CodeMonkeys
resolves 57.4% of issues from SWE-bench Verified using a budget of
approximately 2300 USD. Our selection method can also be used to combine
candidates from different sources. Selecting over an ensemble of edits from
existing top SWE-bench Verified submissions obtains a score of 66.2% and
outperforms the best member of the ensemble on its own. We fully release our
code and data at https://scalingintelligence.stanford.edu/pubs/codemonkeys.
| 9 |
67991503dc9404d4424ce3e7
| null | null |
|
2025-01-28T14:17:04.887000 |
Visual Generation Without Guidance
| 3 |
{
"_id": "65571135bfb62d747abc8129",
"avatarUrl": "/avatars/5f4542daa34597f17e6280b9cce18c91.svg",
"followerCount": 4,
"fullname": "Hugging",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ChenDRAG",
"type": "user"
}
| true | null |
2501.15420
|
[
{
"_id": "67992c274c3dbd12f9c75abb",
"hidden": false,
"name": "Huayu Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-28T19:24:41.148Z",
"user": {
"_id": "65571135bfb62d747abc8129",
"avatarUrl": "/avatars/5f4542daa34597f17e6280b9cce18c91.svg",
"fullname": "Hugging",
"isPro": false,
"type": "user",
"user": "ChenDRAG"
}
},
{
"_id": "67992c274c3dbd12f9c75abc",
"hidden": false,
"name": "Kai Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67992c274c3dbd12f9c75abd",
"hidden": false,
"name": "Kaiwen Zheng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T09:41:04.468Z",
"user": {
"_id": "652bf7edc3cba555d5673c6e",
"avatarUrl": "/avatars/78f6416c30203b30671f8423f061c657.svg",
"fullname": "Kaiwen Zheng",
"isPro": false,
"type": "user",
"user": "worstcoder"
}
},
{
"_id": "67992c274c3dbd12f9c75abe",
"hidden": true,
"name": "Jianfei Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T19:43:24.216Z",
"user": {
"_id": "65fcad0ba0d7adc40b54fac2",
"avatarUrl": "/avatars/7564b5642378fddb46ec3b5ae57c0402.svg",
"fullname": "Jianfei Chen",
"isPro": false,
"type": "user",
"user": "surfingtomchen"
}
},
{
"_id": "67992c274c3dbd12f9c75abf",
"hidden": false,
"name": "Hang Su",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67992c274c3dbd12f9c75ac0",
"hidden": false,
"name": "Jun Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-26T06:48:05 |
Visual Generation Without Guidance
|
Classifier-Free Guidance (CFG) has been a default technique in various visual
generative models, yet it requires inference from both conditional and
unconditional models during sampling. We propose to build visual models that
are free from guided sampling. The resulting algorithm, Guidance-Free Training
(GFT), matches the performance of CFG while reducing sampling to a single
model, halving the computational cost. Unlike previous distillation-based
approaches that rely on pretrained CFG networks, GFT enables training directly
from scratch. GFT is simple to implement. It retains the same maximum
likelihood objective as CFG and differs mainly in the parameterization of
conditional models. Implementing GFT requires only minimal modifications to
existing codebases, as most design choices and hyperparameters are directly
inherited from CFG. Our extensive experiments across five distinct visual
models demonstrate the effectiveness and versatility of GFT. Across domains of
diffusion, autoregressive, and masked-prediction modeling, GFT consistently
achieves comparable or even lower FID scores, with similar diversity-fidelity
trade-offs compared with CFG baselines, all while being guidance-free. Code
will be available at https://github.com/thu-ml/GFT.
| 8 |
67992c2a4c3dbd12f9c75b9a
| null | null |
|
2025-01-28T14:06:36.924000 |
Are Vision Language Models Texture or Shape Biased and Can We Steer Them?
| 2 |
{
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
}
| false | null |
2403.09193
|
[
{
"_id": "674c73752f5974eb9a7ed124",
"hidden": false,
"name": "Paul Gavrikov",
"status": "claimed_verified",
"statusLastChangedAt": "2024-12-02T08:50:16.337Z",
"user": {
"_id": "6266c07e7a1f5a1562c4113b",
"avatarUrl": "/avatars/f20e6d735ff52e2941c2240fda42c422.svg",
"fullname": "Paul Gavrikov",
"isPro": false,
"type": "user",
"user": "paulgavrikov"
}
},
{
"_id": "674c73752f5974eb9a7ed125",
"hidden": false,
"name": "Jovita Lukasik",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "674c73752f5974eb9a7ed126",
"hidden": false,
"name": "Steffen Jung",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "674c73752f5974eb9a7ed127",
"hidden": false,
"name": "Robert Geirhos",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-17T17:22:38.274Z",
"user": {
"_id": "673bbe0d7dfcdedd52619ec2",
"avatarUrl": "/avatars/531a44f05d0c738bbe3e028c76c2e948.svg",
"fullname": "Robert Geirhos",
"isPro": false,
"type": "user",
"user": "rgeirhos"
}
},
{
"_id": "674c73752f5974eb9a7ed128",
"hidden": false,
"name": "Bianca Lamm",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "674c73752f5974eb9a7ed129",
"hidden": false,
"name": "Muhammad Jehanzeb Mirza",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "674c73752f5974eb9a7ed12a",
"hidden": false,
"name": "Margret Keuper",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "674c73752f5974eb9a7ed12b",
"hidden": false,
"name": "Janis Keuper",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2024-03-14T09:07:14 |
Are Vision Language Models Texture or Shape Biased and Can We Steer
Them?
|
Vision language models (VLMs) have drastically changed the computer vision
model landscape in only a few years, opening an exciting array of new
applications from zero-shot image classification, over to image captioning, and
visual question answering. Unlike pure vision models, they offer an intuitive
way to access visual content through language prompting. The wide applicability
of such models encourages us to ask whether they also align with human vision -
specifically, how far they adopt human-induced visual biases through multimodal
fusion, or whether they simply inherit biases from pure vision models. One
important visual bias is the texture vs. shape bias, or the dominance of local
over global information. In this paper, we study this bias in a wide range of
popular VLMs. Interestingly, we find that VLMs are often more shape-biased than
their vision encoders, indicating that visual biases are modulated to some
extent through text in multimodal models. If text does indeed influence visual
biases, this suggests that we may be able to steer visual biases not just
through visual input but also through language: a hypothesis that we confirm
through extensive experiments. For instance, we are able to steer shape bias
from as low as 49% to as high as 72% through prompting alone. For now, the
strong human bias towards shape (96%) remains out of reach for all tested VLMs.
| 9 |
674c73762f5974eb9a7ed1a1
| null | null |
|
2025-01-28T12:39:34.021000 |
OpenCharacter: Training Customizable Role-Playing LLMs with Large-Scale Synthetic Personas
| 2 |
{
"_id": "657cd228138b7e391444a65d",
"avatarUrl": "/avatars/c7c984ae483144fab627aa2c54d91d0f.svg",
"followerCount": 6,
"fullname": "Xiaoyang Wang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "xywang1",
"type": "user"
}
| true | null |
2501.15427
|
[
{
"_id": "67984dfa6e816a0edaa8d7b1",
"hidden": false,
"name": "Xiaoyang Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T19:41:12.925Z",
"user": {
"_id": "657cd228138b7e391444a65d",
"avatarUrl": "/avatars/c7c984ae483144fab627aa2c54d91d0f.svg",
"fullname": "Xiaoyang Wang",
"isPro": false,
"type": "user",
"user": "xywang1"
}
},
{
"_id": "67984dfa6e816a0edaa8d7b2",
"hidden": false,
"name": "Hongming Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T19:41:33.772Z",
"user": {
"_id": "64ed478bec06efeb03034933",
"avatarUrl": "/avatars/cd7dc3165831e90cb36d39d41c3c8157.svg",
"fullname": "Hongming Zhang",
"isPro": false,
"type": "user",
"user": "Hongming98"
}
},
{
"_id": "67984dfa6e816a0edaa8d7b3",
"hidden": false,
"name": "Tao Ge",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67984dfa6e816a0edaa8d7b4",
"hidden": false,
"name": "Wenhao Yu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T19:42:23.986Z",
"user": {
"_id": "5feab3a28a3201f8e554c969",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1660795228685-5feab3a28a3201f8e554c969.png",
"fullname": "Wenhao Yu",
"isPro": false,
"type": "user",
"user": "wyu1"
}
},
{
"_id": "67984dfa6e816a0edaa8d7b5",
"hidden": false,
"name": "Dian Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67984dfa6e816a0edaa8d7b6",
"hidden": false,
"name": "Dong Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-26T07:07:01 |
OpenCharacter: Training Customizable Role-Playing LLMs with Large-Scale
Synthetic Personas
|
Customizable role-playing in large language models (LLMs), also known as
character generalization, is gaining increasing attention for its versatility
and cost-efficiency in developing and deploying role-playing dialogue agents.
This study explores a large-scale data synthesis approach to equip LLMs with
character generalization capabilities. We begin by synthesizing large-scale
character profiles using personas from Persona Hub and then explore two
strategies: response rewriting and response generation, to create
character-aligned instructional responses. To validate the effectiveness of our
synthetic instruction tuning data for character generalization, we perform
supervised fine-tuning (SFT) using the LLaMA-3 8B model. Our best-performing
model strengthens the original LLaMA-3 8B Instruct model and achieves
performance comparable to GPT-4o models on role-playing dialogue. We release
our synthetic characters and instruction-tuning dialogues to support public
research.
| 6 |
67984dfb6e816a0edaa8d7de
| null | null |
|
2025-01-28T12:09:18.563000 |
Return of the Encoder: Maximizing Parameter Efficiency for SLMs
| 2 |
{
"_id": "67984a3a02ff123f680a15c6",
"avatarUrl": "/avatars/3595c8962e1739325ba03ead8f76d2e9.svg",
"followerCount": null,
"fullname": "Mohamed Elfeki",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "melfeki11",
"type": "user"
}
| true | null |
2501.16273
|
[
{
"_id": "67984addd46e4d88ee27f43f",
"hidden": false,
"name": "Mohamed Elfeki",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-28T10:10:17.395Z",
"user": {
"_id": "67984a3a02ff123f680a15c6",
"avatarUrl": "/avatars/3595c8962e1739325ba03ead8f76d2e9.svg",
"fullname": "Mohamed Elfeki",
"isPro": false,
"type": "user",
"user": "melfeki11"
}
},
{
"_id": "67984addd46e4d88ee27f440",
"hidden": false,
"name": "Rui Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67984addd46e4d88ee27f441",
"hidden": false,
"name": "Chad Voegele",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-27T18:06:36 |
Return of the Encoder: Maximizing Parameter Efficiency for SLMs
|
The dominance of large decoder-only language models has overshadowed
encoder-decoder architectures, despite their fundamental efficiency advantages
in sequence processing. For small language models (SLMs) - those with 1 billion
parameters or fewer - our systematic analysis across GPU, CPU, and NPU
platforms reveals that encoder-decoder architectures achieve 47% lower
first-token latency and 4.7x higher throughput compared to decoder-only models
on edge devices. These gains may be attributed to encoder-decoder's one-time
input processing and efficient separation of understanding and generation
phases.
We introduce a novel knowledge distillation framework that enables
encoder-decoder models to leverage capabilities from large scalable
decoder-only teachers while preserving their architectural advantages,
achieving up to 6 average performance points improvement across diverse tasks,
with significant gains in asymmetric sequence tasks where input and output
distributions can benefit from different processing approaches.
When combined with modern advances like Rotary Positional Embeddings (RoPE)
and Vision encoders, our systematic investigation demonstrates that
encoder-decoder architectures provide a more practical path toward deploying
capable language models in resource-constrained environments. Our findings
challenge the prevailing trend toward decoder-only scaling, showing that
architectural choices become increasingly crucial as parameter budgets
decrease, particularly for on-device and edge deployments where computational
efficiency is paramount.
| 5 |
67984addd46e4d88ee27f47f
| null | null |
|
2025-01-28T07:42:14.777000 |
Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models
| 2 |
{
"_id": "651e96991b97c9f33d26bde6",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/651e96991b97c9f33d26bde6/-Bqs6qrmz0yCfwtB2e-6q.jpeg",
"followerCount": 128,
"fullname": "Elie Bakouch",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "eliebak",
"type": "user"
}
| false | null |
2501.12370
|
[
{
"_id": "6798d09d208ffebef5bcfa47",
"hidden": false,
"name": "Samira Abnar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6798d09d208ffebef5bcfa48",
"hidden": false,
"name": "Harshay Shah",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T14:16:56.095Z",
"user": {
"_id": "64b1a4f64dd3e24895daa236",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64b1a4f64dd3e24895daa236/lzlZ4DBOw-YVspNttAEY3.jpeg",
"fullname": "Harshay Shah",
"isPro": false,
"type": "user",
"user": "harshay"
}
},
{
"_id": "6798d09d208ffebef5bcfa49",
"hidden": false,
"name": "Dan Busbridge",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T14:17:03.153Z",
"user": {
"_id": "64c3726f2a5eaefd000cdedd",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64c3726f2a5eaefd000cdedd/iwFifH1sWQy7agW3eTmNQ.png",
"fullname": "Dan Busbridge",
"isPro": false,
"type": "user",
"user": "dbusbridge"
}
},
{
"_id": "6798d09d208ffebef5bcfa4a",
"hidden": false,
"name": "Alaaeldin Mohamed Elnouby Ali",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6798d09d208ffebef5bcfa4b",
"hidden": false,
"name": "Josh Susskind",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6798d09d208ffebef5bcfa4c",
"hidden": false,
"name": "Vimal Thilak",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T14:17:43.503Z",
"user": {
"_id": "6737e918edaf7e05e4b35791",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/uk--P2mE2Jeg1QoxmXSTS.png",
"fullname": "Vimal Thilak",
"isPro": false,
"type": "user",
"user": "vimalthilak"
}
}
] | 2025-01-21T18:51:15 |
Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for
Mixture-of-Experts Language Models
|
Scaling the capacity of language models has consistently proven to be a
reliable approach for improving performance and unlocking new capabilities.
Capacity can be primarily defined by two dimensions: the number of model
parameters and the compute per example. While scaling typically involves
increasing both, the precise interplay between these factors and their combined
contribution to overall capacity remains not fully understood. We explore this
relationship in the context of sparse Mixture-of-Experts (MoEs), which allow
scaling the number of parameters without proportionally increasing the FLOPs
per example. We investigate how varying the sparsity level, i.e., the fraction
of inactive parameters, impacts model's performance during pretraining and
downstream few-shot evaluation. We find that under different constraints (e.g.,
parameter size and total training compute), there is an optimal level of
sparsity that improves both training efficiency and model performance. These
results provide a better understanding of the impact of sparsity in scaling
laws for MoEs and complement existing works in this area, offering insights for
designing more efficient architectures.
| 11 |
6798d09e208ffebef5bcfa9c
| null | null |
|
2025-01-28T05:40:25.750000 |
Emilia: A Large-Scale, Extensive, Multilingual, and Diverse Dataset for Speech Generation
| 2 |
{
"_id": "61a7569eaf0333e76eb428a8",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/61a7569eaf0333e76eb428a8/zwseNheR4Hx0DtCmf_v5H.jpeg",
"followerCount": 11,
"fullname": "HarryHe11",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "HarryHe",
"type": "user"
}
| true | null |
2501.15907
|
[
{
"_id": "6798a917a8b0d165e39e17f5",
"hidden": false,
"name": "Haorui He",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-28T13:52:52.095Z",
"user": {
"_id": "61a7569eaf0333e76eb428a8",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/61a7569eaf0333e76eb428a8/zwseNheR4Hx0DtCmf_v5H.jpeg",
"fullname": "HarryHe11",
"isPro": false,
"type": "user",
"user": "HarryHe"
}
},
{
"_id": "6798a917a8b0d165e39e17f6",
"hidden": false,
"name": "Zengqiang Shang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T14:12:12.565Z",
"user": {
"_id": "64b77dd308e2452d18ddd279",
"avatarUrl": "/avatars/258f21fa20a3a187050d80c6088a1f50.svg",
"fullname": "shangzengqiang",
"isPro": false,
"type": "user",
"user": "clatter-1"
}
},
{
"_id": "6798a917a8b0d165e39e17f7",
"hidden": false,
"name": "Chaoren Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6798a917a8b0d165e39e17f8",
"hidden": false,
"name": "Xuyuan Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6798a917a8b0d165e39e17f9",
"hidden": false,
"name": "Yicheng Gu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T14:12:57.932Z",
"user": {
"_id": "66b5f38a080d890d1727a2a4",
"avatarUrl": "/avatars/4d73017ce888437225d994d8ba370e5d.svg",
"fullname": "guyicheng",
"isPro": false,
"type": "user",
"user": "guyicheng"
}
},
{
"_id": "6798a917a8b0d165e39e17fa",
"hidden": false,
"name": "Hua Hua",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6798a917a8b0d165e39e17fb",
"hidden": false,
"name": "Liwei Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6798a917a8b0d165e39e17fc",
"hidden": false,
"name": "Chen Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6798a917a8b0d165e39e17fd",
"hidden": false,
"name": "Jiaqi Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T14:14:30.597Z",
"user": {
"_id": "6635a711a5243c9638f5e4df",
"avatarUrl": "/avatars/08651622fc1fd5089551b510be8c4530.svg",
"fullname": "Jiaqi Li",
"isPro": false,
"type": "user",
"user": "jiaqili3"
}
},
{
"_id": "6798a917a8b0d165e39e17fe",
"hidden": false,
"name": "Peiyang Shi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6798a917a8b0d165e39e17ff",
"hidden": false,
"name": "Yuancheng Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T14:14:50.950Z",
"user": {
"_id": "63072d60cd148dbc5e49f4dd",
"avatarUrl": "/avatars/ffa61038c0ff20848fbcde7c1c34570e.svg",
"fullname": "Yuancheng Wang",
"isPro": false,
"type": "user",
"user": "Hecheng0625"
}
},
{
"_id": "6798a917a8b0d165e39e1800",
"hidden": false,
"name": "Kai Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6798a917a8b0d165e39e1801",
"hidden": false,
"name": "Pengyuan Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T14:14:57.414Z",
"user": {
"_id": "65fbe8eb030389a29b87446f",
"avatarUrl": "/avatars/6d3ba153c41945e566b7c2c2d6af6da6.svg",
"fullname": "pengyuan zhang",
"isPro": false,
"type": "user",
"user": "pengyuan2024"
}
},
{
"_id": "6798a917a8b0d165e39e1802",
"hidden": false,
"name": "Zhizheng Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-27T09:59:20 |
Emilia: A Large-Scale, Extensive, Multilingual, and Diverse Dataset for
Speech Generation
|
Recent advancements in speech generation have been driven by the large-scale
training datasets. However, current models fall short of capturing the
spontaneity and variability inherent in real-world human speech, due to their
reliance on audiobook datasets limited to formal read-aloud speech styles. To
bridge this gap, we introduce Emilia-Pipe, an open-source preprocessing
pipeline to extract high-quality training data from valuable yet underexplored
in-the-wild data that capture spontaneous human speech in real-world contexts.
By leveraging Emilia-Pipe, we construct Emilia, the first multilingual speech
generation dataset derived from in-the-wild speech data. This dataset comprises
over 101k hours of speech across six languages: English, Chinese, German,
French, Japanese, and Korean. Besides, we expand Emilia to Emilia-Large, a
dataset exceeding 216k hours, making it the largest open-source speech
generation dataset available. Extensive experiments demonstrate that Emilia
significantly outperforms traditional audiobook datasets in generating
spontaneous and human-like speech, showcasing superior performance in capturing
diverse speaker timbre and speaking styles of real-world human speech.
Furthermore, this work underscores the importance of scaling dataset size to
advance speech generation research and validates the effectiveness of Emilia
for both multilingual and crosslingual speech generation.
| 15 |
6798a919a8b0d165e39e187d
| null | null |
|
2025-01-28T03:02:56.062000 |
ARWKV: Pretrain is not what we need, an RNN-Attention-Based Language Model Born from Transformer
| 2 |
{
"_id": "6176b32847ee6431f632981e",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6176b32847ee6431f632981e/02rZ_oLAI0Ll6Y6be7Q9F.jpeg",
"followerCount": 84,
"fullname": "IvanD",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "xiaol",
"type": "user"
}
| true | null |
2501.15570
|
[
{
"_id": "679843ae7d7b7f8196c61ab7",
"hidden": false,
"name": "Lin Yueyu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T14:10:34.376Z",
"user": {
"_id": "63a00aa29f1f2baab2034cf8",
"avatarUrl": "/avatars/818d104f45cbce2c47d443756fa806c8.svg",
"fullname": "Yueyu Lin",
"isPro": false,
"type": "user",
"user": "yueyulin"
}
},
{
"_id": "679843ae7d7b7f8196c61ab8",
"hidden": false,
"name": "Li Zhiyuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679843ae7d7b7f8196c61ab9",
"hidden": false,
"name": "Peter Yue",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T14:11:02.358Z",
"user": {
"_id": "64087a0992033c15073afb8c",
"avatarUrl": "/avatars/9c590ab5c6526edce5084169ec7bde2e.svg",
"fullname": "peteryue",
"isPro": false,
"type": "user",
"user": "peteryue"
}
},
{
"_id": "679843ae7d7b7f8196c61aba",
"hidden": false,
"name": "Liu Xiao",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-01-28T02:44:02.658Z",
"user": {
"_id": "6176b32847ee6431f632981e",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6176b32847ee6431f632981e/02rZ_oLAI0Ll6Y6be7Q9F.jpeg",
"fullname": "IvanD",
"isPro": false,
"type": "user",
"user": "xiaol"
}
}
] | 2025-01-26T15:56:56 |
ARWKV: Pretrain is not what we need, an RNN-Attention-Based Language
Model Born from Transformer
|
As is known, hybrid quadratic and subquadratic attention models in multi-head
architectures have surpassed both Transformer and Linear RNN models , with
these works primarily focusing on reducing KV complexity and improving
efficiency. For further research on expressiveness, we introduce our series of
models distilled from Qwen 2.5, based on pure native RWKV-7 attention, which
aims to make RNN more expressive and demonstrates state tracking ability beyond
transformers. We work with QRWK 32B based on RWKV-6 architecture, another
approach that reduces the entire knowledge processing time to just 8 hours
using 16 AMD MI300X GPUs while maintaining Qwen 2.5's performance. In fact, the
distillation process can utilize any LLM, not just Qwen, and enables knowledge
transfer from larger LLMs to smaller ones with more fewer tokens. We will
explain the detailed process and share our insights on building more powerful
foundation models. Please note that this is an ongoing work that will be
updated continuously. The model checkpoints and source code are available at
https://github.com/yynil/RWKVInside{https://github.com/yynil/RWKVInside},
https://huggingface.co/RWKV-Red-Team/ARWKV-7B-Preview-0.1{https://huggingface.co/RWKV-Red-Team/ARWKV-7B-Preview-0.1}.
| 23 |
679843af7d7b7f8196c61b21
| null | null |
|
2025-01-28T00:51:51.263000 |
iFormer: Integrating ConvNet and Transformer for Mobile Application
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.15369
|
[
{
"_id": "6798706dabdc35456a92212d",
"hidden": false,
"name": "Chuanyang Zheng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T14:16:05.976Z",
"user": {
"_id": "65019cc870367843160fbb33",
"avatarUrl": "/avatars/f5455482fe9efbdeeea1bd3a3c119f02.svg",
"fullname": "ZhengChuanyang",
"isPro": false,
"type": "user",
"user": "BillionZheng"
}
}
] | 2025-01-26T02:34:58 |
iFormer: Integrating ConvNet and Transformer for Mobile Application
|
We present a new family of mobile hybrid vision networks, called iFormer,
with a focus on optimizing latency and accuracy on mobile applications. iFormer
effectively integrates the fast local representation capacity of convolution
with the efficient global modeling ability of self-attention. The local
interactions are derived from transforming a standard convolutional network,
i.e., ConvNeXt, to design a more lightweight mobile network. Our newly
introduced mobile modulation attention removes memory-intensive operations in
MHA and employs an efficient modulation mechanism to boost dynamic global
representational capacity. We conduct comprehensive experiments demonstrating
that iFormer outperforms existing lightweight networks across various tasks.
Notably, iFormer achieves an impressive Top-1 accuracy of 80.4\% on ImageNet-1k
with a latency of only 1.10 ms on an iPhone 13, surpassing the recently
proposed MobileNetV4 under similar latency constraints. Additionally, our
method shows significant improvements in downstream tasks, including COCO
object detection, instance segmentation, and ADE20k semantic segmentation,
while still maintaining low latency on mobile devices for high-resolution
inputs in these scenarios.
| 12 |
6798706eabdc35456a92215a
| null | null |
|
2025-01-28T00:39:11.423000 |
Feasible Learning
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.14912
|
[
{
"_id": "67986d764fccd4b95149db0b",
"hidden": false,
"name": "Juan Ramirez",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T16:53:24.742Z",
"user": {
"_id": "65555c6c6947208b77271f1e",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/CQPL5xB-Y-F89BxJrB7tg.png",
"fullname": "Juan Ramírez",
"isPro": false,
"type": "user",
"user": "juanramirezneilson"
}
},
{
"_id": "67986d764fccd4b95149db0c",
"hidden": false,
"name": "Ignacio Hounie",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T19:40:24.939Z",
"user": {
"_id": "6508b3f15b34509e16c78fea",
"avatarUrl": "/avatars/f663439c405354504b27f5ffab5c401a.svg",
"fullname": "Ignacio Hounie",
"isPro": false,
"type": "user",
"user": "ihounie"
}
},
{
"_id": "67986d764fccd4b95149db0d",
"hidden": false,
"name": "Juan Elenter",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T19:40:32.041Z",
"user": {
"_id": "65a04adddb5d37ad5e6b9d29",
"avatarUrl": "/avatars/071a137c2c1aad0699ee1b8b001e4a58.svg",
"fullname": "Elenter",
"isPro": false,
"type": "user",
"user": "juanelenter"
}
},
{
"_id": "67986d764fccd4b95149db0e",
"hidden": false,
"name": "Jose Gallego-Posada",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986d764fccd4b95149db0f",
"hidden": false,
"name": "Meraj Hashemizadeh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986d764fccd4b95149db10",
"hidden": false,
"name": "Alejandro Ribeiro",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T19:24:31.422Z",
"user": {
"_id": "6724c42c055ce33014feaec2",
"avatarUrl": "/avatars/8854289d890b555ca9562f95edeab784.svg",
"fullname": "Alejandro Ribeiro Prieto",
"isPro": false,
"type": "user",
"user": "Prieto"
}
},
{
"_id": "67986d764fccd4b95149db11",
"hidden": false,
"name": "Simon Lacoste-Julien",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T16:53:32.537Z",
"user": {
"_id": "6406bcaca577649430c6bff4",
"avatarUrl": "/avatars/77b386172b0d86d40cddd7a7a8744491.svg",
"fullname": "Simon Lacoste-Julien",
"isPro": false,
"type": "user",
"user": "slacoste"
}
}
] | 2025-01-24T20:39:38 |
Feasible Learning
|
We introduce Feasible Learning (FL), a sample-centric learning paradigm where
models are trained by solving a feasibility problem that bounds the loss for
each training sample. In contrast to the ubiquitous Empirical Risk Minimization
(ERM) framework, which optimizes for average performance, FL demands
satisfactory performance on every individual data point. Since any model that
meets the prescribed performance threshold is a valid FL solution, the choice
of optimization algorithm and its dynamics play a crucial role in shaping the
properties of the resulting solutions. In particular, we study a primal-dual
approach which dynamically re-weights the importance of each sample during
training. To address the challenge of setting a meaningful threshold in
practice, we introduce a relaxation of FL that incorporates slack variables of
minimal norm. Our empirical analysis, spanning image classification, age
regression, and preference optimization in large language models, demonstrates
that models trained via FL can learn from data while displaying improved tail
behavior compared to ERM, with only a marginal impact on average performance.
| 5 |
67986d784fccd4b95149db6b
| null | null |
|
2025-01-28T00:36:31.841000 |
Mixture-of-Mamba: Enhancing Multi-Modal State-Space Models with Modality-Aware Sparsity
| 1 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.16295
|
[
{
"_id": "67986cd6bdc99911a989b0a5",
"hidden": false,
"name": "Weixin Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986cd6bdc99911a989b0a6",
"hidden": false,
"name": "Junhong Shen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T14:18:15.019Z",
"user": {
"_id": "6532e347b66f4bf689cf269a",
"avatarUrl": "/avatars/76b5dddf80a24d3ef5c68b702280da82.svg",
"fullname": "Junhong Shen",
"isPro": false,
"type": "user",
"user": "sjunhongs"
}
},
{
"_id": "67986cd6bdc99911a989b0a7",
"hidden": false,
"name": "Genghan Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T14:18:21.072Z",
"user": {
"_id": "65a76ff1e504d9738d636217",
"avatarUrl": "/avatars/26bf5e3f19057835ee95d72c24904d77.svg",
"fullname": "Genghan Zhang",
"isPro": false,
"type": "user",
"user": "Genghan"
}
},
{
"_id": "67986cd6bdc99911a989b0a8",
"hidden": false,
"name": "Ning Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986cd6bdc99911a989b0a9",
"hidden": false,
"name": "Luke Zettlemoyer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986cd6bdc99911a989b0aa",
"hidden": false,
"name": "Lili Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-27T18:35:05 |
Mixture-of-Mamba: Enhancing Multi-Modal State-Space Models with
Modality-Aware Sparsity
|
State Space Models (SSMs) have emerged as efficient alternatives to
Transformers for sequential modeling, but their inability to leverage
modality-specific features limits their performance in multi-modal pretraining.
Here, we propose Mixture-of-Mamba, a novel SSM architecture that introduces
modality-aware sparsity through modality-specific parameterization of the Mamba
block. Building on Mixture-of-Transformers (W. Liang et al. arXiv:2411.04996;
2024), we extend the benefits of modality-aware sparsity to SSMs while
preserving their computational efficiency. We evaluate Mixture-of-Mamba across
three multi-modal pretraining settings: Transfusion (interleaved text and
continuous image tokens with diffusion loss), Chameleon (interleaved text and
discrete image tokens), and an extended three-modality framework incorporating
speech. Mixture-of-Mamba consistently reaches the same loss values at earlier
training steps with significantly reduced computational costs. In the
Transfusion setting, Mixture-of-Mamba achieves equivalent image loss using only
34.76% of the training FLOPs at the 1.4B scale. In the Chameleon setting,
Mixture-of-Mamba reaches similar image loss with just 42.50% of the FLOPs at
the 1.4B scale, and similar text loss with just 65.40% of the FLOPs. In the
three-modality setting, MoM matches speech loss at 24.80% of the FLOPs at the
1.4B scale. Our ablation study highlights the synergistic effects of decoupling
projection components, where joint decoupling yields greater gains than
individual modifications. These results establish modality-aware sparsity as a
versatile and effective design principle, extending its impact from
Transformers to SSMs and setting new benchmarks in multi-modal pretraining. Our
code can be accessed at https://github.com/Weixin-Liang/Mixture-of-Mamba
| 8 |
67986cd7bdc99911a989b0ea
| null | null |
|
2025-01-28T00:36:09.186000 |
Towards General-Purpose Model-Free Reinforcement Learning
| 3 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.16142
|
[
{
"_id": "67986cbc7dbf69e4e38539b7",
"hidden": false,
"name": "Scott Fujimoto",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986cbc7dbf69e4e38539b8",
"hidden": false,
"name": "Pierluca D'Oro",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T14:09:37.496Z",
"user": {
"_id": "64b6df54dce8f1fbb8ac9ed7",
"avatarUrl": "/avatars/82ca21cb9c8bacde071769bf4a888375.svg",
"fullname": "Pierluca D'Oro",
"isPro": false,
"type": "user",
"user": "pierluca"
}
},
{
"_id": "67986cbc7dbf69e4e38539b9",
"hidden": false,
"name": "Amy Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986cbc7dbf69e4e38539ba",
"hidden": false,
"name": "Yuandong Tian",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T14:10:10.676Z",
"user": {
"_id": "6344cf73ee1504dbcd5bdfe7",
"avatarUrl": "/avatars/6dd2bf1f9c5679e5c8c85d62c9836aac.svg",
"fullname": "Yuandong Tian",
"isPro": false,
"type": "user",
"user": "tydsh"
}
},
{
"_id": "67986cbc7dbf69e4e38539bb",
"hidden": false,
"name": "Michael Rabbat",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-27T15:36:37 |
Towards General-Purpose Model-Free Reinforcement Learning
|
Reinforcement learning (RL) promises a framework for near-universal
problem-solving. In practice however, RL algorithms are often tailored to
specific benchmarks, relying on carefully tuned hyperparameters and algorithmic
choices. Recently, powerful model-based RL methods have shown impressive
general results across benchmarks but come at the cost of increased complexity
and slow run times, limiting their broader applicability. In this paper, we
attempt to find a unifying model-free deep RL algorithm that can address a
diverse class of domains and problem settings. To achieve this, we leverage
model-based representations that approximately linearize the value function,
taking advantage of the denser task objectives used by model-based RL while
avoiding the costs associated with planning or simulated trajectories. We
evaluate our algorithm, MR.Q, on a variety of common RL benchmarks with a
single set of hyperparameters and show a competitive performance against
domain-specific and general baselines, providing a concrete step towards
building general-purpose model-free deep RL algorithms.
| 26 |
67986cbf7dbf69e4e3853a89
| null | null |
|
2025-01-28T00:35:46.871000 |
Qwen2.5-1M Technical Report
| 3 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.15383
|
[
{
"_id": "67986c83b5e71350993d28eb",
"hidden": false,
"name": "An Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d28ec",
"hidden": false,
"name": "Bowen Yu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T13:56:10.598Z",
"user": {
"_id": "6583ab7983a9e1460c67d876",
"avatarUrl": "/avatars/74400bc448c3f07e23a4cd53d68a6af7.svg",
"fullname": "bowen",
"isPro": false,
"type": "user",
"user": "bowenYu"
}
},
{
"_id": "67986c83b5e71350993d28ed",
"hidden": false,
"name": "Chengyuan Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d28ee",
"hidden": false,
"name": "Dayiheng Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T13:56:44.491Z",
"user": {
"_id": "6434d4989bd5a84b5dd0b0f5",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6434d4989bd5a84b5dd0b0f5/0Elf9qbfG9Hkgypm9pTGm.jpeg",
"fullname": "Dayiheng Liu",
"isPro": false,
"type": "user",
"user": "Losin94"
}
},
{
"_id": "67986c83b5e71350993d28ef",
"hidden": false,
"name": "Fei Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d28f0",
"hidden": false,
"name": "Haoyan Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d28f1",
"hidden": false,
"name": "Jiandong Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d28f2",
"hidden": false,
"name": "Jianhong Tu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T13:58:35.581Z",
"user": {
"_id": "654bead777401b47e6424f88",
"avatarUrl": "/avatars/7bcbdbb051c93b004f0dc3ad36c4a0ce.svg",
"fullname": "Jianhong Tu",
"isPro": false,
"type": "user",
"user": "ToviTu"
}
},
{
"_id": "67986c83b5e71350993d28f3",
"hidden": false,
"name": "Jianwei Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d28f4",
"hidden": false,
"name": "Jingren Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d28f5",
"hidden": false,
"name": "Junyang Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T13:57:31.261Z",
"user": {
"_id": "620760a26e3b7210c2ff1943",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/620760a26e3b7210c2ff1943/VC-rKqimF6yxGESNVlPoR.jpeg",
"fullname": "Junyang Lin",
"isPro": false,
"type": "user",
"user": "JustinLin610"
}
},
{
"_id": "67986c83b5e71350993d28f6",
"hidden": false,
"name": "Kai Dang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d28f7",
"hidden": false,
"name": "Kexin Yang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T13:57:56.435Z",
"user": {
"_id": "65b0b3957e5d5a4ecc750de0",
"avatarUrl": "/avatars/e0d79d3265ca4ad5c5411feb01043fb4.svg",
"fullname": "Kexin Yang",
"isPro": false,
"type": "user",
"user": "dawn0929"
}
},
{
"_id": "67986c83b5e71350993d28f8",
"hidden": false,
"name": "Le Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d28f9",
"hidden": false,
"name": "Mei Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d28fa",
"hidden": false,
"name": "Minmin Sun",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T13:58:27.795Z",
"user": {
"_id": "636a390037d9329b4a007009",
"avatarUrl": "/avatars/a3c9117e104d4667e39e20ec83dc5cd6.svg",
"fullname": "Minmin Sun",
"isPro": false,
"type": "user",
"user": "minminsun"
}
},
{
"_id": "67986c83b5e71350993d28fb",
"hidden": false,
"name": "Qin Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d28fc",
"hidden": false,
"name": "Rui Men",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d28fd",
"hidden": false,
"name": "Tao He",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d28fe",
"hidden": false,
"name": "Weijia Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d28ff",
"hidden": false,
"name": "Wenbiao Yin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d2900",
"hidden": false,
"name": "Wenyuan Yu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T13:58:11.021Z",
"user": {
"_id": "63f4c99721eb234ab73dd112",
"avatarUrl": "/avatars/162e92d7aeb7de1c6ebf4d6e2bff33f5.svg",
"fullname": "yu wenyuan",
"isPro": false,
"type": "user",
"user": "liuxinyijian"
}
},
{
"_id": "67986c83b5e71350993d2901",
"hidden": false,
"name": "Xiafei Qiu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d2902",
"hidden": false,
"name": "Xingzhang Ren",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d2903",
"hidden": false,
"name": "Xinlong Yang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-04T14:48:49.659Z",
"user": {
"_id": "6493eb93f453c3a7a2828362",
"avatarUrl": "/avatars/0a0e1f010465ab2042e532fc1f5b8053.svg",
"fullname": "Yang Xinlong",
"isPro": false,
"type": "user",
"user": "Yangyy666"
}
},
{
"_id": "67986c83b5e71350993d2904",
"hidden": false,
"name": "Yong Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d2905",
"hidden": false,
"name": "Zhiying Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c83b5e71350993d2906",
"hidden": false,
"name": "Zipeng Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-26T03:47:25 |
Qwen2.5-1M Technical Report
|
We introduce Qwen2.5-1M, a series of models that extend the context length to
1 million tokens. Compared to the previous 128K version, the Qwen2.5-1M series
have significantly enhanced long-context capabilities through long-context
pre-training and post-training. Key techniques such as long data synthesis,
progressive pre-training, and multi-stage supervised fine-tuning are employed
to effectively enhance long-context performance while reducing training costs.
To promote the use of long-context models among a broader user base, we
present and open-source our inference framework. This framework includes a
length extrapolation method that can expand the model context lengths by at
least four times, or even more, without additional training. To reduce
inference costs, we implement a sparse attention method along with chunked
prefill optimization for deployment scenarios and a sparsity refinement method
to improve precision. Additionally, we detail our optimizations in the
inference engine, including kernel optimization, pipeline parallelism, and
scheduling optimization, which significantly enhance overall inference
performance. By leveraging our inference framework, the Qwen2.5-1M models
achieve a remarkable 3x to 7x prefill speedup in scenarios with 1 million
tokens of context. This framework provides an efficient and powerful solution
for developing applications that require long-context processing using
open-source models.
The Qwen2.5-1M series currently includes the open-source models
Qwen2.5-7B-Instruct-1M and Qwen2.5-14B-Instruct-1M, as well as the API-accessed
model Qwen2.5-Turbo. Evaluations show that Qwen2.5-1M models have been greatly
improved in long-context tasks without compromising performance in
short-context scenarios. Specifically, the Qwen2.5-14B-Instruct-1M model
significantly outperforms GPT-4o-mini in long-context tasks and supports
contexts eight times longer.
| 62 |
67986c84b5e71350993d2974
| null | null |
|
2025-01-28T00:34:49.721000 |
Baichuan-Omni-1.5 Technical Report
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.15368
|
[
{
"_id": "67986c6822990ae89bb71fb9",
"hidden": false,
"name": "Yadong Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T10:29:02.659Z",
"user": {
"_id": "6797cc0ff386b10d1609e3ff",
"avatarUrl": "/avatars/3ec1020e974ed01f60a46150501171da.svg",
"fullname": "Yadong Li",
"isPro": false,
"type": "user",
"user": "AdamLee1"
}
},
{
"_id": "67986c6822990ae89bb71fba",
"hidden": false,
"name": "Jun Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fbb",
"hidden": false,
"name": "Tao Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fbc",
"hidden": false,
"name": "Tao Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fbd",
"hidden": false,
"name": "Song Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fbe",
"hidden": false,
"name": "Tianpeng Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fbf",
"hidden": false,
"name": "Zehuan Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fc0",
"hidden": false,
"name": "Lijun Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fc1",
"hidden": false,
"name": "Lingfeng Ming",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fc2",
"hidden": false,
"name": "Guosheng Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fc3",
"hidden": false,
"name": "Da Pan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fc4",
"hidden": false,
"name": "Chong Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fc5",
"hidden": false,
"name": "Yuanbo Fang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fc6",
"hidden": false,
"name": "Dongdong Kuang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-29T17:17:34.592Z",
"user": {
"_id": "6455ec9bd808eebdefc4ceec",
"avatarUrl": "/avatars/86c989f7abf6558573409e9e42a721f9.svg",
"fullname": "Dongdong Kuang",
"isPro": false,
"type": "user",
"user": "kingsley01"
}
},
{
"_id": "67986c6822990ae89bb71fc7",
"hidden": false,
"name": "Mingrui Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-29T17:17:32.373Z",
"user": {
"_id": "66af35946149eb45a6730a8f",
"avatarUrl": "/avatars/7f613f925b5d57798e03c0320661247e.svg",
"fullname": "Mingrui Wang",
"isPro": false,
"type": "user",
"user": "ruillm"
}
},
{
"_id": "67986c6822990ae89bb71fc8",
"hidden": false,
"name": "Chenglin Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fc9",
"hidden": false,
"name": "Youwei Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fca",
"hidden": false,
"name": "Hongyu Guo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fcb",
"hidden": false,
"name": "Fengyu Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fcc",
"hidden": false,
"name": "Yuran Wang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-28T13:52:53.610Z",
"user": {
"_id": "65e71ef39cf349af2940b317",
"avatarUrl": "/avatars/fc1cd8d3510946fc947d67b16b51834b.svg",
"fullname": "Yuran Wang",
"isPro": false,
"type": "user",
"user": "Ryann829"
}
},
{
"_id": "67986c6822990ae89bb71fcd",
"hidden": false,
"name": "Bowen Ding",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fce",
"hidden": false,
"name": "Wei Song",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fcf",
"hidden": false,
"name": "Xu Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fd0",
"hidden": false,
"name": "Yuqi Huo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fd1",
"hidden": false,
"name": "Zheng Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fd2",
"hidden": false,
"name": "Shusen Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fd3",
"hidden": false,
"name": "Xin Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fd4",
"hidden": false,
"name": "Shuai Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fd5",
"hidden": false,
"name": "Linchu Xiong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fd6",
"hidden": false,
"name": "Yozhen Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fd7",
"hidden": false,
"name": "Jiahui Ye",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fd8",
"hidden": false,
"name": "Wenhao Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fd9",
"hidden": false,
"name": "Bowen Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fda",
"hidden": false,
"name": "Yan Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fdb",
"hidden": false,
"name": "Yaqi Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fdc",
"hidden": false,
"name": "Xin Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fdd",
"hidden": false,
"name": "Lei Su",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fde",
"hidden": false,
"name": "Hongda Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fdf",
"hidden": false,
"name": "Fuzhong Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fe0",
"hidden": false,
"name": "Xuezhen Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fe1",
"hidden": false,
"name": "Na Nie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fe2",
"hidden": false,
"name": "Zhiying Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fe3",
"hidden": false,
"name": "Bin Xiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fe4",
"hidden": false,
"name": "Ting Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fe5",
"hidden": false,
"name": "Shunya Dang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fe6",
"hidden": false,
"name": "Ping Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fe7",
"hidden": false,
"name": "Yijia Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fe8",
"hidden": false,
"name": "Jincheng Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fe9",
"hidden": false,
"name": "Jinjie Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fea",
"hidden": false,
"name": "Xionghai Lin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71feb",
"hidden": false,
"name": "Zhi Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fec",
"hidden": false,
"name": "Kegeng Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fed",
"hidden": false,
"name": "Jia li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fee",
"hidden": false,
"name": "Aiyuan Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fef",
"hidden": false,
"name": "Hui Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71ff0",
"hidden": false,
"name": "Jianqiang Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71ff1",
"hidden": false,
"name": "Xiaoxi Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71ff2",
"hidden": false,
"name": "Guangwei Ai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71ff3",
"hidden": false,
"name": "Wentao Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71ff4",
"hidden": false,
"name": "Yicong Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71ff5",
"hidden": false,
"name": "Xiaoqin Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71ff6",
"hidden": false,
"name": "Kun Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71ff7",
"hidden": false,
"name": "Wenjing Luo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71ff8",
"hidden": false,
"name": "Yifei Duan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71ff9",
"hidden": false,
"name": "Lingling Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71ffa",
"hidden": false,
"name": "Ran Xiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71ffb",
"hidden": false,
"name": "Zhe Su",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71ffc",
"hidden": false,
"name": "Jiani Pu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71ffd",
"hidden": false,
"name": "Dian Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71ffe",
"hidden": false,
"name": "Xu Jia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb71fff",
"hidden": false,
"name": "Tianyu Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb72000",
"hidden": false,
"name": "Mengyu Ai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb72001",
"hidden": false,
"name": "Mang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb72002",
"hidden": false,
"name": "Yujing Qiao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb72003",
"hidden": false,
"name": "Lei Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb72004",
"hidden": false,
"name": "Yanjun Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb72005",
"hidden": false,
"name": "Fan Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb72006",
"hidden": false,
"name": "Miao Zhen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb72007",
"hidden": false,
"name": "Yijie Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb72008",
"hidden": false,
"name": "Mingyang Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb72009",
"hidden": false,
"name": "Fei Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb7200a",
"hidden": false,
"name": "Chenzheng Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb7200b",
"hidden": false,
"name": "Keer Lu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb7200c",
"hidden": false,
"name": "Yaqi Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb7200d",
"hidden": false,
"name": "Hao Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb7200e",
"hidden": false,
"name": "Youquan Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb7200f",
"hidden": false,
"name": "Yanzhao Qin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb72010",
"hidden": false,
"name": "Linzhuang Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb72011",
"hidden": false,
"name": "Jianhua Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb72012",
"hidden": false,
"name": "Haoze Sun",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-11T10:03:25.944Z",
"user": {
"_id": "6436bb0dd58a5ea528c55acb",
"avatarUrl": "/avatars/df17b66780e14e07bbe4625f068a94ad.svg",
"fullname": "Alvin Sun",
"isPro": false,
"type": "user",
"user": "AlvinSunYooo"
}
},
{
"_id": "67986c6822990ae89bb72013",
"hidden": false,
"name": "Mingan Lin",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-12T09:17:07.915Z",
"user": {
"_id": "6415947858a690df103af49f",
"avatarUrl": "/avatars/38aec23b869833bceb25b9250809b419.svg",
"fullname": "lma",
"isPro": false,
"type": "user",
"user": "lin5547"
}
},
{
"_id": "67986c6822990ae89bb72014",
"hidden": false,
"name": "Zenan Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67986c6822990ae89bb72015",
"hidden": false,
"name": "Weipeng Chen",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-28T13:54:48.451Z",
"user": {
"_id": "6501587887b370a56ad2608e",
"avatarUrl": "/avatars/6779baaa8ed9032de55a2f78e1f52e20.svg",
"fullname": "Wei-Peng Chen",
"isPro": false,
"type": "user",
"user": "whenfra"
}
}
] | 2025-01-26T02:19:03 |
Baichuan-Omni-1.5 Technical Report
|
We introduce Baichuan-Omni-1.5, an omni-modal model that not only has
omni-modal understanding capabilities but also provides end-to-end audio
generation capabilities. To achieve fluent and high-quality interaction across
modalities without compromising the capabilities of any modality, we
prioritized optimizing three key aspects. First, we establish a comprehensive
data cleaning and synthesis pipeline for multimodal data, obtaining about 500B
high-quality data (text, audio, and vision). Second, an audio-tokenizer
(Baichuan-Audio-Tokenizer) has been designed to capture both semantic and
acoustic information from audio, enabling seamless integration and enhanced
compatibility with MLLM. Lastly, we designed a multi-stage training strategy
that progressively integrates multimodal alignment and multitask fine-tuning,
ensuring effective synergy across all modalities. Baichuan-Omni-1.5 leads
contemporary models (including GPT4o-mini and MiniCPM-o 2.6) in terms of
comprehensive omni-modal capabilities. Notably, it achieves results comparable
to leading models such as Qwen2-VL-72B across various multimodal medical
benchmarks.
| 61 |
67986c6b22990ae89bb720aa
| null | null |
|
2025-01-27T12:48:02.005000 |
CatV2TON: Taming Diffusion Transformers for Vision-Based Virtual Try-On with Temporal Concatenation
| 3 |
{
"_id": "6381847a471a4550ff298c63",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6381847a471a4550ff298c63/RTKepvX67R6pLiiUidpUO.png",
"followerCount": 31,
"fullname": "Jun",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "zxbsmk",
"type": "user"
}
| true | null |
2501.11325
|
[
{
"_id": "6795f11746f22e87c8ab5895",
"hidden": false,
"name": "Zheng Chong",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-29T08:55:29.276Z",
"user": {
"_id": "646446517572c66a8e652e94",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/646446517572c66a8e652e94/A4LIdECGW03ixc0HfIhfo.png",
"fullname": "ZhengChong",
"isPro": false,
"type": "user",
"user": "zhengchong"
}
},
{
"_id": "6795f11746f22e87c8ab5896",
"hidden": false,
"name": "Wenqing Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6795f11746f22e87c8ab5897",
"hidden": false,
"name": "Shiyue Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6795f11746f22e87c8ab5898",
"hidden": false,
"name": "Jun Zheng",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-26T11:38:25.373Z",
"user": {
"_id": "6381847a471a4550ff298c63",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6381847a471a4550ff298c63/RTKepvX67R6pLiiUidpUO.png",
"fullname": "Jun",
"isPro": false,
"type": "user",
"user": "zxbsmk"
}
},
{
"_id": "6795f11746f22e87c8ab5899",
"hidden": false,
"name": "Xiao Dong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6795f11746f22e87c8ab589a",
"hidden": false,
"name": "Haoxiang Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6795f11746f22e87c8ab589b",
"hidden": false,
"name": "Yiling Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6795f11746f22e87c8ab589c",
"hidden": false,
"name": "Dongmei Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6795f11746f22e87c8ab589d",
"hidden": false,
"name": "Xiaodan Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-20T08:09:36 |
CatV2TON: Taming Diffusion Transformers for Vision-Based Virtual Try-On
with Temporal Concatenation
|
Virtual try-on (VTON) technology has gained attention due to its potential to
transform online retail by enabling realistic clothing visualization of images
and videos. However, most existing methods struggle to achieve high-quality
results across image and video try-on tasks, especially in long video
scenarios. In this work, we introduce CatV2TON, a simple and effective
vision-based virtual try-on (V2TON) method that supports both image and video
try-on tasks with a single diffusion transformer model. By temporally
concatenating garment and person inputs and training on a mix of image and
video datasets, CatV2TON achieves robust try-on performance across static and
dynamic settings. For efficient long-video generation, we propose an
overlapping clip-based inference strategy that uses sequential frame guidance
and Adaptive Clip Normalization (AdaCN) to maintain temporal consistency with
reduced resource demands. We also present ViViD-S, a refined video try-on
dataset, achieved by filtering back-facing frames and applying 3D mask
smoothing for enhanced temporal consistency. Comprehensive experiments
demonstrate that CatV2TON outperforms existing methods in both image and video
try-on tasks, offering a versatile and reliable solution for realistic virtual
try-ons across diverse scenarios.
| 5 |
6795f11846f22e87c8ab5934
| null | null |
|
2025-01-27T11:06:04.998000 |
Question Answering on Patient Medical Records with Private Fine-Tuned LLMs
| 2 |
{
"_id": "64b81834f1f8e6ea5841c690",
"avatarUrl": "/avatars/9d8385e687f6bcfe14dff7e6754cc97f.svg",
"followerCount": null,
"fullname": "Ayush Gupta",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "ayushgs",
"type": "user"
}
| true | null |
2501.13687
|
[
{
"_id": "6797ae961d3cfd7ca5a582a6",
"hidden": false,
"name": "Sara Kothari",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-01-27T16:30:51.516Z",
"user": {
"_id": "66d530c8f040611f7cb342b4",
"avatarUrl": "/avatars/574342d232f897364ab43255e746fc57.svg",
"fullname": "Sara Kothari",
"isPro": false,
"type": "user",
"user": "sarako"
}
},
{
"_id": "6797ae961d3cfd7ca5a582a7",
"hidden": false,
"name": "Ayush Gupta",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-01-27T18:59:07.894Z",
"user": {
"_id": "64b81834f1f8e6ea5841c690",
"avatarUrl": "/avatars/9d8385e687f6bcfe14dff7e6754cc97f.svg",
"fullname": "Ayush Gupta",
"isPro": false,
"type": "user",
"user": "ayushgs"
}
}
] | 2025-01-23T14:13:56 |
Question Answering on Patient Medical Records with Private Fine-Tuned
LLMs
|
Healthcare systems continuously generate vast amounts of electronic health
records (EHRs), commonly stored in the Fast Healthcare Interoperability
Resources (FHIR) standard. Despite the wealth of information in these records,
their complexity and volume make it difficult for users to retrieve and
interpret crucial health insights. Recent advances in Large Language Models
(LLMs) offer a solution, enabling semantic question answering (QA) over medical
data, allowing users to interact with their health records more effectively.
However, ensuring privacy and compliance requires edge and private deployments
of LLMs.
This paper proposes a novel approach to semantic QA over EHRs by first
identifying the most relevant FHIR resources for a user query (Task1) and
subsequently answering the query based on these resources (Task2). We explore
the performance of privately hosted, fine-tuned LLMs, evaluating them against
benchmark models such as GPT-4 and GPT-4o. Our results demonstrate that
fine-tuned LLMs, while 250x smaller in size, outperform GPT-4 family models by
0.55% in F1 score on Task1 and 42% on Meteor Task in Task2. Additionally, we
examine advanced aspects of LLM usage, including sequential fine-tuning, model
self-evaluation (narcissistic evaluation), and the impact of training data size
on performance. The models and datasets are available here:
https://huggingface.co/genloop
| 9 |
6797ae971d3cfd7ca5a58307
| null | null |
|
2025-01-27T09:59:51.940000 |
RL + Transformer = A General-Purpose Problem Solver
| 2 |
{
"_id": "63c19eb3a0ffa3857eae2efa",
"avatarUrl": "/avatars/35b06ca092f615a6d11ee99683d0376a.svg",
"followerCount": null,
"fullname": "Jesse Roberts",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "JesseTNRoberts",
"type": "user"
}
| true | null |
2501.14176
|
[
{
"_id": "67979f107dbf69e4e34cc51a",
"hidden": false,
"name": "Micah Rentschler",
"status": "extracted_confirmed",
"statusLastChangedAt": "2025-02-06T17:21:33.919Z",
"user": {
"_id": "66fffe1b3ec4cc293d40f2d5",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/66fffe1b3ec4cc293d40f2d5/-6Ff7rSQ_fvqBmTQRTlB4.png",
"fullname": "Micah Rentschler",
"isPro": true,
"type": "user",
"user": "micahr234"
}
},
{
"_id": "67979f107dbf69e4e34cc51b",
"hidden": false,
"name": "Jesse Roberts",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:09:51.776Z",
"user": {
"_id": "63c19eb3a0ffa3857eae2efa",
"avatarUrl": "/avatars/35b06ca092f615a6d11ee99683d0376a.svg",
"fullname": "Jesse Roberts",
"isPro": false,
"type": "user",
"user": "JesseTNRoberts"
}
}
] | 2025-01-24T01:55:20 |
RL + Transformer = A General-Purpose Problem Solver
|
What if artificial intelligence could not only solve problems for which it
was trained but also learn to teach itself to solve new problems (i.e.,
meta-learn)? In this study, we demonstrate that a pre-trained transformer
fine-tuned with reinforcement learning over multiple episodes develops the
ability to solve problems that it has never encountered before - an emergent
ability called In-Context Reinforcement Learning (ICRL). This powerful
meta-learner not only excels in solving unseen in-distribution environments
with remarkable sample efficiency, but also shows strong performance in
out-of-distribution environments. In addition, we show that it exhibits
robustness to the quality of its training data, seamlessly stitches together
behaviors from its context, and adapts to non-stationary environments. These
behaviors demonstrate that an RL-trained transformer can iteratively improve
upon its own solutions, making it an excellent general-purpose problem solver.
| 25 |
67979f117dbf69e4e34cc565
| null | null |
|
2025-01-27T08:48:16.707000 |
AdaIR: Adaptive All-in-One Image Restoration via Frequency Mining and Modulation
| 2 |
{
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
}
| false | null |
2403.14614
|
[
{
"_id": "67978e94d8e2dcea3de32387",
"hidden": false,
"name": "Yuning Cui",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67978e94d8e2dcea3de32388",
"hidden": false,
"name": "Syed Waqas Zamir",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:05:10.425Z",
"user": {
"_id": "6245a8cc4db06ca3fff5a4de",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1648732357931-noauth.jpeg",
"fullname": "Syed Waqas Zamir",
"isPro": false,
"type": "user",
"user": "swzamir"
}
},
{
"_id": "67978e94d8e2dcea3de32389",
"hidden": false,
"name": "Salman Khan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67978e94d8e2dcea3de3238a",
"hidden": false,
"name": "Alois Knoll",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67978e94d8e2dcea3de3238b",
"hidden": false,
"name": "Mubarak Shah",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67978e94d8e2dcea3de3238c",
"hidden": false,
"name": "Fahad Shahbaz Khan",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2024-03-21T17:58:14 |
AdaIR: Adaptive All-in-One Image Restoration via Frequency Mining and
Modulation
|
In the image acquisition process, various forms of degradation, including
noise, haze, and rain, are frequently introduced. These degradations typically
arise from the inherent limitations of cameras or unfavorable ambient
conditions. To recover clean images from degraded versions, numerous
specialized restoration methods have been developed, each targeting a specific
type of degradation. Recently, all-in-one algorithms have garnered significant
attention by addressing different types of degradations within a single model
without requiring prior information of the input degradation type. However,
these methods purely operate in the spatial domain and do not delve into the
distinct frequency variations inherent to different degradation types. To
address this gap, we propose an adaptive all-in-one image restoration network
based on frequency mining and modulation. Our approach is motivated by the
observation that different degradation types impact the image content on
different frequency subbands, thereby requiring different treatments for each
restoration task. Specifically, we first mine low- and high-frequency
information from the input features, guided by the adaptively decoupled spectra
of the degraded image. The extracted features are then modulated by a
bidirectional operator to facilitate interactions between different frequency
components. Finally, the modulated features are merged into the original input
for a progressively guided restoration. With this approach, the model achieves
adaptive reconstruction by accentuating the informative frequency subbands
according to different input degradations. Extensive experiments demonstrate
that the proposed method achieves state-of-the-art performance on different
image restoration tasks, including denoising, dehazing, deraining, motion
deblurring, and low-light image enhancement. Our code is available at
https://github.com/c-yn/AdaIR.
| 4 |
67978e9bd8e2dcea3de32553
| null | null |
|
2025-01-27T08:47:36.052000 |
Denoising as Adaptation: Noise-Space Domain Adaptation for Image Restoration
| 2 |
{
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
}
| false | null |
2406.18516
|
[
{
"_id": "67978a6cba1b09be7b538b0a",
"hidden": false,
"name": "Kang Liao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-25T14:09:50.240Z",
"user": {
"_id": "65bc98383b879593a5a2f5e5",
"avatarUrl": "/avatars/70f6fec5bf29c89eda7b909ec1472ace.svg",
"fullname": "Kang Liao",
"isPro": false,
"type": "user",
"user": "KangLiao"
}
},
{
"_id": "67978a6cba1b09be7b538b0b",
"hidden": false,
"name": "Zongsheng Yue",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:10:20.782Z",
"user": {
"_id": "630ad0dd2ff113e0fb31c6b0",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1671174653229-630ad0dd2ff113e0fb31c6b0.jpeg",
"fullname": "Zongsheng Yue",
"isPro": true,
"type": "user",
"user": "OAOA"
}
},
{
"_id": "67978a6cba1b09be7b538b0c",
"hidden": false,
"name": "Zhouxia Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:10:14.521Z",
"user": {
"_id": "64db92a5858f8a41c11669b7",
"avatarUrl": "/avatars/e834d8f1d4781e3bb0b5d6d25b3b3505.svg",
"fullname": "Zhouxia Wang",
"isPro": false,
"type": "user",
"user": "wzhouxiff"
}
},
{
"_id": "67978a6cba1b09be7b538b0d",
"hidden": false,
"name": "Chen Change Loy",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:10:07.079Z",
"user": {
"_id": "67459d997a49660f7f62452f",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/no-auth/X25yTdxC_hcLVxpx4voZo.png",
"fullname": "Chen Change Loy",
"isPro": false,
"type": "user",
"user": "cavanloy"
}
}
] | 2024-06-26T17:40:30 |
Denoising as Adaptation: Noise-Space Domain Adaptation for Image
Restoration
|
Although learning-based image restoration methods have made significant
progress, they still struggle with limited generalization to real-world
scenarios due to the substantial domain gap caused by training on synthetic
data. Existing methods address this issue by improving data synthesis
pipelines, estimating degradation kernels, employing deep internal learning,
and performing domain adaptation and regularization. Previous domain adaptation
methods have sought to bridge the domain gap by learning domain-invariant
knowledge in either feature or pixel space. However, these techniques often
struggle to extend to low-level vision tasks within a stable and compact
framework. In this paper, we show that it is possible to perform domain
adaptation via the noise space using diffusion models. In particular, by
leveraging the unique property of how auxiliary conditional inputs influence
the multi-step denoising process, we derive a meaningful diffusion loss that
guides the restoration model in progressively aligning both restored synthetic
and real-world outputs with a target clean distribution. We refer to this
method as denoising as adaptation. To prevent shortcuts during joint training,
we present crucial strategies such as channel-shuffling layer and
residual-swapping contrastive learning in the diffusion model. They implicitly
blur the boundaries between conditioned synthetic and real data and prevent the
reliance of the model on easily distinguishable features. Experimental results
on three classical image restoration tasks, namely denoising, deblurring, and
deraining, demonstrate the effectiveness of the proposed method.
| 3 |
67978a6fba1b09be7b538c86
| null | null |
|
2025-01-27T08:43:43.143000 |
Multiview Equivariance Improves 3D Correspondence Understanding with Minimal Feature Finetuning
| 2 |
{
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
}
| false | null |
2411.19458
|
[
{
"_id": "6796e43ce05ca91d7eb430b5",
"hidden": false,
"name": "Yang You",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-27T10:43:22.673Z",
"user": {
"_id": "64791d760c2c2297c335fb24",
"avatarUrl": "/avatars/716c3c2ea42baf074ba9680c9939da28.svg",
"fullname": "Yang You",
"isPro": true,
"type": "user",
"user": "qq456cvb"
}
},
{
"_id": "6796e43ce05ca91d7eb430b6",
"hidden": false,
"name": "Yixin Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:10:55.826Z",
"user": {
"_id": "66315569e9960a198f7cf634",
"avatarUrl": "/avatars/9fc7dd96a3ba1e47d9574c226e0101e0.svg",
"fullname": "Yixin Li",
"isPro": false,
"type": "user",
"user": "yixinli"
}
},
{
"_id": "6796e43ce05ca91d7eb430b7",
"hidden": false,
"name": "Congyue Deng",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:11:05.862Z",
"user": {
"_id": "634483306e988c6a792d9a9b",
"avatarUrl": "/avatars/da717bf4e270ec0e2c8d8a82f8884081.svg",
"fullname": "Deng",
"isPro": false,
"type": "user",
"user": "Congyue"
}
},
{
"_id": "6796e43ce05ca91d7eb430b8",
"hidden": false,
"name": "Yue Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6796e43ce05ca91d7eb430b9",
"hidden": false,
"name": "Leonidas Guibas",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2024-11-29T04:02:11 |
Multiview Equivariance Improves 3D Correspondence Understanding with
Minimal Feature Finetuning
|
Vision foundation models, particularly the ViT family, have revolutionized
image understanding by providing rich semantic features. However, despite their
success in 2D comprehension, their abilities on grasping 3D spatial
relationships are still unclear. In this work, we evaluate and enhance the 3D
awareness of ViT-based models. We begin by systematically assessing their
ability to learn 3D equivariant features, specifically examining the
consistency of semantic embeddings across different viewpoints. Our findings
indicate that improved 3D equivariance leads to better performance on various
downstream tasks, including pose estimation, tracking, and semantic transfer.
Building on this insight, we propose a simple yet effective finetuning strategy
based on 3D correspondences, which significantly enhances the 3D correspondence
understanding of existing vision models. Remarkably, even finetuning on a
single object for just one iteration results in substantial performance gains.
All code and resources will be made publicly available to support further
advancements in 3D-aware vision models. Our code is available at
https://github.com/qq456cvb/3DCorrEnhance.
| 6 |
6796e441e05ca91d7eb4325d
| null | null |
|
2025-01-27T08:27:33.004000 |
GeoPixel: Pixel Grounding Large Multimodal Model in Remote Sensing
| 2 |
{
"_id": "5f1158120c833276f61f1a84",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1608042047613-5f1158120c833276f61f1a84.jpeg",
"followerCount": 777,
"fullname": "Niels Rogge",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "nielsr",
"type": "user"
}
| false | null |
2501.13925
|
[
{
"_id": "679789af4a10be7109a28675",
"hidden": false,
"name": "Akashah Shabbir",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679789af4a10be7109a28676",
"hidden": false,
"name": "Mohammed Zumri",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679789af4a10be7109a28677",
"hidden": false,
"name": "Mohammed Bennamoun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679789af4a10be7109a28678",
"hidden": false,
"name": "Fahad S. Khan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679789af4a10be7109a28679",
"hidden": false,
"name": "Salman Khan",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-23T18:59:30 |
GeoPixel: Pixel Grounding Large Multimodal Model in Remote Sensing
|
Recent advances in large multimodal models (LMMs) have recognized
fine-grained grounding as an imperative factor of visual understanding and
dialogue. However, the benefits of such representation in LMMs are limited to
the natural image domain, and these models perform poorly for remote sensing
(RS). The distinct overhead viewpoint, scale variation, and presence of small
objects in high-resolution RS imagery present a unique challenge in
region-level comprehension. Moreover, the development of the grounding
conversation capability of LMMs within RS is hindered by the lack of granular,
RS domain-specific grounded data. Addressing these limitations, we propose
GeoPixel - the first end-to-end high resolution RS-LMM that supports
pixel-level grounding. This capability allows fine-grained visual perception by
generating interleaved masks in conversation. GeoPixel supports up to 4K HD
resolution in any aspect ratio, ideal for high-precision RS image analysis. To
support the grounded conversation generation (GCG) in RS imagery, we curate a
visually grounded dataset GeoPixelD through a semi-automated pipeline that
utilizes set-of-marks prompting and spatial priors tailored for RS data to
methodically control the data generation process. GeoPixel demonstrates
superior performance in pixel-level comprehension, surpassing existing LMMs in
both single-target and multi-target segmentation tasks. Our methodological
ablation studies validate the effectiveness of each component in the overall
architecture. Our code and data will be publicly released.
| 7 |
679789b44a10be7109a28733
| null | null |
|
2025-01-27T03:05:15.907000 |
Redundancy Principles for MLLMs Benchmarks
| 2 |
{
"_id": "63ee1379190ddd6214efd73a",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1676546883247-noauth.png",
"followerCount": 21,
"fullname": "HAODONG DUAN",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "KennyUTC",
"type": "user"
}
| true | null |
2501.13953
|
[
{
"_id": "67973e05495916be7c0086cc",
"hidden": false,
"name": "Zicheng Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T14:56:43.367Z",
"user": {
"_id": "6526cc6bab4f5d98382f5603",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6526cc6bab4f5d98382f5603/NekYYb61I5nt4au6gXsVK.jpeg",
"fullname": "Zicheng Zhang",
"isPro": false,
"type": "user",
"user": "zhangzicheng"
}
},
{
"_id": "67973e05495916be7c0086cd",
"hidden": false,
"name": "Xiangyu Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67973e05495916be7c0086ce",
"hidden": false,
"name": "Xinyu Fang",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-27T10:43:11.727Z",
"user": {
"_id": "64f5f8dd9b17cd59c453c57f",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/64f5f8dd9b17cd59c453c57f/MulhwLcePFUWUQel8LQZ8.jpeg",
"fullname": "Xinyu Fang",
"isPro": false,
"type": "user",
"user": "nebulae09"
}
},
{
"_id": "67973e05495916be7c0086cf",
"hidden": false,
"name": "Chunyi Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67973e05495916be7c0086d0",
"hidden": false,
"name": "Xiaohong Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67973e05495916be7c0086d1",
"hidden": false,
"name": "Xiongkuo Min",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67973e05495916be7c0086d2",
"hidden": false,
"name": "Haodong Duan",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-27T10:43:09.483Z",
"user": {
"_id": "63ee1379190ddd6214efd73a",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1676546883247-noauth.png",
"fullname": "HAODONG DUAN",
"isPro": false,
"type": "user",
"user": "KennyUTC"
}
},
{
"_id": "67973e05495916be7c0086d3",
"hidden": false,
"name": "Kai Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67973e05495916be7c0086d4",
"hidden": false,
"name": "Guangtao Zhai",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T14:58:51.908Z",
"user": {
"_id": "65535b125413c1a54e6fb243",
"avatarUrl": "/avatars/03bcf1d58865f5406aff49a415e78bdc.svg",
"fullname": "Guangtao Zhai",
"isPro": false,
"type": "user",
"user": "GTZhai"
}
}
] | 2025-01-20T08:09:42 |
Redundancy Principles for MLLMs Benchmarks
|
With the rapid iteration of Multi-modality Large Language Models (MLLMs) and
the evolving demands of the field, the number of benchmarks produced annually
has surged into the hundreds. The rapid growth has inevitably led to
significant redundancy among benchmarks. Therefore, it is crucial to take a
step back and critically assess the current state of redundancy and propose
targeted principles for constructing effective MLLM benchmarks. In this paper,
we focus on redundancy from three key perspectives: 1) Redundancy of benchmark
capability dimensions, 2) Redundancy in the number of test questions, and 3)
Cross-benchmark redundancy within specific domains. Through the comprehensive
analysis over hundreds of MLLMs' performance across more than 20 benchmarks, we
aim to quantitatively measure the level of redundancy lies in existing MLLM
evaluations, provide valuable insights to guide the future development of MLLM
benchmarks, and offer strategies to refine and address redundancy issues
effectively.
| 28 |
67973e07495916be7c0087cd
| null | null |
|
2025-01-26T23:01:05.025000 |
Relightable Full-Body Gaussian Codec Avatars
| 2 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.14726
|
[
{
"_id": "679704f72ec68b41932bf52f",
"hidden": false,
"name": "Shaofei Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:03:00.388Z",
"user": {
"_id": "66d4a5c535eff7194df053a0",
"avatarUrl": "/avatars/4113570792db1d323d1c0b5beb98aa44.svg",
"fullname": "shaofei wang",
"isPro": false,
"type": "user",
"user": "sfwang23"
}
},
{
"_id": "679704f72ec68b41932bf530",
"hidden": false,
"name": "Tomas Simon",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:03:09.841Z",
"user": {
"_id": "66c7459792e9f5b19f55cc56",
"avatarUrl": "/avatars/24cae9556ea07c47d65d500646bcfe85.svg",
"fullname": "Tomas Simon",
"isPro": false,
"type": "user",
"user": "Tombo89"
}
},
{
"_id": "679704f72ec68b41932bf531",
"hidden": false,
"name": "Igor Santesteban",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704f72ec68b41932bf532",
"hidden": false,
"name": "Timur Bagautdinov",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:03:21.527Z",
"user": {
"_id": "63be0fe19a15a3e9419d3d7d",
"avatarUrl": "/avatars/145760938083ec031e30d20d321d9185.svg",
"fullname": "Timur Bagautdinov",
"isPro": false,
"type": "user",
"user": "psycharo"
}
},
{
"_id": "679704f72ec68b41932bf533",
"hidden": false,
"name": "Junxuan Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704f72ec68b41932bf534",
"hidden": false,
"name": "Vasu Agrawal",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:03:35.858Z",
"user": {
"_id": "6527fa0f763bcce52dfb1af7",
"avatarUrl": "/avatars/79bdaa7f1948dfec4d9b6139952fbfe1.svg",
"fullname": "Vasu Agrawal",
"isPro": false,
"type": "user",
"user": "vasuagrawal4"
}
},
{
"_id": "679704f72ec68b41932bf535",
"hidden": false,
"name": "Fabian Prada",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704f72ec68b41932bf536",
"hidden": false,
"name": "Shoou-I Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704f72ec68b41932bf537",
"hidden": false,
"name": "Pace Nalbone",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704f72ec68b41932bf538",
"hidden": false,
"name": "Matt Gramlich",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704f72ec68b41932bf539",
"hidden": false,
"name": "Roman Lubachersky",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704f72ec68b41932bf53a",
"hidden": false,
"name": "Chenglei Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704f72ec68b41932bf53b",
"hidden": false,
"name": "Javier Romero",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704f72ec68b41932bf53c",
"hidden": false,
"name": "Jason Saragih",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704f72ec68b41932bf53d",
"hidden": false,
"name": "Michael Zollhoefer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704f72ec68b41932bf53e",
"hidden": false,
"name": "Andreas Geiger",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:04:34.554Z",
"user": {
"_id": "620cae049086f3c07f01e3d5",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1644998139538-noauth.jpeg",
"fullname": "Andreas Geiger",
"isPro": false,
"type": "user",
"user": "andreas-geiger"
}
},
{
"_id": "679704f72ec68b41932bf53f",
"hidden": false,
"name": "Siyu Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704f72ec68b41932bf540",
"hidden": false,
"name": "Shunsuke Saito",
"status": "extracted_pending",
"statusLastChangedAt": "2025-01-27T04:01:00.985Z",
"user": {
"_id": "630688edd37ce67e0e50e548",
"avatarUrl": "/avatars/1681e465d7649f67f94e1b69d236cb1e.svg",
"fullname": "Shunsuke Saito",
"isPro": false,
"type": "user",
"user": "psyth"
}
}
] | 2025-01-24T18:59:15 |
Relightable Full-Body Gaussian Codec Avatars
|
We propose Relightable Full-Body Gaussian Codec Avatars, a new approach for
modeling relightable full-body avatars with fine-grained details including face
and hands. The unique challenge for relighting full-body avatars lies in the
large deformations caused by body articulation and the resulting impact on
appearance caused by light transport. Changes in body pose can dramatically
change the orientation of body surfaces with respect to lights, resulting in
both local appearance changes due to changes in local light transport
functions, as well as non-local changes due to occlusion between body parts. To
address this, we decompose the light transport into local and non-local
effects. Local appearance changes are modeled using learnable zonal harmonics
for diffuse radiance transfer. Unlike spherical harmonics, zonal harmonics are
highly efficient to rotate under articulation. This allows us to learn diffuse
radiance transfer in a local coordinate frame, which disentangles the local
radiance transfer from the articulation of the body. To account for non-local
appearance changes, we introduce a shadow network that predicts shadows given
precomputed incoming irradiance on a base mesh. This facilitates the learning
of non-local shadowing between the body parts. Finally, we use a deferred
shading approach to model specular radiance transfer and better capture
reflections and highlights such as eye glints. We demonstrate that our approach
successfully models both the local and non-local light transport required for
relightable full-body avatars, with a superior generalization ability under
novel illumination conditions and unseen poses.
| 10 |
679704fc2ec68b41932bf644
| null | null |
|
2025-01-26T22:59:59.058000 |
Humanity's Last Exam
| 3 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| true | null |
2501.14249
|
[
{
"_id": "679704b422b334a8370d3572",
"hidden": false,
"name": "Long Phan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3573",
"hidden": false,
"name": "Alice Gatti",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T14:52:50.410Z",
"user": {
"_id": "64b6bd94047fd5611593694a",
"avatarUrl": "/avatars/9dc8dd0229fa0467a1414b9f5ebbab0d.svg",
"fullname": "Alice Gatti",
"isPro": false,
"type": "user",
"user": "alicegatti"
}
},
{
"_id": "679704b422b334a8370d3574",
"hidden": false,
"name": "Ziwen Han",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T14:52:59.398Z",
"user": {
"_id": "62ed40fe9b538d02b451c211",
"avatarUrl": "/avatars/d5486b050fd63ade348158e64a3e5ff7.svg",
"fullname": "Han",
"isPro": false,
"type": "user",
"user": "Ziwen"
}
},
{
"_id": "679704b422b334a8370d3575",
"hidden": false,
"name": "Nathaniel Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3576",
"hidden": false,
"name": "Josephina Hu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T14:53:15.392Z",
"user": {
"_id": "66b13ca5c8900ac0175a7bc5",
"avatarUrl": "/avatars/90e2c55145b1ee94fb6ef54426dfcdaa.svg",
"fullname": "Josephina Hu",
"isPro": false,
"type": "user",
"user": "lovejosaay"
}
},
{
"_id": "679704b422b334a8370d3577",
"hidden": false,
"name": "Hugh Zhang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T14:53:24.388Z",
"user": {
"_id": "65d6a5f94c28026a003581b4",
"avatarUrl": "/avatars/c24c314eafbf6a885c586258ae87a47f.svg",
"fullname": "Hugh Zhang",
"isPro": false,
"type": "user",
"user": "hugh-scale"
}
},
{
"_id": "679704b422b334a8370d3578",
"hidden": false,
"name": "Sean Shi",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T14:53:30.447Z",
"user": {
"_id": "66590a222d857c7a34771880",
"avatarUrl": "/avatars/e6ff7d1b68feffe8953a62da380a60c8.svg",
"fullname": "Sean Shi",
"isPro": false,
"type": "user",
"user": "seanshi-scale"
}
},
{
"_id": "679704b422b334a8370d3579",
"hidden": false,
"name": "Michael Choi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d357a",
"hidden": false,
"name": "Anish Agrawal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d357b",
"hidden": false,
"name": "Arnav Chopra",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d357c",
"hidden": false,
"name": "Adam Khoja",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d357d",
"hidden": false,
"name": "Ryan Kim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d357e",
"hidden": false,
"name": "Jason Hausenloy",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T14:54:24.451Z",
"user": {
"_id": "6732f99ce6a45b6a0b322702",
"avatarUrl": "/avatars/91a3266fe3f1a4c822739d9baba53c68.svg",
"fullname": "Jason Hausenloy",
"isPro": false,
"type": "user",
"user": "jasonhausenloy"
}
},
{
"_id": "679704b422b334a8370d357f",
"hidden": false,
"name": "Oliver Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3580",
"hidden": false,
"name": "Mantas Mazeika",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T14:54:56.624Z",
"user": {
"_id": "622bbdeb47e49335b2128a06",
"avatarUrl": "/avatars/acaa6792da1a79a1ba91bbd148c85087.svg",
"fullname": "Mantas Mazeika",
"isPro": true,
"type": "user",
"user": "mmazeika"
}
},
{
"_id": "679704b422b334a8370d3581",
"hidden": false,
"name": "Daron Anderson",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3582",
"hidden": false,
"name": "Tung Nguyen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3583",
"hidden": false,
"name": "Mobeen Mahmood",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3584",
"hidden": false,
"name": "Fiona Feng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3585",
"hidden": false,
"name": "Steven Y. Feng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3586",
"hidden": false,
"name": "Haoran Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3587",
"hidden": false,
"name": "Michael Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3588",
"hidden": false,
"name": "Varun Gangal",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T14:55:42.966Z",
"user": {
"_id": "617ac050ff689804c90eca49",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1635434570481-noauth.jpeg",
"fullname": "Varun Gangal",
"isPro": false,
"type": "user",
"user": "vgtomahawk"
}
},
{
"_id": "679704b422b334a8370d3589",
"hidden": false,
"name": "Chelsea Zou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d358a",
"hidden": false,
"name": "Zihan Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d358b",
"hidden": false,
"name": "Jessica P. Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d358c",
"hidden": false,
"name": "Pawan Kumar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d358d",
"hidden": false,
"name": "Oleksandr Pokutnyi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d358e",
"hidden": false,
"name": "Robert Gerbicz",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d358f",
"hidden": false,
"name": "Serguei Popov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3590",
"hidden": false,
"name": "John-Clark Levin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3591",
"hidden": false,
"name": "Mstyslav Kazakov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3592",
"hidden": false,
"name": "Johannes Schmitt",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3593",
"hidden": false,
"name": "Geoff Galgon",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3594",
"hidden": false,
"name": "Alvaro Sanchez",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3595",
"hidden": false,
"name": "Yongki Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3596",
"hidden": false,
"name": "Will Yeadon",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3597",
"hidden": false,
"name": "Scott Sauers",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3598",
"hidden": false,
"name": "Marc Roth",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3599",
"hidden": false,
"name": "Chidozie Agu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d359a",
"hidden": false,
"name": "Søren Riis",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d359b",
"hidden": false,
"name": "Fabian Giska",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d359c",
"hidden": false,
"name": "Saiteja Utpala",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d359d",
"hidden": false,
"name": "Zachary Giboney",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d359e",
"hidden": false,
"name": "Gashaw M. Goshu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d359f",
"hidden": false,
"name": "Joan of Arc Xavier",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35a0",
"hidden": false,
"name": "Sarah-Jane Crowson",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35a1",
"hidden": false,
"name": "Mohinder Maheshbhai Naiya",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35a2",
"hidden": false,
"name": "Noah Burns",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35a3",
"hidden": false,
"name": "Lennart Finke",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35a4",
"hidden": false,
"name": "Zerui Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35a5",
"hidden": false,
"name": "Hyunwoo Park",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35a6",
"hidden": false,
"name": "Francesco Fournier-Facio",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35a7",
"hidden": false,
"name": "John Wydallis",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35a8",
"hidden": false,
"name": "Mark Nandor",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35a9",
"hidden": false,
"name": "Ankit Singh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35aa",
"hidden": false,
"name": "Tim Gehrunger",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35ab",
"hidden": false,
"name": "Jiaqi Cai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35ac",
"hidden": false,
"name": "Ben McCarty",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35ad",
"hidden": false,
"name": "Darling Duclosel",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35ae",
"hidden": false,
"name": "Jungbae Nam",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35af",
"hidden": false,
"name": "Jennifer Zampese",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35b0",
"hidden": false,
"name": "Ryan G. Hoerr",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35b1",
"hidden": false,
"name": "Aras Bacho",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35b2",
"hidden": false,
"name": "Gautier Abou Loume",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35b3",
"hidden": false,
"name": "Abdallah Galal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35b4",
"hidden": false,
"name": "Hangrui Cao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35b5",
"hidden": false,
"name": "Alexis C Garretson",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35b6",
"hidden": false,
"name": "Damien Sileo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35b7",
"hidden": false,
"name": "Qiuyu Ren",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35b8",
"hidden": false,
"name": "Doru Cojoc",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35b9",
"hidden": false,
"name": "Pavel Arkhipov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35ba",
"hidden": false,
"name": "Usman Qazi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35bb",
"hidden": false,
"name": "Lianghui Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35bc",
"hidden": false,
"name": "Sumeet Motwani",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35bd",
"hidden": false,
"name": "Christian Schroeder de Witt",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35be",
"hidden": false,
"name": "Edwin Taylor",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35bf",
"hidden": false,
"name": "Johannes Veith",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35c0",
"hidden": false,
"name": "Eric Singer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35c1",
"hidden": false,
"name": "Taylor D. Hartman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35c2",
"hidden": false,
"name": "Paolo Rissone",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35c3",
"hidden": false,
"name": "Jaehyeok Jin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35c4",
"hidden": false,
"name": "Jack Wei Lun Shi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35c5",
"hidden": false,
"name": "Chris G. Willcocks",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35c6",
"hidden": false,
"name": "Joshua Robinson",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35c7",
"hidden": false,
"name": "Aleksandar Mikov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35c8",
"hidden": false,
"name": "Ameya Prabhu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T12:55:03.605Z",
"user": {
"_id": "6464a0d41683d3c81f51924a",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6464a0d41683d3c81f51924a/s7yYVwfUB4WOhVFJS6A6T.jpeg",
"fullname": "Ameya Prabhu",
"isPro": false,
"type": "user",
"user": "AmeyaPrabhu"
}
},
{
"_id": "679704b422b334a8370d35c9",
"hidden": false,
"name": "Longke Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35ca",
"hidden": false,
"name": "Xavier Alapont",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35cb",
"hidden": false,
"name": "Justine Leon Uro",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35cc",
"hidden": false,
"name": "Kevin Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35cd",
"hidden": false,
"name": "Emily de Oliveira Santos",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35ce",
"hidden": false,
"name": "Andrey Pupasov Maksimov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35cf",
"hidden": false,
"name": "Edward Vendrow",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35d0",
"hidden": false,
"name": "Kengo Zenitani",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35d1",
"hidden": false,
"name": "Julien Guillod",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35d2",
"hidden": false,
"name": "Yuqi Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35d3",
"hidden": false,
"name": "Joshua Vendrow",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35d4",
"hidden": false,
"name": "Vladyslav Kuchkin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35d5",
"hidden": false,
"name": "Ng Ze-An",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35d6",
"hidden": false,
"name": "Pierre Marion",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35d7",
"hidden": false,
"name": "Denis Efremov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35d8",
"hidden": false,
"name": "Jayson Lynch",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35d9",
"hidden": false,
"name": "Kaiqu Liang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35da",
"hidden": false,
"name": "Andrew Gritsevskiy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35db",
"hidden": false,
"name": "Dakotah Martinez",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35dc",
"hidden": false,
"name": "Ben Pageler",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35dd",
"hidden": false,
"name": "Nick Crispino",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35de",
"hidden": false,
"name": "Dimitri Zvonkine",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35df",
"hidden": false,
"name": "Natanael Wildner Fraga",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35e0",
"hidden": false,
"name": "Saeed Soori",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35e1",
"hidden": false,
"name": "Ori Press",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35e2",
"hidden": false,
"name": "Henry Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35e3",
"hidden": false,
"name": "Julian Salazar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35e4",
"hidden": false,
"name": "Sean R. Green",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35e5",
"hidden": false,
"name": "Lina Brüssel",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35e6",
"hidden": false,
"name": "Moon Twayana",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35e7",
"hidden": false,
"name": "Aymeric Dieuleveut",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35e8",
"hidden": false,
"name": "T. Ryan Rogers",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35e9",
"hidden": false,
"name": "Wenjin Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35ea",
"hidden": false,
"name": "Bikun Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35eb",
"hidden": false,
"name": "Jinzhou Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35ec",
"hidden": false,
"name": "Arun Rao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35ed",
"hidden": false,
"name": "Gabriel Loiseau",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35ee",
"hidden": false,
"name": "Mikhail Kalinin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35ef",
"hidden": false,
"name": "Marco Lukas",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35f0",
"hidden": false,
"name": "Ciprian Manolescu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35f1",
"hidden": false,
"name": "Subrata Mishra",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35f2",
"hidden": false,
"name": "Ariel Ghislain Kemogne Kamdoum",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35f3",
"hidden": false,
"name": "Tobias Kreiman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35f4",
"hidden": false,
"name": "Tad Hogg",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35f5",
"hidden": false,
"name": "Alvin Jin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35f6",
"hidden": false,
"name": "Carlo Bosio",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35f7",
"hidden": false,
"name": "Gongbo Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35f8",
"hidden": false,
"name": "Brian P Coppola",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35f9",
"hidden": false,
"name": "Tim Tarver",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35fa",
"hidden": false,
"name": "Haline Heidinger",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35fb",
"hidden": false,
"name": "Rafael Sayous",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35fc",
"hidden": false,
"name": "Stefan Ivanov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35fd",
"hidden": false,
"name": "Joseph M Cavanagh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35fe",
"hidden": false,
"name": "Jiawei Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d35ff",
"hidden": false,
"name": "Joseph Marvin Imperial",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3600",
"hidden": false,
"name": "Philippe Schwaller",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3601",
"hidden": false,
"name": "Shaipranesh Senthilkuma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3602",
"hidden": false,
"name": "Andres M Bran",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3603",
"hidden": false,
"name": "Ali Dehghan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3604",
"hidden": false,
"name": "Andres Algaba",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3605",
"hidden": false,
"name": "Brecht Verbeken",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3606",
"hidden": false,
"name": "David Noever",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-28T10:10:21.620Z",
"user": {
"_id": "63136a82e29fb2e86d5e5bdd",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/63136a82e29fb2e86d5e5bdd/pFZDuQtzfUStovbwwZGvn.png",
"fullname": "David Noever",
"isPro": false,
"type": "user",
"user": "dnoever"
}
},
{
"_id": "679704b422b334a8370d3607",
"hidden": false,
"name": "Ragavendran P V",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3608",
"hidden": false,
"name": "Lisa Schut",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3609",
"hidden": false,
"name": "Ilia Sucholutsky",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d360a",
"hidden": false,
"name": "Evgenii Zheltonozhskii",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d360b",
"hidden": false,
"name": "Derek Lim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d360c",
"hidden": false,
"name": "Richard Stanley",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d360d",
"hidden": false,
"name": "Shankar Sivarajan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d360e",
"hidden": false,
"name": "Tong Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d360f",
"hidden": false,
"name": "John Maar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3610",
"hidden": false,
"name": "Julian Wykowski",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3611",
"hidden": false,
"name": "Martí Oller",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3612",
"hidden": false,
"name": "Jennifer Sandlin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3613",
"hidden": false,
"name": "Anmol Sahu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3614",
"hidden": false,
"name": "Yuzheng Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3615",
"hidden": false,
"name": "Sara Fish",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3616",
"hidden": false,
"name": "Nasser Heydari",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3617",
"hidden": false,
"name": "Archimedes Apronti",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3618",
"hidden": false,
"name": "Kaivalya Rawal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3619",
"hidden": false,
"name": "Tobias Garcia Vilchis",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d361a",
"hidden": false,
"name": "Yuexuan Zu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d361b",
"hidden": false,
"name": "Martin Lackner",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d361c",
"hidden": false,
"name": "James Koppel",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d361d",
"hidden": false,
"name": "Jeremy Nguyen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d361e",
"hidden": false,
"name": "Daniil S. Antonenko",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d361f",
"hidden": false,
"name": "Steffi Chern",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3620",
"hidden": false,
"name": "Bingchen Zhao",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-28T10:10:25.495Z",
"user": {
"_id": "62dcd71075e9787ec5aa41ba",
"avatarUrl": "/avatars/f37ce036b76180ed0fa004f9c8c09363.svg",
"fullname": "Bingchen Zhao",
"isPro": false,
"type": "user",
"user": "tennant"
}
},
{
"_id": "679704b422b334a8370d3621",
"hidden": false,
"name": "Pierrot Arsene",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3622",
"hidden": false,
"name": "Alan Goldfarb",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3623",
"hidden": false,
"name": "Sergey Ivanov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3624",
"hidden": false,
"name": "Rafał Poświata",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-28T10:10:27.365Z",
"user": {
"_id": "63933543f8b4767ae646e8a1",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1670591762482-noauth.png",
"fullname": "Rafał Poświata",
"isPro": false,
"type": "user",
"user": "rafalposwiata"
}
},
{
"_id": "679704b422b334a8370d3625",
"hidden": false,
"name": "Chenguang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3626",
"hidden": false,
"name": "Daofeng Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3627",
"hidden": false,
"name": "Donato Crisostomi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3628",
"hidden": false,
"name": "Andrea Achilleos",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3629",
"hidden": false,
"name": "Benjamin Myklebust",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d362a",
"hidden": false,
"name": "Archan Sen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d362b",
"hidden": false,
"name": "David Perrella",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d362c",
"hidden": false,
"name": "Nurdin Kaparov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d362d",
"hidden": false,
"name": "Mark H Inlow",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d362e",
"hidden": false,
"name": "Allen Zang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d362f",
"hidden": false,
"name": "Elliott Thornley",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3630",
"hidden": false,
"name": "Daniil Orel",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3631",
"hidden": false,
"name": "Vladislav Poritski",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3632",
"hidden": false,
"name": "Shalev Ben-David",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3633",
"hidden": false,
"name": "Zachary Berger",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3634",
"hidden": false,
"name": "Parker Whitfill",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3635",
"hidden": false,
"name": "Michael Foster",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3636",
"hidden": false,
"name": "Daniel Munro",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3637",
"hidden": false,
"name": "Linh Ho",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3638",
"hidden": false,
"name": "Dan Bar Hava",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3639",
"hidden": false,
"name": "Aleksey Kuchkin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d363a",
"hidden": false,
"name": "Robert Lauff",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d363b",
"hidden": false,
"name": "David Holmes",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d363c",
"hidden": false,
"name": "Frank Sommerhage",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d363d",
"hidden": false,
"name": "Keith Schneider",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d363e",
"hidden": false,
"name": "Zakayo Kazibwe",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d363f",
"hidden": false,
"name": "Nate Stambaugh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3640",
"hidden": false,
"name": "Mukhwinder Singh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3641",
"hidden": false,
"name": "Ilias Magoulas",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3642",
"hidden": false,
"name": "Don Clarke",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3643",
"hidden": false,
"name": "Dae Hyun Kim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3644",
"hidden": false,
"name": "Felipe Meneguitti Dias",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3645",
"hidden": false,
"name": "Veit Elser",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3646",
"hidden": false,
"name": "Kanu Priya Agarwal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3647",
"hidden": false,
"name": "Victor Efren Guadarrama Vilchis",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3648",
"hidden": false,
"name": "Immo Klose",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3649",
"hidden": false,
"name": "Christoph Demian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d364a",
"hidden": false,
"name": "Ujjwala Anantheswaran",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d364b",
"hidden": false,
"name": "Adam Zweiger",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d364c",
"hidden": false,
"name": "Guglielmo Albani",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d364d",
"hidden": false,
"name": "Jeffery Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d364e",
"hidden": false,
"name": "Nicolas Daans",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d364f",
"hidden": false,
"name": "Maksim Radionov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3650",
"hidden": false,
"name": "Václav Rozhoň",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3651",
"hidden": false,
"name": "Ziqiao Ma",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-27T22:09:25.436Z",
"user": {
"_id": "630cfc45b66f088d547b2768",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/630cfc45b66f088d547b2768/ge2BZLDYeTlpnlIlraHNw.png",
"fullname": "Martin Ziqiao Ma",
"isPro": false,
"type": "user",
"user": "marstin"
}
},
{
"_id": "679704b422b334a8370d3652",
"hidden": false,
"name": "Christian Stump",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3653",
"hidden": false,
"name": "Mohammed Berkani",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3654",
"hidden": false,
"name": "Jacob Platnick",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3655",
"hidden": false,
"name": "Volodymyr Nevirkovets",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3656",
"hidden": false,
"name": "Luke Basler",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3657",
"hidden": false,
"name": "Marco Piccardo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3658",
"hidden": false,
"name": "Ferenc Jeanplong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3659",
"hidden": false,
"name": "Niv Cohen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d365a",
"hidden": false,
"name": "Josef Tkadlec",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d365b",
"hidden": false,
"name": "Paul Rosu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d365c",
"hidden": false,
"name": "Piotr Padlewski",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d365d",
"hidden": false,
"name": "Stanislaw Barzowski",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d365e",
"hidden": false,
"name": "Kyle Montgomery",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d365f",
"hidden": false,
"name": "Aline Menezes",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3660",
"hidden": false,
"name": "Arkil Patel",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-24T09:26:54.744Z",
"user": {
"_id": "631a523c04f8ed65eff16fb4",
"avatarUrl": "/avatars/2b284403c88f140d7bef283f729f7a3e.svg",
"fullname": "Arkil Patel",
"isPro": false,
"type": "user",
"user": "arkilpatel"
}
},
{
"_id": "679704b422b334a8370d3661",
"hidden": false,
"name": "Zixuan Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3662",
"hidden": false,
"name": "Jamie Tucker-Foltz",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3663",
"hidden": false,
"name": "Jack Stade",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3664",
"hidden": false,
"name": "Tom Goertzen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3665",
"hidden": false,
"name": "Fereshteh Kazemi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3666",
"hidden": false,
"name": "Jeremiah Milbauer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3667",
"hidden": false,
"name": "John Arnold Ambay",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3668",
"hidden": false,
"name": "Abhishek Shukla",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3669",
"hidden": false,
"name": "Yan Carlos Leyva Labrador",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d366a",
"hidden": false,
"name": "Alan Givré",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d366b",
"hidden": false,
"name": "Hew Wolff",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d366c",
"hidden": false,
"name": "Vivien Rossbach",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d366d",
"hidden": false,
"name": "Muhammad Fayez Aziz",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d366e",
"hidden": false,
"name": "Younesse Kaddar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d366f",
"hidden": false,
"name": "Yanxu Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3670",
"hidden": false,
"name": "Robin Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3671",
"hidden": false,
"name": "Jiayi Pan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3672",
"hidden": false,
"name": "Antonio Terpin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3673",
"hidden": false,
"name": "Niklas Muennighoff",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3674",
"hidden": false,
"name": "Hailey Schoelkopf",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3675",
"hidden": false,
"name": "Eric Zheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3676",
"hidden": false,
"name": "Avishy Carmi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3677",
"hidden": false,
"name": "Adam Jones",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3678",
"hidden": false,
"name": "Jainam Shah",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3679",
"hidden": false,
"name": "Ethan D. L. Brown",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d367a",
"hidden": false,
"name": "Kelin Zhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d367b",
"hidden": false,
"name": "Max Bartolo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d367c",
"hidden": false,
"name": "Richard Wheeler",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d367d",
"hidden": false,
"name": "Andrew Ho",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d367e",
"hidden": false,
"name": "Shaul Barkan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d367f",
"hidden": false,
"name": "Jiaqi Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3680",
"hidden": false,
"name": "Martin Stehberger",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3681",
"hidden": false,
"name": "Egor Kretov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3682",
"hidden": false,
"name": "Kaustubh Sridhar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3683",
"hidden": false,
"name": "Zienab EL-Wasif",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3684",
"hidden": false,
"name": "Anji Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3685",
"hidden": false,
"name": "Daniel Pyda",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3686",
"hidden": false,
"name": "Joanna Tam",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3687",
"hidden": false,
"name": "David M. Cunningham",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3688",
"hidden": false,
"name": "Vladimir Goryachev",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3689",
"hidden": false,
"name": "Demosthenes Patramanis",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d368a",
"hidden": false,
"name": "Michael Krause",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d368b",
"hidden": false,
"name": "Andrew Redenti",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d368c",
"hidden": false,
"name": "Daniel Bugas",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d368d",
"hidden": false,
"name": "David Aldous",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d368e",
"hidden": false,
"name": "Jesyin Lai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d368f",
"hidden": false,
"name": "Shannon Coleman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3690",
"hidden": false,
"name": "Mohsen Bahaloo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3691",
"hidden": false,
"name": "Jiangnan Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3692",
"hidden": false,
"name": "Sangwon Lee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3693",
"hidden": false,
"name": "Sandy Zhao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3694",
"hidden": false,
"name": "Ning Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3695",
"hidden": false,
"name": "Michael K. Cohen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3696",
"hidden": false,
"name": "Micah Carroll",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3697",
"hidden": false,
"name": "Orr Paradise",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3698",
"hidden": false,
"name": "Jan Hendrik Kirchner",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3699",
"hidden": false,
"name": "Stefan Steinerberger",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d369a",
"hidden": false,
"name": "Maksym Ovchynnikov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d369b",
"hidden": false,
"name": "Jason O. Matos",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d369c",
"hidden": false,
"name": "Adithya Shenoy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d369d",
"hidden": false,
"name": "Benedito Alves de Oliveira Junior",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d369e",
"hidden": false,
"name": "Michael Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d369f",
"hidden": false,
"name": "Yuzhou Nie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36a0",
"hidden": false,
"name": "Paolo Giordano",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36a1",
"hidden": false,
"name": "Philipp Petersen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36a2",
"hidden": false,
"name": "Anna Sztyber-Betley",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36a3",
"hidden": false,
"name": "Priti Shukla",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36a4",
"hidden": false,
"name": "Jonathan Crozier",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36a5",
"hidden": false,
"name": "Antonella Pinto",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36a6",
"hidden": false,
"name": "Shreyas Verma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36a7",
"hidden": false,
"name": "Prashant Joshi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36a8",
"hidden": false,
"name": "Zheng-Xin Yong",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36a9",
"hidden": false,
"name": "Allison Tee",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36aa",
"hidden": false,
"name": "Jérémy Andréoletti",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36ab",
"hidden": false,
"name": "Orion Weller",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36ac",
"hidden": false,
"name": "Raghav Singhal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36ad",
"hidden": false,
"name": "Gang Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36ae",
"hidden": false,
"name": "Alexander Ivanov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36af",
"hidden": false,
"name": "Seri Khoury",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36b0",
"hidden": false,
"name": "Hamid Mostaghimi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36b1",
"hidden": false,
"name": "Kunvar Thaman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36b2",
"hidden": false,
"name": "Qijia Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36b3",
"hidden": false,
"name": "Tran Quoc Khánh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36b4",
"hidden": false,
"name": "Jacob Loader",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36b5",
"hidden": false,
"name": "Stefano Cavalleri",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36b6",
"hidden": false,
"name": "Hannah Szlyk",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36b7",
"hidden": false,
"name": "Zachary Brown",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36b8",
"hidden": false,
"name": "Jonathan Roberts",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36b9",
"hidden": false,
"name": "William Alley",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36ba",
"hidden": false,
"name": "Kunyang Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36bb",
"hidden": false,
"name": "Ryan Stendall",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36bc",
"hidden": false,
"name": "Max Lamparth",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36bd",
"hidden": false,
"name": "Anka Reuel",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36be",
"hidden": false,
"name": "Ting Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36bf",
"hidden": false,
"name": "Hanmeng Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36c0",
"hidden": false,
"name": "Sreenivas Goud Raparthi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36c1",
"hidden": false,
"name": "Pablo Hernández-Cámara",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36c2",
"hidden": false,
"name": "Freddie Martin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36c3",
"hidden": false,
"name": "Dmitry Malishev",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36c4",
"hidden": false,
"name": "Thomas Preu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36c5",
"hidden": false,
"name": "Tomek Korbak",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36c6",
"hidden": false,
"name": "Marcus Abramovitch",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36c7",
"hidden": false,
"name": "Dominic Williamson",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36c8",
"hidden": false,
"name": "Ziye Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36c9",
"hidden": false,
"name": "Biró Bálint",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36ca",
"hidden": false,
"name": "M Saiful Bari",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36cb",
"hidden": false,
"name": "Peyman Kassani",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36cc",
"hidden": false,
"name": "Zihao Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36cd",
"hidden": false,
"name": "Behzad Ansarinejad",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36ce",
"hidden": false,
"name": "Laxman Prasad Goswami",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36cf",
"hidden": false,
"name": "Yewen Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36d0",
"hidden": false,
"name": "Hossam Elgnainy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36d1",
"hidden": false,
"name": "Daniel Tordera",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36d2",
"hidden": false,
"name": "George Balabanian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36d3",
"hidden": false,
"name": "Earth Anderson",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36d4",
"hidden": false,
"name": "Lynna Kvistad",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36d5",
"hidden": false,
"name": "Alejandro José Moyano",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36d6",
"hidden": false,
"name": "Rajat Maheshwari",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36d7",
"hidden": false,
"name": "Ahmad Sakor",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36d8",
"hidden": false,
"name": "Murat Eron",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36d9",
"hidden": false,
"name": "Isaac C. McAlister",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36da",
"hidden": false,
"name": "Javier Gimenez",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36db",
"hidden": false,
"name": "Innocent Enyekwe",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36dc",
"hidden": false,
"name": "Andrew Favre D. O.",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36dd",
"hidden": false,
"name": "Shailesh Shah",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36de",
"hidden": false,
"name": "Xiaoxiang Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36df",
"hidden": false,
"name": "Firuz Kamalov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36e0",
"hidden": false,
"name": "Ronald Clark",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36e1",
"hidden": false,
"name": "Sherwin Abdoli",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36e2",
"hidden": false,
"name": "Tim Santens",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36e3",
"hidden": false,
"name": "Khalida Meer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36e4",
"hidden": false,
"name": "Harrison K Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36e5",
"hidden": false,
"name": "Kalyan Ramakrishnan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36e6",
"hidden": false,
"name": "Evan Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36e7",
"hidden": false,
"name": "Alessandro Tomasiello",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36e8",
"hidden": false,
"name": "G. Bruno De Luca",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36e9",
"hidden": false,
"name": "Shi-Zhuo Looi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36ea",
"hidden": false,
"name": "Vinh-Kha Le",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36eb",
"hidden": false,
"name": "Noam Kolt",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36ec",
"hidden": false,
"name": "Niels Mündler",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36ed",
"hidden": false,
"name": "Avi Semler",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36ee",
"hidden": false,
"name": "Emma Rodman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36ef",
"hidden": false,
"name": "Jacob Drori",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36f0",
"hidden": false,
"name": "Carl J Fossum",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36f1",
"hidden": false,
"name": "Milind Jagota",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36f2",
"hidden": false,
"name": "Ronak Pradeep",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36f3",
"hidden": false,
"name": "Honglu Fan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36f4",
"hidden": false,
"name": "Tej Shah",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36f5",
"hidden": false,
"name": "Jonathan Eicher",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36f6",
"hidden": false,
"name": "Michael Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36f7",
"hidden": false,
"name": "Kushal Thaman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36f8",
"hidden": false,
"name": "William Merrill",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36f9",
"hidden": false,
"name": "Carter Harris",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36fa",
"hidden": false,
"name": "Jason Gross",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36fb",
"hidden": false,
"name": "Ilya Gusev",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36fc",
"hidden": false,
"name": "Asankhaya Sharma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36fd",
"hidden": false,
"name": "Shashank Agnihotri",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36fe",
"hidden": false,
"name": "Pavel Zhelnov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d36ff",
"hidden": false,
"name": "Siranut Usawasutsakorn",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3700",
"hidden": false,
"name": "Mohammadreza Mofayezi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3701",
"hidden": false,
"name": "Sergei Bogdanov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3702",
"hidden": false,
"name": "Alexander Piperski",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3703",
"hidden": false,
"name": "Marc Carauleanu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3704",
"hidden": false,
"name": "David K. Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3705",
"hidden": false,
"name": "Dylan Ler",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3706",
"hidden": false,
"name": "Roman Leventov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3707",
"hidden": false,
"name": "Ignat Soroko",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3708",
"hidden": false,
"name": "Thorben Jansen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3709",
"hidden": false,
"name": "Pascal Lauer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d370a",
"hidden": false,
"name": "Joshua Duersch",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d370b",
"hidden": false,
"name": "Vage Taamazyan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d370c",
"hidden": false,
"name": "Wiktor Morak",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d370d",
"hidden": false,
"name": "Wenjie Ma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d370e",
"hidden": false,
"name": "William Held",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-26T08:42:57.161Z",
"user": {
"_id": "632116accafe12f481a473cb",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1666207676653-632116accafe12f481a473cb.jpeg",
"fullname": "Will Held",
"isPro": true,
"type": "user",
"user": "WillHeld"
}
},
{
"_id": "679704b422b334a8370d370f",
"hidden": false,
"name": "Tran Đuc Huy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3710",
"hidden": false,
"name": "Ruicheng Xian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3711",
"hidden": false,
"name": "Armel Randy Zebaze",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3712",
"hidden": false,
"name": "Mohanad Mohamed",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3713",
"hidden": false,
"name": "Julian Noah Leser",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3714",
"hidden": false,
"name": "Michelle X Yuan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3715",
"hidden": false,
"name": "Laila Yacar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3716",
"hidden": false,
"name": "Johannes Lengler",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3717",
"hidden": false,
"name": "Hossein Shahrtash",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3718",
"hidden": false,
"name": "Edson Oliveira",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3719",
"hidden": false,
"name": "Joseph W. Jackson",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d371a",
"hidden": false,
"name": "Daniel Espinosa Gonzalez",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d371b",
"hidden": false,
"name": "Andy Zou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d371c",
"hidden": false,
"name": "Muthu Chidambaram",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d371d",
"hidden": false,
"name": "Timothy Manik",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d371e",
"hidden": false,
"name": "Hector Haffenden",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d371f",
"hidden": false,
"name": "Dashiell Stander",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3720",
"hidden": false,
"name": "Ali Dasouqi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3721",
"hidden": false,
"name": "Alexander Shen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3722",
"hidden": false,
"name": "Emilien Duc",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3723",
"hidden": false,
"name": "Bita Golshani",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3724",
"hidden": false,
"name": "David Stap",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3725",
"hidden": false,
"name": "Mikalai Uzhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3726",
"hidden": false,
"name": "Alina Borisovna Zhidkovskaya",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3727",
"hidden": false,
"name": "Lukas Lewark",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3728",
"hidden": false,
"name": "Mátyás Vincze",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-03T08:16:10.413Z",
"user": {
"_id": "6454203c548f22be59825ee3",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/noauth/kfKhnt59IiCvVoPS8JP3-.jpeg",
"fullname": "Mátyás Vincze",
"isPro": false,
"type": "user",
"user": "vinczematyas"
}
},
{
"_id": "679704b422b334a8370d3729",
"hidden": false,
"name": "Dustin Wehr",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d372a",
"hidden": false,
"name": "Colin Tang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d372b",
"hidden": false,
"name": "Zaki Hossain",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d372c",
"hidden": false,
"name": "Shaun Phillips",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d372d",
"hidden": false,
"name": "Jiang Muzhen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d372e",
"hidden": false,
"name": "Fredrik Ekström",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d372f",
"hidden": false,
"name": "Angela Hammon",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3730",
"hidden": false,
"name": "Oam Patel",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3731",
"hidden": false,
"name": "Nicolas Remy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3732",
"hidden": false,
"name": "Faraz Farhidi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3733",
"hidden": false,
"name": "George Medley",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3734",
"hidden": false,
"name": "Forough Mohammadzadeh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3735",
"hidden": false,
"name": "Madellene Peñaflor",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3736",
"hidden": false,
"name": "Haile Kassahun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3737",
"hidden": false,
"name": "Alena Friedrich",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3738",
"hidden": false,
"name": "Claire Sparrow",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3739",
"hidden": false,
"name": "Taom Sakal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d373a",
"hidden": false,
"name": "Omkar Dhamane",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d373b",
"hidden": false,
"name": "Ali Khajegili Mirabadi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d373c",
"hidden": false,
"name": "Eric Hallman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d373d",
"hidden": false,
"name": "Mike Battaglia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d373e",
"hidden": false,
"name": "Mohammad Maghsoudimehrabani",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d373f",
"hidden": false,
"name": "Hieu Hoang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3740",
"hidden": false,
"name": "Alon Amit",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3741",
"hidden": false,
"name": "Dave Hulbert",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3742",
"hidden": false,
"name": "Roberto Pereira",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3743",
"hidden": false,
"name": "Simon Weber",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3744",
"hidden": false,
"name": "Stephen Mensah",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3745",
"hidden": false,
"name": "Nathan Andre",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3746",
"hidden": false,
"name": "Anton Peristyy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3747",
"hidden": false,
"name": "Chris Harjadi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3748",
"hidden": false,
"name": "Himanshu Gupta",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3749",
"hidden": false,
"name": "Stephen Malina",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d374a",
"hidden": false,
"name": "Samuel Albanie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d374b",
"hidden": false,
"name": "Will Cai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d374c",
"hidden": false,
"name": "Mustafa Mehkary",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d374d",
"hidden": false,
"name": "Frank Reidegeld",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d374e",
"hidden": false,
"name": "Anna-Katharina Dick",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d374f",
"hidden": false,
"name": "Cary Friday",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3750",
"hidden": false,
"name": "Jasdeep Sidhu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3751",
"hidden": false,
"name": "Wanyoung Kim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3752",
"hidden": false,
"name": "Mariana Costa",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3753",
"hidden": false,
"name": "Hubeyb Gurdogan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3754",
"hidden": false,
"name": "Brian Weber",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3755",
"hidden": false,
"name": "Harsh Kumar",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3756",
"hidden": false,
"name": "Tong Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3757",
"hidden": false,
"name": "Arunim Agarwal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3758",
"hidden": false,
"name": "Chiara Ceconello",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3759",
"hidden": false,
"name": "Warren S. Vaz",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d375a",
"hidden": false,
"name": "Chao Zhuang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d375b",
"hidden": false,
"name": "Haon Park",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d375c",
"hidden": false,
"name": "Andrew R. Tawfeek",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d375d",
"hidden": false,
"name": "Daattavya Aggarwal",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d375e",
"hidden": false,
"name": "Michael Kirchhof",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d375f",
"hidden": false,
"name": "Linjie Dai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3760",
"hidden": false,
"name": "Evan Kim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3761",
"hidden": false,
"name": "Johan Ferret",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3762",
"hidden": false,
"name": "Yuzhou Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3763",
"hidden": false,
"name": "Minghao Yan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3764",
"hidden": false,
"name": "Krzysztof Burdzy",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3765",
"hidden": false,
"name": "Lixin Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3766",
"hidden": false,
"name": "Antonio Franca",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3767",
"hidden": false,
"name": "Diana T. Pham",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3768",
"hidden": false,
"name": "Kang Yong Loh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3769",
"hidden": false,
"name": "Joshua Robinson",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d376a",
"hidden": false,
"name": "Shreen Gul",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d376b",
"hidden": false,
"name": "Gunjan Chhablani",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d376c",
"hidden": false,
"name": "Zhehang Du",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d376d",
"hidden": false,
"name": "Adrian Cosma",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d376e",
"hidden": false,
"name": "Colin White",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d376f",
"hidden": false,
"name": "Robin Riblet",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3770",
"hidden": false,
"name": "Prajvi Saxena",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3771",
"hidden": false,
"name": "Jacob Votava",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3772",
"hidden": false,
"name": "Vladimir Vinnikov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3773",
"hidden": false,
"name": "Ethan Delaney",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3774",
"hidden": false,
"name": "Shiv Halasyamani",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3775",
"hidden": false,
"name": "Syed M. Shahid",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3776",
"hidden": false,
"name": "Jean-Christophe Mourrat",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3777",
"hidden": false,
"name": "Lavr Vetoshkin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3778",
"hidden": false,
"name": "Renas Bacho",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3779",
"hidden": false,
"name": "Vincent Ginis",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d377a",
"hidden": false,
"name": "Aleksandr Maksapetyan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d377b",
"hidden": false,
"name": "Florencia de la Rosa",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d377c",
"hidden": false,
"name": "Xiuyu Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d377d",
"hidden": false,
"name": "Guillaume Malod",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d377e",
"hidden": false,
"name": "Leon Lang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d377f",
"hidden": false,
"name": "Julien Laurendeau",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3780",
"hidden": false,
"name": "Fatimah Adesanya",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3781",
"hidden": false,
"name": "Julien Portier",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3782",
"hidden": false,
"name": "Lawrence Hollom",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3783",
"hidden": false,
"name": "Victor Souza",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3784",
"hidden": false,
"name": "Yuchen Anna Zhou",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3785",
"hidden": false,
"name": "Yiğit Yalın",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3786",
"hidden": false,
"name": "Gbenga Daniel Obikoya",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3787",
"hidden": false,
"name": "Luca Arnaboldi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3788",
"hidden": false,
"name": "Rai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3789",
"hidden": false,
"name": "Filippo Bigi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d378a",
"hidden": false,
"name": "Kaniuar Bacho",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d378b",
"hidden": false,
"name": "Pierre Clavier",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d378c",
"hidden": false,
"name": "Gabriel Recchia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d378d",
"hidden": false,
"name": "Mara Popescu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d378e",
"hidden": false,
"name": "Nikita Shulga",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d378f",
"hidden": false,
"name": "Ngefor Mildred Tanwie",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3790",
"hidden": false,
"name": "Thomas C. H. Lux",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3791",
"hidden": false,
"name": "Ben Rank",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3792",
"hidden": false,
"name": "Colin Ni",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3793",
"hidden": false,
"name": "Alesia Yakimchyk",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3794",
"hidden": false,
"name": "Huanxu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3795",
"hidden": false,
"name": "Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3796",
"hidden": false,
"name": "Olle Häggström",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3797",
"hidden": false,
"name": "Emil Verkama",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3798",
"hidden": false,
"name": "Himanshu Narayan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3799",
"hidden": false,
"name": "Hans Gundlach",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d379a",
"hidden": false,
"name": "Leonor Brito-Santana",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d379b",
"hidden": false,
"name": "Brian Amaro",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d379c",
"hidden": false,
"name": "Vivek Vajipey",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d379d",
"hidden": false,
"name": "Rynaa Grover",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d379e",
"hidden": false,
"name": "Yiyang Fan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d379f",
"hidden": false,
"name": "Gabriel Poesia Reis e Silva",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37a0",
"hidden": false,
"name": "Linwei Xin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37a1",
"hidden": false,
"name": "Yosi Kratish",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37a2",
"hidden": false,
"name": "Jakub Łucki",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37a3",
"hidden": false,
"name": "Wen-Ding Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37a4",
"hidden": false,
"name": "Justin Xu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37a5",
"hidden": false,
"name": "Kevin Joseph Scaria",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37a6",
"hidden": false,
"name": "Freddie Vargus",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37a7",
"hidden": false,
"name": "Farzad Habibi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37a8",
"hidden": false,
"name": "Long",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37a9",
"hidden": false,
"name": "Lian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37aa",
"hidden": false,
"name": "Emanuele Rodolà",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37ab",
"hidden": false,
"name": "Jules Robins",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37ac",
"hidden": false,
"name": "Vincent Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37ad",
"hidden": false,
"name": "Declan Grabb",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37ae",
"hidden": false,
"name": "Ida Bosio",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37af",
"hidden": false,
"name": "Tony Fruhauff",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37b0",
"hidden": false,
"name": "Ido Akov",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37b1",
"hidden": false,
"name": "Eve J. Y. Lo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37b2",
"hidden": false,
"name": "Hao Qi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37b3",
"hidden": false,
"name": "Xi Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37b4",
"hidden": false,
"name": "Ben Segev",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37b5",
"hidden": false,
"name": "Jingxuan Fan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37b6",
"hidden": false,
"name": "Sarah Martinson",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37b7",
"hidden": false,
"name": "Erik Y. Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37b8",
"hidden": false,
"name": "Kaylie Hausknecht",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37b9",
"hidden": false,
"name": "Michael P. Brenner",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37ba",
"hidden": false,
"name": "Mao Mao",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37bb",
"hidden": false,
"name": "Yibo Jiang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37bc",
"hidden": false,
"name": "Xinyu Zhang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37bd",
"hidden": false,
"name": "David Avagian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37be",
"hidden": false,
"name": "Eshawn Jessica Scipio",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37bf",
"hidden": false,
"name": "Muhammad Rehan Siddiqi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37c0",
"hidden": false,
"name": "Alon Ragoler",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37c1",
"hidden": false,
"name": "Justin Tan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37c2",
"hidden": false,
"name": "Deepakkumar Patil",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37c3",
"hidden": false,
"name": "Rebeka Plecnik",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37c4",
"hidden": false,
"name": "Aaron Kirtland",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37c5",
"hidden": false,
"name": "Roselynn Grace Montecillo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37c6",
"hidden": false,
"name": "Stephane Durand",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37c7",
"hidden": false,
"name": "Omer Faruk Bodur",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37c8",
"hidden": false,
"name": "Zahra Adoul",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37c9",
"hidden": false,
"name": "Mohamed Zekry",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37ca",
"hidden": false,
"name": "Guillaume Douville",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37cb",
"hidden": false,
"name": "Ali Karakoc",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37cc",
"hidden": false,
"name": "Tania C. B. Santos",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37cd",
"hidden": false,
"name": "Samir Shamseldeen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37ce",
"hidden": false,
"name": "Loukmane Karim",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37cf",
"hidden": false,
"name": "Anna Liakhovitskaia",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37d0",
"hidden": false,
"name": "Nate Resman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37d1",
"hidden": false,
"name": "Nicholas Farina",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37d2",
"hidden": false,
"name": "Juan Carlos Gonzalez",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37d3",
"hidden": false,
"name": "Gabe Maayan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37d4",
"hidden": false,
"name": "Sarah Hoback",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37d5",
"hidden": false,
"name": "Rodrigo De Oliveira Pena",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37d6",
"hidden": false,
"name": "Glen Sherman",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37d7",
"hidden": false,
"name": "Hodjat Mariji",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37d8",
"hidden": false,
"name": "Rasoul Pouriamanesh",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37d9",
"hidden": false,
"name": "Wentao Wu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37da",
"hidden": false,
"name": "Gözdenur Demir",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37db",
"hidden": false,
"name": "Sandra Mendoza",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37dc",
"hidden": false,
"name": "Ismail Alarab",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37dd",
"hidden": false,
"name": "Joshua Cole",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37de",
"hidden": false,
"name": "Danyelle Ferreira",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37df",
"hidden": false,
"name": "Bryan Johnson",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37e0",
"hidden": false,
"name": "Hsiaoyun Milliron",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37e1",
"hidden": false,
"name": "Mohammad Safdari",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37e2",
"hidden": false,
"name": "Liangti Dai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37e3",
"hidden": false,
"name": "Siriphan Arthornthurasuk",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37e4",
"hidden": false,
"name": "Alexey Pronin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37e5",
"hidden": false,
"name": "Jing Fan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37e6",
"hidden": false,
"name": "Angel Ramirez-Trinidad",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37e7",
"hidden": false,
"name": "Ashley Cartwright",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37e8",
"hidden": false,
"name": "Daphiny Pottmaier",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37e9",
"hidden": false,
"name": "Omid Taheri",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37ea",
"hidden": false,
"name": "David Outevsky",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37eb",
"hidden": false,
"name": "Stanley Stepanic",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37ec",
"hidden": false,
"name": "Samuel Perry",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37ed",
"hidden": false,
"name": "Luke Askew",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37ee",
"hidden": false,
"name": "Raúl Adrián Huerta Rodríguez",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37ef",
"hidden": false,
"name": "Abdelkader Dendane",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37f0",
"hidden": false,
"name": "Sam Ali",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37f1",
"hidden": false,
"name": "Ricardo Lorena",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37f2",
"hidden": false,
"name": "Krishnamurthy Iyer",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37f3",
"hidden": false,
"name": "Sk Md Salauddin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37f4",
"hidden": false,
"name": "Murat Islam",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37f5",
"hidden": false,
"name": "Juan Gonzalez",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37f6",
"hidden": false,
"name": "Josh Ducey",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37f7",
"hidden": false,
"name": "Russell Campbell",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37f8",
"hidden": false,
"name": "Maja Somrak",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37f9",
"hidden": false,
"name": "Vasilios Mavroudis",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-03T08:16:07.872Z",
"user": {
"_id": "6394e3bb0663665c5f01f688",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1670855654319-6394e3bb0663665c5f01f688.jpeg",
"fullname": "Vasilios Mavroudis",
"isPro": false,
"type": "user",
"user": "vasilisM"
}
},
{
"_id": "679704b422b334a8370d37fa",
"hidden": false,
"name": "Eric Vergo",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37fb",
"hidden": false,
"name": "Juehang Qin",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37fc",
"hidden": false,
"name": "Benjámin Borbás",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37fd",
"hidden": false,
"name": "Eric Chu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37fe",
"hidden": false,
"name": "Jack Lindsey",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d37ff",
"hidden": false,
"name": "Anil Radhakrishnan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3800",
"hidden": false,
"name": "Antoine Jallon",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3801",
"hidden": false,
"name": "I. M. J. McInnis",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3802",
"hidden": false,
"name": "Alex Hoover",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3803",
"hidden": false,
"name": "Sören Möller",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3804",
"hidden": false,
"name": "Song Bian",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3805",
"hidden": false,
"name": "John Lai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3806",
"hidden": false,
"name": "Tejal Patwardhan",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3807",
"hidden": false,
"name": "Summer Yue",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3808",
"hidden": false,
"name": "Alexandr Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "679704b422b334a8370d3809",
"hidden": false,
"name": "Dan Hendrycks",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-24T05:27:46 |
Humanity's Last Exam
|
Benchmarks are important tools for tracking the rapid advancements in large
language model (LLM) capabilities. However, benchmarks are not keeping pace in
difficulty: LLMs now achieve over 90\% accuracy on popular benchmarks like
MMLU, limiting informed measurement of state-of-the-art LLM capabilities. In
response, we introduce Humanity's Last Exam (HLE), a multi-modal benchmark at
the frontier of human knowledge, designed to be the final closed-ended academic
benchmark of its kind with broad subject coverage. HLE consists of 3,000
questions across dozens of subjects, including mathematics, humanities, and the
natural sciences. HLE is developed globally by subject-matter experts and
consists of multiple-choice and short-answer questions suitable for automated
grading. Each question has a known solution that is unambiguous and easily
verifiable, but cannot be quickly answered via internet retrieval.
State-of-the-art LLMs demonstrate low accuracy and calibration on HLE,
highlighting a significant gap between current LLM capabilities and the expert
human frontier on closed-ended academic questions. To inform research and
policymaking upon a clear understanding of model capabilities, we publicly
release HLE at https://lastexam.ai.
| 64 |
679704b522b334a8370d3847
| null | null |
|
2025-01-26T22:30:49.642000 |
RealCritic: Towards Effectiveness-Driven Evaluation of Language Model Critiques
| 2 |
{
"_id": "64912976b95c3f0a1e6233cb",
"avatarUrl": "/avatars/c0615f8c6606073faffb419757d4e667.svg",
"followerCount": 1,
"fullname": "Zhengyang Tang",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "tangzhy",
"type": "user"
}
| true | null |
2501.14492
|
[
{
"_id": "6796fdb7582fda525a7f6434",
"hidden": false,
"name": "Zhengyang Tang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:00:51.828Z",
"user": {
"_id": "64912976b95c3f0a1e6233cb",
"avatarUrl": "/avatars/c0615f8c6606073faffb419757d4e667.svg",
"fullname": "Zhengyang Tang",
"isPro": false,
"type": "user",
"user": "tangzhy"
}
},
{
"_id": "6796fdb7582fda525a7f6435",
"hidden": false,
"name": "Ziniu Li",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:01:05.526Z",
"user": {
"_id": "647c4c901f878439e2fd34d6",
"avatarUrl": "/avatars/b4399d210d7239d4662b11a4ee7b527d.svg",
"fullname": "Ziniu Li",
"isPro": false,
"type": "user",
"user": "ziniuli"
}
},
{
"_id": "6796fdb7582fda525a7f6436",
"hidden": false,
"name": "Zhenyang Xiao",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:01:15.578Z",
"user": {
"_id": "649d87b19f0ffe21ef26d36b",
"avatarUrl": "/avatars/2a0b47328a780f24a640933111fe163c.svg",
"fullname": "xiaozhenyang",
"isPro": false,
"type": "user",
"user": "yeshoubaizi"
}
},
{
"_id": "6796fdb7582fda525a7f6437",
"hidden": false,
"name": "Tian Ding",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6796fdb7582fda525a7f6438",
"hidden": false,
"name": "Ruoyu Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6796fdb7582fda525a7f6439",
"hidden": false,
"name": "Benyou Wang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:01:55.582Z",
"user": {
"_id": "637c6703ca8542a0ba900ccb",
"avatarUrl": "/avatars/288ed63a1efa566c3f01e850c6ba5dd5.svg",
"fullname": "Wang",
"isPro": false,
"type": "user",
"user": "Benyou"
}
},
{
"_id": "6796fdb7582fda525a7f643a",
"hidden": false,
"name": "Dayiheng Liu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:02:01.463Z",
"user": {
"_id": "6434d4989bd5a84b5dd0b0f5",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/6434d4989bd5a84b5dd0b0f5/0Elf9qbfG9Hkgypm9pTGm.jpeg",
"fullname": "Dayiheng Liu",
"isPro": false,
"type": "user",
"user": "Losin94"
}
},
{
"_id": "6796fdb7582fda525a7f643b",
"hidden": false,
"name": "Fei Huang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6796fdb7582fda525a7f643c",
"hidden": false,
"name": "Tianyu Liu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6796fdb7582fda525a7f643d",
"hidden": false,
"name": "Bowen Yu",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:02:16.321Z",
"user": {
"_id": "6583ab7983a9e1460c67d876",
"avatarUrl": "/avatars/74400bc448c3f07e23a4cd53d68a6af7.svg",
"fullname": "bowen",
"isPro": false,
"type": "user",
"user": "bowenYu"
}
},
{
"_id": "6796fdb7582fda525a7f643e",
"hidden": false,
"name": "Junyang Lin",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T15:02:08.327Z",
"user": {
"_id": "620760a26e3b7210c2ff1943",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/620760a26e3b7210c2ff1943/VC-rKqimF6yxGESNVlPoR.jpeg",
"fullname": "Junyang Lin",
"isPro": false,
"type": "user",
"user": "JustinLin610"
}
}
] | 2025-01-24T13:48:10 |
RealCritic: Towards Effectiveness-Driven Evaluation of Language Model
Critiques
|
Critiques are important for enhancing the performance of Large Language
Models (LLMs), enabling both self-improvement and constructive feedback for
others by identifying flaws and suggesting improvements. However, evaluating
the critique capabilities of LLMs presents a significant challenge due to the
open-ended nature of the task. In this work, we introduce a new benchmark
designed to assess the critique capabilities of LLMs. Unlike existing
benchmarks, which typically function in an open-loop fashion, our approach
employs a closed-loop methodology that evaluates the quality of corrections
generated from critiques. Moreover, the benchmark incorporates features such as
self-critique, cross-critique, and iterative critique, which are crucial for
distinguishing the abilities of advanced reasoning models from more classical
ones. We implement this benchmark using eight challenging reasoning tasks. We
have several interesting findings. First, despite demonstrating comparable
performance in direct chain-of-thought generation, classical LLMs significantly
lag behind the advanced reasoning-based model o1-mini across all critique
scenarios. Second, in self-critique and iterative critique settings, classical
LLMs may even underperform relative to their baseline capabilities. We hope
that this benchmark will serve as a valuable resource to guide future
advancements. The code and data are available at
https://github.com/tangzhy/RealCritic.
| 31 |
6796fdb8582fda525a7f6496
| null | null |
|
2025-01-26T22:16:06.042000 |
Chain-of-Retrieval Augmented Generation
| 3 |
{
"_id": "60f1abe7544c2adfd699860c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"followerCount": 6280,
"fullname": "AK",
"isHf": true,
"isMod": false,
"isPro": false,
"name": "akhaliq",
"type": "user"
}
| false | null |
2501.14342
|
[
{
"_id": "6796fa2d4fccd4b951e8ab0a",
"hidden": false,
"name": "Liang Wang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6796fa2d4fccd4b951e8ab0b",
"hidden": false,
"name": "Haonan Chen",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-13T08:26:16.425Z",
"user": {
"_id": "66add675c7a575aa0e03d5f3",
"avatarUrl": "/avatars/b72b18130664c1de197c1f8df371aa70.svg",
"fullname": "Haonan Chen",
"isPro": false,
"type": "user",
"user": "Haon-Chen"
}
},
{
"_id": "6796fa2d4fccd4b951e8ab0c",
"hidden": false,
"name": "Nan Yang",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "6796fa2d4fccd4b951e8ab0d",
"hidden": false,
"name": "Xiaolong Huang",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T14:59:51.272Z",
"user": {
"_id": "64149ba8e682c73521d46ecb",
"avatarUrl": "/avatars/c1859aa2d295077e798c8bdc83664a07.svg",
"fullname": "Xiaolong Huang",
"isPro": false,
"type": "user",
"user": "DiamondH"
}
},
{
"_id": "6796fa2d4fccd4b951e8ab0e",
"hidden": false,
"name": "Zhicheng Dou",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T14:59:36.119Z",
"user": {
"_id": "66f0bf59e9d50ec57febf751",
"avatarUrl": "/avatars/be97941e60064e5dd806c6fe9db3c537.svg",
"fullname": "Zhicheng Dou",
"isPro": false,
"type": "user",
"user": "douzc"
}
},
{
"_id": "6796fa2d4fccd4b951e8ab0f",
"hidden": false,
"name": "Furu Wei",
"status": "admin_assigned",
"statusLastChangedAt": "2025-01-27T14:59:30.095Z",
"user": {
"_id": "6368c512fbfe97c16a40baba",
"avatarUrl": "/avatars/1c23bc7c0b6d9225699ce27647623d7a.svg",
"fullname": "Furu Wei",
"isPro": false,
"type": "user",
"user": "thegenerality"
}
}
] | 2025-01-24T09:12:52 |
Chain-of-Retrieval Augmented Generation
|
This paper introduces an approach for training o1-like RAG models that
retrieve and reason over relevant information step by step before generating
the final answer. Conventional RAG methods usually perform a single retrieval
step before the generation process, which limits their effectiveness in
addressing complex queries due to imperfect retrieval results. In contrast, our
proposed method, CoRAG (Chain-of-Retrieval Augmented Generation), allows the
model to dynamically reformulate the query based on the evolving state. To
train CoRAG effectively, we utilize rejection sampling to automatically
generate intermediate retrieval chains, thereby augmenting existing RAG
datasets that only provide the correct final answer. At test time, we propose
various decoding strategies to scale the model's test-time compute by
controlling the length and number of sampled retrieval chains. Experimental
results across multiple benchmarks validate the efficacy of CoRAG, particularly
in multi-hop question answering tasks, where we observe more than 10 points
improvement in EM score compared to strong baselines. On the KILT benchmark,
CoRAG establishes a new state-of-the-art performance across a diverse range of
knowledge-intensive tasks. Furthermore, we offer comprehensive analyses to
understand the scaling behavior of CoRAG, laying the groundwork for future
research aimed at developing factual and grounded foundation models.
| 52 |
6796fa2d4fccd4b951e8ab4e
| null | null |
|
2025-01-24T20:30:01.730000 |
EmbodiedEval: Evaluate Multimodal LLMs as Embodied Agents
| 2 |
{
"_id": "637a06580a77f602dc4ac922",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/637a06580a77f602dc4ac922/JR2VHWhG1tj3TKKNzD8GD.jpeg",
"followerCount": 8,
"fullname": "Jinyi Hu",
"isHf": false,
"isMod": false,
"isPro": false,
"name": "JamesHujy",
"type": "user"
}
| true | null |
2501.11858
|
[
{
"_id": "67943e877aa0faad86e4d750",
"hidden": false,
"name": "Zhili Cheng",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67943e877aa0faad86e4d751",
"hidden": false,
"name": "Yuge Tu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67943e877aa0faad86e4d752",
"hidden": false,
"name": "Ran Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67943e877aa0faad86e4d753",
"hidden": false,
"name": "Shiqi Dai",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67943e877aa0faad86e4d754",
"hidden": false,
"name": "Jinyi Hu",
"status": "claimed_verified",
"statusLastChangedAt": "2025-01-26T11:38:36.572Z",
"user": {
"_id": "637a06580a77f602dc4ac922",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/637a06580a77f602dc4ac922/JR2VHWhG1tj3TKKNzD8GD.jpeg",
"fullname": "Jinyi Hu",
"isPro": false,
"type": "user",
"user": "JamesHujy"
}
},
{
"_id": "67943e877aa0faad86e4d755",
"hidden": false,
"name": "Shengding Hu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67943e877aa0faad86e4d756",
"hidden": false,
"name": "Jiahao Li",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67943e877aa0faad86e4d757",
"hidden": false,
"name": "Yang Shi",
"status": "claimed_verified",
"statusLastChangedAt": "2025-02-18T09:35:10.394Z",
"user": {
"_id": "673c7319d11b1c2e246ead9c",
"avatarUrl": "https://aifasthub.com/avatars/v1/production/uploads/673c7319d11b1c2e246ead9c/IjFIO--N7Hm_BOEafhEQv.jpeg",
"fullname": "Yang Shi",
"isPro": false,
"type": "user",
"user": "DogNeverSleep"
}
},
{
"_id": "67943e877aa0faad86e4d758",
"hidden": false,
"name": "Tianyu Yu",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67943e877aa0faad86e4d759",
"hidden": false,
"name": "Weize Chen",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67943e877aa0faad86e4d75a",
"hidden": false,
"name": "Lei Shi",
"status": null,
"statusLastChangedAt": null,
"user": null
},
{
"_id": "67943e877aa0faad86e4d75b",
"hidden": false,
"name": "Maosong Sun",
"status": null,
"statusLastChangedAt": null,
"user": null
}
] | 2025-01-21T03:22:10 |
EmbodiedEval: Evaluate Multimodal LLMs as Embodied Agents
|
Multimodal Large Language Models (MLLMs) have shown significant advancements,
providing a promising future for embodied agents. Existing benchmarks for
evaluating MLLMs primarily utilize static images or videos, limiting
assessments to non-interactive scenarios. Meanwhile, existing embodied AI
benchmarks are task-specific and not diverse enough, which do not adequately
evaluate the embodied capabilities of MLLMs. To address this, we propose
EmbodiedEval, a comprehensive and interactive evaluation benchmark for MLLMs
with embodied tasks. EmbodiedEval features 328 distinct tasks within 125 varied
3D scenes, each of which is rigorously selected and annotated. It covers a
broad spectrum of existing embodied AI tasks with significantly enhanced
diversity, all within a unified simulation and evaluation framework tailored
for MLLMs. The tasks are organized into five categories: navigation, object
interaction, social interaction, attribute question answering, and spatial
question answering to assess different capabilities of the agents. We evaluated
the state-of-the-art MLLMs on EmbodiedEval and found that they have a
significant shortfall compared to human level on embodied tasks. Our analysis
demonstrates the limitations of existing MLLMs in embodied capabilities,
providing insights for their future development. We open-source all evaluation
data and simulation framework at https://github.com/thunlp/EmbodiedEval.
| 7 |
67943e8a7aa0faad86e4d9f7
| null | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.