Dataset schema (column name, value type, observed length or value range):

  paper_url            stringlengths    35 – 81
  arxiv_id             stringlengths    6 – 35
  nips_id              null
  openreview_id        stringlengths    9 – 93
  title                stringlengths    1 – 1.02k
  abstract             stringlengths    0 – 56.5k
  short_abstract       stringlengths    0 – 1.95k
  url_abs              stringlengths    16 – 996
  url_pdf              stringlengths    16 – 996
  proceeding           stringlengths    7 – 1.03k
  authors              listlengths      0 – 3.31k
  tasks                listlengths      0 – 147
  date                 timestamp[ns]    1951-09-01 00:00:00 – 2222-12-22 00:00:00
  conference_url_abs   stringlengths    16 – 199
  conference_url_pdf   stringlengths    21 – 200
  conference           stringlengths    2 – 47
  reproduces_paper     stringclasses    22 values
  methods              listlengths      0 – 7.5k
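The records that follow conform to this schema, one field value per line in the column order above. As an orientation aid, here is a minimal sketch of how such a dump could be loaded and queried with pandas; it assumes the table has been exported to a Parquet file, and the file name papers.parquet is purely hypothetical.

import pandas as pd

# Hypothetical local export of the table above; the file name is an assumption.
df = pd.read_parquet("papers.parquet")

# Sanity-check that the columns listed in the schema are present.
expected = {
    "paper_url", "arxiv_id", "nips_id", "openreview_id", "title",
    "abstract", "short_abstract", "url_abs", "url_pdf", "proceeding",
    "authors", "tasks", "date", "conference_url_abs", "conference_url_pdf",
    "conference", "reproduces_paper", "methods",
}
missing = expected - set(df.columns)
if missing:
    raise ValueError(f"missing columns: {sorted(missing)}")

# Example query: papers tagged with the "Code Generation" task, newest first.
codegen = df[df["tasks"].apply(lambda ts: ts is not None and "Code Generation" in list(ts))]
print(codegen.sort_values("date", ascending=False)[["date", "arxiv_id", "title"]].head())

The filter treats tasks as a list-valued field, matching its listlengths type in the schema.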
https://paperswithcode.com/paper/fourcastnet-3-a-geometric-approach-to
2507.12144
null
null
FourCastNet 3: A geometric approach to probabilistic machine-learning weather forecasting at scale
FourCastNet 3 advances global weather modeling by implementing a scalable, geometric machine learning (ML) approach to probabilistic ensemble forecasting. The approach is designed to respect spherical geometry and to accurately model the spatially correlated probabilistic nature of the problem, resulting in stable spectra and realistic dynamics across multiple scales. FourCastNet 3 delivers forecasting accuracy that surpasses leading conventional ensemble models and rivals the best diffusion-based methods, while producing forecasts 8 to 60 times faster than these approaches. In contrast to other ML approaches, FourCastNet 3 demonstrates excellent probabilistic calibration and retains realistic spectra, even at extended lead times of up to 60 days. All of these advances are realized using a purely convolutional neural network architecture tailored for spherical geometry. Scalable and efficient large-scale training on 1024 GPUs and more is enabled by a novel training paradigm for combined model- and data-parallelism, inspired by domain decomposition methods in classical numerical models. Additionally, FourCastNet 3 enables rapid inference on a single GPU, producing a 60-day global forecast at 0.25°, 6-hourly resolution in under 4 minutes. Its computational efficiency, medium-range probabilistic skill, spectral fidelity, and rollout stability at subseasonal timescales make it a strong candidate for improving meteorological forecasting and early warning systems through large ensemble predictions.
null
https://arxiv.org/abs/2507.12144v2
https://arxiv.org/pdf/2507.12144v2.pdf
null
[ "Boris Bonev", "Thorsten Kurth", "Ankur Mahesh", "Mauro Bisson", "Jean Kossaifi", "Karthik Kashinath", "Anima Anandkumar", "William D. Collins", "Michael S. Pritchard", "Alexander Keller" ]
[ "Computational Efficiency", "GPU", "Weather Forecasting" ]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/bridging-the-last-mile-of-prediction
2507.07192
null
null
Bridging the Last Mile of Prediction: Enhancing Time Series Forecasting with Conditional Guided Flow Matching
Diffusion models, a type of generative model, have shown promise in time series forecasting, but they face limitations such as rigid source distributions and limited sampling paths, which hinder their performance. Flow matching offers faster generation, higher-quality outputs, and greater flexibility, while also being able to exploit valuable information from the prediction errors of prior models, information that was previously inaccessible yet critically important. To address these challenges and fully unlock the untapped potential of flow matching, we propose Conditional Guided Flow Matching (CGFM). CGFM extends flow matching by incorporating the outputs of an auxiliary model, enabling a previously unattainable capability in the field: learning from the errors of the auxiliary model. For time series forecasting tasks, it integrates historical data as conditions and guidance, constructs two-sided conditional probability paths, and uses a general affine path to expand the space of probability paths, ultimately leading to improved predictions. Extensive experiments show that CGFM consistently enhances and outperforms state-of-the-art models, highlighting its effectiveness in advancing forecasting methods.
null
https://arxiv.org/abs/2507.07192v2
https://arxiv.org/pdf/2507.07192v2.pdf
null
[ "Huibo Xu", "Runlong Yu", "Likang Wu", "Xianquan Wang", "Qi Liu" ]
[ "Time Series", "Time Series Forecasting" ]
2025-07-09T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/exaone-4-0-unified-large-language-models
2507.11407
null
null
EXAONE 4.0: Unified Large Language Models Integrating Non-reasoning and Reasoning Modes
This technical report introduces EXAONE 4.0, which integrates a Non-reasoning mode and a Reasoning mode to achieve both the excellent usability of EXAONE 3.5 and the advanced reasoning abilities of EXAONE Deep. To pave the way for the agentic AI era, EXAONE 4.0 incorporates essential features such as agentic tool use, and its multilingual capabilities are extended to support Spanish in addition to English and Korean. The EXAONE 4.0 model series consists of two sizes: a mid-size 32B model optimized for high performance, and a small-size 1.2B model designed for on-device applications. EXAONE 4.0 demonstrates superior performance compared to open-weight models in its class and remains competitive even against frontier-class models. The models are publicly available for research purposes and can be easily downloaded via https://huggingface.co/LGAI-EXAONE.
null
https://arxiv.org/abs/2507.11407v1
https://arxiv.org/pdf/2507.11407v1.pdf
null
[ "LG AI Research", ":", "Kyunghoon Bae", "Eunbi Choi", "Kibong Choi", "Stanley Jungkyu Choi", "Yemuk Choi", "Kyubeen Han", "Seokhee Hong", "Junwon Hwang", "Taewan Hwang", "Joonwon Jang", "Hyojin Jeon", "Kijeong Jeon", "Gerrard Jeongwon Jo", "Hyunjik Jo", "Jiyeon Jung", "Euisoon Kim", "Hyosang Kim", "Jihoon Kim", "Joonkee Kim", "SeongHwan Kim", "Soyeon Kim", "Sunkyoung Kim", "Yireun Kim", "Yongil Kim", "Youchul Kim", "Edward Hwayoung Lee", "Gwangho Lee", "Haeju Lee", "Honglak Lee", "Jinsik Lee", "Kyungmin Lee", "Sangha Park", "Young Min Paik", "Yongmin Park", "Youngyong Park", "Sanghyun Seo", "Sihoon Yang", "Heuiyeen Yeen", "Sihyuk Yi", "Hyeongu Yun" ]
[]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/mera-code-a-unified-framework-for-evaluating
2507.12284
null
null
MERA Code: A Unified Framework for Evaluating Code Generation Across Tasks
Advancements in LLMs have enhanced task automation in software engineering; however, current evaluations primarily focus on natural language tasks, overlooking code quality. Most benchmarks prioritize high-level reasoning over executable code and real-world performance, leaving gaps in understanding true capabilities and risks associated with these models in production. To address this issue, we propose MERA Code, a new addition to the MERA benchmark family, specifically focused on evaluating code for the latest code generation LLMs in Russian. This benchmark includes 11 evaluation tasks that span 8 programming languages. Our proposed evaluation methodology features a taxonomy that outlines the practical coding skills necessary for models to complete these tasks. The benchmark comprises an open-source codebase for users to conduct MERA assessments, a scoring system compatible with various programming environments, and a platform featuring a leaderboard and submission system. We evaluate open LLMs and frontier API models, analyzing their limitations in terms of practical coding tasks in non-English languages. We are publicly releasing MERA to guide future research, anticipate groundbreaking features in model development, and standardize evaluation procedures.
null
https://arxiv.org/abs/2507.12284v2
https://arxiv.org/pdf/2507.12284v2.pdf
null
[ "Artem Chervyakov", "Alexander Kharitonov", "Pavel Zadorozhny", "Adamenko Pavel", "Rodion Levichev", "Dmitrii Vorobev", "Dmitrii Salikhov", "Aidar Valeev", "Alena Pestova", "Maria Dziuba", "Ilseyar Alimova", "Artem Zavgorodnev", "Aleksandr Medvedev", "Stanislav Moiseev", "Elena Bruches", "Daniil Grebenkin", "Roman Derunets", "Vikulov Vladimir", "Anton Emelyanov", "Dmitrii Babaev", "Vladimir V. Ivanov", "Valentin Malykh", "Alena Fenogenova" ]
[ "Code Generation" ]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/codejudgebench-benchmarking-llm-as-a-judge
2507.10535
null
null
CodeJudgeBench: Benchmarking LLM-as-a-Judge for Coding Tasks
Large Language Models (LLMs) have significantly advanced the state-of-the-art in various coding tasks. Beyond directly answering user queries, LLMs can also serve as judges, assessing and comparing the quality of responses generated by other models. Such an evaluation capability is crucial both for benchmarking different LLMs and for improving response quality through response ranking. However, despite the growing adoption of the LLM-as-a-Judge paradigm, its effectiveness in coding scenarios remains underexplored due to the absence of dedicated benchmarks. To address this gap, we introduce CodeJudgeBench, a benchmark explicitly designed to evaluate the performance of LLM-as-a-Judge models across three critical coding tasks: code generation, code repair, and unit test generation. Through comprehensive benchmarking of 26 LLM-as-a-Judge models, we find that recent thinking models significantly outperform non-thinking models on our carefully designed code judging tasks. Notably, even relatively small thinking models, such as Qwen3-8B, can outperform specially trained LLM-as-a-Judge models up to 70B in size. Nevertheless, all models still exhibit significant randomness in their judgment of coding tasks. For pairwise judging tasks, simply changing the order in which responses are presented can substantially impact accuracy. In addition, when judging code and unit tests written by different LLMs, LLM-as-a-Judge models also show variance in performance. This sensitivity raises concerns about the reliability and consistency of LLM-as-a-Judge in coding scenarios. Lastly, we study optimal prompting strategies for LLM-as-a-Judge. We find that using pair-wise comparison outperforms scalar point-wise judging. Furthermore, retaining comments and reasoning in the full, unprocessed LLM response leads to improved judge performance.
null
https://arxiv.org/abs/2507.10535v1
https://arxiv.org/pdf/2507.10535v1.pdf
null
[ "Hongchao Jiang", "Yiming Chen", "Yushi Cao", "Hung-Yi Lee", "Robby T. Tan" ]
[ "Benchmarking", "Code Generation", "Code Repair" ]
2025-07-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/opencodereasoning-ii-a-simple-test-time
2507.09075
null
null
OpenCodeReasoning-II: A Simple Test Time Scaling Approach via Self-Critique
Recent advancements in reasoning-based Large Language Models (LLMs), particularly their potential through test-time scaling, have created significant opportunities for distillation in code generation and critique. However, progress in both areas fundamentally depends on large-scale, high-quality datasets. In this work, we introduce OpenCodeReasoning-II, a dataset consisting of 2.5M question-solution-critique triples (approx. 35K unique programming questions), making it nearly twice the size of the previous largest publicly available code reasoning dataset. We then employ a two-stage supervised fine-tuning strategy. The first stage focuses on fine-tuning for code generation, while the second stage involves the joint training of models for both code generation and critique. Our resulting finetuned Qwen2.5-Instruct models achieve performance in code generation that either exceeds or equals the best prior open-weight distilled models. Notably, the integration of our code generation and critique models leads to significant improvements in competitive coding performance. Furthermore, we present an extension of the LiveCodeBench benchmark to specifically support the C++ programming language, thereby facilitating more comprehensive LLM evaluation using this benchmark.
null
https://arxiv.org/abs/2507.09075v1
https://arxiv.org/pdf/2507.09075v1.pdf
null
[ "Wasi Uddin Ahmad", "Somshubra Majumdar", "Aleksander Ficek", "Sean Narenthiran", "Mehrzad Samadi", "Jocelyn Huang", "Siddhartha Jain", "Vahid Noroozi", "Boris Ginsburg" ]
[ "Code Generation" ]
2025-07-11T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/codeassistbench-cab-dataset-benchmarking-for
2507.10646
null
null
CodeAssistBench (CAB): Dataset & Benchmarking for Multi-turn Chat-Based Code Assistance
Programming assistants powered by large language models have transformed software development, yet most benchmarks focus narrowly on code generation tasks. Recent efforts like InfiBench and StackEval attempt to address this gap using Stack Overflow data but remain limited to single-turn interactions in isolated contexts, require significant manual curation, and fail to represent complete project environments. We introduce CodeAssistBench (CAB), the first benchmark framework for evaluating multi-turn programming assistance in realistic settings that address real-world questions about actual codebases. Unlike existing programming Q&A benchmarks, CAB automatically generates scalable datasets from question-related GitHub issues using configurable parameters (e.g., repository creation date, star count, programming languages), and includes automatic containerization of codebases for evaluation. It then evaluates models through simulated users in these containerized environments with full codebase access. Using this framework, we constructed a test set of 3,286 real-world programming questions across 231 repositories, spanning seven programming languages and diverse problem domains. Our evaluation of leading LLMs reveals a substantial capability gap: while models perform well on Stack Overflow questions with success rates of 70-83%, they resolve only up to 16.49% of CAB's recent issues. This discrepancy highlights the challenges of providing assistance in complex, project-specific contexts versus answering standalone questions.
null
https://arxiv.org/abs/2507.10646v2
https://arxiv.org/pdf/2507.10646v2.pdf
null
[ "Myeongsoo Kim", "Shweta Garg", "Baishakhi Ray", "Varun Kumar", "Anoop Deoras" ]
[ "Benchmarking", "Code Generation" ]
2025-07-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/kevin-multi-turn-rl-for-generating-cuda
2507.11948
null
null
Kevin: Multi-Turn RL for Generating CUDA Kernels
Writing GPU kernels is a challenging task and critical for AI systems' efficiency. It is also highly iterative: domain experts write code and improve performance through execution feedback. Moreover, it presents verifiable rewards like correctness and speedup, making it a natural environment to apply Reinforcement Learning (RL). To explicitly incorporate the iterative nature of this process into training, we develop a flexible multi-turn RL recipe that addresses unique challenges encountered in real-world settings, such as learning from long trajectories and effective reward attribution across turns. We present Kevin - K(ernel D)evin, the first model trained with multi-turn RL for CUDA kernel generation and optimization. In our evaluation setup, Kevin shows significant gains over its base model (QwQ-32B), improving correctness of generated kernels (in pure CUDA) from 56% to 82% and mean speedup from 0.53x to 1.10x of baseline (PyTorch Eager), and surpassing frontier models like o4-mini (0.78x). Finally, we study its behavior across test-time scaling axes: we found scaling serial refinement more beneficial than parallel sampling. In particular, when given more refinement turns, Kevin shows a higher rate of improvement.
null
https://arxiv.org/abs/2507.11948v1
https://arxiv.org/pdf/2507.11948v1.pdf
null
[ "Carlo Baronio", "Pietro Marsella", "Ben Pan", "Simon Guo", "Silas Alberti" ]
[ "GPU", "Reinforcement Learning (RL)" ]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/multilingual-multimodal-software-developer
2507.08719
null
null
Multilingual Multimodal Software Developer for Code Generation
The rapid advancement of Large Language Models (LLMs) has significantly improved code generation, yet most models remain text-only, neglecting crucial visual aids like diagrams and flowcharts used in real-world software development. To bridge this gap, we introduce MM-Coder, a Multilingual Multimodal software developer. MM-Coder integrates visual design inputs-Unified Modeling Language (UML) diagrams and flowcharts (termed Visual Workflow)-with textual instructions to enhance code generation accuracy and architectural alignment. To enable this, we developed MMc-Instruct, a diverse multimodal instruction-tuning dataset including visual-workflow-based code generation, allowing MM-Coder to synthesize textual and graphical information like human developers, distinct from prior work on narrow tasks. Furthermore, we introduce MMEval, a new benchmark for evaluating multimodal code generation, addressing existing text-only limitations. Our evaluations using MMEval highlight significant remaining challenges for models in precise visual information capture, instruction following, and advanced programming knowledge. Our work aims to revolutionize industrial programming by enabling LLMs to interpret and implement complex specifications conveyed through both text and visual designs.
null
https://arxiv.org/abs/2507.08719v1
https://arxiv.org/pdf/2507.08719v1.pdf
null
[ "Linzheng Chai", "Jian Yang", "Shukai Liu", "Wei zhang", "Liran Wang", "Ke Jin", "Tao Sun", "Congnan Liu", "Chenchen Zhang", "Hualei Zhu", "Jiaheng Liu", "Xianjie Wu", "Ge Zhang", "Tianyu Liu", "Zhoujun Li" ]
[ "Code Generation", "Instruction Following" ]
2025-07-11T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/function-to-style-guidance-of-llms-for-code
2507.11083
null
null
Function-to-Style Guidance of LLMs for Code Translation
Large language models (LLMs) have made significant strides in code translation tasks. However, ensuring both the correctness and readability of translated code remains a challenge, limiting their effective adoption in real-world software development. In this work, we propose F2STrans, a function-to-style guiding paradigm designed to progressively improve the performance of LLMs in code translation. Our approach comprises two key stages: (1) Functional learning, which optimizes translation correctness using high-quality source-target code pairs mined from online programming platforms, and (2) Style learning, which improves translation readability by incorporating both positive and negative style examples. Additionally, we introduce a novel code translation benchmark that includes up-to-date source code, extensive test cases, and manually annotated ground-truth translations, enabling comprehensive functional and stylistic evaluations. Experiments on both our new benchmark and existing datasets demonstrate that our approach significantly improves code translation performance. Notably, our approach enables Qwen-1.5B to outperform prompt-enhanced Qwen-32B and GPT-4 on average across 20 diverse code translation scenarios.
null
https://arxiv.org/abs/2507.11083v1
https://arxiv.org/pdf/2507.11083v1.pdf
null
[ "Longhui Zhang", "Bin Wang", "Jiahao Wang", "Xiaofeng Zhao", "Hao Yang", "Meishan Zhang", "Yu Li", "Jing Li", "Jun Yu", "Min Zhang" ]
[ "Code Translation", "Translation" ]
2025-07-15T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/cuda-l1-improving-cuda-optimization-via
2507.14111
null
null
CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning
The exponential growth in demand for GPU computing resources, driven by the rapid advancement of Large Language Models, has created an urgent need for automated CUDA optimization strategies. While recent advances in LLMs show promise for code generation, current SOTA models (e.g. R1, o1) achieve low success rates in improving CUDA speed. In this paper, we introduce CUDA-L1, an automated reinforcement learning framework for CUDA optimization. CUDA-L1 achieves performance improvements on the CUDA optimization task: trained on NVIDIA A100, it delivers an average speedup of x17.7 across all 250 CUDA kernels of KernelBench, with peak speedups reaching x449. Furthermore, the model also demonstrates excellent portability across GPU architectures, achieving average speedups of x17.8 on H100, x19.0 on RTX 3090, x16.5 on L40, x14.7 on H800, and x13.9 on H20 despite being optimized specifically for A100. Beyond these benchmark results, CUDA-L1 demonstrates several remarkable properties: 1) Discovers a variety of CUDA optimization techniques and learns to combine them strategically to achieve optimal performance; 2) Uncovers fundamental principles of CUDA optimization; 3) Identifies non-obvious performance bottlenecks and rejects seemingly beneficial optimizations that harm performance. The capabilities of CUDA-L1 demonstrate that reinforcement learning can transform an initially poor-performing LLM into an effective CUDA optimizer through speedup-based reward signals alone, without human expertise or domain knowledge. More importantly, the trained RL model extends the acquired reasoning abilities to new kernels. This paradigm opens possibilities for automated optimization of CUDA operations, and holds promise to substantially improve GPU efficiency and alleviate the rising pressure on GPU computing resources.
null
https://arxiv.org/abs/2507.14111v2
https://arxiv.org/pdf/2507.14111v2.pdf
null
[ "Xiaoya Li", "Xiaofei Sun", "Albert Wang", "Jiwei Li", "Chris Shum" ]
[ "Code Generation", "GPU", "reinforcement-learning", "Reinforcement Learning" ]
2025-07-18T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/prompting-for-performance-exploring-llms-for
2507.09790
null
null
Prompting for Performance: Exploring LLMs for Configuring Software
Software systems usually provide numerous configuration options that can affect performance metrics such as execution time, memory usage, binary size, or bitrate. On the one hand, making informed decisions is challenging and requires domain expertise in options and their combinations. On the other hand, machine learning techniques can search vast configuration spaces, but with a high computational cost, since concrete executions of numerous configurations are required. In this exploratory study, we investigate whether large language models (LLMs) can assist in performance-oriented software configuration through prompts. We evaluate several LLMs on tasks including identifying relevant options, ranking configurations, and recommending performant configurations across various configurable systems, such as compilers, video encoders, and SAT solvers. Our preliminary results reveal both positive abilities and notable limitations: depending on the task and systems, LLMs can well align with expert knowledge, whereas hallucinations or superficial reasoning can emerge in other cases. These findings represent a first step toward systematic evaluations and the design of LLM-based solutions to assist with software configuration.
null
https://arxiv.org/abs/2507.09790v1
https://arxiv.org/pdf/2507.09790v1.pdf
null
[ "Helge Spieker", "Théo Matricon", "Nassim Belmecheri", "Jørn Eirik Betten", "Gauthier Le Bartz Lyan", "Heraldo Borges", "Quentin Mazouni", "Dennis Gross", "Arnaud Gotlieb", "Mathieu Acher" ]
[]
2025-07-13T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/texttt-droid-a-resource-suite-for-ai
2507.10583
null
null
Droid: A Resource Suite for AI-Generated Code Detection
In this work, we compile DroidCollection, the most extensive open data suite for training and evaluating machine-generated code detectors, comprising over a million code samples, seven programming languages, outputs from 43 coding models, and over three real-world coding domains. Alongside fully AI-generated samples, our collection includes human-AI co-authored code, as well as adversarial samples explicitly crafted to evade detection. Subsequently, we develop DroidDetect, a suite of encoder-only detectors trained using a multi-task objective over DroidCollection. Our experiments show that existing detectors' performance fails to generalise to diverse coding domains and programming languages outside of their narrow training data. Additionally, we demonstrate that while most detectors are easily compromised by humanising the output distributions using superficial prompting and alignment approaches, this problem can be easily amended by training on a small amount of adversarial data. Finally, we demonstrate the effectiveness of metric learning and uncertainty-based resampling as means to enhance detector training on possibly noisy distributions.
null
https://arxiv.org/abs/2507.10583v1
https://arxiv.org/pdf/2507.10583v1.pdf
null
[ "Daniil Orel", "Indraneil Paul", "Iryna Gurevych", "Preslav Nakov" ]
[ "Metric Learning" ]
2025-07-11T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/turning-the-tide-repository-based-code
2507.09866
null
null
Turning the Tide: Repository-based Code Reflection
Code large language models (LLMs) enhance programming by understanding and generating code across languages, offering intelligent feedback, bug detection, and code updates through reflection, improving development efficiency and accessibility. While benchmarks (e.g. HumanEval/LiveCodeBench) evaluate code generation and real-world relevance, previous works ignore the scenario of modifying code in repositories. Considering challenges remaining in improving reflection capabilities and avoiding data contamination in dynamic benchmarks, we introduce LiveRepoReflection, a challenging benchmark for evaluating code understanding and generation in multi-file repository contexts, featuring 1,888 rigorously filtered test cases across 6 programming languages to ensure diversity, correctness, and high difficulty. Further, we create RepoReflection-Instruct, a large-scale, quality-filtered instruction-tuning dataset derived from diverse sources, used to train RepoReflectionCoder through a two-turn dialogue process involving code generation and error-driven repair. The leaderboard evaluates over 40 LLMs to reflect the model performance of repository-based code reflection.
null
https://arxiv.org/abs/2507.09866v1
https://arxiv.org/pdf/2507.09866v1.pdf
null
[ "Wei zhang", "Jian Yang", "Jiaxi Yang", "Ya Wang", "Zhoujun Li", "Zeyu Cui", "Binyuan Hui", "Junyang Lin" ]
[ "Code Generation", "Diversity", "HumanEval" ]
2025-07-14T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/information-theoretic-generalization-bounds-9
2507.12043
null
null
Information-Theoretic Generalization Bounds of Replay-based Continual Learning
Continual learning (CL) has emerged as a dominant paradigm for acquiring knowledge from sequential tasks while avoiding catastrophic forgetting. Although many CL methods have been proposed to show impressive empirical performance, the theoretical understanding of their generalization behavior remains limited, particularly for replay-based approaches. In this paper, we establish a unified theoretical framework for replay-based CL, deriving a series of information-theoretic bounds that explicitly characterize how the memory buffer interacts with the current task to affect generalization. Specifically, our hypothesis-based bounds reveal that utilizing the limited exemplars of previous tasks alongside the current task data, rather than exhaustive replay, facilitates improved generalization while effectively mitigating catastrophic forgetting. Furthermore, our prediction-based bounds yield tighter and computationally tractable upper bounds of the generalization gap through the use of low-dimensional variables. Our analysis is general and broadly applicable to a wide range of learning algorithms, exemplified by stochastic gradient Langevin dynamics (SGLD) as a representative method. Comprehensive experimental evaluations demonstrate the effectiveness of our derived bounds in capturing the generalization dynamics in replay-based CL settings.
null
https://arxiv.org/abs/2507.12043v1
https://arxiv.org/pdf/2507.12043v1.pdf
null
[ "Wen Wen", "Tieliang Gong", "Yunjiao Zhang", "Zeyu Gao", "Weizhan Zhang", "Yong-Jin Liu" ]
[ "Continual Learning", "Generalization Bounds" ]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/prol-rehearsal-free-continual-learning-in
2507.12305
null
null
PROL : Rehearsal Free Continual Learning in Streaming Data via Prompt Online Learning
The data privacy constraint in online continual learning (OCL), where the data can be seen only once, complicates the catastrophic forgetting problem in streaming data. A common approach applied by the current SOTAs in OCL is to use memory to save exemplars or features from previous classes to be replayed in the current task. On the other hand, the prompt-based approach performs excellently in continual learning but with the cost of a growing number of trainable parameters. The first approach may not be applicable in practice due to data openness policy, while the second approach has the issue of throughput associated with the streaming data. In this study, we propose a novel prompt-based method for online continual learning that includes 4 main components: (1) a single light-weight prompt generator as general knowledge, (2) a trainable scaler-and-shifter as specific knowledge, (3) pre-trained model (PTM) generalization preserving, and (4) a hard-soft updates mechanism. Our proposed method achieves significantly higher performance than the current SOTAs on the CIFAR100, ImageNet-R, ImageNet-A, and CUB datasets. Our complexity analysis shows that our method requires a relatively smaller number of parameters and achieves moderate training time, inference time, and throughput. For further study, the source code of our method is available at https://github.com/anwarmaxsum/PROL.
null
https://arxiv.org/abs/2507.12305v1
https://arxiv.org/pdf/2507.12305v1.pdf
null
[ "M. Anwar Ma'sum", "Mahardhika Pratama", "Savitha Ramasamy", "Lin Liu", "Habibullah Habibullah", "Ryszard Kowalczyk" ]
[ "Continual Learning", "General Knowledge" ]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/task-specific-generative-dataset-distillation
2507.03331
null
null
Task-Specific Generative Dataset Distillation with Difficulty-Guided Sampling
To alleviate the reliance of deep neural networks on large-scale datasets, dataset distillation aims to generate compact, high-quality synthetic datasets that can achieve comparable performance to the original dataset. The integration of generative models has significantly advanced this field. However, existing approaches primarily focus on aligning the distilled dataset with the original one, often overlooking task-specific information that can be critical for optimal downstream performance. In this paper, focusing on the downstream task of classification, we propose a task-specific sampling strategy for generative dataset distillation that incorporates the concept of difficulty to consider the requirements of the target task better. The final dataset is sampled from a larger image pool with a sampling distribution obtained by matching the difficulty distribution of the original dataset. A logarithmic transformation is applied as a pre-processing step to correct for distributional bias. The results of extensive experiments demonstrate the effectiveness of our method and suggest its potential for enhancing performance on other downstream tasks. The code is available at https://github.com/SumomoTaku/DiffGuideSamp.
null
https://arxiv.org/abs/2507.03331v2
https://arxiv.org/pdf/2507.03331v2.pdf
null
[ "Mingzhuo Li", "Guang Li", "Jiafeng Mao", "Linfeng Ye", "Takahiro Ogawa", "Miki Haseyama" ]
[ "Dataset Distillation" ]
2025-07-04T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/dmn-guided-prompting-a-low-code-framework-for
2505.11701
null
null
DMN-Guided Prompting: A Low-Code Framework for Controlling LLM Behavior
Large Language Models (LLMs) have shown considerable potential in automating decision logic within knowledge-intensive processes. However, their effectiveness largely depends on the strategy and quality of prompting. Since decision logic is typically embedded in prompts, it becomes challenging for end users to modify or refine it. Decision Model and Notation (DMN) offers a standardized graphical approach for defining decision logic in a structured, user-friendly manner. This paper introduces a DMN-guided prompting framework that breaks down complex decision logic into smaller, manageable components, guiding LLMs through structured decision pathways. We implemented the framework in a graduate-level course where students submitted assignments. The assignments and DMN models representing feedback instructions served as inputs to our framework. The instructor evaluated the generated feedback and labeled it for performance assessment. Our approach demonstrated promising results, outperforming chain-of-thought (CoT) prompting. Students also responded positively to the generated feedback, reporting high levels of perceived usefulness in a survey based on the Technology Acceptance Model.
null
https://arxiv.org/abs/2505.11701v1
https://arxiv.org/pdf/2505.11701v1.pdf
null
[ "Shaghayegh Abedi", "Amin Jalali" ]
[]
2025-05-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/automating-steering-for-safe-multimodal-large
2507.13255
null
null
Automating Steering for Safe Multimodal Large Language Models
Recent progress in Multimodal Large Language Models (MLLMs) has unlocked powerful cross-modal reasoning abilities, but also raised new safety concerns, particularly when faced with adversarial multimodal inputs. To improve the safety of MLLMs during inference, we introduce a modular and adaptive inference-time intervention technology, AutoSteer, without requiring any fine-tuning of the underlying model. AutoSteer incorporates three core components: (1) a novel Safety Awareness Score (SAS) that automatically identifies the most safety-relevant distinctions among the model's internal layers; (2) an adaptive safety prober trained to estimate the likelihood of toxic outputs from intermediate representations; and (3) a lightweight Refusal Head that selectively intervenes to modulate generation when safety risks are detected. Experiments on LLaVA-OV and Chameleon across diverse safety-critical benchmarks demonstrate that AutoSteer significantly reduces the Attack Success Rate (ASR) for textual, visual, and cross-modal threats, while maintaining general abilities. These findings position AutoSteer as a practical, interpretable, and effective framework for safer deployment of multimodal AI systems.
null
https://arxiv.org/abs/2507.13255v1
https://arxiv.org/pdf/2507.13255v1.pdf
null
[ "Lyucheng Wu", "Mengru Wang", "Ziwen Xu", "Tri Cao", "Nay Oo", "Bryan Hooi", "Shumin Deng" ]
[]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/shield-a-secure-and-highly-enhanced
2507.13170
null
null
SHIELD: A Secure and Highly Enhanced Integrated Learning for Robust Deepfake Detection against Adversarial Attacks
Audio plays a crucial role in applications like speaker verification, voice-enabled smart devices, and audio conferencing. However, audio manipulations, such as deepfakes, pose significant risks by enabling the spread of misinformation. Our empirical analysis reveals that existing methods for detecting deepfake audio are often vulnerable to anti-forensic (AF) attacks, particularly those generated using generative adversarial networks. In this article, we propose a novel collaborative learning method called SHIELD to defend against generative AF attacks. To expose AF signatures, we integrate an auxiliary generative model, called the defense (DF) generative model, which facilitates collaborative learning by combining input and output. Furthermore, we design a triplet model to capture correlations for real and AF attacked audios with real-generated and attacked-generated audios using auxiliary generative models. The proposed AF attacks significantly reduce the average detection accuracy from 95.49% to 59.77% for ASVspoof2019, from 99.44% to 38.45% for In-the-Wild, and from 98.41% to 51.18% for HalfTruth for three different generative models. The proposed SHIELD mechanism strengthens the defense against generative AF attacks, is robust against AF attacks, and achieves an average accuracy of 98.13%, 98.58%, and 99.57% in match, and 98.78%, 98.62%, and 98.85% in mismatch settings for the ASVspoof2019, In-the-Wild, and HalfTruth datasets, respectively.
null
https://arxiv.org/abs/2507.13170v1
https://arxiv.org/pdf/2507.13170v1.pdf
null
[ "Kutub Uddin", "Awais Khan", "Muhammad Umar Farooq", "Khalid Malik" ]
[ "DeepFake Detection", "Face Swapping", "Misinformation", "Speaker Verification", "Triplet" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/adversarial-attacks-to-image-classification
2507.13136
null
null
Adversarial attacks to image classification systems using evolutionary algorithms
Image classification currently faces significant security challenges due to adversarial attacks, which consist of intentional alterations designed to deceive classification models based on artificial intelligence. This article explores an approach to generate adversarial attacks against image classifiers using a combination of evolutionary algorithms and generative adversarial networks. The proposed approach explores the latent space of a generative adversarial network with an evolutionary algorithm to find vectors representing adversarial attacks. The approach was evaluated in two case studies corresponding to the classification of handwritten digits and object images. The results showed success rates of up to 35% for handwritten digits, and up to 75% for object images, improving over other search methods and reported results in related works. The applied method proved to be effective in handling data diversity on the target datasets, even in problem instances that presented additional challenges due to the complexity and richness of information.
null
https://arxiv.org/abs/2507.13136v1
https://arxiv.org/pdf/2507.13136v1.pdf
null
[ "Sergio Nesmachnow", "Jamal Toutouh" ]
[ "Classification", "Diversity", "Evolutionary Algorithms", "Generative Adversarial Network", "image-classification", "Image Classification" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/a-bayesian-incentive-mechanism-for-poison
2507.12439
null
null
A Bayesian Incentive Mechanism for Poison-Resilient Federated Learning
Federated learning (FL) enables collaborative model training across decentralized clients while preserving data privacy. However, its open-participation nature exposes it to data-poisoning attacks, in which malicious actors submit corrupted model updates to degrade the global model. Existing defenses are often reactive, relying on statistical aggregation rules that can be computationally expensive and that typically assume an honest majority. This paper introduces a proactive, economic defense: a lightweight Bayesian incentive mechanism that makes malicious behavior economically irrational. Each training round is modeled as a Bayesian game of incomplete information in which the server, acting as the principal, uses a small, private validation dataset to verify update quality before issuing payments. The design satisfies Individual Rationality (IR) for benevolent clients, ensuring their participation is profitable, and Incentive Compatibility (IC), making poisoning an economically dominated strategy. Extensive experiments on non-IID partitions of MNIST and FashionMNIST demonstrate robustness: with 50% label-flipping adversaries on MNIST, the mechanism maintains 96.7% accuracy, only 0.3 percentage points lower than in a scenario with 30% label-flipping adversaries. This outcome is 51.7 percentage points better than standard FedAvg, which collapses under the same 50% attack. The mechanism is computationally light, budget-bounded, and readily integrates into existing FL frameworks, offering a practical route to economically robust and sustainable FL ecosystems.
null
https://arxiv.org/abs/2507.12439v1
https://arxiv.org/pdf/2507.12439v1.pdf
null
[ "Daniel Commey", "Rebecca A. Sarpong", "Griffith S. Klogo", "Winful Bagyl-Bac", "Garth V. Crosby" ]
[ "Data Poisoning", "Federated Learning" ]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/trustworthy-tree-based-machine-learning-by
2507.12384
null
null
Trustworthy Tree-based Machine Learning by $MoS_2$ Flash-based Analog CAM with Inherent Soft Boundaries
The rapid advancement of artificial intelligence has raised concerns regarding its trustworthiness, especially in terms of interpretability and robustness. Tree-based models like Random Forest and XGBoost excel in interpretability and accuracy for tabular data, but scaling them remains computationally expensive due to poor data locality and high data dependence. Previous efforts to accelerate these models with analog content addressable memory (CAM) have struggled, due to the fact that the difficult-to-implement sharp decision boundaries are highly susceptible to device variations, which leads to poor hardware performance and vulnerability to adversarial attacks. This work presents a novel hardware-software co-design approach using $MoS_2$ Flash-based analog CAM with inherent soft boundaries, enabling efficient inference with soft tree-based models. Our soft tree model inference experiments on $MoS_2$ analog CAM arrays show this method achieves exceptional robustness against device variation and adversarial attacks while achieving state-of-the-art accuracy. Specifically, our fabricated analog CAM arrays achieve $96\%$ accuracy on Wisconsin Diagnostic Breast Cancer (WDBC) database, while maintaining decision explainability. Our experimentally calibrated model validated only a $0.6\%$ accuracy drop on the MNIST dataset under $10\%$ device threshold variation, compared to a $45.3\%$ drop for traditional decision trees. This work paves the way for specialized hardware that enhances AI's trustworthiness and efficiency.
null
https://arxiv.org/abs/2507.12384v1
https://arxiv.org/pdf/2507.12384v1.pdf
null
[ "Bo Wen", "Guoyun Gao", "Zhicheng Xu", "Ruibin Mao", "Xiaojuan Qi", "X. Sharon Hu", "Xunzhao Yin", "Can Li" ]
[ "Diagnostic" ]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/thought-purity-defense-paradigm-for-chain-of
2507.12314
null
null
Thought Purity: Defense Paradigm For Chain-of-Thought Attack
While reinforcement learning-trained Large Reasoning Models (LRMs, e.g., Deepseek-R1) demonstrate advanced reasoning capabilities in the evolving Large Language Models (LLMs) domain, their susceptibility to security threats remains a critical vulnerability. This weakness is particularly evident in Chain-of-Thought (CoT) generation processes, where adversarial methods like backdoor prompt attacks can systematically subvert the model's core reasoning mechanisms. The emerging Chain-of-Thought Attack (CoTA) reveals this vulnerability through exploiting prompt controllability, simultaneously degrading both CoT safety and task performance with low-cost interventions. To address this compounded security-performance vulnerability, we propose Thought Purity (TP): a defense paradigm that systematically strengthens resistance to malicious content while preserving operational efficacy. Our solution achieves this through three synergistic components: (1) a safety-optimized data processing pipeline, (2) reinforcement learning-enhanced rule constraints, and (3) adaptive monitoring metrics. Our approach establishes the first comprehensive defense mechanism against CoTA vulnerabilities in reinforcement learning-aligned reasoning systems, significantly advancing the security-functionality equilibrium for next-generation AI architectures.
null
https://arxiv.org/abs/2507.12314v1
https://arxiv.org/pdf/2507.12314v1.pdf
null
[ "Zihao Xue", "Zhen Bi", "Long Ma", "Zhenlin Hu", "Yan Wang", "Zhenfang Liu", "Qing Sheng", "Jie Xiao", "Jungang Lou" ]
[ "reinforcement-learning", "Reinforcement Learning" ]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/non-adaptive-adversarial-face-generation
2507.12107
null
null
Non-Adaptive Adversarial Face Generation
Adversarial attacks on face recognition systems (FRSs) pose serious security and privacy threats, especially when these systems are used for identity verification. In this paper, we propose a novel method for generating adversarial faces: synthetic facial images that are visually distinct yet recognized as a target identity by the FRS. Unlike iterative optimization-based approaches (e.g., gradient descent or other iterative solvers), our method leverages the structural characteristics of the FRS feature space. We find that individuals sharing the same attribute (e.g., gender or race) form an attributed subsphere. By utilizing such subspheres, our method achieves both non-adaptiveness and a remarkably small number of queries. This eliminates the need for relying on transferability and open-source surrogate models, which have been a typical strategy when repeated adaptive queries to commercial FRSs are impossible. Despite requiring only a single non-adaptive query consisting of 100 face images, our method achieves a high success rate of over 93% against AWS's CompareFaces API at its default threshold. Furthermore, unlike many existing attacks that perturb a given image, our method can deliberately produce adversarial faces that impersonate the target identity while exhibiting high-level attributes chosen by the adversary.
null
https://arxiv.org/abs/2507.12107v1
https://arxiv.org/pdf/2507.12107v1.pdf
null
[ "Sunpill Kim", "Seunghun Paik", "Chanwoo Hwang", "Minsu Kim", "Jae Hong Seo" ]
[ "Attribute", "Face Generation", "Face Recognition" ]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/distributed-resilient-state-estimation-and
2507.12052
null
null
Distributed Resilient State Estimation and Control with Strategically Implemented Security Measures
This paper addresses the problem of distributed resilient state estimation and control for linear time-invariant systems in the presence of malicious false data injection sensor attacks and bounded noise. We consider a system operator (defender) capable of deploying cybersecurity measures to counteract the sensor compromises. Although such measures enhance resilience against adversarial attacks, they may incur substantial costs; hence, it is crucial to select countermeasures to balance resilience gains and cost efficiency strategically. We first demonstrate that the system's resilience against attacks is maximized through the appropriate implementation of security measures, implying that no attacker can execute undetectable sensor attacks. Building on this analysis, we propose an algorithm that identifies the optimal security measure. While determining this measure is NP-hard in general, we also derive sufficient conditions under which efficient computation is feasible. Furthermore, we develop a distributed resilient state estimation and control scheme informed by the optimal security measure and establish conditions that guarantee bounded estimation and control errors. Finally, we validate the efficacy of our approach via numerical simulations of a vehicle platooning scenario.
null
https://arxiv.org/abs/2507.12052v1
https://arxiv.org/pdf/2507.12052v1.pdf
null
[ "Takumi Shinohara", "Karl H. Johansson", "Henrik Sandberg" ]
[ "State Estimation" ]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/llms-encode-harmfulness-and-refusal
2507.11878
null
null
LLMs Encode Harmfulness and Refusal Separately
LLMs are trained to refuse harmful instructions, but do they truly understand harmfulness beyond just refusing? Prior work has shown that LLMs' refusal behaviors can be mediated by a one-dimensional subspace, i.e., a refusal direction. In this work, we identify a new dimension to analyze safety mechanisms in LLMs, i.e., harmfulness, which is encoded internally as a separate concept from refusal. There exists a harmfulness direction that is distinct from the refusal direction. As causal evidence, steering along the harmfulness direction can lead LLMs to interpret harmless instructions as harmful, but steering along the refusal direction tends to elicit refusal responses directly without reversing the model's judgment on harmfulness. Furthermore, using our identified harmfulness concept, we find that certain jailbreak methods work by reducing the refusal signals without reversing the model's internal belief of harmfulness. We also find that adversarially finetuning models to accept harmful instructions has minimal impact on the model's internal belief of harmfulness. These insights lead to a practical safety application: The model's latent harmfulness representation can serve as an intrinsic safeguard (Latent Guard) for detecting unsafe inputs and reducing over-refusals that is robust to finetuning attacks. For instance, our Latent Guard achieves performance comparable to or better than Llama Guard 3 8B, a dedicated finetuned safeguard model, across different jailbreak methods. Our findings suggest that LLMs' internal understanding of harmfulness is more robust than their refusal decision to diverse input instructions, offering a new perspective to study AI safety
null
https://arxiv.org/abs/2507.11878v1
https://arxiv.org/pdf/2507.11878v1.pdf
null
[ "Jiachen Zhao", "Jing Huang", "Zhengxuan Wu", "David Bau", "Weiyan Shi" ]
[]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/p-3-scalable-permutation-equivariant-visual
2507.13347
null
null
$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning
We introduce $\pi^3$, a feed-forward neural network that offers a novel approach to visual geometry reconstruction, breaking the reliance on a conventional fixed reference view. Previous methods often anchor their reconstructions to a designated viewpoint, an inductive bias that can lead to instability and failures if the reference is suboptimal. In contrast, $\pi^3$ employs a fully permutation-equivariant architecture to predict affine-invariant camera poses and scale-invariant local point maps without any reference frames. This design makes our model inherently robust to input ordering and highly scalable. These advantages enable our simple and bias-free approach to achieve state-of-the-art performance on a wide range of tasks, including camera pose estimation, monocular/video depth estimation, and dense point map reconstruction. Code and models are publicly available.
null
https://arxiv.org/abs/2507.13347v1
https://arxiv.org/pdf/2507.13347v1.pdf
null
[ "Yifan Wang", "Jianjun Zhou", "Haoyi Zhu", "Wenzheng Chang", "Yang Zhou", "Zizun Li", "Junyi Chen", "Jiangmiao Pang", "Chunhua Shen", "Tong He" ]
[ "Camera Pose Estimation", "Depth Estimation", "Inductive Bias", "Pose Estimation" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/spectralift-physics-guided-spectral-inversion
2507.13339
null
null
SpectraLift: Physics-Guided Spectral-Inversion Network for Self-Supervised Hyperspectral Image Super-Resolution
High-spatial-resolution hyperspectral images (HSI) are essential for applications such as remote sensing and medical imaging, yet HSI sensors inherently trade spatial detail for spectral richness. Fusing high-spatial-resolution multispectral images (HR-MSI) with low-spatial-resolution hyperspectral images (LR-HSI) is a promising route to recover fine spatial structures without sacrificing spectral fidelity. Most state-of-the-art methods for HSI-MSI fusion demand point spread function (PSF) calibration or ground truth high resolution HSI (HR-HSI), both of which are impractical to obtain in real world settings. We present SpectraLift, a fully self-supervised framework that fuses LR-HSI and HR-MSI inputs using only the MSI's Spectral Response Function (SRF). SpectraLift trains a lightweight per-pixel multi-layer perceptron (MLP) network using (i) a synthetic low-spatial-resolution multispectral image (LR-MSI) obtained by applying the SRF to the LR-HSI as input, (ii) the LR-HSI as the output, and (iii) an $\ell_1$ spectral reconstruction loss between the estimated and true LR-HSI as the optimization objective. At inference, SpectraLift uses the trained network to map the HR-MSI pixel-wise into an HR-HSI estimate. SpectraLift converges in minutes, is agnostic to spatial blur and resolution, and outperforms state-of-the-art methods on PSNR, SAM, SSIM, and RMSE benchmarks.
null
https://arxiv.org/abs/2507.13339v1
https://arxiv.org/pdf/2507.13339v1.pdf
null
[ "Ritik Shah", "Marco F. Duarte" ]
[ "Hyperspectral Image Super-Resolution", "Image Super-Resolution", "Spectral Reconstruction", "SSIM", "Super-Resolution" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/training-transformers-with-enforced-lipschitz
2507.13338
null
null
Training Transformers with Enforced Lipschitz Constants
Neural networks are often highly sensitive to input and weight perturbations. This sensitivity has been linked to pathologies such as vulnerability to adversarial examples, divergent training, and overfitting. To combat these problems, past research has looked at building neural networks entirely from Lipschitz components. However, these techniques have not matured to the point where researchers have trained a modern architecture such as a transformer with a Lipschitz certificate enforced beyond initialization. To explore this gap, we begin by developing and benchmarking novel, computationally-efficient tools for maintaining norm-constrained weight matrices. Applying these tools, we are able to train transformer models with Lipschitz bounds enforced throughout training. We find that optimizer dynamics matter: switching from AdamW to Muon improves standard methods -- weight decay and spectral normalization -- allowing models to reach equal performance with a lower Lipschitz bound. Inspired by Muon's update having a fixed spectral norm, we co-design a weight constraint method that improves the Lipschitz vs. performance tradeoff on MLPs and 2M parameter transformers. Our 2-Lipschitz transformer on Shakespeare text reaches validation accuracy 60%. Scaling to 145M parameters, our 10-Lipschitz transformer reaches 21% accuracy on internet text. However, to match the NanoGPT baseline validation accuracy of 39.4%, our Lipschitz upper bound increases to 10^264. Nonetheless, our Lipschitz transformers train without stability measures such as layer norm, QK norm, and logit tanh softcapping.
null
https://arxiv.org/abs/2507.13338v1
https://arxiv.org/pdf/2507.13338v1.pdf
null
[ "Laker Newhouse", "R. Preston Hess", "Franz Cesista", "Andrii Zahorodnii", "Jeremy Bernstein", "Phillip Isola" ]
[ "Benchmarking" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/efficiently-constructing-sparse-navigable
2507.13296
null
null
Efficiently Constructing Sparse Navigable Graphs
Graph-based nearest neighbor search methods have seen a surge of popularity in recent years, offering state-of-the-art performance across a wide variety of applications. Central to these methods is the task of constructing a sparse navigable search graph for a given dataset endowed with a distance function. Unfortunately, doing so is computationally expensive, so heuristics are universally used in practice. In this work, we initiate the study of fast algorithms with provable guarantees for search graph construction. For a dataset with $n$ data points, the problem of constructing an optimally sparse navigable graph can be framed as $n$ separate but highly correlated minimum set cover instances. This yields a naive $O(n^3)$ time greedy algorithm that returns a navigable graph whose sparsity is at most $O(\log n)$ higher than optimal. We improve significantly on this baseline, taking advantage of correlation between the set cover instances to leverage techniques from streaming and sublinear-time set cover algorithms. Combined with problem-specific pre-processing techniques, we present an $\tilde{O}(n^2)$ time algorithm for constructing an $O(\log n)$-approximate sparsest navigable graph under any distance function. The runtime of our method is optimal up to logarithmic factors under the Strong Exponential Time Hypothesis via a reduction from Monochromatic Closest Pair. Moreover, we prove that, as with general set cover, obtaining better than an $O(\log n)$-approximation is NP-hard, despite the significant additional structure present in the navigable graph problem. Finally, we show that our techniques can also beat cubic time for the closely related and practically important problems of constructing $\alpha$-shortcut reachable and $\tau$-monotonic graphs, which are also used for nearest neighbor search. For such graphs, we obtain $\tilde{O}(n^{2.5})$ time or better algorithms.
null
https://arxiv.org/abs/2507.13296v1
https://arxiv.org/pdf/2507.13296v1.pdf
null
[ "Alex Conway", "Laxman Dhulipala", "Martin Farach-Colton", "Rob Johnson", "Ben Landrum", "Christopher Musco", "Yarin Shechter", "Torsten Suel", "Richard Wen" ]
[ "graph construction" ]
2025-07-17T00:00:00
null
null
null
null
[]
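The $O(n^3)$ greedy baseline referenced in the abstract above treats each node's out-neighborhood as a separate minimum set cover instance. A toy sketch of that baseline follows, assuming Euclidean distance and the standard navigability condition (for every source and target, some out-neighbor of the source is strictly closer to the target); the paper's faster $\tilde{O}(n^2)$ algorithm is not reproduced here.

```python
import numpy as np

def greedy_navigable_graph(points):
    """Naive greedy construction of a navigable graph (cubic-time baseline).

    For every source i, each other node q must be covered by some
    out-neighbor r of i with dist(r, q) < dist(i, q), so that greedy
    search from i can always make progress toward q. Each node's
    neighbor list is chosen by greedy set cover over these constraints.
    """
    n = len(points)
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    graph = {i: [] for i in range(n)}
    for i in range(n):
        uncovered = set(q for q in range(n) if q != i)
        candidates = set(uncovered)
        while uncovered:
            # pick the candidate neighbor covering the most uncovered targets
            best_r, best_cov = None, set()
            for r in candidates:
                cov = {q for q in uncovered if D[r, q] < D[i, q]}
                if len(cov) > len(best_cov):
                    best_r, best_cov = r, cov
            graph[i].append(best_r)
            candidates.discard(best_r)
            uncovered -= best_cov  # note: best_r always covers itself
    return graph

pts = np.random.default_rng(0).standard_normal((50, 3))
g = greedy_navigable_graph(pts)
print(sum(len(v) for v in g.values()) / len(pts), "average out-degree")
```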
https://paperswithcode.com/paper/the-carbon-cost-of-materials-discovery-can
2507.13246
null
null
The carbon cost of materials discovery: Can machine learning really accelerate the discovery of new photovoltaics?
Computational screening has become a powerful complement to experimental efforts in the discovery of high-performance photovoltaic (PV) materials. Most workflows rely on density functional theory (DFT) to estimate electronic and optical properties relevant to solar energy conversion. Although more efficient than laboratory-based methods, DFT calculations still entail substantial computational and environmental costs. Machine learning (ML) models have recently gained attention as surrogates for DFT, offering drastic reductions in resource use with competitive predictive performance. In this study, we reproduce a canonical DFT-based workflow to estimate the maximum efficiency limit and progressively replace its components with ML surrogates. By quantifying the CO$_2$ emissions associated with each computational strategy, we evaluate the trade-offs between predictive efficacy and environmental cost. Our results reveal multiple hybrid ML/DFT strategies that optimize different points along the accuracy--emissions front. We find that direct prediction of scalar quantities, such as maximum efficiency, is significantly more tractable than using predicted absorption spectra as an intermediate step. Interestingly, ML models trained on DFT data can outperform DFT workflows using alternative exchange--correlation functionals in screening applications, highlighting the consistency and utility of data-driven approaches. We also assess strategies to improve ML-driven screening through expanded datasets and improved model architectures tailored to PV-relevant features. This work provides a quantitative framework for building low-emission, high-throughput discovery pipelines.
null
https://arxiv.org/abs/2507.13246v1
https://arxiv.org/pdf/2507.13246v1.pdf
null
[ "Matthew Walker", "Keith T. Butler" ]
[]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/fastwdm3d-fast-and-accurate-3d-healthy-tissue
2507.13146
null
null
fastWDM3D: Fast and Accurate 3D Healthy Tissue Inpainting
Healthy tissue inpainting has significant applications, including the generation of pseudo-healthy baselines for tumor growth models and the facilitation of image registration. In previous editions of the BraTS Local Synthesis of Healthy Brain Tissue via Inpainting Challenge, denoising diffusion probabilistic models (DDPMs) demonstrated qualitatively convincing results but suffered from low sampling speed. To mitigate this limitation, we adapted a 2D image generation approach, combining DDPMs with generative adversarial networks (GANs) and employing a variance-preserving noise schedule, for the task of 3D inpainting. Our experiments showed that the variance-preserving noise schedule and the selected reconstruction losses can be effectively utilized for high-quality 3D inpainting in a few time steps without requiring adversarial training. We applied our findings to a different architecture, a 3D wavelet diffusion model (WDM3D) that does not include a GAN component. The resulting model, denoted as fastWDM3D, obtained a SSIM of 0.8571, a MSE of 0.0079, and a PSNR of 22.26 on the BraTS inpainting test set. Remarkably, it achieved these scores using only two time steps, completing the 3D inpainting process in 1.81 s per image. When compared to other DDPMs used for healthy brain tissue inpainting, our model is up to 800 x faster while still achieving superior performance metrics. Our proposed method, fastWDM3D, represents a promising approach for fast and accurate healthy tissue inpainting. Our code is available at https://github.com/AliciaDurrer/fastWDM3D.
null
https://arxiv.org/abs/2507.13146v1
https://arxiv.org/pdf/2507.13146v1.pdf
null
[ "Alicia Durrer", "Florentin Bieder", "Paul Friedrich", "Bjoern Menze", "Philippe C. Cattin", "Florian Kofler" ]
[ "3D Inpainting", "Denoising", "Image Generation", "Image Registration", "SSIM" ]
2025-07-17T00:00:00
null
null
null
null
[]
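The variance-preserving noise schedule highlighted in the fastWDM3D abstract can be illustrated with a few lines of forward noising at very few timesteps. The cosine-style schedule and the toy 3D volume below are assumptions for illustration only, not the released fastWDM3D code linked in the abstract.

```python
import torch

def vp_alpha_bar(t):
    """Variance-preserving (cosine-style) cumulative signal level alpha_bar(t),
    t in [0, 1]; since alpha_bar + (1 - alpha_bar) = 1, the marginal variance
    of x_t stays close to 1 for unit-variance data."""
    return torch.cos(0.5 * torch.pi * t) ** 2

def forward_noise(x0, t):
    """q(x_t | x_0) for a VP diffusion: scale the clean volume and add
    complementary Gaussian noise."""
    a_bar = vp_alpha_bar(t).view(-1, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    xt = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
    return xt, eps

# Two-timestep example, mirroring the "few time steps" setting.
x0 = torch.randn(4, 1, 16, 16, 16)          # toy 3D volume batch
t = torch.tensor([0.5, 0.5, 1.0, 1.0])      # two distinct noise levels
xt, eps = forward_noise(x0, t)
print(xt.shape, xt.var().item())            # variance stays close to 1
```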
https://paperswithcode.com/paper/search-for-z-2-eigenfunctions-on-the-sphere
2507.13122
null
null
Search for Z/2 eigenfunctions on the sphere using machine learning
We use machine learning to search for examples of Z/2 eigenfunctions on the 2-sphere. For this we created a multivalued version of a feedforward deep neural network, and we implemented it using the JAX library. We found Z/2 eigenfunctions for three cases: In the first two cases we fixed the branch points at the vertices of a tetrahedron and at a cube respectively. In a third case, we allowed the AI to move the branch points around and, in the end, it positioned the branch points at the vertices of a squashed tetrahedron.
null
https://arxiv.org/abs/2507.13122v1
https://arxiv.org/pdf/2507.13122v1.pdf
null
[ "Andriy Haydys", "Willem Adriaan Salm" ]
[]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/3dkeyad-high-resolution-3d-point-cloud
2507.13110
null
null
3DKeyAD: High-Resolution 3D Point Cloud Anomaly Detection via Keypoint-Guided Point Clustering
High-resolution 3D point clouds are highly effective for detecting subtle structural anomalies in industrial inspection. However, their dense and irregular nature imposes significant challenges, including high computational cost, sensitivity to spatial misalignment, and difficulty in capturing localized structural differences. This paper introduces a registration-based anomaly detection framework that combines multi-prototype alignment with cluster-wise discrepancy analysis to enable precise 3D anomaly localization. Specifically, each test sample is first registered to multiple normal prototypes to enable direct structural comparison. To evaluate anomalies at a local level, clustering is performed over the point cloud, and similarity is computed between features from the test sample and the prototypes within each cluster. Rather than selecting cluster centroids randomly, a keypoint-guided strategy is employed, where geometrically informative points are chosen as centroids. This ensures that clusters are centered on feature-rich regions, enabling more meaningful and stable distance-based comparisons. Extensive experiments on the Real3D-AD benchmark demonstrate that the proposed method achieves state-of-the-art performance in both object-level and point-level anomaly detection, even using only raw features.
null
https://arxiv.org/abs/2507.13110v1
https://arxiv.org/pdf/2507.13110v1.pdf
null
[ "Zi Wang", "Katsuya Hotta", "Koichiro Kamide", "Yawen Zou", "Chao Zhang", "Jun Yu" ]
[ "Anomaly Detection", "Anomaly Localization" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/angle-estimation-of-a-single-source-with
2507.13086
null
null
Angle Estimation of a Single Source with Massive Uniform Circular Arrays
Estimating the directions of arrival (DOAs) of incoming plane waves is an essential topic in array signal processing. Widely adopted uniform linear arrays can only provide estimates of source azimuth. Thus, uniform circular arrays (UCAs) are attractive in that they can provide $360^{\circ}$ azimuthal coverage and additional elevation angle information. Considering that, with a massive UCA, the polar angles of the array sensors can approximately represent azimuth angles over $360^{\circ}$ via angle quantization, a simple two-dimensional DOA estimation method for a single source is proposed. In this method, the quantized azimuth angle estimate is obtained by only calculating and comparing a number of covariances, based on which the elevation angle estimate is then obtained by an explicit formula. Thus, the proposed method is computationally simple and suitable for real-time signal processing. Numerical results verify that the proposed method can obtain azimuth as well as elevation angle estimates and that the estimates can be used as starting points of multidimensional searches for methods with higher accuracy. Additionally, the proposed method can still work in the presence of nonuniform noise.
null
https://arxiv.org/abs/2507.13086v1
https://arxiv.org/pdf/2507.13086v1.pdf
null
[ "Mingyan Gong" ]
[ "Quantization" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/do-governments-react-to-public-debt
2507.13084
null
null
Do Governments React to Public Debt Accumulation? A Cross-Country Analysis
Do governments adjust budgetary policy to rising public debt, precluding fiscal unsustainability? Using budget data for 52 industrial and emerging economies since 1990, we apply panel methods accounting for cross-sectional dependence and heterogeneous fiscal conduct. We find that a primary-balance rule with tax-smoothing motives and responsiveness to debt has robust explanatory power in describing fiscal behavior. Controlling for temporary output, temporary spending, and the current account balance, a 10-percentage-point increase in the debt-to-GDP ratio raises the long-run primary surplus-to-GDP ratio by 0.5 percentage points on average. Corrective adjustments hold across high- and low-debt countries and across industrial and emerging economies. Our results imply many governments pursue Ricardian policy designs, avoiding Ponzi-type financing.
null
https://arxiv.org/abs/2507.13084v1
https://arxiv.org/pdf/2507.13084v1.pdf
null
[ "Paolo Canofari", "Alessandro Piergallini", "Marco Tedeschi" ]
[]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/autopartgen-autogressive-3d-part-generation
2507.13346
null
null
AutoPartGen: Autogressive 3D Part Generation and Discovery
We introduce AutoPartGen, a model that generates objects composed of 3D parts in an autoregressive manner. This model can take as input an image of an object, 2D masks of the object's parts, or an existing 3D object, and generate a corresponding compositional 3D reconstruction. Our approach builds upon 3DShape2VecSet, a recent latent 3D representation with powerful geometric expressiveness. We observe that this latent space exhibits strong compositional properties, making it particularly well-suited for part-based generation tasks. Specifically, AutoPartGen generates object parts autoregressively, predicting one part at a time while conditioning on previously generated parts and additional inputs, such as 2D images, masks, or 3D objects. This process continues until the model decides that all parts have been generated, thus determining automatically the type and number of parts. The resulting parts can be seamlessly assembled into coherent objects or scenes without requiring additional optimization. We evaluate both the overall 3D generation capabilities and the part-level generation quality of AutoPartGen, demonstrating that it achieves state-of-the-art performance in 3D part generation.
null
https://arxiv.org/abs/2507.13346v2
https://arxiv.org/pdf/2507.13346v2.pdf
null
[ "Minghao Chen", "Jianyuan Wang", "Roman Shapovalov", "Tom Monnier", "Hyunyoung Jung", "Dilin Wang", "Rakesh Ranjan", "Iro Laina", "Andrea Vedaldi" ]
[ "3D Generation", "3D Reconstruction", "Object" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/a-real-time-system-for-egocentric-hand-object
2507.13326
null
null
A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains
Hand-object interaction detection remains an open challenge in real-time applications, where intuitive user experiences depend on fast and accurate detection of interactions with surrounding objects. We propose an efficient approach for detecting hand-object interactions from streaming egocentric vision that operates in real time. Our approach consists of an action recognition module and an object detection module for identifying active objects upon confirmed interaction. Our Mamba model with EfficientNetV2 as its backbone for action recognition achieves 38.52% p-AP on the ENIGMA-51 benchmark at 30fps, while our fine-tuned YOLOWorld reaches 85.13% AP for hand and object. We implement our models in a cascaded architecture where the action recognition and object detection modules operate sequentially. When the action recognition module predicts a contact state, it activates the object detection module, which in turn performs inference on the relevant frame to detect and classify the active object.
null
https://arxiv.org/abs/2507.13326v1
https://arxiv.org/pdf/2507.13326v1.pdf
null
[ "Antonio Finocchiaro", "Alessandro Sebastiano Catinello", "Michele Mazzamuto", "Rosario Leonardi", "Antonino Furnari", "Giovanni Maria Farinella" ]
[ "Action Recognition", "Hand-Object Interaction Detection", "Mamba", "Object", "object-detection", "Object Detection" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/towards-formal-verification-of-llm-generated
2507.13290
null
null
Towards Formal Verification of LLM-Generated Code from Natural Language Prompts
In the past few years LLMs have emerged as a tool that can aid programmers by taking natural language descriptions and generating code based on it. However, LLMs often generate incorrect code that users need to fix and the literature suggests users often struggle to detect these errors. In this work we seek to offer formal guarantees of correctness to LLM generated code; such guarantees could improve the experience of using AI Code Assistants and potentially enable natural language programming for users with little or no programming knowledge. To address this challenge we propose to incorporate a formal query language that can represent a user's intent in a formally defined but natural language-like manner that a user can confirm matches their intent. Then, using such a query we propose to verify LLM generated code to ensure it matches the user's intent. We implement these ideas in our system, Astrogator, for the Ansible programming language which includes such a formal query language, a calculus for representing the behavior of Ansible programs, and a symbolic interpreter which is used for the verification. On a benchmark suite of 21 code-generation tasks, our verifier is able to verify correct code in 83% of cases and identify incorrect code in 92%.
null
https://arxiv.org/abs/2507.13290v1
https://arxiv.org/pdf/2507.13290v1.pdf
null
[ "Aaron Councilman", "David Fu", "Aryan Gupta", "Chengxiao Wang", "David Grove", "Yu-Xiong Wang", "Vikram Adve" ]
[ "Code Generation" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/stochastic-weakly-convex-optimization-under
2507.13283
null
null
Stochastic Weakly Convex Optimization Under Heavy-Tailed Noises
An increasing number of studies have focused on stochastic first-order methods (SFOMs) under heavy-tailed gradient noises, which have been observed in the training of practical deep learning models. In this paper, we focus on two types of gradient noises: one is sub-Weibull noise, and the other is noise under the assumption that it has a bounded $p$-th central moment ($p$-BCM) with $p\in (1, 2]$. The latter is more challenging due to the occurrence of infinite variance when $p\in (1, 2)$. Under these two gradient noise assumptions, the in-expectation and high-probability convergence of SFOMs have been extensively studied in the contexts of convex optimization and standard smooth optimization. However, for weakly convex objectives, a class that includes all Lipschitz-continuous convex objectives and smooth objectives, our understanding of the in-expectation and high-probability convergence of SFOMs under these two types of noises remains incomplete. We investigate the high-probability convergence of the vanilla stochastic subgradient descent (SsGD) method under sub-Weibull noises, as well as the high-probability and in-expectation convergence of clipped SsGD under the $p$-BCM noises. Both analyses are conducted in the context of weakly convex optimization. For weakly convex objectives that may be non-convex and non-smooth, our results demonstrate that the theoretical dependence of vanilla SsGD on the failure probability and number of iterations under sub-Weibull noises does not degrade compared to the case of smooth objectives. Under $p$-BCM noises, our findings indicate that the non-smoothness and non-convexity of weakly convex objectives do not impact the theoretical dependence of clipped SsGD on the failure probability relative to the smooth case; however, the sample complexity we derived is worse than a well-known lower bound for smooth optimization.
null
https://arxiv.org/abs/2507.13283v1
https://arxiv.org/pdf/2507.13283v1.pdf
null
[ "Tianxi Zhu", "Yi Xu", "Xiangyang Ji" ]
[]
2025-07-17T00:00:00
null
null
null
null
[]
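The clipped stochastic subgradient method analyzed above is, at its core, ordinary SsGD with the stochastic subgradient rescaled to a fixed norm before each step. Below is a small sketch under assumed step size, clipping threshold, and a toy weakly convex objective with Pareto (infinite-variance) noise standing in for the $p$-BCM setting.

```python
import numpy as np

def clipped_ssgd(grad_fn, x0, steps=1000, lr=0.01, clip=1.0, seed=0):
    """Clipped stochastic subgradient descent.

    grad_fn(x, rng) returns a (possibly heavy-tailed) stochastic
    subgradient estimate; the update rescales it to norm <= clip, which
    is what makes convergence possible when the noise has only a bounded
    p-th moment with p in (1, 2].
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        g = grad_fn(x, rng)
        norm = np.linalg.norm(g)
        if norm > clip:
            g = g * (clip / norm)
        x = x - lr * g
    return x

# Toy weakly convex objective f(x) = |x[0]| + 0.5 * ||x||^2 with
# heavy-tailed (Pareto, infinite-variance) gradient noise.
def grad_fn(x, rng):
    sub = np.sign(x) * np.array([1.0, 0.0]) + x          # a subgradient of f
    noise = rng.pareto(1.5, size=x.shape) * rng.choice([-1, 1], size=x.shape)
    return sub + noise

print(clipped_ssgd(grad_fn, x0=[3.0, -2.0]))  # drifts toward the minimizer at 0
```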
https://paperswithcode.com/paper/evaluating-reinforcement-learning-algorithms-1
2507.13277
null
null
Evaluating Reinforcement Learning Algorithms for Navigation in Simulated Robotic Quadrupeds: A Comparative Study Inspired by Guide Dog Behaviour
Robots are increasingly integrated across industries, particularly in healthcare. However, many valuable applications for quadrupedal robots remain overlooked. This research explores the effectiveness of three reinforcement learning algorithms in training a simulated quadruped robot for autonomous navigation and obstacle avoidance. The goal is to develop a robotic guide dog simulation capable of path following and obstacle avoidance, with long-term potential for real-world assistance to guide dogs and visually impaired individuals. It also seeks to expand research into medical 'pets', including robotic guide and alert dogs. A comparative analysis of thirteen related research papers shaped key evaluation criteria, including collision detection, pathfinding algorithms, sensor usage, robot type, and simulation platforms. The study focuses on sensor inputs, collision frequency, reward signals, and learning progression to determine which algorithm best supports robotic navigation in complex environments. Custom-made environments were used to ensure fair evaluation of all three algorithms under controlled conditions, allowing consistent data collection. Results show that Proximal Policy Optimization (PPO) outperformed Deep Q-Network (DQN) and Q-learning across all metrics, particularly in average and median steps to goal per episode. By analysing these results, this study contributes to robotic navigation, AI and medical robotics, offering insights into the feasibility of AI-driven quadruped mobility and its role in assistive robotics.
null
https://arxiv.org/abs/2507.13277v1
https://arxiv.org/pdf/2507.13277v1.pdf
null
[ "Emma M. A. Harrison" ]
[ "Autonomous Navigation", "Q-Learning" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/merge-kernel-for-bayesian-optimization-on
2507.13263
null
null
Merge Kernel for Bayesian Optimization on Permutation Space
The Bayesian Optimization (BO) algorithm is a standard tool for black-box optimization problems. The current state-of-the-art BO approach for permutation spaces relies on the Mallows kernel, an $\Omega(n^2)$ representation that explicitly enumerates every pairwise comparison. Inspired by the close relationship between the Mallows kernel and pairwise comparison, we propose a novel framework for generating kernel functions on permutation space based on sorting algorithms. Within this framework, the Mallows kernel can be viewed as a special instance derived from bubble sort. Further, we introduce the Merge Kernel, constructed from merge sort, which replaces the quadratic complexity with $\Theta(n\log n)$ to achieve the lowest possible complexity. The resulting feature vector is significantly shorter, can be computed in linearithmic time, yet still efficiently captures meaningful permutation distances. To boost robustness and right-invariance without sacrificing compactness, we further incorporate three lightweight, task-agnostic descriptors: (1) a shift histogram, which aggregates absolute element displacements and supplies a global misplacement signal; (2) a split-pair line, which encodes selected long-range comparisons by aligning elements across the two halves of the whole permutation; and (3) sliding-window motifs, which summarize local order patterns that influence near-neighbor objectives. Our empirical evaluation demonstrates that the proposed kernel consistently outperforms the state-of-the-art Mallows kernel across various permutation optimization benchmarks. Results confirm that the Merge Kernel provides a more compact yet more effective solution for Bayesian optimization in permutation space.
null
https://arxiv.org/abs/2507.13263v2
https://arxiv.org/pdf/2507.13263v2.pdf
null
[ "Zikai Xie", "Linjiang Chen" ]
[ "Bayesian Optimization" ]
2025-07-17T00:00:00
null
null
null
null
[]
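The abstract above does not spell out how the merge-sort process is turned into a feature vector, so the following is only one plausible toy reading: record the outcome of every comparison merge sort makes while sorting the permutation, yielding a roughly $n\log n$-length signed feature vector whose inner product defines a kernel. The zero-padding and the exact encoding are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def merge_sort_features(perm):
    """Record the outcome of every comparison merge sort makes while
    sorting the permutation; ~n log n entries instead of the n(n-1)/2
    pairwise comparisons behind the Mallows kernel."""
    feats = []

    def merge_sort(a):
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            take_left = left[i] <= right[j]
            feats.append(1.0 if take_left else -1.0)
            if take_left:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:]); merged.extend(right[j:])
        return merged

    merge_sort(list(perm))
    return np.array(feats)

def merge_kernel(p, q):
    # Comparison counts can differ slightly between inputs, so the shorter
    # feature vector is zero-padded before taking the inner product.
    fp, fq = merge_sort_features(p), merge_sort_features(q)
    m = max(len(fp), len(fq))
    fp = np.pad(fp, (0, m - len(fp)))
    fq = np.pad(fq, (0, m - len(fq)))
    return float(fp @ fq)

print(merge_kernel([2, 0, 1, 3], [2, 0, 1, 3]))  # identical permutations: maximal value
print(merge_kernel([2, 0, 1, 3], [3, 1, 0, 2]))
```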
https://paperswithcode.com/paper/qtcajosa-low-complexity-joint-offloading-and
2507.13242
null
null
QTCAJOSA: Low-Complexity Joint Offloading and Subchannel Allocation for NTN-Enabled IoMT
In this work, we consider the resource allocation problem for task offloading from Internet of Medical Things (IoMT) devices to a non-terrestrial network. The architecture considers clusters of IoMT devices that offload their tasks to a dedicated unmanned aerial vehicle (UAV) serving as a multi-access edge computing (MEC) server, which can compute the task or further offload it to an available high-altitude platform station (HAPS) or to a low-earth orbit (LEO) satellite for remote computing. We formulate a problem whose objective is to minimize the weighted sum delay of the tasks. Given the non-convex nature of the problem, and acknowledging that the complexity of optimization algorithms impacts their performance, we derive a low-complexity joint subchannel allocation and offloading decision algorithm with dynamic computing resource initialization, developed as a greedy heuristic based on convex optimization criteria. Simulations show the gain obtained by including the different non-terrestrial nodes, compared to architectures without them.
null
https://arxiv.org/abs/2507.13242v1
https://arxiv.org/pdf/2507.13242v1.pdf
null
[ "Alejandro Flores C.", "Konstantinos Ntontin", "Ashok Bandi", "Symeon Chatzinotas" ]
[ "Edge-computing" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/leveraging-pre-trained-visual-models-for-ai
2507.13224
null
null
Leveraging Pre-Trained Visual Models for AI-Generated Video Detection
Recent advances in Generative AI (GenAI) have led to significant improvements in the quality of generated visual content. As AI-generated visual content becomes increasingly indistinguishable from real content, the challenge of detecting the generated content becomes critical in combating misinformation, ensuring privacy, and preventing security threats. Although there has been substantial progress in detecting AI-generated images, current methods for video detection are largely focused on deepfakes, which primarily involve human faces. However, the field of video generation has advanced beyond DeepFakes, creating an urgent need for methods capable of detecting AI-generated videos with generic content. To address this gap, we propose a novel approach that leverages pre-trained visual models to distinguish between real and generated videos. The features extracted from these pre-trained models, which have been trained on extensive real visual content, contain inherent signals that can help distinguish real from generated videos. Using these extracted features, we achieve high detection performance without requiring additional model training, and we further improve performance by training a simple linear classification layer on top of the extracted features. We validated our method on a dataset we compiled (VID-AID), which includes around 10,000 AI-generated videos produced by 9 different text-to-video models, along with 4,000 real videos, totaling over 7 hours of video content. Our evaluation shows that our approach achieves high detection accuracy, above 90% on average, underscoring its effectiveness. Upon acceptance, we plan to publicly release the code, the pre-trained models, and our dataset to support ongoing research in this critical area.
null
https://arxiv.org/abs/2507.13224v1
https://arxiv.org/pdf/2507.13224v1.pdf
null
[ "Keerthi Veeramachaneni", "Praveen Tirupattur", "Amrit Singh Bedi", "Mubarak Shah" ]
[ "Misinformation", "Video Generation" ]
2025-07-17T00:00:00
null
null
null
null
[]
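The "simple linear classification layer on top of the extracted features" step from the abstract above amounts to a standard linear probe. A sketch follows, with placeholder feature matrices standing in for per-video features from a frozen pre-trained backbone (the backbone itself and the feature dimension are assumptions).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Placeholder: in practice these would be per-video features averaged over
# frames from a frozen pre-trained visual backbone; here we fake them.
rng = np.random.default_rng(0)
real_feats = rng.normal(0.0, 1.0, size=(500, 768))
gen_feats = rng.normal(0.3, 1.0, size=(500, 768))   # synthetic distribution shift

X = np.vstack([real_feats, gen_feats])
y = np.array([0] * len(real_feats) + [1] * len(gen_feats))

idx = rng.permutation(len(X))
split = int(0.8 * len(X))
train, test = idx[:split], idx[split:]

# The "simple linear classification layer": logistic regression on frozen features.
clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
print("detection accuracy:", accuracy_score(y[test], clf.predict(X[test])))
```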
https://paperswithcode.com/paper/synthesizing-reality-leveraging-the
2507.13221
null
null
Synthesizing Reality: Leveraging the Generative AI-Powered Platform Midjourney for Construction Worker Detection
While recent advancements in deep neural networks (DNNs) have substantially enhanced visual AI's capabilities, the challenge of inadequate data diversity and volume remains, particularly in construction domain. This study presents a novel image synthesis methodology tailored for construction worker detection, leveraging the generative-AI platform Midjourney. The approach entails generating a collection of 12,000 synthetic images by formulating 3000 different prompts, with an emphasis on image realism and diversity. These images, after manual labeling, serve as a dataset for DNN training. Evaluation on a real construction image dataset yielded promising results, with the model attaining average precisions (APs) of 0.937 and 0.642 at intersection-over-union (IoU) thresholds of 0.5 and 0.5 to 0.95, respectively. Notably, the model demonstrated near-perfect performance on the synthetic dataset, achieving APs of 0.994 and 0.919 at the two mentioned thresholds. These findings reveal both the potential and weakness of generative AI in addressing DNN training data scarcity.
null
https://arxiv.org/abs/2507.13221v1
https://arxiv.org/pdf/2507.13221v1.pdf
null
[ "Hongyang Zhao", "Tianyu Liang", "Sina Davari", "Daeho Kim" ]
[ "Diversity", "Image Generation" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/gemmas-graph-based-evaluation-metrics-for
2507.13190
null
null
GEMMAS: Graph-based Evaluation Metrics for Multi Agent Systems
Multi-agent systems built on language models have shown strong performance on collaborative reasoning tasks. However, existing evaluations focus only on the correctness of the final output, overlooking how inefficient communication and poor coordination contribute to redundant reasoning and higher computational costs. We introduce GEMMAS, a graph-based evaluation framework that analyzes the internal collaboration process by modeling agent interactions as a directed acyclic graph. To capture collaboration quality, we propose two process-level metrics: Information Diversity Score (IDS) to measure semantic variation in inter-agent messages, and Unnecessary Path Ratio (UPR) to quantify redundant reasoning paths. We evaluate GEMMAS across five benchmarks and highlight results on GSM8K, where systems with only a 2.1% difference in accuracy differ by 12.8% in IDS and 80% in UPR, revealing substantial variation in internal collaboration. These findings demonstrate that outcome-only metrics are insufficient for evaluating multi-agent performance and highlight the importance of process-level diagnostics in designing more interpretable and resource-efficient collaborative AI systems.
null
https://arxiv.org/abs/2507.13190v1
https://arxiv.org/pdf/2507.13190v1.pdf
null
[ "Jisoo Lee", "Raeyoung Chang", "Dongwook Kwon", "Harmanpreet Singh", "Nikhil Verma" ]
[ "Diversity", "GSM8K" ]
2025-07-17T00:00:00
null
null
null
null
[]
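The abstract above names IDS and UPR but not their formulas, so the following is a loose toy interpretation only: IDS as the mean pairwise cosine distance between inter-agent message embeddings, and UPR as the fraction of agents in the interaction DAG whose messages never reach the node emitting the final answer. Both definitions are assumptions for illustration, not the paper's metrics.

```python
import numpy as np
from itertools import combinations

def information_diversity_score(message_embeddings):
    """Toy IDS: mean pairwise cosine distance between inter-agent
    message embeddings (higher = more semantic variation)."""
    E = np.asarray(message_embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    dists = [1.0 - float(E[i] @ E[j]) for i, j in combinations(range(len(E)), 2)]
    return float(np.mean(dists))

def unnecessary_path_ratio(edges, answer_node):
    """Toy UPR: fraction of nodes in the interaction DAG whose messages
    never reach the node that emits the final answer."""
    nodes = {u for e in edges for u in e} | {answer_node}
    rev = {v: set() for v in nodes}          # reverse adjacency
    for u, v in edges:
        rev[v].add(u)
    reach, stack = {answer_node}, [answer_node]
    while stack:                              # reverse reachability from the answer
        for u in rev[stack.pop()]:
            if u not in reach:
                reach.add(u)
                stack.append(u)
    useless = [v for v in nodes if v not in reach and v != answer_node]
    return len(useless) / max(1, len(nodes) - 1)

msgs = np.random.default_rng(0).standard_normal((6, 32))
dag = [("planner", "solver"), ("solver", "checker"), ("chatter", "chatter2")]
print(information_diversity_score(msgs))
print(unnecessary_path_ratio(dag, answer_node="checker"))
```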
https://paperswithcode.com/paper/spectral-bellman-method-unifying
2507.13181
null
null
Spectral Bellman Method: Unifying Representation and Exploration in RL
The effect of representation in reinforcement learning has been demonstrated by both theoretical and empirical successes. However, existing representation learning is mainly induced from model-learning objectives, misaligning it with the RL task itself. This work introduces Spectral Bellman Representation, a novel framework derived from the Inherent Bellman Error (IBE) condition, which aligns with the fundamental structure of Bellman updates across a space of possible value functions and is therefore geared directly towards value-based RL. Our key insight is the discovery of a fundamental spectral relationship: under the zero-IBE condition, the transformation of a distribution of value functions by the Bellman operator is intrinsically linked to the feature covariance structure. This spectral connection yields a new, theoretically-grounded objective for learning state-action features that inherently capture this Bellman-aligned covariance. Our method requires a simple modification to existing algorithms. We demonstrate that our learned representations enable structured exploration, by aligning feature covariance with Bellman dynamics, and improve overall performance, particularly in challenging hard-exploration and long-horizon credit assignment tasks. Our framework naturally extends to powerful multi-step Bellman operators, further broadening its impact. Spectral Bellman Representation offers a principled and effective path toward learning more powerful and structurally sound representations for value-based reinforcement learning.
null
https://arxiv.org/abs/2507.13181v1
https://arxiv.org/pdf/2507.13181v1.pdf
null
[ "Ofir Nabati", "Bo Dai", "Shie Mannor", "Guy Tennenholtz" ]
[ "reinforcement-learning", "Reinforcement Learning", "Representation Learning" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/aligning-humans-and-robots-via-reinforcement
2507.13171
null
null
Aligning Humans and Robots via Reinforcement Learning from Implicit Human Feedback
Conventional reinforcement learning (RL) approaches often struggle to learn effective policies under sparse reward conditions, necessitating the manual design of complex, task-specific reward functions. To address this limitation, reinforcement learning from human feedback (RLHF) has emerged as a promising strategy that complements hand-crafted rewards with human-derived evaluation signals. However, most existing RLHF methods depend on explicit feedback mechanisms such as button presses or preference labels, which disrupt the natural interaction process and impose a substantial cognitive load on the user. We propose a novel reinforcement learning from implicit human feedback (RLIHF) framework that utilizes non-invasive electroencephalography (EEG) signals, specifically error-related potentials (ErrPs), to provide continuous, implicit feedback without requiring explicit user intervention. The proposed method adopts a pre-trained decoder to transform raw EEG signals into probabilistic reward components, enabling effective policy learning even in the presence of sparse external rewards. We evaluate our approach in a simulation environment built on the MuJoCo physics engine, using a Kinova Gen2 robotic arm to perform a complex pick-and-place task that requires avoiding obstacles while manipulating target objects. The results show that agents trained with decoded EEG feedback achieve performance comparable to those trained with dense, manually designed rewards. These findings validate the potential of using implicit neural feedback for scalable and human-aligned reinforcement learning in interactive robotics.
null
https://arxiv.org/abs/2507.13171v1
https://arxiv.org/pdf/2507.13171v1.pdf
null
[ "Suzie Kim", "Hye-Bin Shin", "Seong-Whan Lee" ]
[ "EEG", "MuJoCo", "reinforcement-learning", "Reinforcement Learning", "Reinforcement Learning (RL)" ]
2025-07-17T00:00:00
null
null
null
null
[]
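One way to picture the RLIHF idea above is as reward shaping: the ErrP decoder's error probability is mapped to a dense bonus added to the sparse task reward. The linear mapping and the weight below are assumptions for illustration, not the paper's exact formulation.

```python
def shaped_reward(env_reward, p_error, weight=0.5):
    """Combine the sparse task reward with an implicit EEG-based signal.

    p_error is the decoder's probability that the user perceived the last
    robot action as erroneous (an ErrP); actions the user likely judged as
    correct (low p_error) receive a small positive bonus, likely-wrong
    actions a penalty. The linear mapping and the weight are assumptions.
    """
    implicit = 1.0 - 2.0 * p_error      # maps [0, 1] error probability to [1, -1]
    return env_reward + weight * implicit

# The sparse environment reward is zero for most steps; the implicit term
# still provides a learning signal.
print(shaped_reward(env_reward=0.0, p_error=0.9))   # -0.4: discourage
print(shaped_reward(env_reward=0.0, p_error=0.1))   #  0.4: encourage
print(shaped_reward(env_reward=1.0, p_error=0.2))   # task success dominates
```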
https://paperswithcode.com/paper/videoitg-multimodal-video-understanding-with
2507.13353
null
null
VideoITG: Multimodal Video Understanding with Instructed Temporal Grounding
Recent studies have revealed that selecting informative and relevant video frames can significantly improve the performance of Video Large Language Models (Video-LLMs). Current methods, such as reducing inter-frame redundancy, employing separate models for image-text relevance assessment, or utilizing temporal video grounding for event localization, largely adopt unsupervised learning paradigms, yet they struggle to address the complex scenarios in long video understanding. We propose Instructed Temporal Grounding for Videos (VideoITG), featuring customized frame sampling aligned with user instructions. The core of VideoITG is the VidThinker pipeline, an automated annotation framework that explicitly mimics the human annotation process. First, it generates detailed clip-level captions conditioned on the instruction; then, it retrieves relevant video segments through instruction-guided reasoning; finally, it performs fine-grained frame selection to pinpoint the most informative visual evidence. Leveraging VidThinker, we construct the VideoITG-40K dataset, containing 40K videos and 500K instructed temporal grounding annotations. We then design a plug-and-play VideoITG model, which takes advantage of the visual-language alignment and reasoning capabilities of Video-LLMs, for effective frame selection in a discriminative manner. Coupled with Video-LLMs, VideoITG achieves consistent performance improvements across multiple multimodal video understanding benchmarks, showing its superiority and great potential for video understanding.
null
https://arxiv.org/abs/2507.13353v1
https://arxiv.org/pdf/2507.13353v1.pdf
null
[ "Shihao Wang", "Guo Chen", "De-An Huang", "Zhiqi Li", "Minghan Li", "Guilin Li", "Jose M. Alvarez", "Lei Zhang", "Zhiding Yu" ]
[ "Video Grounding", "Video Understanding" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/deep-learning-based-fetal-lung-segmentation
2507.13106
null
null
Deep Learning-Based Fetal Lung Segmentation from Diffusion-weighted MRI Images and Lung Maturity Evaluation for Fetal Growth Restriction
Fetal lung maturity is a critical indicator for predicting neonatal outcomes and the need for post-natal intervention, especially for pregnancies affected by fetal growth restriction. Intra-voxel incoherent motion analysis has shown promising results for non-invasive assessment of fetal lung development, but its reliance on manual segmentation is time-consuming, thus limiting its clinical applicability. In this work, we present an automated lung maturity evaluation pipeline for diffusion-weighted magnetic resonance images that consists of a deep learning-based fetal lung segmentation model and a model-fitting lung maturity assessment. A 3D nnU-Net model was trained on manually segmented images selected from the baseline frames of 4D diffusion-weighted MRI scans. The segmentation model demonstrated robust performance, yielding a mean Dice coefficient of 82.14%. Next, voxel-wise model fitting was performed based on both the nnU-Net-predicted and manual lung segmentations to quantify IVIM parameters reflecting tissue microstructure and perfusion. The results suggested no differences between the two. Our work shows that a fully automated pipeline is possible for supporting fetal lung maturity assessment and clinical decision-making.
null
https://arxiv.org/abs/2507.13106v1
https://arxiv.org/pdf/2507.13106v1.pdf
null
[ "Zhennan Xiao", "Katharine Brudkiewicz", "Zhen Yuan", "Rosalind Aughwane", "Magdalena Sokolska", "Joanna Chappell", "Trevor Gaunt", "Anna L. David", "Andrew P. King", "Andrew Melbourne" ]
[ "Segmentation" ]
2025-07-17T00:00:00
null
null
null
null
[]
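The model-fitting stage that follows the segmentation step above uses the standard IVIM (intravoxel incoherent motion) bi-exponential signal model, which can be fitted voxel-wise with a bounded nonlinear least-squares routine. The initial values, parameter bounds, and b-values below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    """Bi-exponential IVIM signal model, normalized so S(b=0) = 1:
    S(b)/S0 = f * exp(-b * D*) + (1 - f) * exp(-b * D)."""
    return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

def fit_voxel(bvals, signal):
    s0 = signal[np.argmin(bvals)]
    y = signal / s0
    p0 = (0.2, 0.05, 0.002)                          # f, D* (perfusion), D (diffusion)
    bounds = ([0.0, 0.003, 1e-4], [1.0, 0.5, 0.003]) # keep D* and D well separated
    popt, _ = curve_fit(ivim, bvals, y, p0=p0, bounds=bounds)
    return dict(zip(["f", "D_star", "D"], popt))

# Simulated signal for one voxel inside the predicted lung mask.
bvals = np.array([0, 10, 30, 50, 100, 200, 400, 600], dtype=float)
true = ivim(bvals, f=0.25, d_star=0.06, d=0.0015)
noisy = true + np.random.default_rng(0).normal(0, 0.01, size=true.shape)
print(fit_voxel(bvals, noisy))
```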
https://paperswithcode.com/paper/diffoseg-omni-medical-image-segmentation-via
2507.13087
null
null
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model
Annotation variability remains a substantial challenge in medical image segmentation, stemming from ambiguous imaging boundaries and diverse clinical expertise. Traditional deep learning methods producing single deterministic segmentation predictions often fail to capture these annotator biases. Although recent studies have explored multi-rater segmentation, existing methods typically focus on a single perspective -- either generating a probabilistic ``gold standard'' consensus or preserving expert-specific preferences -- thus struggling to provide a more omni view. In this study, we propose DiffOSeg, a two-stage diffusion-based framework, which aims to simultaneously achieve both consensus-driven (combining all experts' opinions) and preference-driven (reflecting experts' individual assessments) segmentation. Stage I establishes population consensus through a probabilistic consensus strategy, while Stage II captures expert-specific preference via adaptive prompts. Demonstrated on two public datasets (LIDC-IDRI and NPC-170), our model outperforms existing state-of-the-art methods across all evaluated metrics. Source code is available at https://github.com/string-ellipses/DiffOSeg .
null
https://arxiv.org/abs/2507.13087v1
https://arxiv.org/pdf/2507.13087v1.pdf
null
[ "Han Zhang", "Xiangde Luo", "Yong Chen", "Kang Li" ]
[ "Image Segmentation", "Medical Image Segmentation", "Segmentation", "Semantic Segmentation" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/hierarchical-rectified-flow-matching-with
2507.13350
null
null
Hierarchical Rectified Flow Matching with Mini-Batch Couplings
Flow matching has emerged as a compelling generative modeling approach that is widely used across domains. To generate data via a flow matching model, an ordinary differential equation (ODE) is numerically solved via forward integration of the modeled velocity field. To better capture the multi-modality that is inherent in typical velocity fields, hierarchical flow matching was recently introduced. It uses a hierarchy of ODEs that are numerically integrated when generating data. This hierarchy of ODEs captures the multi-modal velocity distribution just like vanilla flow matching is capable of modeling a multi-modal data distribution. While this hierarchy enables to model multi-modal velocity distributions, the complexity of the modeled distribution remains identical across levels of the hierarchy. In this paper, we study how to gradually adjust the complexity of the distributions across different levels of the hierarchy via mini-batch couplings. We show the benefits of mini-batch couplings in hierarchical rectified flow matching via compelling results on synthetic and imaging data. Code is available at https://riccizz.github.io/HRF_coupling.
null
https://arxiv.org/abs/2507.13350v1
https://arxiv.org/pdf/2507.13350v1.pdf
null
[ "Yichi Zhang", "Yici Yan", "Alex Schwing", "Zhizhen Zhao" ]
[]
2025-07-17T00:00:00
null
null
null
null
[]
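Mini-batch couplings of the kind discussed in the abstract above are commonly built by solving a small assignment problem inside each batch before computing flow matching targets. The sketch below shows that generic ingredient (optimal pairing plus straight-line velocity targets), not the paper's hierarchical construction.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def minibatch_coupling(x0, x1):
    """Pair source samples x0 with data samples x1 by solving a linear
    assignment on squared Euclidean cost within the mini-batch, which
    straightens the flow compared to random pairing."""
    cost = ((x0[:, None, :] - x1[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return x0[rows], x1[cols]

def flow_matching_targets(x0, x1, rng):
    """Rectified-flow targets: points x_t on straight paths between the
    coupled pairs and the constant velocity x1 - x0 to regress onto."""
    t = rng.uniform(size=(len(x0), 1))
    xt = (1.0 - t) * x0 + t * x1
    v_target = x1 - x0
    return t, xt, v_target

rng = np.random.default_rng(0)
x0 = rng.standard_normal((64, 2))          # source (noise) mini-batch
x1 = rng.standard_normal((64, 2)) + 3.0    # data mini-batch
x0c, x1c = minibatch_coupling(x0, x1)
t, xt, v = flow_matching_targets(x0c, x1c, rng)
print(xt.shape, v.shape)   # a regressor v_theta(xt, t) would be fit to v
```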
https://paperswithcode.com/paper/hapticcap-a-multimodal-dataset-and-task-for
2507.13318
null
null
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals
Haptic signals, from smartphone vibrations to virtual reality touch feedback, can effectively convey information and enhance realism, but designing signals that resonate meaningfully with users is challenging. To facilitate this, we introduce a multimodal dataset and task, of matching user descriptions to vibration haptic signals, and highlight two primary challenges: (1) lack of large haptic vibration datasets annotated with textual descriptions as collecting haptic descriptions is time-consuming, and (2) limited capability of existing tasks and models to describe vibration signals in text. To advance this area, we create HapticCap, the first fully human-annotated haptic-captioned dataset, containing 92,070 haptic-text pairs for user descriptions of sensory, emotional, and associative attributes of vibrations. Based on HapticCap, we propose the haptic-caption retrieval task and present the results of this task from a supervised contrastive learning framework that brings together text representations within specific categories and vibrations. Overall, the combination of language model T5 and audio model AST yields the best performance in the haptic-caption retrieval task, especially when separately trained for each description category.
null
https://arxiv.org/abs/2507.13318v1
https://arxiv.org/pdf/2507.13318v1.pdf
null
[ "Guimin Hu", "Daniel Hershcovich", "Hasti Seifi" ]
[ "Contrastive Learning", "Retrieval" ]
2025-07-17T00:00:00
null
null
null
null
[]
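The supervised contrastive framework described above can be pictured as a CLIP-style symmetric InfoNCE objective between text and vibration embeddings. In the sketch below, the encoders are replaced with placeholder tensors standing in for projected T5 (text) and AST (vibration-as-audio) outputs; the temperature and embedding size are assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(text_emb, haptic_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of matched text/vibration pairs:
    the i-th caption should score highest against the i-th vibration."""
    text_emb = F.normalize(text_emb, dim=-1)
    haptic_emb = F.normalize(haptic_emb, dim=-1)
    logits = text_emb @ haptic_emb.t() / temperature
    targets = torch.arange(len(text_emb))
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Placeholder embeddings standing in for encoder outputs projected to a shared space.
text = torch.randn(32, 256)
haptic = torch.randn(32, 256)
print(contrastive_loss(text, haptic).item())
```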
https://paperswithcode.com/paper/revisiting-reliability-in-the-reasoning-based
2507.13314
null
null
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark
The reasoning-based pose estimation (RPE) benchmark has emerged as a widely adopted evaluation standard for pose-aware multimodal large language models (MLLMs). Despite its significance, we identified critical reproducibility and benchmark-quality issues that hinder fair and consistent quantitative evaluations. Most notably, the benchmark utilizes different image indices from those of the original 3DPW dataset, forcing researchers into tedious and error-prone manual matching processes to obtain accurate ground-truth (GT) annotations for quantitative metrics (e.g., MPJPE, PA-MPJPE). Furthermore, our analysis reveals several inherent benchmark-quality limitations, including significant image redundancy, scenario imbalance, overly simplistic poses, and ambiguous textual descriptions, collectively undermining reliable evaluations across diverse scenarios. To alleviate manual effort and enhance reproducibility, we carefully refined the GT annotations through meticulous visual matching and publicly release these refined annotations as an open-source resource, thereby promoting consistent quantitative evaluations and facilitating future advancements in human pose-aware multimodal reasoning.
null
https://arxiv.org/abs/2507.13314v1
https://arxiv.org/pdf/2507.13314v1.pdf
null
[ "Junsu Kim", "Naeun Kim", "Jaeho Lee", "Incheol Park", "Dongyoon Han", "Seungryul Baek" ]
[ "Multimodal Reasoning", "Pose Estimation" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/overview-of-the-talentclef-2025-skill-and-job
2507.13275
null
null
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management
Advances in natural language processing and large language models are driving a major transformation in Human Capital Management, with a growing interest in building smart systems based on language technologies for talent acquisition, upskilling strategies, and workforce planning. However, the adoption and progress of these technologies critically depend on the development of reliable and fair models, properly evaluated on public data and open benchmarks, which have so far been unavailable in this domain. To address this gap, we present TalentCLEF 2025, the first evaluation campaign focused on skill and job title intelligence. The lab consists of two tasks: Task A - Multilingual Job Title Matching, covering English, Spanish, German, and Chinese; and Task B - Job Title-Based Skill Prediction, in English. Both corpora were built from real job applications, carefully anonymized, and manually annotated to reflect the complexity and diversity of real-world labor market data, including linguistic variability and gender-marked expressions. The evaluations included monolingual and cross-lingual scenarios and covered the evaluation of gender bias. TalentCLEF attracted 76 registered teams with more than 280 submissions. Most systems relied on information retrieval techniques built with multilingual encoder-based models fine-tuned with contrastive learning, and several of them incorporated large language models for data augmentation or re-ranking. The results show that the training strategies have a larger effect than the size of the model alone. TalentCLEF provides the first public benchmark in this field and encourages the development of robust, fair, and transferable language technologies for the labor market.
null
https://arxiv.org/abs/2507.13275v1
https://arxiv.org/pdf/2507.13275v1.pdf
null
[ "Luis Gasco", "Hermenegildo Fabregat", "Laura García-Sardiña", "Paula Estrella", "Daniel Deniz", "Alvaro Rodrigo", "Rabih Zbib" ]
[ "Contrastive Learning", "Data Augmentation", "Information Retrieval", "Management", "Re-Ranking" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/vita-vision-to-action-flow-matching-policy
2507.13231
null
null
VITA: Vision-to-Action Flow Matching Policy
We present VITA, a Vision-To-Action flow matching policy that evolves latent visual representations into latent actions for visuomotor control. Traditional flow matching and diffusion policies sample from standard source distributions (e.g., Gaussian noise) and require additional conditioning mechanisms like cross-attention to condition action generation on visual information, creating time and space overheads. VITA proposes a novel paradigm that treats latent images as the flow source, learning an inherent mapping from vision to action while eliminating separate conditioning modules and preserving generative modeling capabilities. Learning flows between fundamentally different modalities like vision and action is challenging due to sparse action data lacking semantic structures and dimensional mismatches between high-dimensional visual representations and raw actions. We address this by creating a structured action latent space via an autoencoder as the flow matching target, up-sampling raw actions to match visual representation shapes. Crucially, we supervise flow matching with both encoder targets and final action outputs through flow latent decoding, which backpropagates action reconstruction loss through sequential flow matching ODE solving steps for effective end-to-end learning. Implemented as simple MLP layers, VITA is evaluated on challenging bi-manual manipulation tasks on the ALOHA platform, including 5 simulation and 2 real-world tasks. Despite its simplicity, MLP-only VITA outperforms or matches state-of-the-art generative policies while reducing inference latency by 50-130% compared to conventional flow matching policies requiring different conditioning mechanisms or complex architectures. To our knowledge, VITA is the first MLP-only flow matching policy capable of solving complex bi-manual manipulation tasks like those in ALOHA benchmarks.
null
https://arxiv.org/abs/2507.13231v1
https://arxiv.org/pdf/2507.13231v1.pdf
null
[ "Dechen Gao", "Boqi Zhao", "Andrew Lee", "Ian Chuang", "Hanchu Zhou", "Hang Wang", "Zhe Zhao", "Junshan Zhang", "Iman Soltani" ]
[ "Action Generation" ]
2025-07-17T00:00:00
null
null
null
null
[]
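The core VITA idea above, using the image latent itself as the flow source, reduces to a simple flow matching loss between paired latents. The latent size, the MLP width, and the placeholder latents below are assumptions; the action autoencoder and the flow latent decoding losses described in the abstract are omitted from this toy sketch.

```python
import torch
import torch.nn as nn

latent_dim = 128   # assumed shared size of image and action latents

velocity_net = nn.Sequential(          # MLP-only velocity field v_theta(x_t, t)
    nn.Linear(latent_dim + 1, 256), nn.GELU(),
    nn.Linear(256, 256), nn.GELU(),
    nn.Linear(256, latent_dim),
)

def vita_flow_loss(z_img, z_act):
    """Flow matching with the image latent as the source distribution:
    x_t = (1 - t) z_img + t z_act, regression target z_act - z_img."""
    t = torch.rand(len(z_img), 1)
    x_t = (1.0 - t) * z_img + t * z_act
    v_target = z_act - z_img
    v_pred = velocity_net(torch.cat([x_t, t], dim=-1))
    return ((v_pred - v_target) ** 2).mean()

# Placeholder latents standing in for encoded observations and up-sampled actions.
z_img = torch.randn(16, latent_dim)
z_act = torch.randn(16, latent_dim)
loss = vita_flow_loss(z_img, z_act)
loss.backward()
print(loss.item())
```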
https://paperswithcode.com/paper/higher-order-pattern-unification-modulo
2507.13208
null
null
Higher-Order Pattern Unification Modulo Similarity Relations
The combination of higher-order theories and fuzzy logic can be useful in decision-making tasks that involve reasoning across abstract functions and predicates, where exact matches are often rare or unnecessary. Developing efficient reasoning and computational techniques for such a combined formalism presents a significant challenge. In this paper, we adopt a more straightforward approach aiming at integrating two well-established and computationally well-behaved components: higher-order patterns on one side and fuzzy equivalences expressed through similarity relations based on minimum T-norm on the other. We propose a unification algorithm for higher-order patterns modulo these similarity relations and prove its termination, soundness, and completeness. This unification problem, like its crisp counterpart, is unitary. The algorithm computes a most general unifier with the highest degree of approximation when the given terms are unifiable.
null
https://arxiv.org/abs/2507.13208v1
https://arxiv.org/pdf/2507.13208v1.pdf
null
[ "Besik Dundua", "Temur Kutsia" ]
[ "Decision Making" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/rs-tinynet-stage-wise-feature-fusion-network
2507.13120
null
null
RS-TinyNet: Stage-wise Feature Fusion Network for Detecting Tiny Objects in Remote Sensing Images
Detecting tiny objects in remote sensing (RS) imagery has been a long-standing challenge due to their extremely limited spatial information, weak feature representations, and dense distributions across complex backgrounds. Despite numerous dedicated efforts, mainstream detectors still underperform in such scenarios. To bridge this gap, we introduce RS-TinyNet, a multi-stage feature fusion and enhancement model explicitly tailored for RS tiny object detection in various RS scenarios. RS-TinyNet comes with two novel designs: tiny object saliency modeling and feature integrity reconstruction. Guided by these principles, we design three step-wise feature enhancement modules. Among them, the multi-dimensional collaborative attention (MDCA) module employs multi-dimensional attention to enhance the saliency of tiny objects. Additionally, the auxiliary reversible branch (ARB) and a progressive fusion detection head (PFDH) module are introduced to preserve information flow and fuse multi-level features to bridge semantic gaps and retain structural detail. Comprehensive experiments on the public RS dataset AI-TOD show that our RS-TinyNet surpasses existing state-of-the-art (SOTA) detectors by 4.0% AP and 6.5% AP75. Evaluations on the DIOR benchmark dataset further validate its superior detection performance in diverse RS scenarios. These results demonstrate that the proposed multi-stage feature fusion strategy offers an effective and practical solution for tiny object detection in complex RS environments.
null
https://arxiv.org/abs/2507.13120v1
https://arxiv.org/pdf/2507.13120v1.pdf
null
[ "Xiaozheng Jiang", "Wei zhang", "Xuerui Mao" ]
[ "object-detection", "Object Detection" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/graspgen-a-diffusion-based-framework-for-6
2507.13097
null
null
GraspGen: A Diffusion-based Framework for 6-DOF Grasping with On-Generator Training
Grasping is a fundamental robot skill, yet despite significant research advancements, learning-based 6-DOF grasping approaches are still not turnkey and struggle to generalize across different embodiments and in-the-wild settings. We build upon the recent success in modeling the object-centric grasp generation process as an iterative diffusion process. Our proposed framework, GraspGen, consists of a Diffusion Transformer architecture that enhances grasp generation, paired with an efficient discriminator to score and filter sampled grasps. We introduce a novel and performant on-generator training recipe for the discriminator. To scale GraspGen to both objects and grippers, we release a new simulated dataset consisting of over 53 million grasps. We demonstrate that GraspGen outperforms prior methods in simulations with singulated objects across different grippers, achieves state-of-the-art performance on the FetchBench grasping benchmark, and performs well on a real robot with noisy visual observations.
null
https://arxiv.org/abs/2507.13097v1
https://arxiv.org/pdf/2507.13097v1.pdf
null
[ "Adithyavairavan Murali", "Balakumar Sundaralingam", "Yu-Wei Chao", "Wentao Yuan", "Jun Yamada", "Mark Carlson", "Fabio Ramos", "Stan Birchfield", "Dieter Fox", "Clemens Eppner" ]
[ "Grasp Generation" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/decoupled-prob-decoupled-query-initialization
2507.13085
null
null
Decoupled PROB: Decoupled Query Initialization Tasks and Objectness-Class Learning for Open World Object Detection
Open World Object Detection (OWOD) is a challenging computer vision task that extends standard object detection by (1) detecting and classifying unknown objects without supervision, and (2) incrementally learning new object classes without forgetting previously learned ones. The absence of ground truths for unknown objects makes OWOD tasks particularly challenging. Many methods have addressed this by using pseudo-labels for unknown objects. The recently proposed Probabilistic Objectness transformer-based open-world detector (PROB) is a state-of-the-art model that does not require pseudo-labels for unknown objects, as it predicts probabilistic objectness. However, this method faces issues with learning conflicts between objectness and class predictions. To address this issue and further enhance performance, we propose a novel model, Decoupled PROB. Decoupled PROB introduces Early Termination of Objectness Prediction (ETOP) to stop objectness predictions at appropriate layers in the decoder, resolving the learning conflicts between class and objectness predictions in PROB. Additionally, we introduce Task-Decoupled Query Initialization (TDQI), which efficiently extracts features of known and unknown objects, thereby improving performance. TDQI is a query initialization method that combines query selection and learnable queries, and it is a module that can be easily integrated into existing DETR-based OWOD models. Extensive experiments on OWOD benchmarks demonstrate that Decoupled PROB surpasses all existing methods across several metrics, significantly improving performance.
null
https://arxiv.org/abs/2507.13085v1
https://arxiv.org/pdf/2507.13085v1.pdf
null
[ "Riku Inoue", "Masamitsu Tsuchiya", "Yuji Yasui" ]
[ "object-detection", "Object Detection", "Open World Object Detection" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/channel-wise-motion-features-for-efficient
2507.13082
null
null
Channel-wise Motion Features for Efficient Motion Segmentation
For safety-critical robotics applications such as autonomous driving, it is important to detect all required objects accurately in real-time. Motion segmentation offers a solution by identifying dynamic objects from the scene in a class-agnostic manner. Recently, various motion segmentation models have been proposed, most of which jointly use subnetworks to estimate Depth, Pose, Optical Flow, and Scene Flow. As a result, the overall computational cost of the model increases, hindering real-time performance. In this paper, we propose a novel cost-volume-based motion feature representation, Channel-wise Motion Features. By extracting depth features of each instance in the feature map and capturing the scene's 3D motion information, it offers enhanced efficiency. The only subnetwork used to build Channel-wise Motion Features is the Pose Network, and no others are required. Our method not only achieves about 4 times the FPS of state-of-the-art models on the KITTI dataset and the Cityscapes portion of the VCAS-Motion dataset, but also demonstrates equivalent accuracy while reducing the parameters to about 25%.
null
https://arxiv.org/abs/2507.13082v1
https://arxiv.org/pdf/2507.13082v1.pdf
null
[ "Riku Inoue", "Masamitsu Tsuchiya", "Yuji Yasui" ]
[ "Autonomous Driving", "Motion Segmentation", "Optical Flow Estimation" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/imbalance-in-balance-online-concept-balancing
2507.13345
null
null
Imbalance in Balance: Online Concept Balancing in Generation Models
In visual generation tasks, the responses and combinations of complex concepts often lack stability and are error-prone, which remains an under-explored area. In this paper, we attempt to explore the causal factors for poor concept responses through elaborately designed experiments. We also design a concept-wise equalization loss function (IMBA loss) to address this issue. Our proposed method is online, eliminating the need for offline dataset processing, and requires minimal code changes. On our newly proposed complex concept benchmark Inert-CompBench and two other public test sets, our method significantly enhances the concept response capability of baseline models and yields highly competitive results with only a few lines of code.
null
https://arxiv.org/abs/2507.13345v1
https://arxiv.org/pdf/2507.13345v1.pdf
null
[ "Yukai Shi", "Jiarong Ou", "Rui Chen", "Haotian Yang", "Jiahao Wang", "Xin Tao", "Pengfei Wan", "Di Zhang", "Kun Gai" ]
[]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/taming-diffusion-transformer-for-real-time
2507.13343
null
null
Taming Diffusion Transformer for Real-Time Mobile Video Generation
Diffusion Transformers (DiT) have shown strong performance in video generation tasks, but their high computational cost makes them impractical for resource-constrained devices like smartphones, and real-time generation is even more challenging. In this work, we propose a series of novel optimizations to significantly accelerate video generation and enable real-time performance on mobile platforms. First, we employ a highly compressed variational autoencoder (VAE) to reduce the dimensionality of the input data without sacrificing visual quality. Second, we introduce a KD-guided, sensitivity-aware tri-level pruning strategy to shrink the model size to suit mobile platform while preserving critical performance characteristics. Third, we develop an adversarial step distillation technique tailored for DiT, which allows us to reduce the number of inference steps to four. Combined, these optimizations enable our model to achieve over 10 frames per second (FPS) generation on an iPhone 16 Pro Max, demonstrating the feasibility of real-time, high-quality video generation on mobile devices.
null
https://arxiv.org/abs/2507.13343v1
https://arxiv.org/pdf/2507.13343v1.pdf
null
[ "Yushu Wu", "Yanyu Li", "Anil Kag", "Ivan Skorokhodov", "Willi Menapace", "Ke Ma", "Arpit Sahni", "Ju Hu", "Aliaksandr Siarohin", "Dhritiman Sagar", "Yanzhi Wang", "Sergey Tulyakov" ]
[ "Video Generation" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/fashionpose-text-to-pose-to-relight-image
2507.13311
null
null
FashionPose: Text to Pose to Relight Image Generation for Personalized Fashion Visualization
Realistic and controllable garment visualization is critical for fashion e-commerce, where users expect personalized previews under diverse poses and lighting conditions. Existing methods often rely on predefined poses, limiting semantic flexibility and illumination adaptability. To address this, we introduce FashionPose, the first unified text-to-pose-to-relighting generation framework. Given a natural language description, our method first predicts a 2D human pose, then employs a diffusion model to generate high-fidelity person images, and finally applies a lightweight relighting module, all guided by the same textual input. By replacing explicit pose annotations with text-driven conditioning, FashionPose enables accurate pose alignment, faithful garment rendering, and flexible lighting control. Experiments demonstrate fine-grained pose synthesis and efficient, consistent relighting, providing a practical solution for personalized virtual fashion display.
null
https://arxiv.org/abs/2507.13311v1
https://arxiv.org/pdf/2507.13311v1.pdf
null
[ "Chuancheng Shi", "Yixiang Chen", "Burong Lei", "Jichao Chen" ]
[ "Image Generation" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/multi-agent-synergy-driven-iterative-visual
2507.13285
null
null
Multi-Agent Synergy-Driven Iterative Visual Narrative Synthesis
Automated generation of high-quality media presentations is challenging, requiring robust content extraction, narrative planning, visual design, and overall quality optimization. Existing methods often produce presentations with logical inconsistencies and suboptimal layouts, thereby struggling to meet professional standards. To address these challenges, we introduce RCPS (Reflective Coherent Presentation Synthesis), a novel framework integrating three key components: (1) Deep Structured Narrative Planning; (2) Adaptive Layout Generation; (3) an Iterative Optimization Loop. Additionally, we propose PREVAL, a preference-based evaluation framework employing rationale-enhanced multi-dimensional models to assess presentation quality across Content, Coherence, and Design. Experimental results demonstrate that RCPS significantly outperforms baseline methods across all quality dimensions, producing presentations that closely approximate human expert standards. PREVAL shows strong correlation with human judgments, validating it as a reliable automated tool for assessing presentation quality.
null
https://arxiv.org/abs/2507.13285v1
https://arxiv.org/pdf/2507.13285v1.pdf
null
[ "Wang Xi", "Quan Shi", "Tian Yu", "Yujie Peng", "Jiayi Sun", "MengXing Ren", "Zenghui Ding", "Ningguang Yao" ]
[ "Layout Generation" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/dino-vo-a-feature-based-visual-odometry
2507.13145
null
null
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model
Learning-based monocular visual odometry (VO) poses robustness, generalization, and efficiency challenges in robotics. Recent advances in visual foundation models, such as DINOv2, have improved robustness and generalization in various vision tasks, yet their integration in VO remains limited due to coarse feature granularity. In this paper, we present DINO-VO, a feature-based VO system leveraging DINOv2 visual foundation model for its sparse feature matching. To address the integration challenge, we propose a salient keypoints detector tailored to DINOv2's coarse features. Furthermore, we complement DINOv2's robust-semantic features with fine-grained geometric features, resulting in more localizable representations. Finally, a transformer-based matcher and differentiable pose estimation layer enable precise camera motion estimation by learning good matches. Against prior detector-descriptor networks like SuperPoint, DINO-VO demonstrates greater robustness in challenging environments. Furthermore, we show superior accuracy and generalization of the proposed feature descriptors against standalone DINOv2 coarse features. DINO-VO outperforms prior frame-to-frame VO methods on the TartanAir and KITTI datasets and is competitive on EuRoC dataset, while running efficiently at 72 FPS with less than 1GB of memory usage on a single GPU. Moreover, it performs competitively against Visual SLAM systems on outdoor driving scenarios, showcasing its generalization capabilities.
null
https://arxiv.org/abs/2507.13145v1
https://arxiv.org/pdf/2507.13145v1.pdf
null
[ "Maulana Bisyir Azhari", "David Hyunchul Shim" ]
[ "GPU", "Monocular Visual Odometry", "Motion Estimation", "Pose Estimation", "Visual Odometry" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/neural-network-guided-symbolic-regression-for
2507.12404
null
null
Neural Network-Guided Symbolic Regression for Interpretable Descriptor Discovery in Perovskite Catalysts
Understanding and predicting the activity of oxide perovskite catalysts for the oxygen evolution reaction (OER) requires descriptors that are both accurate and physically interpretable. While symbolic regression (SR) offers a path to discover such formulas, its performance degrades with high-dimensional inputs and small datasets. We present a two-phase framework that combines neural networks (NN), feature importance analysis, and symbolic regression (SR) to discover interpretable descriptors for OER activity in oxide perovskites. In Phase I, using a small dataset and seven structural features, we reproduce and improve the known {\mu}/t descriptor by engineering composite features and applying symbolic regression, achieving training and validation MAEs of 22.8 and 20.8 meV, respectively. In Phase II, we expand to 164 features, reduce dimensionality, and identify LUMO energy as a key electronic descriptor. A final formula using {\mu}/t, {\mu}/RA, and LUMO energy achieves improved accuracy (training and validation MAEs of 22.1 and 20.6 meV) with strong physical interpretability. Our results demonstrate that NN-guided symbolic regression enables accurate, interpretable, and physically meaningful descriptor discovery in data-scarce regimes, indicating interpretability need not sacrifice accuracy for materials informatics.
null
https://arxiv.org/abs/2507.12404v1
https://arxiv.org/pdf/2507.12404v1.pdf
null
[ "Yeming Xian", "Xiaoming Wang", "Yanfa Yan" ]
[ "Feature Importance", "regression", "Symbolic Regression" ]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/heat-kernel-goes-topological
2507.12380
null
null
Heat Kernel Goes Topological
Topological neural networks have emerged as powerful successors of graph neural networks. However, they typically involve higher-order message passing, which incurs significant computational expense. We circumvent this issue with a novel topological framework that introduces a Laplacian operator on combinatorial complexes (CCs), enabling efficient computation of heat kernels that serve as node descriptors. Our approach captures multiscale information and enables permutation-equivariant representations, allowing easy integration into modern transformer-based architectures. Theoretically, the proposed method is maximally expressive because it can distinguish arbitrary non-isomorphic CCs. Empirically, it significantly outperforms existing topological methods in terms of computational efficiency. Besides demonstrating competitive performance with the state-of-the-art descriptors on standard molecular datasets, it exhibits superior capability in distinguishing complex topological structures and avoiding blind spots on topological benchmarks. Overall, this work advances topological deep learning by providing expressive yet scalable representations, thereby opening up exciting avenues for molecular classification and property prediction tasks.
null
https://arxiv.org/abs/2507.12380v1
https://arxiv.org/pdf/2507.12380v1.pdf
null
[ "Maximilian Krahn", "Vikas Garg" ]
[ "Computational Efficiency", "Property Prediction" ]
2025-07-16T00:00:00
null
null
null
null
[]
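For readers unfamiliar with heat-kernel node descriptors, here is a minimal sketch on an ordinary graph Laplacian; the paper above defines the operator on combinatorial complexes, which this toy example does not attempt, and the diffusion times are arbitrary choices.

```python
# Generic sketch of heat-kernel node descriptors on a plain graph Laplacian
# (a simplification of the combinatorial-complex setting in the paper above).
import numpy as np

def heat_kernel_descriptors(adj: np.ndarray, times=(0.1, 1.0, 10.0)) -> np.ndarray:
    """Return per-node descriptors: the diagonal of exp(-t L) at several scales."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                                  # combinatorial Laplacian
    eigvals, eigvecs = np.linalg.eigh(lap)           # L = U diag(w) U^T
    descs = []
    for t in times:
        heat = eigvecs @ np.diag(np.exp(-t * eigvals)) @ eigvecs.T
        descs.append(np.diag(heat))                  # auto-diffusivity per node
    return np.stack(descs, axis=1)                   # shape: (n_nodes, n_times)

if __name__ == "__main__":
    # 4-cycle graph
    adj = np.array([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=float)
    print(heat_kernel_descriptors(adj))
```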
https://paperswithcode.com/paper/unsupervised-monocular-3d-keypoint-discovery
2507.12336
null
null
Unsupervised Monocular 3D Keypoint Discovery from Multi-View Diffusion Priors
This paper introduces KeyDiff3D, a framework for unsupervised monocular 3D keypoints estimation that accurately predicts 3D keypoints from a single image. While previous methods rely on manual annotations or calibrated multi-view images, both of which are expensive to collect, our method enables monocular 3D keypoints estimation using only a collection of single-view images. To achieve this, we leverage powerful geometric priors embedded in a pretrained multi-view diffusion model. In our framework, this model generates multi-view images from a single image, serving as a supervision signal to provide 3D geometric cues to our model. We also use the diffusion model as a powerful 2D multi-view feature extractor and construct 3D feature volumes from its intermediate representations. This transforms implicit 3D priors learned by the diffusion model into explicit 3D features. Beyond accurate keypoints estimation, we further introduce a pipeline that enables manipulation of 3D objects generated by the diffusion model. Experimental results on diverse aspects and datasets, including Human3.6M, Stanford Dogs, and several in-the-wild and out-of-domain datasets, highlight the effectiveness of our method in terms of accuracy, generalization, and its ability to enable manipulation of 3D objects generated by the diffusion model from a single image.
null
https://arxiv.org/abs/2507.12336v1
https://arxiv.org/pdf/2507.12336v1.pdf
null
[ "Subin Jeon", "In Cho", "Junyoung Hong", "Seon Joo Kim" ]
[]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/unsupervised-part-discovery-via-descriptor
2507.11985
null
null
Unsupervised Part Discovery via Descriptor-Based Masked Image Restoration with Optimized Constraints
Part-level features are crucial for image understanding, but few studies focus on them because of the lack of fine-grained labels. Although unsupervised part discovery can eliminate the reliance on labels, most existing methods cannot maintain robustness across various categories and scenarios, which restricts their application range. To overcome this limitation, we present a more effective paradigm for unsupervised part discovery, named Masked Part Autoencoder (MPAE). It first learns part descriptors as well as a feature map from the inputs and produces patch features from a masked version of the original images. Then, the masked regions are filled with the learned part descriptors based on the similarity between the local features and descriptors. By restoring these masked patches using the part descriptors, they become better aligned with their part shapes, guided by appearance features from unmasked patches. Finally, MPAE robustly discovers meaningful parts that closely match the actual object shapes, even in complex scenarios. Moreover, several looser yet more effective constraints are proposed to enable MPAE to identify the presence of parts across various scenarios and categories in an unsupervised manner. This provides the foundation for addressing challenges posed by occlusion and for exploring part similarity across multiple categories. Extensive experiments demonstrate that our method robustly discovers meaningful parts across various categories and scenarios. The code is available at the project page: https://github.com/Jiahao-UTS/MPAE.
null
https://arxiv.org/abs/2507.11985v1
https://arxiv.org/pdf/2507.11985v1.pdf
null
[ "Jiahao Xia", "Yike Wu", "Wenjian Huang", "JianGuo Zhang", "Jian Zhang" ]
[ "Image Restoration", "Unsupervised Part Discovery" ]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/a-multi-level-similarity-approach-for-single
2507.11938
null
null
A Multi-Level Similarity Approach for Single-View Object Grasping: Matching, Planning, and Fine-Tuning
Grasping unknown objects from a single view has remained a challenging topic in robotics due to the uncertainty of partial observation. Recent advances in large-scale models have led to benchmark solutions such as GraspNet-1Billion. However, such learning-based approaches still face a critical limitation in performance robustness for their sensitivity to sensing noise and environmental changes. To address this bottleneck in achieving highly generalized grasping, we abandon the traditional learning framework and introduce a new perspective: similarity matching, where similar known objects are utilized to guide the grasping of unknown target objects. We newly propose a method that robustly achieves unknown-object grasping from a single viewpoint through three key steps: 1) Leverage the visual features of the observed object to perform similarity matching with an existing database containing various object models, identifying potential candidates with high similarity; 2) Use the candidate models with pre-existing grasping knowledge to plan imitative grasps for the unknown target object; 3) Optimize the grasp quality through a local fine-tuning process. To address the uncertainty caused by partial and noisy observation, we propose a multi-level similarity matching framework that integrates semantic, geometric, and dimensional features for comprehensive evaluation. Especially, we introduce a novel point cloud geometric descriptor, the C-FPFH descriptor, which facilitates accurate similarity assessment between partial point clouds of observed objects and complete point clouds of database models. In addition, we incorporate the use of large language models, introduce the semi-oriented bounding box, and develop a novel point cloud registration approach based on plane detection to enhance matching accuracy under single-view conditions. Videos are available at https://youtu.be/qQDIELMhQmk.
null
https://arxiv.org/abs/2507.11938v1
https://arxiv.org/pdf/2507.11938v1.pdf
null
[ "Hao Chen", "Takuya Kiyokawa", "Zhengtao Hu", "Weiwei Wan", "Kensuke Harada" ]
[ "Object", "Point Cloud Registration" ]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/sepose-a-synthetic-event-based-human-pose
2507.11910
null
null
SEPose: A Synthetic Event-based Human Pose Estimation Dataset for Pedestrian Monitoring
Event-based sensors have emerged as a promising solution for addressing challenging conditions in pedestrian and traffic monitoring systems. Their low-latency and high dynamic range allow for improved response time in safety-critical situations caused by distracted walking or other unusual movements. However, the availability of data covering such scenarios remains limited. To address this gap, we present SEPose -- a comprehensive synthetic event-based human pose estimation dataset for fixed pedestrian perception generated using dynamic vision sensors in the CARLA simulator. With nearly 350K annotated pedestrians with body pose keypoints from the perspective of fixed traffic cameras, SEPose is a comprehensive synthetic multi-person pose estimation dataset that spans busy and light crowds and traffic across diverse lighting and weather conditions in 4-way intersections in urban, suburban, and rural environments. We train existing state-of-the-art models such as RVT and YOLOv8 on our dataset and evaluate them on real event-based data to demonstrate the sim-to-real generalization capabilities of the proposed dataset.
null
https://arxiv.org/abs/2507.11910v1
https://arxiv.org/pdf/2507.11910v1.pdf
null
[ "Kaustav Chanda", "Aayush Atul Verma", "Arpitsinh Vaghela", "Yezhou Yang", "Bharatesh Chakravarthi" ]
[ "Multi-Person Pose Estimation", "Pose Estimation" ]
2025-07-16T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/diffuman4d-4d-consistent-human-view-synthesis
2507.13344
null
null
Diffuman4D: 4D Consistent Human View Synthesis from Sparse-View Videos with Spatio-Temporal Diffusion Models
This paper addresses the challenge of high-fidelity view synthesis of humans with sparse-view videos as input. Previous methods solve the issue of insufficient observation by leveraging 4D diffusion models to generate videos at novel viewpoints. However, the generated videos from these models often lack spatio-temporal consistency, thus degrading view synthesis quality. In this paper, we propose a novel sliding iterative denoising process to enhance the spatio-temporal consistency of the 4D diffusion model. Specifically, we define a latent grid in which each latent encodes the image, camera pose, and human pose for a certain viewpoint and timestamp, then alternately denoise the latent grid along spatial and temporal dimensions with a sliding window, and finally decode the videos at target viewpoints from the corresponding denoised latents. Through the iterative sliding, information flows sufficiently across the latent grid, allowing the diffusion model to obtain a large receptive field and thus enhance the 4D consistency of the output, while making the GPU memory consumption affordable. The experiments on the DNA-Rendering and ActorsHQ datasets demonstrate that our method is able to synthesize high-quality and consistent novel-view videos and significantly outperforms the existing approaches. See our project page for interactive demos and video results: https://diffuman4d.github.io/ .
null
https://arxiv.org/abs/2507.13344v1
https://arxiv.org/pdf/2507.13344v1.pdf
null
[ "Yudong Jin", "Sida Peng", "Xuan Wang", "Tao Xie", "Zhen Xu", "Yifan Yang", "Yujun Shen", "Hujun Bao", "Xiaowei Zhou" ]
[ "Denoising", "GPU" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/diffclean-diffusion-based-makeup-removal-for
2507.13292
null
null
DiffClean: Diffusion-based Makeup Removal for Accurate Age Estimation
Accurate age verification can protect underage users from unauthorized access to online platforms and e-commerce sites that provide age-restricted services. However, accurate age estimation can be confounded by several factors, including facial makeup that can induce changes to alter perceived identity and age to fool both humans and machines. In this work, we propose DiffClean which erases makeup traces using a text-guided diffusion model to defend against makeup attacks. DiffClean improves age estimation (minor vs. adult accuracy by 4.8%) and face verification (TMR by 8.9% at FMR=0.01%) over competing baselines on digitally simulated and real makeup images.
null
https://arxiv.org/abs/2507.13292v1
https://arxiv.org/pdf/2507.13292v1.pdf
null
[ "Ekta Balkrishna Gavas", "Chinmay Hegde", "Nasir Memon", "Sudipta Banerjee" ]
[ "Age Estimation", "Face Verification" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/efficient-adaptation-of-pre-trained-vision-1
2507.13260
null
null
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy
A prevalent approach in Parameter-Efficient Fine-Tuning (PEFT) of pre-trained Vision Transformers (ViT) involves freezing the majority of the backbone parameters and solely learning low-rank adaptation weight matrices to accommodate downstream tasks. These low-rank matrices are commonly derived through the multiplication structure of down-projection and up-projection matrices, exemplified by methods such as LoRA and Adapter. In this work, we observe an approximate orthogonality among any two row or column vectors within any weight matrix of the backbone parameters; however, this property is absent in the vectors of the down/up-projection matrices. Approximate orthogonality implies a reduction in the upper bound of the model's generalization error, signifying that the model possesses enhanced generalization capability. If the fine-tuned down/up-projection matrices were to exhibit this same property as the pre-trained backbone matrices, could the generalization capability of fine-tuned ViTs be further augmented? To address this question, we propose an Approximately Orthogonal Fine-Tuning (AOFT) strategy for representing the low-rank weight matrices. This strategy employs a single learnable vector to generate a set of approximately orthogonal vectors, which form the down/up-projection matrices, thereby aligning the properties of these matrices with those of the backbone. Extensive experimental results demonstrate that our method achieves competitive performance across a range of downstream image classification tasks, confirming the efficacy of the enhanced generalization capability embedded in the down/up-projection matrices.
null
https://arxiv.org/abs/2507.13260v1
https://arxiv.org/pdf/2507.13260v1.pdf
null
[ "Yiting Yang", "Hao Luo", "Yuan Sun", "Qingsen Yan", "Haokui Zhang", "Wei Dong", "Guoqing Wang", "Peng Wang", "Yang Yang", "HengTao Shen" ]
[ "image-classification", "Image Classification", "parameter-efficient fine-tuning" ]
2025-07-17T00:00:00
null
null
null
null
[]
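To make the low-rank down/up-projection structure and the approximate-orthogonality property discussed in the AOFT abstract above more concrete, here is a hedged PyTorch sketch: a generic LoRA-style adapter plus a Gram-matrix diagnostic, not the AOFT method itself; all dimensions, names, and initializations are hypothetical.

```python
# Illustrative sketch only (not the AOFT implementation from the paper above):
# a generic LoRA-style low-rank adapter plus a simple measure of how far a
# matrix's rows deviate from approximate orthogonality.
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """Frozen base weight W plus a trainable low-rank update up @ down."""
    def __init__(self, d_in: int, d_out: int, rank: int = 8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)                    # freeze the backbone weight
        self.down = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # down-projection
        self.up = nn.Parameter(torch.zeros(d_out, rank))          # up-projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.down.T @ self.up.T

def row_orthogonality_gap(w: torch.Tensor) -> float:
    """Mean absolute off-diagonal entry of the normalized Gram matrix.

    Values near 0 mean the rows are approximately orthogonal."""
    w = nn.functional.normalize(w, dim=1)
    gram = w @ w.T
    off_diag = gram - torch.diag(torch.diag(gram))
    return off_diag.abs().mean().item()

if __name__ == "__main__":
    layer = LowRankAdapter(768, 768, rank=8)
    print("backbone rows:", row_orthogonality_gap(layer.base.weight))
    print("down-proj rows:", row_orthogonality_gap(layer.down))
```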
https://paperswithcode.com/paper/disentangling-coincident-cell-events-using
2507.13176
null
null
Disentangling coincident cell events using deep transfer learning and compressive sensing
Accurate single-cell analysis is critical for diagnostics, immunomonitoring, and cell therapy, but coincident events - where multiple cells overlap in a sensing zone - can severely compromise signal fidelity. We present a hybrid framework combining a fully convolutional neural network (FCN) with compressive sensing (CS) to disentangle such overlapping events in one-dimensional sensor data. The FCN, trained on bead-derived datasets, accurately estimates coincident event counts and generalizes to immunomagnetically labeled CD4+ and CD14+ cells in whole blood without retraining. Using this count, the CS module reconstructs individual signal components with high fidelity, enabling precise recovery of single-cell features, including velocity, amplitude, and hydrodynamic diameter. Benchmarking against conventional state-machine algorithms shows superior performance - recovering up to 21% more events and improving classification accuracy beyond 97%. Explainability via class activation maps and parameterized Gaussian template fitting ensures transparency and clinical interpretability. Demonstrated with magnetic flow cytometry (MFC), the framework is compatible with other waveform-generating modalities, including impedance cytometry, nanopore, and resistive pulse sensing. This work lays the foundation for next-generation non-optical single-cell sensing platforms that are automated, generalizable, and capable of resolving overlapping events, broadening the utility of cytometry in translational medicine and precision diagnostics, e.g. cell-interaction studies.
null
https://arxiv.org/abs/2507.13176v1
https://arxiv.org/pdf/2507.13176v1.pdf
null
[ "Moritz Leuthner", "Rafael Vorländer", "Oliver Hayden" ]
[ "Benchmarking", "Compressive Sensing", "Transfer Learning" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/multi-population-gan-training-analyzing-co
2507.13157
null
null
Multi-population GAN Training: Analyzing Co-Evolutionary Algorithms
Generative adversarial networks (GANs) are powerful generative models but remain challenging to train due to pathologies such as mode collapse and instability. Recent research has explored co-evolutionary approaches, in which populations of generators and discriminators are evolved, as a promising solution. This paper presents an empirical analysis of different coevolutionary GAN training strategies, focusing on the impact of selection and replacement mechanisms. We compare (mu,lambda), (mu+lambda) with elitism, and (mu+lambda) with tournament selection coevolutionary schemes, along with a non-evolutionary population-based multi-generator multi-discriminator GAN baseline, across both synthetic low-dimensional datasets (blob and Gaussian mixtures) and an image-based benchmark (MNIST). Results show that full generational replacement, i.e., (mu,lambda), consistently outperforms the alternatives in terms of both sample quality and diversity, particularly when combined with larger offspring sizes. In contrast, elitist approaches tend to converge prematurely and suffer from reduced diversity. These findings highlight the importance of balancing exploration and exploitation dynamics in coevolutionary GAN training and provide guidance for designing more effective population-based generative models.
null
https://arxiv.org/abs/2507.13157v1
https://arxiv.org/pdf/2507.13157v1.pdf
null
[ "Walter P. Casas", "Jamal Toutouh" ]
[ "Diversity", "Evolutionary Algorithms" ]
2025-07-17T00:00:00
null
null
null
null
[]
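The selection schemes compared in the abstract above generalize beyond GANs; the sketch below contrasts (mu, lambda) full generational replacement with (mu + lambda) elitist selection on a toy scalar objective rather than a generator/discriminator population, purely to illustrate the mechanics.

```python
# Minimal, hypothetical sketch of two selection schemes: comma selection
# (mu, lambda) with full generational replacement versus plus selection
# (mu + lambda) with elitism, applied to a toy real-valued fitness.
import random

def mutate(x: float, sigma: float = 0.1) -> float:
    return x + random.gauss(0.0, sigma)

def fitness(x: float) -> float:
    return -(x - 3.0) ** 2          # toy objective: maximize, optimum at x = 3

def evolve(mu: int, lam: int, plus_selection: bool, generations: int = 50) -> float:
    parents = [random.uniform(-5, 5) for _ in range(mu)]
    for _ in range(generations):
        offspring = [mutate(random.choice(parents)) for _ in range(lam)]
        # (mu, lambda): only offspring survive; (mu + lambda): parents compete too.
        pool = parents + offspring if plus_selection else offspring
        parents = sorted(pool, key=fitness, reverse=True)[:mu]
    return max(fitness(p) for p in parents)

if __name__ == "__main__":
    random.seed(0)
    print("(mu, lambda) best fitness:", evolve(5, 20, plus_selection=False))
    print("(mu + lambda) best fitness:", evolve(5, 20, plus_selection=True))
```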
https://paperswithcode.com/paper/nonverbaltts-a-public-english-corpus-of-text
2507.13155
null
null
NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech
Current expressive speech synthesis models are constrained by the limited availability of open-source datasets containing diverse nonverbal vocalizations (NVs). In this work, we introduce NonverbalTTS (NVTTS), a 17-hour open-access dataset annotated with 10 types of NVs (e.g., laughter, coughs) and 8 emotional categories. The dataset is derived from popular sources, VoxCeleb and Expresso, using automated detection followed by human validation. We propose a comprehensive pipeline that integrates automatic speech recognition (ASR), NV tagging, emotion classification, and a fusion algorithm to merge transcriptions from multiple annotators. Fine-tuning open-source text-to-speech (TTS) models on the NVTTS dataset achieves parity with closed-source systems such as CosyVoice2, as measured by both human evaluation and automatic metrics, including speaker similarity and NV fidelity. By releasing NVTTS and its accompanying annotation guidelines, we address a key bottleneck in expressive TTS research. The dataset is available at https://huggingface.co/datasets/deepvk/NonverbalTTS.
null
https://arxiv.org/abs/2507.13155v1
https://arxiv.org/pdf/2507.13155v1.pdf
null
[ "Maksim Borisov", "Egor Spirin", "Daria Diatlova" ]
[ "Automatic Speech Recognition", "Automatic Speech Recognition (ASR)", "Emotion Classification", "Expressive Speech Synthesis", "speech-recognition", "Speech Recognition", "Speech Synthesis", "text-to-speech", "Text to Speech" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/leveraging-language-prior-for-infrared-small
2507.13113
null
null
Leveraging Language Prior for Infrared Small Target Detection
IRSTD (InfraRed Small Target Detection) detects small targets in infrared blurry backgrounds and is essential for various applications. The detection task is challenging due to the small size of the targets and their sparse distribution in infrared small target datasets. Although existing IRSTD methods and datasets have led to significant advancements, they are limited by their reliance solely on the image modality. Recent advances in deep learning and large vision-language models have shown remarkable performance in various visual recognition tasks. In this work, we propose a novel multimodal IRSTD framework that incorporates language priors to guide small target detection. We leverage language-guided attention weights derived from the language prior to enhance the model's ability for IRSTD, presenting a novel approach that combines textual information with image data to improve IRSTD capabilities. Utilizing the state-of-the-art GPT-4 vision model, we generate text descriptions that provide the locations of small targets in infrared images, employing careful prompt engineering to ensure improved accuracy. Due to the absence of multimodal IR datasets, existing IRSTD methods rely solely on image data. To address this shortcoming, we have curated a multimodal infrared dataset that includes both image and text modalities for small target detection, expanding upon the popular IRSTD-1k and NUDT-SIRST datasets. We validate the effectiveness of our approach through extensive experiments and comprehensive ablation studies. The results demonstrate significant improvements over the state-of-the-art method, with relative percentage differences of 9.74%, 13.02%, 1.25%, and 67.87% in IoU, nIoU, Pd, and Fa on the NUAA-SIRST subset, and 4.41%, 2.04%, 2.01%, and 113.43% on the IRSTD-1k subset of the LangIR dataset, respectively.
null
https://arxiv.org/abs/2507.13113v1
https://arxiv.org/pdf/2507.13113v1.pdf
null
[ "Pranav Singh", "Pravendra Singh" ]
[ "Prompt Engineering" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/beyond-fully-supervised-pixel-annotations
2507.13018
null
null
Beyond Fully Supervised Pixel Annotations: Scribble-Driven Weakly-Supervised Framework for Image Manipulation Localization
Deep learning-based image manipulation localization (IML) methods have achieved remarkable performance in recent years, but typically rely on large-scale pixel-level annotated datasets. To address the challenge of acquiring high-quality annotations, some recent weakly supervised methods utilize image-level labels to segment manipulated regions. However, the performance is still limited due to insufficient supervision signals. In this study, we explore a form of weak supervision that improves the annotation efficiency and detection performance, namely scribble annotation supervision. We re-annotated mainstream IML datasets with scribble labels and propose the first scribble-based IML (Sc-IML) dataset. Additionally, we propose the first scribble-based weakly supervised IML framework. Specifically, we employ self-supervised training with a structural consistency loss to encourage the model to produce consistent predictions under multi-scale and augmented inputs. In addition, we propose a prior-aware feature modulation module (PFMM) that adaptively integrates prior information from both manipulated and authentic regions for dynamic feature adjustment, further enhancing feature discriminability and prediction consistency in complex scenes. We also propose a gated adaptive fusion module (GAFM) that utilizes gating mechanisms to regulate information flow during feature fusion, guiding the model toward emphasizing potential tampered regions. Finally, we propose a confidence-aware entropy minimization loss (${\mathcal{L}}_{ {CEM }}$). This loss dynamically regularizes predictions in weakly annotated or unlabeled regions based on model uncertainty, effectively suppressing unreliable predictions. Experimental results show that our method outperforms existing fully supervised approaches in terms of average performance both in-distribution and out-of-distribution.
null
https://arxiv.org/abs/2507.13018v1
https://arxiv.org/pdf/2507.13018v1.pdf
null
[ "Songlin Li", "Guofeng Yu", "Zhiqing Guo", "Yunfeng Diao", "Dan Ma", "Gaobo Yang", "Liejun Wang" ]
[ "Image Manipulation", "Image Manipulation Localization" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/exploiting-constraint-reasoning-to-build
2507.13007
null
null
Exploiting Constraint Reasoning to Build Graphical Explanations for Mixed-Integer Linear Programming
Following the recent push for trustworthy AI, there has been an increasing interest in developing contrastive explanation techniques for optimisation, especially concerning the solution of specific decision-making processes formalised as MILPs. Along these lines, we propose X-MILP, a domain-agnostic approach for building contrastive explanations for MILPs based on constraint reasoning techniques. First, we show how to encode the queries a user makes about the solution of an MILP problem as additional constraints. Then, we determine the reasons that constitute the answer to the user's query by computing the Irreducible Infeasible Subsystem (IIS) of the newly obtained set of constraints. Finally, we represent our explanation as a "graph of reasons" constructed from the IIS, which helps the user understand the structure among the reasons that answer their query. We test our method on instances of well-known optimisation problems to evaluate the empirical hardness of computing explanations.
null
https://arxiv.org/abs/2507.13007v1
https://arxiv.org/pdf/2507.13007v1.pdf
null
[ "Roger Xavier Lera-Leri", "Filippo Bistaffa", "Athina Georgara", "Juan Antonio Rodriguez-Aguilar" ]
[ "Decision Making" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/from-variability-to-accuracy-conditional
2507.12985
null
null
From Variability To Accuracy: Conditional Bernoulli Diffusion Models with Consensus-Driven Correction for Thin Structure Segmentation
Accurate segmentation of orbital bones in facial computed tomography (CT) images is essential for the creation of customized implants for reconstruction of defected orbital bones, particularly challenging due to the ambiguous boundaries and thin structures such as the orbital medial wall and orbital floor. In these ambiguous regions, existing segmentation approaches often output disconnected or under-segmented results. We propose a novel framework that corrects segmentation results by leveraging consensus from multiple diffusion model outputs. Our approach employs a conditional Bernoulli diffusion model trained on diverse annotation patterns per image to generate multiple plausible segmentations, followed by a consensus-driven correction that incorporates position proximity, consensus level, and gradient direction similarity to correct challenging regions. Experimental results demonstrate that our method outperforms existing methods, significantly improving recall in ambiguous regions while preserving the continuity of thin structures. Furthermore, our method automates the manual process of segmentation result correction and can be applied to image-guided surgical planning and surgery.
null
https://arxiv.org/abs/2507.12985v1
https://arxiv.org/pdf/2507.12985v1.pdf
null
[ "Jinseo An", "Min Jin Lee", "Kyu Won Shim", "Helen Hong" ]
[ "Computed Tomography (CT)", "Segmentation" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/cidir-conditioned-implicit-neural
2507.12953
null
null
cIDIR: Conditioned Implicit Neural Representation for Regularized Deformable Image Registration
Regularization is essential in deformable image registration (DIR) to ensure that the estimated Deformation Vector Field (DVF) remains smooth, physically plausible, and anatomically consistent. However, fine-tuning regularization parameters in learning-based DIR frameworks is computationally expensive, often requiring multiple training iterations. To address this, we propose cIDIR, a novel DIR framework based on Implicit Neural Representations (INRs) that conditions the registration process on regularization hyperparameters. Unlike conventional methods that require retraining for each regularization hyperparameter setting, cIDIR is trained over a prior distribution of these hyperparameters, then optimized over the regularization hyperparameters by using the segmentation masks as an observation. Additionally, cIDIR models a continuous and differentiable DVF, enabling seamless integration of advanced regularization techniques via automatic differentiation. Evaluated on the DIR-LAB dataset, $\operatorname{cIDIR}$ achieves high accuracy and robustness across the dataset.
null
https://arxiv.org/abs/2507.12953v1
https://arxiv.org/pdf/2507.12953v1.pdf
null
[ "Sidaty El Hadramy", "Oumeymah Cherkaoui", "Philippe C. Cattin" ]
[ "Image Registration" ]
2025-07-17T00:00:00
null
null
null
null
[]
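As a loose illustration of conditioning an implicit neural representation on a regularization hyperparameter, the sketch below feeds the weight in as an extra network input and uses it to scale a smoothness penalty; the architecture and loss terms are placeholders, not the cIDIR formulation.

```python
# Hedged sketch of conditioning an INR on a regularization weight: the
# hyperparameter is appended to the coordinate input and also scales a
# smoothness term. All layer sizes and losses are made-up placeholders.
import torch
import torch.nn as nn

class ConditionedINR(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        # input: 3 spatial coordinates + 1 regularization weight
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),            # output: displacement vector
        )

    def forward(self, coords: torch.Tensor, lam: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([coords, lam.expand(coords.shape[0], 1)], dim=1))

model = ConditionedINR()
coords = torch.rand(128, 3, requires_grad=True)
lam = torch.tensor([[0.1]])                  # sampled regularization weight
disp = model(coords, lam)
# placeholder similarity term + lambda-weighted smoothness term via autograd
grads = torch.autograd.grad(disp.sum(), coords, create_graph=True)[0]
loss = disp.pow(2).mean() + lam.squeeze() * grads.pow(2).mean()
loss.backward()
```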
https://paperswithcode.com/paper/lovic-efficient-long-video-generation-with
2507.12952
null
null
LoViC: Efficient Long Video Generation with Context Compression
Despite recent advances in diffusion transformers (DiTs) for text-to-video generation, scaling to long-duration content remains challenging due to the quadratic complexity of self-attention. While prior efforts -- such as sparse attention and temporally autoregressive models -- offer partial relief, they often compromise temporal coherence or scalability. We introduce LoViC, a DiT-based framework trained on million-scale open-domain videos, designed to produce long, coherent videos through a segment-wise generation process. At the core of our approach is FlexFormer, an expressive autoencoder that jointly compresses video and text into unified latent representations. It supports variable-length inputs with linearly adjustable compression rates, enabled by a single query token design based on the Q-Former architecture. Additionally, by encoding temporal context through position-aware mechanisms, our model seamlessly supports prediction, retrodiction, interpolation, and multi-shot generation within a unified paradigm. Extensive experiments across diverse tasks validate the effectiveness and versatility of our approach.
null
https://arxiv.org/abs/2507.12952v1
https://arxiv.org/pdf/2507.12952v1.pdf
null
[ "Jiaxiu Jiang", "Wenbo Li", "Jingjing Ren", "Yuping Qiu", "Yong Guo", "Xiaogang Xu", "Han Wu", "WangMeng Zuo" ]
[ "Text-to-Video Generation", "Video Generation" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/unleashing-vision-foundation-models-for
2507.12938
null
null
Unleashing Vision Foundation Models for Coronary Artery Segmentation: Parallel ViT-CNN Encoding and Variational Fusion
Accurate coronary artery segmentation is critical for computer-aided diagnosis of coronary artery disease (CAD), yet it remains challenging due to the small size, complex morphology, and low contrast with surrounding tissues. To address these challenges, we propose a novel segmentation framework that leverages the power of vision foundation models (VFMs) through a parallel encoding architecture. Specifically, a vision transformer (ViT) encoder within the VFM captures global structural features, enhanced by the activation of the final two ViT blocks and the integration of an attention-guided enhancement (AGE) module, while a convolutional neural network (CNN) encoder extracts local details. These complementary features are adaptively fused using a cross-branch variational fusion (CVF) module, which models latent distributions and applies variational attention to assign modality-specific weights. Additionally, we introduce an evidential-learning uncertainty refinement (EUR) module, which quantifies uncertainty using evidence theory and refines uncertain regions by incorporating multi-scale feature aggregation and attention mechanisms, further enhancing segmentation accuracy. Extensive evaluations on one in-house and two public datasets demonstrate that the proposed framework significantly outperforms state-of-the-art methods, achieving superior performance in accurate coronary artery segmentation and showcasing strong generalization across multiple datasets. The code is available at https://github.com/d1c2x3/CAseg.
null
https://arxiv.org/abs/2507.12938v1
https://arxiv.org/pdf/2507.12938v1.pdf
null
[ "Caixia Dong", "Duwei Dai", "Xinyi Han", "Fan Liu", "Xu Yang", "Zongfang Li", "Songhua Xu" ]
[ "Coronary Artery Segmentation", "Segmentation" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/fremer-lightweight-and-effective-frequency
2507.12908
null
null
Fremer: Lightweight and Effective Frequency Transformer for Workload Forecasting in Cloud Services
Workload forecasting is pivotal in cloud service applications, such as auto-scaling and scheduling, with profound implications for operational efficiency. Although Transformer-based forecasting models have demonstrated remarkable success in general tasks, their computational efficiency often falls short of the stringent requirements in large-scale cloud environments. Given that most workload series exhibit complicated periodic patterns, addressing these challenges in the frequency domain offers substantial advantages. To this end, we propose Fremer, an efficient and effective deep forecasting model. Fremer fulfills three critical requirements: it demonstrates superior efficiency, outperforming most Transformer-based forecasting models; it achieves exceptional accuracy, surpassing all state-of-the-art (SOTA) models in workload forecasting; and it exhibits robust performance for multi-period series. Furthermore, we collect and open-source four high-quality workload datasets derived from ByteDance's cloud services, encompassing workload data from thousands of computing instances. Extensive experiments on both our proprietary datasets and public benchmarks demonstrate that Fremer consistently outperforms baseline models, achieving average improvements of 5.5% in MSE, 4.7% in MAE, and 8.6% in SMAPE over SOTA models, while simultaneously reducing parameter scale and computational costs. Additionally, in a proactive auto-scaling test based on Kubernetes, Fremer improves average latency by 18.78% and reduces resource consumption by 2.35%, underscoring its practical efficacy in real-world applications.
null
https://arxiv.org/abs/2507.12908v1
https://arxiv.org/pdf/2507.12908v1.pdf
null
[ "Jiadong Chen", "Hengyu Ye", "Fuxin Jiang", "Xiao He", "Tieying Zhang", "Jianjun Chen", "Xiaofeng Gao" ]
[ "Computational Efficiency", "Scheduling" ]
2025-07-17T00:00:00
null
null
null
null
[]
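The Fremer abstract above quotes improvements in MSE, MAE, and SMAPE; for reference, a minimal implementation of these three metrics is sketched below on dummy arrays (not the ByteDance workload traces).

```python
# Small, self-contained sketch of the three error metrics quoted above.
import numpy as np

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

def smape(y_true, y_pred, eps: float = 1e-8):
    # symmetric mean absolute percentage error, in percent
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0 + eps
    return float(np.mean(np.abs(y_true - y_pred) / denom) * 100.0)

if __name__ == "__main__":
    y_true = np.array([100.0, 120.0, 90.0, 110.0])
    y_pred = np.array([98.0, 125.0, 95.0, 105.0])
    print(mse(y_true, y_pred), mae(y_true, y_pred), smape(y_true, y_pred))
```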
https://paperswithcode.com/paper/var-math-probing-true-mathematical-reasoning
2507.12885
null
null
VAR-MATH: Probing True Mathematical Reasoning in Large Language Models via Symbolic Multi-Instance Benchmarks
Recent advances in reinforcement learning (RL) have led to substantial improvements in the mathematical reasoning abilities of large language models (LLMs), as measured by standard benchmarks. However, these gains often persist even when models are trained with flawed signals, such as random or inverted rewards, raising a fundamental question: do such improvements reflect true reasoning, or are they merely artifacts of overfitting to benchmark-specific patterns? To address this question, we take an evaluation-centric perspective and identify two critical shortcomings in existing protocols. First, \emph{benchmark contamination} arises from the public availability of test problems, increasing the risk of data leakage. Second, \emph{evaluation fragility} stems from the reliance on single-instance assessments, which are highly sensitive to stochastic outputs and fail to capture reasoning consistency. To overcome these limitations, we introduce {VAR-MATH}, a symbolic evaluation framework designed to probe genuine reasoning ability. By converting fixed numerical problems into symbolic templates and requiring models to solve multiple instantiations of each, VAR-MATH enforces consistent reasoning across structurally equivalent variants, thereby mitigating contamination and improving evaluation robustness. We apply VAR-MATH to transform two popular benchmarks, AMC23 and AIME24, into their symbolic counterparts, VAR-AMC23 and VAR-AIME24. Experimental results reveal substantial performance drops for RL-trained models on the variabilized versions, especially for smaller models, with average declines of 48.0\% on AMC23 and 58.3\% on AIME24. These findings suggest that many existing RL methods rely on superficial heuristics and fail to generalize beyond specific numerical forms. Overall, VAR-MATH offers a principled, contamination-resistant evaluation paradigm for mathematical reasoning.
null
https://arxiv.org/abs/2507.12885v1
https://arxiv.org/pdf/2507.12885v1.pdf
null
[ "Jian Yao", "Ran Cheng", "Kay Chen Tan" ]
[ "Math", "Mathematical Reasoning", "Reinforcement Learning (RL)" ]
2025-07-17T00:00:00
null
null
null
null
[]
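A toy sketch of the symbolic multi-instance evaluation idea from the VAR-MATH abstract follows: a numeric problem becomes a template with free variables, several variants are sampled, and credit is given only if all variants are solved. The template and checker are invented examples, not items from VAR-AMC23 or VAR-AIME24.

```python
# Hypothetical illustration of symbolic multi-instance evaluation: a model is
# only credited if it answers every sampled instantiation of a template.
import random
from typing import Tuple

TEMPLATE = "If a train travels {a} km in {b} hours, what is its average speed in km/h?"

def instantiate(seed: int) -> Tuple[str, float]:
    rng = random.Random(seed)
    a, b = rng.randint(60, 300), rng.randint(2, 6)
    return TEMPLATE.format(a=a, b=b), a / b

def consistent_accuracy(model, n_variants: int = 5) -> bool:
    """Return True only if the model solves all variants of the template."""
    for seed in range(n_variants):
        question, answer = instantiate(seed)
        if abs(model(question) - answer) > 1e-6:
            return False
    return True

if __name__ == "__main__":
    # A 'model' that memorized one numeric answer fails the variabilized check.
    memorized = lambda q: 75.0
    print(consistent_accuracy(memorized))
```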
https://paperswithcode.com/paper/hrseg-high-resolution-visual-perception-and
2507.12883
null
null
HRSeg: High-Resolution Visual Perception and Enhancement for Reasoning Segmentation
The reasoning segmentation task involves segmenting objects within an image by interpreting implicit user instructions, which may encompass subtleties such as contextual cues and open-world knowledge. Despite significant advancements made by existing approaches, they remain constrained by low perceptual resolution, as visual encoders are typically pre-trained at lower resolutions. Furthermore, simply interpolating the positional embeddings of visual encoders to enhance perceptual resolution yields only marginal performance improvements while incurring substantial computational costs. To address this, we propose HRSeg, an efficient model with high-resolution fine-grained perception. It features two key innovations: High-Resolution Perception (HRP) and High-Resolution Enhancement (HRE). The HRP module processes high-resolution images through cropping, integrating local and global features for multi-granularity quality. The HRE module enhances mask features by integrating fine-grained information from high-resolution images, refining their alignment with text features for precise segmentation. Extensive ablation studies validate the effectiveness of our modules, while comprehensive experiments on multiple benchmark datasets demonstrate HRSeg's superior performance.
null
https://arxiv.org/abs/2507.12883v1
https://arxiv.org/pdf/2507.12883v1.pdf
null
[ "Weihuang Lin", "Yiwei Ma", "Xiaoshuai Sun", "Shuting He", "Jiayi Ji", "Liujuan Cao", "Rongrong Ji" ]
[ "Reasoning Segmentation", "World Knowledge" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/score-scene-context-matters-in-open
2507.12857
null
null
SCORE: Scene Context Matters in Open-Vocabulary Remote Sensing Instance Segmentation
Most existing remote sensing instance segmentation approaches are designed for closed-vocabulary prediction, limiting their ability to recognize novel categories or generalize across datasets. This restricts their applicability in diverse Earth observation scenarios. To address this, we introduce open-vocabulary (OV) learning for remote sensing instance segmentation. While current OV segmentation models perform well on natural image datasets, their direct application to remote sensing faces challenges such as diverse landscapes, seasonal variations, and the presence of small or ambiguous objects in aerial imagery. To overcome these challenges, we propose $\textbf{SCORE}$ ($\textbf{S}$cene $\textbf{C}$ontext matters in $\textbf{O}$pen-vocabulary $\textbf{RE}$mote sensing instance segmentation), a framework that integrates multi-granularity scene context, i.e., regional context and global context, to enhance both visual and textual representations. Specifically, we introduce Region-Aware Integration, which refines class embeddings with regional context to improve object distinguishability. Additionally, we propose Global Context Adaptation, which enriches naive text embeddings with remote sensing global context, creating a more adaptable and expressive linguistic latent space for the classifier. We establish new benchmarks for OV remote sensing instance segmentation across diverse datasets. Experimental results demonstrate that our proposed method achieves SOTA performance, providing a robust solution for large-scale, real-world geospatial analysis. Our code is available at https://github.com/HuangShiqi128/SCORE.
null
https://arxiv.org/abs/2507.12857v1
https://arxiv.org/pdf/2507.12857v1.pdf
null
[ "Shiqi Huang", "Shuting He", "Huaiyuan Qin", "Bihan Wen" ]
[ "Earth Observation", "Instance Segmentation", "Segmentation", "Semantic Segmentation" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/enter-the-mind-palace-reasoning-and-planning
2507.12846
null
null
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering
As robots become increasingly capable of operating over extended periods -- spanning days, weeks, and even months -- they are expected to accumulate knowledge of their environments and leverage this experience to assist humans more effectively. This paper studies the problem of Long-term Active Embodied Question Answering (LA-EQA), a new task in which a robot must both recall past experiences and actively explore its environment to answer complex, temporally-grounded questions. Unlike traditional EQA settings, which typically focus either on understanding the present environment alone or on recalling a single past observation, LA-EQA challenges an agent to reason over past, present, and possible future states, deciding when to explore, when to consult its memory, and when to stop gathering observations and provide a final answer. Standard EQA approaches based on large models struggle in this setting due to limited context windows, absence of persistent memory, and an inability to combine memory recall with active exploration. To address this, we propose a structured memory system for robots, inspired by the mind palace method from cognitive science. Our method encodes episodic experiences as scene-graph-based world instances, forming a reasoning and planning algorithm that enables targeted memory retrieval and guided navigation. To balance the exploration-recall trade-off, we introduce value-of-information-based stopping criteria that determines when the agent has gathered sufficient information. We evaluate our method on real-world experiments and introduce a new benchmark that spans popular simulation environments and actual industrial sites. Our approach significantly outperforms state-of-the-art baselines, yielding substantial gains in both answer accuracy and exploration efficiency.
null
https://arxiv.org/abs/2507.12846v1
https://arxiv.org/pdf/2507.12846v1.pdf
null
[ "Muhammad Fadhil Ginting", "Dong-Ki Kim", "Xiangyun Meng", "Andrzej Reinke", "Bandi Jai Krishna", "Navid Kayhani", "Oriana Peltzer", "David D. Fan", "Amirreza Shaban", "Sung-Kyun Kim", "Mykel J. Kochenderfer", "Ali-akbar Agha-mohammadi", "Shayegan Omidshafiei" ]
[ "Embodied Question Answering", "Question Answering" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/unified-medical-image-segmentation-with-state
2507.12760
null
null
Unified Medical Image Segmentation with State Space Modeling Snake
Unified Medical Image Segmentation (UMIS) is critical for comprehensive anatomical assessment but faces challenges due to multi-scale structural heterogeneity. Conventional pixel-based approaches, lacking object-level anatomical insight and inter-organ relational modeling, struggle with morphological complexity and feature conflicts, limiting their efficacy in UMIS. We propose Mamba Snake, a novel deep snake framework enhanced by state space modeling for UMIS. Mamba Snake frames multi-contour evolution as a hierarchical state space atlas, effectively modeling macroscopic inter-organ topological relationships and microscopic contour refinements. We introduce a snake-specific vision state space module, the Mamba Evolution Block (MEB), which leverages effective spatiotemporal information aggregation for adaptive refinement of complex morphologies. Energy map shape priors further ensure robust long-range contour evolution in heterogeneous data. Additionally, a dual-classification synergy mechanism is incorporated to concurrently optimize detection and segmentation, mitigating under-segmentation of microstructures in UMIS. Extensive evaluations across five clinical datasets reveal Mamba Snake's superior performance, with an average Dice improvement of 3\% over state-of-the-art methods.
null
https://arxiv.org/abs/2507.12760v1
https://arxiv.org/pdf/2507.12760v1.pdf
null
[ "Ruicheng Zhang", "Haowei Guo", "Kanghui Tian", "Jun Zhou", "Mingliang Yan", "Zeyu Zhang", "Shen Zhao" ]
[ "Image Segmentation", "Mamba", "Medical Image Segmentation", "Segmentation", "Semantic Segmentation" ]
2025-07-17T00:00:00
null
null
null
null
[]
https://paperswithcode.com/paper/a-privacy-preserving-semantic-segmentation
2507.12730
null
null
A Privacy-Preserving Semantic-Segmentation Method Using Domain-Adaptation Technique
We propose a privacy-preserving semantic-segmentation method for applying perceptual encryption to images used for model training in addition to test images. This method also provides almost the same accuracy as models without any encryption. The above performance is achieved using a domain-adaptation technique on the embedding structure of the Vision Transformer (ViT). The effectiveness of the proposed method was experimentally confirmed in terms of the accuracy of semantic segmentation when using a powerful semantic-segmentation model with ViT called Segmentation Transformer.
null
https://arxiv.org/abs/2507.12730v1
https://arxiv.org/pdf/2507.12730v1.pdf
null
[ "Homare Sueyoshi", "Kiyoshi Nishikawa", "Hitoshi Kiya" ]
[ "Domain Adaptation", "Privacy Preserving", "Segmentation", "Semantic Segmentation" ]
2025-07-17T00:00:00
null
null
null
null
[]
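Perceptual encryption in privacy-preserving ViT pipelines like the one above is often patch-level scrambling aligned with the ViT patch grid; the sketch below applies a keyed permutation of non-overlapping patches, with the patch size and key handling chosen arbitrarily (it is not the paper's exact scheme).

```python
# Toy sketch of patch-level perceptual encryption: a fixed, keyed permutation
# of non-overlapping image patches. Patch size and key handling are made up.
import numpy as np

def patch_scramble(img: np.ndarray, patch: int, key: int) -> np.ndarray:
    h, w, c = img.shape
    gh, gw = h // patch, w // patch
    blocks = img[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch, c)
    blocks = blocks.transpose(0, 2, 1, 3, 4).reshape(gh * gw, patch, patch, c)
    perm = np.random.RandomState(key).permutation(gh * gw)   # keyed permutation
    blocks = blocks[perm].reshape(gh, gw, patch, patch, c)
    return blocks.transpose(0, 2, 1, 3, 4).reshape(gh * patch, gw * patch, c)

img = np.random.rand(224, 224, 3)
encrypted = patch_scramble(img, patch=16, key=1234)
```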
https://paperswithcode.com/paper/flexitokens-flexible-tokenization-for
2507.12720
null
null
FLEXITOKENS: Flexible Tokenization for Evolving Language Models
Language models (LMs) are challenging to adapt to new data distributions by simple finetuning. This is due to the rigidity of their subword tokenizers, which typically remain unchanged during adaptation. This inflexibility often leads to inefficient tokenization, causing overfragmentation of out-of-distribution domains, unseen languages, or scripts. In this work, we develop byte-level LMs with learnable tokenizers to make tokenization adaptive. Our models include a submodule that learns to predict boundaries between the input byte sequence, encoding it into variable-length segments. Existing tokenizer-free methods train this boundary predictor using an auxiliary loss that enforces a fixed compression rate across the training corpus, introducing a new kind of rigidity. We propose FLEXITOKENS, a simplified training objective that enables significantly greater flexibility during adaptation. Evaluating across multiple multilingual benchmarks, morphologically diverse tasks, and domains, we demonstrate that FLEXITOKENS consistently reduces token over-fragmentation and achieves up to 10\% improvements on downstream task performance compared to subword and other gradient-based tokenizers. Code and data for our experiments will be released at https://github.com/owos/flexitokens
null
https://arxiv.org/abs/2507.12720v1
https://arxiv.org/pdf/2507.12720v1.pdf
null
[ "Abraham Toluase Owodunni", "Orevaoghene Ahia", "Sachin Kumar" ]
[]
2025-07-17T00:00:00
null
null
null
null
[]
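To ground the idea of a learnable boundary predictor over byte sequences described in the FLEXITOKENS abstract, here is a hypothetical toy module that emits per-byte boundary probabilities and softly caps the expected segment count instead of enforcing a fixed compression rate; it is not the FLEXITOKENS objective, and all sizes are made up.

```python
# Hypothetical toy sketch of a learnable byte-boundary predictor with a soft
# cap on the expected number of segments (not the FLEXITOKENS objective).
import torch
import torch.nn as nn

class BoundaryPredictor(nn.Module):
    def __init__(self, vocab: int = 256, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.score = nn.Linear(dim, 1)

    def forward(self, byte_ids: torch.Tensor) -> torch.Tensor:
        # byte_ids: (batch, seq_len) -> boundary probability per byte
        return torch.sigmoid(self.score(self.embed(byte_ids))).squeeze(-1)

model = BoundaryPredictor()
byte_ids = torch.randint(0, 256, (2, 32))
probs = model(byte_ids)
expected_segments = probs.sum(dim=1)                  # differentiable segment count
loss = torch.relu(expected_segments - 16).mean()      # soft cap, not a hard rate
loss.backward()
```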
https://paperswithcode.com/paper/fortress-function-composition-optimized-real
2507.12675
null
null
FORTRESS: Function-composition Optimized Real-Time Resilient Structural Segmentation via Kolmogorov-Arnold Enhanced Spatial Attention Networks
Automated structural defect segmentation in civil infrastructure faces a critical challenge: achieving high accuracy while maintaining computational efficiency for real-time deployment. This paper presents FORTRESS (Function-composition Optimized Real-Time Resilient Structural Segmentation), a new architecture that balances accuracy and speed by combining depthwise separable convolutions with adaptive Kolmogorov-Arnold Network integration. FORTRESS incorporates three key innovations: a systematic depthwise separable convolution framework achieving a 3.6x parameter reduction per layer, adaptive TiKAN integration that selectively applies function composition transformations only when computationally beneficial, and multi-scale attention fusion combining spatial, channel, and KAN-enhanced features across decoder levels. The architecture achieves remarkable efficiency gains with 91% parameter reduction (31M to 2.9M), 91% computational complexity reduction (13.7 to 1.17 GFLOPs), and 3x inference speed improvement while delivering superior segmentation performance. Evaluation on benchmark infrastructure datasets demonstrates state-of-the-art results with an F1-score of 0.771 and a mean IoU of 0.677, significantly outperforming existing methods including U-Net, SA-UNet, and U-KAN. The dual optimization strategy proves essential for optimal performance, establishing FORTRESS as a robust solution for practical structural defect segmentation in resource-constrained environments where both accuracy and computational efficiency are paramount. Comprehensive architectural specifications are provided in the Supplemental Material. Source code is available at URL: https://github.com/faeyelab/fortress-paper-code.
null
https://arxiv.org/abs/2507.12675v1
https://arxiv.org/pdf/2507.12675v1.pdf
null
[ "Christina Thrainer", "Md Meftahul Ferdaus", "Mahdi Abdelguerfi", "Christian Guetl", "Steven Sloan", "Kendall N. Niles", "Ken Pathak" ]
[ "Computational Efficiency", "Segmentation" ]
2025-07-16T00:00:00
null
null
null
null
[]
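The per-layer parameter savings the FORTRESS abstract attributes to depthwise separable convolutions can be illustrated with a generic PyTorch comparison. This is a minimal sketch, not the paper's layer configuration; the exact reduction factor (the abstract cites 3.6x per layer) depends on kernel size and channel counts, so the toy numbers below come out differently.

```python
import torch.nn as nn

def standard_conv(c_in, c_out, k=3):
    return nn.Conv2d(c_in, c_out, k, padding=k // 2)

def depthwise_separable_conv(c_in, c_out, k=3):
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in),  # depthwise: one filter per channel
        nn.Conv2d(c_in, c_out, 1),                               # pointwise: 1x1 channel mixing
    )

def n_params(module):
    return sum(p.numel() for p in module.parameters())

c_in, c_out = 64, 64
print(n_params(standard_conv(c_in, c_out)))             # 36,928 for a plain 3x3 conv
print(n_params(depthwise_separable_conv(c_in, c_out)))  # 4,800 -> roughly 7.7x fewer at these sizes
```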
https://paperswithcode.com/paper/ngtm-substructure-based-neural-graph-topic
2507.13133
null
null
NGTM: Substructure-based Neural Graph Topic Model for Interpretable Graph Generation
Graph generation plays a pivotal role across numerous domains, including molecular design and knowledge graph construction. Although existing methods achieve considerable success in generating realistic graphs, their interpretability remains limited, often obscuring the rationale behind structural decisions. To address this challenge, we propose the Neural Graph Topic Model (NGTM), a novel generative framework inspired by topic modeling in natural language processing. NGTM represents graphs as mixtures of latent topics, each defining a distribution over semantically meaningful substructures, which facilitates explicit interpretability at both local and global scales. The generation process transparently integrates these topic distributions with a global structural variable, enabling clear semantic tracing of each generated graph. Experiments demonstrate that NGTM achieves competitive generation quality while uniquely enabling fine-grained control and interpretability, allowing users to tune structural features or induce biological properties through topic-level adjustments.
null
https://arxiv.org/abs/2507.13133v1
https://arxiv.org/pdf/2507.13133v1.pdf
null
[ "Yuanxin Zhuang", "Dazhong Shen", "Ying Sun" ]
[ "graph construction", "Graph Generation" ]
2025-07-17T00:00:00
null
null
null
null
[]
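To make the "graphs as mixtures of latent topics over substructures" idea in the NGTM abstract concrete, the toy sampler below draws per-graph topic proportions from a Dirichlet, picks small motifs according to each topic's distribution, and stitches them into a single graph with networkx. The motif vocabulary, topic matrix, and assembly rule are illustrative assumptions, not the paper's neural generative model.

```python
import numpy as np
import networkx as nx

MOTIFS = {
    "triangle": lambda: nx.cycle_graph(3),
    "path": lambda: nx.path_graph(4),
    "star": lambda: nx.star_graph(3),
}
# Each topic = a categorical distribution over the motif vocabulary (invented values).
TOPICS = np.array([
    [0.8, 0.1, 0.1],   # topic 0 favours triangles
    [0.1, 0.1, 0.8],   # topic 1 favours stars
])

def sample_graph(n_motifs=5, alpha=0.5, rng=np.random.default_rng(0)):
    theta = rng.dirichlet([alpha] * len(TOPICS))           # per-graph topic mixture
    names = list(MOTIFS)
    g = nx.Graph()
    for _ in range(n_motifs):
        topic = rng.choice(len(TOPICS), p=theta)           # pick a topic, then a motif from it
        motif = MOTIFS[names[rng.choice(len(names), p=TOPICS[topic])]]()
        offset = g.number_of_nodes()
        g = nx.disjoint_union(g, motif)
        if offset > 0:                                      # stitch the new motif to the graph
            g.add_edge(rng.integers(offset),
                       offset + rng.integers(motif.number_of_nodes()))
    return g

print(sample_graph())
```

The interpretability claim in the abstract corresponds to being able to trace which topic (and hence which family of substructures) contributed each part of a generated graph.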
https://paperswithcode.com/paper/uncertainty-aware-cross-modal-knowledge
2507.13092
null
null
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces
Electroencephalography (EEG) is a fundamental modality for cognitive state monitoring in brain-computer interfaces (BCIs). However, it is highly susceptible to intrinsic signal errors and human-induced labeling errors, which lead to label noise and ultimately degrade model performance. To enhance EEG learning, multimodal knowledge distillation (KD) has been explored to transfer knowledge from visual models with rich representations to EEG-based models. Nevertheless, KD faces two key challenges: modality gap and soft label misalignment. The former arises from the heterogeneous nature of EEG and visual feature spaces, while the latter stems from label inconsistencies that create discrepancies between ground truth labels and distillation targets. This paper addresses semantic uncertainty caused by ambiguous features and weakly defined labels. We propose a novel cross-modal knowledge distillation framework that mitigates both modality and label inconsistencies. It aligns feature semantics through a prototype-based similarity module and introduces a task-specific distillation head to resolve label-induced inconsistency in supervision. Experimental results demonstrate that our approach improves EEG-based emotion regression and classification performance, outperforming both unimodal and multimodal baselines on a public multimodal dataset. These findings highlight the potential of our framework for BCI applications.
null
https://arxiv.org/abs/2507.13092v1
https://arxiv.org/pdf/2507.13092v1.pdf
null
[ "Hyo-Jeong Jang", "Hye-Bin Shin", "Seong-Whan Lee" ]
[ "EEG", "Knowledge Distillation" ]
2025-07-17T00:00:00
null
null
null
null
[]
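A minimal sketch of the prototype-based similarity idea described in the abstract above: class prototypes are computed from the visual (teacher) features, and the EEG (student) similarity distribution over those prototypes is matched to the teacher's. The temperature, normalization, and KL form are assumptions; the paper's full framework also includes a task-specific distillation head that is not shown here.

```python
import torch
import torch.nn.functional as F

def prototype_alignment_loss(eeg_feats, visual_feats, labels, num_classes, tau=0.1):
    # Class prototypes = mean visual (teacher) feature per class.
    protos = torch.stack([visual_feats[labels == c].mean(0) for c in range(num_classes)])
    protos = F.normalize(protos, dim=-1)
    # Student (EEG) and teacher (visual) similarity distributions over the prototypes.
    p_student = F.log_softmax(F.normalize(eeg_feats, dim=-1) @ protos.T / tau, dim=-1)
    p_teacher = F.softmax(F.normalize(visual_feats, dim=-1) @ protos.T / tau, dim=-1)
    return F.kl_div(p_student, p_teacher, reduction="batchmean")

eeg = torch.randn(8, 64)                                   # stand-in EEG features
vis = torch.randn(8, 64)                                   # stand-in visual features
labels = torch.tensor([0, 1, 2, 3, 0, 1, 2, 3])            # every class present in the batch
print(prototype_alignment_loss(eeg, vis, labels, num_classes=4))
```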
https://paperswithcode.com/paper/advancing-complex-wide-area-scene
2507.13061
null
null
Advancing Complex Wide-Area Scene Understanding with Hierarchical Coresets Selection
Scene understanding is one of the core tasks in computer vision, aiming to extract semantic information from images to identify objects, scene categories, and their interrelationships. Although advancements in Vision-Language Models (VLMs) have driven progress in this field, existing VLMs still face challenges in adapting to unseen complex wide-area scenes. To address these challenges, this paper proposes a Hierarchical Coresets Selection (HCS) mechanism to advance the adaptation of VLMs to complex wide-area scene understanding. It progressively refines the selected regions based on a proposed, theoretically guaranteed importance function that considers utility, representativeness, robustness, and synergy. Without requiring additional fine-tuning, HCS enables VLMs to achieve rapid understanding of unseen scenes at any scale using minimal interpretable regions while mitigating insufficient feature density. HCS is a plug-and-play method compatible with any VLM. Experiments demonstrate that HCS achieves superior performance and universality across various tasks.
null
https://arxiv.org/abs/2507.13061v1
https://arxiv.org/pdf/2507.13061v1.pdf
null
[ "Jingyao Wang", "Yiming Chen", "Lingyu Si", "Changwen Zheng" ]
[ "Scene Understanding" ]
2025-07-17T00:00:00
null
null
null
null
[]
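The HCS abstract describes progressively selecting regions with an importance function built from utility, representativeness, robustness, and synergy terms. The greedy toy selector below only sketches that shape of algorithm; the four score proxies and their weights are invented placeholders, not the paper's theoretically guaranteed function.

```python
import numpy as np

def importance(feat, global_mean, selected, w=(0.4, 0.3, 0.2, 0.1)):
    utility = feat.var()                                    # proxy: informative regions vary more
    representativeness = -np.linalg.norm(feat - global_mean)
    robustness = -np.abs(feat).max()                        # penalise extreme activations
    synergy = -max((float(feat @ s) for s in selected), default=0.0)  # prefer complementary picks
    return w[0] * utility + w[1] * representativeness + w[2] * robustness + w[3] * synergy

def select_regions(regions, k=3):
    global_mean = np.mean(regions, axis=0)
    chosen, feats = [], []
    for _ in range(k):                                      # greedy refinement of the selected set
        best = max((i for i in range(len(regions)) if i not in chosen),
                   key=lambda i: importance(regions[i], global_mean, feats))
        chosen.append(best)
        feats.append(regions[best])
    return chosen

rng = np.random.default_rng(0)
regions = [rng.normal(size=16) for _ in range(10)]          # stand-in region feature vectors
print(select_regions(regions))
```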
https://paperswithcode.com/paper/a-translation-of-probabilistic-event-calculus
2507.12989
null
null
A Translation of Probabilistic Event Calculus into Markov Decision Processes
Probabilistic Event Calculus (PEC) is a logical framework for reasoning about actions and their effects in uncertain environments, which enables the representation of probabilistic narratives and computation of temporal projections. The PEC formalism offers significant advantages in interpretability and expressiveness for narrative reasoning. However, it lacks mechanisms for goal-directed reasoning. This paper bridges this gap by developing a formal translation of PEC domains into Markov Decision Processes (MDPs), introducing the concept of "action-taking situations" to preserve PEC's flexible action semantics. The resulting PEC-MDP formalism enables the extensive collection of algorithms and theoretical tools developed for MDPs to be applied to PEC's interpretable narrative domains. We demonstrate how the translation supports both temporal reasoning tasks and objective-driven planning, with methods for mapping learned policies back into human-readable PEC representations, maintaining interpretability while extending PEC's capabilities.
null
https://arxiv.org/abs/2507.12989v1
https://arxiv.org/pdf/2507.12989v1.pdf
null
[ "Lyris Xu", "Fabio Aurelio D'Asaro", "Luke Dickens" ]
[ "Translation" ]
2025-07-17T00:00:00
null
null
null
null
[]
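The practical payoff claimed in the abstract above is that, once a PEC domain has been translated into an MDP, standard MDP algorithms become available. The fragment below runs plain value iteration on a hand-written toy MDP to illustrate that downstream step only; the translation itself (deriving states, action-taking situations, and transition probabilities from a PEC narrative) is the paper's contribution and is not reproduced here.

```python
# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "s0": {"act": [(0.8, "s1", 1.0), (0.2, "s0", 0.0)], "wait": [(1.0, "s0", 0.0)]},
    "s1": {"act": [(1.0, "s1", 0.5)], "wait": [(1.0, "s0", 0.0)]},
}

def value_iteration(transitions, gamma=0.9, iters=100):
    V = {s: 0.0 for s in transitions}
    for _ in range(iters):
        # Synchronous Bellman backup over all states.
        V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outs)
                    for outs in acts.values())
             for s, acts in transitions.items()}
    policy = {s: max(acts, key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in acts[a]))
              for s, acts in transitions.items()}
    return V, policy

print(value_iteration(transitions))
```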
https://paperswithcode.com/paper/generalist-bimanual-manipulation-via
2507.12898
null
null
Generalist Bimanual Manipulation via Foundation Video Diffusion Models
Bimanual robotic manipulation, which involves the coordinated control of two robotic arms, is foundational for solving challenging tasks. Despite recent progress in general-purpose manipulation, data scarcity and embodiment heterogeneity remain serious obstacles to further scaling up in bimanual settings. In this paper, we introduce VIdeo Diffusion for Action Reasoning (VIDAR), a two-stage framework that leverages large-scale, diffusion-based video pre-training and a novel masked inverse dynamics model for action prediction. We pre-train the video diffusion model on 750K multi-view videos from three real-world bimanual robot platforms, utilizing a unified observation space that encodes robot, camera, task, and scene contexts. Our masked inverse dynamics model learns masks to extract action-relevant information from generated trajectories without requiring pixel-level labels, and the masks can effectively generalize to unseen backgrounds. Our experiments demonstrate that with only 20 minutes of human demonstrations on an unseen robot platform (only 1% of typical data requirements), VIDAR generalizes to unseen tasks and backgrounds with strong semantic understanding, surpassing state-of-the-art methods. Our findings highlight the potential of video foundation models, coupled with masked action prediction, to enable scalable and generalizable robotic manipulation in diverse real-world settings.
null
https://arxiv.org/abs/2507.12898v1
https://arxiv.org/pdf/2507.12898v1.pdf
null
[ "Yao Feng", "Hengkai Tan", "Xinyi Mao", "Guodong Liu", "Shuhe Huang", "Chendong Xiang", "Hang Su", "Jun Zhu" ]
[]
2025-07-17T00:00:00
null
null
null
null
[]
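A hedged sketch of the masked inverse dynamics component described in the VIDAR abstract: actions are predicted from pairs of frame features, with a learned soft mask selecting action-relevant feature dimensions. The feature and action dimensions, the mask parameterization, and the MLP head are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MaskedInverseDynamics(nn.Module):
    def __init__(self, feat_dim=512, action_dim=14):        # e.g. two 7-DoF arms (assumed)
        super().__init__()
        self.mask_logits = nn.Parameter(torch.zeros(feat_dim))
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(), nn.Linear(256, action_dim)
        )

    def forward(self, feat_t, feat_tp1):
        mask = torch.sigmoid(self.mask_logits)               # soft selection of action-relevant dims
        x = torch.cat([feat_t * mask, feat_tp1 * mask], dim=-1)
        return self.head(x)                                  # predicted action between the two frames

model = MaskedInverseDynamics()
actions = model(torch.randn(4, 512), torch.randn(4, 512))
print(actions.shape)  # (4, 14)
```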