Dataset schema (18 columns; ⌀ marks nullable fields): paper_url string (len 35-81); arxiv_id string (6-35) ⌀; nips_id null; openreview_id string (9-93) ⌀; title string (1-1.02k) ⌀; abstract string (0-56.5k) ⌀; short_abstract string (0-1.95k) ⌀; url_abs string (16-996); url_pdf string (16-996) ⌀; proceeding string (7-1.03k) ⌀; authors list (0-3.31k); tasks list (0-147); date timestamp[ns] (1951-09-01 to 2222-12-22) ⌀; conference_url_abs string (16-199) ⌀; conference_url_pdf string (21-200) ⌀; conference string (2-47) ⌀; reproduces_paper string (22 classes); methods list (0-7.5k)

paper_url | arxiv_id | nips_id | openreview_id | title | abstract | short_abstract | url_abs | url_pdf | proceeding | authors | tasks | date | conference_url_abs | conference_url_pdf | conference | reproduces_paper | methods
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
https://paperswithcode.com/paper/camera-based-implicit-mind-reading-by
|
2507.12889
| null | null |
Camera-based implicit mind reading by capturing higher-order semantic dynamics of human gaze within environmental context
|
Emotion recognition, as a step toward mind reading, seeks to infer internal states from external cues. Most existing methods rely on explicit signals, such as facial expressions, speech, or gestures, that reflect only bodily responses and overlook the influence of environmental context. These cues are often voluntary, easy to mask, and insufficient for capturing deeper, implicit emotions. Physiological signal-based approaches offer more direct access to internal states but require complex sensors that compromise natural behavior and limit scalability. Gaze-based methods typically rely on static fixation analysis and fail to capture the rich, dynamic interactions between gaze and the environment, and thus cannot uncover the deep connection between emotion and implicit behavior. To address these limitations, we propose a novel camera-based, user-unaware emotion recognition approach that integrates gaze fixation patterns with environmental semantics and temporal dynamics. Leveraging standard HD cameras, our method unobtrusively captures users' eye appearance and head movements in natural settings, without the need for specialized hardware or active user participation. From these visual cues, the system estimates gaze trajectories over time and space, providing the basis for modeling the spatial, semantic, and temporal dimensions of gaze behavior. This allows us to capture the dynamic interplay between visual attention and the surrounding environment, revealing that emotions are not merely physiological responses but complex outcomes of human-environment interactions. The proposed approach enables user-unaware, real-time, and continuous emotion recognition, offering high generalizability and low deployment cost.
| null |
https://arxiv.org/abs/2507.12889v1
|
https://arxiv.org/pdf/2507.12889v1.pdf
| null |
[
"Mengke Song",
"Yuge Xie",
"Qi Cui",
"Luming Li",
"Xinyu Liu",
"Guotao Wang",
"Chenglizhao Chen",
"Shanchen Pang"
] |
[
"Emotion Recognition"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/generative-multi-target-cross-domain
|
2507.12871
| null | null |
Generative Multi-Target Cross-Domain Recommendation
|
Recently, there has been a surge of interest in Multi-Target Cross-Domain Recommendation (MTCDR), which aims to enhance recommendation performance across multiple domains simultaneously. Existing MTCDR methods primarily rely on domain-shared entities (e.g., users or items) to fuse and transfer cross-domain knowledge, which may be unavailable in non-overlapped recommendation scenarios. Some studies model user preferences and item features as domain-sharable semantic representations, which can be utilized to tackle the MTCDR task. Nevertheless, they often require extensive auxiliary data for pre-training. Developing more effective solutions for MTCDR remains an important area for further exploration. Inspired by recent advancements in generative recommendation, this paper introduces GMC, a generative paradigm-based approach for multi-target cross-domain recommendation. The core idea of GMC is to leverage semantically quantized discrete item identifiers as a medium for integrating multi-domain knowledge within a unified generative model. GMC first employs an item tokenizer to generate domain-shared semantic identifiers for each item, and then formulates item recommendation as a next-token generation task by training a domain-unified sequence-to-sequence model. To further leverage the domain information to enhance performance, we incorporate a domain-aware contrastive loss into the semantic identifier learning, and perform domain-specific fine-tuning on the unified recommender. Extensive experiments on five public datasets demonstrate the effectiveness of GMC compared to a range of baseline methods.
| null |
https://arxiv.org/abs/2507.12871v2
|
https://arxiv.org/pdf/2507.12871v2.pdf
| null |
[
"Jinqiu Jin",
"Yang Zhang",
"Junwei Pan",
"Fuli Feng",
"Hua Lu",
"Lei Xiao",
"Haijie Gu",
"Xiangnan He"
] |
[] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
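To make the GMC record above concrete, here is a minimal sketch of how residual quantization can turn item embeddings into short tuples of discrete codes ("semantic identifiers") that a sequence-to-sequence recommender could generate token by token. The residual k-means scheme, codebook sizes, and the `semantic_ids` helper are illustrative assumptions, not the paper's exact tokenizer.

```python
import numpy as np
from sklearn.cluster import KMeans

def semantic_ids(item_embs, levels=3, codebook_size=8, seed=0):
    """Residual quantization sketch: at each level, cluster the current
    residuals and keep the cluster index as one token of the item's ID."""
    residual = np.asarray(item_embs, dtype=float).copy()
    codes = []
    for _ in range(levels):
        km = KMeans(n_clusters=codebook_size, n_init=10, random_state=seed)
        ids = km.fit_predict(residual)
        codes.append(ids)
        residual = residual - km.cluster_centers_[ids]  # quantization error
    return np.stack(codes, axis=1)  # (n_items, levels)

embs = np.random.randn(100, 16)   # stand-in for learned item embeddings
print(semantic_ids(embs)[:5])     # e.g. [[3 0 5], [1 7 2], ...]
```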
https://paperswithcode.com/paper/sgcl-unifying-self-supervised-and-supervised
|
2507.13336
| null | null |
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation
|
Recommender systems (RecSys) are essential for online platforms, providing personalized suggestions to users within a vast sea of information. Self-supervised graph learning seeks to harness high-order collaborative filtering signals through unsupervised augmentation on the user-item bipartite graph, primarily leveraging a multi-task learning framework that includes both supervised recommendation loss and self-supervised contrastive loss. However, this separate design introduces additional graph convolution processes and creates inconsistencies in gradient directions due to disparate losses, resulting in prolonged training times and sub-optimal performance. In this study, we introduce a unified framework of Supervised Graph Contrastive Learning for recommendation (SGCL) to address these issues. SGCL uniquely combines the training of recommendation and unsupervised contrastive losses into a cohesive supervised contrastive learning loss, aligning both tasks within a single optimization direction for exceptionally fast training. Extensive experiments on three real-world datasets show that SGCL outperforms state-of-the-art methods, achieving superior accuracy and efficiency.
| null |
https://arxiv.org/abs/2507.13336v1
|
https://arxiv.org/pdf/2507.13336v1.pdf
| null |
[
"Weizhi Zhang",
"Liangwei Yang",
"Zihe Song",
"Henrry Peng Zou",
"Ke Xu",
"Yuanjie Zhu",
"Philip S. Yu"
] |
[
"Collaborative Filtering",
"Contrastive Learning",
"Graph Learning",
"Multi-Task Learning",
"Recommendation Systems"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
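A minimal sketch of the unification SGCL describes: the supervised recommendation signal and the contrastive objective collapse into a single InfoNCE-style loss in which each user's interacted item is the positive and the rest of the batch serves as negatives. The temperature, normalization, and `supervised_contrastive_loss` helper are assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(user_emb, item_emb, temperature=0.2):
    """Each user's observed item (the matrix diagonal) is the positive;
    all other in-batch items act as negatives, so one loss drives both
    recommendation accuracy and representation contrast."""
    u = F.normalize(user_emb, dim=-1)              # (B, d)
    v = F.normalize(item_emb, dim=-1)              # (B, d)
    logits = u @ v.t() / temperature               # (B, B) similarities
    labels = torch.arange(u.size(0), device=u.device)
    return F.cross_entropy(logits, labels)

user_emb = torch.randn(32, 64)   # stand-ins for graph-convolved embeddings
item_emb = torch.randn(32, 64)
print(supervised_contrastive_loss(user_emb, item_emb))
```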
https://paperswithcode.com/paper/the-imitation-game-turing-machine-imitator-is
|
2507.13332
| null | null |
The Imitation Game: Turing Machine Imitator is Length Generalizable Reasoner
|
Length generalization, the ability to solve problems of longer sequences than those observed during training, poses a core challenge for Transformer-based large language models (LLMs). Although existing studies have predominantly focused on data-driven approaches for arithmetic operations and symbolic manipulation tasks, these approaches tend to be task-specific with limited overall performance. To pursue a more general solution, this paper focuses on a broader case of reasoning problems that are computable, i.e., problems that algorithms can solve and thus can be solved by a Turing machine. From this perspective, this paper proposes Turing MAchine Imitation Learning (TAIL) to improve the length generalization ability of LLMs. TAIL uses computer programs to synthesize chain-of-thought (CoT) data that imitates the execution process of a Turing machine: it linearly expands the reasoning steps into atomic states to alleviate shortcut learning, and adopts an explicit memory-fetch mechanism to reduce the difficulty of dynamic and long-range data access in elementary operations. To validate the reliability and universality of TAIL, we construct a challenging synthetic dataset covering 8 classes of algorithms and 18 tasks. Without bells and whistles, TAIL significantly improves both the length generalization ability and the performance of Qwen2.5-7B on various tasks using only synthetic data, surpassing previous methods and DeepSeek-R1. The experimental results reveal that the key concepts of the Turing machine, rather than particular thinking styles, are indispensable for TAIL's length generalization; through these, the model exhibits read-and-write behaviors in its attention layers consistent with the properties of a Turing machine. This work provides a promising direction for future research on learning LLM reasoning from synthetic data.
| null |
https://arxiv.org/abs/2507.13332v1
|
https://arxiv.org/pdf/2507.13332v1.pdf
| null |
[
"Zhouqi Hua",
"Wenwei Zhang",
"Chengqi Lyu",
"Yuzhe Gu",
"Songyang Gao",
"Kuikun Liu",
"Kai Chen"
] |
[
"Imitation Learning"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
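The core data-synthesis idea in TAIL, expanding a computation into a linear sequence of atomic states, can be illustrated with a toy Turing machine simulator that emits one trace line per configuration. The trace format and the binary-increment rule table below are illustrative assumptions, not the paper's CoT template.

```python
def tm_trace(tape, rules, state="q0", head=0, max_steps=100):
    """Run a Turing machine and emit one line per atomic state
    (state, head position, tape): a TAIL-style linear CoT trace."""
    trace, tape = [], list(tape)
    for _ in range(max_steps):
        trace.append(f"state={state} head={head} tape={''.join(tape)}")
        if state == "halt":
            break
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
        if head < 0:
            tape.insert(0, "_"); head = 0
        elif head >= len(tape):
            tape.append("_")
    return trace

# Binary increment: scan right to the blank, then carry leftwards.
rules = {
    ("q0", "0"): ("0", "R", "q0"),
    ("q0", "1"): ("1", "R", "q0"),
    ("q0", "_"): ("_", "L", "q1"),
    ("q1", "1"): ("0", "L", "q1"),   # propagate the carry
    ("q1", "0"): ("1", "R", "halt"),
    ("q1", "_"): ("1", "R", "halt"),
}
for line in tm_trace("1011_", rules):   # 1011 (11) + 1 -> 1100 (12)
    print(line)
```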
https://paperswithcode.com/paper/georeg-weight-constrained-few-shot-regression
|
2507.13323
| null | null |
GeoReg: Weight-Constrained Few-Shot Regression for Socio-Economic Estimation using LLM
|
Socio-economic indicators, such as regional GDP, population, and education levels, are crucial to shaping policy decisions and fostering sustainable development. This research introduces GeoReg, a regression model that integrates diverse data sources, including satellite imagery and web-based geospatial information, to estimate these indicators even for data-scarce regions such as developing countries. Our approach leverages the prior knowledge of a large language model (LLM) to address the scarcity of labeled data, with the LLM functioning as a data engineer by extracting informative features to enable effective estimation in few-shot settings. Specifically, our model obtains contextual relationships between data features and the target indicator, categorizing their correlations as positive, negative, mixed, or irrelevant. These features are then fed into a linear estimator with tailored weight constraints for each category. To capture nonlinear patterns, the model also identifies meaningful feature interactions and integrates them, along with nonlinear transformations. Experiments across three countries at different stages of development demonstrate that our model outperforms baselines in estimating socio-economic indicators, even for low-income countries with limited data availability.
| null |
https://arxiv.org/abs/2507.13323v1
|
https://arxiv.org/pdf/2507.13323v1.pdf
| null |
[
"Kyeongjin Ahn",
"Sungwon Han",
"Seungeon Lee",
"Donghyun Ahn",
"Hyoshin Kim",
"Jungwon Kim",
"Jihee Kim",
"Sangyoon Park",
"Meeyoung Cha"
] |
[
"Large Language Model"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
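The category-wise weight constraints GeoReg describes can be sketched as a bounded least-squares fit, with each coefficient's sign fixed by the LLM-assigned correlation label. The label codes and the `sign_constrained_fit` helper are hypothetical; only the positive/negative/mixed/irrelevant categorization comes from the abstract.

```python
import numpy as np
from scipy.optimize import lsq_linear

def sign_constrained_fit(X, y, signs):
    """Constrain each coefficient by its correlation label:
    '+' => beta >= 0, '-' => beta <= 0, '?' (mixed) => unconstrained;
    features labeled '0' (irrelevant) are dropped before fitting."""
    keep = [j for j, s in enumerate(signs) if s != "0"]
    lo = np.array([0.0 if signs[j] == "+" else -np.inf for j in keep])
    hi = np.array([0.0 if signs[j] == "-" else np.inf for j in keep])
    res = lsq_linear(X[:, keep], y, bounds=(lo, hi))
    beta = np.zeros(X.shape[1])
    beta[keep] = res.x
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=100)
print(np.round(sign_constrained_fit(X, y, ["+", "-", "0", "?"]), 2))
```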
https://paperswithcode.com/paper/boosting-team-modeling-through-tempo
|
2507.13305
| null | null |
Boosting Team Modeling through Tempo-Relational Representation Learning
|
Team modeling remains a fundamental challenge at the intersection of Artificial Intelligence and the Social Sciences. Social Science research emphasizes the need to jointly model dynamics and relations, while practical applications demand unified models capable of inferring multiple team constructs simultaneously, providing interpretable insights and actionable recommendations to enhance team performance. However, existing works do not meet these practical demands. To bridge this gap, we present TRENN, a novel tempo-relational architecture that integrates: (i) an automatic temporal graph extractor, (ii) a tempo-relational encoder, (iii) a decoder for team construct prediction, and (iv) two complementary explainability modules. TRENN jointly captures relational and temporal team dynamics, providing a solid foundation for MT-TRENN, which extends TRENN by replacing the decoder with a multi-task head, enabling the model to learn shared Social Embeddings and simultaneously predict multiple team constructs, including Emergent Leadership, Leadership Style, and Teamwork components. Experimental results demonstrate that our approach significantly outperforms models that rely exclusively on temporal or relational information. Additionally, the experimental evaluation shows that the explainability modules integrated in MT-TRENN yield interpretable insights and actionable suggestions to support team improvement. These capabilities make our approach particularly well-suited for Human-Centered AI applications, such as intelligent decision-support systems in high-stakes collaborative environments.
| null |
https://arxiv.org/abs/2507.13305v1
|
https://arxiv.org/pdf/2507.13305v1.pdf
| null |
[
"Vincenzo Marco De Luca",
"Giovanna Varni",
"Andrea Passerini"
] |
[
"Decoder",
"Representation Learning"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/mnarx-a-surrogate-model-for-complex-dynamical
|
2507.13301
| null | null |
mNARX+: A surrogate model for complex dynamical systems using manifold-NARX and automatic feature selection
|
We propose an automatic approach for manifold nonlinear autoregressive with exogenous inputs (mNARX) modeling that leverages the feature-based structure of functional-NARX (F-NARX) modeling. This novel approach, termed mNARX+, preserves the key strength of the mNARX framework, which is its expressivity allowing it to model complex dynamical systems, while simultaneously addressing a key limitation: the heavy reliance on domain expertise to identify relevant auxiliary quantities and their causal ordering. Our method employs a data-driven, recursive algorithm that automates the construction of the mNARX model sequence. It operates by sequentially selecting temporal features based on their correlation with the model prediction residuals, thereby automatically identifying the most critical auxiliary quantities and the order in which they should be modeled. This procedure significantly reduces the need for prior system knowledge. We demonstrate the effectiveness of the mNARX+ algorithm on two case studies: a Bouc-Wen oscillator with strong hysteresis and a complex aero-servo-elastic wind turbine simulator. The results show that the algorithm provides a systematic, data-driven method for creating accurate and stable surrogate models for complex dynamical systems.
| null |
https://arxiv.org/abs/2507.13301v1
|
https://arxiv.org/pdf/2507.13301v1.pdf
| null |
[
"S. Schär",
"S. Marelli",
"B. Sudret"
] |
[
"feature selection"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
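The recursive selection step of mNARX+, picking the feature most correlated with the current prediction residual, can be sketched as follows. The greedy least-squares refit and the `n_select` budget are simplifying assumptions; the actual framework operates on NARX-style temporal features of auxiliary quantities.

```python
import numpy as np

def select_by_residual_correlation(X, y, n_select=3):
    """Greedy sketch: repeatedly pick the candidate feature most
    correlated with the residual of the current model, then refit."""
    n, p = X.shape
    selected, residual = [], y.copy()
    for _ in range(n_select):
        corrs = [abs(np.corrcoef(X[:, j], residual)[0, 1])
                 if j not in selected else -1.0 for j in range(p)]
        selected.append(int(np.argmax(corrs)))
        Xs = X[:, selected]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        residual = y - Xs @ beta          # what the model still misses
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + 0.1 * rng.normal(size=200)
print(select_by_residual_correlation(X, y))   # expect [3, 7, ...]
```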
https://paperswithcode.com/paper/optimal-empirical-risk-minimization-under
|
2507.13287
| null | null |
Optimal Empirical Risk Minimization under Temporal Distribution Shifts
|
Temporal distribution shifts pose a key challenge for machine learning models trained and deployed in dynamically evolving environments. This paper introduces RIDER (RIsk minimization under Dynamically Evolving Regimes) which derives optimally-weighted empirical risk minimization procedures under temporal distribution shifts. Our approach is theoretically grounded in the random distribution shift model, where random shifts arise as a superposition of numerous unpredictable changes in the data-generating process. We show that common weighting schemes, such as pooling all data, exponentially weighting data, and using only the most recent data, emerge naturally as special cases in our framework. We demonstrate that RIDER consistently improves out-of-sample predictive performance when applied as a fine-tuning step on the Yearbook dataset, across a range of benchmark methods in Wild-Time. Moreover, we show that RIDER outperforms standard weighting strategies in two other real-world tasks: predicting stock market volatility and forecasting ride durations in NYC taxi data.
| null |
https://arxiv.org/abs/2507.13287v1
|
https://arxiv.org/pdf/2507.13287v1.pdf
| null |
[
"Yujin Jeong",
"Ramesh Johari",
"Dominik Rothenhäusler",
"Emily Fox"
] |
[] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
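RIDER's observation that pooling all data, exponential weighting, and most-recent-only schemes are special cases of weighted empirical risk minimization is easy to see in a weighted least-squares sketch; only the weight vector changes. The toy drifting-coefficient data below is illustrative, not one of the paper's benchmarks.

```python
import numpy as np

def weighted_erm_fit(X, y, weights):
    """Weighted least squares: minimize sum_t w_t * (y_t - x_t' beta)^2."""
    w = np.sqrt(weights)
    beta, *_ = np.linalg.lstsq(w[:, None] * X, w * y, rcond=None)
    return beta

T = 500
rng = np.random.default_rng(3)
X = rng.normal(size=(T, 3))
# Coefficients drift over time, so recent observations are more informative.
betas = np.cumsum(0.05 * rng.normal(size=(T, 3)), axis=0) + 1.0
y = np.einsum("td,td->t", X, betas) + 0.1 * rng.normal(size=T)

w_pool = np.ones(T)                                  # pool all data
w_exp = 0.5 ** (np.arange(T)[::-1] / 100.0)          # exponential decay
print("pooled :", np.round(weighted_erm_fit(X, y, w_pool), 2))
print("decayed:", np.round(weighted_erm_fit(X, y, w_exp), 2))
```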
https://paperswithcode.com/paper/questa-expanding-reasoning-capacity-in-llms
|
2507.13266
| null | null |
QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation
|
Reinforcement learning (RL) has become a key component in training large language reasoning models (LLMs). However, recent studies question its effectiveness in improving multi-step reasoning, particularly on hard problems. To address this challenge, we propose a simple yet effective strategy, question augmentation: introduce partial solutions during training to reduce problem difficulty and provide more informative learning signals. Our method, QuestA, when applied during RL training on math reasoning tasks, improves not only pass@1 but also pass@k, particularly on problems where standard RL struggles to make progress. This enables continual improvement over strong open-source models such as DeepScaleR and OpenMath Nemotron, further enhancing their reasoning capabilities. We achieve new state-of-the-art results on math benchmarks using 1.5B-parameter models: 67.1% (+5.3%) on AIME24, 59.5% (+10.0%) on AIME25, and 35.5% (+4.0%) on HMMT25. Further, we provide theoretical explanations that QuestA improves sample efficiency, offering a practical and generalizable pathway for expanding reasoning capability through RL.
| null |
https://arxiv.org/abs/2507.13266v1
|
https://arxiv.org/pdf/2507.13266v1.pdf
| null |
[
"Jiazheng Li",
"Hong Lu",
"Kaiyue Wen",
"Zaiwen Yang",
"Jiaxuan Gao",
"Hongzhou Lin",
"Yi Wu",
"Jingzhao Zhang"
] |
[
"Math",
"Reinforcement Learning (RL)"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
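The question-augmentation step in QuestA reduces to prepending part of a known solution to a hard problem before RL training. A minimal sketch, where the prompt template and the `hint_fraction` knob are assumptions rather than the paper's exact recipe:

```python
def augment_question(problem: str, full_solution: str,
                     hint_fraction: float = 0.5) -> str:
    """Prepend the first part of a reference solution as a hint, so the
    policy sees an easier problem and receives denser reward signal."""
    steps = [s for s in full_solution.split("\n") if s.strip()]
    k = int(len(steps) * hint_fraction)
    hint = "\n".join(steps[:k])
    return (f"{problem}\n\n"
            f"Partial solution (continue from here):\n{hint}\n")

problem = "Find the sum of all positive divisors of 36."
solution = "36 = 2^2 * 3^2\nsigma(36) = (1+2+4)(1+3+9)\n= 7 * 13 = 91"
print(augment_question(problem, solution, hint_fraction=0.67))
```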
https://paperswithcode.com/paper/transient-stability-aware-frequency-provision
|
2507.13265
| null | null |
Transient-Stability-Aware Frequency Provision in IBR-Rich Grids via Information Gap Decision Theory and Deep Learning
|
This paper introduces a framework to address the critical loss of transient stability caused by reduced inertia in grids with high inverter-based resource (IBR) penetration. The proposed method integrates a predictive deep learning (DL) model with information gap decision theory (IGDT) to create a risk-averse dispatch strategy. By reformulating the conventional virtual inertia scheduling (VIS) problem, the framework uses early predictions of post-fault dynamics to proactively redispatch resources, ensuring the system's center of inertia remains stable under worst-case contingencies. Validated on the IEEE 39-bus system with 70% IBR penetration, the proposed approach prevents system collapse where a conventional VIS strategy fails, ensuring frequency stability at a cost increase of only 5%.
| null |
https://arxiv.org/abs/2507.13265v1
|
https://arxiv.org/pdf/2507.13265v1.pdf
| null |
[
"Amin Masoumi",
"Mert Korkali"
] |
[
"Scheduling"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/on-accelerated-mixing-of-the-no-u-turn
|
2507.13259
| null | null |
On Accelerated Mixing of the No-U-turn Sampler
|
Recent progress on the theory of variational hypocoercivity established that Randomized Hamiltonian Monte Carlo, at criticality, can achieve pronounced acceleration in its convergence, and hence sampling performance, over diffusive dynamics. Because manual critical tuning is infeasible in practice, automated algorithmic solutions have been developed, notably the No-U-turn Sampler. Beyond its empirical success, a rigorous study of this method's ability to achieve accelerated convergence has been missing. We initiate this investigation by combining a concentration-of-measure approach to examine the automatic tuning mechanism with a coupling-based mixing analysis for Hamiltonian Monte Carlo. For certain Gaussian target distributions, this yields a precise characterization of the sampler's behavior, resulting, in particular, in rigorous mixing guarantees describing the algorithm's ability and limitations in achieving accelerated convergence.
| null |
https://arxiv.org/abs/2507.13259v1
|
https://arxiv.org/pdf/2507.13259v1.pdf
| null |
[
"Stefan Oberdörster"
] |
[] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/leveraging-asynchronous-cross-border-market
|
2507.13250
| null | null |
Leveraging Asynchronous Cross-border Market Data for Improved Day-Ahead Electricity Price Forecasting in European Markets
|
Accurate short-term electricity price forecasting is crucial for strategically scheduling demand and generation bids in day-ahead markets. While data-driven techniques have shown considerable prowess in achieving high forecast accuracy in recent years, they rely heavily on the quality of input covariates. In this paper, we investigate whether asynchronously published prices, resulting from differing gate closure times (GCTs) in some bidding zones, can improve forecasting accuracy in other markets with later GCTs. Using a state-of-the-art ensemble of models, we show significant improvements of 22% and 9% in forecast accuracy in the Belgian (BE) and Swedish (SE3) bidding zones, respectively, when including price data from interconnected markets with earlier GCTs (Germany-Luxembourg, Austria, and Switzerland). This improvement holds under both general and extreme market conditions. Our analysis also yields further important insights: frequent model recalibration is necessary for maximum accuracy but comes at substantial additional computational cost, and using data from more markets does not always lead to better performance, a fact we delve deeper into with an interpretability analysis of the forecast models. Overall, these findings provide valuable guidance for market participants and decision-makers aiming to optimize bidding strategies within increasingly interconnected and volatile European energy markets.
| null |
https://arxiv.org/abs/2507.13250v1
|
https://arxiv.org/pdf/2507.13250v1.pdf
| null |
[
"Maria Margarida Mascarenhas",
"Jilles De Blauwe",
"Mikael Amelin",
"Hussain Kazmi"
] |
[
"Scheduling"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
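The paper's central idea, using prices already published in earlier-GCT zones as covariates for a later-GCT market, can be illustrated with a toy regression. The synthetic data and linear model below stand in for the authors' ensemble; only the feature-augmentation pattern is from the abstract.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Toy: BE prices partly driven by the DE-LU price, which is already
# published (earlier gate closure) when the BE forecast must be made.
rng = np.random.default_rng(7)
n = 1000
de_lu = rng.normal(60, 15, n)                        # earlier-GCT market price
load = rng.normal(10, 2, n)                          # local covariate
be = 0.7 * de_lu + 3.0 * load + rng.normal(0, 5, n)  # target market price

X_base = load.reshape(-1, 1)
X_aug = np.column_stack([load, de_lu])               # add cross-border price
tr, te = slice(0, 800), slice(800, None)

for name, X in [("local only   ", X_base), ("+ DE-LU price", X_aug)]:
    pred = LinearRegression().fit(X[tr], be[tr]).predict(X[te])
    print(name, "MAE =", round(mean_absolute_error(be[te], pred), 2))
```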
https://paperswithcode.com/paper/fedga-a-fair-federated-learning-framework
|
2507.12983
| null | null |
FedGA: A Fair Federated Learning Framework Based on the Gini Coefficient
|
Fairness has emerged as one of the key challenges in federated learning. In horizontal federated settings, data heterogeneity often leads to substantial performance disparities across clients, raising concerns about equitable model behavior. To address this issue, we propose FedGA, a fairness-aware federated learning algorithm. We first employ the Gini coefficient to measure the performance disparity among clients. Based on this, we establish a relationship between the Gini coefficient $G$ and the update scale of the global model ${U_s}$, and use this relationship to adaptively determine the timing of fairness intervention. Subsequently, we dynamically adjust the aggregation weights according to the system's real-time fairness status, enabling the global model to better incorporate information from clients with relatively poor performance. We conduct extensive experiments on the Office-Caltech-10, CIFAR-10, and Synthetic datasets. The results show that FedGA effectively improves fairness metrics such as variance and the Gini coefficient, while maintaining strong overall performance, demonstrating the effectiveness of our approach.
| null |
https://arxiv.org/abs/2507.12983v1
|
https://arxiv.org/pdf/2507.12983v1.pdf
| null |
[
"ShanBin Liu"
] |
[
"Fairness",
"Federated Learning"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
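The two ingredients FedGA combines, a Gini coefficient over client performance and fairness-adjusted aggregation weights, can be sketched directly. The Gini formula below is the standard one; the exponential reweighting rule is an assumption for illustration, not the paper's exact scheme.

```python
import numpy as np

def gini(values):
    """Gini coefficient of client scores (0 = perfectly equal)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = len(v)
    return 2 * np.sum(np.arange(1, n + 1) * v) / (n * v.sum()) - (n + 1) / n

def fairness_aware_weights(accuracies, strength=1.0):
    """Sketch: upweight poorly performing clients in the aggregation,
    scaled by the current Gini coefficient (assumed rule)."""
    acc = np.asarray(accuracies, dtype=float)
    g = gini(acc)
    spread = (acc - acc.min()) / (acc.max() - acc.min() + 1e-12)
    w = np.exp(-strength * g * spread)   # worse clients get larger weight
    return w / w.sum(), g

weights, g = fairness_aware_weights([0.91, 0.88, 0.62, 0.95, 0.70])
print(f"Gini={g:.3f}, weights={np.round(weights, 3)}")
```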
https://paperswithcode.com/paper/a-distributed-generative-ai-approach-for
|
2507.12979
| null | null |
A Distributed Generative AI Approach for Heterogeneous Multi-Domain Environments under Data Sharing constraints
|
Federated Learning has gained increasing attention for its ability to enable multiple nodes to collaboratively train machine learning models without sharing their raw data. At the same time, Generative AI -- particularly Generative Adversarial Networks (GANs) -- has achieved remarkable success across a wide range of domains, such as healthcare, security, and image generation. However, training generative models typically requires large datasets and significant computational resources, which are often unavailable in real-world settings. Acquiring such resources can be costly and inefficient, especially when many underutilized devices -- such as IoT devices and edge devices -- with varying capabilities remain idle. Moreover, obtaining large datasets is challenging due to privacy concerns and copyright restrictions, as most devices are unwilling to share their data. To address these challenges, we propose a novel approach for decentralized GAN training that enables the utilization of distributed data and underutilized, low-capability devices while not sharing data in its raw form. Our approach is designed to tackle key challenges in decentralized environments, combining KLD-weighted Clustered Federated Learning to address the issues of data heterogeneity and multi-domain datasets, with Heterogeneous U-Shaped split learning to tackle the challenge of device heterogeneity under strict data sharing constraints -- ensuring that no labels or raw data, whether real or synthetic, are ever shared between nodes. Experimental results show that our approach achieves consistent and significant improvements across key performance metrics: 1.1x -- 2.2x higher image generation scores and an average 10% boost in classification metrics (up to 50% in multi-domain non-IID settings), at much lower latency than several benchmarks. Find our code at https://github.com/youssefga28/HuSCF-GAN.
| null |
https://arxiv.org/abs/2507.12979v1
|
https://arxiv.org/pdf/2507.12979v1.pdf
| null |
[
"Youssef Tawfilis",
"Hossam Amer",
"Minar El-Aasser",
"Tallal Elshabrawy"
] |
[
"Federated Learning",
"Image Generation"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/federated-learning-for-commercial-image
|
2507.12903
| null | null |
Federated Learning for Commercial Image Sources
|
Federated Learning is a collaborative machine learning paradigm that enables multiple clients to learn a global model without exposing their data to each other. Consequently, it provides a secure learning platform with privacy-preserving capabilities. This paper introduces a new dataset containing 23,326 images collected from eight different commercial sources and classified into 31 categories, similar to the Office-31 dataset. To the best of our knowledge, this is the first image classification dataset specifically designed for Federated Learning. We also propose two new Federated Learning algorithms, namely Fed-Cyclic and Fed-Star. In Fed-Cyclic, a client receives weights from its previous client, updates them through local training, and passes them to the next client, thus forming a cyclic topology. In Fed-Star, a client receives weights from all other clients, updates its local weights through pre-aggregation (to address statistical heterogeneity) and local training, and sends its updated local weights to all other clients, thus forming a star-like topology. Our experiments reveal that both algorithms perform better than existing baselines on our newly introduced dataset.
| null |
https://arxiv.org/abs/2507.12903v1
|
https://arxiv.org/pdf/2507.12903v1.pdf
| null |
[
"Shreyansh Jain",
"Koteswar Rao Jerripothula"
] |
[
"Federated Learning",
"image-classification",
"Image Classification",
"Privacy Preserving"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
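Fed-Cyclic's ring topology is simple to sketch: weights hop from client to client, each performing local training before passing them on. The `ToyClient` below, with a scalar "weight" nudged toward a local data mean, is a stand-in for real local SGD; it is not the paper's implementation.

```python
import copy

def fed_cyclic(clients, global_weights, rounds=3):
    """Fed-Cyclic sketch: weights travel around a ring of clients;
    each client trains locally and hands the result to the next one."""
    w = copy.deepcopy(global_weights)
    for _ in range(rounds):
        for client in clients:      # cyclic topology
            w = client.train(w)
    return w

class ToyClient:
    """Toy client: 'training' moves the weight toward the local mean."""
    def __init__(self, data_mean):
        self.mu = data_mean
    def train(self, w, lr=0.5):
        return w + lr * (self.mu - w)

clients = [ToyClient(m) for m in (0.2, 0.8, 0.5)]
print(fed_cyclic(clients, global_weights=0.0))
```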
https://paperswithcode.com/paper/federated-learning-in-open-and-closed-loop
|
2507.12652
| null | null |
Federated Learning in Open- and Closed-Loop EMG Decoding: A Privacy and Performance Perspective
|
Invasive and non-invasive neural interfaces hold promise as high-bandwidth input devices for next-generation technologies. However, neural signals inherently encode sensitive information about an individual's identity and health, making data sharing for decoder training a critical privacy challenge. Federated learning (FL), a distributed, privacy-preserving learning framework, presents a promising solution, but it remains unexplored in closed-loop adaptive neural interfaces. Here, we introduce FL-based neural decoding and systematically evaluate its performance and privacy using high-dimensional electromyography signals in both open- and closed-loop scenarios. In open-loop simulations, FL significantly outperformed local learning baselines, demonstrating its potential for high-performance, privacy-conscious neural decoding. In contrast, closed-loop user studies required adapting FL methods to accommodate single-user, real-time interactions, a scenario not supported by standard FL. This modification resulted in local learning decoders surpassing the adapted FL approach in closed-loop performance, yet local learning still carried higher privacy risks. Our findings highlight a critical performance-privacy tradeoff in real-time adaptive applications and indicate the need for FL methods specifically designed for co-adaptive, single-user applications.
| null |
https://arxiv.org/abs/2507.12652v1
|
https://arxiv.org/pdf/2507.12652v1.pdf
| null |
[
"Kai Malcolm",
"César Uribe",
"Momona Yamagami"
] |
[
"Federated Learning",
"Privacy Preserving"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/third-party-credit-guarantees-and-the-cost-of
|
2507.12616
| null | null |
Third-Party Credit Guarantees and the Cost of Debt: Evidence from Corporate Loans
|
Using a comprehensive dataset collected by the Federal Reserve, I find that over one-third of corporate loans issued by US banks are fully guaranteed by legal entities separate from the borrowing firms. Using an empirical strategy that accounts for time-varying firm and lender effects, I find that the existence of a third-party credit guarantee is negatively related to loan risk, loan rate, and loan delinquency. Third-party credit guarantees alleviate the effect of collateral constraints in the credit market. Firms (particularly smaller firms) that experience a negative shock to their asset values are less likely to use collateral and more likely to use credit guarantees in new borrowings.
| null |
https://arxiv.org/abs/2507.12616v1
|
https://arxiv.org/pdf/2507.12616v1.pdf
| null |
[
"Mehdi Beyhaghi"
] |
[] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/safeguarding-federated-learning-based-road
|
2507.12568
| null | null |
Safeguarding Federated Learning-based Road Condition Classification
|
Federated Learning (FL) has emerged as a promising solution for privacy-preserving autonomous driving, specifically camera-based Road Condition Classification (RCC) systems, harnessing distributed sensing, computing, and communication resources on board vehicles without sharing sensitive image data. However, the collaborative nature of FL-RCC frameworks introduces new vulnerabilities: Targeted Label Flipping Attacks (TLFAs), in which malicious clients (vehicles) deliberately alter their training data labels to compromise the inference performance of the learned model. Such attacks can, for example, cause a vehicle to misclassify slippery, dangerous road conditions as pristine and exceed the recommended speed. However, investigations of TLFAs against FL-based RCC systems are largely missing. We address this challenge with a threefold contribution: 1) we disclose the vulnerability of existing FL-RCC systems to TLFAs; 2) we introduce a novel label-distance-based metric to precisely quantify the safety risks posed by TLFAs; and 3) we propose FLARE, a defensive mechanism leveraging neuron-wise analysis of the output layer to mitigate TLFA effects. Extensive experiments across three RCC tasks, four evaluation metrics, six baselines, and three deep learning models demonstrate both the severity of TLFAs on FL-RCC systems and the effectiveness of FLARE in mitigating the attack impact.
| null |
https://arxiv.org/abs/2507.12568v1
|
https://arxiv.org/pdf/2507.12568v1.pdf
| null |
[
"Sheng Liu",
"Panos Papadimitratos"
] |
[
"Autonomous Driving",
"Classification",
"Federated Learning",
"Privacy Preserving"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
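The label-distance metric the paper introduces can be sketched by placing road-condition classes on an ordered severity scale and weighting each error by how far apart the true and predicted labels sit. The class ordering and normalization below are assumptions, not the paper's definition.

```python
import numpy as np

def label_distance_risk(y_true, y_pred, severity):
    """Errors weighted by distance on an ordered severity scale, so
    'icy -> dry' counts far more than 'wet -> damp'; normalized to [0, 1]."""
    s = {c: i for i, c in enumerate(severity)}
    d = [abs(s[t] - s[p]) for t, p in zip(y_true, y_pred)]
    return np.mean(d) / (len(severity) - 1)

severity = ["dry", "damp", "wet", "snowy", "icy"]
y_true = ["icy", "wet", "dry", "snowy"]
y_pred = ["dry", "wet", "dry", "wet"]
print(label_distance_risk(y_true, y_pred, severity))  # (4+0+0+1)/4/4 = 0.3125
```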
https://paperswithcode.com/paper/site-level-fine-tuning-with-progressive-layer
|
2507.12269
| null | null |
Site-Level Fine-Tuning with Progressive Layer Freezing: Towards Robust Prediction of Bronchopulmonary Dysplasia from Day-1 Chest Radiographs in Extremely Preterm Infants
|
Bronchopulmonary dysplasia (BPD) is a chronic lung disease affecting 35% of extremely low birth weight infants. Defined by oxygen dependence at 36 weeks postmenstrual age, it causes lifelong respiratory complications. However, preventive interventions carry severe risks, including neurodevelopmental impairment, ventilator-induced lung injury, and systemic complications. Therefore, early BPD prognosis and prediction of BPD outcome are crucial to avoid unnecessary toxicity in low-risk infants. Admission radiographs of extremely preterm infants are routinely acquired within 24h of life and could serve as a non-invasive prognostic tool. In this work, we developed and investigated a deep learning approach using chest X-rays from 163 extremely low-birth-weight infants ($\leq$32 weeks gestation, 401-999g) obtained within 24 hours of birth. We fine-tuned a ResNet-50 pretrained specifically on adult chest radiographs, employing progressive layer freezing with discriminative learning rates to prevent overfitting, and evaluated CutMix augmentation and linear probing. For moderate/severe BPD outcome prediction, our best-performing model with progressive freezing, linear probing, and CutMix achieved an AUROC of 0.78 $\pm$ 0.10, balanced accuracy of 0.69 $\pm$ 0.10, and an F1-score of 0.67 $\pm$ 0.11. In-domain pre-training significantly outperformed ImageNet initialization (p = 0.031), confirming that domain-specific pretraining is important for BPD outcome prediction. Routine IRDS grades showed limited prognostic value (AUROC 0.57 $\pm$ 0.11), confirming the need for learned markers. Our approach demonstrates that domain-specific pretraining enables accurate BPD prediction from routine day-1 radiographs. Through progressive freezing and linear probing, the method remains computationally feasible for site-level implementation and future federated learning deployments.
| null |
https://arxiv.org/abs/2507.12269v2
|
https://arxiv.org/pdf/2507.12269v2.pdf
| null |
[
"Sybelle Goedicke-Fritz",
"Michelle Bous",
"Annika Engel",
"Matthias Flotho",
"Pascal Hirsch",
"Hannah Wittig",
"Dino Milanovic",
"Dominik Mohr",
"Mathias Kaspar",
"Sogand Nemat",
"Dorothea Kerner",
"Arno Bücker",
"Andreas Keller",
"Sascha Meyer",
"Michael Zemlin",
"Philipp Flotho"
] |
[
"Federated Learning",
"Prognosis"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
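Progressive layer freezing with discriminative learning rates, as used for the BPD model, can be sketched in PyTorch: freeze the backbone, then unfreeze stages from the head backwards with geometrically decreasing learning rates. The unfreezing order and the 0.1 decay factor are assumptions; in the paper, unfreezing also happens progressively over training rather than all at once as shown here.

```python
import torch
from torchvision.models import resnet50

model = resnet50(weights=None)   # in practice: chest-X-ray-pretrained weights
model.fc = torch.nn.Linear(model.fc.in_features, 1)  # binary BPD outcome

# Freeze everything, then unfreeze from the head backwards.
for p in model.parameters():
    p.requires_grad = False

stages = [model.fc, model.layer4, model.layer3]   # assumed unfreezing order
base_lr = 1e-3
param_groups = []
for depth, stage in enumerate(stages):
    for p in stage.parameters():
        p.requires_grad = True
    # Discriminative LRs: earlier (deeper-in-backbone) stages learn slower.
    param_groups.append({"params": stage.parameters(),
                         "lr": base_lr * 0.1 ** depth})

optimizer = torch.optim.AdamW(param_groups)
print([g["lr"] for g in optimizer.param_groups])  # [0.001, 0.0001, 1e-05]
```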
https://paperswithcode.com/paper/self-adaptive-and-robust-federated-spectrum
|
2507.12127
| null | null |
Self-Adaptive and Robust Federated Spectrum Sensing without Benign Majority for Cellular Networks
|
Advancements in wireless and mobile technologies, including 5G advanced and the envisioned 6G, are driving exponential growth in wireless devices. However, this rapid expansion exacerbates spectrum scarcity, posing a critical challenge. Dynamic spectrum allocation (DSA)--which relies on sensing and dynamically sharing spectrum--has emerged as an essential solution to address this issue. While machine learning (ML) models hold significant potential for improving spectrum sensing, their adoption in centralized ML-based DSA systems is limited by privacy concerns, bandwidth constraints, and regulatory challenges. To overcome these limitations, distributed ML-based approaches such as Federated Learning (FL) offer promising alternatives. This work addresses two key challenges in FL-based spectrum sensing (FLSS). First, the scarcity of labeled data for training FL models in practical spectrum sensing scenarios is tackled with a semi-supervised FL approach, combined with energy detection, enabling model training on unlabeled datasets. Second, we examine the security vulnerabilities of FLSS, focusing on the impact of data poisoning attacks. Our analysis highlights the shortcomings of existing majority-based defenses in countering such attacks. To address these vulnerabilities, we propose a novel defense mechanism inspired by vaccination, which effectively mitigates data poisoning attacks without relying on majority-based assumptions. Extensive experiments on both synthetic and real-world datasets validate our solutions, demonstrating that FLSS can achieve near-perfect accuracy on unlabeled datasets and maintain Byzantine robustness against both targeted and untargeted data poisoning attacks, even when a significant proportion of participants are malicious.
| null |
https://arxiv.org/abs/2507.12127v1
|
https://arxiv.org/pdf/2507.12127v1.pdf
| null |
[
"Ngoc Duy Pham",
"Thusitha Dayaratne",
"Viet Vo",
"Shangqi Lai",
"Sharif Abuadbba",
"Hajime Suzuki",
"Xingliang Yuan",
"Carsten Rudolph"
] |
[
"Data Poisoning",
"Federated Learning"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
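The energy-detection step the FLSS paper uses to pseudo-label unlabeled spectrum data can be sketched with the classic threshold test, calibrated to a target false-alarm rate under a large-sample Gaussian approximation. Real-valued noise and the calibration constant below are simplifying assumptions.

```python
import numpy as np
from scipy.stats import norm

def energy_detect(x, noise_power, pfa=0.05):
    """Declare the band occupied when mean sample energy exceeds a
    threshold set by the false-alarm target (large-N Gaussian approx.,
    real-valued noise assumed)."""
    N = len(x)
    energy = np.mean(x ** 2)
    tau = noise_power * (1.0 + norm.isf(pfa) * np.sqrt(2.0 / N))
    return energy > tau, energy, tau

N, snr = 1024, 0.5
t = np.arange(N)
x = np.random.randn(N) + np.sqrt(2 * snr) * np.cos(2 * np.pi * 0.1 * t)
occupied, e, thr = energy_detect(x, noise_power=1.0)
print(occupied, round(e, 3), round(thr, 3))
```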
https://paperswithcode.com/paper/a-privacy-preserving-framework-for
|
2507.12098
| null | null |
A Privacy-Preserving Framework for Advertising Personalization Incorporating Federated Learning and Differential Privacy
|
To mitigate privacy leakage and performance issues in personalized advertising, this paper proposes a framework that integrates federated learning and differential privacy. The system combines distributed feature extraction, dynamic privacy budget allocation, and robust model aggregation to balance model accuracy, communication overhead, and privacy protection. Multi-party secure computing and anomaly detection mechanisms further enhance system resilience against malicious attacks. Experimental results demonstrate that the framework achieves dual optimization of recommendation accuracy and system efficiency while ensuring privacy, providing both a practical solution and a theoretical foundation for applying privacy protection technologies in advertisement recommendation.
| null |
https://arxiv.org/abs/2507.12098v1
|
https://arxiv.org/pdf/2507.12098v1.pdf
| null |
[
"Xiang Li",
"Yifan Lin",
"Yuanzhe Zhang"
] |
[
"Anomaly Detection",
"Federated Learning",
"Privacy Preserving"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
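The differential-privacy side of such a framework typically reduces to clipping each client update to bound its sensitivity and adding calibrated Gaussian noise. A sketch using the standard Gaussian-mechanism noise scale; the paper's dynamic privacy budget allocation is not reproduced here.

```python
import numpy as np

def gaussian_mechanism(grad, clip_norm=1.0, epsilon=1.0, delta=1e-5):
    """DP-SGD-style update sketch: clip the client gradient, then add
    Gaussian noise with sigma from the classic (epsilon, delta) bound."""
    g = np.asarray(grad, dtype=float)
    g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))   # clipping
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return g + np.random.normal(0.0, sigma, size=g.shape)

print(gaussian_mechanism(np.array([0.8, -2.4, 0.3]), epsilon=2.0))
```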
https://paperswithcode.com/paper/sporadic-federated-learning-approach-in
|
2507.12492
| null | null |
Sporadic Federated Learning Approach in Quantum Environment to Tackle Quantum Noise
|
Quantum Federated Learning (QFL) is an emerging paradigm that combines quantum computing and federated learning (FL) to enable decentralized model training while maintaining data privacy over quantum networks. However, quantum noise remains a significant barrier in QFL, since modern quantum devices experience heterogeneous noise levels due to variances in hardware quality and sensitivity to quantum decoherence, resulting in inadequate training performance. To address this issue, we propose SpoQFL, a novel QFL framework that leverages sporadic learning to mitigate quantum noise heterogeneity in distributed quantum systems. SpoQFL dynamically adjusts training strategies based on noise fluctuations, enhancing model robustness, convergence stability, and overall learning efficiency. Extensive experiments on real-world datasets demonstrate that SpoQFL significantly outperforms conventional QFL approaches, achieving superior training performance and more stable convergence.
| null |
https://arxiv.org/abs/2507.12492v1
|
https://arxiv.org/pdf/2507.12492v1.pdf
| null |
[
"Ratun Rahman",
"Atit Pokharel",
"Dinh C. Nguyen"
] |
[
"Federated Learning"
] | 2025-07-15T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/zkp-fedeval-verifiable-and-privacy-preserving
|
2507.11649
| null | null |
ZKP-FedEval: Verifiable and Privacy-Preserving Federated Evaluation using Zero-Knowledge Proofs
|
Federated Learning (FL) enables collaborative model training on decentralized data without exposing raw data. However, the evaluation phase in FL may leak sensitive information through shared performance metrics. In this paper, we propose a novel protocol that incorporates Zero-Knowledge Proofs (ZKPs) to enable privacy-preserving and verifiable evaluation for FL. Instead of revealing raw loss values, clients generate a succinct proof asserting that their local loss is below a predefined threshold. Our approach is implemented without reliance on external APIs, using self-contained modules for federated learning simulation, ZKP circuit design, and experimental evaluation on both the MNIST and Human Activity Recognition (HAR) datasets. We focus on a threshold-based proof for a simple Convolutional Neural Network (CNN) model (for MNIST) and a multi-layer perceptron (MLP) model (for HAR), and evaluate the approach in terms of computational overhead, communication cost, and verifiability.
| null |
https://arxiv.org/abs/2507.11649v2
|
https://arxiv.org/pdf/2507.11649v2.pdf
| null |
[
"Daniel Commey",
"Benjamin Appiah",
"Griffith S. Klogo",
"Garth V. Crosby"
] |
[
"Activity Recognition",
"Federated Learning",
"Human Activity Recognition",
"Privacy Preserving"
] | 2025-07-15T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/r-2moe-redundancy-removal-mixture-of-experts
|
2507.13107
| null | null |
R^2MoE: Redundancy-Removal Mixture of Experts for Lifelong Concept Learning
|
Enabling large-scale generative models to continuously learn new visual concepts is essential for personalizing pre-trained models to meet individual user preferences. Existing approaches for continual visual concept learning are constrained by two fundamental challenges: catastrophic forgetting and parameter expansion. In this paper, we propose Redundancy-Removal Mixture of Experts (R^2MoE), a parameter-efficient framework for lifelong visual concept learning that effectively learns new concepts while incurring minimal parameter overhead. Our framework includes three key innovative contributions: First, we propose a mixture-of-experts framework with a routing distillation mechanism that enables experts to acquire concept-specific knowledge while preserving the gating network's routing capability, thereby effectively mitigating catastrophic forgetting. Second, we propose a strategy for eliminating redundant layer-wise experts that reduces the number of expert parameters by fully utilizing previously learned experts. Third, we employ a hierarchical local attention-guided inference approach to mitigate interference between generated visual concepts. Extensive experiments have demonstrated that our method generates images with superior conceptual fidelity compared to the state-of-the-art (SOTA) method, achieving an impressive 87.8% reduction in forgetting rates and 63.3% fewer parameters on the CustomConcept 101 dataset. Our code is available at https://github.com/learninginvision/R2MoE
| null |
https://arxiv.org/abs/2507.13107v1
|
https://arxiv.org/pdf/2507.13107v1.pdf
| null |
[
"Xiaohan Guo",
"Yusong Cai",
"Zejia Liu",
"Zhengning Wang",
"Lili Pan",
"Hongliang Li"
] |
[
"Mixture-of-Experts"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/weakly-supervised-visible-infrared-person-re
|
2507.12942
| null | null |
Weakly Supervised Visible-Infrared Person Re-Identification via Heterogeneous Expert Collaborative Consistency Learning
|
To reduce the reliance of visible-infrared person re-identification (ReID) models on labeled cross-modal samples, this paper explores a weakly supervised cross-modal person ReID method that uses only single-modal sample identity labels, addressing scenarios where cross-modal identity labels are unavailable. To mitigate the impact of missing cross-modal labels on model performance, we propose a heterogeneous expert collaborative consistency learning framework, designed to establish robust cross-modal identity correspondences in a weakly supervised manner. This framework leverages labeled data from each modality to independently train dedicated classification experts. To associate cross-modal samples, these classification experts act as heterogeneous predictors, predicting the identities of samples from the other modality. To improve prediction accuracy, we design a cross-modal relationship fusion mechanism that effectively integrates predictions from different experts. Under the implicit supervision provided by cross-modal identity correspondences, collaborative and consistent learning among the experts is encouraged, significantly enhancing the model's ability to extract modality-invariant features and improve cross-modal identity recognition. Experimental results on two challenging datasets validate the effectiveness of the proposed method.
| null |
https://arxiv.org/abs/2507.12942v1
|
https://arxiv.org/pdf/2507.12942v1.pdf
| null |
[
"Yafei Zhang",
"Lingqi Kong",
"Huafeng Li",
"Jie Wen"
] |
[
"Person Re-Identification"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/from-neck-to-head-bio-impedance-sensing-for
|
2507.12884
| null | null |
From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation
|
We present NeckSense, a novel wearable system for head pose tracking that leverages multi-channel bio-impedance sensing with soft, dry electrodes embedded in a lightweight, necklace-style form factor. NeckSense captures dynamic changes in tissue impedance around the neck, which are modulated by head rotations and subtle muscle activations. To robustly estimate head pose, we propose a deep learning framework that integrates anatomical priors, including joint constraints and natural head rotation ranges, into the loss function design. We validate NeckSense on 7 participants using the current SOTA pose estimation model as ground truth. Our system achieves a mean per-vertex error of 25.9 mm across various head movements with a leave-one-person-out cross-validation method, demonstrating that a compact, line-of-sight-free bio-impedance wearable can deliver head-tracking performance comparable to SOTA vision-based methods.
| null |
https://arxiv.org/abs/2507.12884v1
|
https://arxiv.org/pdf/2507.12884v1.pdf
| null |
[
"Mengxi Liu",
"Lala Shakti Swarup Ray",
"Sizhen Bian",
"Ko Watanabe",
"Ankur Bhatt",
"Joanna Sorysz",
"Russel Torah",
"Bo Zhou",
"Paul Lukowicz"
] |
[
"Head Pose Estimation",
"Pose Estimation",
"Pose Tracking"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/whofi-deep-person-re-identification-via-wi-fi
|
2507.12869
| null | null |
WhoFi: Deep Person Re-Identification via Wi-Fi Channel Signal Encoding
|
Person Re-Identification is a key and challenging task in video surveillance. While traditional methods rely on visual data, issues like poor lighting, occlusion, and suboptimal angles often hinder performance. To address these challenges, we introduce WhoFi, a novel pipeline that utilizes Wi-Fi signals for person re-identification. Biometric features are extracted from Channel State Information (CSI) and processed through a modular Deep Neural Network (DNN) featuring a Transformer-based encoder. The network is trained using an in-batch negative loss function to learn robust and generalizable biometric signatures. Experiments on the NTU-Fi dataset show that our approach achieves competitive results compared to state-of-the-art methods, confirming its effectiveness in identifying individuals via Wi-Fi signals.
| null |
https://arxiv.org/abs/2507.12869v1
|
https://arxiv.org/pdf/2507.12869v1.pdf
| null |
[
"Danilo Avola",
"Daniele Pannone",
"Dario Montagnini",
"Emad Emam"
] |
[
"Person Re-Identification"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/transformer-based-person-identification-via
|
2507.12854
| null | null |
Transformer-Based Person Identification via Wi-Fi CSI Amplitude and Phase Perturbations
|
Wi-Fi sensing is gaining momentum as a non-intrusive and privacy-preserving alternative to vision-based systems for human identification. However, person identification through wireless signals, particularly without user motion, remains largely unexplored. Most prior wireless-based approaches rely on movement patterns, such as walking gait, to extract biometric cues. In contrast, we propose a transformer-based method that identifies individuals from Channel State Information (CSI) recorded while the subject remains stationary. CSI captures fine-grained amplitude and phase distortions induced by the unique interaction between the human body and the radio signal. To support evaluation, we introduce a dataset acquired with ESP32 devices in a controlled indoor environment, featuring six participants observed across multiple orientations. A tailored preprocessing pipeline, including outlier removal, smoothing, and phase calibration, enhances signal quality. Our dual-branch transformer architecture processes amplitude and phase modalities separately and achieves 99.82% classification accuracy, outperforming convolutional and multilayer perceptron baselines. These results demonstrate the discriminative potential of CSI perturbations, highlighting their capacity to encode biometric traits in a consistent manner. They further confirm the viability of passive, device-free person identification using low-cost commodity Wi-Fi hardware in real-world settings.
| null |
https://arxiv.org/abs/2507.12854v1
|
https://arxiv.org/pdf/2507.12854v1.pdf
| null |
[
"Danilo Avola",
"Andrea Bernardini",
"Francesco Danese",
"Mario Lezoche",
"Maurizio Mancini",
"Daniele Pannone",
"Amedeo Ranaldi"
] |
[
"Person Identification",
"Privacy Preserving"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
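The dual-branch design, separate encoders for CSI amplitude and phase fused before the classifier, can be sketched in PyTorch. All dimensions (subcarriers, model width, depth, pooling) are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DualBranchCSI(nn.Module):
    """Separate Transformer encoders for CSI amplitude and phase
    sequences, fused by concatenation before a classification head."""
    def __init__(self, n_subcarriers=52, d_model=64, n_classes=6):
        super().__init__()
        self.amp_proj = nn.Linear(n_subcarriers, d_model)
        self.pha_proj = nn.Linear(n_subcarriers, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.amp_enc = nn.TransformerEncoder(layer, num_layers=2)
        self.pha_enc = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(2 * d_model, n_classes)

    def forward(self, amp, pha):   # each: (batch, time, n_subcarriers)
        a = self.amp_enc(self.amp_proj(amp)).mean(dim=1)   # temporal pooling
        p = self.pha_enc(self.pha_proj(pha)).mean(dim=1)
        return self.head(torch.cat([a, p], dim=-1))

model = DualBranchCSI()
amp, pha = torch.randn(8, 100, 52), torch.randn(8, 100, 52)
print(model(amp, pha).shape)   # torch.Size([8, 6])
```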
https://paperswithcode.com/paper/emotional-support-with-llm-based-empathetic
|
2507.12820
| null | null |
Emotional Support with LLM-based Empathetic Dialogue Generation
|
Emotional Support Conversation (ESC) aims to provide empathetic and effective emotional assistance through dialogue, addressing the growing demand for mental health support. This paper presents our solution for the NLPCC 2025 Task 8 ESC evaluation, where we leverage large-scale language models enhanced by prompt engineering and fine-tuning techniques. We explore both parameter-efficient Low-Rank Adaptation and full-parameter fine-tuning strategies to improve the model's ability to generate supportive and contextually appropriate responses. Our best model ranked second in the competition, highlighting the potential of combining LLMs with effective adaptation methods for ESC tasks. Future work will focus on further enhancing emotional understanding and response personalization to build more practical and reliable emotional support systems.
| null |
https://arxiv.org/abs/2507.12820v1
|
https://arxiv.org/pdf/2507.12820v1.pdf
| null |
[
"Shiquan Wang",
"Ruiyu Fang",
"Zhongjiang He",
"Shuangyong Song",
"Yongxiang Li"
] |
[
"Dialogue Generation",
"Prompt Engineering"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/sample-constrained-black-box-optimization-for
|
2507.12773
| null | null |
Sample-Constrained Black Box Optimization for Audio Personalization
|
We consider the problem of personalizing audio to maximize user experience. Briefly, we aim to find a filter $h^*$, which applied to any music or speech, will maximize the user's satisfaction. This is a black-box optimization problem since the user's satisfaction function is unknown. Substantive work has been done on this topic where the key idea is to play audio samples to the user, each shaped by a different filter $h_i$, and query the user for their satisfaction scores $f(h_i)$. A family of "surrogate" functions is then designed to fit these scores and the optimization method gradually refines these functions to arrive at the filter $\hat{h}^*$ that maximizes satisfaction. In certain applications, we observe that a second type of querying is possible where users can tell us the individual elements $h^*[j]$ of the optimal filter $h^*$. Consider an analogy from cooking where the goal is to cook a recipe that maximizes user satisfaction. A user can be asked to score various cooked recipes (e.g., tofu fried rice) or to score individual ingredients (say, salt, sugar, rice, chicken, etc.). Given a budget of $B$ queries, where a query can be of either type, our goal is to find the recipe that will maximize this user's satisfaction. Our proposal builds on Sparse Gaussian Process Regression (GPR) and shows how a hybrid approach can outperform any one type of querying. Our results are validated through simulations and real world experiments, where volunteers gave feedback on music/speech audio and were able to achieve high satisfaction levels. We believe this idea of hybrid querying opens new problems in black-box optimization and solutions can benefit other applications beyond audio personalization.
| null |
https://arxiv.org/abs/2507.12773v1
|
https://arxiv.org/pdf/2507.12773v1.pdf
| null |
[
"Rajalaxmi Rajagopalan",
"Yu-Lin Wei",
"Romit Roy Choudhury"
] |
[
"GPR"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/integrated-oculomics-and-lipidomics-reveal
|
2507.12663
| null | null |
Integrated Oculomics and Lipidomics Reveal Microvascular Metabolic Signatures Associated with Cardiovascular Health in a Healthy Cohort
|
Cardiovascular disease (CVD) remains the leading global cause of mortality, yet current risk stratification methods often fail to detect early, subclinical changes. Previous studies have generally not integrated retinal microvasculature characteristics with comprehensive serum lipidomic profiles as potential indicators of CVD risk. In this study, an innovative imaging omics framework was introduced, combining retinal microvascular traits derived through deep-learning-based image processing with serum lipidomic data to highlight asymptomatic biomarkers of cardiovascular risk beyond the conventional lipid panel. This represents the first large-scale, covariate-adjusted, stratified correlation analysis conducted in a healthy population, which is essential for identifying early indicators of disease. Retinal phenotypes were quantified using automated image analysis tools, while serum lipid profiling was performed by ultra-high-performance liquid chromatography electrospray-ionization high-resolution mass spectrometry (UHPLC-ESI-HRMS). Strong, age- and sex-independent correlations were established, particularly between average artery width, vessel density, and lipid subclasses such as triacylglycerols (TAGs), diacylglycerols (DAGs), and ceramides (Cers). These associations suggest a converging mechanism of microvascular remodeling under metabolic stress. By linking detailed vascular structural phenotypes to specific lipid species, this study fills a critical gap in the understanding of early CVD pathogenesis. This integration not only offers a novel perspective on microvascular metabolic associations but also presents a significant opportunity for the identification of robust, non-invasive biomarkers. Ultimately, these findings may support improved early detection, targeted prevention, and personalized approaches in cardiovascular healthcare.
| null |
https://arxiv.org/abs/2507.12663v1
|
https://arxiv.org/pdf/2507.12663v1.pdf
| null |
[
"Inamullah",
"Ernesto Elias Vidal Rosas",
"Imran Razzak",
"Shoaib Jameel"
] |
[] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/ranking-vectors-clustering-theory-and
|
2507.12583
| null | null |
Ranking Vectors Clustering: Theory and Applications
|
We study the problem of clustering ranking vectors, where each vector represents preferences as an ordered list of distinct integers. Specifically, we focus on the k-centroids ranking vectors clustering problem (KRC), which aims to partition a set of ranking vectors into k clusters and identify the centroid of each cluster. Unlike classical k-means clustering (KMC), KRC constrains both the observations and centroids to be ranking vectors. We establish the NP-hardness of KRC and characterize its feasible set. For the single-cluster case, we derive a closed-form analytical solution for the optimal centroid, which can be computed in linear time. To address the computational challenges of KRC, we develop an efficient approximation algorithm, KRCA, which iteratively refines initial solutions from KMC, referred to as the baseline solution. Additionally, we introduce a branch-and-bound (BnB) algorithm for efficient cluster reconstruction within KRCA, leveraging a decision tree framework to reduce computational time while incorporating a controlling parameter to balance solution quality and efficiency. We establish theoretical error bounds for KRCA and BnB. Through extensive numerical experiments on synthetic and real-world datasets, we demonstrate that KRCA consistently outperforms baseline solutions, delivering significant improvements in solution quality with fast computational times. This work highlights the practical significance of KRC for personalization and large-scale decision making, offering methodological advancements and insights that can be built upon in future studies.
| null |
https://arxiv.org/abs/2507.12583v1
|
https://arxiv.org/pdf/2507.12583v1.pdf
| null |
[
"Ali Fattahi",
"Ali Eshragh",
"Babak Aslani",
"Meysam Rabiee"
] |
[
"Clustering"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
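A minimal sketch of the single-cluster centroid result stated in the KRC abstract above, assuming a squared-Euclidean objective between ranking vectors: by the rearrangement inequality, the optimal centroid ranks the items by their coordinate-wise mean rank. The paper reports a linear-time closed form; the argsort below is O(n log n) and purely illustrative.

```python
import numpy as np

def ranking_centroid(R):
    """Closed-form centroid of a cluster of ranking vectors (rows of R)
    under a squared-distance objective: rank the mean ranks."""
    mean = R.mean(axis=0)                 # coordinate-wise mean rank
    order = np.argsort(mean)              # items sorted by mean rank
    centroid = np.empty_like(order)
    centroid[order] = np.arange(1, len(order) + 1)
    return centroid

R = np.array([[1, 2, 3, 4],
              [2, 1, 3, 4],
              [1, 3, 2, 4]])
print(ranking_centroid(R))  # -> [1 2 3 4]
```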
https://paperswithcode.com/paper/looking-for-fairness-in-recommender-systems
|
2507.12242
| null | null |
Looking for Fairness in Recommender Systems
|
Recommender systems can be found everywhere today, shaping our everyday experience whenever we're consuming content, ordering food, buying groceries online, or even just reading the news. Let's imagine we're in the process of building a recommender system to make content suggestions to users on social media. When thinking about fairness, it becomes clear there are several perspectives to consider: the users asking for tailored suggestions, the content creators hoping for some limelight, and society at large, navigating the repercussions of algorithmic recommendations. A shared fairness concern across all three is the emergence of filter bubbles, a side-effect that takes place when recommender systems are almost "too good", making recommendations so tailored that users become inadvertently confined to a narrow set of opinions/themes and isolated from alternative ideas. From the user's perspective, this is akin to manipulation. From the small content creator's perspective, this is an obstacle preventing them access to a whole range of potential fans. From society's perspective, the potential consequences are far-reaching, influencing collective opinions, social behavior and political decisions. How can our recommender system be fine-tuned to avoid the creation of filter bubbles, and ensure a more inclusive and diverse content landscape? Approaching this problem involves defining one (or more) performance metric to represent diversity, and tweaking our recommender system's performance through the lens of fairness. By incorporating this metric into our evaluation framework, we aim to strike a balance between personalized recommendations and the broader societal goal of fostering rich and varied cultures and points of view.
| null |
https://arxiv.org/abs/2507.12242v1
|
https://arxiv.org/pdf/2507.12242v1.pdf
| null |
[
"Cécile Logé"
] |
[
"Fairness",
"Recommendation Systems"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
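One common way to operationalize the diversity metric discussed in the fairness paper above is intra-list diversity: the average pairwise cosine distance within a recommended slate. This is a standard choice sketched here for illustration, not necessarily the metric the paper settles on.

```python
import numpy as np

def intra_list_diversity(item_embeddings):
    """Average pairwise cosine distance within a slate (n >= 2 items)."""
    X = item_embeddings / np.linalg.norm(item_embeddings, axis=1, keepdims=True)
    sim = X @ X.T
    iu = np.triu_indices(len(X), k=1)     # upper triangle, no diagonal
    return float(np.mean(1.0 - sim[iu]))
```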
https://paperswithcode.com/paper/novel-approach-to-dual-channel-estimation-in
|
2507.12221
| null | null |
Novel Approach to Dual-Channel Estimation in Integrated Sensing and Communications for 6G
|
Integrated Sensing and Communication (ISAC) design is crucial for 6G and harmonizes environmental data sensing with communication, emphasizing the need to understand and model these elements. This paper delves into dual-channel models for ISAC, employing channel extraction techniques to validate and enhance accuracy. Focusing on millimeter wave (mmWave) radars, it explores the extraction of the bistatic sensing channel from monostatic measurements and subsequent communication channel estimation. The proposed methods involve interference extraction, module and phase correlation analyses, chirp clustering, and auto-clutter reduction. A comprehensive set-up in an anechoic chamber with controlled scenarios evaluates the proposed techniques, demonstrating successful channel extraction and validation through Root Mean Square Delay Spread (RMS DS), Power Delay Profile (PDP), and Angle of Arrival (AoA) analysis. Comparison with Ray-Tracing (RT) simulations confirms the effectiveness of the proposed approach, presenting an innovative stride towards fully integrated sensing and communication in future networks.
| null |
https://arxiv.org/abs/2507.12221v1
|
https://arxiv.org/pdf/2507.12221v1.pdf
| null |
[
"Alejandro Castilla",
"Saúl Fenollosa",
"Monika Drozdowska",
"Alejandro Lopez-Escudero",
"Sergio Micò-Rosa",
"Narcis Cardona"
] |
[
"Integrated sensing and communication",
"ISAC"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
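The RMS delay spread used above to validate the extracted channels has a standard textbook definition; a minimal sketch from a power delay profile:

```python
import numpy as np

def rms_delay_spread(delays, powers):
    """RMS delay spread of a power delay profile."""
    p = powers / powers.sum()             # normalize the PDP
    mean_delay = np.sum(p * delays)       # mean excess delay
    return float(np.sqrt(np.sum(p * (delays - mean_delay) ** 2)))
```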
https://paperswithcode.com/paper/draw-an-ugly-person-an-exploration-of
|
2507.12212
| null | null |
Draw an Ugly Person: An Exploration of Generative AIs' Perceptions of Ugliness
|
Generative AI does not only replicate human creativity but also reproduces deep-seated cultural biases, making it crucial to critically examine how concepts like ugliness are understood and expressed by these tools. This study investigates how four different generative AI models understand and express ugliness through text and image and explores the biases embedded within these representations. We extracted 13 adjectives associated with ugliness through iterative prompting of a large language model and generated 624 images across four AI models and three prompts. Demographic and socioeconomic attributes within the images were independently coded and thematically analyzed. Our findings show that AI models disproportionately associate ugliness with old white male figures, reflecting entrenched social biases as well as paradoxical biases, where efforts to avoid stereotypical depictions of marginalized groups inadvertently result in the disproportionate projection of negative attributes onto majority groups. Qualitative analysis further reveals that, despite supposed attempts to frame ugliness within social contexts, conventional physical markers such as asymmetry and aging persist as central visual motifs. These findings demonstrate that despite attempts to create more equal representations, generative AI continues to perpetuate inherited and paradoxical biases, underscoring the critical work being done to create ethical AI training paradigms and advance methodologies for more inclusive AI development.
| null |
https://arxiv.org/abs/2507.12212v1
|
https://arxiv.org/pdf/2507.12212v1.pdf
| null |
[
"Garyoung Kim",
"Huisung Kwon",
"Seoju Yun",
"Yu-Won Youn"
] |
[
"Large Language Model"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/enhancing-cross-task-transfer-of-large
|
2507.13236
| null | null |
Enhancing Cross-task Transfer of Large Language Models via Activation Steering
|
Large language models (LLMs) have shown impressive abilities in leveraging pretrained knowledge through prompting, but they often struggle with unseen tasks, particularly in data-scarce scenarios. While cross-task in-context learning offers a direct solution for transferring knowledge across tasks, it still faces critical challenges in terms of robustness, scalability, and efficiency. In this paper, we investigate whether cross-task transfer can be achieved via latent space steering without parameter updates or input expansion. Through an analysis of activation patterns in the latent space of LLMs, we observe that the enhanced activations induced by in-context examples have consistent patterns across different tasks. Inspired by these findings, we propose CAST, a novel Cross-task Activation Steering Transfer framework that enables effective transfer by manipulating the model's internal activation states. Our approach first selects influential and diverse samples from high-resource tasks, then utilizes their contrastive representation-enhanced activations to adapt LLMs to low-resource tasks. Extensive experiments across both cross-domain and cross-lingual transfer settings show that our method outperforms competitive baselines and demonstrates superior scalability and lower computational costs.
| null |
https://arxiv.org/abs/2507.13236v1
|
https://arxiv.org/pdf/2507.13236v1.pdf
| null |
[
"Xinyu Tang",
"Zhihao Lv",
"Xiaoxue Cheng",
"Junyi Li",
"Wayne Xin Zhao",
"Zujie Wen",
"Zhiqiang Zhang",
"Jun Zhou"
] |
[
"Cross-Lingual Transfer",
"In-Context Learning"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
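A generic activation-steering sketch in the spirit of CAST above: build a steering vector from the mean hidden-state difference between in-context and plain inputs, then add it at inference without any parameter updates. CAST's sample selection and contrastive construction are more involved; everything here is an illustrative simplification.

```python
import torch

def steering_vector(h_with_context, h_plain):
    """Mean hidden-state difference over a set of paired inputs."""
    return (h_with_context - h_plain).mean(dim=0)

def apply_steering(hidden, v, alpha=1.0):
    """Shift activations along the steering direction at inference."""
    return hidden + alpha * v  # broadcasts over batch/sequence dims
```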
https://paperswithcode.com/paper/abgen-evaluating-large-language-models-in
|
2507.13300
| null | null |
AbGen: Evaluating Large Language Models in Ablation Study Design and Evaluation for Scientific Research
|
We introduce AbGen, the first benchmark designed to evaluate the capabilities of LLMs in designing ablation studies for scientific research. AbGen consists of 1,500 expert-annotated examples derived from 807 NLP papers. In this benchmark, LLMs are tasked with generating detailed ablation study designs for a specified module or process based on the given research context. Our evaluation of leading LLMs, such as DeepSeek-R1-0528 and o4-mini, highlights a significant performance gap between these models and human experts in terms of the importance, faithfulness, and soundness of the ablation study designs. Moreover, we demonstrate that current automated evaluation methods are not reliable for our task, as they show a significant discrepancy when compared to human assessment. To better investigate this, we develop AbGen-Eval, a meta-evaluation benchmark designed to assess the reliability of commonly used automated evaluation systems in measuring LLM performance on our task. We investigate various LLM-as-Judge systems on AbGen-Eval, providing insights for future research on developing more effective and reliable LLM-based evaluation systems for complex scientific tasks.
| null |
https://arxiv.org/abs/2507.13300v1
|
https://arxiv.org/pdf/2507.13300v1.pdf
| null |
[
"Yilun Zhao",
"Weiyuan Chen",
"Zhijian Xu",
"Manasi Patwardhan",
"Yixin Liu",
"Chengye Wang",
"Lovekesh Vig",
"Arman Cohan"
] |
[] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/are-encoders-able-to-learn-landmarkers-for
|
2507.12604
| null | null |
Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization?
|
Effectively representing heterogeneous tabular datasets for meta-learning purposes is still an open problem. Previous approaches rely on representations that are intended to be universal. This paper proposes two novel methods for tabular representation learning tailored to a specific meta-task - warm-starting Bayesian Hyperparameter Optimization. Both follow a specific requirement we formulate, which enforces the representations to capture the properties of landmarkers. The first approach involves deep metric learning, while the second one is based on landmarkers reconstruction. We evaluate the proposed encoders in two ways. Besides the gain in the target meta-task, we also use the degree of fulfillment of the proposed requirement as an evaluation metric. Experiments demonstrate that while the proposed encoders can effectively learn representations aligned with landmarkers, they may not directly translate to significant performance gains in the meta-task of HPO warm-starting.
| null |
https://arxiv.org/abs/2507.12604v1
|
https://arxiv.org/pdf/2507.12604v1.pdf
| null |
[
"Antoni Zajko",
"Katarzyna Woźnica"
] |
[
"Hyperparameter Optimization",
"Meta-Learning",
"Metric Learning",
"Representation Learning"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
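A hedged sketch of how landmarker-aligned encodings could warm-start Bayesian HPO: embed the new dataset, retrieve its nearest neighbors among previously seen datasets in representation space, and seed the optimizer with their best configurations. The retrieval scheme and names are assumptions, not the paper's procedure.

```python
import numpy as np

def warm_start_configs(new_repr, past_reprs, past_best_configs, k=5):
    """Return the best configs of the k nearest datasets in the
    learned representation space (illustrative warm-start)."""
    dists = np.linalg.norm(past_reprs - new_repr, axis=1)
    nearest = np.argsort(dists)[:k]
    return [past_best_configs[i] for i in nearest]
```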
https://paperswithcode.com/paper/imbalanced-regression-pipeline-recommendation
|
2507.11901
| null | null |
Imbalanced Regression Pipeline Recommendation
|
Imbalanced problems are prevalent in various real-world scenarios and are extensively explored in classification tasks. However, they also present challenges for regression tasks due to the rarity of certain target values. A common alternative is to employ balancing algorithms in preprocessing to address dataset imbalance. However, due to the variety of resampling methods and learning models, determining the optimal solution requires testing many combinations. Furthermore, the learning model, dataset, and evaluation metric affect the best strategies. This work proposes the Meta-learning for Imbalanced Regression (Meta-IR) framework, which diverges from existing literature by training meta-classifiers to recommend the best pipeline composed of the resampling strategy and learning model per task in a zero-shot fashion. The meta-classifiers are trained using a set of meta-features to learn how to map the meta-features to the classes indicating the best pipeline. We propose two formulations: Independent and Chained. Independent trains the meta-classifiers to separately indicate the best learning algorithm and resampling strategy. Chained involves a sequential procedure where the output of one meta-classifier is used as input for another to model intrinsic relationship factors. The Chained scenario showed superior performance, suggesting a relationship between the learning algorithm and the resampling strategy per task. Compared with AutoML frameworks, Meta-IR obtained better results. Moreover, compared with baselines of six learning algorithms and six resampling algorithms plus no resampling, totaling 42 (6 X 7) configurations, Meta-IR outperformed all of them. The code, data, and further information of the experiments can be found on GitHub: https://github.com/JusciAvelino/Meta-IR.
| null |
https://arxiv.org/abs/2507.11901v1
|
https://arxiv.org/pdf/2507.11901v1.pdf
| null |
[
"Juscimara G. Avelino",
"George D. C. Cavalcanti",
"Rafael M. O. Cruz"
] |
[
"AutoML",
"Meta-Learning",
"regression"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
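A minimal sketch of Meta-IR's Chained formulation described above: one meta-classifier predicts the learner, and its prediction is appended to the meta-features before predicting the resampler. The synthetic data and the use of in-sample predictions (rather than cross-validated ones) are simplifications.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
meta_X = rng.random((200, 10))            # meta-features per dataset
best_learner = rng.integers(0, 6, 200)    # 6 learning algorithms
best_resampler = rng.integers(0, 7, 200)  # 6 resamplers + no resampling

clf_learner = RandomForestClassifier(random_state=0).fit(meta_X, best_learner)
# Chained step: the predicted learner becomes an extra meta-feature.
chained_X = np.column_stack([meta_X, clf_learner.predict(meta_X)])
clf_resampler = RandomForestClassifier(random_state=0).fit(chained_X, best_resampler)
```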
https://paperswithcode.com/paper/clid-mu-cross-layer-information-divergence
|
2507.11807
| null | null |
CLID-MU: Cross-Layer Information Divergence Based Meta Update Strategy for Learning with Noisy Labels
|
Learning with noisy labels (LNL) is essential for training deep neural networks with imperfect data. Meta-learning approaches have achieved success by using a clean unbiased labeled set to train a robust model. However, this approach heavily depends on the availability of a clean labeled meta-dataset, which is difficult to obtain in practice. In this work, we thus tackle the challenge of meta-learning for noisy label scenarios without relying on a clean labeled dataset. Our approach leverages the data itself while bypassing the need for labels. Building on the insight that clean samples effectively preserve the consistency of related data structures across the last hidden and the final layer, whereas noisy samples disrupt this consistency, we design the Cross-layer Information Divergence-based Meta Update Strategy (CLID-MU). CLID-MU leverages the alignment of data structures across these diverse feature spaces to evaluate model performance and use this alignment to guide training. Experiments on benchmark datasets with varying amounts of labels under both synthetic and real-world noise demonstrate that CLID-MU outperforms state-of-the-art methods. The code is released at https://github.com/ruofanhu/CLID-MU.
| null |
https://arxiv.org/abs/2507.11807v1
|
https://arxiv.org/pdf/2507.11807v1.pdf
| null |
[
"Ruofan Hu",
"Dongyu Zhang",
"Huayi Zhang",
"Elke Rundensteiner"
] |
[
"Learning with noisy labels",
"Meta-Learning"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
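One illustrative reading of the cross-layer consistency idea in CLID-MU above: build row-stochastic similarity structures over a batch for the last hidden layer and the final layer, and measure their divergence. The paper's exact divergence and normalization may differ; this is a sketch of the principle only.

```python
import torch
import torch.nn.functional as F

def cross_layer_divergence(h_hidden, h_final):
    """KL divergence between batch similarity structures of two layers."""
    def sim(h):
        h = F.normalize(h, dim=1)
        return F.softmax(h @ h.T, dim=1)  # row-stochastic similarities
    p, q = sim(h_hidden), sim(h_final)
    return F.kl_div(q.log(), p, reduction="batchmean")  # KL(p || q)
```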
https://paperswithcode.com/paper/a-graph-in-graph-learning-framework-for-drug
|
2507.11757
| null | null |
A Graph-in-Graph Learning Framework for Drug-Target Interaction Prediction
|
Accurately predicting drug-target interactions (DTIs) is pivotal for advancing drug discovery and target validation techniques. While machine learning approaches including those that are based on Graph Neural Networks (GNN) have achieved notable success in DTI prediction, many of them have difficulties in effectively integrating the diverse features of drugs, targets and their interactions. To address this limitation, we introduce a novel framework to take advantage of the power of both transductive learning and inductive learning so that features at molecular level and drug-target interaction network level can be exploited. Within this framework is a GNN-based model called Graph-in-Graph (GiG) that represents graphs of drug and target molecular structures as meta-nodes in a drug-target interaction graph, enabling a detailed exploration of their intricate relationships. To evaluate the proposed model, we have compiled a special benchmark comprising drug SMILES, protein sequences, and their interaction data, which is interesting in its own right. Our experimental results demonstrate that the GiG model significantly outperforms existing approaches across all evaluation metrics, highlighting the benefits of integrating different learning paradigms and interaction data.
| null |
https://arxiv.org/abs/2507.11757v1
|
https://arxiv.org/pdf/2507.11757v1.pdf
| null |
[
"Yuehua Song",
"Yong Gao"
] |
[
"Drug Discovery",
"Graph Learning",
"Inductive Learning",
"Transductive Learning"
] | 2025-07-15T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/data-driven-meta-analysis-and-public-dataset
|
2507.11571
| null | null |
Data-Driven Meta-Analysis and Public-Dataset Evaluation for Sensor-Based Gait Age Estimation
|
Estimating a person's age from their gait has important applications in healthcare, security and human-computer interaction. In this work, we review fifty-nine studies involving over seventy-five thousand subjects recorded with video, wearable and radar sensors. We observe that convolutional neural networks produce an average error of about 4.2 years, inertial-sensor models about 4.5 years and multi-sensor fusion as low as 3.4 years, with notable differences between lab and real-world data. We then analyse sixty-three thousand eight hundred forty-six gait cycles from the OU-ISIR Large-Population dataset to quantify correlations between age and five key metrics: stride length, walking speed, step cadence, step-time variability and joint-angle entropy, with correlation coefficients of at least 0.27. Next, we fine-tune a ResNet34 model and apply Grad-CAM to reveal that the network attends to the knee and pelvic regions, consistent with known age-related gait changes. Finally, on a one hundred thousand sample subset of the VersatileGait database, we compare support vector machines, decision trees, random forests, multilayer perceptrons and convolutional neural networks, finding that deep networks achieve up to 96 percent accuracy while processing each sample in under 0.1 seconds. By combining a broad meta-analysis with new large-scale experiments and interpretable visualizations, we establish solid performance baselines and practical guidelines for reducing gait-age error below three years in real-world scenarios.
| null |
https://arxiv.org/abs/2507.11571v1
|
https://arxiv.org/pdf/2507.11571v1.pdf
| null |
[
"Varun Velankar"
] |
[
"Age Estimation",
"Sensor Fusion"
] | 2025-07-15T00:00:00 | null | null | null | null |
[] |
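The correlation analysis in the gait study above reduces to per-metric Pearson coefficients against age; a minimal sketch (metric names illustrative):

```python
import numpy as np

def gait_age_correlations(age, metrics):
    """Pearson correlation of each gait metric with age."""
    return {name: float(np.corrcoef(age, vals)[0, 1])
            for name, vals in metrics.items()}

# e.g. gait_age_correlations(age, {"stride_length": sl, "cadence": cad})
```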
https://paperswithcode.com/paper/the-ai-ethical-resonance-hypothesis-the
|
2507.11552
| null | null |
The AI Ethical Resonance Hypothesis: The Possibility of Discovering Moral Meta-Patterns in AI Systems
|
This paper presents a theoretical framework for the AI ethical resonance hypothesis, which proposes that advanced AI systems with purposefully designed cognitive structures ("ethical resonators") may emerge with the ability to identify subtle moral patterns that are invisible to the human mind. The paper explores the possibility that by processing and synthesizing large amounts of ethical contexts, AI systems may discover moral meta-patterns that transcend cultural, historical, and individual biases, potentially leading to a deeper understanding of universal ethical foundations. The paper also examines a paradoxical aspect of the hypothesis, in which AI systems could potentially deepen our understanding of what we traditionally consider essentially human - our capacity for ethical reflection.
| null |
https://arxiv.org/abs/2507.11552v1
|
https://arxiv.org/pdf/2507.11552v1.pdf
| null |
[
"Tomasz Zgliczyński-Cuber"
] |
[] | 2025-07-13T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/formulaone-measuring-the-depth-of-algorithmic
|
2507.13337
| null | null |
FormulaOne: Measuring the Depth of Algorithmic Reasoning Beyond Competitive Programming
|
Frontier AI models demonstrate formidable breadth of knowledge. But how close are they to true human -- or superhuman -- expertise? Genuine experts can tackle the hardest problems and push the boundaries of scientific understanding. To illuminate the limits of frontier model capabilities, we turn away from contrived competitive programming puzzles, and instead focus on real-life research problems. We construct FormulaOne, a benchmark that lies at the intersection of graph theory, logic, and algorithms, all well within the training distribution of frontier models. Our problems are incredibly demanding, requiring an array of reasoning steps. The dataset has three key properties. First, it is of commercial interest and relates to practical large-scale optimisation problems, such as those arising in routing, scheduling, and network design. Second, it is generated from the highly expressive framework of Monadic Second-Order (MSO) logic on graphs, paving the way toward automatic problem generation at scale; ideal for building RL environments. Third, many of our problems are intimately related to the frontier of theoretical computer science, and to central conjectures therein, such as the Strong Exponential Time Hypothesis (SETH). As such, any significant algorithmic progress on our dataset, beyond known results, could carry profound theoretical implications. Remarkably, state-of-the-art models like OpenAI's o3 fail entirely on FormulaOne, solving less than 1% of the questions, even when given 10 attempts and explanatory few-shot examples -- highlighting how far they remain from expert-level understanding in some domains. To support further research, we additionally curate FormulaOne-Warmup, offering a set of simpler tasks, from the same distribution. We release the full corpus along with a comprehensive evaluation framework.
| null |
https://arxiv.org/abs/2507.13337v1
|
https://arxiv.org/pdf/2507.13337v1.pdf
| null |
[
"Gal Beniamini",
"Yuval Dor",
"Alon Vinnikov",
"Shir Granot Peled",
"Or Weinstein",
"Or Sharir",
"Noam Wies",
"Tomer Nussbaum",
"Ido Ben Shaul",
"Tomer Zekharya",
"Yoav Levine",
"Shai Shalev-Shwartz",
"Amnon Shashua"
] |
[
"Scheduling"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/privacy-preserving-fusion-for-multi-sensor
|
2507.13286
| null | null |
Privacy-Preserving Fusion for Multi-Sensor Systems Under Multiple Packet Dropouts
|
Wireless sensor networks (WSNs) are critical components in modern cyber-physical systems, enabling efficient data collection and fusion through spatially distributed sensors. However, the inherent risks of eavesdropping and packet dropouts in such networks pose significant challenges to secure state estimation. In this paper, we address the privacy-preserving fusion estimation (PPFE) problem for multi-sensor systems under multiple packet dropouts and eavesdropping attacks. To mitigate these issues, we propose a distributed encoding-based privacy-preserving mechanism (PPM) within a control-theoretic framework, ensuring data privacy during transmission while maintaining the performance of legitimate state estimation. A centralized fusion filter is developed, accounting for the coupling effects of packet dropouts and the encoding-based PPM. Boundedness conditions for the legitimate user's estimation error covariance are derived via a modified algebraic Riccati equation. Additionally, by demonstrating the divergence of the eavesdropper's mean estimation error, the proposed PPFE algorithm's data confidentiality is rigorously analyzed. Simulation results for an Internet-based three-tank system validate the effectiveness of the proposed approach, highlighting its potential to enhance privacy without compromising estimation accuracy.
| null |
https://arxiv.org/abs/2507.13286v1
|
https://arxiv.org/pdf/2507.13286v1.pdf
| null |
[
"Jie Huang",
"Jason J. R. Liu"
] |
[
"Privacy Preserving",
"State Estimation"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/power-in-sharing-networks-with-a-priori
|
2507.13272
| null | null |
Power in Sharing Networks with a priori Unions
|
We introduce and analyze a novel family of power indices tailored for sharing networks in technological markets, where firms operate competitively within, but not across, distinct industrial sectors. In these settings, inter-firm collaboration structures emerge from formal technology licensing agreements. The proposed indices are defined over graphs with a priori unions and combine two key centrality measures - degree-based and rescaled eigenvector centrality - modulated by positive market coefficients that reflect sectoral dynamics. We first explore the monotonicity properties of these indices, highlighting their responsiveness to local changes in network structure. Interestingly, major economic actors exhibit structural stability when inter-sectoral technological spillovers are minimal. Building on these findings, we provide theoretical underpinnings by characterizing the indices as the Shapley values of a family of coherent and economically interpretable transferable utility (TU) games defined over such graphs. However, for a broad class of network structures, the core of these TU games is often empty, signaling inherent instability in technological sharing arrangements. Finally, we offer an axiomatic foundation for this family of indices, proving independence of the proposed axioms. This axiomatization extends naturally to exchange networks, even when stage-propagation coefficients are not positive.
| null |
https://arxiv.org/abs/2507.13272v1
|
https://arxiv.org/pdf/2507.13272v1.pdf
| null |
[
"Michele Aleandri",
"Francesco Ciardiello",
"Andrea Di Liddo"
] |
[] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
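A hedged sketch of the index family described above: a convex combination of normalized degree and rescaled eigenvector centrality. The market coefficients, a priori unions, and directed licensing structure from the paper are omitted; a symmetric adjacency matrix is assumed.

```python
import numpy as np

def combined_power_index(A, beta=0.5):
    """Mix of normalized degree and rescaled eigenvector centrality."""
    deg = A.sum(axis=1)
    deg = deg / deg.sum()
    _, vecs = np.linalg.eigh(A)        # symmetric adjacency assumed
    v = np.abs(vecs[:, -1])            # leading eigenvector
    v = v / v.sum()                    # rescaled to sum to one
    return beta * deg + (1.0 - beta) * v
```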
https://paperswithcode.com/paper/life-finds-a-way-emergence-of-cooperative
|
2507.13253
| null | null |
Life Finds A Way: Emergence of Cooperative Structures in Adaptive Threshold Networks
|
There has been a long debate on how new levels of organization have evolved. It might seem unlikely, as cooperation must prevail over competition. One well-studied example is the emergence of autocatalytic sets, which seem to be a prerequisite for the evolution of life. Using a simple model, we investigate how varying bias toward cooperation versus antagonism shapes network dynamics, revealing that higher-order organization emerges even amid pervasive antagonistic interactions. In general, we observe that a quantitative increase in the number of elements in a system leads to a qualitative transition. We present a random threshold-directed network model that integrates node-specific traits with dynamic edge formation and node removal, simulating arbitrary levels of cooperation and competition. In our framework, intrinsic node values determine directed links through various threshold rules. Our model generates a multi-digraph with signed edges (reflecting support/antagonism, labeled ``help''/``harm''), which ultimately yields two parallel yet interdependent threshold graphs. Incorporating temporal growth and node turnover in our approach allows exploration of the evolution, adaptation, and potential collapse of communities and reveals phase transitions in both connectivity and resilience. Our findings extend classical random threshold and Erd\H{o}s-R\'enyi models, offering new insights into adaptive systems in biological and economic contexts, with emphasis on the application to Collective Affordance Sets. This framework should also be useful for making predictions that will be tested by ongoing experiments of microbial communities in soil.
| null |
https://arxiv.org/abs/2507.13253v1
|
https://arxiv.org/pdf/2507.13253v1.pdf
| null |
[
"Sean P. Maley",
"Carlos Gershenson",
"Stuart A. Kauffman"
] |
[] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
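A toy instance of the random threshold-directed model sketched above: intrinsic node values and two threshold rules produce parallel "help" and "harm" edge sets. The specific rules below are illustrative stand-ins, not the paper's.

```python
import numpy as np

def threshold_digraph(n, theta_help=0.6, theta_harm=0.2, seed=0):
    """Signed threshold digraph: node traits decide directed edges."""
    rng = np.random.default_rng(seed)
    x = rng.random(n)                                   # intrinsic values
    help_edges = np.add.outer(x, x) > 2 * theta_help    # cooperative rule
    harm_edges = np.abs(np.subtract.outer(x, x)) < theta_harm  # antagonism
    np.fill_diagonal(help_edges, False)
    np.fill_diagonal(harm_edges, False)
    return help_edges, harm_edges
```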
https://paperswithcode.com/paper/gradnetot-learning-optimal-transport-maps
|
2507.13191
| null | null |
GradNetOT: Learning Optimal Transport Maps with GradNets
|
Monotone gradient functions play a central role in solving the Monge formulation of the optimal transport problem, which arises in modern applications ranging from fluid dynamics to robot swarm control. When the transport cost is the squared Euclidean distance, Brenier's theorem guarantees that the unique optimal map is the gradient of a convex function, namely a monotone gradient map, and it satisfies a Monge-Amp\`ere equation. In [arXiv:2301.10862] [arXiv:2404.07361], we proposed Monotone Gradient Networks (mGradNets), neural networks that directly parameterize the space of monotone gradient maps. In this work, we leverage mGradNets to directly learn the optimal transport mapping by minimizing a training loss function defined using the Monge-Amp\`ere equation. We empirically show that the structural bias of mGradNets facilitates the learning of optimal transport maps and employ our method for a robot swarm control problem.
| null |
https://arxiv.org/abs/2507.13191v1
|
https://arxiv.org/pdf/2507.13191v1.pdf
| null |
[
"Shreyas Chaudhari",
"Srinivasa Pranav",
"José M. F. Moura"
] |
[] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
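By Brenier's theorem, cited in the GradNetOT abstract above, the optimal map is the gradient of a convex potential. A minimal sketch of a monotone gradient map obtained by differentiating a hand-rolled convex potential; this is not the paper's mGradNet parameterization.

```python
import torch

class ConvexPotential(torch.nn.Module):
    """Sum of softplus(affine) terms: convex in x by construction."""
    def __init__(self, d, width=16):
        super().__init__()
        self.W = torch.nn.Parameter(0.1 * torch.randn(width, d))
        self.b = torch.nn.Parameter(torch.zeros(width))

    def forward(self, x):
        return torch.nn.functional.softplus(x @ self.W.T + self.b).sum(-1)

def monotone_map(potential, x):
    """The gradient of a convex potential is a monotone map."""
    x = x.requires_grad_(True)
    return torch.autograd.grad(potential(x).sum(), x, create_graph=True)[0]
```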
https://paperswithcode.com/paper/a-survey-of-context-engineering-for-large
|
2507.13334
| null | null |
A Survey of Context Engineering for Large Language Models
|
The performance of Large Language Models (LLMs) is fundamentally determined by the contextual information provided during inference. This survey introduces Context Engineering, a formal discipline that transcends simple prompt design to encompass the systematic optimization of information payloads for LLMs. We present a comprehensive taxonomy decomposing Context Engineering into its foundational components and the sophisticated implementations that integrate them into intelligent systems. We first examine the foundational components: context retrieval and generation, context processing and context management. We then explore how these components are architecturally integrated to create sophisticated system implementations: retrieval-augmented generation (RAG), memory systems and tool-integrated reasoning, and multi-agent systems. Through this systematic analysis of over 1400 research papers, our survey not only establishes a technical roadmap for the field but also reveals a critical research gap: a fundamental asymmetry exists between model capabilities. While current models, augmented by advanced context engineering, demonstrate remarkable proficiency in understanding complex contexts, they exhibit pronounced limitations in generating equally sophisticated, long-form outputs. Addressing this gap is a defining priority for future research. Ultimately, this survey provides a unified framework for both researchers and engineers advancing context-aware AI.
| null |
https://arxiv.org/abs/2507.13334v2
|
https://arxiv.org/pdf/2507.13334v2.pdf
| null |
[
"Lingrui Mei",
"Jiayu Yao",
"Yuyao Ge",
"Yiwei Wang",
"Baolong Bi",
"Yujun Cai",
"Jiazhi Liu",
"Mingyu Li",
"Zhong-Zhi Li",
"Duzhen Zhang",
"Chenlin Zhou",
"Jiayi Mao",
"Tianze Xia",
"Jiafeng Guo",
"Shenghua Liu"
] |
[
"RAG",
"Retrieval",
"Retrieval-augmented Generation",
"Survey"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/vision-and-language-training-helps-deploy
|
2507.13328
| null | null |
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It
|
Does vision-and-language (VL) training change the linguistic representations of language models in meaningful ways? Most results in the literature have shown inconsistent or marginal differences, both behaviorally and representationally. In this work, we start from the hypothesis that the domain in which VL training could have a significant effect is lexical-conceptual knowledge, in particular its taxonomic organization. Through comparing minimal pairs of text-only LMs and their VL-trained counterparts, we first show that the VL models often outperform their text-only counterparts on a text-only question-answering task that requires taxonomic understanding of concepts mentioned in the questions. Using an array of targeted behavioral and representational analyses, we show that the LMs and VLMs do not differ significantly in terms of their taxonomic knowledge itself, but they differ in how they represent questions that contain concepts in a taxonomic relation vs. a non-taxonomic relation. This implies that the taxonomic knowledge itself does not change substantially through additional VL training, but VL training does improve the deployment of this knowledge in the context of a specific task, even when the presentation of the task is purely linguistic.
| null |
https://arxiv.org/abs/2507.13328v1
|
https://arxiv.org/pdf/2507.13328v1.pdf
| null |
[
"Yulu Qin",
"Dheeraj Varghese",
"Adam Dahlgren Lindström",
"Lucia Donatelli",
"Kanishka Misra",
"Najoung Kim"
] |
[
"Question Answering"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/social-and-political-framing-in-search-engine
|
2507.13325
| null | null |
Social and Political Framing in Search Engine Results
|
Search engines play a crucial role in shaping public discourse by influencing how information is accessed and framed. While prior research has extensively examined various dimensions of search bias -- such as content prioritization, indexical bias, political polarization, and sources of bias -- an important question remains underexplored: how do search engines and ideologically-motivated user queries contribute to bias in search results. This study analyzes the outputs of major search engines using a dataset of political and social topics. The findings reveal that search engines not only prioritize content in ways that reflect underlying biases but also that ideologically-driven user queries exacerbate these biases, resulting in the amplification of specific narratives. Moreover, significant differences were observed across search engines in terms of the sources they prioritize. These results suggest that search engines may play a pivotal role in shaping public perceptions by reinforcing ideological divides, thereby contributing to the broader issue of information polarization.
| null |
https://arxiv.org/abs/2507.13325v1
|
https://arxiv.org/pdf/2507.13325v1.pdf
| null |
[
"Amrit Poudel",
"Tim Weninger"
] |
[] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/the-generative-energy-arena-gea-incorporating
|
2507.13302
| null | null |
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations
|
The evaluation of large language models is a complex task for which several approaches have been proposed. The most common is the use of automated benchmarks in which LLMs have to answer multiple-choice questions on different topics. However, this method has certain limitations, the most concerning being its poor correlation with human judgment. An alternative approach is to have humans evaluate the LLMs. This poses scalability issues, as there is a large and growing number of models to evaluate, making it impractical (and costly) to run traditional studies based on recruiting a number of evaluators and having them rank the responses of the models. Another alternative is the use of public arenas, such as the popular LM arena, on which any user can freely evaluate models on any question and rank the responses of two models. The results are then aggregated into a model ranking. An increasingly important aspect of LLMs is their energy consumption and, therefore, evaluating how energy awareness influences the decisions of humans in selecting a model is of interest. In this paper, we present GEA, the Generative Energy Arena, an arena that incorporates information on the energy consumption of the model in the evaluation process. Preliminary results obtained with GEA are also presented, showing that for most questions, when users are aware of the energy consumption, they favor smaller and more energy-efficient models. This suggests that for most user interactions, the extra cost and energy incurred by the more complex and top-performing models do not provide an increase in the perceived quality of the responses that justifies their use.
| null |
https://arxiv.org/abs/2507.13302v1
|
https://arxiv.org/pdf/2507.13302v1.pdf
| null |
[
"Carlos Arriaga",
"Gonzalo Martínez",
"Eneko Sendin",
"Javier Conde",
"Pedro Reviriego"
] |
[
"Language Modeling",
"Language Modelling",
"Large Language Model",
"Multiple-choice"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/hats-hindi-analogy-test-set-for-evaluating
|
2507.13238
| null | null |
HATS: Hindi Analogy Test Set for Evaluating Reasoning in Large Language Models
|
Analogies test a model's ability to infer implicit relationships between concepts, making them a key benchmark for evaluating reasoning capabilities. While large language models (LLMs) are widely evaluated for reasoning in English, their abilities in Indic languages remain understudied, limiting our understanding of whether these models generalize across languages. To address this gap, we introduce a new Hindi Analogy Test Set (HATS), comprising 405 multiple-choice questions sourced from Indian government exams. We benchmark state-of-the-art multilingual LLMs using various prompting strategies and introduce a grounded Chain of Thought approach that leverages cognitive theories of analogical reasoning. This approach improves model performance on Hindi analogy questions. Our experiments show that models perform best with English prompts, irrespective of the prompting strategy. Our test set addresses the lack of a critical resource to evaluate LLM reasoning capabilities in Hindi.
| null |
https://arxiv.org/abs/2507.13238v1
|
https://arxiv.org/pdf/2507.13238v1.pdf
| null |
[
"Ashray Gupta",
"Rohan Joseph",
"Sunny Rai"
] |
[
"Multiple-choice"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/computational-statistical-tradeoffs-from-np
|
2507.13222
| null | null |
Computational-Statistical Tradeoffs from NP-hardness
|
A central question in computer science and statistics is whether efficient algorithms can achieve the information-theoretic limits of statistical problems. Many computational-statistical tradeoffs have been shown under average-case assumptions, but since statistical problems are average-case in nature, it has been a challenge to base them on standard worst-case assumptions. In PAC learning where such tradeoffs were first studied, the question is whether computational efficiency can come at the cost of using more samples than information-theoretically necessary. We base such tradeoffs on $\mathsf{NP}$-hardness and obtain: $\circ$ Sharp computational-statistical tradeoffs assuming $\mathsf{NP}$ requires exponential time: For every polynomial $p(n)$, there is an $n$-variate class $C$ with VC dimension $1$ such that the sample complexity of time-efficiently learning $C$ is $\Theta(p(n))$. $\circ$ A characterization of $\mathsf{RP}$ vs. $\mathsf{NP}$ in terms of learning: $\mathsf{RP} = \mathsf{NP}$ iff every $\mathsf{NP}$-enumerable class is learnable with $O(\mathrm{VCdim}(C))$ samples in polynomial time. The forward implication has been known since (Pitt and Valiant, 1988); we prove the reverse implication. Notably, all our lower bounds hold against improper learners. These are the first $\mathsf{NP}$-hardness results for improperly learning a subclass of polynomial-size circuits, circumventing formal barriers of Applebaum, Barak, and Xiao (2008).
| null |
https://arxiv.org/abs/2507.13222v1
|
https://arxiv.org/pdf/2507.13222v1.pdf
| null |
[
"Guy Blanc",
"Caleb Koch",
"Carmen Strassle",
"Li-Yang Tan"
] |
[
"Computational Efficiency",
"PAC learning"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/motm-towards-a-foundation-model-for-time
|
2507.13207
| null | null |
MoTM: Towards a Foundation Model for Time Series Imputation based on Continuous Modeling
|
Recent years have witnessed a growing interest for time series foundation models, with a strong emphasis on the forecasting task. Yet, the crucial task of out-of-domain imputation of missing values remains largely underexplored. We propose a first step to fill this gap by leveraging implicit neural representations (INRs). INRs model time series as continuous functions and naturally handle various missing data scenarios and sampling rates. While they have shown strong performance within specific distributions, they struggle under distribution shifts. To address this, we introduce MoTM (Mixture of Timeflow Models), a step toward a foundation model for time series imputation. Building on the idea that a new time series is a mixture of previously seen patterns, MoTM combines a basis of INRs, each trained independently on a distinct family of time series, with a ridge regressor that adapts to the observed context at inference. We demonstrate robust in-domain and out-of-domain generalization across diverse imputation scenarios (e.g., block and pointwise missingness, variable sampling rates), paving the way for adaptable foundation imputation models.
| null |
https://arxiv.org/abs/2507.13207v2
|
https://arxiv.org/pdf/2507.13207v2.pdf
| null |
[
"Etienne Le Naour",
"Tahar Nabil",
"Ghislain Agoua"
] |
[
"Domain Generalization",
"Imputation",
"Missing Values",
"Time Series"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
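A hedged sketch of the mixture step in MoTM above: stack the predictions of k frozen, independently trained INRs at the observed timestamps, fit ridge weights against the observations, and reuse those weights to impute missing timestamps. Variable names and the plain ridge solve are assumptions.

```python
import numpy as np

def fit_mixture_weights(basis_preds, y_obs, lam=1e-2):
    """Ridge weights over a basis of frozen models' predictions.

    basis_preds: (n_obs, k) predictions of k pre-trained INRs at the
    observed timestamps; y_obs: (n_obs,) observed values."""
    B = basis_preds
    k = B.shape[1]
    return np.linalg.solve(B.T @ B + lam * np.eye(k), B.T @ y_obs)
```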
https://paperswithcode.com/paper/inverse-reinforcement-learning-meets-large
|
2507.13158
| null | null |
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities
|
In the era of Large Language Models (LLMs), alignment has emerged as a fundamental yet challenging problem in the pursuit of more reliable, controllable, and capable machine intelligence. The recent success of reasoning models and conversational AI systems has underscored the critical role of reinforcement learning (RL) in enhancing these systems, driving increased research interest at the intersection of RL and LLM alignment. This paper provides a comprehensive review of recent advances in LLM alignment through the lens of inverse reinforcement learning (IRL), emphasizing the distinctions between RL techniques employed in LLM alignment and those in conventional RL tasks. In particular, we highlight the necessity of constructing neural reward models from human data and discuss the formal and practical implications of this paradigm shift. We begin by introducing fundamental concepts in RL to provide a foundation for readers unfamiliar with the field. We then examine recent advances in this research agenda, discussing key challenges and opportunities in conducting IRL for LLM alignment. Beyond methodological considerations, we explore practical aspects, including datasets, benchmarks, evaluation metrics, infrastructure, and computationally efficient training and inference techniques. Finally, we draw insights from the literature on sparse-reward RL to identify open questions and potential research directions. By synthesizing findings from diverse studies, we aim to provide a structured and critical overview of the field, highlight unresolved challenges, and outline promising future directions for improving LLM alignment through RL and IRL techniques.
| null |
https://arxiv.org/abs/2507.13158v1
|
https://arxiv.org/pdf/2507.13158v1.pdf
| null |
[
"Hao Sun",
"Mihaela van der Schaar"
] |
[
"Language Modeling",
"Language Modelling",
"Large Language Model",
"Reinforcement Learning (RL)"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/unsupervised-ground-metric-learning
|
2507.13094
| null | null |
Unsupervised Ground Metric Learning
|
Data classification without access to labeled samples remains a challenging problem. It usually depends on an appropriately chosen distance between features, a topic addressed in metric learning. Recently, Huizing, Cantini and Peyr\'e proposed to simultaneously learn optimal transport (OT) cost matrices between samples and features of the dataset. This leads to the task of finding positive eigenvectors of a certain nonlinear function that maps cost matrices to OT distances. Having this basic idea in mind, we consider both the algorithmic and the modeling part of unsupervised metric learning. First, we examine appropriate algorithms and their convergence. In particular, we propose to use the stochastic random function iteration algorithm and prove that it converges linearly for our setting, although our operators are not paracontractive as it was required for convergence so far. Second, we ask the natural question if the OT distance can be replaced by other distances. We show how Mahalanobis-like distances fit into our considerations. Further, we examine an approach via graph Laplacians. In contrast to the previous settings, we have just to deal with linear functions in the wanted matrices here, so that simple algorithms from linear algebra can be applied.
| null |
https://arxiv.org/abs/2507.13094v1
|
https://arxiv.org/pdf/2507.13094v1.pdf
| null |
[
"Janis Auffenberg",
"Jonas Bresch",
"Oleh Melnyk",
"Gabriele Steidl"
] |
[
"Metric Learning"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/mupax-multidimensional-problem-agnostic
|
2507.13090
| null | null |
MUPAX: Multidimensional Problem Agnostic eXplainable AI
|
Robust XAI techniques should ideally be simultaneously deterministic, model-agnostic, and guaranteed to converge. We propose MULTIDIMENSIONAL PROBLEM AGNOSTIC EXPLAINABLE AI (MUPAX), a deterministic, model-agnostic explainability technique with guaranteed convergence. MUPAX's measure-theoretic formulation gives principled feature importance attribution through structured perturbation analysis that discovers inherent input patterns and eliminates spurious relationships. We evaluate MUPAX on an extensive range of data modalities and tasks: audio classification (1D), image classification (2D), volumetric medical image analysis (3D), and anatomical landmark detection, demonstrating dimension-agnostic effectiveness. The rigorous convergence guarantees extend to any loss function and arbitrary dimensions, making MUPAX applicable to virtually any problem context for AI. By contrast with other XAI methods that typically decrease performance when masking, MUPAX not only preserves but actually enhances model accuracy by capturing only the most important patterns of the original data. Extensive benchmarking against the state of the XAI art demonstrates MUPAX's ability to generate precise, consistent and understandable explanations, a crucial step towards explainable and trustworthy AI systems. The source code will be released upon publication.
| null |
https://arxiv.org/abs/2507.13090v1
|
https://arxiv.org/pdf/2507.13090v1.pdf
| null |
[
"Vincenzo Dentamaro",
"Felice Franchini",
"Giuseppe Pirlo",
"Irina Voiculescu"
] |
[
"Anatomical Landmark Detection",
"Audio Classification",
"Benchmarking",
"Feature Importance",
"image-classification",
"Image Classification",
"Medical Image Analysis"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
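A generic structured-perturbation attribution loop in the spirit of the MUPAX abstract above; this is a simple RISE-style sketch, not MUPAX's measure-theoretic procedure. Each feature is scored by the mean model output over random masks in which it stays visible.

```python
import numpy as np

def perturbation_importance(model, x, n_masks=256, p=0.5, seed=0):
    """Score feature i by the mean output over masks that keep it."""
    rng = np.random.default_rng(seed)
    masks = rng.random((n_masks, x.shape[0])) < p      # random keep-masks
    scores = np.array([float(model(x * m)) for m in masks])
    kept = masks.astype(float)
    return (kept * scores[:, None]).sum(0) / np.maximum(kept.sum(0), 1.0)
```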
https://paperswithcode.com/paper/demographic-aware-fine-grained-classification
|
2507.12964
| null | null |
Demographic-aware fine-grained classification of pediatric wrist fractures
|
Wrist pathologies are frequently observed, particularly among children who constitute the majority of fracture cases. However, diagnosing these conditions is time-consuming and requires specialized expertise. Computer vision presents a promising avenue, contingent upon the availability of extensive datasets, a notable challenge in medical imaging. Therefore, reliance solely on one modality, such as images, proves inadequate, especially in an era of diverse and plentiful data types. In this study, we employ a multifaceted approach to address the challenge of recognizing wrist pathologies using an extremely limited dataset. Initially, we approach the problem as a fine-grained recognition task, aiming to identify subtle X-ray pathologies that conventional CNNs overlook. Secondly, we enhance network performance by fusing patient metadata with X-ray images. Thirdly, rather than pre-training on a coarse-grained dataset like ImageNet, we utilize weights trained on a fine-grained dataset. While metadata integration has been used in other medical domains, this is a novel application for wrist pathologies. Our results show that a fine-grained strategy and metadata integration improve diagnostic accuracy by 2% with a limited dataset and by over 10% with a larger fracture-focused dataset.
| null |
https://arxiv.org/abs/2507.12964v2
|
https://arxiv.org/pdf/2507.12964v2.pdf
| null |
[
"Ammar Ahmed",
"Ali Shariq Imran",
"Zenun Kastrati",
"Sher Muhammad Daudpota"
] |
[
"Diagnostic"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
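The metadata-fusion step described in the wrist-fracture abstract above is commonly implemented as late fusion: backbone image features are concatenated with a patient-metadata vector before the classifier head. Layer sizes below are illustrative, not the paper's.

```python
import torch

class MetadataFusionNet(torch.nn.Module):
    """Concatenate backbone image features with patient metadata."""
    def __init__(self, backbone, feat_dim, meta_dim, n_classes):
        super().__init__()
        self.backbone = backbone
        self.head = torch.nn.Sequential(
            torch.nn.Linear(feat_dim + meta_dim, 128),
            torch.nn.ReLU(),
            torch.nn.Linear(128, n_classes),
        )

    def forward(self, image, metadata):
        feats = self.backbone(image)          # (B, feat_dim)
        return self.head(torch.cat([feats, metadata], dim=1))
```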
https://paperswithcode.com/paper/analysis-of-image-and-text-uncertainty
|
2507.12945
| null | null |
Analysis of Image-and-Text Uncertainty Propagation in Multimodal Large Language Models with Cardiac MR-Based Applications
|
Multimodal large language models (MLLMs) can process and integrate information from multimodal sources, such as text and images. However, the interrelationships among input modalities, the uncertainties due to individual uni-modal data, and the potential clinical applications following such an uncertainty decomposition are not yet fully understood in the context of large-scale MLLMs. In this work, we propose a multimodal uncertainty propagation model (MUPM) based on uncertainty propagation, to characterise the relationship among the uncertainties arising from image-only, text-only, and joint image-text variations in MLLM inputs. Using real clinical data consisting of cardiac MR scans and digital health records, we show that MUPMs can be optimised robustly with a few samples. We then show that the fitted MUPMs are generalisable across different input data distributions and, perhaps surprisingly, across different downstream tasks. Such a transferability may be explained by the shared pretraining, comparatively light MLLM fine-tuning, along with the low-dimensional nature of the MUPMs. More importantly, this learned transferability, quantifying the relationship between these uncertainties, led to direct clinical applications in which uncertainties may be estimated and thus analysed robustly for varying data or even a novel set of cardiac disease prediction tasks. In addition, we show experimentally the efficiency in multimodal data required for estimating the overall uncertainty and its ability to identify redundant factors, both of which are considered practical yet clinically useful applications with the proposed MUPMs. Codes are available at https://github.com/yucheng722/MUPM.
| null |
https://arxiv.org/abs/2507.12945v1
|
https://arxiv.org/pdf/2507.12945v1.pdf
| null |
[
"Yucheng Tang",
"Yunguan Fu",
"Weixi Yi",
"Yipei Wang",
"Daniel C. Alexander",
"Rhodri Davies",
"Yipeng Hu"
] |
[
"Disease Prediction"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
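One low-dimensional reading of the uncertainty-propagation model in the MUPM abstract above: regress the joint image-text uncertainty on the image-only and text-only uncertainties. The linear form and variable names are assumptions for illustration, consistent with the paper's remark that MUPMs are low-dimensional and fit from few samples.

```python
import numpy as np

def fit_uncertainty_model(var_img, var_txt, var_joint):
    """Least-squares fit of var_joint ~ a*var_img + b*var_txt + c."""
    X = np.column_stack([var_img, var_txt, np.ones_like(var_img)])
    coef, *_ = np.linalg.lstsq(X, var_joint, rcond=None)
    return coef  # (a, b, c)
```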
https://paperswithcode.com/paper/argus-leveraging-multiview-images-for
|
2507.12916
| null | null |
Argus: Leveraging Multiview Images for Improved 3-D Scene Understanding With Large Language Models
|
Advancements in foundation models have made it possible to conduct applications in various downstream tasks. In particular, the new era has witnessed a remarkable capability to extend Large Language Models (LLMs) for tackling tasks of 3D scene understanding. Current methods rely heavily on 3D point clouds, but the 3D point cloud reconstruction of an indoor scene often results in information loss. Some textureless planes or repetitive patterns are prone to omission and manifest as voids within the reconstructed 3D point clouds. Besides, objects with complex structures tend to introduce distortion of details caused by misalignments between the captured images and the dense reconstructed point clouds. 2D multi-view images present visual consistency with 3D point clouds and provide more detailed representations of scene components, which can naturally compensate for these deficiencies. Based on these insights, we propose Argus, a novel 3D multimodal framework that leverages multi-view images for enhanced 3D scene understanding with LLMs. In general, Argus can be treated as a 3D Large Multimodal Foundation Model (3D-LMM) since it takes various modalities as input (text instructions, 2D multi-view images, and 3D point clouds) and expands the capability of LLMs to tackle 3D tasks. Argus involves fusing and integrating multi-view images and camera poses into view-as-scene features, which interact with the 3D features to create comprehensive and detailed 3D-aware scene embeddings. Our approach compensates for the information loss while reconstructing 3D point clouds and helps LLMs better understand the 3D world. Extensive experiments demonstrate that our method outperforms existing 3D-LMMs in various downstream tasks.
| null |
https://arxiv.org/abs/2507.12916v1
|
https://arxiv.org/pdf/2507.12916v1.pdf
| null |
[
"Yifan Xu",
"Chao Zhang",
"Hanqi Jiang",
"Xiaoyan Wang",
"Ruifei Ma",
"Yiwei Li",
"Zihao Wu",
"Zeju Li",
"Xiangde Liu"
] |
[
"3D Point Cloud Reconstruction",
"Point cloud reconstruction",
"Scene Understanding"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/diffrhythm-controllable-and-flexible-full
|
2507.12890
| null | null |
DiffRhythm+: Controllable and Flexible Full-Length Song Generation with Preference Optimization
|
Songs, as a central form of musical art, exemplify the richness of human intelligence and creativity. While recent advances in generative modeling have enabled notable progress in long-form song generation, current systems for full-length song synthesis still face major challenges, including data imbalance, insufficient controllability, and inconsistent musical quality. DiffRhythm, a pioneering diffusion-based model, advanced the field by generating full-length songs with expressive vocals and accompaniment. However, its performance was constrained by an unbalanced model training dataset and limited controllability over musical style, resulting in noticeable quality disparities and restricted creative flexibility. To address these limitations, we propose DiffRhythm+, an enhanced diffusion-based framework for controllable and flexible full-length song generation. DiffRhythm+ leverages a substantially expanded and balanced training dataset to mitigate issues such as repetition and omission of lyrics, while also fostering the emergence of richer musical skills and expressiveness. The framework introduces a multi-modal style conditioning strategy, enabling users to precisely specify musical styles through both descriptive text and reference audio, thereby significantly enhancing creative control and diversity. We further introduce direct performance optimization aligned with user preferences, guiding the model toward consistently preferred outputs across evaluation metrics. Extensive experiments demonstrate that DiffRhythm+ achieves significant improvements in naturalness, arrangement complexity, and listener satisfaction over previous systems.
| null |
https://arxiv.org/abs/2507.12890v1
|
https://arxiv.org/pdf/2507.12890v1.pdf
| null |
[
"Huakang Chen",
"Yuepeng Jiang",
"Guobin Ma",
"Chunbo Hao",
"Shuai Wang",
"Jixun Yao",
"Ziqian Ning",
"Meng Meng",
"Jian Luan",
"Lei Xie"
] |
[
"Descriptive"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/anycap-project-a-unified-framework-dataset
|
2507.12841
| null | null |
AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning
|
Controllable captioning is essential for precise multimodal alignment and instruction following, yet existing models often lack fine-grained control and reliable evaluation protocols. To address this gap, we present the AnyCap Project, an integrated solution spanning model, dataset, and evaluation. We introduce AnyCapModel (ACM), a lightweight plug-and-play framework that enhances the controllability of existing foundation models for omni-modal captioning without retraining the base model. ACM reuses the original captions from base models while incorporating user instructions and modality features to generate improved captions. To remedy the data scarcity in controllable multimodal captioning, we build AnyCapDataset (ACD), covering three modalities, 28 user-instruction types, and 300k high-quality data entries. We further propose AnyCapEval, a new benchmark that provides more reliable evaluation metrics for controllable captioning by decoupling content accuracy and stylistic fidelity. ACM markedly improves caption quality across a diverse set of base models on AnyCapEval. Notably, ACM-8B raises GPT-4o's content scores by 45% and style scores by 12%, and it also achieves substantial gains on widely used benchmarks such as MIA-Bench and VidCapBench.
| null |
https://arxiv.org/abs/2507.12841v1
|
https://arxiv.org/pdf/2507.12841v1.pdf
| null |
[
"Yiming Ren",
"Zhiqiang Lin",
"Yu Li",
"Gao Meng",
"Weiyun Wang",
"Junjie Wang",
"Zicheng Lin",
"Jifeng Dai",
"Yujiu Yang",
"Wenhai Wang",
"Ruihang Chu"
] |
[
"Instruction Following"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/autoregressive-speech-enhancement-via
|
2507.12825
| null | null |
Autoregressive Speech Enhancement via Acoustic Tokens
|
In speech processing pipelines, improving the quality and intelligibility of real-world recordings is crucial. While supervised regression is the primary method for speech enhancement, audio tokenization is emerging as a promising alternative for a smooth integration with other modalities. However, research on speech enhancement using discrete representations is still limited. Previous work has mainly focused on semantic tokens, which tend to discard key acoustic details such as speaker identity. Additionally, these studies typically employ non-autoregressive models, assuming conditional independence of outputs and overlooking the potential improvements offered by autoregressive modeling. To address these gaps, we: 1) conduct a comprehensive study of the performance of acoustic tokens for speech enhancement, including the effect of bitrate and noise strength; 2) introduce a novel transducer-based autoregressive architecture specifically designed for this task. Experiments on VoiceBank and Libri1Mix datasets show that acoustic tokens outperform semantic tokens in terms of preserving speaker identity, and that our autoregressive approach can further improve performance. Nevertheless, we observe that discrete representations still fall short compared to continuous ones, highlighting the need for further research in this area.
| null |
https://arxiv.org/abs/2507.12825v1
|
https://arxiv.org/pdf/2507.12825v1.pdf
| null |
[
"Luca Della Libera",
"Cem Subakan",
"Mirco Ravanelli"
] |
[
"Speech Enhancement"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/far-net-multi-stage-fusion-network-with
|
2507.12823
| null | null |
FAR-Net: Multi-Stage Fusion Network with Enhanced Semantic Alignment and Adaptive Reconciliation for Composed Image Retrieval
|
Composed image retrieval (CIR) is a vision-language task that retrieves a target image using a reference image and modification text, enabling intuitive specification of desired changes. While effectively fusing visual and textual modalities is crucial, existing methods typically adopt either early or late fusion. Early fusion tends to excessively focus on explicitly mentioned textual details and neglect visual context, whereas late fusion struggles to capture fine-grained semantic alignments between image regions and textual tokens. To address these issues, we propose FAR-Net, a multi-stage fusion framework designed with enhanced semantic alignment and adaptive reconciliation, integrating two complementary modules. The enhanced semantic alignment module (ESAM) employs late fusion with cross-attention to capture fine-grained semantic relationships, while the adaptive reconciliation module (ARM) applies early fusion with uncertainty embeddings to enhance robustness and adaptability. Experiments on CIRR and FashionIQ show consistent performance gains, improving Recall@1 by up to 2.4% and Recall@50 by 1.04% over existing state-of-the-art methods, empirically demonstrating that FAR-Net provides a robust and scalable solution to CIR tasks.
| null |
https://arxiv.org/abs/2507.12823v1
|
https://arxiv.org/pdf/2507.12823v1.pdf
| null |
[
"Jeong-Woo Park",
"Young-Eun Kim",
"Seong-Whan Lee"
] |
[
"Image Retrieval"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
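The ESAM described in the preceding entry hinges on late-fusion cross-attention between textual tokens and image-region features. Below is a minimal PyTorch sketch of that alignment step; the dimensions, head count, and the use of `nn.MultiheadAttention` are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

d = 256
# Cross-attention for late fusion: text tokens query image-region features
cross_attn = nn.MultiheadAttention(embed_dim=d, num_heads=8, batch_first=True)

text_tokens = torch.randn(2, 16, d)    # (batch, num text tokens, dim)
image_regions = torch.randn(2, 49, d)  # (batch, num image regions, dim)

# Queries come from the text; keys/values from image regions, so each token
# attends to the regions it semantically matches (fine-grained alignment).
aligned, attn_weights = cross_attn(text_tokens, image_regions, image_regions)
```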
https://paperswithcode.com/paper/mcot-re-multi-faceted-chain-of-thought-and-re
|
2507.12819
| null | null |
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval
|
Composed Image Retrieval (CIR) is the task of retrieving a target image from a gallery using a composed query consisting of a reference image and a modification text. Among various CIR approaches, training-free zero-shot methods based on pre-trained models are cost-effective but still face notable limitations. For example, sequential VLM-LLM pipelines process each modality independently, which often results in information loss and limits cross-modal interaction. In contrast, methods based on multimodal large language models (MLLMs) often focus exclusively on applying changes indicated by the text, without fully utilizing the contextual visual information from the reference image. To address these issues, we propose multi-faceted Chain-of-Thought with re-ranking (MCoT-RE), a training-free zero-shot CIR framework. MCoT-RE utilizes multi-faceted Chain-of-Thought to guide the MLLM to balance explicit modifications and contextual visual cues, generating two distinct captions: one focused on modification and the other integrating comprehensive visual-textual context. The first caption is used to filter candidate images. Subsequently, we combine these two captions and the reference image to perform multi-grained re-ranking. This two-stage approach facilitates precise retrieval by aligning with the textual modification instructions while preserving the visual context of the reference image. Through extensive experiments, MCoT-RE achieves state-of-the-art results among training-free methods, yielding improvements of up to 6.24% in Recall@10 on FashionIQ and 8.58% in Recall@1 on CIRR.
| null |
https://arxiv.org/abs/2507.12819v1
|
https://arxiv.org/pdf/2507.12819v1.pdf
| null |
[
"Jeong-Woo Park",
"Seong-Whan Lee"
] |
[
"Image Retrieval",
"Re-Ranking",
"Retrieval"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/semantic-guided-fine-tuning-of-foundation
|
2507.12807
| null | null |
Semantic-guided Fine-tuning of Foundation Model for Long-tailed Visual Recognition
|
The variance in class-wise sample sizes within long-tailed scenarios often results in degraded performance in less frequent classes. Fortunately, foundation models, pre-trained on vast open-world datasets, demonstrate strong potential for this task due to their generalizable representation, which promotes the development of adaptive strategies on pre-trained models in long-tailed learning. Advanced fine-tuning methods typically adjust visual encoders while neglecting the semantics derived from the frozen text encoder, overlooking the visual and textual alignment. To strengthen this alignment, we propose a novel approach, Semantic-guided fine-tuning of foundation model for long-tailed visual recognition (Sage), which incorporates semantic guidance derived from textual modality into the visual fine-tuning process. Specifically, we introduce an SG-Adapter that integrates class descriptions as semantic guidance to guide the fine-tuning of the visual encoder. The introduced guidance is passed through the attention mechanism and enables the model to focus more on semantically relevant content, strengthening the alignment between the visual and textual modalities. Because the existing loss function neglects the inconsistency of class-conditional distributions, the resulting prediction bias yields smaller performance improvements for tail classes than for head classes, even when the multi-modal alignment is enhanced. To address this challenge, we propose a novel distribution mismatch-aware compensation factor, which is specifically designed to rectify the prediction bias caused by the ignored inconsistent distribution based on our theoretical analysis, and is seamlessly integrated into the loss function. Extensive experiments on benchmark datasets demonstrate the effectiveness of the proposed Sage in enhancing performance in long-tailed learning.
| null |
https://arxiv.org/abs/2507.12807v1
|
https://arxiv.org/pdf/2507.12807v1.pdf
| null |
[
"Yufei Peng",
"Yonggang Zhang",
"Yiu-ming Cheung"
] |
[] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/deqa-doc-adapting-deqa-score-to-document
|
2507.12796
| null | null |
DeQA-Doc: Adapting DeQA-Score to Document Image Quality Assessment
|
Document quality assessment is critical for a wide range of applications including document digitization, OCR, and archival. However, existing approaches often struggle to provide accurate and robust quality scores, limiting their applicability in practical scenarios. With the rapid progress in Multi-modal Large Language Models (MLLMs), recent MLLM-based methods have achieved remarkable performance in image quality assessment. In this work, we extend this success to the document domain by adapting DeQA-Score, a state-of-the-art MLLM-based image quality scorer, for document quality assessment. We propose DeQA-Doc, a framework that leverages the visual language capabilities of MLLMs and a soft label strategy to regress continuous document quality scores. To adapt DeQA-Score to DeQA-Doc, we adopt two complementary solutions to construct soft labels without the variance information. Also, we relax the resolution constraints to support the high resolution of document images. Finally, we introduce ensemble methods to further enhance the performance. Extensive experiments demonstrate that DeQA-Doc significantly outperforms existing baselines, offering accurate and generalizable document quality assessment across diverse degradation types. Codes and model weights are available in https://github.com/Junjie-Gao19/DeQA-Doc.
| null |
https://arxiv.org/abs/2507.12796v1
|
https://arxiv.org/pdf/2507.12796v1.pdf
| null |
[
"Junjie Gao",
"Runze Liu",
"Yingzhe Peng",
"Shujian Yang",
"Jin Zhang",
"Kai Yang",
"Zhiyuan You"
] |
[
"Document Image Quality Assessment",
"Image Quality Assessment",
"Optical Character Recognition (OCR)"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
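DeQA-style scorers regress a continuous quality score by treating the MLLM's probabilities over discrete rating words as a soft label. A minimal sketch of that expectation step follows; the five level words and their numeric values are illustrative assumptions, not necessarily the paper's exact vocabulary.

```python
import torch

# Hypothetical rating vocabulary and numeric values (assumed for illustration)
LEVELS = {"excellent": 5.0, "good": 4.0, "fair": 3.0, "poor": 2.0, "bad": 1.0}

def expected_quality_score(level_logits: torch.Tensor) -> torch.Tensor:
    """level_logits: (batch, 5) logits of the rating tokens at the answer slot."""
    probs = level_logits.softmax(dim=-1)      # soft label over rating levels
    values = torch.tensor(list(LEVELS.values()))
    return probs @ values                     # continuous score in [1, 5]

print(expected_quality_score(torch.tensor([[2.0, 1.0, 0.0, -1.0, -2.0]])))
```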
https://paperswithcode.com/paper/city-vlm-towards-multidomain-perception-scene
|
2507.12795
| null | null |
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning
|
Scene understanding enables intelligent agents to interpret and comprehend their environment. While existing large vision-language models (LVLMs) for scene understanding have primarily focused on indoor household tasks, they face two significant limitations when applied to outdoor large-scale scene understanding. First, outdoor scenarios typically encompass larger-scale environments observed through various sensors from multiple viewpoints (e.g., bird's-eye view and terrestrial view), while existing indoor LVLMs mainly analyze single visual modalities within building-scale contexts from humanoid viewpoints. Second, existing LVLMs suffer from missing multidomain perception outdoor data and struggle to effectively integrate 2D and 3D visual information. To address the aforementioned limitations, we build the first multidomain perception outdoor scene understanding dataset, named SVM-City, derived from multi-Scale scenarios with multi-View and multi-Modal instruction tuning data. It contains 420k images and 4,811M point clouds with 567k question-answering pairs from vehicles, low-altitude drones, high-altitude aerial planes, and satellites. To effectively fuse the multimodal data in the absence of one modality, we introduce incomplete multimodal learning to model outdoor scene understanding and design the LVLM named City-VLM. Multimodal fusion is realized by constructing a joint probabilistic distribution space rather than directly implementing explicit fusion operations (e.g., concatenation). Experimental results on three typical outdoor scene understanding tasks show that City-VLM surpasses existing LVLMs on question-answering tasks by 18.14% on average. Our method demonstrates practical and generalizable performance across multiple outdoor scenes.
| null |
https://arxiv.org/abs/2507.12795v1
|
https://arxiv.org/pdf/2507.12795v1.pdf
| null |
[
"Penglei Sun",
"Yaoxian Song",
"Xiangru Zhu",
"Xiang Liu",
"Qiang Wang",
"Yue Liu",
"Changqun Xia",
"Tiefeng Li",
"Yang Yang",
"Xiaowen Chu"
] |
[
"Question Answering",
"Scene Understanding"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multi-channel-graph-neural-network-for
|
2507.12787
| null | null |
Multi-Channel Graph Neural Network for Financial Risk Prediction of NEEQ Enterprises
|
With the continuous evolution of China's multi-level capital market, the National Equities Exchange and Quotations (NEEQ), also known as the "New Third Board," has become a critical financing platform for small and medium-sized enterprises (SMEs). However, due to their limited scale and financial resilience, many NEEQ-listed companies face elevated risks of financial distress. To address this issue, we propose a multi-channel deep learning framework that integrates structured financial indicators, textual disclosures, and enterprise relationship data for comprehensive financial risk prediction. Specifically, we design a Triple-Channel Graph Isomorphism Network (GIN) that processes numeric, textual, and graph-based inputs separately. These modality-specific representations are fused using an attention-based mechanism followed by a gating unit to enhance robustness and prediction accuracy. Experimental results on data from 7,731 real-world NEEQ companies demonstrate that our model significantly outperforms traditional machine learning methods and single-modality baselines in terms of AUC, Precision, Recall, and F1 Score. This work provides theoretical and practical insights into risk modeling for SMEs and offers a data-driven tool to support financial regulators and investors.
| null |
https://arxiv.org/abs/2507.12787v1
|
https://arxiv.org/pdf/2507.12787v1.pdf
| null |
[
"Jianyu Zhu"
] |
[
"Graph Neural Network"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
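The abstract above describes attention-based fusion of three channel embeddings followed by a gating unit. A hedged PyTorch sketch of such a fusion head is below; the dimensions and the exact gating form are our assumptions for illustration, not the paper's specification.

```python
import torch
import torch.nn as nn

d = 64
attn = nn.Linear(d, 1)                               # scores each channel
gate = nn.Sequential(nn.Linear(d, d), nn.Sigmoid())  # element-wise gating unit
clf = nn.Linear(d, 1)                                # risk prediction head

# Placeholder embeddings from the numeric, textual, and graph channels
num_e, txt_e, gph_e = torch.randn(8, d), torch.randn(8, d), torch.randn(8, d)

stack = torch.stack([num_e, txt_e, gph_e], dim=1)    # (batch, 3, d)
w = attn(stack).softmax(dim=1)                       # attention over channels
fused = (w * stack).sum(dim=1)                       # attention-weighted fusion
risk_logit = clf(gate(fused) * fused)                # gated representation -> risk
```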
https://paperswithcode.com/paper/multimodal-guided-dynamic-dataset-pruning-for
|
2507.12750
| null | null |
Multimodal-Guided Dynamic Dataset Pruning for Robust and Efficient Data-Centric Learning
|
Modern deep models are trained on large real-world datasets, where data quality varies and redundancy is common. Data-centric approaches such as dataset pruning have shown promise in improving training efficiency and model performance. However, most existing methods rely on static heuristics or task-specific metrics, limiting their robustness and generalizability across domains. In this work, we introduce a dynamic dataset pruning framework that adaptively selects training samples based on both task-driven difficulty and cross-modality semantic consistency. By incorporating supervision from pretrained multimodal foundation models, our approach captures training dynamics while effectively filtering out uninformative samples. Our work highlights the potential of integrating cross-modality alignment for robust sample selection, advancing data-centric learning toward more efficient and robust practices across application domains.
| null |
https://arxiv.org/abs/2507.12750v1
|
https://arxiv.org/pdf/2507.12750v1.pdf
| null |
[
"Suorong Yang",
"Peijia Li",
"Yujie Liu",
"Zhiming Xu",
"Peng Ye",
"Wanli Ouyang",
"Furao Shen",
"Dongzhan Zhou"
] |
[] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
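A hedged sketch of the selection rule the preceding abstract implies: combine a task-driven difficulty signal (e.g., per-sample loss) with a cross-modal consistency score (e.g., CLIP image-text similarity) and keep the highest-scoring fraction each epoch. The equal weighting and the keep ratio are illustrative assumptions.

```python
import numpy as np

def select_samples(losses, clip_sims, keep_ratio=0.7, w_task=0.5, w_modal=0.5):
    """Return indices of samples kept for the next epoch."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min() + 1e-8)
    # High loss = informative/difficult; high similarity = semantically consistent
    score = w_task * norm(losses) + w_modal * norm(clip_sims)
    k = int(keep_ratio * len(score))
    return np.argsort(-score)[:k]

keep = select_samples(losses=np.random.rand(1000), clip_sims=np.random.rand(1000))
```

Because the scores are recomputed as training progresses, the kept subset changes over epochs, which is what makes the pruning dynamic rather than a one-shot filter.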
https://paperswithcode.com/paper/transformer-based-spatial-grounding-a
|
2507.12739
| null | null |
Transformer-based Spatial Grounding: A Comprehensive Survey
|
Spatial grounding, the process of associating natural language expressions with corresponding image regions, has rapidly advanced due to the introduction of transformer-based models, significantly enhancing multimodal representation and cross-modal alignment. Despite this progress, the field lacks a comprehensive synthesis of current methodologies, dataset usage, evaluation metrics, and industrial applicability. This paper presents a systematic literature review of transformer-based spatial grounding approaches from 2018 to 2025. Our analysis identifies dominant model architectures, prevalent datasets, and widely adopted evaluation metrics, alongside highlighting key methodological trends and best practices. This study provides essential insights and structured guidance for researchers and practitioners, facilitating the development of robust, reliable, and industry-ready transformer-based spatial grounding models.
| null |
https://arxiv.org/abs/2507.12739v1
|
https://arxiv.org/pdf/2507.12739v1.pdf
| null |
[
"Ijazul Haq",
"Muhammad Saqib",
"Yingjie Zhang"
] |
[
"cross-modal alignment",
"Survey",
"Systematic Literature Review"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/pixel-perfect-megamed-a-megapixel-scale
|
2507.12698
| null | null |
Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images
|
Medical image synthesis presents unique challenges due to the inherent complexity and high-resolution details required in clinical contexts. Traditional generative architectures such as Generative Adversarial Networks (GANs) or Variational Auto-Encoders (VAEs) have shown great promise for high-resolution image generation but struggle with preserving fine-grained details that are key for accurate diagnosis. To address this issue, we introduce Pixel Perfect MegaMed, the first vision-language foundation model to synthesize images at resolutions of 1024x1024. Our method deploys a multi-scale transformer architecture designed specifically for ultra-high resolution medical image generation, enabling the preservation of both global anatomical context and local image-level details. By leveraging vision-language alignment techniques tailored to medical terminology and imaging modalities, Pixel Perfect MegaMed bridges the gap between textual descriptions and visual representations at unprecedented resolution levels. We apply our model to the CheXpert dataset and demonstrate its ability to generate clinically faithful chest X-rays from text prompts. Beyond visual quality, these high-resolution synthetic images prove valuable for downstream tasks such as classification, showing measurable performance gains when used for data augmentation, particularly in low-data regimes. Our code is accessible through the project website - https://tehraninasab.github.io/pixelperfect-megamed.
| null |
https://arxiv.org/abs/2507.12698v1
|
https://arxiv.org/pdf/2507.12698v1.pdf
| null |
[
"Zahra Tehraninasab",
"Amar Kumar",
"Tal Arbel"
] |
[
"Data Augmentation",
"Image Generation",
"Medical Image Generation"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adaptisent-context-aware-adaptive-attention
|
2507.12695
| null | null |
AdaptiSent: Context-Aware Adaptive Attention for Multimodal Aspect-Based Sentiment Analysis
|
We introduce AdaptiSent, a new framework for Multimodal Aspect-Based Sentiment Analysis (MABSA) that uses adaptive cross-modal attention mechanisms to improve sentiment classification and aspect term extraction from both text and images. Our model integrates dynamic modality weighting and context-adaptive attention, enhancing the extraction of sentiment and aspect-related information by focusing on how textual cues and visual context interact. We tested our approach against several baselines, including traditional text-based models and other multimodal methods. Results from standard Twitter datasets show that AdaptiSent surpasses existing models in precision, recall, and F1 score, and is particularly effective in identifying nuanced inter-modal relationships that are crucial for accurate sentiment and aspect term extraction. This effectiveness comes from the model's ability to adjust its focus dynamically based on the context's relevance, improving the depth and accuracy of sentiment analysis across various multimodal data sets. AdaptiSent sets a new standard for MABSA, significantly outperforming current methods, especially in understanding complex multimodal information.
| null |
https://arxiv.org/abs/2507.12695v1
|
https://arxiv.org/pdf/2507.12695v1.pdf
| null |
[
"S M Rafiuddin",
"Sadia Kamal",
"Mohammed Rakib",
"Arunkumar Bagavathi",
"Atriya Sen"
] |
[
"Aspect-Based Sentiment Analysis",
"Sentiment Analysis",
"Sentiment Classification",
"Term Extraction"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
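The dynamic modality weighting described in the preceding entry can be illustrated with a small learned gate that decides, per example, how much the text and image representations contribute. This sketch is purely illustrative of the idea, not the paper's architecture.

```python
import torch
import torch.nn as nn

d = 128
# A learned gate producing per-example weights over the two modalities
gate = nn.Sequential(nn.Linear(2 * d, 2), nn.Softmax(dim=-1))

text_feat, img_feat = torch.randn(4, d), torch.randn(4, d)
w = gate(torch.cat([text_feat, img_feat], dim=-1))   # (batch, 2) modality weights
fused = w[:, :1] * text_feat + w[:, 1:] * img_feat   # context-adaptive fusion
```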
https://paperswithcode.com/paper/is-this-just-fantasy-language-model
|
2507.12553
| null | null |
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility
|
Language models (LMs) are used for a diverse range of tasks, from question answering to writing fantastical stories. In order to reliably accomplish these tasks, LMs must be able to discern the modal category of a sentence (i.e., whether it describes something that is possible, impossible, completely nonsensical, etc.). However, recent studies have called into question the ability of LMs to categorize sentences according to modality (Michaelov et al., 2025; Kauf et al., 2023). In this work, we identify linear representations that discriminate between modal categories within a variety of LMs, or modal difference vectors. Analysis of modal difference vectors reveals that LMs have access to more reliable modal categorization judgments than previously reported. Furthermore, we find that modal difference vectors emerge in a consistent order as models become more competent (i.e., through training steps, layers, and parameter count). Notably, we find that modal difference vectors identified within LM activations can be used to model fine-grained human categorization behavior. This potentially provides a novel view into how human participants distinguish between modal categories, which we explore by correlating projections along modal difference vectors with human participants' ratings of interpretable features. In summary, we derive new insights into LM modal categorization using techniques from mechanistic interpretability, with the potential to inform our understanding of modal categorization in humans.
| null |
https://arxiv.org/abs/2507.12553v1
|
https://arxiv.org/pdf/2507.12553v1.pdf
| null |
[
"Michael A. Lepori",
"Jennifer Hu",
"Ishita Dasgupta",
"Roma Patel",
"Thomas Serre",
"Ellie Pavlick"
] |
[
"Language Modeling",
"Language Modelling",
"Question Answering"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
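A minimal NumPy sketch of the paper's central object, a modal difference vector: the difference between mean activations of sentences from two modal categories, used as a linear direction onto which new activations are projected. The activations below are synthetic placeholders standing in for a real LM's hidden states.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder hidden states for sentences from two modal categories
H_possible = rng.normal(0.0, 1.0, size=(100, 768))
H_impossible = rng.normal(0.5, 1.0, size=(100, 768))

# Modal difference vector: gap between the category means, normalized
v = H_possible.mean(axis=0) - H_impossible.mean(axis=0)
v /= np.linalg.norm(v)

# Projecting a new sentence's activation onto v gives a graded modality score
# that can be correlated with human plausibility ratings.
score = rng.normal(size=768) @ v
```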
https://paperswithcode.com/paper/comparing-apples-to-oranges-a-dataset
|
2507.13335
| null | null |
Comparing Apples to Oranges: A Dataset & Analysis of LLM Humour Understanding from Traditional Puns to Topical Jokes
|
Humour, as a complex language form, is derived from myriad aspects of life, whilst existing work on computational humour has focussed almost exclusively on short pun-based jokes. In this work, we investigate whether the ability of Large Language Models (LLMs) to explain humour depends on the particular humour form. We compare models on simple puns and more complex topical humour that requires knowledge of real-world entities and events. In doing so, we curate a dataset of 600 jokes split across 4 joke types and manually write high-quality explanations. These jokes include heterographic and homographic puns, contemporary internet humour, and topical jokes, where understanding relies on reasoning beyond "common sense", rooted instead in world knowledge regarding news events and pop culture. Using this dataset, we compare the zero-shot abilities of a range of LLMs to accurately and comprehensively explain jokes of different types, identifying key research gaps in the task of humour explanation. We find that none of the tested models (inc. reasoning models) are capable of reliably generating adequate explanations of all joke types, further highlighting the narrow focus of most works in computational humour on overly simple joke forms.
| null |
https://arxiv.org/abs/2507.13335v1
|
https://arxiv.org/pdf/2507.13335v1.pdf
| null |
[
"Tyler Loakman",
"William Thorne",
"Chenghua Lin"
] |
[
"Common Sense Reasoning",
"World Knowledge"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-framework-for-waterfall-pricing-using
|
2507.13324
| null | null |
A Framework for Waterfall Pricing Using Simulation-Based Uncertainty Modeling
|
We present a novel framework for pricing waterfall structures by simulating the uncertainty of the cashflow generated by the underlying assets in terms of value, time, and confidence levels. Our approach incorporates various probability distributions calibrated on the market price of the tranches at inception. The framework is fully implemented in PyTorch, leveraging its computational efficiency and automatic differentiation capabilities through Adjoint Algorithmic Differentiation (AAD). This enables efficient gradient computation for risk sensitivity analysis and optimization. The proposed methodology provides a flexible and scalable solution for pricing complex structured finance instruments under uncertainty.
| null |
https://arxiv.org/abs/2507.13324v1
|
https://arxiv.org/pdf/2507.13324v1.pdf
| null |
[
"Nicola Jean",
"Giacomo Le Pera",
"Lorenzo Giada",
"Claudio Nordio"
] |
[
"Computational Efficiency",
"Sensitivity"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
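A hedged sketch of the AAD workflow the abstract describes: simulate uncertain cashflows in PyTorch and let autograd's adjoint pass deliver all parameter sensitivities of a tranche value in one backward call. The lognormal cashflow model and the tranche bounds here are assumptions chosen for illustration.

```python
import torch

# Model parameters we want sensitivities with respect to
mu = torch.tensor(0.05, requires_grad=True)
sigma = torch.tensor(0.20, requires_grad=True)

torch.manual_seed(0)
z = torch.randn(100_000)
# Simulated pool cashflow under an assumed lognormal model
cashflow = 100.0 * torch.exp(mu - 0.5 * sigma**2 + sigma * z)

attach_pt, detach_pt = 90.0, 110.0            # tranche attachment/detachment
tranche = torch.clamp(cashflow - attach_pt, 0.0, detach_pt - attach_pt)
price = tranche.mean()

price.backward()                              # adjoint pass: all sensitivities at once
print(price.item(), mu.grad.item(), sigma.grad.item())
```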
https://paperswithcode.com/paper/nufft-for-the-fast-cos-method
|
2507.13186
| null | null |
NUFFT for the Fast COS Method
|
The COS method is a very efficient way to compute European option prices under Lévy models or affine stochastic volatility models, based on a Fourier cosine expansion of the density, involving the characteristic function. This note shows how to compute the COS method formula with a non-uniform fast Fourier transform, making it possible to price many options of the same maturity but different strikes at unprecedented speed.
| null |
https://arxiv.org/abs/2507.13186v2
|
https://arxiv.org/pdf/2507.13186v2.pdf
| null |
[
"Fabien LeFloc'h"
] |
[] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
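For context, below is a compact NumPy implementation of the plain COS formula that the note accelerates, for a European call under Black-Scholes (where the characteristic function is Gaussian). The truncation at L = 10 standard deviations and N = 256 terms are conventional choices, not taken from the note; the NUFFT speedup itself is not shown.

```python
import numpy as np

def cos_call_price(S0, K, T, r, sigma, N=256, L=10.0):
    """European call via the COS method with a Gaussian characteristic function."""
    # Cumulants of x = ln(S_T / K) under Black-Scholes
    c1 = np.log(S0 / K) + (r - 0.5 * sigma**2) * T
    c2 = sigma**2 * T
    a, b = c1 - L * np.sqrt(c2), c1 + L * np.sqrt(c2)  # truncation range

    k = np.arange(N)
    u = k * np.pi / (b - a)

    # Characteristic function of x evaluated at the expansion frequencies
    phi = np.exp(1j * u * c1 - 0.5 * c2 * u**2)

    # Payoff cosine coefficients V_k for a call: integrate e^x and 1 over [0, b]
    def chi(c, d):
        num = (np.cos(u * (d - a)) * np.exp(d) - np.cos(u * (c - a)) * np.exp(c)
               + u * (np.sin(u * (d - a)) * np.exp(d) - np.sin(u * (c - a)) * np.exp(c)))
        return num / (1.0 + u**2)

    def psi(c, d):
        out = (np.sin(u[1:] * (d - a)) - np.sin(u[1:] * (c - a))) / u[1:]
        return np.concatenate(([d - c], out))

    V = 2.0 / (b - a) * K * (chi(0.0, b) - psi(0.0, b))

    terms = np.real(phi * np.exp(-1j * u * a)) * V
    terms[0] *= 0.5  # the first term is halved in the cosine expansion
    return np.exp(-r * T) * terms.sum()

print(cos_call_price(S0=100.0, K=110.0, T=1.0, r=0.05, sigma=0.2))
```

Pricing many strikes means re-evaluating the sum over k for each K, which is exactly the many-strike loop a non-uniform FFT can batch.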
https://paperswithcode.com/paper/prompt-injection-2-0-hybrid-ai-threats
|
2507.13169
| null | null |
Prompt Injection 2.0: Hybrid AI Threats
|
Prompt injection attacks, where malicious input is designed to manipulate AI systems into ignoring their original instructions and following unauthorized commands instead, were first discovered by Preamble, Inc. in May 2022 and responsibly disclosed to OpenAI. Over the last three years, these attacks have continued to pose a critical security threat to LLM-integrated systems. The emergence of agentic AI systems, where LLMs autonomously perform multistep tasks through tools and coordination with other agents, has fundamentally transformed the threat landscape. Modern prompt injection attacks can now combine with traditional cybersecurity exploits to create hybrid threats that systematically evade traditional security controls. This paper presents a comprehensive analysis of Prompt Injection 2.0, examining how prompt injections integrate with Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF), and other web security vulnerabilities to bypass traditional security measures. We build upon Preamble's foundational research and mitigation technologies, evaluating them against contemporary threats, including AI worms, multi-agent infections, and hybrid cyber-AI attacks. Our analysis incorporates recent benchmarks that demonstrate how traditional web application firewalls, XSS filters, and CSRF tokens fail against AI-enhanced attacks. We also present architectural solutions that combine prompt isolation, runtime security, and privilege separation with novel threat detection capabilities.
| null |
https://arxiv.org/abs/2507.13169v1
|
https://arxiv.org/pdf/2507.13169v1.pdf
| null |
[
"Jeremy McHugh",
"Kristina Šekrst",
"Jon Cefalu"
] |
[] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/the-power-of-architecture-deep-dive-into
|
2507.13043
| null | null |
The Power of Architecture: Deep Dive into Transformer Architectures for Long-Term Time Series Forecasting
|
Transformer-based models have recently become dominant in Long-term Time Series Forecasting (LTSF), yet the variations in their architecture, such as encoder-only, encoder-decoder, and decoder-only designs, raise a crucial question: What Transformer architecture works best for LTSF tasks? However, existing models are often tightly coupled with various time-series-specific designs, making it difficult to isolate the impact of the architecture itself. To address this, we propose a novel taxonomy that disentangles these designs, enabling clearer and more unified comparisons of Transformer architectures. Our taxonomy considers key aspects such as attention mechanisms, forecasting aggregations, forecasting paradigms, and normalization layers. Through extensive experiments, we uncover several key insights: bi-directional attention with joint-attention is most effective; more complete forecasting aggregation improves performance; and the direct-mapping paradigm outperforms autoregressive approaches. Furthermore, our combined model, utilizing optimal architectural choices, consistently outperforms several existing models, reinforcing the validity of our conclusions. We hope these findings offer valuable guidance for future research on Transformer architectural designs in LTSF. Our code is available at https://github.com/HALF111/TSF_architecture.
| null |
https://arxiv.org/abs/2507.13043v1
|
https://arxiv.org/pdf/2507.13043v1.pdf
| null |
[
"Lefei Shen",
"Mouxiang Chen",
"Han Fu",
"Xiaoxue Ren",
"Xiaoyun Joy Wang",
"Jianling Sun",
"Zhuo Li",
"Chenghao Liu"
] |
[
"Decoder",
"Time Series",
"Time Series Forecasting"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/smart-relation-aware-learning-of-geometric
|
2507.13001
| null | null |
SMART: Relation-Aware Learning of Geometric Representations for Knowledge Graphs
|
Knowledge graph representation learning approaches provide a mapping between symbolic knowledge in the form of triples in a knowledge graph (KG) and their feature vectors. Knowledge graph embedding (KGE) models often represent relations in a KG as geometric transformations. Most state-of-the-art (SOTA) KGE models are derived from elementary geometric transformations (EGTs), such as translation, scaling, rotation, and reflection, or their combinations. These geometric transformations enable the models to effectively preserve specific structural and relational patterns of the KG. However, the current use of EGTs by KGEs remains insufficient without considering relation-specific transformations. Although recent models attempted to address this problem by ensembling SOTA baseline models in different ways, only a single or composite version of geometric transformations is used by such baselines to represent all the relations. In this paper, we propose a framework that evaluates how well each relation fits with different geometric transformations. Based on this ranking, the model can: (1) assign the best-matching transformation to each relation, or (2) use majority voting to choose one transformation type to apply across all relations. That is, the model learns a single relation-specific EGT in a low-dimensional vector space through an attention mechanism. Furthermore, we use the correlation between relations and EGTs, which is learned in a low dimension, for relation embeddings in a high-dimensional vector space. The effectiveness of our models is demonstrated through comprehensive evaluations on three benchmark KGs as well as a real-world financial KG, where they achieve performance comparable to leading models.
| null |
https://arxiv.org/abs/2507.13001v1
|
https://arxiv.org/pdf/2507.13001v1.pdf
| null |
[
"Kossi Amouzouvi",
"Bowen Song",
"Andrea Coletta",
"Luigi Bellomarini",
"Jens Lehmann",
"Sahar Vahdati"
] |
[
"Graph Embedding",
"Graph Representation Learning",
"Knowledge Graph Embedding",
"Knowledge Graphs",
"Relation"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
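Each elementary geometric transformation the paper ranks per relation induces a familiar triple-scoring function. A toy NumPy sketch of three of them (translation, scaling, and rotation) follows; the embeddings are random placeholders and the scores are for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
h, t = rng.normal(size=64), rng.normal(size=64)      # head/tail entity embeddings
r_translate = rng.normal(size=64)
r_scale = rng.normal(size=64)
r_phase = rng.uniform(0, 2 * np.pi, size=32)         # rotation angles (complex view)

def score_translation(h, r, t):
    return -np.linalg.norm(h + r - t)                # TransE-style translation

def score_scaling(h, r, t):
    return -np.linalg.norm(h * r - t)                # element-wise scaling

def score_rotation(h, r_phase, t):
    hc = h[:32] + 1j * h[32:]                        # view embeddings as complex
    tc = t[:32] + 1j * t[32:]
    return -np.linalg.norm(hc * np.exp(1j * r_phase) - tc)  # RotatE-style rotation

# Ranking these per-relation fit scores is the basis for assigning each
# relation its best-matching transformation (or majority-voting one type).
```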
https://paperswithcode.com/paper/improving-dapo-from-a-mixed-policy
|
2507.12931
| null | null |
Improving DAPO from a Mixed-Policy Perspective
|
This paper introduces two novel modifications to the Dynamic sAmpling Policy Optimization (DAPO) algorithm [1], approached from a mixed-policy perspective. Standard policy gradient methods can suffer from instability and sample inefficiency, particularly in sparse reward settings. To address this, we first propose a method that incorporates a pre-trained, stable guiding policy ($\pi_\phi$) to provide off-policy experience, thereby regularizing the training of the target policy ($\pi_{on}$). This approach improves training stability and convergence speed by adaptively adjusting the learning step size. Secondly, we extend this idea to re-utilize zero-reward samples, which are often discarded by dynamic sampling strategies like DAPO's. By treating these samples as a distinct batch guided by the expert policy, we further enhance sample efficiency. We provide a theoretical analysis for both methods, demonstrating that their objective functions converge to the optimal solution within the established theoretical framework of reinforcement learning. The proposed mixed-policy framework effectively balances exploration and exploitation, promising more stable and efficient policy optimization.
| null |
https://arxiv.org/abs/2507.12931v2
|
https://arxiv.org/pdf/2507.12931v2.pdf
| null |
[
"Hongze Tan"
] |
[
"Policy Gradient Methods"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/laviplan-language-guided-visual-path-planning
|
2507.12911
| null | null |
LaViPlan : Language-Guided Visual Path Planning with RLVR
|
Out-of-distribution (OOD) scenarios in autonomous driving refer to situations that deviate from the training domain, often leading to unexpected and potentially hazardous behavior from planners that lack prior exposure to such cases. Recently, Vision-Language Models (VLMs) have been introduced into autonomous driving research for their promising generalization capabilities in OOD settings. Early studies demonstrated that VLMs could recognize OOD scenarios and generate user-level decisions such as "go straight" or "turn right." However, a new challenge has emerged due to the misalignment between the VLM's high-level decisions or visual reasoning expressed in language, and the low-level predicted trajectories interpreted as actions. In this paper, we propose LaViPlan, a framework that leverages Reinforcement Learning with Verifiable Rewards (RLVR) to optimize VLMs using planning-oriented metrics. This approach addresses the vision-language-action misalignment observed in existing VLMs fine-tuned via supervised learning, which can recognize driving scenarios but often produce context-unaware decisions. Experimental results demonstrate that our method improves situational awareness and decision-making under OOD conditions, highlighting its potential to mitigate the misalignment issue. This work introduces a promising post-training paradigm for VLM agents in the context of autonomous driving.
| null |
https://arxiv.org/abs/2507.12911v2
|
https://arxiv.org/pdf/2507.12911v2.pdf
| null |
[
"Hayeon Oh"
] |
[
"Autonomous Driving",
"Vision-Language-Action",
"Visual Reasoning"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/autonomous-resource-management-in-1
|
2507.12879
| null | null |
Autonomous Resource Management in Microservice Systems via Reinforcement Learning
|
This paper proposes a reinforcement learning-based method for microservice resource scheduling and optimization, aiming to address issues such as uneven resource allocation, high latency, and insufficient throughput in traditional microservice architectures. In microservice systems, as the number of services and the load increase, efficiently scheduling and allocating resources such as computing power, memory, and storage becomes a critical research challenge. To address this, the paper employs an intelligent scheduling algorithm based on reinforcement learning. Through the interaction between the agent and the environment, the resource allocation strategy is continuously optimized. In the experiments, the paper considers different resource conditions and load scenarios, evaluating the proposed method across multiple dimensions, including response time, throughput, resource utilization, and cost efficiency. The experimental results show that the reinforcement learning-based scheduling method significantly improves system response speed and throughput under low load and high concurrency conditions, while also optimizing resource utilization and reducing energy consumption. Under multi-dimensional resource conditions, the proposed method can consider multiple objectives and achieve optimized resource scheduling. Compared to traditional static resource allocation methods, the reinforcement learning model demonstrates stronger adaptability and optimization capability. It can adjust resource allocation strategies in real time, thereby maintaining good system performance in dynamically changing load and resource environments.
| null |
https://arxiv.org/abs/2507.12879v1
|
https://arxiv.org/pdf/2507.12879v1.pdf
| null |
[
"Yujun Zou",
"Nia Qi",
"Yingnan Deng",
"Zhihao Xue",
"Ming Gong",
"Wuyang Zhang"
] |
[
"Management",
"reinforcement-learning",
"Reinforcement Learning",
"Scheduling"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/supervised-fine-tuning-on-curated-data-is
|
2507.12856
| null | null |
Supervised Fine Tuning on Curated Data is Reinforcement Learning (and can be improved)
|
Behavior Cloning (BC) on curated (or filtered) data is the predominant paradigm for supervised fine-tuning (SFT) of large language models, as well as for imitation learning of control policies. Here, we draw on a connection between this successful strategy and the theory and practice of finding optimal policies via Reinforcement Learning (RL). Building on existing literature, we clarify that SFT can be understood as maximizing a lower bound on the RL objective in a sparse reward setting, which lends support to its often observed good performance. From this viewpoint, we realize that a small modification to SFT leads to an importance weighted variant that behaves closer to training with RL as it: i) optimizes a tighter bound on the RL objective and ii) can improve performance compared to SFT on curated data. We refer to this variant as importance weighted supervised fine-tuning (iw-SFT). We show that it is easy to implement and can be further generalized to training with quality scored data. The resulting SFT variants are competitive with more advanced RL algorithms for large language models and for training policies in continuous control tasks, for example achieving 66.7% on the AIME 2024 dataset.
| null |
https://arxiv.org/abs/2507.12856v1
|
https://arxiv.org/pdf/2507.12856v1.pdf
| null |
[
"Chongli Qin",
"Jost Tobias Springenberg"
] |
[
"continuous-control",
"Continuous Control",
"Imitation Learning",
"Reinforcement Learning (RL)"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
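A minimal sketch of the importance-weighted SFT idea: the standard token-level behavior-cloning loss is rescaled by a sequence-level weight. The specific weight used here (exponentiated log-likelihood gap to a frozen reference policy, detached and clipped) is our assumption for illustration, not necessarily the paper's exact estimator.

```python
import torch
import torch.nn.functional as F

def iw_sft_loss(logits, ref_logits, labels, clip=5.0):
    """logits/ref_logits: (B, T, V); labels: (B, T) token ids."""
    # Token log-likelihoods under the current policy and a frozen reference
    logp = -F.cross_entropy(logits.transpose(1, 2), labels, reduction="none")
    ref_logp = -F.cross_entropy(ref_logits.transpose(1, 2), labels, reduction="none")
    # Sequence-level importance weight (detached so it only rescales gradients)
    w = (logp.sum(-1) - ref_logp.sum(-1)).detach().exp().clamp(max=clip)
    return -(w * logp.sum(-1)).mean()

B, T, V = 2, 8, 100
demo = iw_sft_loss(torch.randn(B, T, V), torch.randn(B, T, V),
                   torch.randint(0, V, (B, T)))
```

With all weights equal to one, this reduces to plain SFT, which is the connection the abstract draws.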
https://paperswithcode.com/paper/from-novelty-to-imitation-self-distilled
|
2507.12815
| null | null |
From Novelty to Imitation: Self-Distilled Rewards for Offline Reinforcement Learning
|
Offline Reinforcement Learning (RL) aims to learn effective policies from a static dataset without requiring further agent-environment interactions. However, its practical adoption is often hindered by the need for explicit reward annotations, which can be costly to engineer or difficult to obtain retrospectively. To address this, we propose ReLOAD (Reinforcement Learning with Offline Reward Annotation via Distillation), a novel reward annotation framework for offline RL. Unlike existing methods that depend on complex alignment procedures, our approach adapts Random Network Distillation (RND) to generate intrinsic rewards from expert demonstrations using a simple yet effective embedding discrepancy measure. First, we train a predictor network to mimic a fixed target network's embeddings based on expert state transitions. Later, the prediction error between these networks serves as a reward signal for each transition in the static dataset. This mechanism provides a structured reward signal without requiring handcrafted reward annotations. We provide a formal theoretical construct that offers insights into how RND prediction errors effectively serve as intrinsic rewards by distinguishing expert-like transitions. Experiments on the D4RL benchmark demonstrate that ReLOAD enables robust offline policy learning and achieves performance competitive with traditional reward-annotated methods.
| null |
https://arxiv.org/abs/2507.12815v1
|
https://arxiv.org/pdf/2507.12815v1.pdf
| null |
[
"Gaurav Chaudhary",
"Laxmidhar Behera"
] |
[
"D4RL",
"Offline RL",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
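A minimal PyTorch sketch of RND-style reward annotation in the spirit of ReLOAD: a predictor network is trained to mimic a fixed random target network on expert state transitions, and its (negated) prediction error on dataset transitions then serves as the reward. Network sizes and the exact reward transform are assumptions.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

state_dim, embed_dim = 17, 64
target = mlp(2 * state_dim, embed_dim)      # fixed, randomly initialized
predictor = mlp(2 * state_dim, embed_dim)   # trained to mimic the target
for p in target.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(predictor.parameters(), lr=1e-4)

def train_on_expert(expert_s, expert_s_next, steps=200):
    x = torch.cat([expert_s, expert_s_next], dim=-1)
    for _ in range(steps):
        loss = ((predictor(x) - target(x)) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

@torch.no_grad()
def annotate_rewards(s, s_next):
    x = torch.cat([s, s_next], dim=-1)
    err = ((predictor(x) - target(x)) ** 2).mean(dim=-1)
    return -err  # low error = expert-like transition = high reward

expert_s, expert_s_next = torch.randn(128, state_dim), torch.randn(128, state_dim)
train_on_expert(expert_s, expert_s_next)
rewards = annotate_rewards(torch.randn(32, state_dim), torch.randn(32, state_dim))
```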
https://paperswithcode.com/paper/logit-arithmetic-elicits-long-reasoning
|
2507.12759
| null | null |
Logit Arithmetic Elicits Long Reasoning Capabilities Without Training
|
Large reasoning models (LRMs) can do complex reasoning via long chain-of-thought (CoT) involving cognitive strategies such as backtracking and self-correction. Recent studies suggest that some models inherently possess these long reasoning abilities, which may be unlocked via extra training. Our work first investigates whether we can elicit such behavior without any training. To this end, we propose a decoding-time approach, ThinkLogit, which utilizes logit arithmetic (Liu et al., 2024) to tune a target large LM for long reasoning using a substantially smaller model as guider. We then show that we can further boost performance by training the guider model with preference optimization over correct/incorrect reasoning pairs sampled from both the target and guider model, a setup we refer to as ThinkLogit-DPO. Our experiments demonstrate that ThinkLogit and ThinkLogit-DPO achieve a relative improvement in pass@1 by 26% and 29%, respectively, over four mathematical datasets using the Qwen2.5-32B when guided by R1-Distill-Qwen-1.5B, a model 21x smaller. Lastly, we show that ThinkLogit can transfer long reasoning skills acquired through reinforcement learning, improving pass@1 by 13% relative compared to the Qwen2.5-32B base model. Our work presents a computationally-efficient method to elicit long reasoning in large models with minimal or no additional training.
| null |
https://arxiv.org/abs/2507.12759v1
|
https://arxiv.org/pdf/2507.12759v1.pdf
| null |
[
"Yunxiang Zhang",
"Muhammad Khalifa",
"Lechen Zhang",
"Xin Liu",
"Ayoung Lee",
"Xinliang Frederick Zhang",
"Farima Fatahi Bayat",
"Lu Wang"
] |
[] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
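A hedged sketch of the decoding-time logit arithmetic at the heart of this approach: the large target model's next-token logits are shifted by the difference between a small long-reasoning guider and its base version. This assumes HuggingFace-style causal LMs that share a vocabulary; `alpha` is an illustrative knob, not the paper's setting.

```python
import torch

@torch.no_grad()
def combined_next_token_logits(target, guider_tuned, guider_base, input_ids, alpha=1.0):
    """Proxy-tuning-style logit arithmetic for one decoding step."""
    lt = target(input_ids).logits[:, -1, :]        # large target LM
    lg = guider_tuned(input_ids).logits[:, -1, :]  # small long-reasoning guider
    lb = guider_base(input_ids).logits[:, -1, :]   # small base counterpart
    return lt + alpha * (lg - lb)                  # steer toward long reasoning
```

Sampling from these combined logits at every step elicits the guider's reasoning style without updating the large model's weights.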
https://paperswithcode.com/paper/fly-fail-fix-iterative-game-repair-with
|
2507.12666
| null | null |
Fly, Fail, Fix: Iterative Game Repair with Reinforcement Learning and Large Multimodal Models
|
Game design hinges on understanding how static rules and content translate into dynamic player behavior, something modern generative systems that inspect only a game's code or assets struggle to capture. We present an automated design iteration framework that closes this gap by pairing a reinforcement learning (RL) agent, which playtests the game, with a large multimodal model (LMM), which revises the game based on what the agent does. In each loop the RL player completes several episodes, producing (i) numerical play metrics and/or (ii) a compact image strip summarising recent video frames. The LMM designer receives a gameplay goal and the current game configuration, analyses the play traces, and edits the configuration to steer future behaviour toward the goal. Our results demonstrate that LMMs can reason over behavioral traces supplied by RL agents to iteratively refine game mechanics, pointing toward practical, scalable tools for AI-assisted game design.
| null |
https://arxiv.org/abs/2507.12666v1
|
https://arxiv.org/pdf/2507.12666v1.pdf
| null |
[
"Alex Zook",
"Josef Spjut",
"Jonathan Tremblay"
] |
[
"Game Design",
"Reinforcement Learning (RL)"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/distributional-reinforcement-learning-on-path
|
2507.12657
| null | null |
Distributional Reinforcement Learning on Path-dependent Options
|
We reinterpret and propose a framework for pricing path-dependent financial derivatives by estimating the full distribution of payoffs using Distributional Reinforcement Learning (DistRL). Unlike traditional methods that focus on expected option value, our approach models the entire conditional distribution of payoffs, allowing for risk-aware pricing, tail-risk estimation, and enhanced uncertainty quantification. We demonstrate the efficacy of this method on Asian options, using quantile-based value function approximators.
| null |
https://arxiv.org/abs/2507.12657v1
|
https://arxiv.org/pdf/2507.12657v1.pdf
| null |
[
"Ahmet Umur Özsoy"
] |
[
"Distributional Reinforcement Learning",
"reinforcement-learning",
"Reinforcement Learning",
"Uncertainty Quantification"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
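A minimal sketch of the quantile-based distributional view the abstract mentions: fit N fixed quantiles of a discounted Asian-option payoff with the pinball (quantile regression) loss, then read the price off as the mean of the fitted quantiles. The single-state, non-bootstrapped setup below is a simplification of full distributional RL, and the market parameters are placeholders.

```python
import torch

N = 51
taus = (torch.arange(N, dtype=torch.float32) + 0.5) / N
theta = torch.zeros(N, requires_grad=True)   # learnable quantile values
opt = torch.optim.Adam([theta], lr=1e-2)

def sample_asian_payoff(batch=512, S0=100., K=100., r=0.05, sigma=0.2, T=1., steps=50):
    """Discounted arithmetic-average Asian call payoffs under Black-Scholes."""
    dt = T / steps
    z = torch.randn(batch, steps)
    logS = torch.log(torch.tensor(S0)) + torch.cumsum(
        (r - 0.5 * sigma**2) * dt + sigma * dt**0.5 * z, dim=1)
    avg = logS.exp().mean(dim=1)
    return torch.exp(torch.tensor(-r * T)) * torch.clamp(avg - K, min=0.0)

for _ in range(2000):
    y = sample_asian_payoff()                # (batch,) payoff samples
    u = y[:, None] - theta[None, :]          # residuals against each quantile
    loss = torch.where(u >= 0, taus * u, (taus - 1) * u).mean()  # pinball loss
    opt.zero_grad(); loss.backward(); opt.step()

price = theta.detach().mean()  # expectation recovered from uniform quantiles
```

Keeping the whole quantile vector, rather than just its mean, is what enables the tail-risk and uncertainty estimates the abstract highlights.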
https://paperswithcode.com/paper/a-survey-of-explainable-reinforcement-1
|
2507.12599
| null | null |
A Survey of Explainable Reinforcement Learning: Targets, Methods and Needs
|
The success of recent Artificial Intelligence (AI) models has been accompanied by the opacity of their internal mechanisms, due notably to the use of deep neural networks. In order to understand these internal mechanisms and explain the output of these AI models, a set of methods have been proposed, grouped under the domain of eXplainable AI (XAI). This paper focuses on a sub-domain of XAI, called eXplainable Reinforcement Learning (XRL), which aims to explain the actions of an agent that has learned by reinforcement learning. We propose an intuitive taxonomy based on two questions "What" and "How". The first question focuses on the target that the method explains, while the second relates to the way the explanation is provided. We use this taxonomy to provide a state-of-the-art review of over 250 papers. In addition, we present a set of domains close to XRL, which we believe should get attention from the community. Finally, we identify some needs for the field of XRL.
| null |
https://arxiv.org/abs/2507.12599v1
|
https://arxiv.org/pdf/2507.12599v1.pdf
| null |
[
"Léo Saulières"
] |
[
"reinforcement-learning",
"Reinforcement Learning"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/second-order-bounds-for-01-valued-regression
|
2507.12584
| null | null |
Second-Order Bounds for [0,1]-Valued Regression via Betting Loss
|
We consider the $[0,1]$-valued regression problem in the i.i.d. setting. In a related problem called cost-sensitive classification, Foster et al. (2021) have shown that the log loss minimizer achieves an improved generalization bound compared to that of the squared loss minimizer in the sense that the bound scales with the cost of the best classifier, which can be arbitrarily small depending on the problem at hand. Such a result is often called a first-order bound. For $[0,1]$-valued regression, we first show that the log loss minimizer leads to a similar first-order bound. We then ask if there exists a loss function that achieves a variance-dependent bound (also known as a second-order bound), which is a strict improvement upon first-order bounds. We answer this question in the affirmative by proposing a novel loss function called the betting loss. Our result is "variance-adaptive" in the sense that the bound is attained without any knowledge about the variance, in contrast to modeling label (or reward) variance or the label distribution itself explicitly as part of the function class, as in distributional reinforcement learning.
| null |
https://arxiv.org/abs/2507.12584v1
|
https://arxiv.org/pdf/2507.12584v1.pdf
| null |
[
"Yinan Li",
"Kwang-Sung Jun"
] |
[
"Distributional Reinforcement Learning",
"regression"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/scaling-up-rl-unlocking-diverse-reasoning-in
|
2507.12507
| null | null |
Scaling Up RL: Unlocking Diverse Reasoning in LLMs via Prolonged Training
|
Recent advancements in reasoning-focused language models such as OpenAI's O1 and DeepSeek-R1 have shown that scaling test-time computation, through chain-of-thought reasoning and iterative exploration, can yield substantial improvements on complex tasks like mathematics and code generation. These breakthroughs have been driven by large-scale reinforcement learning (RL), particularly when combined with verifiable reward signals that provide objective and grounded supervision. In this report, we investigate the effects of prolonged reinforcement learning on a small language model across a diverse set of reasoning domains. Our work identifies several key ingredients for effective training, including the use of verifiable reward tasks, enhancements to Group Relative Policy Optimization (GRPO), and practical techniques to improve training stability and generalization. We introduce controlled KL regularization, clipping ratio, and periodic reference policy resets as critical components for unlocking long-term performance gains. Our model achieves significant improvements over strong baselines, including +14.7% on math, +13.9% on coding, and +54.8% on logic puzzle tasks. To facilitate continued research, we release our model publicly.
| null |
https://arxiv.org/abs/2507.12507v1
|
https://arxiv.org/pdf/2507.12507v1.pdf
| null |
[
"Mingjie Liu",
"Shizhe Diao",
"Jian Hu",
"Ximing Lu",
"Xin Dong",
"Hao Zhang",
"Alexander Bukharin",
"Shaokun Zhang",
"Jiaqi Zeng",
"Makesh Narsimhan Sreedhar",
"Gerald Shen",
"David Mosallanezhad",
"Di Zhang",
"Jonas Yang",
"June Yang",
"Oleksii Kuchaiev",
"Guilin Liu",
"Zhiding Yu",
"Pavlo Molchanov",
"Yejin Choi",
"Jan Kautz",
"Yi Dong"
] |
[
"Code Generation",
"Math",
"reinforcement-learning",
"Reinforcement Learning",
"Reinforcement Learning (RL)",
"Small Language Model"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
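GRPO's core mechanism, which this report's enhancements build on, is a group-relative advantage: several responses are sampled per prompt, scored with the verifiable reward, and standardized within their own group. A minimal sketch:

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: (num_prompts, group_size) verifiable rewards per sampled response."""
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)  # each response judged against its group

adv = grpo_advantages(torch.tensor([[1.0, 0.0, 1.0, 0.0]]))
```

Because the baseline is the group mean, no separate value network is needed, which is part of what makes prolonged RL runs like the one described here tractable.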
https://paperswithcode.com/paper/improving-reinforcement-learning-sample
|
2507.12383
| null | null |
Improving Reinforcement Learning Sample-Efficiency using Local Approximation
|
In this study, we derive Probably Approximately Correct (PAC) bounds on the asymptotic sample-complexity for RL within the infinite-horizon Markov Decision Process (MDP) setting that are sharper than those in existing literature. The premise of our study is twofold: firstly, the further two states are from each other, transition-wise, the less relevant the value of the first state is when learning the $\epsilon$-optimal value of the second; secondly, the amount of 'effort', sample-complexity-wise, expended in learning the $\epsilon$-optimal value of a state is independent of the number of samples required to learn the $\epsilon$-optimal value of a second state that is a sufficient number of transitions away from the first. Inversely, states within each other's vicinity have values that are dependent on each other and will require a similar number of samples to learn. By approximating the original MDP using smaller MDPs constructed using subsets of the original's state-space, we are able to reduce the sample-complexity by a logarithmic factor to $O(SA \log A)$ timesteps, where $S$ and $A$ are the state and action space sizes. We are able to extend these results to an infinite-horizon, model-free setting by constructing a PAC-MDP algorithm with the aforementioned sample-complexity. We conclude with showing how significant the improvement is by comparing our algorithm against prior work in an experimental setting.
| null |
https://arxiv.org/abs/2507.12383v1
|
https://arxiv.org/pdf/2507.12383v1.pdf
| null |
[
"Mohit Prashant",
"Arvind Easwaran"
] |
[
"reinforcement-learning",
"Reinforcement Learning"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/xiangqi-r1-enhancing-spatial-strategic
|
2507.12215
| null | null |
Xiangqi-R1: Enhancing Spatial Strategic Reasoning in LLMs for Chinese Chess via Reinforcement Learning
|
Game playing has long served as a fundamental benchmark for evaluating Artificial General Intelligence (AGI). While Large Language Models (LLMs) have demonstrated impressive capabilities in general reasoning, their effectiveness in spatial strategic reasoning, which is critical for complex and fully observable board games, remains insufficiently explored. In this work, we adopt Chinese Chess (Xiangqi) as a challenging and rich testbed due to its intricate rules and spatial complexity. To advance LLMs' strategic competence in such environments, we propose a training framework tailored to Xiangqi, built upon a large-scale dataset of five million board-move pairs enhanced with expert annotations and engine evaluations. Building on this foundation, we introduce Xiangqi-R1, a 7B-parameter model trained in a multi-stage manner: (1) fine-tuning for legal move prediction to capture basic spatial rules, (2) incorporating strategic annotations to improve decision-making, and (3) applying reinforcement learning via Group Relative Policy Optimization (GRPO) with multi-dimensional reward signals to enhance reasoning stability. Our experimental results indicate that, despite their size and power, general-purpose LLMs struggle to achieve satisfactory performance in these tasks. Compared to general-purpose LLMs, Xiangqi-R1 advances substantially, with an 18% rise in move legality and a 22% boost in analysis accuracy. Our results point to a promising path for creating general strategic intelligence in spatially complex areas.
| null |
https://arxiv.org/abs/2507.12215v1
|
https://arxiv.org/pdf/2507.12215v1.pdf
| null |
[
"Yuhao Chen",
"Shuochen Liu",
"Yuanjie Lyu",
"Chao Zhang",
"Jiayao Shi",
"Tong Xu"
] |
[
"Board Games"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dual-lidar-based-traffic-movement-count
|
2507.13073
| null | null |
Dual LiDAR-Based Traffic Movement Count Estimation at a Signalized Intersection: Deployment, Data Collection, and Preliminary Analysis
|
Traffic Movement Count (TMC) at intersections is crucial for optimizing signal timings, assessing the performance of existing traffic control measures, and proposing efficient lane configurations to minimize delays, reduce congestion, and promote safety. Traditionally, methods such as manual counting, loop detectors, pneumatic road tubes, and camera-based recognition have been used for TMC estimation. Although generally reliable, camera-based TMC estimation is prone to inaccuracies under poor lighting conditions during harsh weather and nighttime. In contrast, Light Detection and Ranging (LiDAR) technology is gaining popularity in recent times due to reduced costs and its expanding use in 3D object detection, tracking, and related applications. This paper presents the authors' endeavor to develop, deploy and evaluate a dual-LiDAR system at an intersection in the city of Rialto, California, for TMC estimation. The 3D bounding box detections from the two LiDARs are used to classify vehicle counts based on traffic directions, vehicle movements, and vehicle classes. This work discusses the estimated TMC results and provides insights into the observed trends and irregularities. Potential improvements are also discussed that could enhance not only TMC estimation, but also trajectory forecasting and intent prediction at intersections.
| null |
https://arxiv.org/abs/2507.13073v1
|
https://arxiv.org/pdf/2507.13073v1.pdf
| null |
[
"Saswat Priyadarshi Nayak",
"Guoyuan Wu",
"Kanok Boriboonsomsin",
"Matthew Barth"
] |
[
"3D Object Detection",
"object-detection",
"Object Detection",
"Trajectory Forecasting"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/athleticspose-authentic-sports-motion-dataset
|
2507.12905
| null | null |
AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability
|
Monocular 3D pose estimation is a promising, flexible alternative to costly motion capture systems for sports analysis. However, its practical application is hindered by two factors: a lack of realistic sports datasets and unclear reliability for sports tasks. To address these challenges, we introduce the AthleticsPose dataset, a new public dataset featuring "real" motions captured from 23 athletes performing various athletics events on an athletic field. Using this dataset, we trained a representative 3D pose estimation model and performed a comprehensive evaluation. Our results show that the model trained on AthleticsPose significantly outperforms a baseline model trained on an imitated sports motion dataset, reducing MPJPE by approximately 75%. These results show the importance of training on authentic sports motion data, as models based on imitated motions do not effectively transfer to real-world motions. Further analysis reveals that estimation accuracy is sensitive to camera view and subject scale. In case studies of kinematic indicators, the model demonstrated the potential to capture individual differences in knee angles but struggled with higher-speed metrics, such as knee-drive velocity, due to prediction biases. This work provides the research community with a valuable dataset and clarifies the potential and practical limitations of using monocular 3D pose estimation for sports motion analysis. Our dataset, code, and checkpoints are available at https://github.com/SZucchini/AthleticsPose.
| null |
https://arxiv.org/abs/2507.12905v1
|
https://arxiv.org/pdf/2507.12905v1.pdf
| null |
[
"Tomohiro Suzuki",
"Ryota Tanaka",
"Calvin Yeung",
"Keisuke Fujii"
] |
[
"3D Pose Estimation",
"Pose Estimation"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/mva-2025-small-multi-object-tracking-for
|
2507.12832
| null | null |
MVA 2025 Small Multi-Object Tracking for Spotting Birds Challenge: Dataset, Methods, and Results
|
Small Multi-Object Tracking (SMOT) is particularly challenging when targets occupy only a few dozen pixels, rendering detection and appearance-based association unreliable. Building on the success of the MVA2023 SOD4SB challenge, this paper introduces the SMOT4SB challenge, which leverages temporal information to address limitations of single-frame detection. Our three main contributions are: (1) the SMOT4SB dataset, consisting of 211 UAV video sequences with 108,192 annotated frames under diverse real-world conditions, designed to capture motion entanglement where both camera and targets move freely in 3D; (2) SO-HOTA, a novel metric combining Dot Distance with HOTA to mitigate the sensitivity of IoU-based metrics to small displacements; and (3) a competitive MVA2025 challenge with 78 participants and 308 submissions, where the winning method achieved a 5.1x improvement over the baseline. This work lays a foundation for advancing SMOT in UAV scenarios with applications in bird strike avoidance, agriculture, fisheries, and ecological monitoring.
| null |
https://arxiv.org/abs/2507.12832v1
|
https://arxiv.org/pdf/2507.12832v1.pdf
| null |
[
"Yuki Kondo",
"Norimichi Ukita",
"Riku Kanayama",
"Yuki Yoshida",
"Takayuki Yamaguchi",
"Xiang Yu",
"Guang Liang",
"Xinyao Liu",
"Guan-Zhang Wang",
"Wei-Ta Chu",
"Bing-Cheng Chuang",
"Jia-Hua Lee",
"Pin-Tseng Kuo",
"I-Hsuan Chu",
"Yi-Shein Hsiao",
"Cheng-Han Wu",
"Po-Yi Wu",
"Jui-Chien Tsou",
"Hsuan-Chi Liu",
"Chun-Yi Lee",
"Yuan-Fu Yang",
"Kosuke Shigematsu",
"Asuka Shin",
"Ba Tran"
] |
[
"Multi-Object Tracking",
"Object Tracking"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-semi-supervised-learning-method-for-the
|
2507.12784
| null | null |
A Semi-Supervised Learning Method for the Identification of Bad Exposures in Large Imaging Surveys
|
As the data volume of astronomical imaging surveys rapidly increases, traditional methods for image anomaly detection, such as visual inspection by human experts, are becoming impractical. We introduce a machine-learning-based approach to detect poor-quality exposures in large imaging surveys, with a focus on the DECam Legacy Survey (DECaLS) in regions of low extinction (i.e., $E(B-V)<0.04$). Our semi-supervised pipeline integrates a vision transformer (ViT), trained via self-supervised learning (SSL), with a k-Nearest Neighbor (kNN) classifier. We train and validate our pipeline using a small set of labeled exposures observed by surveys with the Dark Energy Camera (DECam). A clustering-space analysis of where our pipeline places images labeled in "good" and "bad" categories suggests that our approach can efficiently and accurately determine the quality of exposures. Applied to new imaging being reduced for DECaLS Data Release 11, our pipeline identifies 780 problematic exposures, which we subsequently verify through visual inspection. Being highly efficient and adaptable, our method offers a scalable solution for quality control in other large imaging surveys.
| null |
https://arxiv.org/abs/2507.12784v1
|
https://arxiv.org/pdf/2507.12784v1.pdf
| null |
[
"Yufeng Luo",
"Adam D. Myers",
"Alex Drlica-Wagner",
"Dario Dematties",
"Salma Borchani",
"Frank Valdes",
"Arjun Dey",
"David Schlegel",
"Rongpu Zhou",
"DESI Legacy Imaging Surveys Team"
] |
[
"Anomaly Detection",
"Self-Supervised Learning"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/continuous-marine-tracking-via-autonomous-uav
|
2507.12763
| null | null |
Continuous Marine Tracking via Autonomous UAV Handoff
|
This paper introduces an autonomous UAV vision system for continuous, real-time tracking of marine animals, specifically sharks, in dynamic marine environments. The system integrates an onboard computer with a stabilised RGB-D camera and a custom-trained OSTrack pipeline, enabling visual identification under challenging lighting, occlusion, and sea-state conditions. A key innovation is the inter-UAV handoff protocol, which enables seamless transfer of tracking responsibilities between drones, extending operational coverage beyond single-drone battery limitations. Performance is evaluated on a curated shark dataset of 5,200 frames, achieving a tracking success rate of 81.9% during real-time flight control at 100 Hz, and robustness to occlusion, illumination variation, and background clutter. We present a seamless UAV handoff framework, where target transfer is attempted via high-confidence feature matching, achieving 82.9% target coverage. These results confirm the viability of coordinated UAV operations for extended marine tracking and lay the groundwork for scalable, autonomous monitoring.
| null |
https://arxiv.org/abs/2507.12763v1
|
https://arxiv.org/pdf/2507.12763v1.pdf
| null |
[
"Heegyeong Kim",
"Alice James",
"Avishkar Seth",
"Endrowednes Kuantama",
"Jane Williamson",
"Yimeng Feng",
"Richard Han"
] |
[] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/vision-based-perception-for-autonomous
|
2507.12449
| null | null |
Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios
|
Obstacle avoidance is essential for ensuring the safety of autonomous vehicles. Accurate perception and motion planning are crucial to enabling vehicles to navigate complex environments while avoiding collisions. In this paper, we propose an efficient obstacle avoidance pipeline that leverages a camera-only perception module and a Frenet-Pure Pursuit-based planning strategy. By integrating advancements in computer vision, the system utilizes YOLOv11 for object detection and state-of-the-art monocular depth estimation models, such as Depth Anything V2, to estimate object distances. A comparative analysis of these models provides valuable insights into their accuracy, efficiency, and robustness in real-world conditions. The system is evaluated in diverse scenarios on a university campus, demonstrating its effectiveness in handling various obstacles and enhancing autonomous navigation. The video presenting the results of the obstacle avoidance experiments is available at: https://www.youtube.com/watch?v=FoXiO5S_tA8
| null |
https://arxiv.org/abs/2507.12449v1
|
https://arxiv.org/pdf/2507.12449v1.pdf
| null |
[
"Van-Hoang-Anh Phan",
"Chi-Tam Nguyen",
"Doan-Trung Au",
"Thanh-Danh Phan",
"Minh-Thien Duong",
"My-Ha Le"
] |
[
"Autonomous Navigation",
"Autonomous Vehicles",
"Depth Estimation",
"Monocular Depth Estimation",
"Motion Planning",
"Navigate",
"object-detection",
"Object Detection"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |