paper_url (string, 35–81 chars) | arxiv_id (string, 6–35 chars, ⌀) | nips_id (null) | openreview_id (string, 9–93 chars, ⌀) | title (string, 1–1.02k chars, ⌀) | abstract (string, 0–56.5k chars, ⌀) | short_abstract (string, 0–1.95k chars, ⌀) | url_abs (string, 16–996 chars) | url_pdf (string, 16–996 chars, ⌀) | proceeding (string, 7–1.03k chars, ⌀) | authors (list, 0–3.31k items) | tasks (list, 0–147 items) | date (timestamp[ns], 1951-09-01 to 2222-12-22, ⌀) | conference_url_abs (string, 16–199 chars, ⌀) | conference_url_pdf (string, 21–200 chars, ⌀) | conference (string, 2–47 chars, ⌀) | reproduces_paper (string, 22 classes) | methods (list, 0–7.5k items) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://paperswithcode.com/paper/fade-adversarial-concept-erasure-in-flow
|
2507.12283
| null | null |
FADE: Adversarial Concept Erasure in Flow Models
|
Diffusion models have demonstrated remarkable image generation capabilities, but also pose risks in privacy and fairness by memorizing sensitive concepts or perpetuating biases. We propose a novel \textbf{concept erasure} method for text-to-image diffusion models, designed to remove specified concepts (e.g., a private individual or a harmful stereotype) from the model's generative repertoire. Our method, termed \textbf{FADE} (Fair Adversarial Diffusion Erasure), combines a trajectory-aware fine-tuning strategy with an adversarial objective to ensure the concept is reliably removed while preserving overall model fidelity. Theoretically, we prove a formal guarantee that our approach minimizes the mutual information between the erased concept and the model's outputs, ensuring privacy and fairness. Empirically, we evaluate FADE on Stable Diffusion and FLUX, using benchmarks from prior work (e.g., object, celebrity, explicit content, and style erasure tasks from MACE). FADE achieves state-of-the-art concept removal performance, surpassing recent baselines like ESD, UCE, MACE, and ANT in terms of removal efficacy and image quality. Notably, FADE improves the harmonic mean of concept removal and fidelity by 5--10\% over the best prior method. We also conduct an ablation study to validate each component of FADE, confirming that our adversarial and trajectory-preserving objectives each contribute to its superior performance. Our work sets a new standard for safe and fair generative modeling by unlearning specified concepts without retraining from scratch.
| null |
https://arxiv.org/abs/2507.12283v1
|
https://arxiv.org/pdf/2507.12283v1.pdf
| null |
[
"Zixuan Fu",
"Yan Ren",
"Finn Carter",
"Chenyue Wang",
"Ze Niu",
"Dacheng Yu",
"Emily Davis",
"Bo Zhang"
] |
[
"Fairness",
"Image Generation"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/brum-robust-3d-vehicle-reconstruction-from
|
2507.12095
| null | null |
BRUM: Robust 3D Vehicle Reconstruction from 360 Sparse Images
|
Accurate 3D reconstruction of vehicles is vital for applications such as vehicle inspection, predictive maintenance, and urban planning. Existing methods like Neural Radiance Fields and Gaussian Splatting have shown impressive results but remain limited by their reliance on dense input views, which hinders real-world applicability. This paper addresses the challenge of reconstructing vehicles from sparse-view inputs, leveraging depth maps and a robust pose estimation architecture to synthesize novel views and augment training data. Specifically, we enhance Gaussian Splatting by integrating a selective photometric loss, applied only to high-confidence pixels, and replacing standard Structure-from-Motion pipelines with the DUSt3R architecture to improve camera pose estimation. Furthermore, we present a novel dataset featuring both synthetic and real-world public transportation vehicles, enabling extensive evaluation of our approach. Experimental results demonstrate state-of-the-art performance across multiple benchmarks, showcasing the method's ability to achieve high-quality reconstructions even under constrained input conditions.
| null |
https://arxiv.org/abs/2507.12095v1
|
https://arxiv.org/pdf/2507.12095v1.pdf
| null |
[
"Davide Di Nucci",
"Matteo Tomei",
"Guido Borghi",
"Luca Ciuffreda",
"Roberto Vezzani",
"Rita Cucchiara"
] |
[
"3D Reconstruction",
"Camera Pose Estimation",
"Pose Estimation"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/yolov8-smot-an-efficient-and-robust-framework
|
2507.12087
| null | null |
YOLOv8-SMOT: An Efficient and Robust Framework for Real-Time Small Object Tracking via Slice-Assisted Training and Adaptive Association
|
Tracking small, agile multi-objects (SMOT), such as birds, from an Unmanned Aerial Vehicle (UAV) perspective is a highly challenging computer vision task. The difficulty stems from three main sources: the extreme scarcity of target appearance features, the complex motion entanglement caused by the combined dynamics of the camera and the targets themselves, and the frequent occlusions and identity ambiguity arising from dense flocking behavior. This paper details our championship-winning solution in the MVA 2025 "Finding Birds" Small Multi-Object Tracking Challenge (SMOT4SB), which adopts the tracking-by-detection paradigm with targeted innovations at both the detection and association levels. On the detection side, we propose a systematic training enhancement framework named \textbf{SliceTrain}. This framework, through the synergy of 'deterministic full-coverage slicing' and 'slice-level stochastic augmentation', effectively addresses the problem of insufficient learning for small objects in high-resolution image training. On the tracking side, we designed a robust tracker that is completely independent of appearance information. By integrating a \textbf{motion direction maintenance (EMA)} mechanism and an \textbf{adaptive similarity metric} combining \textbf{bounding box expansion and distance penalty} into the OC-SORT framework, our tracker can stably handle irregular motion and maintain target identities. Our method achieves state-of-the-art performance on the SMOT4SB public test set, reaching an SO-HOTA score of \textbf{55.205}, which fully validates the effectiveness and advancement of our framework in solving complex real-world SMOT problems. The source code will be made available at https://github.com/Salvatore-Love/YOLOv8-SMOT.
| null |
https://arxiv.org/abs/2507.12087v2
|
https://arxiv.org/pdf/2507.12087v2.pdf
| null |
[
"Xiang Yu",
"Xinyao Liu",
"Guang Liang"
] |
[
"Multi-Object Tracking",
"Object Tracking"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/dark-evgs-event-camera-as-an-eye-for-radiance
|
2507.11931
| null | null |
Dark-EvGS: Event Camera as an Eye for Radiance Field in the Dark
|
In low-light environments, conventional cameras often struggle to capture clear multi-view images of objects due to dynamic range limitations and motion blur caused by long exposure. Event cameras, with their high-dynamic range and high-speed properties, have the potential to mitigate these issues. Additionally, 3D Gaussian Splatting (GS) enables radiance field reconstruction, facilitating bright frame synthesis from multiple viewpoints in low-light conditions. However, naively using an event-assisted 3D GS approach still faces challenges because, in low light, events are noisy, frames lack quality, and the color tone may be inconsistent. To address these issues, we propose Dark-EvGS, the first event-assisted 3D GS framework that enables the reconstruction of bright frames from arbitrary viewpoints along the camera trajectory. Triplet-level supervision is proposed to gain holistic knowledge, granular details, and sharp scene rendering. A color tone matching block is proposed to guarantee the color consistency of the rendered frames. Furthermore, we introduce the first real-captured dataset for the event-guided bright frame synthesis task via 3D GS-based radiance field reconstruction. Experiments demonstrate that our method achieves better results than existing methods, conquering radiance field reconstruction under challenging low-light conditions. The code and sample data are included in the supplementary material.
| null |
https://arxiv.org/abs/2507.11931v1
|
https://arxiv.org/pdf/2507.11931v1.pdf
| null |
[
"Jingqian Wu",
"Peiqi Duan",
"Zongqiang Wang",
"Changwei Wang",
"Boxin Shi",
"Edmund Y. Lam"
] |
[
"Triplet"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/physically-based-neural-lidar-resimulation
|
2507.12489
| null | null |
Physically Based Neural LiDAR Resimulation
|
Methods for Novel View Synthesis (NVS) have recently found traction in the field of LiDAR simulation and large-scale 3D scene reconstruction. While solutions for faster rendering or handling dynamic scenes have been proposed, LiDAR specific effects remain insufficiently addressed. By explicitly modeling sensor characteristics such as rolling shutter, laser power variations, and intensity falloff, our method achieves more accurate LiDAR simulation compared to existing techniques. We demonstrate the effectiveness of our approach through quantitative and qualitative comparisons with state-of-the-art methods, as well as ablation studies that highlight the importance of each sensor model component. Beyond that, we show that our approach exhibits advanced resimulation capabilities, such as generating high resolution LiDAR scans in the camera perspective. Our code and the resulting dataset are available at https://github.com/richardmarcus/PBNLiDAR.
| null |
https://arxiv.org/abs/2507.12489v1
|
https://arxiv.org/pdf/2507.12489v1.pdf
| null |
[
"Richard Marcus",
"Marc Stamminger"
] |
[
"3D Scene Reconstruction",
"Novel View Synthesis"
] | 2025-07-15T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/se-vln-a-self-evolving-vision-language
|
2507.13152
| null | null |
SE-VLN: A Self-Evolving Vision-Language Navigation Framework Based on Multimodal Large Language Models
|
Recent advances in vision-language navigation (VLN) are mainly attributed to emerging large language models (LLMs). These methods exhibit excellent generalization capabilities in instruction understanding and task reasoning. However, they are constrained by the fixed knowledge bases and reasoning abilities of LLMs, which prevent them from fully incorporating experiential knowledge and thus limit their capacity to evolve efficiently. To address this, we draw inspiration from the evolution capabilities of natural agents and propose a self-evolving VLN framework (SE-VLN) that endows VLN agents with the ability to continuously evolve during testing. To the best of our knowledge, this is the first multimodal LLM-powered self-evolving VLN framework. Specifically, SE-VLN comprises three core modules: a hierarchical memory module to transfer success and failure cases into reusable knowledge, a retrieval-augmented thought-based reasoning module to retrieve experience and enable multi-step decision-making, and a reflection module to realize continual evolution. Comprehensive tests show that SE-VLN achieves navigation success rates of 57% and 35.2% in unseen environments, representing absolute performance improvements of 23.9% and 15.0% over current state-of-the-art methods on the R2R and REVERSE datasets, respectively. Moreover, SE-VLN shows continued performance improvement as its experience repository grows, elucidating its great potential as a self-evolving agent framework for VLN.
| null |
https://arxiv.org/abs/2507.13152v1
|
https://arxiv.org/pdf/2507.13152v1.pdf
| null |
[
"Xiangyu Dong",
"Haoran Zhao",
"Jiang Gao",
"Haozhou Li",
"Xiaoguang Ma",
"Yaoming Zhou",
"Fuhai Chen",
"Juan Liu"
] |
[
"Vision-Language Navigation"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/rethinking-the-embodied-gap-in-vision-and
|
2507.13019
| null | null |
Rethinking the Embodied Gap in Vision-and-Language Navigation: A Holistic Study of Physical and Visual Disparities
|
Recent Vision-and-Language Navigation (VLN) advancements are promising, but their idealized assumptions about robot movement and control fail to reflect physically embodied deployment challenges. To bridge this gap, we introduce VLN-PE, a physically realistic VLN platform supporting humanoid, quadruped, and wheeled robots. For the first time, we systematically evaluate several ego-centric VLN methods in physical robotic settings across different technical pipelines, including classification models for single-step discrete action prediction, a diffusion model for dense waypoint prediction, and a training-free, map-based large language model (LLM) integrated with path planning. Our results reveal significant performance degradation due to limited robot observation space, environmental lighting variations, and physical challenges like collisions and falls. This also exposes locomotion constraints for legged robots in complex environments. VLN-PE is highly extensible, allowing seamless integration of new scenes beyond MP3D, thereby enabling more comprehensive VLN evaluation. Despite the weak generalization of current models in physical deployment, VLN-PE provides a new pathway for improving overall cross-embodiment adaptability. We hope our findings and tools inspire the community to rethink VLN limitations and advance robust, practical VLN models. The code is available at https://crystalsixone.github.io/vln_pe.github.io/.
| null |
https://arxiv.org/abs/2507.13019v1
|
https://arxiv.org/pdf/2507.13019v1.pdf
| null |
[
"Liuyi Wang",
"Xinyuan Xia",
"Hui Zhao",
"Hanqing Wang",
"Tai Wang",
"Yilun Chen",
"Chengju Liu",
"Qijun Chen",
"Jiangmiao Pang"
] |
[
"Large Language Model",
"Vision and Language Navigation"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/anypos-automated-task-agnostic-actions-for
|
2507.12768
| null | null |
AnyPos: Automated Task-Agnostic Actions for Bimanual Manipulation
|
Vision-language-action (VLA) models have shown promise on task-conditioned control in complex settings such as bimanual manipulation. However, the heavy reliance on task-specific human demonstrations limits their generalization and incurs high data acquisition costs. In this work, we present a new notion of task-agnostic action paradigm that decouples action execution from task-specific conditioning, enhancing scalability, efficiency, and cost-effectiveness. To address the data collection challenges posed by this paradigm -- such as low coverage density, behavioral redundancy, and safety risks -- we introduce ATARA (Automated Task-Agnostic Random Actions), a scalable self-supervised framework that accelerates collection by over $ 30\times $ compared to human teleoperation. To further enable effective learning from task-agnostic data, which often suffers from distribution mismatch and irrelevant trajectories, we propose AnyPos, an inverse dynamics model equipped with Arm-Decoupled Estimation and a Direction-Aware Decoder (DAD). We additionally integrate a video-conditioned action validation module to verify the feasibility of learned policies across diverse manipulation tasks. Extensive experiments show that the AnyPos-ATARA pipeline yields a 51% improvement in test accuracy and achieves 30-40% higher success rates in downstream tasks such as lifting, pick-and-place, and clicking, using replay-based video validation. Project Page: https://embodiedfoundation.github.io/vidar_anypos
| null |
https://arxiv.org/abs/2507.12768v1
|
https://arxiv.org/pdf/2507.12768v1.pdf
| null |
[
"Hengkai Tan",
"Yao Feng",
"Xinyi Mao",
"Shuhe Huang",
"Guodong Liu",
"Zhongkai Hao",
"Hang Su",
"Jun Zhu"
] |
[
"Vision-Language-Action"
] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/enhancing-image-restoration-transformer-via
|
2506.18520
| null | null |
Enhancing Image Restoration Transformer via Adaptive Translation Equivariance
|
Translation equivariance is a fundamental inductive bias in image restoration, ensuring that translated inputs produce translated outputs. Attention mechanisms in modern restoration transformers undermine this property, adversely impacting both training convergence and generalization. To alleviate this issue, we propose two key strategies for incorporating translation equivariance: slide indexing and component stacking. Slide indexing maintains operator responses at fixed positions, with sliding window attention being a notable example, while component stacking enables the arrangement of translation-equivariant operators in parallel or sequentially, thereby building complex architectures while preserving translation equivariance. However, these strategies still create a dilemma in model design between the high computational cost of self-attention and the fixed receptive field associated with sliding window attention. To address this, we develop an adaptive sliding indexing mechanism to efficiently select key-value pairs for each query, which are then concatenated in parallel with globally aggregated key-value pairs. The designed network, called the Translation Equivariance Adaptive Transformer (TEAFormer), is assessed across a variety of image restoration tasks. The results highlight its superiority in terms of effectiveness, training convergence, and generalization.
| null |
https://arxiv.org/abs/2506.18520v1
|
https://arxiv.org/pdf/2506.18520v1.pdf
| null |
[
"Jiakui Hu",
"Zhengjian Yao",
"Lujia Jin",
"Hangzhou He",
"Yanye Lu"
] |
[
"Image Restoration",
"Inductive Bias",
"Translation"
] | 2025-06-23T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/language-guided-contrastive-audio-visual
|
2507.11967
| null | null |
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos
|
In this paper, we propose Language-Guided Contrastive Audio-Visual Masked Autoencoders (LG-CAV-MAE) to improve audio-visual representation learning. LG-CAV-MAE integrates a pretrained text encoder into contrastive audio-visual masked autoencoders, enabling the model to learn across audio, visual and text modalities. To train LG-CAV-MAE, we introduce an automatic method to generate audio-visual-text triplets from unlabeled videos. We first generate frame-level captions using an image captioning model and then apply CLAP-based filtering to ensure strong alignment between audio and captions. This approach yields high-quality audio-visual-text triplets without requiring manual annotations. We evaluate LG-CAV-MAE on audio-visual retrieval tasks, as well as an audio-visual classification task. Our method significantly outperforms existing approaches, achieving up to a 5.6% improvement in recall@10 for retrieval tasks and a 3.2% improvement for the classification task.
| null |
https://arxiv.org/abs/2507.11967v1
|
https://arxiv.org/pdf/2507.11967v1.pdf
| null |
[
"Yuchi Ishikawa",
"Shota Nakada",
"Hokuto Munakata",
"Kazuhiro Saito",
"Tatsuya Komatsu",
"Yoshimitsu Aoki"
] |
[
"Image Captioning",
"Representation Learning",
"Retrieval"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/towards-autonomous-riding-a-review-of
|
2507.11852
| null | null |
Towards Autonomous Riding: A Review of Perception, Planning, and Control in Intelligent Two-Wheelers
|
The rapid adoption of micromobility solutions, particularly two-wheeled vehicles like e-scooters and e-bikes, has created an urgent need for reliable autonomous riding (AR) technologies. While autonomous driving (AD) systems have matured significantly, AR presents unique challenges due to the inherent instability of two-wheeled platforms, limited size, limited power, and unpredictable environments, which pose very serious concerns about road users' safety. This review provides a comprehensive analysis of AR systems by systematically examining their core components, perception, planning, and control, through the lens of AD technologies. We identify critical gaps in current AR research, including a lack of comprehensive perception systems for various AR tasks, limited industry and government support for such developments, and insufficient attention from the research community. The review analyses the gaps of AR from the perspective of AD to highlight promising research directions, such as multimodal sensor techniques for lightweight platforms and edge deep learning architectures. By synthesising insights from AD research with the specific requirements of AR, this review aims to accelerate the development of safe, efficient, and scalable autonomous riding systems for future urban mobility.
| null |
https://arxiv.org/abs/2507.11852v1
|
https://arxiv.org/pdf/2507.11852v1.pdf
| null |
[
"Mohammed Hassanin",
"Mohammad Abu Alsheikh",
"Carlos C. N. Kuhn",
"Damith Herath",
"Dinh Thai Hoang",
"Ibrahim Radwan"
] |
[
"Autonomous Driving"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/seeing-the-signs-a-survey-of-edge-deployable
|
2507.11730
| null | null |
Seeing the Signs: A Survey of Edge-Deployable OCR Models for Billboard Visibility Analysis
|
Outdoor advertisements remain a critical medium for modern marketing, yet accurately verifying billboard text visibility under real-world conditions is still challenging. Traditional Optical Character Recognition (OCR) pipelines excel at cropped text recognition but often struggle with complex outdoor scenes, varying fonts, and weather-induced visual noise. Recently, multimodal Vision-Language Models (VLMs) have emerged as promising alternatives, offering end-to-end scene understanding with no explicit detection step. This work systematically benchmarks representative VLMs, including Qwen 2.5 VL 3B, InternVL3, and SmolVLM2, against a compact CNN-based OCR baseline (PaddleOCRv4) across two public datasets (ICDAR 2015 and SVT), augmented with synthetic weather distortions to simulate realistic degradation. Our results reveal that while selected VLMs excel at holistic scene reasoning, lightweight CNN pipelines still achieve competitive accuracy for cropped text at a fraction of the computational cost, an important consideration for edge deployment. To foster future research, we release our weather-augmented benchmark and evaluation code publicly.
| null |
https://arxiv.org/abs/2507.11730v1
|
https://arxiv.org/pdf/2507.11730v1.pdf
| null |
[
"Maciej Szankin",
"Vidhyananth Venkatasamy",
"Lihang Ying"
] |
[
"Marketing",
"Optical Character Recognition",
"Optical Character Recognition (OCR)",
"Scene Understanding"
] | 2025-07-15T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-survey-of-deep-learning-for-geometry
|
2507.11936
| null | null |
A Survey of Deep Learning for Geometry Problem Solving
|
Geometry problem solving is a key area of mathematical reasoning, which is widely involved in many important fields such as education, mathematical ability assessment of artificial intelligence, and multimodal ability assessment. In recent years, the rapid development of deep learning technology, especially the rise of multimodal large language models, has triggered a widespread research boom. This paper provides a survey of the applications of deep learning in geometry problem solving, including (i) a comprehensive summary of the relevant tasks in geometry problem solving; (ii) a thorough review of related deep learning methods; (iii) a detailed analysis of evaluation metrics and methods; and (iv) a critical discussion of the current challenges and future directions that can be explored. Our goal is to provide a comprehensive and practical reference of deep learning for geometry problem solving to promote further developments in this field. We create a continuously updated list of papers on GitHub: https://github.com/majianz/dl4gps.
| null |
https://arxiv.org/abs/2507.11936v1
|
https://arxiv.org/pdf/2507.11936v1.pdf
| null |
[
"Jianzhe Ma",
"Wenxuan Wang",
"Qin Jin"
] |
[
"Deep Learning",
"Geometry Problem Solving",
"Mathematical Reasoning",
"Survey"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/cautious-next-token-prediction
|
2507.03038
| null | null |
Cautious Next Token Prediction
|
The next token prediction paradigm has been prevailing for autoregressive models in the era of LLMs. The current default sampling choice for popular LLMs is temperature scaling together with nucleus sampling to balance diversity and coherence. Nevertheless, such an approach leads to inferior performance in various NLP tasks when the model is not certain about testing questions. To this end, we propose a brand new training-free decoding strategy, dubbed Cautious Next Token Prediction (CNTP). In the decoding process, if the model has comparatively high prediction entropy at a certain step, we sample multiple trials starting from the step independently and stop when encountering any punctuation. Then we select the trial with the lowest perplexity score viewed as the most probable and reliable trial path given the model's capacity. The trial number is negatively correlated with the prediction confidence, i.e., the less confident the model is, the more trials it should sample. This is consistent with human beings' behaviour: when feeling uncertain or unconfident, one tends to think more creatively, exploring multiple thinking paths, to cautiously select the path one feels most confident about. Extensive experiments on both LLMs and MLLMs show that our proposed CNTP approach outperforms existing standard decoding strategies consistently by a clear margin. Moreover, the integration of CNTP with self-consistency can further improve over vanilla self-consistency. We believe our proposed CNTP has the potential to become one of the default choices for LLM decoding. Code is available at https://github.com/wyzjack/CNTP.
|
The trial number is negatively correlated with the prediction confidence, i.e., the less confident the model is, the more trials it should sample.
|
https://arxiv.org/abs/2507.03038v1
|
https://arxiv.org/pdf/2507.03038v1.pdf
| null |
[
"Yizhou Wang",
"Lingzhi Zhang",
"Yue Bai",
"Mang Tik Chiu",
"Zhengmian Hu",
"Mingyuan Zhang",
"Qihua Dong",
"Yu Yin",
"Sohrab Amirghodsi",
"Yun Fu"
] |
[
"Prediction"
] | 2025-07-03T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/context-aware-search-and-retrieval-over
|
2507.11894
| null | null |
Context-Aware Search and Retrieval Over Erasure Channels
|
This paper introduces and analyzes a search and retrieval model that adopts key semantic communication principles from retrieval-augmented generation. We specifically present an information-theoretic analysis of a remote document retrieval system operating over a symbol erasure channel. The proposed model encodes the feature vector of a query, derived from term-frequency weights of a language corpus by using a repetition code with an adaptive rate dependent on the contextual importance of the terms. At the decoder, we select between two documents based on the contextual closeness of the recovered query. By leveraging a jointly Gaussian approximation for both the true and reconstructed similarity scores, we derive an explicit expression for the retrieval error probability, i.e., the probability under which the less similar document is selected. Numerical simulations on synthetic and real-world data (Google NQ) confirm the validity of the analysis. They further demonstrate that assigning greater redundancy to critical features effectively reduces the error rate, highlighting the effectiveness of semantic-aware feature encoding in error-prone communication settings.
| null |
https://arxiv.org/abs/2507.11894v1
|
https://arxiv.org/pdf/2507.11894v1.pdf
| null |
[
"Sara Ghasvarianjahromi",
"Yauhen Yakimenka",
"Jörg Kliewer"
] |
[
"Decoder",
"Retrieval",
"Retrieval-augmented Generation",
"Semantic Communication"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/volatility-spillovers-and-interconnectedness
|
2507.15046
| null | null |
Volatility Spillovers and Interconnectedness in OPEC Oil Markets: A Network-Based log-ARCH Approach
|
This paper examines several network-based volatility models for oil prices, capturing spillovers among OPEC oil-exporting countries by embedding novel network structures into ARCH-type models. We apply a network-based log-ARCH framework that incorporates weight matrices derived from time-series clustering and model-implied distances into the conditional variance equation. These weight matrices are constructed from return data and standard multivariate GARCH model outputs (CCC, DCC, and GO-GARCH), enabling a comparative analysis of volatility transmission across specifications. Through a rolling-window forecast evaluation, the network-based models demonstrate competitive forecasting performance relative to traditional specifications and uncover intricate spillover effects. These results provide a deeper understanding of the interconnectedness within the OPEC network, with important implications for financial risk assessment, market integration, and coordinated policy among oil-producing economies.
| null |
https://arxiv.org/abs/2507.15046v1
|
https://arxiv.org/pdf/2507.15046v1.pdf
| null |
[
"Fayçal Djebari",
"Kahina Mehidi",
"Khelifa Mazouz",
"Philipp Otto"
] |
[
"Time Series Clustering"
] | 2025-07-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/touch-in-the-wild-learning-fine-grained
|
2507.15062
| null | null |
Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper
|
Handheld grippers are increasingly used to collect human demonstrations due to their ease of deployment and versatility. However, most existing designs lack tactile sensing, despite the critical role of tactile feedback in precise manipulation. We present a portable, lightweight gripper with integrated tactile sensors that enables synchronized collection of visual and tactile data in diverse, real-world, and in-the-wild settings. Building on this hardware, we propose a cross-modal representation learning framework that integrates visual and tactile signals while preserving their distinct characteristics. The learning procedure allows the emergence of interpretable representations that consistently focus on contacting regions relevant for physical interactions. When used for downstream manipulation tasks, these representations enable more efficient and effective policy learning, supporting precise robotic manipulation based on multimodal feedback. We validate our approach on fine-grained tasks such as test tube insertion and pipette-based fluid transfer, demonstrating improved accuracy and robustness under external disturbances. Our project page is available at https://binghao-huang.github.io/touch_in_the_wild/ .
| null |
https://arxiv.org/abs/2507.15062v1
|
https://arxiv.org/pdf/2507.15062v1.pdf
| null |
[
"Xinyue Zhu",
"Binghao Huang",
"Yunzhu Li"
] |
[
"Representation Learning"
] | 2025-07-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/partitioning-of-eddy-covariance-footprint
|
2507.14829
| null | null |
Partitioning of Eddy Covariance Footprint Evapotranspiration Using Field Data, UAS Observations and GeoAI in the U.S. Chihuahuan Desert
|
This study proposes a new method for computing transpiration across an eddy covariance footprint using field observations of plant sap flow, phytomorphology sampling, uncrewed aerial system (UAS), deep learning-based digital image processing, and eddy covariance micrometeorological measurements. The method is applied to the Jornada Experimental Range, New Mexico, where we address three key questions: (1) What are the daily summer transpiration rates of Mesquite (Prosopis glandulosa) and Creosote (Larrea tridentata) individuals, and how do these species contribute to footprint-scale evapotranspiration? (2) How can the plant-level measurements be integrated for terrain-wide transpiration estimates? (3) What is the contribution of transpiration to total evapotranspiration within the eddy covariance footprint? Data collected from June to October 2022, during the North American Monsoon season, include hourly evapotranspiration and precipitation rates from the Ameriflux eddy covariance system (US Jo-1 Bajada site) and sap flux rates from heat-balance sensors. We used plant biometric measurements and supervised classification of multispectral imagery to upscale from the patch to footprint-scale estimations. A proportional relationship between the plant's horizontal projected area and the estimated number of water flow conduits was extended to the eddy covariance footprint via UAS data. Our results show that Mesquite's average daily summer transpiration is 2.84 mm/d, while Creosote's is 1.78 mm/d (a ratio of 1.6:1). The summer footprint integrated transpiration to evapotranspiration ratio (T/ET) was 0.50, decreasing to 0.44 during dry spells and increasing to 0.63 following significant precipitation. Further testing of this method is needed in different regions to validate its applicability. With appropriate adjustments, it could be relevant for other areas with similar ecological conditions.
| null |
https://arxiv.org/abs/2507.14829v1
|
https://arxiv.org/pdf/2507.14829v1.pdf
| null |
[
"Habibur R. Howlider",
"Hernan A. Moreno",
"Marguerite E. Mauritz",
"Stephanie N. Marquez"
] |
[] | 2025-07-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/egoprune-efficient-token-pruning-for
|
2507.15428
| null | null |
EgoPrune: Efficient Token Pruning for Egomotion Video Reasoning in Embodied Agent
|
Egomotion videos are first-person recordings where the view changes continuously due to the agent's movement. As they serve as the primary visual input for embodied AI agents, making egomotion video reasoning more efficient is therefore essential for real-world deployment. Recent advances in vision-language models have enabled strong multimodal reasoning capabilities, but their computational cost remains prohibitive for long, redundant video inputs. Existing token pruning methods, typically designed for third-person videos, fail to leverage the spatiotemporal continuity and motion constraints inherent in egomotion settings. To address this, we propose EgoPrune, a training-free token pruning method tailored for egomotion video reasoning. EgoPrune comprises three components: a keyframe selector adapted from EmbodiedR for temporally efficient sampling; Perspective-Aware Redundancy Filtering (PARF), which aligns visual tokens using perspective transformations and removes redundant tokens; and a Maximal Marginal Relevance (MMR)-based token selector that jointly considers visual-text relevance and intra-frame diversity. Experiments on two egomotion video benchmarks show that EgoPrune consistently outperforms prior training-free methods across various pruning ratios while significantly reducing FLOPs, memory usage, and latency. Moreover, we deploy EgoPrune on an embodied agent equipped with a Jetson Orin NX 16GB edge device, demonstrating its real-world efficiency and suitability for on-device egomotion video reasoning.
| null |
https://arxiv.org/abs/2507.15428v1
|
https://arxiv.org/pdf/2507.15428v1.pdf
| null |
[
"Jiaao Li",
"Kaiyuan Li",
"Chen Gao",
"Yong Li",
"Xinlei Chen"
] |
[
"Multimodal Reasoning"
] | 2025-07-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/visual-language-model-knowledge-distillation
|
2507.15680
| null | null |
Visual-Language Model Knowledge Distillation Method for Image Quality Assessment
|
Image Quality Assessment (IQA) is a core task in computer vision. Multimodal methods based on vision-language models, such as CLIP, have demonstrated exceptional generalization capabilities in IQA tasks. To address CLIP's excessive parameter burden and its insufficient ability to identify locally distorted features in IQA, this study proposes a visual-language model knowledge distillation method aimed at guiding the training of models with architectural advantages using CLIP's IQA knowledge. First, quality-graded prompt templates are designed to guide CLIP to output quality scores. Then, CLIP is fine-tuned to enhance its capabilities in IQA tasks. Finally, a modality-adaptive knowledge distillation strategy is proposed to achieve guidance from the CLIP teacher model to the student model. Our experiments were conducted on multiple IQA datasets, and the results show that the proposed method significantly reduces model complexity while outperforming existing IQA methods, demonstrating strong potential for practical deployment.
| null |
https://arxiv.org/abs/2507.15680v1
|
https://arxiv.org/pdf/2507.15680v1.pdf
| null |
[
"Yongkang Hou",
"Jiarun Song"
] |
[
"Image Quality Assessment",
"Knowledge Distillation",
"Language Modeling",
"Language Modelling"
] | 2025-07-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/one-step-is-enough-multi-agent-reinforcement
|
2507.15351
| null | null |
One Step is Enough: Multi-Agent Reinforcement Learning based on One-Step Policy Optimization for Order Dispatch on Ride-Sharing Platforms
|
On-demand ride-sharing platforms face the fundamental challenge of dynamically bundling passengers with diverse origins and destinations and matching them with vehicles in real time, all under significant uncertainty. Recently, multi-agent reinforcement learning (MARL) has emerged as a promising solution for this problem, leveraging decentralized learning to address the curse of dimensionality caused by the large number of agents in the ride-hailing market and the resulting expansive state and action spaces. However, conventional MARL-based ride-sharing approaches heavily rely on the accurate estimation of Q-values or V-values, which becomes problematic in large-scale, highly uncertain environments. Specifically, most of these approaches adopt an independent paradigm, exacerbating this issue, as each agent treats others as part of the environment, leading to unstable training and substantial estimation bias in value functions. To address these challenges, we propose two novel alternative methods that bypass value function estimation. First, we adapt GRPO to ride-sharing, replacing the PPO baseline with the group average reward to eliminate critic estimation errors and reduce training bias. Second, inspired by GRPO's full utilization of group reward information, we customize the PPO framework for ride-sharing platforms and show that, under a homogeneous fleet, the optimal policy can be trained using only one-step rewards, a method we term One-Step Policy Optimization (OSPO). Experiments on a real-world Manhattan ride-hailing dataset demonstrate that both GRPO and OSPO achieve superior performance across most scenarios, efficiently optimizing pickup times and the number of served orders using simple MLP networks.
|
On-demand ride-sharing platforms face the fundamental challenge of dynamically bundling passengers with diverse origins and destinations and matching them with vehicles in real time, all under significant uncertainty.
|
https://arxiv.org/abs/2507.15351v1
|
https://arxiv.org/pdf/2507.15351v1.pdf
| null |
[
"Zijian Zhao",
"Sen Li"
] |
[
"Multi-agent Reinforcement Learning"
] | 2025-07-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multi-stage-prompt-inference-attacks-on
|
2507.15613
| null | null |
Multi-Stage Prompt Inference Attacks on Enterprise LLM Systems
|
Large Language Models (LLMs) deployed in enterprise settings (e.g., as Microsoft 365 Copilot) face novel security challenges. One critical threat is prompt inference attacks: adversaries chain together seemingly benign prompts to gradually extract confidential data. In this paper, we present a comprehensive study of multi-stage prompt inference attacks in an enterprise LLM context. We simulate realistic attack scenarios where an attacker uses mild-mannered queries and indirect prompt injections to exploit an LLM integrated with private corporate data. We develop a formal threat model for these multi-turn inference attacks and analyze them using probability theory, optimization frameworks, and information-theoretic leakage bounds. The attacks are shown to reliably exfiltrate sensitive information from the LLM's context (e.g., internal SharePoint documents or emails), even when standard safety measures are in place. We propose and evaluate defenses to counter such attacks, including statistical anomaly detection, fine-grained access control, prompt sanitization techniques, and architectural modifications to LLM deployment. Each defense is supported by mathematical analysis or experimental simulation. For example, we derive bounds on information leakage under differential privacy-based training and demonstrate an anomaly detection method that flags multi-turn attacks with high AUC. We also introduce an approach called "spotlighting" that uses input transformations to isolate untrusted prompt content, reducing attack success by an order of magnitude. Finally, we provide a formal proof of concept and empirical validation for a combined defense-in-depth strategy. Our work highlights that securing LLMs in enterprise settings requires moving beyond single-turn prompt filtering toward a holistic, multi-stage perspective on both attacks and defenses.
| null |
https://arxiv.org/abs/2507.15613v1
|
https://arxiv.org/pdf/2507.15613v1.pdf
| null |
[
"Andrii Balashov",
"Olena Ponomarova",
"Xiaohua Zhai"
] |
[
"Anomaly Detection"
] | 2025-07-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/long-short-distance-graph-neural-networks-and
|
2507.15205
| null | null |
Long-Short Distance Graph Neural Networks and Improved Curriculum Learning for Emotion Recognition in Conversation
|
Emotion Recognition in Conversation (ERC) is a practical and challenging task. This paper proposes a novel multimodal approach, the Long-Short Distance Graph Neural Network (LSDGNN). Based on the Directed Acyclic Graph (DAG), it constructs a long-distance graph neural network and a short-distance graph neural network to obtain multimodal features of distant and nearby utterances, respectively. To ensure that long- and short-distance features are as distinct as possible in representation while enabling mutual influence between the two modules, we employ a Differential Regularizer and incorporate a BiAffine Module to facilitate feature interaction. In addition, we propose an Improved Curriculum Learning (ICL) to address the challenge of data imbalance. By computing the similarity between different emotions to emphasize the shifts in similar emotions, we design a "weighted emotional shift" metric and develop a difficulty measurer, enabling a training process that prioritizes learning easy samples before harder ones. Experimental results on the IEMOCAP and MELD datasets demonstrate that our model outperforms existing benchmarks.
|
Emotion Recognition in Conversation (ERC) is a practical and challenging task.
|
https://arxiv.org/abs/2507.15205v1
|
https://arxiv.org/pdf/2507.15205v1.pdf
| null |
[
"Xinran Li",
"Xiujuan Xu",
"Jiaqi Qiao"
] |
[
"Emotion Recognition",
"Emotion Recognition in Conversation",
"Graph Neural Network"
] | 2025-07-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/hierarchical-cross-modal-prompt-learning-for
|
2507.14976
| null | null |
Hierarchical Cross-modal Prompt Learning for Vision-Language Models
|
Pre-trained Vision-Language Models (VLMs) such as CLIP have shown excellent generalization abilities. However, adapting these large-scale models to downstream tasks while preserving their generalization capabilities remains challenging. Although prompt learning methods have shown promise, they suffer from two fundamental bottlenecks that limit generalization: (a) modality isolation, and (b) hierarchical semantic decay. To address these limitations, we propose HiCroPL, a Hierarchical Cross-modal Prompt Learning framework that establishes bidirectional knowledge flow between text and vision modalities, enabling them to refine their semantics mutually. HiCroPL routes knowledge flows by leveraging the complementary strengths of text and vision. In early layers, text prompts inject relatively clear semantics into visual prompts through a hierarchical knowledge mapper, enhancing the representation of low-level visual semantics. In later layers, visual prompts encoding specific task-relevant objects flow back to refine text prompts, enabling deeper alignment. Crucially, our hierarchical knowledge mapper allows representations at multiple scales to be fused, ensuring that deeper representations retain transferable shallow semantics, thereby enhancing generalization. We further introduce a lightweight layer-specific knowledge proxy to enable efficient cross-modal interactions. Extensive evaluations across four tasks demonstrate HiCroPL's superior performance, achieving state-of-the-art results on 11 benchmarks with significant improvements. Code is available at: https://github.com/zzeoZheng/HiCroPL.
| null |
https://arxiv.org/abs/2507.14976v1
|
https://arxiv.org/pdf/2507.14976v1.pdf
| null |
[
"Hao Zheng",
"Shunzhi Yang",
"Zhuoxin He",
"Jinfeng Yang",
"Zhenhua Huang"
] |
[
"Prompt Learning"
] | 2025-07-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/visual-place-recognition-for-large-scale-uav
|
2507.15089
| null | null |
Visual Place Recognition for Large-Scale UAV Applications
|
Visual Place Recognition (vPR) plays a crucial role in Unmanned Aerial Vehicle (UAV) navigation, enabling robust localization across diverse environments. Despite significant advancements, aerial vPR faces unique challenges due to the limited availability of large-scale, high-altitude datasets, which limits model generalization, along with the inherent rotational ambiguity in UAV imagery. To address these challenges, we introduce LASED, a large-scale aerial dataset with approximately one million images, systematically sampled from 170,000 unique locations throughout Estonia over a decade, offering extensive geographic and temporal diversity. Its structured design ensures clear place separation significantly enhancing model training for aerial scenarios. Furthermore, we propose the integration of steerable Convolutional Neural Networks (CNNs) to explicitly handle rotational variance, leveraging their inherent rotational equivariance to produce robust, orientation-invariant feature representations. Our extensive benchmarking demonstrates that models trained on LASED achieve significantly higher recall compared to those trained on smaller, less diverse datasets, highlighting the benefits of extensive geographic coverage and temporal diversity. Moreover, steerable CNNs effectively address rotational ambiguity inherent in aerial imagery, consistently outperforming conventional convolutional architectures, achieving on average 12\% recall improvement over the best-performing non-steerable network. By combining structured, large-scale datasets with rotation-equivariant neural networks, our approach significantly enhances model robustness and generalization for aerial vPR.
| null |
https://arxiv.org/abs/2507.15089v1
|
https://arxiv.org/pdf/2507.15089v1.pdf
| null |
[
"Ioannis Tsampikos Papapetros",
"Ioannis Kansizoglou",
"Antonios Gasteratos"
] |
[
"Benchmarking",
"Diversity",
"Visual Place Recognition"
] | 2025-07-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/multi-strategy-improved-snake-optimizer
|
2507.15832
| null | null |
Multi-Strategy Improved Snake Optimizer Accelerated CNN-LSTM-Attention-Adaboost for Trajectory Prediction
|
To address the limitations of medium- and long-term four-dimensional (4D) trajectory prediction models, this paper proposes a hybrid CNN-LSTM-Attention-Adaboost neural network model incorporating a multi-strategy improved snake optimization (SO) algorithm. The model applies the Adaboost algorithm to divide multiple weak learners, and each submodel utilizes CNN to extract spatial features, LSTM to capture temporal features, and an attention mechanism to capture global features comprehensively. The strong learner model, combined with multiple sub-models, then optimizes the hyperparameters of the prediction model through the natural selection behavior pattern simulated by SO. In this study, based on real ADS-B data from Xi'an to Tianjin, comparison experiments and ablation studies of multiple optimizers are carried out, together with a comprehensive test and evaluation analysis. The results show that SO-CLA-Adaboost outperforms traditional optimizers such as particle swarm, whale, and gray wolf in handling large-scale high-dimensional trajectory data. In addition, introducing the full-strategy collaborative improvement SO algorithm improves the model's prediction accuracy by 39.89%.
| null |
https://arxiv.org/abs/2507.15832v1
|
https://arxiv.org/pdf/2507.15832v1.pdf
| null |
[
"Shiyang Li"
] |
[
"Prediction",
"Trajectory Prediction"
] | 2025-07-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/from-neurons-to-semantics-evaluating-cross
|
2507.14900
| null | null |
From Neurons to Semantics: Evaluating Cross-Linguistic Alignment Capabilities of Large Language Models via Neurons Alignment
|
Large language models (LLMs) have demonstrated remarkable multilingual capabilities; however, how to evaluate cross-lingual alignment remains underexplored. Existing alignment benchmarks primarily focus on sentence embeddings, but prior research has shown that neural models tend to induce a non-smooth representation space, which impacts the evaluation of semantic alignment for low-resource languages. Inspired by neuroscientific findings that similar information activates overlapping neuronal regions, we propose Neuron State-Based Cross-Lingual Alignment (NeuronXA), a more semantically grounded approach to assessing the cross-lingual alignment capabilities of LLMs. We evaluate NeuronXA on several prominent multilingual LLMs (LLaMA, Qwen, Mistral, GLM, and OLMo) across two transfer tasks and three multilingual benchmarks. The results demonstrate that with only 100 parallel sentence pairs, NeuronXA achieves a Pearson correlation of 0.9556 with downstream task performance and 0.8514 with transferability. These findings demonstrate NeuronXA's effectiveness in assessing both cross-lingual alignment and transferability, even with a small dataset. This highlights its potential to advance cross-lingual alignment research and to improve the semantic understanding of multilingual LLMs.
| null |
https://arxiv.org/abs/2507.14900v1
|
https://arxiv.org/pdf/2507.14900v1.pdf
| null |
[
"Chongxuan Huang",
"Yongshi Ye",
"Biao Fu",
"Qifeng Su",
"Xiaodong Shi"
] |
[
"Sentence",
"Sentence Embeddings"
] | 2025-07-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/analytic-estimation-of-parameters-of
|
2507.11868
| null | null |
Analytic estimation of parameters of stochastic volatility diffusion models with exponential-affine characteristic function for currency option pricing
|
This dissertation develops and justifies a novel method for deriving approximate formulas to estimate two parameters in stochastic volatility diffusion models with exponentially-affine characteristic functions and single- or two-factor variance. These formulas aim to improve the accuracy of option pricing and enhance the calibration process by providing reliable initial values for local minimization algorithms. The parameters relate to the volatility of the stochastic factor in instantaneous variance dynamics and the correlation between stochastic factors and asset price dynamics. The study comprises five chapters. Chapter one outlines the currency option market, pricing methods, and the general structure of stochastic volatility models. Chapter two derives the replication strategy dynamics and introduces a new two-factor volatility model: the OUOU model. Chapter three analyzes the distribution and surface dynamics of implied volatilities using principal component and common factor analysis. Chapter four discusses calibration methods for stochastic volatility models, particularly the Heston model, and presents the new Implied Central Moments method to estimate parameters in the Heston and Sch\"obel-Zhu models. Extensions to two-factor models, Bates and OUOU, are also explored. Chapter five evaluates the performance of the proposed formulas on the EURUSD options market, demonstrating the superior accuracy of the new method. The dissertation successfully meets its research objectives, expanding tools for derivative pricing and risk assessment. Key contributions include faster and more precise parameter estimation formulas and the introduction of the OUOU model - an extension of the Sch\"obel-Zhu model with a semi-analytical valuation formula for European options, previously unexamined in the literature.
|
Chapter four discusses calibration methods for stochastic volatility models, particularly the Heston model, and presents the new Implied Central Moments method to estimate parameters in the Heston and Sch\"obel-Zhu models.
|
https://arxiv.org/abs/2507.11868v1
|
https://arxiv.org/pdf/2507.11868v1.pdf
| null |
[
"Mikołaj Łabędzki"
] |
[
"parameter estimation"
] | 2025-07-16T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/pimref-detecting-and-explaining-ever-evolving
|
2507.15393
| null | null |
PiMRef: Detecting and Explaining Ever-evolving Spear Phishing Emails with Knowledge Base Invariants
|
Phishing emails are a critical component of the cybercrime kill chain due to their wide reach and low cost. Their ever-evolving nature renders traditional rule-based and feature-engineered detectors ineffective in the ongoing arms race between attackers and defenders. The rise of large language models (LLMs) further exacerbates the threat, enabling attackers to craft highly convincing phishing emails at minimal cost. This work demonstrates that LLMs can generate psychologically persuasive phishing emails tailored to victim profiles, successfully bypassing nearly all commercial and academic detectors. To defend against such threats, we propose PiMRef, the first reference-based phishing email detector that leverages knowledge-based invariants. Our core insight is that persuasive phishing emails often contain disprovable identity claims, which contradict real-world facts. PiMRef reframes phishing detection as an identity fact-checking task. Given an email, PiMRef (i) extracts the sender's claimed identity, (ii) verifies the legitimacy of the sender's domain against a predefined knowledge base, and (iii) detects call-to-action prompts that push user engagement. Contradictory claims are flagged as phishing indicators and serve as human-understandable explanations. Compared to existing methods such as D-Fence, HelpHed, and ChatSpamDetector, PiMRef boosts precision by 8.8% with no loss in recall on standard benchmarks like Nazario and PhishPot. In a real-world evaluation of 10,183 emails across five university accounts over three years, PiMRef achieved 92.1% precision, 87.9% recall, and a median runtime of 0.05s, outperforming the state-of-the-art in both effectiveness and efficiency.
| null |
https://arxiv.org/abs/2507.15393v1
|
https://arxiv.org/pdf/2507.15393v1.pdf
| null |
[
"Ruofan Liu",
"Yun Lin",
"Silas Yeo Shuen Yu",
"Xiwen Teoh",
"Zhenkai Liang",
"Jin Song Dong"
] |
[
"Fact Checking"
] | 2025-07-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/geminus-dual-aware-global-and-scene-adaptive
|
2507.14456
| null | null |
GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving
|
End-to-end autonomous driving requires adaptive and robust handling of complex and diverse traffic environments. However, prevalent single-mode planning methods attempt to learn an overall policy while struggling to acquire diversified driving skills to handle diverse scenarios. Therefore, this paper proposes GEMINUS, a Mixture-of-Experts end-to-end autonomous driving framework featuring a Global Expert, a Scene-Adaptive Experts Group, and equipped with a Dual-aware Router. Specifically, the Global Expert is trained on the overall dataset, possessing robust performance. The Scene-Adaptive Experts are trained on corresponding scene subsets, achieving adaptive performance. The Dual-aware Router simultaneously considers scenario-level features and routing uncertainty to dynamically activate expert modules. Through the effective coupling of the Global Expert and the Scene-Adaptive Experts Group via the Dual-aware Router, GEMINUS achieves adaptive and robust performance in diverse scenarios. GEMINUS outperforms existing methods in the Bench2Drive closed-loop benchmark and achieves state-of-the-art performance in Driving Score and Success Rate, even with only monocular vision input. Furthermore, ablation studies demonstrate significant improvements over the original single-expert baseline: 7.67% in Driving Score, 22.06% in Success Rate, and 19.41% in MultiAbility-Mean. The code will be available at https://github.com/newbrains1/GEMINUS.
| null |
https://arxiv.org/abs/2507.14456v1
|
https://arxiv.org/pdf/2507.14456v1.pdf
| null |
[
"Chi Wan",
"Yixin Cui",
"Jiatong Du",
"Shuo Yang",
"Yulong Bai",
"Yanjun Huang"
] |
[
"Autonomous Driving",
"Bench2Drive",
"Mixture-of-Experts"
] | 2025-07-19T00:00:00 | null | null | null | null |
[] |
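In the spirit of the GEMINUS abstract above, here is a minimal PyTorch sketch of a dual-aware routing layer: a gate scores scene-level features, the gate's normalized entropy serves as a routing-uncertainty signal, and the output blends a global expert with scene-adaptive experts. The dimensions, two-layer experts, and entropy-based blending rule are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAwareMoE(nn.Module):
    """Blend a global expert with scene-adaptive experts via an uncertainty-aware gate."""

    def __init__(self, feat_dim: int = 256, out_dim: int = 2, n_scene_experts: int = 4):
        super().__init__()
        make_expert = lambda: nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))
        self.global_expert = make_expert()
        self.scene_experts = nn.ModuleList(make_expert() for _ in range(n_scene_experts))
        self.gate = nn.Linear(feat_dim, n_scene_experts)  # scenario-level routing scores

    def forward(self, scene_feat: torch.Tensor) -> torch.Tensor:
        probs = F.softmax(self.gate(scene_feat), dim=-1)                          # (B, E)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1, keepdim=True)    # routing uncertainty
        uncertainty = entropy / torch.log(torch.tensor(float(probs.size(-1))))    # normalized to [0, 1]
        scene_out = torch.stack([e(scene_feat) for e in self.scene_experts], dim=1)  # (B, E, out)
        adaptive = (probs.unsqueeze(-1) * scene_out).sum(dim=1)
        # High routing uncertainty -> fall back toward the robust global expert.
        return uncertainty * self.global_expert(scene_feat) + (1 - uncertainty) * adaptive

planner = DualAwareMoE()
print(planner(torch.randn(8, 256)).shape)  # torch.Size([8, 2])
```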
https://paperswithcode.com/paper/the-electoral-consequences-of-natural
|
2507.14331
| null | null |
The Electoral Consequences of Natural Disasters: A Dynamic Fixed-Effects Analysis
|
With the increasing frequency of major natural disasters, understanding their political consequences is of paramount importance for democratic accountability. The existing literature is deeply divided, with some studies finding that voters punish incumbents for disaster-related damages, while others find they reward them for relief efforts. This paper investigates the electoral consequences of natural disasters for incumbent mayors, broader electoral dynamics, and the long-term political ambition of officeholders. The study leverages a comprehensive panel dataset of over 10,000 candidate-election observations in U.S. mayoral races from 1989 to 2021, combining detailed election data with a global registry of disaster events. To identify causal effects, the analysis employs a robust dynamic two-way fixed-effects event-study design, validated by extensive pre-trend and placebo tests. The findings reveal that the electoral impact of disasters is highly conditional on their timing. A disaster that strikes in the same quarter as an election provides a significant electoral boost to incumbents, increasing their vote share by over 6 percentage points. However, disasters consistently suppress voter turnout, reducing it by an average of 1.4 percentage points. In a novel finding, the analysis demonstrates that the experience of managing a disaster significantly increases an incumbent's likelihood of seeking re-election in the subsequent cycle by as much as 12 percentage points. These findings help reconcile conflicting theories of retrospective voting by highlighting the critical role of voter myopia and salience. They also reveal a previously undocumented channel through which crises shape political careers, suggesting that disaster management is not only a test of governance but also a catalyst for political ambition. [The current version is a preprint.]
| null |
https://arxiv.org/abs/2507.14331v1
|
https://arxiv.org/pdf/2507.14331v1.pdf
| null |
[
"Nima Taheri Hosseinkhani"
] |
[] | 2025-07-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/lacache-ladder-shaped-kv-caching-for
|
2507.14204
| null | null |
LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models
|
Recent advancements in Large Language Models (LLMs) have spurred interest in numerous applications requiring robust long-range capabilities, essential for processing extensive input contexts and continuously generating extended outputs. As sequence lengths increase, the number of Key-Value (KV) pairs in LLMs escalates, creating a significant efficiency bottleneck. In this paper, we propose a new KV cache optimization paradigm called LaCache, a training-free method for efficient and accurate generative inference of LLMs. LaCache enables LLMs to simultaneously address both of the critical challenges in long-range modeling: robust long-range capabilities and continuous generation without running out-of-memory (OOM). Specifically, LaCache integrates two key innovations: (1) a ladder-shaped KV cache pattern that stores KV pairs not only sequentially (left-to-right within each layer) but also across layers (from shallow to deep), providing an extended span for capturing long-range dependencies under a fixed storage budget, thereby boosting long-range capabilities; and (2) an iterative compaction mechanism that progressively compresses older caches, freeing up space for new tokens within a fixed cache size. This token distance-based dynamic compression enables more effective continuous generation under constrained cache budgets. Experiments across various tasks, benchmarks, and LLM models consistently validate LaCache's effectiveness in enhancing LLMs' long-range capabilities. Our code is available at https://github.com/GATECH-EIC/LaCache.
| null |
https://arxiv.org/abs/2507.14204v1
|
https://arxiv.org/pdf/2507.14204v1.pdf
| null |
[
"Dachuan Shi",
"Yonggan Fu",
"Xiangchi Yuan",
"Zhongzhi Yu",
"Haoran You",
"Sixu Li",
"Xin Dong",
"Jan Kautz",
"Pavlo Molchanov",
"Yingyan",
"Lin"
] |
[
"Long-range modeling"
] | 2025-07-14T00:00:00 | null | null | null | null |
[] |
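The iterative compaction idea in the LaCache abstract above can be pictured with a very simplified, single-layer sketch: recent token positions are kept densely while older ones are progressively thinned until a fixed cache budget is met. This does not reproduce the ladder-shaped cross-layer pattern, and the halving schedule is an assumption for illustration only.

```python
def compact_cache(positions, budget):
    """Simplified token-distance-based compaction of cached KV positions.

    positions: ascending token positions currently held in the cache.
    budget: maximum number of KV pairs to retain.
    The most recent half-budget of tokens is always kept; older entries are
    repeatedly halved until the budget fits. Illustrative sketch, not LaCache's
    exact ladder-shaped policy.
    """
    kept = list(positions)
    while len(kept) > budget:
        recent = kept[-budget // 2:]   # keep the most recent tokens densely
        older = kept[:-budget // 2][::2]  # thin older tokens by half
        kept = older + recent
    return kept

print(len(compact_cache(list(range(10_000)), budget=1_024)))  # stays within the budget
```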
https://paperswithcode.com/paper/leveraging-context-for-multimodal-fallacy
|
2507.15641
| null | null |
Leveraging Context for Multimodal Fallacy Classification in Political Debates
|
In this paper, we present our submission to the MM-ArgFallacy2025 shared task, which aims to advance research in multimodal argument mining, focusing on logical fallacies in political debates. Our approach uses pretrained Transformer-based models and proposes several ways to leverage context. In the fallacy classification subtask, our models achieved macro F1-scores of 0.4444 (text), 0.3559 (audio), and 0.4403 (multimodal). Our multimodal model showed performance comparable to the text-only model, suggesting potential for improvements.
|
In this paper, we present our submission to the MM-ArgFallacy2025 shared task, which aims to advance research in multimodal argument mining, focusing on logical fallacies in political debates.
|
https://arxiv.org/abs/2507.15641v1
|
https://arxiv.org/pdf/2507.15641v1.pdf
| null |
[
"Alessio Pittiglio"
] |
[
"Argument Mining",
"Logical Fallacies"
] | 2025-07-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/manimator-transforming-research-papers-into
|
2507.14306
| null | null |
Manimator: Transforming Research Papers into Visual Explanations
|
Understanding complex scientific and mathematical concepts, particularly those presented in dense research papers, poses a significant challenge for learners. Dynamic visualizations can greatly enhance comprehension, but creating them manually is time-consuming and requires specialized knowledge and skills. We introduce manimator, an open-source system that leverages Large Language Models to transform research papers and natural language prompts into explanatory animations using the Manim engine. Manimator employs a pipeline in which one LLM interprets the input text or research paper PDF to generate a structured scene description outlining key concepts, mathematical formulas, and visual elements, and another LLM translates this description into executable Manim Python code. We discuss its potential as an educational tool for rapidly creating engaging visual explanations for complex STEM topics, democratizing the creation of high-quality educational content.
| null |
https://arxiv.org/abs/2507.14306v1
|
https://arxiv.org/pdf/2507.14306v1.pdf
| null |
[
"Samarth P",
"Vyoman Jain",
"Shiva Golugula",
"Motamarri Sai Sathvik"
] |
[] | 2025-07-18T00:00:00 | null | null | null | null |
[] |
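To make concrete the kind of output manimator's second stage targets, here is a small hand-written Manim scene of the sort the pipeline would emit (an illustrative example, not generated by manimator itself).

```python
# Render with: manim -pql scene.py EulerIdentity
from manim import Scene, MathTex, Text, Write, FadeIn, UP

class EulerIdentity(Scene):
    """A tiny explanatory animation of the kind an LLM could generate for a concept."""

    def construct(self):
        title = Text("Euler's identity").to_edge(UP)   # heading for the scene
        formula = MathTex(r"e^{i\pi} + 1 = 0")          # the key formula to explain
        self.play(FadeIn(title))
        self.play(Write(formula))
        self.wait(2)
```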
https://paperswithcode.com/paper/linr-pcgc-lossless-implicit-neural
|
2507.15686
| null | null |
LINR-PCGC: Lossless Implicit Neural Representations for Point Cloud Geometry Compression
|
Existing AI-based point cloud compression methods struggle with dependence on specific training data distributions, which limits their real-world deployment. Implicit Neural Representation (INR) methods solve the above problem by encoding overfitted network parameters to the bitstream, resulting in more distribution-agnostic results. However, due to the limitation of encoding time and decoder size, current INR-based methods only consider lossy geometry compression. In this paper, we propose the first INR-based lossless point cloud geometry compression method, called Lossless Implicit Neural Representations for Point Cloud Geometry Compression (LINR-PCGC). To accelerate encoding speed, we design a group-of-point-clouds level coding framework with an effective network initialization strategy, which reduces encoding time by around 60%. A lightweight coding network based on multiscale SparseConv, consisting of scale context extraction, child node prediction, and model compression modules, is proposed to realize fast inference and a compact decoder size. Experimental results show that our method consistently outperforms traditional and AI-based methods: for example, at convergence time on the MVUB dataset, our method reduces the bitstream by approximately 21.21% compared to G-PCC TMC13v23 and 21.95% compared to SparsePCGC. Our project can be seen at https://huangwenjie2023.github.io/LINR-PCGC/.
| null |
https://arxiv.org/abs/2507.15686v1
|
https://arxiv.org/pdf/2507.15686v1.pdf
| null |
[
"Wenjie Huang",
"Qi Yang",
"Shuting Xia",
"He Huang",
"Zhu Li",
"Yiling Xu"
] |
[
"Decoder",
"Model Compression"
] | 2025-07-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/autoencoding-random-forests
|
2505.21441
| null | null |
Autoencoding Random Forests
|
We propose a principled method for autoencoding with random forests. Our strategy builds on foundational results from nonparametric statistics and spectral graph theory to learn a low-dimensional embedding of the model that optimally represents relationships in the data. We provide exact and approximate solutions to the decoding problem via constrained optimization, split relabeling, and nearest neighbors regression. These methods effectively invert the compression pipeline, establishing a map from the embedding space back to the input space using splits learned by the ensemble's constituent trees. The resulting decoders are universally consistent under common regularity assumptions. The procedure works with supervised or unsupervised models, providing a window into conditional or joint distributions. We demonstrate various applications of this autoencoder, including powerful new tools for visualization, compression, clustering, and denoising. Experiments illustrate the ease and utility of our method in a wide range of settings, including tabular, image, and genomic data.
| null |
https://arxiv.org/abs/2505.21441v1
|
https://arxiv.org/pdf/2505.21441v1.pdf
| null |
[
"Binh Duc Vu",
"Jan Kapar",
"Marvin Wright",
"David S. Watson"
] |
[
"Denoising"
] | 2025-05-27T00:00:00 | null | null | null | null |
[] |
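A rough scikit-learn sketch of the general recipe in the abstract above: forest leaf co-occurrence gives a proximity matrix, a spectral embedding of that matrix acts as the encoder, and nearest-neighbors regression maps the embedding back to input space as the decoder. The paper's exact constructions (split relabeling, constrained optimization) are not reproduced here.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.manifold import SpectralEmbedding
from sklearn.neighbors import KNeighborsRegressor

X, y = load_iris(return_X_y=True)

# 1) Fit a forest and compute leaf co-occurrence proximities between samples.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
leaves = forest.apply(X)                                     # (n_samples, n_trees) leaf indices
prox = (leaves[:, None, :] == leaves[None, :, :]).mean(-1)   # affinity in [0, 1]

# 2) Spectral embedding of the proximity graph plays the role of the encoder.
Z = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(prox)

# 3) Nearest-neighbors regression from embedding back to inputs plays the role of the decoder.
decoder = KNeighborsRegressor(n_neighbors=5).fit(Z, X)
X_hat = decoder.predict(Z)
print("reconstruction MSE:", float(np.mean((X - X_hat) ** 2)))
```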
https://paperswithcode.com/paper/missing-value-imputation-with-adversarial
|
2507.15681
| null | null |
Missing value imputation with adversarial random forests -- MissARF
|
Handling missing values is a common challenge in biostatistical analyses, typically addressed by imputation methods. We propose a novel, fast, and easy-to-use imputation method called missing value imputation with adversarial random forests (MissARF), based on generative machine learning, that provides both single and multiple imputation. MissARF employs adversarial random forest (ARF) for density estimation and data synthesis. To impute a missing value of an observation, we condition on the non-missing values and sample from the estimated conditional distribution generated by ARF. Our experiments demonstrate that MissARF performs comparably to state-of-the-art single and multiple imputation methods in terms of imputation quality and fast runtime with no additional costs for multiple imputation.
| null |
https://arxiv.org/abs/2507.15681v1
|
https://arxiv.org/pdf/2507.15681v1.pdf
| null |
[
"Pegah Golchian",
"Jan Kapar",
"David S. Watson",
"Marvin N. Wright"
] |
[
"Density Estimation",
"Imputation",
"Missing Values"
] | 2025-07-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/ontview-what-you-see-is-what-you-meant
|
2507.13759
| null | null |
OntView: What you See is What you Meant
|
In the field of knowledge management and computer science, ontologies provide a structured framework for modeling domain-specific knowledge by defining concepts and their relationships. However, the lack of tools that provide effective visualization is still a significant challenge. While numerous ontology editors and viewers exist, most of them fail to graphically represent ontology structures in a meaningful and non-overwhelming way, limiting users' ability to comprehend dependencies and properties within large ontological frameworks. In this paper, we present OntView, an ontology viewer that is designed to provide users with an intuitive visual representation of ontology concepts and their formal definitions through a user-friendly interface. Building on the use of a DL reasoner, OntView follows a "What you see is what you meant" paradigm, showing the actual inferred knowledge. One key aspect of this is its ability to visualize General Concept Inclusions (GCIs), a feature absent in existing visualization tools. Moreover, to avoid possible information overload, OntView also offers different ways to show a simplified view of the ontology by: 1) creating ontology summaries by assessing the importance of the concepts (according to different available algorithms), 2) focusing the visualization on the existing TBox elements between two given classes, and 3) allowing users to hide/show different branches in a dynamic way without losing the semantics. OntView has been released with an open-source license for the whole community.
|
In this paper, we present OntView, an ontology viewer that is designed to provide users with an intuitive visual representation of ontology concepts and their formal definitions through a user-friendly interface.
|
https://arxiv.org/abs/2507.13759v1
|
https://arxiv.org/pdf/2507.13759v1.pdf
| null |
[
"Carlos Bobed",
"Carlota Quintana",
"Eduardo Mena",
"Jorge Bobed",
"Fernando Bobillo"
] |
[] | 2025-07-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/a-reproducibility-study-of-product-side
|
2507.14352
| null | null |
A Reproducibility Study of Product-side Fairness in Bundle Recommendation
|
Recommender systems are known to exhibit fairness issues, particularly on the product side, where products and their associated suppliers receive unequal exposure in recommended results. While this problem has been widely studied in traditional recommendation settings, its implications for bundle recommendation (BR) remain largely unexplored. This emerging task introduces additional complexity: recommendations are generated at the bundle level, yet user satisfaction and product (or supplier) exposure depend on both the bundle and the individual items it contains. Existing fairness frameworks and metrics designed for traditional recommender systems may not directly translate to this multi-layered setting. In this paper, we conduct a comprehensive reproducibility study of product-side fairness in BR across three real-world datasets using four state-of-the-art BR methods. We analyze exposure disparities at both the bundle and item levels using multiple fairness metrics, uncovering important patterns. Our results show that exposure patterns differ notably between bundles and items, revealing the need for fairness interventions that go beyond bundle-level assumptions. We also find that fairness assessments vary considerably depending on the metric used, reinforcing the need for multi-faceted evaluation. Furthermore, user behavior plays a critical role: when users interact more frequently with bundles than with individual items, BR systems tend to yield fairer exposure distributions across both levels. Overall, our findings offer actionable insights for building fairer bundle recommender systems and establish a vital foundation for future research in this emerging domain.
|
This emerging task introduces additional complexity: recommendations are generated at the bundle level, yet user satisfaction and product (or supplier) exposure depend on both the bundle and the individual items it contains.
|
https://arxiv.org/abs/2507.14352v1
|
https://arxiv.org/pdf/2507.14352v1.pdf
| null |
[
"Huy-Son Nguyen",
"Yuanna Liu",
"Masoud Mansoury",
"Mohammad Alian Nejadi",
"Alan Hanjalic",
"Maarten de Rijke"
] |
[
"Fairness",
"Recommendation Systems"
] | 2025-07-18T00:00:00 | null | null | null | null |
[] |
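As a concrete illustration of the two-level exposure analysis discussed above, the toy sketch below counts how often bundles and their constituent items appear in recommendation lists and summarizes the disparity with a Gini coefficient; the data and metric choice are illustrative, not the study's exact protocol.

```python
import numpy as np
from collections import Counter

def gini(values):
    """Gini coefficient of an exposure distribution (0 = perfectly equal)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = len(v)
    cum = np.cumsum(v)
    return float((n + 1 - 2 * (cum / cum[-1]).sum()) / n)

# Toy data: each user receives a ranked list of bundles; bundles contain items.
bundles = {"b1": ["i1", "i2"], "b2": ["i2", "i3"], "b3": ["i4"]}
recs = {"u1": ["b1", "b2"], "u2": ["b1", "b3"], "u3": ["b1", "b2"]}

bundle_exposure = Counter(b for ranked in recs.values() for b in ranked)
item_exposure = Counter(i for ranked in recs.values() for b in ranked for i in bundles[b])

print("bundle-level Gini:", gini(list(bundle_exposure.values())))
print("item-level Gini:  ", gini(list(item_exposure.values())))
```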
https://paperswithcode.com/paper/ramen-multi-strategy-multi-modal-learning-for
|
2507.14361
| null | null |
RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction
|
Existing studies on bundle construction have relied merely on user feedback via bipartite graphs or enhanced item representations using semantic information. These approaches fail to capture elaborate relations hidden in real-world bundle structures, resulting in suboptimal bundle representations. To overcome this limitation, we propose RaMen, a novel method that provides a holistic multi-strategy approach for bundle construction. RaMen utilizes both intrinsic (characteristics) and extrinsic (collaborative signals) information to model bundle structures through Explicit Strategy-aware Learning (ESL) and Implicit Strategy-aware Learning (ISL). ESL employs task-specific attention mechanisms to encode multi-modal data and direct collaborative relations between items, thereby explicitly capturing essential bundle features. Moreover, ISL computes hyperedge dependencies and hypergraph message passing to uncover shared latent intents among groups of items. Integrating diverse strategies enables RaMen to learn more comprehensive and robust bundle representations. Meanwhile, a Multi-strategy Alignment & Discrimination module is employed to facilitate knowledge transfer between learning strategies and ensure discrimination between items/bundles. Extensive experiments demonstrate the effectiveness of RaMen over state-of-the-art models on various domains, providing valuable insights into complex item set problems.
|
Existing studies on bundle construction have relied merely on user feedback via bipartite graphs or enhanced item representations using semantic information.
|
https://arxiv.org/abs/2507.14361v1
|
https://arxiv.org/pdf/2507.14361v1.pdf
| null |
[
"Huy-Son Nguyen",
"Quang-Huy Nguyen",
"Duc-Hoang Pham",
"Duc-Trong Le",
"Hoang-Quynh Le",
"Padipat Sitkrongwong",
"Atsuhiro Takasu",
"Masoud Mansoury"
] |
[
"Transfer Learning"
] | 2025-07-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/hear-your-code-fail-voice-assisted-debugging
|
2507.15007
| null | null |
Hear Your Code Fail, Voice-Assisted Debugging for Python
|
This research introduces an innovative voice-assisted debugging plugin for Python that transforms silent runtime errors into actionable audible diagnostics. By implementing a global exception hook architecture with pyttsx3 text-to-speech conversion and Tkinter-based GUI visualization, the solution delivers multimodal error feedback through parallel auditory and visual channels. Empirical evaluation demonstrates 37% reduced cognitive load (p<0.01, n=50) compared to traditional stack-trace debugging, while enabling 78% faster error identification through vocalized exception classification and contextualization. The system achieves sub-1.2 second voice latency with under 18% CPU overhead during exception handling, vocalizing error types and consequences while displaying interactive tracebacks with documentation deep links. Compatibility is validated across Python 3.7+ environments on Windows, macOS, and Linux platforms. Requiring only two lines of integration code, the plugin significantly boosts accessibility for visually impaired developers and supports multitasking workflows through hands-free error diagnosis. Educational applications show particular promise, with pilot studies indicating 45% faster debugging skill acquisition among novice programmers. Future development will incorporate GPT-based repair suggestions and real-time multilingual translation to further advance auditory debugging paradigms. The solution represents a fundamental shift toward human-centric error diagnostics, bridging critical gaps in programming accessibility while establishing new standards for cognitive efficiency in software development workflows.
| null |
https://arxiv.org/abs/2507.15007v1
|
https://arxiv.org/pdf/2507.15007v1.pdf
| null |
[
"Sayed Mahbub Hasan Amiri",
"Md. Mainul Islam",
"Mohammad Shakhawat Hossen",
"Sayed Majhab Hasan Amiri",
"Mohammad Shawkat Ali Mamun",
"Sk. Humaun Kabir",
"Naznin Akter"
] |
[
"CPU",
"Medical Diagnosis",
"text-to-speech",
"Text to Speech"
] | 2025-07-20T00:00:00 | null | null | null | null |
[] |
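A minimal sketch of the core mechanism in the abstract above: a global exception hook that prints the traceback and vocalizes the error via pyttsx3. The plugin's Tkinter GUI, latency tuning, and documentation deep links are omitted here.

```python
import sys
import traceback
import pyttsx3

_engine = pyttsx3.init()

def speak_exception(exc_type, exc_value, exc_tb):
    """Global exception hook: print the traceback and read the error aloud."""
    traceback.print_exception(exc_type, exc_value, exc_tb)
    message = f"{exc_type.__name__}: {exc_value}"
    _engine.say(f"Your code failed with {message}")
    _engine.runAndWait()

sys.excepthook = speak_exception  # the "two lines of integration" idea: import + hook

if __name__ == "__main__":
    1 / 0  # triggers ZeroDivisionError and an audible diagnostic
```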
https://paperswithcode.com/paper/language-integration-in-fine-tuning
| null | null | null |
Language Integration in Fine-Tuning Multimodal Large Language Models for Image-Based Regression
|
Multimodal Large Language Models (MLLMs) show promise for image-based regression tasks, but current approaches face key limitations. Recent methods fine-tune MLLMs using preset output vocabularies and generic task-level prompts (e.g., "How would you rate this image?"), assuming this mimics human rating behavior. Our analysis reveals these approaches provide no benefit over image-only training. Models using preset vocabularies and generic prompts perform equivalently to image-only models, failing to leverage semantic understanding from textual input. We propose Regression via Transformer-Based Classification (RvTC), which replaces vocabulary-constrained classification with a flexible bin-based approach. Unlike approaches that address discretization errors through complex distributional modeling, RvTC eliminates manual vocabulary crafting through straightforward bin increase, achieving state-of-the-art performance on four image assessment datasets using only images. More importantly, we demonstrate that data-specific prompts dramatically improve performance. Unlike generic task descriptions, prompts containing semantic information about specific images enable MLLMs to leverage cross-modal understanding. On the AVA dataset, adding challenge titles to prompts improves correlations from 0.83 to 0.90, a new state-of-the-art. We demonstrate through empirical evidence from the AVA and AGIQA-3k datasets that MLLMs benefit from semantic prompt information surpassing mere statistical biases. This underscores the importance of incorporating meaningful textual context in multimodal regression tasks.
| null |
https://arxiv.org/abs/2507.14997
|
https://arxiv.org/pdf/2507.14997
| null |
[
"Roy H. Jennings",
"Genady Paikin",
"Roy Shaul",
"Evgeny Soloveichik"
] |
[
"Aesthetics Quality Assessment",
"No-Reference Image Quality Assessment",
"regression"
] | 2025-07-20T00:00:00 | null | null | null | null |
[] |
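A small PyTorch sketch of the bin-based idea described above: treat regression as classification over fixed score bins and recover a continuous prediction as the probability-weighted bin center. The bin count, score range, and plain linear head are assumptions for illustration, not the paper's exact model.

```python
import torch
import torch.nn as nn

class BinRegressionHead(nn.Module):
    """Regression via classification over fixed bins (probability-weighted bin centers)."""

    def __init__(self, feat_dim: int = 768, n_bins: int = 100, lo: float = 1.0, hi: float = 10.0):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, n_bins)
        self.register_buffer("centers", torch.linspace(lo, hi, n_bins))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        probs = self.classifier(features).softmax(dim=-1)   # (B, n_bins)
        return probs @ self.centers                          # (B,) continuous score

head = BinRegressionHead()
scores = head(torch.randn(4, 768))
print(scores.shape, float(scores.min()), float(scores.max()))
```

Increasing `n_bins` shrinks the discretization error without any change to the vocabulary or loss, which is the appeal of the bin-based formulation.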
https://paperswithcode.com/paper/sec-advancing-complex-video-object
|
2507.15852
| null | null |
SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction
|
Video Object Segmentation (VOS) is a core task in computer vision, requiring models to track and segment target objects across video frames. Despite notable advances with recent efforts, current techniques still lag behind human capabilities in handling drastic visual variations, occlusions, and complex scene changes. This limitation arises from their reliance on appearance matching, neglecting the human-like conceptual understanding of objects that enables robust identification across temporal dynamics. Motivated by this gap, we propose Segment Concept (SeC), a concept-driven segmentation framework that shifts from conventional feature matching to the progressive construction and utilization of high-level, object-centric representations. SeC employs Large Vision-Language Models (LVLMs) to integrate visual cues across diverse frames, constructing robust conceptual priors. During inference, SeC forms a comprehensive semantic representation of the target based on processed frames, realizing robust segmentation of follow-up frames. Furthermore, SeC adaptively balances LVLM-based semantic reasoning with enhanced feature matching, dynamically adjusting computational efforts based on scene complexity. To rigorously assess VOS methods in scenarios demanding high-level conceptual reasoning and robust semantic understanding, we introduce the Semantic Complex Scenarios Video Object Segmentation benchmark (SeCVOS). SeCVOS comprises 160 manually annotated multi-scenario videos designed to challenge models with substantial appearance variations and dynamic scene transformations. In particular, SeC achieves an 11.8-point improvement over SAM 2.1 on SeCVOS, establishing a new state-of-the-art in concept-aware video object segmentation.
| null |
https://arxiv.org/abs/2507.15852v1
|
https://arxiv.org/pdf/2507.15852v1.pdf
| null |
[
"Zhixiong Zhang",
"Shuangrui Ding",
"Xiaoyi Dong",
"Songxin He",
"Jianfan Lin",
"Junsong Tang",
"Yuhang Zang",
"Yuhang Cao",
"Dahua Lin",
"Jiaqi Wang"
] |
[
"Object",
"Segmentation",
"Semantic Segmentation",
"Video Object Segmentation",
"Video Semantic Segmentation"
] | 2025-07-21T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/real-time-captioning-of-sign-language
|
2507.14543
| null | null |
Real Time Captioning of Sign Language Gestures in Video Meetings
|
It has always been a rather tough task to communicate with someone possessing a hearing impairment. One of the most tested ways to establish such communication is through the use of sign-based languages. However, not many people are aware of the smaller intricacies involved with sign language. Sign language recognition using computer vision aims at eliminating the communication barrier between deaf-mute and ordinary people so that they can properly communicate with others. Recently the pandemic has left the whole world shaken up and has transformed the way we communicate. Video meetings have become essential for everyone, even people with a hearing disability. In recent studies, it has been found that people with hearing disabilities prefer to sign over typing during these video calls. In this paper, we are proposing a browser extension that will automatically translate sign language to subtitles for everyone else in the video call. A large-scale dataset containing more than 2,000 word-level ASL videos, performed by over 100 signers, will be used.
| null |
https://arxiv.org/abs/2507.14543v1
|
https://arxiv.org/pdf/2507.14543v1.pdf
| null |
[
"Sharanya Mukherjee",
"Md Hishaam Akhtar",
"Kannadasan R"
] |
[] | 2025-07-19T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/loop2net-data-driven-generation-and
|
2507.01057
| null | null |
Loop2Net: Data-Driven Generation and Optimization of Airfoil CFD Meshes from Sparse Boundary Coordinates
|
In this study, an innovative intelligent optimization system for mesh quality, based on a deep convolutional neural network architecture, is proposed to achieve mesh generation and optimization. The core of the study is the Loop2Net generator and its loss function, which predict the mesh from the given wing coordinates. The model's performance is continuously optimized by two key loss functions during training, and by disciplining the network with additional penalties, the goal of mesh generation is finally reached.
| null |
https://arxiv.org/abs/2507.01057v1
|
https://arxiv.org/pdf/2507.01057v1.pdf
| null |
[
"Lushun Fan",
"Yuqin Xia",
"Jun Li",
"Karl Jenkins"
] |
[] | 2025-06-28T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/turin3d-evaluating-adaptation-strategies
| null | null | null |
Turin3D: Evaluating Adaptation Strategies under Label Scarcity in Urban LiDAR Segmentation with Semi-Supervised Techniques
|
3D semantic segmentation plays a critical role in urban modelling, enabling detailed understanding and mapping of city environments. In this paper, we introduce Turin3D: a new aerial LiDAR dataset for point cloud semantic segmentation covering an area of around 1.43 km2 in the city centre of Turin with almost 70M points. We describe the data collection process and compare Turin3D with others previously proposed in the literature. We did not fully annotate the dataset due to the complexity and time-consuming nature of the process; however, a manual annotation process was performed on the validation and test sets, to enable a reliable evaluation of the proposed techniques. We first benchmark the performances of several point cloud semantic segmentation models, trained on the existing datasets, when tested on Turin3D, and then improve their performances by applying a semi-supervised learning technique leveraging the unlabelled training set. The dataset will be publicly available to support research in outdoor point cloud segmentation, with particular relevance for self-supervised and semi-supervised learning approaches given the absence of ground truth annotations for the training set.
| null |
https://openaccess.thecvf.com/content/CVPR2025W/USM3D/html/Barco_Turin3D_Evaluating_Adaptation_Strategies_under_Label_Scarcity_in_Urban_LiDAR_CVPRW_2025_paper.html
|
https://openaccess.thecvf.com/content/CVPR2025W/USM3D/papers/Barco_Turin3D_Evaluating_Adaptation_Strategies_under_Label_Scarcity_in_Urban_LiDAR_CVPRW_2025_paper.pdf
|
Computer Vision and Pattern Recognition Conference (CVPR) Workshops 2025 6
|
[
"Luca Barco",
"Giacomo Blanco",
"Gaetano Chiriaco",
"Alessia Intini",
"Luigi La Riccia",
"Vittorio Scolamiero",
"Piero Boccardo",
"Paolo Garza",
"Fabrizio Dominici"
] |
[] | 2025-06-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/osdmamba-enhancing-oil-spill-detection-from
|
2506.18006
| null | null |
OSDMamba: Enhancing Oil Spill Detection from Remote Sensing Images Using Selective State Space Model
|
Semantic segmentation is commonly used for Oil Spill Detection (OSD) in remote sensing images. However, the limited availability of labelled oil spill samples and class imbalance present significant challenges that can reduce detection accuracy. Furthermore, most existing methods, which rely on convolutional neural networks (CNNs), struggle to detect small oil spill areas due to their limited receptive fields and inability to effectively capture global contextual information. This study explores the potential of State-Space Models (SSMs), particularly Mamba, to overcome these limitations, building on their recent success in vision applications. We propose OSDMamba, the first Mamba-based architecture specifically designed for oil spill detection. OSDMamba leverages Mamba's selective scanning mechanism to effectively expand the model's receptive field while preserving critical details. Moreover, we designed an asymmetric decoder incorporating ConvSSM and deep supervision to strengthen multi-scale feature fusion, thereby enhancing the model's sensitivity to minority class samples. Experimental results show that the proposed OSDMamba achieves state-of-the-art performance, yielding improvements of 8.9% and 11.8% in OSD across two publicly available datasets.
| null |
https://arxiv.org/abs/2506.18006v1
|
https://arxiv.org/pdf/2506.18006v1.pdf
| null |
[
"Shuaiyu Chen",
"Fu Wang",
"Peng Ren",
"Chunbo Luo",
"Zeyu Fu"
] |
[] | 2025-06-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/subliminal-learning-language-models-transmit
|
2507.14805
| null | null |
Subliminal Learning: Language models transmit behavioral traits via hidden signals in data
|
We study subliminal learning, a surprising phenomenon where language models transmit behavioral traits via semantically unrelated data. In our main experiments, a "teacher" model with some trait T (such as liking owls or being misaligned) generates a dataset consisting solely of number sequences. Remarkably, a "student" model trained on this dataset learns T. This occurs even when the data is filtered to remove references to T. We observe the same effect when training on code or reasoning traces generated by the same teacher model. However, we do not observe the effect when the teacher and student have different base models. To help explain our findings, we prove a theoretical result showing that subliminal learning occurs in all neural networks under certain conditions, and demonstrate subliminal learning in a simple MLP classifier. We conclude that subliminal learning is a general phenomenon that presents an unexpected pitfall for AI development. Distillation could propagate unintended traits, even when developers try to prevent this via data filtering.
| null |
https://arxiv.org/abs/2507.14805v1
|
https://arxiv.org/pdf/2507.14805v1.pdf
| null |
[
"Alex Cloud",
"Minh Le",
"James Chua",
"Jan Betley",
"Anna Sztyber-Betley",
"Jacob Hilton",
"Samuel Marks",
"Owain Evans"
] |
[] | 2025-07-20T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/text2stereo-repurposing-stable-diffusion-for
|
2506.05367
| null | null |
Text2Stereo: Repurposing Stable Diffusion for Stereo Generation with Consistency Rewards
|
In this paper, we propose a novel diffusion-based approach to generate stereo images given a text prompt. Since stereo image datasets with large baselines are scarce, training a diffusion model from scratch is not feasible. Therefore, we propose leveraging the strong priors learned by Stable Diffusion and fine-tuning it on stereo image datasets to adapt it to the task of stereo generation. To improve stereo consistency and text-to-image alignment, we further tune the model using prompt alignment and our proposed stereo consistency reward functions. Comprehensive experiments demonstrate the superiority of our approach in generating high-quality stereo images across diverse scenarios, outperforming existing methods.
| null |
https://arxiv.org/abs/2506.05367v1
|
https://arxiv.org/pdf/2506.05367v1.pdf
| null |
[
"Aakash Garg",
"Libing Zeng",
"Andrii Tsarov",
"Nima Khademi Kalantari"
] |
[] | 2025-05-27T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/recursive-kalmannet-analyse-des-capacites-de
|
2507.14144
| null | null |
Recursive KalmanNet: Analyse des capacités de généralisation d'un réseau de neurones récurrent guidé par un filtre de Kalman
|
The Recursive KalmanNet, recently introduced by the authors, is a recurrent neural network guided by a Kalman filter, capable of estimating the state variables and error covariance of stochastic dynamic systems from noisy measurements, without prior knowledge of the noise characteristics. This paper explores its generalization capabilities in out-of-distribution scenarios, where the temporal dynamics of the test measurements differ from those encountered during training.
| null |
https://arxiv.org/abs/2507.14144v1
|
https://arxiv.org/pdf/2507.14144v1.pdf
| null |
[
"Cyril Falcon",
"Hassan Mortada",
"Mathéo Clavaud",
"Jean-Philippe Michel"
] |
[] | 2025-06-25T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/adaptive-multi-agent-reasoning-via-automated
|
2507.14393
| null | null |
Adaptive Multi-Agent Reasoning via Automated Workflow Generation
|
The rise of Large Reasoning Models (LRMs) promises a significant leap forward in language model capabilities, aiming to tackle increasingly sophisticated tasks with unprecedented efficiency and accuracy. However, despite their impressive performance, recent studies have highlighted how current reasoning models frequently fail to generalize to novel, unseen problems, often resorting to memorized solutions rather than genuine inferential reasoning. Such behavior underscores a critical limitation in modern LRMs, i.e., their tendency toward overfitting, which in turn results in poor generalization in problem-solving capabilities. In this paper, we introduce Nexus Architect, an enhanced iteration of our multi-agent system framework, Nexus, equipped with a novel automated workflow synthesis mechanism. Given a user's prompt and a small set of representative examples, the Architect autonomously generates a tailored reasoning workflow by selecting suitable strategies, tool integrations, and adversarial techniques for a specific problem class. Furthermore, the Architect includes an iterative prompt refinement mechanism that fine-tunes agents' system prompts to maximize performance and improve the generalization capabilities of the system. We empirically evaluate Nexus Architect by employing an off-the-shelf, non-reasoning model on a custom dataset of challenging logical questions and compare its performance against state-of-the-art LRMs. Results show that Nexus Architect consistently outperforms existing solutions, achieving up to a 66% increase in pass rate over Gemini 2.5 Flash Preview, nearly 2.5$\times$ against Claude Sonnet 4 and DeepSeek-R1, and over 3$\times$ w.r.t. Llama 4 Scout.
| null |
https://arxiv.org/abs/2507.14393v1
|
https://arxiv.org/pdf/2507.14393v1.pdf
| null |
[
"Humza Sami",
"Mubashir ul Islam",
"Pierre-Emmanuel Gaillardon",
"Valerio Tenace"
] |
[] | 2025-07-18T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/paper-summary-attack-jailbreaking-llms
|
2507.13474
| null | null |
Paper Summary Attack: Jailbreaking LLMs through LLM Safety Papers
|
The safety of large language models (LLMs) has garnered significant research attention. In this paper, we argue that previous empirical studies demonstrate LLMs exhibit a propensity to trust information from authoritative sources, such as academic papers, implying new possible vulnerabilities. To verify this possibility, a preliminary analysis is designed to illustrate our two findings. Based on this insight, a novel jailbreaking method, Paper Summary Attack (PSA), is proposed. It systematically synthesizes content from either attack-focused or defense-focused LLM safety papers to construct an adversarial prompt template, while strategically infilling harmful queries as adversarial payloads within predefined subsections. Extensive experiments show significant vulnerabilities not only in base LLMs, but also in state-of-the-art reasoning models like Deepseek-R1. PSA achieves a 97% attack success rate (ASR) on well-aligned models like Claude3.5-Sonnet and an even higher 98% ASR on Deepseek-R1. More intriguingly, our work has further revealed diametrically opposed vulnerability biases across different base models, and even between different versions of the same model, when exposed to either attack-focused or defense-focused papers. This phenomenon potentially indicates future research clues for both adversarial methodologies and safety alignment. Code is available at https://github.com/233liang/Paper-Summary-Attack
| null |
https://arxiv.org/abs/2507.13474v1
|
https://arxiv.org/pdf/2507.13474v1.pdf
| null |
[
"Liang Lin",
"Zhihao Xu",
"Xuehai Tang",
"Shi Liu",
"Biyu Zhou",
"Fuqing Zhu",
"Jizhong Han",
"Songlin Hu"
] |
[] | 2025-07-17T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/aptx-neuron-a-unified-trainable-neuron
|
2507.14270
| null | null |
APTx Neuron: A Unified Trainable Neuron Architecture Integrating Activation and Computation
|
We propose the APTx Neuron, a novel, unified neural computation unit that integrates non-linear activation and linear transformation into a single trainable expression. The APTx Neuron is derived from the APTx activation function, thereby eliminating the need for separate activation layers and making the architecture both computationally efficient and elegant. The proposed neuron follows the functional form $y = \sum_{i=1}^{n} ((\alpha_i + \tanh(\beta_i x_i)) \cdot \gamma_i x_i) + \delta$, where all parameters $\alpha_i$, $\beta_i$, $\gamma_i$, and $\delta$ are trainable. We validate our APTx Neuron-based architecture on the MNIST dataset, achieving up to 96.69\% test accuracy in just 20 epochs using approximately 332K trainable parameters. The results highlight the superior expressiveness and computational efficiency of the APTx Neuron compared to traditional neurons, pointing toward a new paradigm in unified neuron design and the architectures built upon it.
| null |
https://arxiv.org/abs/2507.14270v1
|
https://arxiv.org/pdf/2507.14270v1.pdf
| null |
[
"Ravin Kumar"
] |
[] | 2025-07-18T00:00:00 | null | null | null | null |
[] |
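The APTx neuron's functional form is given explicitly in the abstract above, so it can be rendered directly as a PyTorch module; the initialization scheme and the layer wrapper below are assumptions for illustration, not the paper's MNIST architecture.

```python
import torch
import torch.nn as nn

class APTxNeuron(nn.Module):
    """One APTx neuron: y = sum_i ((alpha_i + tanh(beta_i * x_i)) * gamma_i * x_i) + delta."""

    def __init__(self, in_features: int):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(in_features))          # init choices are assumptions
        self.beta = nn.Parameter(torch.ones(in_features))
        self.gamma = nn.Parameter(torch.randn(in_features) * 0.05)
        self.delta = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:             # x: (B, in_features)
        terms = (self.alpha + torch.tanh(self.beta * x)) * self.gamma * x
        return terms.sum(dim=-1, keepdim=True) + self.delta         # (B, 1)

class APTxLayer(nn.Module):
    """A layer of independent APTx neurons, one output unit each."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.neurons = nn.ModuleList(APTxNeuron(in_features) for _ in range(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([n(x) for n in self.neurons], dim=-1)

layer = APTxLayer(784, 10)
print(layer(torch.randn(32, 784)).shape)  # torch.Size([32, 10])
```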
https://paperswithcode.com/paper/time-to-retrain-detecting-concept-drifts-in
|
2410.09190
| null | null |
Time to Retrain? Detecting Concept Drifts in Machine Learning Systems
|
With the boom of machine learning (ML) techniques, software practitioners build ML systems to process the massive volume of streaming data for diverse software engineering tasks such as failure prediction in AIOps. Trained using historical data, such ML models encounter performance degradation caused by concept drift, i.e., data and inter-relationship (concept) changes between training and production. It is essential to use concept drift detection to monitor the deployed ML models and re-train the ML models when needed. In this work, we explore applying state-of-the-art (SOTA) concept drift detection techniques on synthetic and real-world datasets in an industrial setting. Such an industrial setting requires minimal manual effort in labeling and maximal generality in ML model architecture. We find that current SOTA semi-supervised methods not only require significant labeling effort but also only work for certain types of ML models. To overcome such limitations, we propose a novel model-agnostic technique (CDSeer) for detecting concept drift. Our evaluation shows that CDSeer has better precision and recall compared to the state-of-the-art while requiring significantly less manual labeling. We demonstrate the effectiveness of CDSeer at concept drift detection by evaluating it on eight datasets from different domains and use cases. Results from internal deployment of CDSeer on an industrial proprietary dataset show a 57.1% improvement in precision while using 99% fewer labels compared to the SOTA concept drift detection method. The performance is also comparable to the supervised concept drift detection method, which requires 100% of the data to be labeled. The improved performance and ease of adoption of CDSeer are valuable in making ML systems more reliable.
| null |
https://arxiv.org/abs/2410.09190v1
|
https://arxiv.org/pdf/2410.09190v1.pdf
| null |
[
"Tri Minh Triet Pham",
"Karthikeyan Premkumar",
"Mohamed Naili",
"Jinqiu Yang"
] |
[] | 2024-10-11T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/aligned-manifold-property-and-topology-point
|
2507.16223
| null | null |
Aligned Manifold Property and Topology Point Clouds for Learning Molecular Properties
|
Machine learning models for molecular property prediction generally rely on representations -- such as SMILES strings and molecular graphs -- that overlook the surface-local phenomena driving intermolecular behavior. 3D-based approaches often reduce surface detail or require computationally expensive SE(3)-equivariant architectures to manage spatial variance. To overcome these limitations, this work introduces AMPTCR (Aligned Manifold Property and Topology Cloud Representation), a molecular surface representation that combines local quantum-derived scalar fields and custom topological descriptors within an aligned point cloud format. Each surface point includes a chemically meaningful scalar, geodesically derived topology vectors, and coordinates transformed into a canonical reference frame, enabling efficient learning with conventional SE(3)-sensitive architectures. AMPTCR is evaluated using a DGCNN framework on two tasks: molecular weight and bacterial growth inhibition. For molecular weight, results confirm that AMPTCR encodes physically meaningful data, with a validation R^2 of 0.87. In the bacterial inhibition task, AMPTCR enables both classification and direct regression of E. coli inhibition values using Dual Fukui functions as the electronic descriptor and Morgan Fingerprints as auxiliary data, achieving an ROC AUC of 0.912 on the classification task, and an R^2 of 0.54 on the regression task. These results help demonstrate that AMPTCR offers a compact, expressive, and architecture-agnostic representation for modeling surface-mediated molecular properties.
| null |
https://arxiv.org/abs/2507.16223v1
|
https://arxiv.org/pdf/2507.16223v1.pdf
| null |
[
"Alexander Mihalcea"
] |
[] | 2025-07-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/holitracer-holistic-vectorization-of
|
2507.16251
| null | null |
HoliTracer: Holistic Vectorization of Geographic Objects from Large-Size Remote Sensing Imagery
|
With the increasing resolution of remote sensing imagery (RSI), large-size RSI has emerged as a vital data source for high-precision vector mapping of geographic objects. Existing methods are typically constrained to processing small image patches, which often leads to the loss of contextual information and produces fragmented vector outputs. To address these, this paper introduces HoliTracer, the first framework designed to holistically extract vectorized geographic objects from large-size RSI. In HoliTracer, we enhance segmentation of large-size RSI using the Context Attention Net (CAN), which employs a local-to-global attention mechanism to capture contextual dependencies. Furthermore, we achieve holistic vectorization through a robust pipeline that leverages the Mask Contour Reformer (MCR) to reconstruct polygons and the Polygon Sequence Tracer (PST) to trace vertices. Extensive experiments on large-size RSI datasets, including buildings, water bodies, and roads, demonstrate that HoliTracer outperforms state-of-the-art methods. Our code and data are available in https://github.com/vvangfaye/HoliTracer.
| null |
https://arxiv.org/abs/2507.16251v1
|
https://arxiv.org/pdf/2507.16251v1.pdf
| null |
[
"Yu Wang",
"Bo Dang",
"Wanchun Li",
"Wei Chen",
"Yansheng Li"
] |
[] | 2025-07-22T00:00:00 | null | null | null | null |
[] |
https://paperswithcode.com/paper/interpretable-embeddings-of-speech-enhance
|
2507.16080
| null | null |
Interpretable Embeddings of Speech Enhance and Explain Brain Encoding Performance of Audio Models
|
Self-supervised speech models (SSMs) are increasingly hailed as more powerful computational models of human speech perception than models based on traditional hand-crafted features. However, since their representations are inherently black-box, it remains unclear what drives their alignment with brain responses. To remedy this, we built linear encoding models from six interpretable feature families: mel-spectrogram, Gabor filter bank features, speech presence, phonetic, syntactic, and semantic Question-Answering features, and contextualized embeddings from three state-of-the-art SSMs (Whisper, HuBERT, WavLM), quantifying the shared and unique neural variance captured by each feature class. Contrary to prevailing assumptions, our interpretable model predicted electrocorticography (ECoG) responses to speech more accurately than any SSM. Moreover, augmenting SSM representations with interpretable features yielded the best overall neural predictions, significantly outperforming either class alone. Further variance-partitioning analyses revealed previously unresolved components of SSM representations that contribute to their neural alignment: 1. Despite the common assumption that later layers of SSMs discard low-level acoustic information, these models compress and preferentially retain frequency bands critical for neural encoding of speech (100-1000 Hz). 2. Contrary to previous claims, SSMs encode brain-relevant semantic information that cannot be reduced to lower-level features, improving with context length and model size. These results highlight the importance of using refined, interpretable features in understanding speech perception.
| null |
https://arxiv.org/abs/2507.16080v1
|
https://arxiv.org/pdf/2507.16080v1.pdf
| null |
[
"Riki Shimizu",
"Richard J. Antonello",
"Chandan Singh",
"Nima Mesgarani"
] |
[] | 2025-07-21T00:00:00 | null | null | null | null |
[] |
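The entry above fits linear encoding models from interpretable features and SSM embeddings and then partitions shared versus unique neural variance. The snippet below is a rough illustration of that analysis pattern, not the authors' pipeline: it uses ridge regression with an assumed regularization strength and the standard set-algebra of R^2 values from joint and individual models.

```python
# Rough sketch of encoding-model variance partitioning between two feature families
# (regularization strength, split, and names are assumptions; not the authors' code).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split


def encoding_r2(X, y, alpha=10.0):
    """Fit a ridge encoding model and return held-out R^2."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = Ridge(alpha=alpha).fit(X_tr, y_tr)
    return r2_score(y_te, model.predict(X_te))


def partition_variance(X_interp, X_ssm, y):
    """Unique and shared explained variance of two feature families for one response."""
    r2_a = encoding_r2(X_interp, y)                          # interpretable features alone
    r2_b = encoding_r2(X_ssm, y)                             # SSM embeddings alone
    r2_joint = encoding_r2(np.hstack([X_interp, X_ssm]), y)  # both together
    return {
        "unique_interpretable": r2_joint - r2_b,
        "unique_ssm": r2_joint - r2_a,
        "shared": r2_a + r2_b - r2_joint,
    }
```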
https://paperswithcode.com/paper/beyond-binary-rewards-training-lms-to-reason
|
2507.16806
| null | null |
Beyond Binary Rewards: Training LMs to Reason About Their Uncertainty
|
When language models (LMs) are trained via reinforcement learning (RL) to generate natural language "reasoning chains", their performance improves on a variety of difficult question answering tasks. Today, almost all successful applications of RL for reasoning use binary reward functions that evaluate the correctness of LM outputs. Because such reward functions do not penalize guessing or low-confidence outputs, they often have the unintended side-effect of degrading calibration and increasing the rate at which LMs generate incorrect responses (or "hallucinate") in other problem domains. This paper describes RLCR (Reinforcement Learning with Calibration Rewards), an approach to training reasoning models that jointly improves accuracy and calibrated confidence estimation. During RLCR, LMs generate both predictions and numerical confidence estimates after reasoning. They are trained to optimize a reward function that augments a binary correctness score with a Brier score -- a scoring rule for confidence estimates that incentivizes calibrated prediction. We first prove that this reward function (or any analogous reward function that uses a bounded, proper scoring rule) yields models whose predictions are both accurate and well-calibrated. We next show that across diverse datasets, RLCR substantially improves calibration with no loss in accuracy, on both in-domain and out-of-domain evaluations -- outperforming both ordinary RL training and classifiers trained to assign post-hoc confidence scores. While ordinary RL hurts calibration, RLCR improves it. Finally, we demonstrate that verbalized confidence can be leveraged at test time to improve accuracy and calibration via confidence-weighted scaling methods. Our results show that explicitly optimizing for calibration can produce more generally reliable reasoning models.
| null |
https://arxiv.org/abs/2507.16806v1
|
https://arxiv.org/pdf/2507.16806v1.pdf
| null |
[
"Mehul Damani",
"Isha Puri",
"Stewart Slocum",
"Idan Shenfeld",
"Leshem Choshen",
"Yoon Kim",
"Jacob Andreas"
] |
[] | 2025-07-22T00:00:00 | null | null | null | null |
[] |
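The RLCR entry above defines its reward as a binary correctness score augmented with a Brier score on the model's verbalized confidence. A minimal sketch of that reward shape follows; the equal weighting of the two terms is an assumption, since the abstract does not give the exact scaling.

```python
# Minimal sketch of an RLCR-style reward: binary correctness plus a Brier-score
# term on the stated confidence (the equal weighting here is an assumption).
def rlcr_reward(is_correct: bool, confidence: float) -> float:
    """is_correct: whether the answer passed the checker.
    confidence: the model's verbalized probability that its answer is correct."""
    correctness = 1.0 if is_correct else 0.0
    brier = (confidence - correctness) ** 2  # 0 when perfectly calibrated, 1 at worst
    return correctness - brier               # proper scoring: honesty is rewarded


print(rlcr_reward(True, 0.9))   # 0.99  confident and correct
print(rlcr_reward(False, 0.9))  # -0.81 confidently wrong is penalized hardest
print(rlcr_reward(False, 0.1))  # -0.01 a hedged wrong answer loses little
```

Because the Brier score is a proper scoring rule, expected reward is maximized by reporting the true probability of being correct, which is the calibration property the entry's theoretical result relies on.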
https://paperswithcode.com/paper/learning-multiple-semantic-knowledge-for
| null | null | null |
Learning Multiple Semantic Knowledge For Cross-Domain Unsupervised Vehicle Re-Identification
|
Unsupervised vehicle re-identification (reID) aims to retrieve images of the same vehicle from large unlabelled datasets captured by a multi-camera network, which remains a challenging task. In this paper, a multiple semantic knowledge learning approach is proposed to exploit the potential similarity of unlabeled samples, building multiple clusters automatically from different views with different cues. Specifically, unlike existing works that focus on the knowledge of a single view, for each vehicle in the target domain different semantic knowledge is learned with the proposed focal drop network, and several labels are assigned according to this knowledge and jointly employed to train the vehicle reID model. In addition, because pseudo labels assigned by clustering are unreliable, a hard triplet center loss is proposed that takes intra-cluster and inter-cluster differences into consideration, better training the unsupervised framework to adapt to the unknown domain. Comprehensive experimental results clearly demonstrate that our method achieves excellent performance on both the VehicleID and VeRi-776 datasets.
| null |
https://ieeexplore.ieee.org/document/9428440
|
https://ieeexplore.ieee.org/document/9428440
|
IEEE International Conference on Multimedia and Expo (ICME) 2021 7
|
[
"Huibing Wang",
"Jinjia Peng",
"Guangqi Jiang",
"Xianping Fu"
] |
[
"Unsupervised Domain Adaptation"
] | 2021-07-05T00:00:00 | null | null | null | null |
[] |
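The vehicle re-ID entry above trains with a hard triplet center loss over pseudo-labeled clusters to account for intra-cluster and inter-cluster differences. The abstract does not spell out that loss, so the PyTorch snippet below is only a sketch of a margin-based triplet loss over cluster centers under assumed shapes; it is not the paper's exact formulation.

```python
# Sketch of a triplet-style loss over cluster centers (assumed margin and shapes;
# not the paper's exact "hard triplet center loss").
import torch
import torch.nn.functional as F


def triplet_center_loss(features, pseudo_labels, centers, margin=0.5):
    """features: (N, D) embeddings; pseudo_labels: (N,) long tensor of cluster ids;
    centers: (C, D) cluster centers. Pulls each sample toward its own center and
    pushes it at least `margin` farther away from the nearest other center."""
    dists = torch.cdist(features, centers)                        # (N, C) distances
    pos = dists.gather(1, pseudo_labels.view(-1, 1)).squeeze(1)   # own-center distance
    masked = dists.clone()
    masked.scatter_(1, pseudo_labels.view(-1, 1), float("inf"))   # drop own center
    hardest_neg = masked.min(dim=1).values                        # nearest wrong center
    return F.relu(pos - hardest_neg + margin).mean()
```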
https://paperswithcode.com/paper/small-edits-big-consequences-telling-good
|
2507.15868
| null | null |
Small Edits, Big Consequences: Telling Good from Bad Robustness in Large Language Models
|
Large language models (LLMs) now write code in settings where misreading a single word can break safety or cost money, yet we still expect them to overlook stray typos. To probe where useful robustness ends and harmful insensitivity begins, we compile 50 LeetCode problems and craft three minimal prompt perturbations that should vary in importance: (i) progressive underspecification deleting 10% of words per step; (ii) lexical flip swapping a pivotal quantifier ("max" to "min"); and (iii) jargon inflation replacing a common noun with an obscure technical synonym. Six frontier models, including three "reasoning-tuned" versions, solve each mutated prompt, and their Python outputs are checked against the original test suites to reveal whether they reused the baseline solution or adapted. Among 11,853 generations we observe a sharp double asymmetry. Models remain correct in 85% of cases even after 90% of the prompt is missing, showing over-robustness to underspecification, yet only 54% react to a single quantifier flip that reverses the task, with reasoning-tuned variants even less sensitive than their bases. Jargon edits lie in between, passing through 56%. Current LLMs thus blur the line between harmless noise and meaning-changing edits, often treating both as ignorable. Masking salient anchors such as function names can force re-evaluation. We advocate evaluation and training protocols that reward differential sensitivity: stay steady under benign noise but adapt -- or refuse -- when semantics truly change.
| null |
https://arxiv.org/abs/2507.15868v1
|
https://arxiv.org/pdf/2507.15868v1.pdf
| null |
[
"Altynbek Ismailov",
"Salia Asanova"
] |
[] | 2025-07-15T00:00:00 | null | null | null | null |
[] |
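The entry above probes robustness by applying minimal prompt perturbations, such as a lexical flip of "max" to "min", and re-checking the generated solutions against the original test suite: a solution that still passes indicates the model reused the baseline instead of adapting. The snippet below is a toy illustration of that check; the regex, the `solve` entry point, and the test-case format are hypothetical stand-ins, not the paper's harness.

```python
# Toy illustration of a "lexical flip" robustness check
# (hypothetical helpers; not the paper's actual evaluation harness).
import re


def lexical_flip(prompt: str) -> str:
    """Swap a pivotal quantifier, e.g. 'maximum' -> 'minimum', leaving the rest intact."""
    return re.sub(r"\bmax(imum)?\b", lambda m: "min" + (m.group(1) or ""), prompt)


def passes_original_tests(generated_code: str, test_cases) -> bool:
    """Run a generated solution against the ORIGINAL test suite. Passing after a
    semantics-reversing flip suggests the model ignored the edit."""
    namespace: dict = {}
    exec(generated_code, namespace)  # assumes a sandboxed, trusted setting
    solve = namespace["solve"]       # assumed entry-point name
    return all(solve(*args) == expected for args, expected in test_cases)
```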
https://paperswithcode.com/paper/earthcrafter-scalable-3d-earth-generation-via
|
2507.16535
| null | null |
EarthCrafter: Scalable 3D Earth Generation via Dual-Sparse Latent Diffusion
|
Despite the remarkable developments achieved by recent 3D generation works, scaling these methods to geographic extents, such as modeling thousands of square kilometers of Earth's surface, remains an open challenge. We address this through a dual innovation in data infrastructure and model architecture. First, we introduce Aerial-Earth3D, the largest 3D aerial dataset to date, consisting of 50k curated scenes (each measuring 600m x 600m) captured across the U.S. mainland, comprising 45M multi-view Google Earth frames. Each scene provides pose-annotated multi-view images, depth maps, normals, semantic segmentation, and camera poses, with explicit quality control to ensure terrain diversity. Building on this foundation, we propose EarthCrafter, a tailored framework for large-scale 3D Earth generation via sparse-decoupled latent diffusion. Our architecture separates structural and textural generation: 1) Dual sparse 3D-VAEs compress high-resolution geometric voxels and textural 2D Gaussian Splats (2DGS) into compact latent spaces, largely alleviating the costly computation suffering from vast geographic scales while preserving critical information. 2) We propose condition-aware flow matching models trained on mixed inputs (semantics, images, or neither) to flexibly model latent geometry and texture features independently. Extensive experiments demonstrate that EarthCrafter performs substantially better in extremely large-scale generation. The framework further supports versatile applications, from semantic-guided urban layout generation to unconditional terrain synthesis, while maintaining geographic plausibility through our rich data priors from Aerial-Earth3D.
| null |
https://arxiv.org/abs/2507.16535v1
|
https://arxiv.org/pdf/2507.16535v1.pdf
| null |
[
"Shang Liu",
"Chenjie Cao",
"Chaohui Yu",
"Wen Qian",
"Jing Wang",
"Fan Wang"
] |
[] | 2025-07-22T00:00:00 | null | null | null | null |
[] |