| column | dtype | summary |
|---|---|---|
| parent_paper_title | string | 63 classes |
| parent_paper_arxiv_id | string | 63 classes |
| citation_shorthand | string | lengths 2 to 56 |
| raw_citation_text | string | lengths 9 to 63 |
| cited_paper_title | string | lengths 5 to 161 |
| cited_paper_arxiv_link | string | lengths 32 to 37, nullable (⌀) |
| cited_paper_abstract | string | lengths 406 to 1.92k, nullable (⌀) |
| has_metadata | bool | 1 class |
| is_arxiv_paper | bool | 2 classes |
| bib_paper_authors | string | lengths 2 to 2.44k, nullable (⌀) |
| bib_paper_year | float64 | values 1.97k to 2.03k, nullable (⌀) |
| bib_paper_month | string | 16 classes |
| bib_paper_url | string | lengths 20 to 116, nullable (⌀) |
| bib_paper_doi | string | 269 classes |
| bib_paper_journal | string | lengths 3 to 148, nullable (⌀) |
| original_title | string | lengths 5 to 161 |
| search_res_title | string | lengths 4 to 122 |
| search_res_url | string | lengths 22 to 267 |
| search_res_content | string | lengths 19 to 1.92k |
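Read as a schema, the nullable columns (marked ⌀) can be enforced when loading rows. A minimal sketch in plain Python; the `record_is_valid` helper and the exact nullable/required split are assumptions read off the preview above, not part of any dataset tooling:

```python
# Columns that may be null (marked ⌀, or observed null in the rows below)
# versus columns that are always present. This split is inferred from the
# preview and may be incomplete.
NULLABLE = {
    "cited_paper_arxiv_link", "cited_paper_abstract", "bib_paper_authors",
    "bib_paper_year", "bib_paper_month", "bib_paper_url", "bib_paper_doi",
    "bib_paper_journal",
}
REQUIRED = {
    "parent_paper_title", "parent_paper_arxiv_id", "citation_shorthand",
    "raw_citation_text", "cited_paper_title", "has_metadata", "is_arxiv_paper",
    "original_title", "search_res_title", "search_res_url", "search_res_content",
}
ALL_COLUMNS = NULLABLE | REQUIRED  # 19 columns, matching the schema table

def record_is_valid(rec: dict) -> bool:
    """Every column must be present; only NULLABLE columns may be None."""
    return set(rec) == ALL_COLUMNS and all(
        rec[col] is not None for col in REQUIRED
    )
```

A loader could drop or log any row failing this check before further processing.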
**el2015radar** (cited in "RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network", arXiv 2505.22427v1)
- raw_citation_text: `\cite{el2015radar}`
- cited_paper_title: Radar and vision sensors calibration for outdoor 3D reconstruction
- cited_paper_arxiv_link: null; cited_paper_abstract: null
- has_metadata: true; is_arxiv_paper: false
- bib_paper_authors: El Natour, Ghina and Aider, Omar Ait and Rouveure, Raphael and Berry, François and Faure, Patrice
- bib_paper_year: 2015; bib_paper_month/url/doi/journal: null
- original_title: same as cited_paper_title
- search_res_title: Radar and vision sensors calibration for outdoor 3D reconstruction
- search_res_url: https://ieeexplore.ieee.org/document/7139473/
- search_res_content: "In this paper we introduce a new geometric calibration algorithm, and a geometric method of 3D reconstruction using a panoramic microwave radar and a camera" (snippet truncated)
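A record like the one above carries enough `bib_*` fields to regenerate a BibTeX entry. A minimal sketch in plain Python; the `@misc` mapping and the `to_bibtex` helper are illustrative assumptions, not part of the source material:

```python
def to_bibtex(rec: dict) -> str:
    """Render a citation record as a BibTeX @misc entry keyed by its shorthand."""
    lines = [
        f"@misc{{{rec['citation_shorthand']},",
        f"  author = {{{rec['bib_paper_authors']}}},",
        f"  title = {{{rec['cited_paper_title']}}},",
        f"  year = {{{int(rec['bib_paper_year'])}}},",  # stored as float64
        "}",
    ]
    return "\n".join(lines)

# Field values taken from the el2015radar record above.
record = {
    "citation_shorthand": "el2015radar",
    "bib_paper_authors": "El Natour, Ghina and Aider, Omar Ait and Rouveure, "
                         "Raphael and Berry, François and Faure, Patrice",
    "cited_paper_title": "Radar and vision sensors calibration "
                         "for outdoor 3D reconstruction",
    "bib_paper_year": 2015.0,
}
entry = to_bibtex(record)
```

Records with a non-null `bib_paper_journal` would instead map naturally onto `@article`, with the journal field added.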
**li2023automatic** (cited in "RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network", arXiv 2505.22427v1)
- raw_citation_text: `\cite{li2023automatic}`
- cited_paper_title: Automatic targetless LiDAR-camera calibration: a survey
- cited_paper_arxiv_link: null; cited_paper_abstract: null
- has_metadata: true; is_arxiv_paper: false
- bib_paper_authors: Li, Xingchen and Xiao, Yuxuan and Wang, Beibei and Ren, Haojie and Zhang, Yanyong and Ji, Jianmin
- bib_paper_year: 2023; bib_paper_journal: Artificial Intelligence Review; bib_paper_month/url/doi: null
- original_title: same as cited_paper_title
- search_res_title: Automatic targetless LiDAR-camera calibration: a survey
- search_res_url: https://link.springer.com/article/10.1007/s10462-022-10317-y
- search_res_content: "This paper reviews the existing calibration algorithms for automatic targetless calibration between LiDARs and cameras. Unmanned intelligent" (snippet truncated)
**pandey2012automatic** (cited in "RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network", arXiv 2505.22427v1)
- raw_citation_text: `\cite{pandey2012automatic}`
- cited_paper_title: Automatic targetless extrinsic calibration of a 3d lidar and camera by maximizing mutual information
- cited_paper_arxiv_link: null; cited_paper_abstract: null
- has_metadata: true; is_arxiv_paper: false
- bib_paper_authors: Pandey, Gaurav and McBride, James and Savarese, Silvio and Eustice, Ryan
- bib_paper_year: 2012; bib_paper_month/url/doi/journal: null
- original_title: same as cited_paper_title
- search_res_title: (PDF) Automatic Targetless Extrinsic Calibration of a 3D Lidar and ...
- search_res_url: https://www.researchgate.net/publication/267843813_Automatic_Targetless_Extrinsic_Calibration_of_a_3D_Lidar_and_Camera_by_Maximizing_Mutual_Information
- search_res_content: "This paper reports on an algorithm for automatic, targetless, extrinsic calibration of a lidar and optical camera system based upon the maximization of mutual" (snippet truncated)
**taylor2015motion** (cited in "RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network", arXiv 2505.22427v1)
- raw_citation_text: `\cite{taylor2015motion}`
- cited_paper_title: Motion-based calibration of multimodal sensor arrays
- cited_paper_arxiv_link: null; cited_paper_abstract: null
- has_metadata: true; is_arxiv_paper: false
- bib_paper_authors: Taylor, Zachary and Nieto, Juan
- bib_paper_year: 2015; bib_paper_month/url/doi/journal: null
- original_title: same as cited_paper_title
- search_res_title: (PDF) Motion-Based Calibration of Multimodal Sensor Arrays
- search_res_url: https://www.researchgate.net/publication/273576814_Motion-Based_Calibration_of_Multimodal_Sensor_Arrays
- search_res_content: "This paper formulates a new pipeline for automated extrinsic calibration of multi-sensor mobile platforms. The new method can operate on any combination of" (snippet truncated)
**levinson2013automatic** (cited in "RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network", arXiv 2505.22427v1)
- raw_citation_text: `\cite{levinson2013automatic}`
- cited_paper_title: Automatic online calibration of cameras and lasers
- cited_paper_arxiv_link: null; cited_paper_abstract: null
- has_metadata: true; is_arxiv_paper: false
- bib_paper_authors: Levinson, Jesse and Thrun, Sebastian
- bib_paper_year: 2013; bib_paper_month/url/doi/journal: null
- original_title: same as cited_paper_title
- search_res_title: Automatic Online Calibration of Cameras and Lasers
- search_res_url: https://www.roboticsproceedings.org/rss09/p29.pdf
- search_res_content: "by J Levinson · Cited by 379. In this paper, we introduce two new real-time techniques that enable camera-laser calibration online, automatically, and in arbitrary environments. The" (snippet truncated)
**yuan2021pixel** (cited in "RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network", arXiv 2505.22427v1)
- raw_citation_text: `\cite{yuan2021pixel}`
- cited_paper_title: Pixel-level Extrinsic Self Calibration of High Resolution LiDAR and Camera in Targetless Environments
- cited_paper_arxiv_link: http://arxiv.org/abs/2103.01627v2
- cited_paper_abstract: In this letter, we present a novel method for automatic extrinsic calibration of high-resolution LiDARs and RGB cameras in targetless environments. Our approach does not require checkerboards but can achieve pixel-level accuracy by aligning natural edge features in the two sensors. On the theory level, we analyze the constraints imposed by edge features and the sensitivity of calibration accuracy with respect to edge distribution in the scene. On the implementation level, we carefully investigate the physical measuring principles of LiDARs and propose an efficient and accurate LiDAR edge extraction method based on point cloud voxel cutting and plane fitting. Due to the edges' richness in natural scenes, we have carried out experiments in many indoor and outdoor scenes. The results show that this method has high robustness, accuracy, and consistency. It can promote the research and application of the fusion between LiDAR and camera. We have open-sourced our code on GitHub to benefit the community.
- has_metadata: true; is_arxiv_paper: true
- bib_paper_authors: Yuan, Chongjian and Liu, Xiyuan and Hong, Xiaoping and Zhang, Fu
- bib_paper_year: 2021; bib_paper_journal: IEEE Robotics and Automation Letters; bib_paper_month/url/doi: null
- original_title: same as cited_paper_title
- search_res_title: Pixel-level Extrinsic Self Calibration of High Resolution LiDAR and ...
- search_res_url: https://arxiv.org/abs/2103.01627
- search_res_content: "In this letter, we present a novel method for automatic extrinsic calibration of high-resolution LiDARs and RGB cameras in targetless environments."
**schneider2017regnet** (cited in "RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network", arXiv 2505.22427v1)
- raw_citation_text: `\cite{schneider2017regnet}`
- cited_paper_title: RegNet: Multimodal Sensor Registration Using Deep Neural Networks
- cited_paper_arxiv_link: http://arxiv.org/abs/1707.03167v1
- cited_paper_abstract: In this paper, we present RegNet, the first deep convolutional neural network (CNN) to infer a 6 degrees of freedom (DOF) extrinsic calibration between multimodal sensors, exemplified using a scanning LiDAR and a monocular camera. Compared to existing approaches, RegNet casts all three conventional calibration steps (feature extraction, feature matching and global regression) into a single real-time capable CNN. Our method does not require any human interaction and bridges the gap between classical offline and target-less online calibration approaches as it provides both a stable initial estimation as well as a continuous online correction of the extrinsic parameters. During training we randomly decalibrate our system in order to train RegNet to infer the correspondence between projected depth measurements and RGB image and finally regress the extrinsic calibration. Additionally, with an iterative execution of multiple CNNs, that are trained on different magnitudes of decalibration, our approach compares favorably to state-of-the-art methods in terms of a mean calibration error of 0.28 degrees for the rotational and 6 cm for the translation components even for large decalibrations up to 1.5 m and 20 degrees.
- has_metadata: true; is_arxiv_paper: true
- bib_paper_authors: Schneider, Nick and Piewak, Florian and Stiller, Christoph and Franke, Uwe
- bib_paper_year: 2017; bib_paper_month/url/doi/journal: null
- original_title: same as cited_paper_title
- search_res_title: RegNet: Multimodal Sensor Registration Using Deep Neural Networks
- search_res_url: http://arxiv.org/pdf/1707.03167v1
- search_res_content: identical to cited_paper_abstract above
**iyer2018calibnet** (cited in "RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network", arXiv 2505.22427v1)
- raw_citation_text: `\cite{iyer2018calibnet}`
- cited_paper_title: CalibNet: Geometrically Supervised Extrinsic Calibration using 3D Spatial Transformer Networks
- cited_paper_arxiv_link: http://arxiv.org/abs/1803.08181v2
- cited_paper_abstract: 3D LiDARs and 2D cameras are increasingly being used alongside each other in sensor rigs for perception tasks. Before these sensors can be used to gather meaningful data, however, their extrinsics (and intrinsics) need to be accurately calibrated, as the performance of the sensor rig is extremely sensitive to these calibration parameters. A vast majority of existing calibration techniques require significant amounts of data and/or calibration targets and human effort, severely impacting their applicability in large-scale production systems. We address this gap with CalibNet: a self-supervised deep network capable of automatically estimating the 6-DoF rigid body transformation between a 3D LiDAR and a 2D camera in real-time. CalibNet alleviates the need for calibration targets, thereby resulting in significant savings in calibration efforts. During training, the network only takes as input a LiDAR point cloud, the corresponding monocular image, and the camera calibration matrix K. At train time, we do not impose direct supervision (i.e., we do not directly regress to the calibration parameters, for example). Instead, we train the network to predict calibration parameters that maximize the geometric and photometric consistency of the input images and point clouds. CalibNet learns to iteratively solve the underlying geometric problem and accurately predicts extrinsic calibration parameters for a wide range of mis-calibrations, without requiring retraining or domain adaptation. The project page is hosted at https://epiception.github.io/CalibNet
- has_metadata: true; is_arxiv_paper: true
- bib_paper_authors: Iyer, Ganesh and Ram, R Karnik and Murthy, J Krishna and Krishna, K Madhava
- bib_paper_year: 2018; bib_paper_month/url/doi/journal: null
- original_title: same as cited_paper_title
- search_res_title: CalibNet: Geometrically Supervised Extrinsic Calibration ...
- search_res_url: https://dl.acm.org/doi/10.1109/IROS.2018.8593693
- search_res_content: "by G Iyer · 2018 · Cited by 247. CalibNet: Geometrically Supervised Extrinsic Calibration using 3D Spatial Transformer Networks. Authors: Ganesh Iyer, Robotics Research Center" (snippet truncated)
**shi2020calibrcnn** (cited in "RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network", arXiv 2505.22427v1)
- raw_citation_text: `\cite{shi2020calibrcnn}`
- cited_paper_title: Calibrcnn: Calibrating camera and lidar by recurrent convolutional neural network and geometric constraints
- cited_paper_arxiv_link: null; cited_paper_abstract: null
- has_metadata: true; is_arxiv_paper: false
- bib_paper_authors: Shi, Jieying and Zhu, Ziheng and Zhang, Jianhua and Liu, Ruyu and Wang, Zhenhua and Chen, Shengyong and Liu, Honghai
- bib_paper_year: 2020; bib_paper_month/url/doi/journal: null
- original_title: same as cited_paper_title
- search_res_title: Calibrating Camera and LiDAR by recurrent convolutional neural ...
- search_res_url: https://researchportal.port.ac.uk/en/publications/calibrcnn(a901bae3-8f6e-49d3-89e2-1c503f95db11).html
- search_res_content: "Missing: 04/08/2025" (search-engine placeholder; no usable snippet)
**sak2014long** (cited in "RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network", arXiv 2505.22427v1)
- raw_citation_text: `\cite{sak2014long}`
- cited_paper_title: Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition
- cited_paper_arxiv_link: null; cited_paper_abstract: null
- has_metadata: true; is_arxiv_paper: false
- bib_paper_authors: Sak, Haşim and Senior, Andrew and Beaufays, Françoise
- bib_paper_year: 2014; bib_paper_journal: arXiv preprint arXiv:1402.1128; bib_paper_month/url/doi: null
- original_title: same as cited_paper_title
- search_res_title: long short-term memory based recurrent neural network ... - ar5iv
- search_res_url: https://ar5iv.labs.arxiv.org/html/1402.1128
- search_res_content: "In this paper, we show that LSTM based RNN architectures can obtain state of the art performance in a large vocabulary speech recognition system with thousands" (snippet truncated)
**lv2021lccnet** (cited in "RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network", arXiv 2505.22427v1)
- raw_citation_text: `\cite{lv2021lccnet}`
- cited_paper_title: LCCNet: LiDAR and Camera Self-Calibration using Cost Volume Network
- cited_paper_arxiv_link: http://arxiv.org/abs/2012.13901v2
- cited_paper_abstract: In this paper, we propose a novel online self-calibration approach for Light Detection and Ranging (LiDAR) and camera sensors. Compared to the previous CNN-based methods that concatenate the feature maps of the RGB image and decalibrated depth image, we exploit the cost volume inspired by the PWC-Net for feature matching. Besides the smooth L1-Loss of the predicted extrinsic calibration parameters, an additional point cloud loss is applied. Instead of regressing the extrinsic parameters between LiDAR and camera directly, we predict the decalibrated deviation from initial calibration to the ground truth. During inference, the calibration error decreases further with the usage of iterative refinement and the temporal filtering approach. The evaluation results on the KITTI dataset illustrate that our approach outperforms CNN-based state-of-the-art methods in terms of a mean absolute calibration error of 0.297 cm in translation and 0.017° in rotation with miscalibration magnitudes of up to 1.5 m and 20°.
- has_metadata: true; is_arxiv_paper: true
- bib_paper_authors: Lv, Xudong and Wang, Boya and Dou, Ziwen and Ye, Dong and Wang, Shuo
- bib_paper_year: 2021; bib_paper_month/url/doi/journal: null
- original_title: same as cited_paper_title
- search_res_title: LCCNet: LiDAR and Camera Self-Calibration using Cost ...
- search_res_url: https://arxiv.org/abs/2012.13901
- search_res_content: "by X Lv · 2020 · Cited by 175. Abstract: In this paper, we propose a novel online self-calibration approach for Light Detection and Ranging (LiDAR) and camera sensors."
**pervsic2021online** (cited in "RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network", arXiv 2505.22427v1)
- raw_citation_text: `\cite{pervsic2021online}`
- cited_paper_title: Online multi-sensor calibration based on moving object tracking
- cited_paper_arxiv_link: null; cited_paper_abstract: null
- has_metadata: true; is_arxiv_paper: false
- bib_paper_authors: Peršić, Juraj and Petrović, Luka and Marković, Ivan and Petrović, Ivan
- bib_paper_year: 2021; bib_paper_journal: Advanced Robotics; bib_paper_month/url/doi: null
- original_title: same as cited_paper_title
- search_res_title: Online multi-sensor calibration based on moving object tracking
- search_res_url: https://www.researchgate.net/publication/345092954_Online_multi-sensor_calibration_based_on_moving_object_tracking
- search_res_content: "Peršić et al. [5] propose an online targetless multi-sensor calibration method based on the detection and tracking of moving objects. It employs the tracking-" (snippet truncated)
**scholler2019targetless** (cited in "RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network", arXiv 2505.22427v1)
- raw_citation_text: `\cite{scholler2019targetless}`
- cited_paper_title: Targetless Rotational Auto-Calibration of Radar and Camera for Intelligent Transportation Systems
- cited_paper_arxiv_link: http://arxiv.org/abs/1904.08743v2
- cited_paper_abstract: Most intelligent transportation systems use a combination of radar sensors and cameras for robust vehicle perception. The calibration of these heterogeneous sensor types in an automatic fashion during system operation is challenging due to differing physical measurement principles and the high sparsity of traffic radars. We propose, to the best of our knowledge, the first data-driven method for automatic rotational radar-camera calibration without dedicated calibration targets. Our approach is based on a coarse and a fine convolutional neural network. We employ a boosting-inspired training algorithm, where we train the fine network on the residual error of the coarse network. Due to the unavailability of public datasets combining radar and camera measurements, we recorded our own real-world data. We demonstrate that our method is able to reach precise and robust sensor registration and show its generalization capabilities to different sensor alignments and perspectives.
- has_metadata: true; is_arxiv_paper: true
- bib_paper_authors: Schöller, Christoph and Schnettler, Maximilian and Krämmer, Annkathrin and Hinz, Gereon and Bakovic, Maida and Güzet, Müge and Knoll, Alois
- bib_paper_year: 2019; bib_paper_month/url/doi/journal: null
- original_title: same as cited_paper_title
- search_res_title: Targetless Rotational Auto-Calibration of Radar and Camera ... - arXiv
- search_res_url: https://arxiv.org/abs/1904.08743
- search_res_content: "Authors: Christoph Schöller, Maximilian Schnettler, Annkathrin Krämmer, Gereon Hinz, Maida Bakovic, Müge Güzet, Alois Knoll. Comments: Accepted at the IEEE Intelligent Transportation Systems Conference (ITSC) 2019. Subjects: Computer Vision and Pattern Recognition (cs.CV). Cite as: arXiv:1904.08743 [cs.CV]. DOI: https://doi.org/10.48550/arXiv.1904.08743" (arXiv page navigation widgets stripped)
**wise2021continuous** (cited in "RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network", arXiv 2505.22427v1)
- raw_citation_text: `\cite{wise2021continuous}`
- cited_paper_title: A Continuous-Time Approach for 3D Radar-to-Camera Extrinsic Calibration
- cited_paper_arxiv_link: http://arxiv.org/abs/2103.07505v2
- cited_paper_abstract: Reliable operation in inclement weather is essential to the deployment of safe autonomous vehicles (AVs). Robustness and reliability can be achieved by fusing data from the standard AV sensor suite (i.e., lidars, cameras) with weather robust sensors, such as millimetre-wavelength radar. Critically, accurate sensor data fusion requires knowledge of the rigid-body transform between sensor pairs, which can be determined through the process of extrinsic calibration. A number of extrinsic calibration algorithms have been designed for 2D (planar) radar sensors; however, recently-developed, low-cost 3D millimetre-wavelength radars are set to displace their 2D counterparts in many applications. In this paper, we present a continuous-time 3D radar-to-camera extrinsic calibration algorithm that utilizes radar velocity measurements and, unlike the majority of existing techniques, does not require specialized radar retroreflectors to be present in the environment. We derive the observability properties of our formulation and demonstrate the efficacy of our algorithm through synthetic and real-world experiments.
- has_metadata: true; is_arxiv_paper: true
- bib_paper_authors: Wise, Emmett and Peršić, Juraj and Grebe, Christopher and Petrović, Ivan and Kelly, Jonathan
- bib_paper_year: 2021; bib_paper_month/url/doi/journal: null
- original_title: same as cited_paper_title
- search_res_title: A Continuous-Time Approach for 3D Radar-to-Camera ...
- search_res_url: https://dl.acm.org/doi/10.1109/ICRA48506.2021.9561938
- search_res_content: "by E Wise · 2021 · Cited by 42. In this paper, we present a continuous-time 3D radar-to-camera extrinsic calibration algorithm that utilizes radar velocity measurements and, unlike the" (snippet truncated)
**wise2023spatiotemporal** (cited in "RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network", arXiv 2505.22427v1)
- raw_citation_text: `\cite{wise2023spatiotemporal}`
- cited_paper_title: Spatiotemporal Calibration of 3D Millimetre-Wavelength Radar-Camera Pairs
- cited_paper_arxiv_link: http://arxiv.org/abs/2211.01871v4
- cited_paper_abstract: Autonomous vehicles (AVs) fuse data from multiple sensors and sensing modalities to impart a measure of robustness when operating in adverse conditions. Radars and cameras are popular choices for use in sensor fusion; although radar measurements are sparse in comparison to camera images, radar scans penetrate fog, rain, and snow. However, accurate sensor fusion depends upon knowledge of the spatial transform between the sensors and any temporal misalignment that exists in their measurement times. During the life cycle of an AV, these calibration parameters may change, so the ability to perform in-situ spatiotemporal calibration is essential to ensure reliable long-term operation. State-of-the-art 3D radar-camera spatiotemporal calibration algorithms require bespoke calibration targets that are not readily available in the field. In this paper, we describe an algorithm for targetless spatiotemporal calibration that does not require specialized infrastructure. Our approach leverages the ability of the radar unit to measure its own ego-velocity relative to a fixed, external reference frame. We analyze the identifiability of the spatiotemporal calibration problem and determine the motions necessary for calibration. Through a series of simulation studies, we characterize the sensitivity of our algorithm to measurement noise. Finally, we demonstrate accurate calibration for three real-world systems, including a handheld sensor rig and a vehicle-mounted sensor array. Our results show that we are able to match the performance of an existing, target-based method, while calibrating in arbitrary, infrastructure-free environments.
- has_metadata: true; is_arxiv_paper: true
- bib_paper_authors: Wise, Emmett and Cheng, Qilong and Kelly, Jonathan
- bib_paper_year: 2023; bib_paper_journal: IEEE Transactions on Robotics; bib_paper_month/url/doi: null
- original_title: same as cited_paper_title
- search_res_title: Spatiotemporal Calibration of 3-D Millimetre-Wavelength Radar ...
- search_res_url: http://ieeexplore.ieee.org/iel7/8860/10352149/10256219.pdf
- search_res_content: "During calibration, the approach in [6] filters radar-camera measurement pairs by return intensity; the intensity is maximal for reflectors that lie on the" (snippet truncated)
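All records up to this point share the parent paper 2505.22427v1 (RC-AutoCalib); the records below belong to 2505.22167v1 (Q-VDiT). Since every record carries `parent_paper_arxiv_id`, tallying cited works per parent paper is a one-liner. A short sketch in plain Python; the three inline rows are a hypothetical sample, not the full dataset:

```python
from collections import Counter

# A hypothetical three-row sample mirroring the records in this preview.
rows = [
    {"parent_paper_arxiv_id": "2505.22427v1", "citation_shorthand": "el2015radar"},
    {"parent_paper_arxiv_id": "2505.22427v1", "citation_shorthand": "lv2021lccnet"},
    {"parent_paper_arxiv_id": "2505.22167v1", "citation_shorthand": "ho2020ddpm"},
]

# Count how many citation records each parent paper contributes.
per_parent = Counter(r["parent_paper_arxiv_id"] for r in rows)
# per_parent == Counter({"2505.22427v1": 2, "2505.22167v1": 1})
```

The same pattern works for any other grouping key, e.g. `is_arxiv_paper` or `bib_paper_year`.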
**ho2020ddpm** (cited in "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers", arXiv 2505.22167v1)
- raw_citation_text: `\cite{ho2020ddpm}`
- cited_paper_title: Denoising Diffusion Probabilistic Models
- cited_paper_arxiv_link: http://arxiv.org/abs/2006.11239v2
- cited_paper_abstract: We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN. Our implementation is available at https://github.com/hojonathanho/diffusion
- has_metadata: true; is_arxiv_paper: true
- bib_paper_authors: Ho, Jonathan and Jain, Ajay and Abbeel, Pieter
- bib_paper_year: 2020; bib_paper_journal: Advances in neural information processing systems; bib_paper_month/url/doi: null
- original_title: same as cited_paper_title
- search_res_title: Denoising Diffusion Probabilistic Models
- search_res_url: http://arxiv.org/pdf/2006.11239v2
- search_res_content: identical to cited_paper_abstract above
**rombach2022ldm** (cited in "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers", arXiv 2505.22167v1)
- raw_citation_text: `\cite{rombach2022ldm}`
- cited_paper_title: High-resolution image synthesis with latent diffusion models
- cited_paper_arxiv_link: null; cited_paper_abstract: null
- has_metadata: true; is_arxiv_paper: false
- bib_paper_authors: Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Björn
- bib_paper_year: 2022; bib_paper_month/url/doi/journal: null
- original_title: same as cited_paper_title
- search_res_title: [PDF] High-Resolution Image Synthesis With Latent Diffusion Models
- search_res_url: https://openaccess.thecvf.com/content/CVPR2022/papers/Rombach_High-Resolution_Image_Synthesis_With_Latent_Diffusion_Models_CVPR_2022_paper.pdf
- search_res_content: "By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Our latent diffusion models (LDMs) achieve new state of the art scores for image inpainting and class-conditional image synthesis and highly competitive performance on various tasks, including unconditional image generation, text-to-image synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs." (PDF header and author-affiliation extraction debris stripped)
**li2024qdm** (cited in "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers", arXiv 2505.22167v1)
- raw_citation_text: `\cite{li2024qdm}`
- cited_paper_title: Q-dm: An efficient low-bit quantized diffusion model
- cited_paper_arxiv_link: null; cited_paper_abstract: null
- has_metadata: true; is_arxiv_paper: false
- bib_paper_authors: Li, Yanjing and Xu, Sheng and Cao, Xianbin and Sun, Xiao and Zhang, Baochang
- bib_paper_year: 2024; bib_paper_journal: Advances in Neural Information Processing Systems; bib_paper_month/url/doi: null
- original_title: same as cited_paper_title
- search_res_title: Q-DM: An Efficient Low-bit Quantized Diffusion Model
- search_res_url: https://proceedings.neurips.cc/paper_files/paper/2023/hash/f1ee1cca0721de55bb35cf28ab95e1b4-Abstract-Conference.html
- search_res_content: "We propose an efficient Q-DM to calculate low-bit DMs by considering both training and inference process in the same framework."
**zheng2024binarydm** (cited in "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers", arXiv 2505.22167v1)
- raw_citation_text: `\cite{zheng2024binarydm}`
- cited_paper_title: Binarydm: Towards accurate binarization of diffusion model
- cited_paper_arxiv_link: null; cited_paper_abstract: null
- has_metadata: true; is_arxiv_paper: false
- bib_paper_authors: Zheng, Xingyu and Qin, Haotong and Ma, Xudong and Zhang, Mingyuan and Hao, Haojie and Wang, Jiakai and Zhao, Zixiang and Guo, Jinyang and Liu, Xianglong
- bib_paper_year: 2024; bib_paper_journal: arXiv preprint arXiv:2404.05662; bib_paper_month/url/doi: null
- original_title: same as cited_paper_title
- search_res_title: BinaryDM: Towards Accurate Binarization of Diffusion Model
- search_res_url: https://arxiv.org/abs/2404.05662v1/
- search_res_content: "In this paper, we propose BinaryDM, a novel accurate quantization-aware training approach to push the weights of diffusion models towards the limit of 1-bit."
**zheng2024bidm** (cited in "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers", arXiv 2505.22167v1)
- raw_citation_text: `\cite{zheng2024bidm}`
- cited_paper_title: BiDM: Pushing the Limit of Quantization for Diffusion Models
- cited_paper_arxiv_link: http://arxiv.org/abs/2412.05926v1
- cited_paper_abstract: Diffusion models (DMs) have been significantly developed and widely used in various applications due to their excellent generative qualities. However, the expensive computation and massive parameters of DMs hinder their practical use in resource-constrained scenarios. As one of the effective compression approaches, quantization allows DMs to achieve storage saving and inference acceleration by reducing bit-width while maintaining generation performance. However, as the most extreme quantization form, 1-bit binarization causes the generation performance of DMs to face severe degradation or even collapse. This paper proposes a novel method, namely BiDM, for fully binarizing weights and activations of DMs, pushing quantization to the 1-bit limit. From a temporal perspective, we introduce the Timestep-friendly Binary Structure (TBS), which uses learnable activation binarizers and cross-timestep feature connections to address the highly timestep-correlated activation features of DMs. From a spatial perspective, we propose Space Patched Distillation (SPD) to address the difficulty of matching binary features during distillation, focusing on the spatial locality of image generation tasks and noise estimation networks. As the first work to fully binarize DMs, the W1A1 BiDM on the LDM-4 model for LSUN-Bedrooms 256×256 achieves a remarkable FID of 22.74, significantly outperforming the current state-of-the-art general binarization methods with an FID of 59.44 and invalid generative samples, and achieves up to excellent 28.0 times storage and 52.7 times OPs savings. The code is available at https://github.com/Xingyu-Zheng/BiDM
- has_metadata: true; is_arxiv_paper: true
- bib_paper_authors: Zheng, Xingyu and Liu, Xianglong and Bian, Yichen and Ma, Xudong and Zhang, Yulun and Wang, Jiakai and Guo, Jinyang and Qin, Haotong
- bib_paper_year: 2024; bib_paper_journal: arXiv preprint arXiv:2412.05926; bib_paper_month/url/doi: null
- original_title: same as cited_paper_title
- search_res_title: BiDM: Pushing the Limit of Quantization for Diffusion Models
- search_res_url: http://arxiv.org/pdf/2412.05926v1
- search_res_content: identical to cited_paper_abstract above
**lu2024terdit** (cited in "Q-VDiT: Towards Accurate Quantization and Distillation of Video-Generation Diffusion Transformers", arXiv 2505.22167v1)
- raw_citation_text: `\cite{lu2024terdit}`
- cited_paper_title: TerDiT: Ternary Diffusion Models with Transformers
- cited_paper_arxiv_link: http://arxiv.org/abs/2405.14854v2
- cited_paper_abstract: Recent developments in large-scale pre-trained text-to-image diffusion models have significantly improved the generation of high-fidelity images, particularly with the emergence of diffusion transformer models (DiTs). Among diffusion models, diffusion transformers have demonstrated superior image-generation capabilities, boosting lower FID scores and higher scalability. However, deploying large-scale DiT models can be expensive due to their excessive parameter numbers. Although existing research has explored efficient deployment techniques for diffusion models, such as model quantization, there is still little work concerning DiT-based models. To tackle this research gap, we propose TerDiT, the first quantization-aware training (QAT) and efficient deployment scheme for extremely low-bit diffusion transformer models. We focus on the ternarization of DiT networks, with model sizes ranging from 600M to 4.2B, and image resolution from 256×256 to 512×512. Our work contributes to the exploration of efficient deployment of large-scale DiT models, demonstrating the feasibility of training extremely low-bit DiT models from scratch while maintaining competitive image generation capacities compared to full-precision models. Our code and pre-trained TerDiT checkpoints have been released at https://github.com/Lucky-Lance/TerDiT.
- has_metadata: true; is_arxiv_paper: true
- bib_paper_authors: Lu, Xudong and Zhou, Aojun and Lin, Ziyi and Liu, Qi and Xu, Yuhui and Zhang, Renrui and Wen, Yafei and Ren, Shuai and Gao, Peng and Yan, Junchi and others
- bib_paper_year: 2024; bib_paper_journal: arXiv preprint arXiv:2405.14854; bib_paper_month/url/doi: null
- original_title: same as cited_paper_title
- search_res_title: TerDiT: Ternary Diffusion Models with Transformers
- search_res_url: http://arxiv.org/pdf/2405.14854v2
- search_res_content: identical to cited_paper_abstract above; the record is cut off mid-sentence at "However, deploying large-scale DiT models can be expensive" (truncated)
their excessive parameter numbers. Although existing research has explored
efficient deployment techniques for diffusion models, such as model
quantization, there is still little work concerning DiT-based models. To tackle
this research gap, we propose TerDiT, the first quantization-aware training
(QAT) and efficient deployment scheme for extremely low-bit diffusion
transformer models. We focus on the ternarization of DiT networks, with model
sizes ranging from 600M to 4.2B, and image resolution from 256$\times$256 to
512$\times$512. Our work contributes to the exploration of efficient deployment
of large-scale DiT models, demonstrating the feasibility of training extremely
low-bit DiT models from scratch while maintaining competitive image generation
capacities compared to full-precision models. Our code and pre-trained TerDiT
checkpoints have been released at https://github.com/Lucky-Lance/TerDiT.
|
Q-VDiT: Towards Accurate Quantization and Distillation of
Video-Generation Diffusion Transformers
|
2505.22167v1
|
li2023qdiffusion
|
\cite{li2023qdiffusion}
|
Q-Diffusion: Quantizing Diffusion Models
|
http://arxiv.org/abs/2302.04304v3
|
Diffusion models have achieved great success in image synthesis through
iterative noise estimation using deep neural networks. However, the slow
inference, high memory consumption, and computation intensity of the noise
estimation model hinder the efficient adoption of diffusion models. Although
post-training quantization (PTQ) is considered a go-to compression method for
other tasks, it does not work out-of-the-box on diffusion models. We propose a
novel PTQ method specifically tailored towards the unique multi-timestep
pipeline and model architecture of the diffusion models, which compresses the
noise estimation network to accelerate the generation process. We identify the
key difficulty of diffusion model quantization as the changing output
distributions of noise estimation networks over multiple time steps and the
bimodal activation distribution of the shortcut layers within the noise
estimation network. We tackle these challenges with timestep-aware calibration
and split shortcut quantization in this work. Experimental results show that
our proposed method is able to quantize full-precision unconditional diffusion
models into 4-bit while maintaining comparable performance (small FID change of
at most 2.34 compared to >100 for traditional PTQ) in a training-free manner.
Our approach can also be applied to text-guided image generation, where we can
run stable diffusion in 4-bit weights with high generation quality for the
first time.
| true | true |
Li, Xiuyu and Liu, Yijiang and Lian, Long and Yang, Huanrui and Dong, Zhen and Kang, Daniel and Zhang, Shanghang and Keutzer, Kurt
| 2,023 | null | null | null | null |
Q-Diffusion: Quantizing Diffusion Models
|
Q-Diffusion: Quantizing Diffusion Models
|
http://arxiv.org/pdf/2302.04304v3
|
Diffusion models have achieved great success in image synthesis through
iterative noise estimation using deep neural networks. However, the slow
inference, high memory consumption, and computation intensity of the noise
estimation model hinder the efficient adoption of diffusion models. Although
post-training quantization (PTQ) is considered a go-to compression method for
other tasks, it does not work out-of-the-box on diffusion models. We propose a
novel PTQ method specifically tailored towards the unique multi-timestep
pipeline and model architecture of the diffusion models, which compresses the
noise estimation network to accelerate the generation process. We identify the
key difficulty of diffusion model quantization as the changing output
distributions of noise estimation networks over multiple time steps and the
bimodal activation distribution of the shortcut layers within the noise
estimation network. We tackle these challenges with timestep-aware calibration
and split shortcut quantization in this work. Experimental results show that
our proposed method is able to quantize full-precision unconditional diffusion
models into 4-bit while maintaining comparable performance (small FID change of
at most 2.34 compared to >100 for traditional PTQ) in a training-free manner.
Our approach can also be applied to text-guided image generation, where we can
run stable diffusion in 4-bit weights with high generation quality for the
first time.
|
Q-VDiT: Towards Accurate Quantization and Distillation of
Video-Generation Diffusion Transformers
|
2505.22167v1
|
shang2023ptq4dm
|
\cite{shang2023ptq4dm}
|
Post-training Quantization on Diffusion Models
|
http://arxiv.org/abs/2211.15736v3
|
Denoising diffusion (score-based) generative models have recently achieved
significant accomplishments in generating realistic and diverse data. These
approaches define a forward diffusion process for transforming data into noise
and a backward denoising process for sampling data from noise. Unfortunately,
the generation process of current denoising diffusion models is notoriously
slow due to the lengthy iterative noise estimations, which rely on cumbersome
neural networks. It prevents the diffusion models from being widely deployed,
especially on edge devices. Previous works accelerate the generation process of
diffusion model (DM) via finding shorter yet effective sampling trajectories.
However, they overlook the cost of noise estimation with a heavy network in
every iteration. In this work, we accelerate generation from the perspective of
compressing the noise estimation network. Due to the difficulty of retraining
DMs, we exclude mainstream training-aware compression paradigms and introduce
post-training quantization (PTQ) into DM acceleration. However, the output
distributions of noise estimation networks change with time-step, making
previous PTQ methods fail in DMs since they are designed for single-time step
scenarios. To devise a DM-specific PTQ method, we explore PTQ on DM in three
aspects: quantized operations, calibration dataset, and calibration metric. We
summarize and use several observations derived from all-inclusive
investigations to formulate our method, which especially targets the unique
multi-time-step structure of DMs. Experimentally, our method can directly
quantize full-precision DMs into 8-bit models while maintaining or even
improving their performance in a training-free manner. Importantly, our method
can serve as a plug-and-play module on other fast-sampling methods, e.g., DDIM.
The code is available at https://github.com/42Shawn/PTQ4DM .
| true | true |
Shang, Yuzhang and Yuan, Zhihang and Xie, Bin and Wu, Bingzhe and Yan, Yan
| 2,023 | null | null | null | null |
Post-training Quantization on Diffusion Models
|
[2211.15736] Post-training Quantization on Diffusion Models - arXiv
|
https://arxiv.org/abs/2211.15736
|
Our method can directly quantize full-precision DMs into 8-bit models while maintaining or even improving their performance in a training-free manner.
|
Q-VDiT: Towards Accurate Quantization and Distillation of
Video-Generation Diffusion Transformers
|
2505.22167v1
|
he2024ptqd
|
\cite{he2024ptqd}
|
PTQD: Accurate Post-Training Quantization for Diffusion Models
|
http://arxiv.org/abs/2305.10657v4
|
Diffusion models have recently dominated image synthesis tasks. However, the
iterative denoising process is expensive in computations at inference time,
making diffusion models less practical for low-latency and scalable real-world
applications. Post-training quantization (PTQ) of diffusion models can
significantly reduce the model size and accelerate the sampling process without
re-training. Nonetheless, applying existing PTQ methods directly to low-bit
diffusion models can significantly impair the quality of generated samples.
Specifically, for each denoising step, quantization noise leads to deviations
in the estimated mean and mismatches with the predetermined variance schedule.
As the sampling process proceeds, the quantization noise may accumulate,
resulting in a low signal-to-noise ratio (SNR) during the later denoising
steps. To address these challenges, we propose a unified formulation for the
quantization noise and diffusion perturbed noise in the quantized denoising
process. Specifically, we first disentangle the quantization noise into its
correlated and residual uncorrelated parts regarding its full-precision
counterpart. The correlated part can be easily corrected by estimating the
correlation coefficient. For the uncorrelated part, we subtract the bias from
the quantized results to correct the mean deviation and calibrate the denoising
variance schedule to absorb the excess variance resulting from quantization.
Moreover, we introduce a mixed-precision scheme for selecting the optimal
bitwidth for each denoising step. Extensive experiments demonstrate that our
method outperforms previous post-training quantized diffusion models, with only
a 0.06 increase in FID score compared to full-precision LDM-4 on ImageNet
256x256, while saving 19.9x bit operations. Code is available at
https://github.com/ziplab/PTQD.
| true | true |
He, Yefei and Liu, Luping and Liu, Jing and Wu, Weijia and Zhou, Hong and Zhuang, Bohan
| 2,024 | null | null | null |
Advances in Neural Information Processing Systems
|
PTQD: Accurate Post-Training Quantization for Diffusion Models
|
PTQD: Accurate Post-Training Quantization for Diffusion Models
|
https://arxiv.org/abs/2305.10657
|
Post-training quantization (PTQ) of diffusion models can significantly reduce the model size and accelerate the sampling process without re-training.
|
Q-VDiT: Towards Accurate Quantization and Distillation of
Video-Generation Diffusion Transformers
|
2505.22167v1
|
huang2024tfmq
|
\cite{huang2024tfmq}
|
TFMQ-DM: Temporal feature maintenance quantization for diffusion models
| null | null | true | false |
Huang, Yushi and Gong, Ruihao and Liu, Jing and Chen, Tianlong and Liu, Xianglong
| 2,024 | null | null | null | null |
TFMQ-DM: Temporal feature maintenance quantization for diffusion models
|
TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models
|
http://arxiv.org/pdf/2311.16503v3
|
The Diffusion model, a prevalent framework for image generation, encounters
significant challenges in terms of broad applicability due to its extended
inference times and substantial memory requirements. Efficient Post-training
Quantization (PTQ) is pivotal for addressing these issues in traditional
models. Different from traditional models, diffusion models heavily depend on
the time-step $t$ to achieve satisfactory multi-round denoising. Usually, $t$
from the finite set $\{1, \ldots, T\}$ is encoded to a temporal feature by a
few modules totally irrespective of the sampling data. However, existing PTQ
methods do not optimize these modules separately. They adopt inappropriate
reconstruction targets and complex calibration methods, resulting in a severe
disturbance of the temporal feature and denoising trajectory, as well as a low
compression efficiency. To solve these, we propose a Temporal Feature
Maintenance Quantization (TFMQ) framework building upon a Temporal Information
Block which is just related to the time-step $t$ and unrelated to the sampling
data. Powered by the pioneering block design, we devise temporal information
aware reconstruction (TIAR) and finite set calibration (FSC) to align the
full-precision temporal features in a limited time. Equipped with the
framework, we can maintain the most temporal information and ensure the
end-to-end generation quality. Extensive experiments on various datasets and
diffusion models prove our state-of-the-art results. Remarkably, our
quantization approach, for the first time, achieves model performance nearly on
par with the full-precision model under 4-bit weight quantization.
Additionally, our method incurs almost no extra computational cost and
accelerates quantization time by $2.0 \times$ on LSUN-Bedrooms $256 \times 256$
compared to previous works. Our code is publicly available at
https://github.com/ModelTC/TFMQ-DM.
|
Q-VDiT: Towards Accurate Quantization and Distillation of
Video-Generation Diffusion Transformers
|
2505.22167v1
|
wang2024quest
|
\cite{wang2024quest}
|
QuEST: Low-bit Diffusion Model Quantization via Efficient Selective
Finetuning
|
http://arxiv.org/abs/2402.03666v6
|
The practical deployment of diffusion models is still hindered by the high
memory and computational overhead. Although quantization paves a way for model
compression and acceleration, existing methods face challenges in achieving
low-bit quantization efficiently. In this paper, we identify imbalanced
activation distributions as a primary source of quantization difficulty, and
propose to adjust these distributions through weight finetuning to be more
quantization-friendly. We provide both theoretical and empirical evidence
supporting finetuning as a practical and reliable solution. Building on this
approach, we further distinguish two critical types of quantized layers: those
responsible for retaining essential temporal information and those particularly
sensitive to bit-width reduction. By selectively finetuning these layers under
both local and global supervision, we mitigate performance degradation while
enhancing quantization efficiency. Our method demonstrates its efficacy across
three high-resolution image generation tasks, obtaining state-of-the-art
performance across multiple bit-width settings.
| true | true |
Wang, Haoxuan and Shang, Yuzhang and Yuan, Zhihang and Wu, Junyi and Yan, Yan
| 2,024 | null | null | null |
arXiv preprint arXiv:2402.03666
|
QuEST: Low-bit Diffusion Model Quantization via Efficient Selective
Finetuning
|
Low-bit Diffusion Model Quantization via Efficient Selective Finetuning
|
https://arxiv.org/abs/2402.03666
|
In this paper, we identify imbalanced activation distributions as a primary source of quantization difficulty, and propose to adjust these distributions
|
Q-VDiT: Towards Accurate Quantization and Distillation of
Video-Generation Diffusion Transformers
|
2505.22167v1
|
he2023efficientdm
|
\cite{he2023efficientdm}
|
EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit
Diffusion Models
|
http://arxiv.org/abs/2310.03270v4
|
Diffusion models have demonstrated remarkable capabilities in image synthesis
and related generative tasks. Nevertheless, their practicality for real-world
applications is constrained by substantial computational costs and latency
issues. Quantization is a dominant way to compress and accelerate diffusion
models, where post-training quantization (PTQ) and quantization-aware training
(QAT) are two main approaches, each bearing its own properties. While PTQ
exhibits efficiency in terms of both time and data usage, it may lead to
diminished performance in low bit-width. On the other hand, QAT can alleviate
performance degradation but comes with substantial demands on computational and
data resources. In this paper, we introduce a data-free and parameter-efficient
fine-tuning framework for low-bit diffusion models, dubbed EfficientDM, to
achieve QAT-level performance with PTQ-like efficiency. Specifically, we
propose a quantization-aware variant of the low-rank adapter (QALoRA) that can
be merged with model weights and jointly quantized to low bit-width. The
fine-tuning process distills the denoising capabilities of the full-precision
model into its quantized counterpart, eliminating the requirement for training
data. We also introduce scale-aware optimization and temporal learned step-size
quantization to further enhance performance. Extensive experimental results
demonstrate that our method significantly outperforms previous PTQ-based
diffusion models while maintaining similar time and data efficiency.
Specifically, there is only a 0.05 sFID increase when quantizing both weights
and activations of LDM-4 to 4-bit on ImageNet 256x256. Compared to QAT-based
methods, our EfficientDM also boasts a 16.2x faster quantization speed with
comparable generation quality. Code is available at
\href{https://github.com/ThisisBillhe/EfficientDM}{this url}.
| true | true |
He, Yefei and Liu, Jing and Wu, Weijia and Zhou, Hong and Zhuang, Bohan
| 2,023 | null | null | null |
arXiv preprint arXiv:2310.03270
|
EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit
Diffusion Models
|
Efficient Quantization-Aware Fine-Tuning of Low-Bit ...
|
https://openreview.net/forum?id=UmMa3UNDAz
|
by Y He · Cited by 59 — We introduce a data-free, quantization-aware and parameter-efficient fine-tuning framework for low-bit diffusion models, dubbed EfficientDM, to achieve QAT-
|
Q-VDiT: Towards Accurate Quantization and Distillation of
Video-Generation Diffusion Transformers
|
2505.22167v1
|
zhao2025mixdq
|
\cite{zhao2025mixdq}
|
MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with
Metric-Decoupled Mixed Precision Quantization
|
http://arxiv.org/abs/2405.17873v2
|
Diffusion models have achieved significant visual generation quality.
However, their significant computational and memory costs pose challenge for
their application on resource-constrained mobile devices or even desktop GPUs.
Recent few-step diffusion models reduces the inference time by reducing the
denoising steps. However, their memory consumptions are still excessive. The
Post Training Quantization (PTQ) replaces high bit-width FP representation with
low-bit integer values (INT4/8) , which is an effective and efficient technique
to reduce the memory cost. However, when applying to few-step diffusion models,
existing quantization methods face challenges in preserving both the image
quality and text alignment. To address this issue, we propose an
mixed-precision quantization framework - MixDQ. Firstly, We design specialized
BOS-aware quantization method for highly sensitive text embedding quantization.
Then, we conduct metric-decoupled sensitivity analysis to measure the
sensitivity of each layer. Finally, we develop an integer-programming-based
method to conduct bit-width allocation. While existing quantization methods
fall short at W8A8, MixDQ could achieve W8A8 without performance loss, and W4A8
with negligible visual degradation. Compared with FP16, we achieve 3-4x
reduction in model size and memory cost, and 1.45x latency speedup.
| true | true |
Zhao, Tianchen and Ning, Xuefei and Fang, Tongcheng and Liu, Enshu and Huang, Guyue and Lin, Zinan and Yan, Shengen and Dai, Guohao and Wang, Yu
| 2,025 | null | null | null | null |
MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with
Metric-Decoupled Mixed Precision Quantization
|
MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion ...
|
https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/02212.pdf
|
by T Zhao12 · Cited by 29 — MixDQ is a mixed-precision quantization method for few-step text-to-image models, compressing memory by 3.4x without performance loss.
|
Q-VDiT: Towards Accurate Quantization and Distillation of
Video-Generation Diffusion Transformers
|
2505.22167v1
|
chen2024qdit
|
\cite{chen2024qdit}
|
Q-DiT: Accurate post-training quantization for diffusion transformers
| null | null | true | false |
Chen, Lei and Meng, Yuan and Tang, Chen and Ma, Xinzhu and Jiang, Jingyan and Wang, Xin and Wang, Zhi and Zhu, Wenwu
| 2,024 | null | null | null |
arXiv preprint arXiv:2406.17343
|
Q-DiT: Accurate post-training quantization for diffusion transformers
|
[PDF] Q-DiT: Accurate Post-Training Quantization for Diffusion Transformers
|
https://openaccess.thecvf.com/content/CVPR2025/papers/Chen_Q-DiT_Accurate_Post-Training_Quantization_for_Diffusion_Transformers_CVPR_2025_paper.pdf
|
Post-Training Quantization (PTQ) emerges as a promising solution, enabling model compression and accelerated inference for pretrained models, without the
|
Q-VDiT: Towards Accurate Quantization and Distillation of
Video-Generation Diffusion Transformers
|
2505.22167v1
|
wu2024ptq4dit
|
\cite{wu2024ptq4dit}
|
PTQ4DiT: Post-training Quantization for Diffusion Transformers
|
http://arxiv.org/abs/2405.16005v3
|
The recent introduction of Diffusion Transformers (DiTs) has demonstrated
exceptional capabilities in image generation by using a different backbone
architecture, departing from traditional U-Nets and embracing the scalable
nature of transformers. Despite their advanced capabilities, the wide
deployment of DiTs, particularly for real-time applications, is currently
hampered by considerable computational demands at the inference stage.
Post-training Quantization (PTQ) has emerged as a fast and data-efficient
solution that can significantly reduce computation and memory footprint by
using low-bit weights and activations. However, its applicability to DiTs has
not yet been explored and faces non-trivial difficulties due to the unique
design of DiTs. In this paper, we propose PTQ4DiT, a specifically designed PTQ
method for DiTs. We discover two primary quantization challenges inherent in
DiTs, notably the presence of salient channels with extreme magnitudes and the
temporal variability in distributions of salient activation over multiple
timesteps. To tackle these challenges, we propose Channel-wise Salience
Balancing (CSB) and Spearmen's $\rho$-guided Salience Calibration (SSC). CSB
leverages the complementarity property of channel magnitudes to redistribute
the extremes, alleviating quantization errors for both activations and weights.
SSC extends this approach by dynamically adjusting the balanced salience to
capture the temporal variations in activation. Additionally, to eliminate extra
computational costs caused by PTQ4DiT during inference, we design an offline
re-parameterization strategy for DiTs. Experiments demonstrate that our PTQ4DiT
successfully quantizes DiTs to 8-bit precision (W8A8) while preserving
comparable generation ability and further enables effective quantization to
4-bit weight precision (W4A8) for the first time.
| true | true |
Wu, Junyi and Wang, Haoxuan and Shang, Yuzhang and Shah, Mubarak and Yan, Yan
| 2,024 | null | null | null |
arXiv preprint arXiv:2405.16005
|
PTQ4DiT: Post-training Quantization for Diffusion Transformers
|
PTQ4DiT: Post-training Quantization for Diffusion Transformers
|
https://openreview.net/forum?id=NLmAGkN6nn&referrer=%5Bthe%20profile%20of%20Haoxuan%20Wang%5D(%2Fprofile%3Fid%3D~Haoxuan_Wang1)
|
This paper presents PTQ4DiT, a quantization method designed for diffusion transformers. The method focuses on addressing quantization challenges
|
Q-VDiT: Towards Accurate Quantization and Distillation of
Video-Generation Diffusion Transformers
|
2505.22167v1
|
li2024svdqunat
|
\cite{li2024svdqunat}
|
SVDQuant: Absorbing outliers by low-rank components for 4-bit diffusion models
| null | null | true | false |
Li, Muyang and Lin, Yujun and Zhang, Zhekai and Cai, Tianle and Li, Xiuyu and Guo, Junxian and Xie, Enze and Meng, Chenlin and Zhu, Jun-Yan and Han, Song
| 2,024 | null | null | null |
arXiv preprint arXiv:2411.05007
|
SVDQuant: Absorbing outliers by low-rank components for 4-bit diffusion models
|
SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit ...
|
https://arxiv.org/html/2411.05007v1
|
SVDQuant is a post-training quantization technique for 4-bit weights and activations that well maintains visual fidelity.
|
Q-VDiT: Towards Accurate Quantization and Distillation of
Video-Generation Diffusion Transformers
|
2505.22167v1
|
zhao2024vidit
|
\cite{zhao2024vidit}
|
ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation
| null | null | true | false |
Zhao, Tianchen and Fang, Tongcheng and Liu, Enshu and Rui, Wan and Soedarmadji, Widyadewi and Li, Shiyao and Lin, Zinan and Dai, Guohao and Yan, Shengen and Yang, Huazhong and others
| 2,024 | null | null | null |
arXiv preprint arXiv:2406.02540
|
ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation
|
ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation
|
http://arxiv.org/pdf/2406.02540v3
|
Diffusion transformers have demonstrated remarkable performance in visual
generation tasks, such as generating realistic images or videos based on
textual instructions. However, larger model sizes and multi-frame processing
for video generation lead to increased computational and memory costs, posing
challenges for practical deployment on edge devices. Post-Training Quantization
(PTQ) is an effective method for reducing memory costs and computational
complexity. When quantizing diffusion transformers, we find that existing
quantization methods face challenges when applied to text-to-image and video
tasks. To address these challenges, we begin by systematically analyzing the
source of quantization error and conclude with the unique challenges posed by
DiT quantization. Accordingly, we design an improved quantization scheme:
ViDiT-Q (Video & Image Diffusion Transformer Quantization), tailored
specifically for DiT models. We validate the effectiveness of ViDiT-Q across a
variety of text-to-image and video models, achieving W8A8 and W4A8 with
negligible degradation in visual quality and metrics. Additionally, we
implement efficient GPU kernels to achieve practical 2-2.5x memory saving and a
1.4-1.7x end-to-end latency speedup.
|
ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation
with Lightweight Specialized LLM
|
2505.22552v1
|
fever
|
\cite{fever}
|
FEVER: a large-scale dataset for Fact Extraction and VERification
|
http://arxiv.org/abs/1803.05355v3
|
In this paper we introduce a new publicly available dataset for verification
against textual sources, FEVER: Fact Extraction and VERification. It consists
of 185,445 claims generated by altering sentences extracted from Wikipedia and
subsequently verified without knowledge of the sentence they were derived from.
The claims are classified as Supported, Refuted or NotEnoughInfo by annotators
achieving 0.6841 in Fleiss $\kappa$. For the first two classes, the annotators
also recorded the sentence(s) forming the necessary evidence for their
judgment. To characterize the challenge of the dataset presented, we develop a
pipeline approach and compare it to suitably designed oracles. The best
accuracy we achieve on labeling a claim accompanied by the correct evidence is
31.87%, while if we ignore the evidence we achieve 50.91%. Thus we believe that
FEVER is a challenging testbed that will help stimulate progress on claim
verification against textual sources.
| true | true |
James Thorne and Andreas Vlachos and Christos Christodoulopoulos and Arpit Mittal
| 2,018 | null |
https://doi.org/10.18653/v1/n18-1074
|
10.18653/V1/N18-1074
| null |
FEVER: a large-scale dataset for Fact Extraction and VERification
|
FEVER: a Large-scale Dataset for Fact Extraction and ...
|
https://aclanthology.org/N18-1074/
|
by J Thorne · 2018 · Cited by 2060 — In this paper we introduce a new publicly available dataset for verification against textual sources, FEVER: Fact Extraction and VERification.
|
ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation
with Lightweight Specialized LLM
|
2505.22552v1
|
faviq
|
\cite{faviq}
|
{F}a{VIQ}: {FA}ct Verification from Information-seeking Questions
| null | null | true | false |
Park, Jungsoo and Min, Sewon and Kang, Jaewoo and Zettlemoyer, Luke and Hajishirzi, Hannaneh
| 2,022 | null |
https://aclanthology.org/2022.acl-long.354/
|
10.18653/v1/2022.acl-long.354
| null |
{F}a{VIQ}: {FA}ct Verification from Information-seeking Questions
|
FAVIQ: FAct Verification from Information-seeking Questions
|
https://aclanthology.org/2022.acl-long.354.pdf
|
by J Park · 2022 · Cited by 39 — We construct a fact verification dataset from highly ambiguous information-seeking questions. Our claims have significantly less lexical bias
|
ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation
with Lightweight Specialized LLM
|
2505.22552v1
|
vitamin-c
|
\cite{vitamin-c}
|
Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence
|
http://arxiv.org/abs/2103.08541v1
|
Typical fact verification models use retrieved written evidence to verify
claims. Evidence sources, however, often change over time as more information
is gathered and revised. In order to adapt, models must be sensitive to subtle
differences in supporting evidence. We present VitaminC, a benchmark infused
with challenging cases that require fact verification models to discern and
adjust to slight factual changes. We collect over 100,000 Wikipedia revisions
that modify an underlying fact, and leverage these revisions, together with
additional synthetically constructed ones, to create a total of over 400,000
claim-evidence pairs. Unlike previous resources, the examples in VitaminC are
contrastive, i.e., they contain evidence pairs that are nearly identical in
language and content, with the exception that one supports a given claim while
the other does not. We show that training using this design increases
robustness -- improving accuracy by 10% on adversarial fact verification and 6%
on adversarial natural language inference (NLI). Moreover, the structure of
VitaminC leads us to define additional tasks for fact-checking resources:
tagging relevant words in the evidence for verifying the claim, identifying
factual revisions, and providing automatic edits via factually consistent text
generation.
| true | true |
Schuster, Tal and Fisch, Adam and Barzilay, Regina
| 2,021 | null |
https://aclanthology.org/2021.naacl-main.52/
|
10.18653/v1/2021.naacl-main.52
| null |
Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence
|
Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence
|
http://arxiv.org/pdf/2103.08541v1
|
Typical fact verification models use retrieved written evidence to verify
claims. Evidence sources, however, often change over time as more information
is gathered and revised. In order to adapt, models must be sensitive to subtle
differences in supporting evidence. We present VitaminC, a benchmark infused
with challenging cases that require fact verification models to discern and
adjust to slight factual changes. We collect over 100,000 Wikipedia revisions
that modify an underlying fact, and leverage these revisions, together with
additional synthetically constructed ones, to create a total of over 400,000
claim-evidence pairs. Unlike previous resources, the examples in VitaminC are
contrastive, i.e., they contain evidence pairs that are nearly identical in
language and content, with the exception that one supports a given claim while
the other does not. We show that training using this design increases
robustness -- improving accuracy by 10% on adversarial fact verification and 6%
on adversarial natural language inference (NLI). Moreover, the structure of
VitaminC leads us to define additional tasks for fact-checking resources:
tagging relevant words in the evidence for verifying the claim, identifying
factual revisions, and providing automatic edits via factually consistent text
generation.
|
ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation
with Lightweight Specialized LLM
|
2505.22552v1
|
hover
|
\cite{hover}
|
HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification
|
http://arxiv.org/abs/2011.03088v2
|
We introduce HoVer (HOppy VERification), a dataset for many-hop evidence
extraction and fact verification. It challenges models to extract facts from
several Wikipedia articles that are relevant to a claim and classify whether
the claim is Supported or Not-Supported by the facts. In HoVer, the claims
require evidence to be extracted from as many as four English Wikipedia
articles and embody reasoning graphs of diverse shapes. Moreover, most of the
3/4-hop claims are written in multiple sentences, which adds to the complexity
of understanding long-range dependency relations such as coreference. We show
that the performance of an existing state-of-the-art semantic-matching model
degrades significantly on our dataset as the number of reasoning hops
increases, hence demonstrating the necessity of many-hop reasoning to achieve
strong results. We hope that the introduction of this challenging dataset and
the accompanying evaluation task will encourage research in many-hop fact
retrieval and information verification. We make the HoVer dataset publicly
available at https://hover-nlp.github.io
| true | true |
Yichen Jiang and
Shikha Bordia and
Zheng Zhong and
Charles Dognin and
Maneesh Kumar Singh and
Mohit Bansal
| 2,020 | null |
https://doi.org/10.18653/v1/2020.findings-emnlp.309
|
10.18653/V1/2020.FINDINGS-EMNLP.309
| null |
HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification
|
HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification
|
https://arxiv.org/abs/2011.03088
|
We introduce HoVer (HOppy VERification), a dataset for many-hop evidence extraction and fact verification. It challenges models to extract facts from several
|
ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation
with Lightweight Specialized LLM
|
2505.22552v1
|
graph-review
|
\cite{graph-review}
|
Graph Neural Networks: A Review of Methods and Applications
|
http://arxiv.org/abs/1812.08434v6
|
Lots of learning tasks require dealing with graph data which contains rich
relation information among elements. Modeling physics systems, learning
molecular fingerprints, predicting protein interface, and classifying diseases
demand a model to learn from graph inputs. In other domains such as learning
from non-structural data like texts and images, reasoning on extracted
structures (like the dependency trees of sentences and the scene graphs of
images) is an important research topic which also needs graph reasoning models.
Graph neural networks (GNNs) are neural models that capture the dependence of
graphs via message passing between the nodes of graphs. In recent years,
variants of GNNs such as graph convolutional network (GCN), graph attention
network (GAT), graph recurrent network (GRN) have demonstrated ground-breaking
performances on many deep learning tasks. In this survey, we propose a general
design pipeline for GNN models and discuss the variants of each component,
systematically categorize the applications, and propose four open problems for
future research.
| true | true |
Jie Zhou and
Ganqu Cui and
Shengding Hu and
Zhengyan Zhang and
Cheng Yang and
Zhiyuan Liu and
Lifeng Wang and
Changcheng Li and
Maosong Sun
| 2,020 | null |
https://doi.org/10.1016/j.aiopen.2021.01.001
|
10.1016/J.AIOPEN.2021.01.001
|
{AI} Open
|
Graph Neural Networks: A Review of Methods and Applications
|
Graph Neural Networks: A Review of Methods and Applications
|
http://arxiv.org/pdf/1812.08434v6
|
Lots of learning tasks require dealing with graph data which contains rich
relation information among elements. Modeling physics systems, learning
molecular fingerprints, predicting protein interface, and classifying diseases
demand a model to learn from graph inputs. In other domains such as learning
from non-structural data like texts and images, reasoning on extracted
structures (like the dependency trees of sentences and the scene graphs of
images) is an important research topic which also needs graph reasoning models.
Graph neural networks (GNNs) are neural models that capture the dependence of
graphs via message passing between the nodes of graphs. In recent years,
variants of GNNs such as graph convolutional network (GCN), graph attention
network (GAT), graph recurrent network (GRN) have demonstrated ground-breaking
performances on many deep learning tasks. In this survey, we propose a general
design pipeline for GNN models and discuss the variants of each component,
systematically categorize the applications, and propose four open problems for
future research.
|
ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation
with Lightweight Specialized LLM
|
2505.22552v1
|
tapas
|
\cite{tapas}
|
TAPAS: Weakly Supervised Table Parsing via Pre-training
|
http://arxiv.org/abs/2004.02349v2
|
Answering natural language questions over tables is usually seen as a
semantic parsing task. To alleviate the collection cost of full logical forms,
one popular approach focuses on weak supervision consisting of denotations
instead of logical forms. However, training semantic parsers from weak
supervision poses difficulties, and in addition, the generated logical forms
are only used as an intermediate step prior to retrieving the denotation. In
this paper, we present TAPAS, an approach to question answering over tables
without generating logical forms. TAPAS trains from weak supervision, and
predicts the denotation by selecting table cells and optionally applying a
corresponding aggregation operator to such selection. TAPAS extends BERT's
architecture to encode tables as input, initializes from an effective joint
pre-training of text segments and tables crawled from Wikipedia, and is trained
end-to-end. We experiment with three different semantic parsing datasets, and
find that TAPAS outperforms or rivals semantic parsing models by improving
state-of-the-art accuracy on SQA from 55.1 to 67.2 and performing on par with
the state-of-the-art on WIKISQL and WIKITQ, but with a simpler model
architecture. We additionally find that transfer learning, which is trivial in
our setting, from WIKISQL to WIKITQ, yields 48.7 accuracy, 4.2 points above the
state-of-the-art.
| true | true |
Herzig, Jonathan and
Nowak, Pawel Krzysztof and
M{\"u}ller, Thomas and
Piccinno, Francesco and
Eisenschlos, Julian
| 2,020 | null |
https://aclanthology.org/2020.acl-main.398/
|
10.18653/v1/2020.acl-main.398
| null |
TAPAS: Weakly Supervised Table Parsing via Pre-training
|
TaPas: Weakly Supervised Table Parsing via Pre-training
|
https://aclanthology.org/2020.acl-main.398/
|
by J Herzig · 2020 · Cited by 784 — TaPas trains from weak supervision, and predicts the denotation by selecting table cells and optionally applying a corresponding aggregation operator to such
|
ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation
with Lightweight Specialized LLM
|
2505.22552v1
|
rat-sql
|
\cite{rat-sql}
|
{RAT-SQL}: Relation-Aware Schema Encoding and Linking for Text-to-{SQL} Parsers
| null | null | true | false |
Wang, Bailin and
Shin, Richard and
Liu, Xiaodong and
Polozov, Oleksandr and
Richardson, Matthew
| 2,020 | null |
https://aclanthology.org/2020.acl-main.677/
|
10.18653/v1/2020.acl-main.677
| null |
{RAT-SQL}: Relation-Aware Schema Encoding and Linking for Text-to-{SQL} Parsers
|
RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to ...
|
https://arxiv.org/abs/1911.04942
|
View a PDF of the paper titled RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers, by Bailin Wang and 4 other authors.
|
ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation
with Lightweight Specialized LLM
|
2505.22552v1
|
programfc
|
\cite{programfc}
|
Fact-Checking Complex Claims with Program-Guided Reasoning
|
http://arxiv.org/abs/2305.12744v1
|
Fact-checking real-world claims often requires collecting multiple pieces of
evidence and applying complex multi-step reasoning. In this paper, we present
Program-Guided Fact-Checking (ProgramFC), a novel fact-checking model that
decomposes complex claims into simpler sub-tasks that can be solved using a
shared library of specialized functions. We first leverage the in-context
learning ability of large language models to generate reasoning programs to
guide the verification process. Afterward, we execute the program by delegating
each sub-task to the corresponding sub-task handler. This process makes our
model both explanatory and data-efficient, providing clear explanations of its
reasoning process and requiring minimal training data. We evaluate ProgramFC on
two challenging fact-checking datasets and show that it outperforms seven
fact-checking baselines across different settings of evidence availability,
with explicit output programs that benefit human debugging. Our codes and data
are publicly available at https://github.com/mbzuai-nlp/ProgramFC.
| true | true |
Liangming Pan and
Xiaobao Wu and
Xinyuan Lu and
Anh Tuan Luu and
William Yang Wang and
Min{-}Yen Kan and
Preslav Nakov
| 2,023 | null |
https://doi.org/10.18653/v1/2023.acl-long.386
|
10.18653/V1/2023.ACL-LONG.386
| null |
Fact-Checking Complex Claims with Program-Guided Reasoning
|
Fact-Checking Complex Claims with Program-Guided ...
|
https://aclanthology.org/2023.acl-long.386/
|
by L Pan · 2023 · Cited by 158 — A novel fact-checking model that decomposes complex claims into simpler sub-tasks that can be solved using a shared library of specialized functions.
|
ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation
with Lightweight Specialized LLM
|
2505.22552v1
|
folk
|
\cite{folk}
|
Explainable Claim Verification via Knowledge-Grounded Reasoning with
Large Language Models
|
http://arxiv.org/abs/2310.05253v2
|
Claim verification plays a crucial role in combating misinformation. While
existing works on claim verification have shown promising results, a crucial
piece of the puzzle that remains unsolved is to understand how to verify claims
without relying on human-annotated data, which is expensive to create at a
large scale. Additionally, it is important for models to provide comprehensive
explanations that can justify their decisions and assist human fact-checkers.
This paper presents First-Order-Logic-Guided Knowledge-Grounded (FOLK)
Reasoning that can verify complex claims and generate explanations without the
need for annotated evidence using Large Language Models (LLMs). FOLK leverages
the in-context learning ability of LLMs to translate the claim into a
First-Order-Logic (FOL) clause consisting of predicates, each corresponding to
a sub-claim that needs to be verified. Then, FOLK performs FOL-Guided reasoning
over a set of knowledge-grounded question-and-answer pairs to make veracity
predictions and generate explanations to justify its decision-making process.
This process makes our model highly explanatory, providing clear explanations
of its reasoning process in human-readable form. Our experiment results
indicate that FOLK outperforms strong baselines on three datasets encompassing
various claim verification challenges. Our code and data are available.
| true | true |
Haoran Wang and
Kai Shu
| 2,023 | null |
https://doi.org/10.18653/v1/2023.findings-emnlp.416
|
10.18653/V1/2023.FINDINGS-EMNLP.416
| null |
Explainable Claim Verification via Knowledge-Grounded Reasoning with
Large Language Models
|
[PDF] Explainable Claim Verification via Knowledge-Grounded Reasoning ...
|
https://aclanthology.org/2023.findings-emnlp.416.pdf
|
FOLK uses LLMs to translate claims into First-Order Logic, then uses knowledge-grounded reasoning to verify claims and generate explanations.
|
ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation
with Lightweight Specialized LLM
|
2505.22552v1
|
factkg
|
\cite{factkg}
|
FactKG: Fact Verification via Reasoning on Knowledge Graphs
|
http://arxiv.org/abs/2305.06590v2
|
In real world applications, knowledge graphs (KG) are widely used in various
domains (e.g. medical applications and dialogue agents). However, for fact
verification, KGs have not been adequately utilized as a knowledge source. KGs
can be a valuable knowledge source in fact verification due to their
reliability and broad applicability. A KG consists of nodes and edges which
makes it clear how concepts are linked together, allowing machines to reason
over chains of topics. However, there are many challenges in understanding how
these machine-readable concepts map to information in text. To enable the
community to better use KGs, we introduce a new dataset, FactKG: Fact
Verification via Reasoning on Knowledge Graphs. It consists of 108k natural
language claims with five types of reasoning: One-hop, Conjunction, Existence,
Multi-hop, and Negation. Furthermore, FactKG contains various linguistic
patterns, including colloquial style claims as well as written style claims to
increase practicality. Lastly, we develop a baseline approach and analyze
FactKG over these reasoning types. We believe FactKG can advance both
reliability and practicality in KG-based fact verification.
| true | true |
Jiho Kim and
Sungjin Park and
Yeonsu Kwon and
Yohan Jo and
James Thorne and
Edward Choi
| 2,023 | null |
https://doi.org/10.18653/v1/2023.acl-long.895
|
10.18653/V1/2023.ACL-LONG.895
| null |
FactKG: Fact Verification via Reasoning on Knowledge Graphs
|
FactKG: Fact Verification via Reasoning on Knowledge Graphs
|
http://arxiv.org/pdf/2305.06590v2
|
In real world applications, knowledge graphs (KG) are widely used in various
domains (e.g. medical applications and dialogue agents). However, for fact
verification, KGs have not been adequately utilized as a knowledge source. KGs
can be a valuable knowledge source in fact verification due to their
reliability and broad applicability. A KG consists of nodes and edges which
makes it clear how concepts are linked together, allowing machines to reason
over chains of topics. However, there are many challenges in understanding how
these machine-readable concepts map to information in text. To enable the
community to better use KGs, we introduce a new dataset, FactKG: Fact
Verification via Reasoning on Knowledge Graphs. It consists of 108k natural
language claims with five types of reasoning: One-hop, Conjunction, Existence,
Multi-hop, and Negation. Furthermore, FactKG contains various linguistic
patterns, including colloquial style claims as well as written style claims to
increase practicality. Lastly, we develop a baseline approach and analyze
FactKG over these reasoning types. We believe FactKG can advance both
reliability and practicality in KG-based fact verification.
|
ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation
with Lightweight Specialized LLM
|
2505.22552v1
|
kg_gpt
|
\cite{kg_gpt}
|
{KG-GPT:} {A} General Framework for Reasoning on Knowledge Graphs
Using Large Language Models
| null | null | true | false |
Jiho Kim and
Yeonsu Kwon and
Yohan Jo and
Edward Choi
| 2,023 | null |
https://doi.org/10.18653/v1/2023.findings-emnlp.631
|
10.18653/V1/2023.FINDINGS-EMNLP.631
| null |
{KG-GPT:} {A} General Framework for Reasoning on Knowledge Graphs
Using Large Language Models
|
KG-GPT: A General Framework for Reasoning on Knowledge ...
|
https://www.researchgate.net/publication/376404206_KG-GPT_A_General_Framework_for_Reasoning_on_Knowledge_Graphs_Using_Large_Language_Models
|
Recently, Large Language Models (LLMs) have shown remarkable proficiency, prompting growing interest in AQA among researchers. GraphLLM: A General Framework for Multi-hop Question Answering over Knowledge Graphs Using Large Language Models.
|
ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation
with Lightweight Specialized LLM
|
2505.22552v1
|
struct-gpt
|
\cite{struct-gpt}
|
{S}truct{GPT}: A General Framework for Large Language Model to Reason over Structured Data
| null | null | true | false |
Jiang, Jinhao and
Zhou, Kun and
Dong, Zican and
Ye, Keming and
Zhao, Xin and
Wen, Ji-Rong
| 2,023 | null |
https://aclanthology.org/2023.emnlp-main.574/
|
10.18653/v1/2023.emnlp-main.574
| null |
{S}truct{GPT}: A General Framework for Large Language Model to Reason over Structured Data
|
StructGPT: A General Framework for Large Language Model ... - arXiv
|
https://arxiv.org/abs/2305.09645
|
View a PDF of the paper titled StructGPT: A General Framework for Large Language Model to Reason over Structured Data, by Jinhao Jiang and 4 other authors. Abstract: In this paper, we study how to improve the zero-shot reasoning ability of large language models (LLMs) over structured data in a unified way.
|
ClaimPKG: Enhancing Claim Verification via Pseudo-Subgraph Generation
with Lightweight Specialized LLM
|
2505.22552v1
|
reasoningongraph
|
\cite{reasoningongraph}
|
Reasoning on Graphs: Faithful and Interpretable Large Language Model
Reasoning
|
http://arxiv.org/abs/2310.01061v2
|
Large language models (LLMs) have demonstrated impressive reasoning abilities
in complex tasks. However, they lack up-to-date knowledge and experience
hallucinations during reasoning, which can lead to incorrect reasoning
processes and diminish their performance and trustworthiness. Knowledge graphs
(KGs), which capture vast amounts of facts in a structured format, offer a
reliable source of knowledge for reasoning. Nevertheless, existing KG-based LLM
reasoning methods only treat KGs as factual knowledge bases and overlook the
importance of their structural information for reasoning. In this paper, we
propose a novel method called reasoning on graphs (RoG) that synergizes LLMs
with KGs to enable faithful and interpretable reasoning. Specifically, we
present a planning-retrieval-reasoning framework, where RoG first generates
relation paths grounded by KGs as faithful plans. These plans are then used to
retrieve valid reasoning paths from the KGs for LLMs to conduct faithful
reasoning. Furthermore, RoG not only distills knowledge from KGs to improve the
reasoning ability of LLMs through training but also allows seamless integration
with any arbitrary LLMs during inference. Extensive experiments on two
benchmark KGQA datasets demonstrate that RoG achieves state-of-the-art
performance on KG reasoning tasks and generates faithful and interpretable
reasoning results.
| true | true |
Linhao Luo and
Yuan{-}Fang Li and
Gholamreza Haffari and
Shirui Pan
| 2,024 | null |
https://openreview.net/forum?id=ZGNWW7xZ6Q
| null | null |
Reasoning on Graphs: Faithful and Interpretable Large Language Model
Reasoning
|
Faithful and Interpretable Large Language Model Reasoning
|
https://arxiv.org/abs/2310.01061
|
**arXiv:2310.01061** (cs.CL) — Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning, by Linhao Luo and 3 other authors.
|
VDDP: Verifiable Distributed Differential Privacy under the
Client-Server-Verifier Setup
|
2504.21752v1
|
DBLP:conf/eurosys/NarayanFPH15
|
\cite{DBLP:conf/eurosys/NarayanFPH15}
|
Verifiable Differential Privacy
|
http://arxiv.org/abs/2208.09011v2
|
Differential Privacy (DP) is often presented as a strong privacy-enhancing
technology with broad applicability and advocated as a de-facto standard for
releasing aggregate statistics on sensitive data. However, in many embodiments,
DP introduces a new attack surface: a malicious entity entrusted with releasing
statistics could manipulate the results and use the randomness of DP as a
convenient smokescreen to mask its nefariousness. Since revealing the random
noise would obviate the purpose of introducing it, the miscreant may have a
perfect alibi. To close this loophole, we introduce the idea of
\textit{Verifiable Differential Privacy}, which requires the publishing entity
to output a zero-knowledge proof that convinces an efficient verifier that the
output is both DP and reliable. Such a definition might seem unachievable, as a
verifier must validate that DP randomness was generated faithfully without
learning anything about the randomness itself. We resolve this paradox by
carefully mixing private and public randomness to compute verifiable DP
counting queries with theoretical guarantees and show that it is also practical
for real-world deployment. We also demonstrate that computational assumptions
are necessary by showing a separation between information-theoretic DP and
computational DP under our definition of verifiability.
| true | true |
Arjun Narayan and
Ariel Feldman and
Antonis Papadimitriou and
Andreas Haeberlen
| 2,015 | null |
https://doi.org/10.1145/2741948.2741978
|
10.1145/2741948.2741978
| null |
Verifiable Differential Privacy
|
Verifiable Differential Privacy
|
http://arxiv.org/pdf/2208.09011v2
|
Differential Privacy (DP) is often presented as a strong privacy-enhancing
technology with broad applicability and advocated as a de-facto standard for
releasing aggregate statistics on sensitive data. However, in many embodiments,
DP introduces a new attack surface: a malicious entity entrusted with releasing
statistics could manipulate the results and use the randomness of DP as a
convenient smokescreen to mask its nefariousness. Since revealing the random
noise would obviate the purpose of introducing it, the miscreant may have a
perfect alibi. To close this loophole, we introduce the idea of
\textit{Verifiable Differential Privacy}, which requires the publishing entity
to output a zero-knowledge proof that convinces an efficient verifier that the
output is both DP and reliable. Such a definition might seem unachievable, as a
verifier must validate that DP randomness was generated faithfully without
learning anything about the randomness itself. We resolve this paradox by
carefully mixing private and public randomness to compute verifiable DP
counting queries with theoretical guarantees and show that it is also practical
for real-world deployment. We also demonstrate that computational assumptions
are necessary by showing a separation between information-theoretic DP and
computational DP under our definition of verifiability.
|
VDDP: Verifiable Distributed Differential Privacy under the
Client-Server-Verifier Setup
|
2504.21752v1
|
dprio
|
\cite{dprio}
|
DPrio: Efficient Differential Privacy with High Utility for Prio
| null | null | true | false |
Dana Keeler and
Chelsea Komlo and
Emily Lepert and
Shannon Veitch and
Xi He
| 2,023 | null |
https://doi.org/10.56553/popets-2023-0086
|
10.56553/POPETS-2023-0086
|
Proc. Priv. Enhancing Technol.
|
DPrio: Efficient Differential Privacy with High Utility for Prio
|
DPrio: Efficient Differential Privacy with High Utility for Prio
|
https://petsymposium.org/popets/2023/popets-2023-0086.php
|
We present a lightweight method that we call DPrio to augment Prio and related systems with differential privacy assurances while ensuring higher data utility.
|
VDDP: Verifiable Distributed Differential Privacy under the
Client-Server-Verifier Setup
|
2504.21752v1
|
KCY21
|
\cite{KCY21}
|
Preventing Manipulation Attack in Local Differential Privacy using
Verifiable Randomization Mechanism
|
http://arxiv.org/abs/2104.06569v2
|
Several randomization mechanisms for local differential privacy (LDP) (e.g.,
randomized response) are well-studied to improve the utility. However, recent
studies show that LDP is generally vulnerable to malicious data providers in
nature. Because a data collector has to estimate background data distribution
only from already randomized data, malicious data providers can manipulate
their output before sending, i.e., randomization would provide them plausible
deniability. Attackers can skew the estimations effectively since they are
calculated by normalizing with randomization probability defined in the LDP
protocol, and can even control the estimations. In this paper, we show how we
prevent malicious attackers from compromising LDP protocol. Our approach is to
utilize a verifiable randomization mechanism. The data collector can verify the
completeness of executing an agreed randomization mechanism for every data
provider. Our proposed method completely protects the LDP protocol from
output-manipulations, and significantly mitigates the expected damage from
attacks. We do not assume any specific attacks, and it works effectively
against general output-manipulation, and thus is more powerful than previously
proposed countermeasures. We describe the secure version of three
state-of-the-art LDP protocols and empirically show they cause acceptable
overheads according to several parameters.
| true | true |
Fumiyuki Kato and
Yang Cao and
Masatoshi Yoshikawa
| 2,021 | null |
https://doi.org/10.1007/978-3-030-81242-3\_3
|
10.1007/978-3-030-81242-3\_3
| null |
Preventing Manipulation Attack in Local Differential Privacy using
Verifiable Randomization Mechanism
|
Preventing Manipulation Attack in Local Differential Privacy ...
|
https://inria.hal.science/hal-03677038v1
|
In this paper, we propose secure and efficient verifiable LDP protocols to prevent manipulation attacks. Specifically, we leverage Cryptographic Randomized
|
VDDP: Verifiable Distributed Differential Privacy under the
Client-Server-Verifier Setup
|
2504.21752v1
|
DBLP:conf/iclr/ShamsabadiTCBHP24
|
\cite{DBLP:conf/iclr/ShamsabadiTCBHP24}
|
Confidential-DPproof: Confidential Proof of Differentially Private
Training
| null | null | true | false |
Ali Shahin Shamsabadi and
Gefei Tan and
Tudor Cebere and
Aur{\'{e}}lien Bellet and
Hamed Haddadi and
Nicolas Papernot and
Xiao Wang and
Adrian Weller
| 2,024 | null |
https://openreview.net/forum?id=PQY2v6VtGe
| null | null |
Confidential-DPproof: Confidential Proof of Differentially Private
Training
|
[PDF] Confidential-DPproof - OpenReview
|
https://openreview.net/pdf?id=PQY2v6VtGe
|
We introduce Confidential-DPproof, a framework for Confidential Proof of Differentially Private Training, which enhances training with a certificate of the (ε
|
VDDP: Verifiable Distributed Differential Privacy under the
Client-Server-Verifier Setup
|
2504.21752v1
|
BC23
|
\cite{BC23}
|
Interactive Proofs For Differentially Private Counting
| null | null | true | false |
Ari Biswas and
Graham Cormode
| 2,023 | null |
https://doi.org/10.1145/3576915.3616681
|
10.1145/3576915.3616681
| null |
Interactive Proofs For Differentially Private Counting
|
Interactive Proofs For Differentially Private Counting
|
https://dl.acm.org/doi/10.1145/3576915.3616681
|
We introduce the idea of Interactive Proofs For Differential Privacy, which requires the publishing entity to output a zero knowledge proof.
|
VDDP: Verifiable Distributed Differential Privacy under the
Client-Server-Verifier Setup
|
2504.21752v1
|
DBLP:conf/pkc/AmbainisJL04
|
\cite{DBLP:conf/pkc/AmbainisJL04}
|
Cryptographic Randomized Response Techniques
|
http://arxiv.org/abs/cs/0302025v2
|
We develop cryptographically secure techniques to guarantee unconditional
privacy for respondents to polls. Our constructions are efficient and
practical, and are shown not to allow cheating respondents to affect the
``tally'' by more than their own vote -- which will be given the exact same
weight as that of other respondents. We demonstrate solutions to this problem
based on both traditional cryptographic techniques and quantum cryptography.
| true | true |
Andris Ambainis and
Markus Jakobsson and
Helger Lipmaa
| 2,004 | null |
https://doi.org/10.1007/978-3-540-24632-9\_31
|
10.1007/978-3-540-24632-9\_31
| null |
Cryptographic Randomized Response Techniques
|
Cryptographic Randomized Response Techniques
|
http://arxiv.org/pdf/cs/0302025v2
|
We develop cryptographically secure techniques to guarantee unconditional
privacy for respondents to polls. Our constructions are efficient and
practical, and are shown not to allow cheating respondents to affect the
``tally'' by more than their own vote -- which will be given the exact same
weight as that of other respondents. We demonstrate solutions to this problem
based on both traditional cryptographic techniques and quantum cryptography.
|
VDDP: Verifiable Distributed Differential Privacy under the
Client-Server-Verifier Setup
|
2504.21752v1
|
DBLP:conf/sp/BonehBCGI21
|
\cite{DBLP:conf/sp/BonehBCGI21}
|
Lightweight Techniques for Private Heavy Hitters
|
http://arxiv.org/abs/2012.14884v5
|
This paper presents Poplar, a new system for solving the private
heavy-hitters problem. In this problem, there are many clients and a small set
of data-collection servers. Each client holds a private bitstring. The servers
want to recover the set of all popular strings, without learning anything else
about any client's string. A web-browser vendor, for instance, can use Poplar
to figure out which homepages are popular, without learning any user's
homepage. We also consider the simpler private subset-histogram problem, in
which the servers want to count how many clients hold strings in a particular
set without revealing this set to the clients.
Poplar uses two data-collection servers and, in a protocol run, each client
sends only a single message to the servers. Poplar protects client privacy
against arbitrary misbehavior by one of the servers and our approach requires
no public-key cryptography (except for secure channels), nor general-purpose
multiparty computation. Instead, we rely on incremental distributed point
functions, a new cryptographic tool that allows a client to succinctly
secret-share the labels on the nodes of an exponentially large binary tree,
provided that the tree has a single non-zero path. Along the way, we develop
new general tools for providing malicious security in applications of
distributed point functions.
| true | true |
Dan Boneh and
Elette Boyle and
Henry Corrigan{-}Gibbs and
Niv Gilboa and
Yuval Ishai
| 2,021 | null |
https://doi.org/10.1109/SP40001.2021.00048
|
10.1109/SP40001.2021.00048
| null |
Lightweight Techniques for Private Heavy Hitters
|
Lightweight Techniques for Private Heavy Hitters
|
http://arxiv.org/pdf/2012.14884v5
|
This paper presents Poplar, a new system for solving the private
heavy-hitters problem. In this problem, there are many clients and a small set
of data-collection servers. Each client holds a private bitstring. The servers
want to recover the set of all popular strings, without learning anything else
about any client's string. A web-browser vendor, for instance, can use Poplar
to figure out which homepages are popular, without learning any user's
homepage. We also consider the simpler private subset-histogram problem, in
which the servers want to count how many clients hold strings in a particular
set without revealing this set to the clients.
Poplar uses two data-collection servers and, in a protocol run, each client
sends only a single message to the servers. Poplar protects client privacy
against arbitrary misbehavior by one of the servers and our approach requires
no public-key cryptography (except for secure channels), nor general-purpose
multiparty computation. Instead, we rely on incremental distributed point
functions, a new cryptographic tool that allows a client to succinctly
secret-share the labels on the nodes of an exponentially large binary tree,
provided that the tree has a single non-zero path. Along the way, we develop
new general tools for providing malicious security in applications of
distributed point functions.
|
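The Poplar record above builds on clients secret-sharing their private bitstrings between two servers. As an illustrative aside (plain XOR additive sharing, not Poplar's incremental distributed point functions; function names are ours):

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def share(secret: bytes, n: int = 2) -> list[bytes]:
    """Split `secret` into n XOR-shares; any n-1 shares look uniformly random."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))  # last share fixes the XOR
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the secret."""
    return reduce(xor_bytes, shares)

msg = b"example.com"
assert reconstruct(share(msg, n=2)) == msg
```

Each server alone sees only a random-looking share; the popular-string recovery in Poplar layers tree-structured point functions on top of this basic idea.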
VDDP: Verifiable Distributed Differential Privacy under the
Client-Server-Verifier Setup
|
2504.21752v1
|
DBLP:conf/sigmod/ChowdhuryW0MJ20
|
\cite{DBLP:conf/sigmod/ChowdhuryW0MJ20}
|
Crypt$ε$: Crypto-Assisted Differential Privacy on Untrusted
Servers
|
http://arxiv.org/abs/1902.07756v5
|
Differential privacy (DP) has steadily become the de-facto standard for
achieving privacy in data analysis, which is typically implemented either in
the "central" or "local" model. The local model has been more popular for
commercial deployments as it does not require a trusted data collector. This
increased privacy, however, comes at a cost of utility and algorithmic
expressibility as compared to the central model.
In this work, we propose, Crypt$\epsilon$, a system and programming framework
that (1) achieves the accuracy guarantees and algorithmic expressibility of the
central model (2) without any trusted data collector like in the local model.
Crypt$\epsilon$ achieves the "best of both worlds" by employing two
non-colluding untrusted servers that run DP programs on encrypted data from the
data owners. Although straightforward implementations of DP programs using
secure computation tools can achieve the above goal theoretically, in practice
they are beset with many challenges such as poor performance and tricky
security proofs. To this end, Crypt$\epsilon$ allows data analysts to author
logical DP programs that are automatically translated to secure protocols that
work on encrypted data. These protocols ensure that the untrusted servers learn
nothing more than the noisy outputs, thereby guaranteeing DP (for
computationally bounded adversaries) for all Crypt$\epsilon$ programs.
Crypt$\epsilon$ supports a rich class of DP programs that can be expressed via
a small set of transformation and measurement operators followed by arbitrary
post-processing. Further, we propose performance optimizations leveraging the
fact that the output is noisy. We demonstrate Crypt$\epsilon$'s feasibility for
practical DP analysis with extensive empirical evaluations on real datasets.
| true | true |
Amrita Roy Chowdhury and
Chenghong Wang and
Xi He and
Ashwin Machanavajjhala and
Somesh Jha
| 2,020 | null |
https://doi.org/10.1145/3318464.3380596
|
10.1145/3318464.3380596
| null |
Crypt$ε$: Crypto-Assisted Differential Privacy on Untrusted
Servers
|
Crypt$ε$: Crypto-Assisted Differential Privacy on Untrusted Servers
|
https://arxiv.org/abs/1902.07756
|
Crypt\epsilon allows data analysts to author logical DP programs that are automatically translated to secure protocols that work on encrypted data.
|
VDDP: Verifiable Distributed Differential Privacy under the
Client-Server-Verifier Setup
|
2504.21752v1
|
DBLP:conf/ccs/BellBGL020
|
\cite{DBLP:conf/ccs/BellBGL020}
|
Secure Single-Server Aggregation with (Poly)Logarithmic Overhead
| null | null | true | false |
James Henry Bell and
Kallista A. Bonawitz and
Adri{\`{a}} Gasc{\'{o}}n and
Tancr{\`{e}}de Lepoint and
Mariana Raykova
| 2,020 | null |
https://doi.org/10.1145/3372297.3417885
|
10.1145/3372297.3417885
| null |
Secure Single-Server Aggregation with (Poly)Logarithmic Overhead
|
Secure Single-Server Aggregation with (Poly)Logarithmic Overhead
|
https://eprint.iacr.org/2020/704
|
We present the first constructions for secure aggregation that achieve polylogarithmic communication and computation per client.
|
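The secure-aggregation record above (Bell et al.) follows the mask-based paradigm: each client blinds its input with pairwise masks that cancel in the sum, so servers learn only the aggregate. A toy single-process sketch (names and the shared-seed shortcut are illustrative; real protocols derive pairwise masks via key agreement):

```python
import random

def masked_inputs(values, modulus=2**16, seed=0):
    """Blind each client's value with pairwise masks that cancel in the sum."""
    rng = random.Random(seed)
    n = len(values)
    # mask m_ij shared by clients i < j
    masks = {(i, j): rng.randrange(modulus) for i in range(n) for j in range(i + 1, n)}
    blinded = []
    for i, v in enumerate(values):
        b = v
        for j in range(n):
            if i < j:
                b = (b + masks[(i, j)]) % modulus  # i adds the mask
            elif j < i:
                b = (b - masks[(j, i)]) % modulus  # j subtracts the same mask
        blinded.append(b)
    return blinded

vals = [3, 5, 7]
assert sum(masked_inputs(vals)) % 2**16 == sum(vals)
```

Each mask is added exactly once and subtracted exactly once, so the sum of blinded values equals the true sum modulo the ring size, while any individual blinded value is uniformly distributed.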
VDDP: Verifiable Distributed Differential Privacy under the
Client-Server-Verifier Setup
|
2504.21752v1
|
DBLP:conf/eurocrypt/DworkKMMN06
|
\cite{DBLP:conf/eurocrypt/DworkKMMN06}
|
Our Data, Ourselves: Privacy Via Distributed Noise Generation
| null | null | true | false |
Cynthia Dwork and
Krishnaram Kenthapadi and
Frank McSherry and
Ilya Mironov and
Moni Naor
| 2,006 | null |
https://doi.org/10.1007/11761679_29
|
10.1007/11761679_29
| null |
Our Data, Ourselves: Privacy Via Distributed Noise Generation
|
[PDF] Our Data, Ourselves: Privacy via Distributed Noise Generation - IACR
|
https://iacr.org/archive/eurocrypt2006/40040493/40040493.pdf
|
In this work we provide efficient distributed protocols for generating shares of random noise, secure against malicious participants. The purpose
|
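The Dwork et al. record concerns generating noise in shares across parties. One standard trick in this setting, sketched here under the simplifying assumption of honest parties and using the Gaussian's infinite divisibility: each of n parties contributes N(0, sigma^2/n) noise, so the aggregate carries exactly N(0, sigma^2) noise with no single party knowing the total perturbation.

```python
import math
import random

def distributed_gaussian_sum(true_sum: float, n_parties: int, sigma: float, seed: int = 1) -> float:
    """Each party adds N(0, sigma^2 / n) noise; the aggregate noise is N(0, sigma^2)."""
    rng = random.Random(seed)
    noisy = true_sum
    for _ in range(n_parties):
        noisy += rng.gauss(0.0, sigma / math.sqrt(n_parties))
    return noisy

result = distributed_gaussian_sum(100.0, n_parties=10, sigma=1.0)
```

The papers cited here handle the harder part this sketch ignores: making the sampling verifiable and secure against malicious participants.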
VDDP: Verifiable Distributed Differential Privacy under the
Client-Server-Verifier Setup
|
2504.21752v1
|
DBLP:conf/ccs/ChampionSU19
|
\cite{DBLP:conf/ccs/ChampionSU19}
|
Securely Sampling Biased Coins with Applications to Differential Privacy
| null | null | true | false |
Jeffrey Champion and
Abhi Shelat and
Jonathan R. Ullman
| 2,019 | null |
https://doi.org/10.1145/3319535.3354256
|
10.1145/3319535.3354256
| null |
Securely Sampling Biased Coins with Applications to Differential Privacy
|
Securely Sampling Biased Coins with Applications to ...
|
https://www.cs.utexas.edu/~jchamps/Slides/SecurelySampling.pdf
|
by J Champion · Cited by 37 — Securely Sampling Biased Coins with Applications to Differential Privacy. Jeffrey Champion, abhi shelat, Jonathan Ullman. Northeastern University.
|
VDDP: Verifiable Distributed Differential Privacy under the
Client-Server-Verifier Setup
|
2504.21752v1
|
DBLP:conf/uss/BohlerK20
|
\cite{DBLP:conf/uss/BohlerK20}
|
Secure Multi-party Computation of Differentially Private Median
| null | null | true | false |
Jonas B{\"{o}}hler and
Florian Kerschbaum
| 2,020 | null |
https://www.usenix.org/conference/usenixsecurity20/presentation/boehler
| null | null |
Secure Multi-party Computation of Differentially Private Median
|
[PDF] Secure Multi-party Computation of Differentially Private Median
|
https://www.usenix.org/system/files/sec20-bohler.pdf
|
In the following, we introduce preliminaries for differential privacy and secure multi-party computation. We consider a set of input parties P =
|
VDDP: Verifiable Distributed Differential Privacy under the
Client-Server-Verifier Setup
|
2504.21752v1
|
DBLP:conf/ccs/BohlerK21
|
\cite{DBLP:conf/ccs/BohlerK21}
|
Secure Multi-party Computation of Differentially Private Heavy Hitters
| null | null | true | false |
Jonas B{\"{o}}hler and
Florian Kerschbaum
| 2,021 | null |
https://doi.org/10.1145/3460120.3484557
|
10.1145/3460120.3484557
| null |
Secure Multi-party Computation of Differentially Private Heavy Hitters
|
Secure Multi-party Computation of Differentially Private Heavy ...
|
https://dl.acm.org/doi/10.1145/3460120.3484557
|
null
|
VDDP: Verifiable Distributed Differential Privacy under the
Client-Server-Verifier Setup
|
2504.21752v1
|
DBLP:journals/corr/abs-2109-10074
|
\cite{DBLP:journals/corr/abs-2109-10074}
|
{STAR:} Distributed Secret Sharing for Private Threshold Aggregation
Reporting
| null | null | true | false |
Alex Davidson and
Peter Snyder and
E. B. Quirk and
Joseph Genereux and
Benjamin Livshits
| 2,021 | null |
https://arxiv.org/abs/2109.10074
| null |
CoRR
|
{STAR:} Distributed Secret Sharing for Private Threshold Aggregation
Reporting
|
draft-dss-star-02 - STAR: Distributed Secret Sharing for ...
|
https://datatracker.ietf.org/doc/draft-dss-star/
|
In this document we describe STAR, an efficient and secure threshold aggregation protocol for collecting measurements from clients by an untrusted aggregation
|
VDDP: Verifiable Distributed Differential Privacy under the
Client-Server-Verifier Setup
|
2504.21752v1
|
DBLP:conf/ccs/WeiYFCW23
|
\cite{DBLP:conf/ccs/WeiYFCW23}
|
Securely Sampling Discrete Gaussian Noise for Multi-Party Differential
Privacy
| null | null | true | false |
Chengkun Wei and
Ruijing Yu and
Yuan Fan and
Wenzhi Chen and
Tianhao Wang
| 2,023 | null |
https://doi.org/10.1145/3576915.3616641
|
10.1145/3576915.3616641
| null |
Securely Sampling Discrete Gaussian Noise for Multi-Party Differential
Privacy
|
Securely Sampling Discrete Gaussian Noise for Multi-Party ...
|
https://dl.acm.org/doi/10.1145/3576915.3616641
|
Our work presents the first MPC solution for sampling discrete Gaussian, a common type of noise used for constructing DP mechanisms, which plays nicely with
|
VDDP: Verifiable Distributed Differential Privacy under the
Client-Server-Verifier Setup
|
2504.21752v1
|
DBLP:conf/ccs/FuW24
|
\cite{DBLP:conf/ccs/FuW24}
|
Benchmarking Secure Sampling Protocols for Differential Privacy
|
http://arxiv.org/abs/2409.10667v2
|
Differential privacy (DP) is widely employed to provide privacy protection
for individuals by limiting information leakage from the aggregated data. Two
well-known models of DP are the central model and the local model. The former
requires a trustworthy server for data aggregation, while the latter requires
individuals to add noise, significantly decreasing the utility of aggregated
results. Recently, many studies have proposed to achieve DP with Secure
Multi-party Computation (MPC) in distributed settings, namely, the distributed
model, which has utility comparable to central model while, under specific
security assumptions, preventing parties from obtaining others' information.
One challenge of realizing DP in distributed model is efficiently sampling
noise with MPC. Although many secure sampling methods have been proposed, they
have different security assumptions and isolated theoretical analyses. There is
a lack of experimental evaluations to measure and compare their performances.
We fill this gap by benchmarking existing sampling protocols in MPC and
performing comprehensive measurements of their efficiency. First, we present a
taxonomy of the underlying techniques of these sampling protocols. Second, we
extend widely used distributed noise generation protocols to be resilient
against Byzantine attackers. Third, we implement discrete sampling protocols
and align their security settings for a fair comparison. We then conduct an
extensive evaluation to study their efficiency and utility.
| true | true |
Yucheng Fu and
Tianhao Wang
| 2,024 | null |
https://doi.org/10.1145/3658644.3690257
|
10.1145/3658644.3690257
| null |
Benchmarking Secure Sampling Protocols for Differential Privacy
|
Benchmarking Secure Sampling Protocols for Differential Privacy
|
http://arxiv.org/pdf/2409.10667v2
|
Differential privacy (DP) is widely employed to provide privacy protection
for individuals by limiting information leakage from the aggregated data. Two
well-known models of DP are the central model and the local model. The former
requires a trustworthy server for data aggregation, while the latter requires
individuals to add noise, significantly decreasing the utility of aggregated
results. Recently, many studies have proposed to achieve DP with Secure
Multi-party Computation (MPC) in distributed settings, namely, the distributed
model, which has utility comparable to central model while, under specific
security assumptions, preventing parties from obtaining others' information.
One challenge of realizing DP in distributed model is efficiently sampling
noise with MPC. Although many secure sampling methods have been proposed, they
have different security assumptions and isolated theoretical analyses. There is
a lack of experimental evaluations to measure and compare their performances.
We fill this gap by benchmarking existing sampling protocols in MPC and
performing comprehensive measurements of their efficiency. First, we present a
taxonomy of the underlying techniques of these sampling protocols. Second, we
extend widely used distributed noise generation protocols to be resilient
against Byzantine attackers. Third, we implement discrete sampling protocols
and align their security settings for a fair comparison. We then conduct an
extensive evaluation to study their efficiency and utility.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
TabelDiscovery
|
\cite{TabelDiscovery}
|
Table Discovery in Data Lakes: State-of-the-art and Future Directions
| null | null | true | false |
Grace Fan and
Jin Wang and
Yuliang Li and
Ren{\'{e}}e J. Miller
| 2,023 | null | null | null | null |
Table Discovery in Data Lakes: State-of-the-art and Future Directions
|
Table Discovery in Data Lakes: State-of-the-art and Future Directions
|
https://dl.acm.org/doi/pdf/10.1145/3555041.3589409
|
We will cover table understanding tasks such as domain discovery, table annotation, and table representation learning which help data lake
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
DataLake_Survey
|
\cite{DataLake_Survey}
|
Data Lakes: A Survey of Functions and Systems
|
http://arxiv.org/abs/2106.09592v2
|
Data lakes are becoming increasingly prevalent for big data management and
data analytics. In contrast to traditional 'schema-on-write' approaches such as
data warehouses, data lakes are repositories storing raw data in its original
formats and providing a common access interface. Despite the strong interest
raised from both academia and industry, there is a large body of ambiguity
regarding the definition, functions and available technologies for data lakes.
A complete, coherent picture of data lake challenges and solutions is still
missing. This survey reviews the development, architectures, and systems of
data lakes. We provide a comprehensive overview of research questions for
designing and building data lakes. We classify the existing approaches and
systems based on their provided functions for data lakes, which makes this
survey a useful technical reference for designing, implementing and deploying
data lakes. We hope that the thorough comparison of existing solutions and the
discussion of open research challenges in this survey will motivate the future
development of data lake research and practice.
| true | true |
Rihan Hai and
Christos Koutras and
Christoph Quix and
Matthias Jarke
| 2,023 | null | null | null |
{IEEE} Trans. Knowl. Data Eng.
|
Data Lakes: A Survey of Functions and Systems
|
Data Lakes: A Survey of Functions and Systems
|
http://arxiv.org/pdf/2106.09592v2
|
Data lakes are becoming increasingly prevalent for big data management and
data analytics. In contrast to traditional 'schema-on-write' approaches such as
data warehouses, data lakes are repositories storing raw data in its original
formats and providing a common access interface. Despite the strong interest
raised from both academia and industry, there is a large body of ambiguity
regarding the definition, functions and available technologies for data lakes.
A complete, coherent picture of data lake challenges and solutions is still
missing. This survey reviews the development, architectures, and systems of
data lakes. We provide a comprehensive overview of research questions for
designing and building data lakes. We classify the existing approaches and
systems based on their provided functions for data lakes, which makes this
survey a useful technical reference for designing, implementing and deploying
data lakes. We hope that the thorough comparison of existing solutions and the
discussion of open research challenges in this survey will motivate the future
development of data lake research and practice.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
AdelfioS13
|
\cite{AdelfioS13}
|
Schema Extraction for Tabular Data on the Web
| null | null | true | false |
Marco D. Adelfio and
Hanan Samet
| 2,013 | null | null | null |
Proc. {VLDB} Endow.
|
Schema Extraction for Tabular Data on the Web
|
[PDF] Schema Extraction for Tabular Data on the Web - VLDB Endowment
|
http://www.vldb.org/pvldb/vol6/p421-adelfio.pdf
|
The schemas of these data tables are determined using a classification technique based on conditional random fields in combination with a novel feature
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
GoogleSearch
|
\cite{GoogleSearch}
|
Google Dataset Search: Building a search engine for datasets in an
open Web ecosystem
| null | null | true | false |
Dan Brickley and
Matthew Burgess and
Natasha F. Noy
| 2,019 | null | null | null | null |
Google Dataset Search: Building a search engine for datasets in an
open Web ecosystem
|
Building a search engine for datasets in an open Web ecosystem
|
https://research.google/pubs/google-dataset-search-building-a-search-engine-for-datasets-in-an-open-web-ecosystem/
|
In this paper, we discuss Google Dataset Search, a dataset-discovery tool that provides search capabilities over potentially all datasets published on the Web.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
JOSIE
|
\cite{JOSIE}
|
{JOSIE:} Overlap Set Similarity Search for Finding Joinable Tables in Data Lakes
| null | null | true | false |
Erkang Zhu and
Dong Deng and
Fatemeh Nargesian and
Ren{\'{e}}e J. Miller
| 2,019 | null | null | null | null |
{JOSIE:} Overlap Set Similarity Search for Finding Joinable Tables in Data Lakes
|
JOSIE: Overlap Set Similarity Search for Finding Joinable Tables in ...
|
https://dl.acm.org/doi/10.1145/3299869.3300065
|
We show that JOSIE completely outperforms the state-of-the-art overlap set similarity search techniques on data lakes.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
Deepjoin
|
\cite{Deepjoin}
|
DeepJoin: Joinable Table Discovery with Pre-trained Language Models
|
http://arxiv.org/abs/2212.07588v2
|
Due to the usefulness in data enrichment for data analysis tasks, joinable
table discovery has become an important operation in data lake management.
Existing approaches target equi-joins, the most common way of combining tables
for creating a unified view, or semantic joins, which tolerate misspellings and
different formats to deliver more join results. They are either exact solutions
whose running time is linear in the sizes of query column and target table
repository or approximate solutions lacking precision. In this paper, we
propose Deepjoin, a deep learning model for accurate and efficient joinable
table discovery. Our solution is an embedding-based retrieval, which employs a
pre-trained language model (PLM) and is designed as one framework serving both
equi- and semantic joins. We propose a set of contextualization options to
transform column contents to a text sequence. The PLM reads the sequence and is
fine-tuned to embed columns to vectors such that columns are expected to be
joinable if they are close to each other in the vector space. Since the output
of the PLM is fixed in length, the subsequent search procedure becomes
independent of the column size. With a state-of-the-art approximate nearest
neighbor search algorithm, the search time is logarithmic in the repository
size. To train the model, we devise the techniques for preparing training data
as well as data augmentation. The experiments on real datasets demonstrate that
by training on a small subset of a corpus, Deepjoin generalizes to large
datasets and its precision consistently outperforms other approximate
solutions. Deepjoin is even more accurate than an exact solution to semantic
joins when evaluated with labels from experts. Moreover, when equipped with a
GPU, Deepjoin is up to two orders of magnitude faster than existing solutions.
| true | true |
Yuyang Dong and
Chuan Xiao and
Takuma Nozawa and
Masafumi Enomoto and
Masafumi Oyamada
| 2,023 | null | null | null |
Proc. {VLDB} Endow.
|
DeepJoin: Joinable Table Discovery with Pre-trained Language Models
|
[PDF] DeepJoin: Joinable Table Discovery with Pre-trained Language ...
|
https://www.vldb.org/pvldb/vol16/p2458-dong.pdf
|
DeepJoin is a deep learning model using a pre-trained language model for joinable table discovery, handling both equi- and semantic joins.
|
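The DeepJoin record describes "contextualization options" that flatten a column into a text sequence a pre-trained language model can embed. A hypothetical helper in the spirit of the abstract (the exact template string is our assumption, not DeepJoin's published one):

```python
def contextualize(table_name: str, col_name: str, values: list[str], max_vals: int = 5) -> str:
    """Flatten a column into a text sequence for PLM embedding (illustrative template)."""
    cells = ", ".join(values[:max_vals])  # truncate long columns to a fixed-length prefix
    return f"{table_name}. {col_name} contains {len(values)} values: {cells}"

seq = contextualize("airports", "iata_code", ["LAX", "JFK", "SFO", "ORD"])
```

Because the PLM emits a fixed-length vector regardless of column size, the subsequent nearest-neighbor search over these embeddings is independent of how many cells each column holds.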
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
Snoopy
|
\cite{Snoopy}
|
Snoopy: Effective and Efficient Semantic Join Discovery via Proxy
Columns
|
http://arxiv.org/abs/2502.16813v1
|
Semantic join discovery, which aims to find columns in a table repository
with high semantic joinabilities to a query column, is crucial for dataset
discovery. Existing methods can be divided into two categories: cell-level
methods and column-level methods. However, neither of them ensures both
effectiveness and efficiency simultaneously. Cell-level methods, which compute
the joinability by counting cell matches between columns, enjoy ideal
effectiveness but suffer poor efficiency. In contrast, column-level methods,
which determine joinability only by computing the similarity of column
embeddings, enjoy proper efficiency but suffer poor effectiveness due to the
issues occurring in their column embeddings: (i) semantics-joinability-gap,
(ii) size limit, and (iii) permutation sensitivity. To address these issues,
this paper proposes to compute column embeddings via proxy columns;
furthermore, a novel column-level semantic join discovery framework, Snoopy, is
presented, leveraging proxy-column-based embeddings to bridge effectiveness and
efficiency. Specifically, the proposed column embeddings are derived from the
implicit column-to-proxy-column relationships, which are captured by the
lightweight approximate-graph-matching-based column projection.To acquire good
proxy columns for guiding the column projection, we introduce a rank-aware
contrastive learning paradigm. Extensive experiments on four real-world
datasets demonstrate that Snoopy outperforms SOTA column-level methods by 16%
in Recall@25 and 10% in NDCG@25, and achieves superior efficiency--being at
least 5 orders of magnitude faster than cell-level solutions, and 3.5x faster
than existing column-level methods.
| true | true |
Guo, Yuxiang and Mao, Yuren and Hu, Zhonghao and Chen, Lu and Gao, Yunjun
| 2,025 | null | null | null |
arXiv preprint arXiv:2502.16813
|
Snoopy: Effective and Efficient Semantic Join Discovery via Proxy
Columns
|
Effective and Efficient Semantic Join Discovery via Proxy Columns
|
https://arxiv.org/abs/2502.16813
|
A novel column-level semantic join discovery framework, Snoopy, is presented, leveraging proxy-column-based embeddings to bridge effectiveness and efficiency.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
starmine
|
\cite{starmine}
|
Semantics-aware Dataset Discovery from Data Lakes with Contextualized
Column-based Representation Learning
|
http://arxiv.org/abs/2210.01922v2
|
Dataset discovery from data lakes is essential in many real application
scenarios. In this paper, we propose Starmie, an end-to-end framework for
dataset discovery from data lakes (with table union search as the main use
case). Our proposed framework features a contrastive learning method to train
column encoders from pre-trained language models in a fully unsupervised
manner. The column encoder of Starmie captures the rich contextual semantic
information within tables by leveraging a contrastive multi-column pre-training
strategy. We utilize the cosine similarity between column embedding vectors as
the column unionability score and propose a filter-and-verification framework
that allows exploring a variety of design choices to compute the unionability
score between two tables accordingly. Empirical evaluation results on real
table benchmark datasets show that Starmie outperforms the best-known solutions
in the effectiveness of table union search by 6.8 in MAP and recall. Moreover,
Starmie is the first to employ the HNSW (Hierarchical Navigable Small World)
index for accelerate query processing of table union search which provides a
3,000X performance gain over the linear scan baseline and a 400X performance
gain over an LSH index (the state-of-the-art solution for data lake indexing).
| true | true |
Grace Fan and
Jin Wang and
Yuliang Li and
Dan Zhang and
Ren{\'{e}}e J. Miller
| 2,023 | null | null | null |
Proc. {VLDB} Endow.
|
Semantics-aware Dataset Discovery from Data Lakes with Contextualized
Column-based Representation Learning
|
Semantics-aware Dataset Discovery from Data Lakes with ...
|
https://www.researchgate.net/publication/364194737_Semantics-aware_Dataset_Discovery_from_Data_Lakes_with_Contextualized_Column-based_Representation_Learning
|
Our proposed framework features a contrastive learning method to train column encoders from pre-trained language models in a fully unsupervised
|
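The Starmie record uses cosine similarity between column embedding vectors as the column unionability score. A minimal sketch with toy hand-written embeddings (in Starmie these vectors come from the contrastively trained column encoder, not from fixed values like these):

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

query_col = [0.9, 0.1, 0.2]           # embedding of the query column
candidates = {
    "city":  [0.8, 0.2, 0.1],         # semantically close column
    "price": [0.1, 0.9, 0.3],         # unrelated column
}
scores = {name: cosine(query_col, e) for name, e in candidates.items()}
best = max(scores, key=scores.get)    # highest-unionability candidate
```

In the full system this linear scoring is replaced by an HNSW index over the candidate embeddings, which is where the reported multi-thousand-fold speedup over linear scan comes from.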
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
santos
|
\cite{santos}
|
SANTOS: Relationship-based Semantic Table Union Search
|
http://arxiv.org/abs/2209.13589v1
|
Existing techniques for unionable table search define unionability using
metadata (tables must have the same or similar schemas) or column-based metrics
(for example, the values in a table should be drawn from the same domain). In
this work, we introduce the use of semantic relationships between pairs of
columns in a table to improve the accuracy of union search. Consequently, we
introduce a new notion of unionability that considers relationships between
columns, together with the semantics of columns, in a principled way. To do so,
we present two new methods to discover semantic relationship between pairs of
columns. The first uses an existing knowledge base (KB), the second (which we
call a "synthesized KB") uses knowledge from the data lake itself. We adopt an
existing Table Union Search benchmark and present new (open) benchmarks that
represent small and large real data lakes. We show that our new unionability
search algorithm, called SANTOS, outperforms a state-of-the-art union search
that uses a wide variety of column-based semantics, including word embeddings
and regular expressions. We show empirically that our synthesized KB improves
the accuracy of union search by representing relationship semantics that may
not be contained in an available KB. This result hints at a promising future of
creating a synthesized KBs from data lakes with limited KB coverage and using
them for union search.
| true | true |
Aamod Khatiwada and
Grace Fan and
Roee Shraga and
Zixuan Chen and
Wolfgang Gatterbauer and
Ren{\'{e}}e J. Miller and
Mirek Riedewald
| 2,023 | null | null | null |
Proc. {ACM} Manag. Data
|
SANTOS: Relationship-based Semantic Table Union Search
|
SANTOS: Relationship-based Semantic Table Union Search
|
https://dl.acm.org/doi/10.1145/3588689
|
Our new unionability search algorithm, called SANTOS, outperforms a state-of-the-art union search that uses a wide variety of column-based semantics.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
TUS
|
\cite{TUS}
|
Table Union Search on Open Data
| null | null | true | false |
Fatemeh Nargesian and
Erkang Zhu and
Ken Q. Pu and
Ren{\'{e}}e J. Miller
| 2,018 | null | null | null |
Proc. {VLDB} Endow.
|
Table Union Search on Open Data
|
[PDF] Table Union Search on Open Data
|
https://www.semanticscholar.org/paper/Table-Union-Search-on-Open-Data-Nargesian-Zhu/5cadff7988d29c1596689d5b864f87f371783a50
|
This work defines the table union search problem and presents a probabilistic solution for finding tables that are unionable with a query table within
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
Solo
|
\cite{Solo}
|
Solo: Data Discovery Using Natural Language Questions Via A
Self-Supervised Approach
|
http://arxiv.org/abs/2301.03560v2
|
Most deployed data discovery systems, such as Google Datasets, and open data
portals only support keyword search. Keyword search is geared towards general
audiences but limits the types of queries the systems can answer. We propose a
new system that lets users write natural language questions directly. A major
barrier to using this learned data discovery system is it needs
expensive-to-collect training data, thus limiting its utility. In this paper,
we introduce a self-supervised approach to assemble training datasets and train
learned discovery systems without human intervention. It requires addressing
several challenges, including the design of self-supervised strategies for data
discovery, table representation strategies to feed to the models, and relevance
models that work well with the synthetically generated questions. We combine
all the above contributions into a system, Solo, that solves the problem end to
end. The evaluation results demonstrate the new techniques outperform
state-of-the-art approaches on well-known benchmarks. All in all, the technique
is a stepping stone towards building learned discovery systems. The code is
open-sourced at https://github.com/TheDataStation/solo
| true | true |
Qiming Wang and
Raul Castro Fernandez
| 2,023 | null | null | null |
Proc. {ACM} Manag. Data
|
Solo: Data Discovery Using Natural Language Questions Via A
Self-Supervised Approach
|
[PDF] Solo: Data Discovery Using Natural Language Questions Via A Self ...
|
https://arxiv.org/pdf/2301.03560
|
Solo is a system that allows users to write natural language questions for data discovery, using a self-supervised approach to train the system.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
OpenDTR
|
\cite{OpenDTR}
|
Open Domain Question Answering over Tables via Dense Retrieval
|
http://arxiv.org/abs/2103.12011v2
|
Recent advances in open-domain QA have led to strong models based on dense
retrieval, but only focused on retrieving textual passages. In this work, we
tackle open-domain QA over tables for the first time, and show that retrieval
can be improved by a retriever designed to handle tabular context. We present
an effective pre-training procedure for our retriever and improve retrieval
quality with mined hard negatives. As relevant datasets are missing, we extract
a subset of Natural Questions (Kwiatkowski et al., 2019) into a Table QA
dataset. We find that our retriever improves retrieval results from 72.0 to
81.1 recall@10 and end-to-end QA results from 33.8 to 37.7 exact match, over a
BERT based retriever.
| true | true |
Jonathan Herzig and
Thomas M{\"{u}}ller and
Syrine Krichene and
Julian Martin Eisenschlos
| 2,021 | null | null | null | null |
Open Domain Question Answering over Tables via Dense Retrieval
|
Open Domain Question Answering over Tables via Dense Retrieval
|
http://arxiv.org/pdf/2103.12011v2
|
Recent advances in open-domain QA have led to strong models based on dense
retrieval, but only focused on retrieving textual passages. In this work, we
tackle open-domain QA over tables for the first time, and show that retrieval
can be improved by a retriever designed to handle tabular context. We present
an effective pre-training procedure for our retriever and improve retrieval
quality with mined hard negatives. As relevant datasets are missing, we extract
a subset of Natural Questions (Kwiatkowski et al., 2019) into a Table QA
dataset. We find that our retriever improves retrieval results from 72.0 to
81.1 recall@10 and end-to-end QA results from 33.8 to 37.7 exact match, over a
BERT based retriever.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
OpenWiki
|
\cite{OpenWiki}
|
Open-WikiTable : Dataset for Open Domain Question Answering with Complex
Reasoning over Table
| null | null | true | false |
Sunjun Kweon and
Yeonsu Kwon and
Seonhee Cho and
Yohan Jo and
Edward Choi
| 2,023 | null | null | null | null |
Open-WikiTable : Dataset for Open Domain Question Answering with Complex
Reasoning over Table
|
Open-WikiTable :Dataset for Open Domain Question Answering with ...
|
https://github.com/sean0042/Open_WikiTable
|
The first ODQA dataset that requires complex reasoning over tables. Open-WikiTable is built upon WikiSQL and WikiTableQuestions to be applicable in the open-domain setting.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
TAPAS
|
\cite{TAPAS}
|
TAPAS: Weakly Supervised Table Parsing via Pre-training
|
http://arxiv.org/abs/2004.02349v2
|
Answering natural language questions over tables is usually seen as a
semantic parsing task. To alleviate the collection cost of full logical forms,
one popular approach focuses on weak supervision consisting of denotations
instead of logical forms. However, training semantic parsers from weak
supervision poses difficulties, and in addition, the generated logical forms
are only used as an intermediate step prior to retrieving the denotation. In
this paper, we present TAPAS, an approach to question answering over tables
without generating logical forms. TAPAS trains from weak supervision, and
predicts the denotation by selecting table cells and optionally applying a
corresponding aggregation operator to such selection. TAPAS extends BERT's
architecture to encode tables as input, initializes from an effective joint
pre-training of text segments and tables crawled from Wikipedia, and is trained
end-to-end. We experiment with three different semantic parsing datasets, and
find that TAPAS outperforms or rivals semantic parsing models by improving
state-of-the-art accuracy on SQA from 55.1 to 67.2 and performing on par with
the state-of-the-art on WIKISQL and WIKITQ, but with a simpler model
architecture. We additionally find that transfer learning, which is trivial in
our setting, from WIKISQL to WIKITQ, yields 48.7 accuracy, 4.2 points above the
state-of-the-art.
| true | true |
Jonathan Herzig and
Pawel Krzysztof Nowak and
Thomas M{\"{u}}ller and
Francesco Piccinno and
Julian Martin Eisenschlos
| 2,020 | null | null | null | null |
TAPAS: Weakly Supervised Table Parsing via Pre-training
|
TaPas: Weakly Supervised Table Parsing via Pre-training
|
https://aclanthology.org/2020.acl-main.398/
|
by J Herzig · 2020 · Cited by 784 — TaPas trains from weak supervision, and predicts the denotation by selecting table cells and optionally applying a corresponding aggregation operator to such selection.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
GTR
|
\cite{GTR}
|
Retrieving Complex Tables with Multi-Granular Graph Representation
Learning
|
http://arxiv.org/abs/2105.01736v1
|
The task of natural language table retrieval (NLTR) seeks to retrieve
semantically relevant tables based on natural language queries. Existing
learning systems for this task often treat tables as plain text based on the
assumption that tables are structured as dataframes. However, tables can have
complex layouts which indicate diverse dependencies between subtable
structures, such as nested headers. As a result, queries may refer to different
spans of relevant content that is distributed across these structures.
Moreover, such systems fail to generalize to novel scenarios beyond those seen
in the training set. Prior methods are still distant from a generalizable
solution to the NLTR problem, as they fall short in handling complex table
layouts or queries over multiple granularities. To address these issues, we
propose Graph-based Table Retrieval (GTR), a generalizable NLTR framework with
multi-granular graph representation learning. In our framework, a table is
first converted into a tabular graph, with cell nodes, row nodes and column
nodes to capture content at different granularities. Then the tabular graph is
input to a Graph Transformer model that can capture both table cell content and
the layout structures. To enhance the robustness and generalizability of the
model, we further incorporate a self-supervised pre-training task based on
graph-context matching. Experimental results on two benchmarks show that our
method leads to significant improvements over the current state-of-the-art
systems. Further experiments demonstrate promising performance of our method on
cross-dataset generalization, and enhanced capability of handling complex
tables and fulfilling diverse query intents. Code and data are available at
https://github.com/FeiWang96/GTR.
| true | true |
Fei Wang and
Kexuan Sun and
Muhao Chen and
Jay Pujara and
Pedro A. Szekely
| 2,021 | null | null | null | null |
Retrieving Complex Tables with Multi-Granular Graph Representation
Learning
|
[PDF] Retrieving Complex Tables with Multi-Granular Graph ... - arXiv
|
https://arxiv.org/pdf/2105.01736
|
GTR leverages state-of-the-art graph representation learning techniques to capture both content and layout structures of complex tables.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
AdHoc_TR
|
\cite{AdHoc_TR}
|
Ad Hoc Table Retrieval using Semantic Similarity
|
http://arxiv.org/abs/1802.06159v3
|
We introduce and address the problem of ad hoc table retrieval: answering a
keyword query with a ranked list of tables. This task is not only interesting
on its own account, but is also being used as a core component in many other
table-based information access scenarios, such as table completion or table
mining. The main novel contribution of this work is a method for performing
semantic matching between queries and tables. Specifically, we (i) represent
queries and tables in multiple semantic spaces (both discrete sparse and
continuous dense vector representations) and (ii) introduce various similarity
measures for matching those semantic representations. We consider all possible
combinations of semantic representations and similarity measures and use these
as features in a supervised learning model. Using a purpose-built test
collection based on Wikipedia tables, we demonstrate significant and
substantial improvements over a state-of-the-art baseline.
| true | true |
Shuo Zhang and
Krisztian Balog
| 2,018 | null | null | null | null |
Ad Hoc Table Retrieval using Semantic Similarity
|
Ad Hoc Table Retrieval using Semantic Similarity
|
http://arxiv.org/pdf/1802.06159v3
|
We introduce and address the problem of ad hoc table retrieval: answering a
keyword query with a ranked list of tables. This task is not only interesting
on its own account, but is also being used as a core component in many other
table-based information access scenarios, such as table completion or table
mining. The main novel contribution of this work is a method for performing
semantic matching between queries and tables. Specifically, we (i) represent
queries and tables in multiple semantic spaces (both discrete sparse and
continuous dense vector representations) and (ii) introduce various similarity
measures for matching those semantic representations. We consider all possible
combinations of semantic representations and similarity measures and use these
as features in a supervised learning model. Using a purpose-built test
collection based on Wikipedia tables, we demonstrate significant and
substantial improvements over a state-of-the-art baseline.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
TableSearch
|
\cite{TableSearch}
|
Table Search Using a Deep Contextualized Language Model
|
http://arxiv.org/abs/2005.09207v2
|
Pretrained contextualized language models such as BERT have achieved
impressive results on various natural language processing benchmarks.
Benefiting from multiple pretraining tasks and large scale training corpora,
pretrained models can capture complex syntactic word relations. In this paper,
we use the deep contextualized language model BERT for the task of ad hoc table
retrieval. We investigate how to encode table content considering the table
structure and input length limit of BERT. We also propose an approach that
incorporates features from prior literature on table retrieval and jointly
trains them with BERT. In experiments on public datasets, we show that our best
approach can outperform the previous state-of-the-art method and BERT baselines
with a large margin under different evaluation metrics.
| true | true |
Zhiyu Chen and
Mohamed Trabelsi and
Jeff Heflin and
Yinan Xu and
Brian D. Davison
| 2,020 | null | null | null | null |
Table Search Using a Deep Contextualized Language Model
|
Table Search Using a Deep Contextualized Language Model
|
http://arxiv.org/pdf/2005.09207v2
|
Pretrained contextualized language models such as BERT have achieved
impressive results on various natural language processing benchmarks.
Benefiting from multiple pretraining tasks and large scale training corpora,
pretrained models can capture complex syntactic word relations. In this paper,
we use the deep contextualized language model BERT for the task of ad hoc table
retrieval. We investigate how to encode table content considering the table
structure and input length limit of BERT. We also propose an approach that
incorporates features from prior literature on table retrieval and jointly
trains them with BERT. In experiments on public datasets, we show that our best
approach can outperform the previous state-of-the-art method and BERT baselines
with a large margin under different evaluation metrics.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
DSI
|
\cite{DSI}
|
Transformer Memory as a Differentiable Search Index
|
http://arxiv.org/abs/2202.06991v3
|
In this paper, we demonstrate that information retrieval can be accomplished
with a single Transformer, in which all information about the corpus is encoded
in the parameters of the model. To this end, we introduce the Differentiable
Search Index (DSI), a new paradigm that learns a text-to-text model that maps
string queries directly to relevant docids; in other words, a DSI model answers
queries directly using only its parameters, dramatically simplifying the whole
retrieval process. We study variations in how documents and their identifiers
are represented, variations in training procedures, and the interplay between
models and corpus sizes. Experiments demonstrate that given appropriate design
choices, DSI significantly outperforms strong baselines such as dual encoder
models. Moreover, DSI demonstrates strong generalization capabilities,
outperforming a BM25 baseline in a zero-shot setup.
| true | true |
Tay, Yi and Tran, Vinh Q and Dehghani, Mostafa and Ni, Jianmo and Bahri, Dara and Mehta, Harsh and Qin, Zhen and Hui, Kai and Zhao, Zhe and Gupta, Jai and others
| 2,022 | null | null | null | null |
Transformer Memory as a Differentiable Search Index
|
Transformer Memory as a Differentiable Search Index
|
http://arxiv.org/pdf/2202.06991v3
|
In this paper, we demonstrate that information retrieval can be accomplished
with a single Transformer, in which all information about the corpus is encoded
in the parameters of the model. To this end, we introduce the Differentiable
Search Index (DSI), a new paradigm that learns a text-to-text model that maps
string queries directly to relevant docids; in other words, a DSI model answers
queries directly using only its parameters, dramatically simplifying the whole
retrieval process. We study variations in how documents and their identifiers
are represented, variations in training procedures, and the interplay between
models and corpus sizes. Experiments demonstrate that given appropriate design
choices, DSI significantly outperforms strong baselines such as dual encoder
models. Moreover, DSI demonstrates strong generalization capabilities,
outperforming a BM25 baseline in a zero-shot setup.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
NCI
|
\cite{NCI}
|
A Neural Corpus Indexer for Document Retrieval
|
http://arxiv.org/abs/2206.02743v3
|
Current state-of-the-art document retrieval solutions mainly follow an
index-retrieve paradigm, where the index is hard to be directly optimized for
the final retrieval target. In this paper, we aim to show that an end-to-end
deep neural network unifying training and indexing stages can significantly
improve the recall performance of traditional methods. To this end, we propose
Neural Corpus Indexer (NCI), a sequence-to-sequence network that generates
relevant document identifiers directly for a designated query. To optimize the
recall performance of NCI, we invent a prefix-aware weight-adaptive decoder
architecture, and leverage tailored techniques including query generation,
semantic document identifiers, and consistency-based regularization. Empirical
studies demonstrated the superiority of NCI on two commonly used academic
benchmarks, achieving +21.4% and +16.8% relative enhancement for Recall@1 on
NQ320k dataset and R-Precision on TriviaQA dataset, respectively, compared to
the best baseline method.
| true | true |
Wang, Yujing and Hou, Yingyan and Wang, Haonan and Miao, Ziming and Wu, Shibin and Sun, Hao and Chen, Qi and Xia, Yuqing and Chi, Chengmin and Zhao, Guoshuai and others
| 2,022 | null | null | null | null |
A Neural Corpus Indexer for Document Retrieval
|
A Neural Corpus Indexer for Document Retrieval
|
http://arxiv.org/pdf/2206.02743v3
|
Current state-of-the-art document retrieval solutions mainly follow an
index-retrieve paradigm, where the index is hard to be directly optimized for
the final retrieval target. In this paper, we aim to show that an end-to-end
deep neural network unifying training and indexing stages can significantly
improve the recall performance of traditional methods. To this end, we propose
Neural Corpus Indexer (NCI), a sequence-to-sequence network that generates
relevant document identifiers directly for a designated query. To optimize the
recall performance of NCI, we invent a prefix-aware weight-adaptive decoder
architecture, and leverage tailored techniques including query generation,
semantic document identifiers, and consistency-based regularization. Empirical
studies demonstrated the superiority of NCI on two commonly used academic
benchmarks, achieving +21.4% and +16.8% relative enhancement for Recall@1 on
NQ320k dataset and R-Precision on TriviaQA dataset, respectively, compared to
the best baseline method.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
DSI-QG
|
\cite{DSI-QG}
|
Bridging the Gap Between Indexing and Retrieval for Differentiable
Search Index with Query Generation
|
http://arxiv.org/abs/2206.10128v3
|
The Differentiable Search Index (DSI) is an emerging paradigm for information
retrieval. Unlike traditional retrieval architectures where index and retrieval
are two different and separate components, DSI uses a single transformer model
to perform both indexing and retrieval.
In this paper, we identify and tackle an important issue of current DSI
models: the data distribution mismatch that occurs between the DSI indexing and
retrieval processes. Specifically, we argue that, at indexing, current DSI
methods learn to build connections between the text of long documents and the
identifier of the documents, but then retrieval of document identifiers is
based on queries that are commonly much shorter than the indexed documents.
This problem is further exacerbated when using DSI for cross-lingual retrieval,
where document text and query text are in different languages.
To address this fundamental problem of current DSI models, we propose a
simple yet effective indexing framework for DSI, called DSI-QG. When indexing,
DSI-QG represents documents with a number of potentially relevant queries
generated by a query generation model and re-ranked and filtered by a
cross-encoder ranker. The presence of these queries at indexing allows the DSI
models to connect a document identifier to a set of queries, hence mitigating
data distribution mismatches present between the indexing and the retrieval
phases. Empirical results on popular mono-lingual and cross-lingual passage
retrieval datasets show that DSI-QG significantly outperforms the original DSI
model.
| true | true |
Shengyao Zhuang and
Houxing Ren and
Linjun Shou and
Jian Pei and
Ming Gong and
Guido Zuccon and
Daxin Jiang
| 2,022 | null | null | null |
CoRR
|
Bridging the Gap Between Indexing and Retrieval for Differentiable
Search Index with Query Generation
|
Bridging the Gap Between Indexing and Retrieval for Differentiable ...
|
https://arxiv.org/abs/2206.10128
|
DSI-QG represents documents with generated queries at indexing time, mitigating the data distribution mismatch between the DSI indexing and retrieval phases.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
CorpusLM
|
\cite{CorpusLM}
|
CorpusLM: Towards a Unified Language Model on Corpus for
Knowledge-Intensive Tasks
|
http://arxiv.org/abs/2402.01176v2
|
Large language models (LLMs) have gained significant attention in various
fields but prone to hallucination, especially in knowledge-intensive (KI)
tasks. To address this, retrieval-augmented generation (RAG) has emerged as a
popular solution to enhance factual accuracy. However, traditional retrieval
modules often rely on large document index and disconnect with generative
tasks. With the advent of generative retrieval (GR), language models can
retrieve by directly generating document identifiers (DocIDs), offering
superior performance in retrieval tasks. However, the potential relationship
between GR and downstream tasks remains unexplored. In this paper, we propose
\textbf{CorpusLM}, a unified language model that leverages external corpus to
tackle various knowledge-intensive tasks by integrating generative retrieval,
closed-book generation, and RAG through a unified greedy decoding process. We
design the following mechanisms to facilitate effective retrieval and
generation, and improve the end-to-end effectiveness of KI tasks: (1) We
develop a ranking-oriented DocID list generation strategy, which refines GR by
directly learning from a DocID ranking list, to improve retrieval quality. (2)
We design a continuous DocIDs-References-Answer generation strategy, which
facilitates effective and efficient RAG. (3) We employ well-designed
unsupervised DocID understanding tasks, to comprehend DocID semantics and their
relevance to downstream tasks. We evaluate our approach on the widely used KILT
benchmark with two variants of backbone models, i.e., T5 and Llama2.
Experimental results demonstrate the superior performance of our models in both
retrieval and downstream tasks.
| true | true |
Xiaoxi Li and
Zhicheng Dou and
Yujia Zhou and
Fangchao Liu
| 2,024 | null | null | null | null |
CorpusLM: Towards a Unified Language Model on Corpus for
Knowledge-Intensive Tasks
|
CorpusLM: Towards a Unified Language Model on Corpus ...
|
https://dl.acm.org/doi/10.1145/3626772.3657778
|
In this paper, we propose CorpusLM, a unified language model that leverages external corpus to tackle various knowledge-intensive tasks.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
Tiger
|
\cite{Tiger}
|
Recommender Systems with Generative Retrieval
|
http://arxiv.org/abs/2305.05065v3
|
Modern recommender systems perform large-scale retrieval by first embedding
queries and item candidates in the same unified space, followed by approximate
nearest neighbor search to select top candidates given a query embedding. In
this paper, we propose a novel generative retrieval approach, where the
retrieval model autoregressively decodes the identifiers of the target
candidates. To that end, we create semantically meaningful tuple of codewords
to serve as a Semantic ID for each item. Given Semantic IDs for items in a user
session, a Transformer-based sequence-to-sequence model is trained to predict
the Semantic ID of the next item that the user will interact with. To the best
of our knowledge, this is the first Semantic ID-based generative model for
recommendation tasks. We show that recommender systems trained with the
proposed paradigm significantly outperform the current SOTA models on various
datasets. In addition, we show that incorporating Semantic IDs into the
sequence-to-sequence model enhances its ability to generalize, as evidenced by
the improved retrieval performance observed for items with no prior interaction
history.
| true | true |
Rajput, Shashank and Mehta, Nikhil and Singh, Anima and Keshavan, Raghunandan and Vu, Trung and Heidt, Lukasz and Hong, Lichan and Tay, Yi and Tran, Vinh Q and Samost, Jonah and others
| 2,023 | null | null | null | null |
Recommender Systems with Generative Retrieval
|
Recommender Systems with Generative Retrieval
|
http://arxiv.org/pdf/2305.05065v3
|
Modern recommender systems perform large-scale retrieval by first embedding
queries and item candidates in the same unified space, followed by approximate
nearest neighbor search to select top candidates given a query embedding. In
this paper, we propose a novel generative retrieval approach, where the
retrieval model autoregressively decodes the identifiers of the target
candidates. To that end, we create semantically meaningful tuple of codewords
to serve as a Semantic ID for each item. Given Semantic IDs for items in a user
session, a Transformer-based sequence-to-sequence model is trained to predict
the Semantic ID of the next item that the user will interact with. To the best
of our knowledge, this is the first Semantic ID-based generative model for
recommendation tasks. We show that recommender systems trained with the
proposed paradigm significantly outperform the current SOTA models on various
datasets. In addition, we show that incorporating Semantic IDs into the
sequence-to-sequence model enhances its ability to generalize, as evidenced by
the improved retrieval performance observed for items with no prior interaction
history.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
DSI++
|
\cite{DSI++}
|
{DSI++:} Updating Transformer Memory with New Documents
| null | null | true | false |
Sanket Vaibhav Mehta and
Jai Gupta and
Yi Tay and
Mostafa Dehghani and
Vinh Q. Tran and
Jinfeng Rao and
Marc Najork and
Emma Strubell and
Donald Metzler
| 2,023 | null | null | null | null |
{DSI++:} Updating Transformer Memory with New Documents
|
DSI++: Updating Transformer Memory with New Documents
|
https://aclanthology.org/2023.emnlp-main.510/
|
DSI++: Updating Transformer Memory with New Documents (Mehta et al., EMNLP 2023). In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Singapore, pages 8198–8213. Association for Computational Linguistics. DOI: 10.18653/v1/2023.emnlp-main.510.
|
Birdie: Natural Language-Driven Table Discovery Using Differentiable
Search Index
|
2504.21282v1
|
CLEVER
|
\cite{CLEVER}
|
Continual Learning for Generative Retrieval over Dynamic Corpora
|
http://arxiv.org/abs/2308.14968v1
|
Generative retrieval (GR) directly predicts the identifiers of relevant
documents (i.e., docids) based on a parametric model. It has achieved solid
performance on many ad-hoc retrieval tasks. So far, these tasks have assumed a
static document collection. In many practical scenarios, however, document
collections are dynamic, where new documents are continuously added to the
corpus. The ability to incrementally index new documents while preserving the
ability to answer queries with both previously and newly indexed relevant
documents is vital to applying GR models. In this paper, we address this
practical continual learning problem for GR. We put forward a novel
Continual-LEarner for generatiVE Retrieval (CLEVER) model and make two major
contributions to continual learning for GR: (i) To encode new documents into
docids with low computational cost, we present Incremental Product
Quantization, which updates a partial quantization codebook according to two
adaptive thresholds; and (ii) To memorize new documents for querying without
forgetting previous knowledge, we propose a memory-augmented learning
mechanism, to form meaningful connections between old and new documents.
Empirical results demonstrate the effectiveness and efficiency of the proposed
model.
| true | true |
Jiangui Chen and
Ruqing Zhang and
Jiafeng Guo and
Maarten de Rijke and
Wei Chen and
Yixing Fan and
Xueqi Cheng
| 2,023 | null | null | null | null |
Continual Learning for Generative Retrieval over Dynamic Corpora
|
Continual Learning for Generative Retrieval over Dynamic Corpora
|
http://arxiv.org/pdf/2308.14968v1
|
Generative retrieval (GR) directly predicts the identifiers of relevant
documents (i.e., docids) based on a parametric model. It has achieved solid
performance on many ad-hoc retrieval tasks. So far, these tasks have assumed a
static document collection. In many practical scenarios, however, document
collections are dynamic, where new documents are continuously added to the
corpus. The ability to incrementally index new documents while preserving the
ability to answer queries with both previously and newly indexed relevant
documents is vital to applying GR models. In this paper, we address this
practical continual learning problem for GR. We put forward a novel
Continual-LEarner for generatiVE Retrieval (CLEVER) model and make two major
contributions to continual learning for GR: (i) To encode new documents into
docids with low computational cost, we present Incremental Product
Quantization, which updates a partial quantization codebook according to two
adaptive thresholds; and (ii) To memorize new documents for querying without
forgetting previous knowledge, we propose a memory-augmented learning
mechanism, to form meaningful connections between old and new documents.
Empirical results demonstrate the effectiveness and efficiency of the proposed
model.
|
CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated
Active Learning
|
2504.17448v1
|
ErroDetection
|
\cite{ErroDetection}
|
Exploiting Active Learning in Novel Refractive Error Detection with Smartphones
| null | null | true | false |
Fu, Eugene Yujun and Yang, Zhongqi and Leong, Hong Va and Ngai, Grace and Do, Chi-wai and Chan, Lily
| 2,020 | null | null | null | null |
Exploiting Active Learning in Novel Refractive Error Detection with Smartphones
|
Exploiting active learning in novel refractive error detection with ...
|
https://repository.eduhk.hk/en/publications/exploiting-active-learning-in-novel-refractive-error-detection-wi
|
Repository record for 'Exploiting active learning in novel refractive error detection with smartphones' (Fu et al., 2020).
|
CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated
Active Learning
|
2504.17448v1
|
ImageCaption
|
\cite{ImageCaption}
|
Structural Semantic Adversarial Active Learning for Image Captioning
| null | null | true | false |
Zhang, Beichen and Li, Liang and Su, Li and Wang, Shuhui and Deng, Jincan and Zha, Zheng-Jun and Huang, Qingming
| 2,020 | null | null | null | null |
Structural Semantic Adversarial Active Learning for Image Captioning
|
Structural Semantic Adversarial Active Learning for Image Captioning
|
https://dl.acm.org/doi/abs/10.1145/3394171.3413885
|
We propose a structural semantic adversarial active learning (SSAAL) model that leverages both visual and textual information for deriving the most
|
CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated
Active Learning
|
2504.17448v1
|
PersonIdentification
|
\cite{PersonIdentification}
|
Cluster and Scatter: A Multi-Grained Active Semi-Supervised Learning Framework for Scalable Person Re-Identification
| null | null | true | false |
Hu, Bingyu and Zha, Zheng-Jun and Liu, Jiawei and Zhu, Xierong and Xie, Hongtao
| 2,021 | null | null | null | null |
Cluster and Scatter: A Multi-Grained Active Semi-Supervised Learning Framework for Scalable Person Re-Identification
|
arXiv:2204.10008v1 [cs.CV] 21 Apr 2022
|
https://arxiv.org/pdf/2204.10008
|
by D Jin · 2022 · Cited by 4 — Cluster and scatter: A multi-grained active semi-supervised learning framework for scalable person re-identification. In ACMMM, pages 2605
|
CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated
Active Learning
|
2504.17448v1
|
lewis1994heterogeneous
|
\cite{lewis1994heterogeneous}
|
Heterogeneous uncertainty sampling for supervised learning
| null | null | true | false |
Lewis, David D and Catlett, Jason
| 1,994 | null | null | null | null |
Heterogeneous uncertainty sampling for supervised learning
|
Heterogeneous Uncertainty Sampling for Supervised ...
|
https://www.sciencedirect.com/science/article/pii/B978155860335650026X
|
by DD Lewis · 1994 · Cited by 1814 — Uncertainty sampling methods iteratively request class labels for training instances whose classes are uncertain despite the previous labeled instances.
|
CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated
Active Learning
|
2504.17448v1
|
lewis1994sequential
|
\cite{lewis1994sequential}
|
A Sequential Algorithm for Training Text Classifiers
|
http://arxiv.org/abs/cmp-lg/9407020v2
|
The ability to cheaply train text classifiers is critical to their use in
information retrieval, content analysis, natural language processing, and other
tasks involving data which is partly or fully textual. An algorithm for
sequential sampling during machine learning of statistical classifiers was
developed and tested on a newswire text categorization task. This method, which
we call uncertainty sampling, reduced by as much as 500-fold the amount of
training data that would have to be manually classified to achieve a given
level of effectiveness.
| true | true |
Lewis, David D and Gale, William A
| 1,994 | null | null | null | null |
A Sequential Algorithm for Training Text Classifiers
|
A Sequential Algorithm for Training Text Classifiers
|
http://arxiv.org/pdf/cmp-lg/9407020v2
|
The ability to cheaply train text classifiers is critical to their use in
information retrieval, content analysis, natural language processing, and other
tasks involving data which is partly or fully textual. An algorithm for
sequential sampling during machine learning of statistical classifiers was
developed and tested on a newswire text categorization task. This method, which
we call uncertainty sampling, reduced by as much as 500-fold the amount of
training data that would have to be manually classified to achieve a given
level of effectiveness.
|
CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated
Active Learning
|
2504.17448v1
|
joshi2009multi
|
\cite{joshi2009multi}
|
Active Learning for Multi-class Image Classification
|
http://arxiv.org/abs/2505.06825v1
|
A principle bottleneck in image classification is the large number of
training examples needed to train a classifier. Using active learning, we can
reduce the number of training examples to teach a CNN classifier by
strategically selecting examples. Assigning values to image examples using
different uncertainty metrics allows the model to identify and select
high-value examples in a smaller training set size. We demonstrate results for
digit recognition and fruit classification on the MNIST and Fruits360 data
sets. We formally compare results for four different uncertainty metrics.
Finally, we observe active learning is also effective on simpler (binary)
classification tasks, but marked improvement from random sampling is more
evident on more difficult tasks. We show active learning is a viable algorithm
for image classification problems.
| true | true |
Joshi, Ajay J and Porikli, Fatih and Papanikolopoulos, Nikolaos
| 2,009 | null | null | null | null |
Active Learning for Multi-class Image Classification
|
Multi-Class Active Learning for Image Classification
|
https://porikli.com/mysite/pdfs/porikli%202009%20-%20Multi-Class%20Active%20Learning%20for%20Image%20Classification.pdf
|
by AJ Joshi · Cited by 989 — In this paper, we have proposed a simple active learning method for multi-class image classification. The proposed method achieves significant reduction in
|
CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated
Active Learning
|
2504.17448v1
|
luo2013latent
|
\cite{luo2013latent}
|
Latent structured active learning
| null | null | true | false |
Luo, Wenjie and Schwing, Alex and Urtasun, Raquel
| 2,013 | null | null | null |
NeurIPS
|
Latent structured active learning
|
[PDF] Latent Structured Active Learning - Alexander Schwing
|
https://www.alexander-schwing.de/papers/LuoEtAl_NIPS2013.pdf
|
In this paper we present active learning algorithms in the context of structured prediction problems. To reduce the amount of labeling necessary to learn
|
CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated
Active Learning
|
2504.17448v1
|
settles2012active
|
\cite{settles2012active}
|
Active learning: Synthesis lectures on artificial intelligence and machine learning
| null | null | true | false |
Settles, Burr
| 2,012 | null | null | null |
Morgan {\&} Claypool Publishers
|
Active learning: Synthesis lectures on artificial intelligence and machine learning
|
Active Learning - Book
|
https://link.springer.com/book/10.1007/978-3-031-01560-1
|
by B Settles · Cited by 3007 — Part of the book series: Synthesis Lectures on Artificial Intelligence and Machine Learning (SLAIML) ... The key idea behind active learning is that a machine
|
CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated
Active Learning
|
2504.17448v1
|
blundell2015weight
|
\cite{blundell2015weight}
|
Weight Uncertainty in Neural Networks
|
http://arxiv.org/abs/1505.05424v2
|
We introduce a new, efficient, principled and backpropagation-compatible
algorithm for learning a probability distribution on the weights of a neural
network, called Bayes by Backprop. It regularises the weights by minimising a
compression cost, known as the variational free energy or the expected lower
bound on the marginal likelihood. We show that this principled kind of
regularisation yields comparable performance to dropout on MNIST
classification. We then demonstrate how the learnt uncertainty in the weights
can be used to improve generalisation in non-linear regression problems, and
how this weight uncertainty can be used to drive the exploration-exploitation
trade-off in reinforcement learning.
| true | true |
Blundell, Charles and Cornebise, Julien and Kavukcuoglu, Koray and Wierstra, Daan
| 2,015 | null | null | null | null |
Weight Uncertainty in Neural Networks
|
Weight Uncertainty in Neural Networks
|
http://arxiv.org/pdf/1505.05424v2
|
We introduce a new, efficient, principled and backpropagation-compatible
algorithm for learning a probability distribution on the weights of a neural
network, called Bayes by Backprop. It regularises the weights by minimising a
compression cost, known as the variational free energy or the expected lower
bound on the marginal likelihood. We show that this principled kind of
regularisation yields comparable performance to dropout on MNIST
classification. We then demonstrate how the learnt uncertainty in the weights
can be used to improve generalisation in non-linear regression problems, and
how this weight uncertainty can be used to drive the exploration-exploitation
trade-off in reinforcement learning.
|
CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated
Active Learning
|
2504.17448v1
|
gal2016dropout
|
\cite{gal2016dropout}
|
Dropout as a Bayesian Approximation: Representing Model Uncertainty in
Deep Learning
|
http://arxiv.org/abs/1506.02142v6
|
Deep learning tools have gained tremendous attention in applied machine
learning. However such tools for regression and classification do not capture
model uncertainty. In comparison, Bayesian models offer a mathematically
grounded framework to reason about model uncertainty, but usually come with a
prohibitive computational cost. In this paper we develop a new theoretical
framework casting dropout training in deep neural networks (NNs) as approximate
Bayesian inference in deep Gaussian processes. A direct result of this theory
gives us tools to model uncertainty with dropout NNs -- extracting information
from existing models that has been thrown away so far. This mitigates the
problem of representing uncertainty in deep learning without sacrificing either
computational complexity or test accuracy. We perform an extensive study of the
properties of dropout's uncertainty. Various network architectures and
non-linearities are assessed on tasks of regression and classification, using
MNIST as an example. We show a considerable improvement in predictive
log-likelihood and RMSE compared to existing state-of-the-art methods, and
finish by using dropout's uncertainty in deep reinforcement learning.
| true | true |
Yarin Gal and Zoubin Ghahramani
| 2,016 | null | null | null | null |
Dropout as a Bayesian Approximation: Representing Model Uncertainty in
Deep Learning
|
Representing Model Uncertainty in Deep Learning - arXiv
|
https://arxiv.org/abs/1506.02142
|
In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian
|
CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated
Active Learning
|
2504.17448v1
|
huang2021semi
|
\cite{huang2021semi}
|
Semi-Supervised Active Learning with Temporal Output Discrepancy
| null | null | true | false |
Huang, Siyu and Wang, Tianyang and Xiong, Haoyi and Huan, Jun and Dou, Dejing
| 2,021 | null | null | null | null |
Semi-Supervised Active Learning with Temporal Output Discrepancy
|
Supplementary Material: Semi-Supervised Active Learning ...
|
https://openaccess.thecvf.com/content/ICCV2021/supplemental/Huang_Semi-Supervised_Active_Learning_ICCV_2021_supplemental.pdf
|
Semi-Supervised Active Learning with Temporal Output Discrepancy. Siyu Huang1. Tianyang Wang2. Haoyi Xiong1. Jun Huan3. Dejing Dou1. 1Baidu Research. 2Austin
|
CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated
Active Learning
|
2504.17448v1
|
guo2010active
|
\cite{guo2010active}
|
Active instance sampling via matrix partition.
| null | null | true | false |
Guo, Yuhong
| 2,010 | null | null | null | null |
Active instance sampling via matrix partition.
|
Active instance sampling via matrix partition - Volume 1
|
https://dl.acm.org/doi/10.5555/2997189.2997279
|
by Y Guo · 2010 · Cited by 183 — By employing a Gaussian process framework, this mutual information based instance selection problem can be formulated as a matrix partition problem. Although
|
CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated
Active Learning
|
2504.17448v1
|
yang2015multi
|
\cite{yang2015multi}
|
Multi-class active learning by uncertainty sampling with diversity maximization
| null | null | true | false |
Yang, Yi and Ma, Zhigang and Nie, Feiping and Chang, Xiaojun and Hauptmann, Alexander G
| 2,015 | null | null | null |
Int. J. Comput. Vis.
|
Multi-class active learning by uncertainty sampling with diversity maximization
|
Multi-class active learning by uncertainty sampling with diversity ...
|
https://research.monash.edu/en/publications/multi-class-active-learning-by-uncertainty-sampling-with-diversit
|
As a multi-class active learning algorithm, our algorithm is able to exploit uncertainty across multiple classes. An efficient algorithm is used to optimize the
|
CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated
Active Learning
|
2504.17448v1
|
nguyen2004active
|
\cite{nguyen2004active}
|
Active learning using pre-clustering
| null | null | true | false |
Nguyen, Hieu T and Smeulders, Arnold
| 2,004 | null | null | null | null |
Active learning using pre-clustering
|
Active learning using pre-clustering | Proceedings of the ...
|
https://dl.acm.org/doi/10.1145/1015330.1015349
|
The main contribution of the paper is a formal framework that incorporates clustering into active learning. The algorithm first constructs a classifier on the
|
CHASe: Client Heterogeneity-Aware Data Selection for Effective Federated
Active Learning
|
2504.17448v1
|
sener2018active
|
\cite{sener2018active}
|
Active Learning for Convolutional Neural Networks: A Core-Set Approach
|
http://arxiv.org/abs/1708.00489v4
|
Convolutional neural networks (CNNs) have been successfully applied to many
recognition and learning tasks using a universal recipe; training a deep model
on a very large dataset of supervised examples. However, this approach is
rather restrictive in practice since collecting a large set of labeled images
is very expensive. One way to ease this problem is coming up with smart ways
for choosing images to be labelled from a very large collection (ie. active
learning).
Our empirical study suggests that many of the active learning heuristics in
the literature are not effective when applied to CNNs in batch setting.
Inspired by these limitations, we define the problem of active learning as
core-set selection, ie. choosing set of points such that a model learned over
the selected subset is competitive for the remaining data points. We further
present a theoretical result characterizing the performance of any selected
subset using the geometry of the datapoints. As an active learning algorithm,
we choose the subset which is expected to yield best result according to our
characterization. Our experiments show that the proposed method significantly
outperforms existing approaches in image classification experiments by a large
margin.
| true | true |
Sener, Ozan and Savarese, Silvio
| 2,018 | null | null | null | null |
Active Learning for Convolutional Neural Networks: A Core-Set Approach
|
Active Learning for Convolutional Neural Networks: A Core ...
|
https://arxiv.org/abs/1708.00489
|
by O Sener · 2017 · Cited by 2576 — We define the problem of active learning as core-set selection, ie. choosing set of points such that a model learned over the selected subset is competitive
|