parent_paper_title: stringclasses, 63 values
parent_paper_arxiv_id: stringclasses, 63 values
citation_shorthand: stringlengths, 2-56
raw_citation_text: stringlengths, 9-63
cited_paper_title: stringlengths, 5-161
cited_paper_arxiv_link: stringlengths, 32-37
cited_paper_abstract: stringlengths, 406-1.92k
has_metadata: bool, 1 class
is_arxiv_paper: bool, 2 classes
bib_paper_authors: stringlengths, 2-2.44k
bib_paper_year: float64, 1.97k-2.03k
bib_paper_month: stringclasses, 16 values
bib_paper_url: stringlengths, 20-116
bib_paper_doi: stringclasses, 269 values
bib_paper_journal: stringlengths, 3-148
original_title: stringlengths, 5-161
search_res_title: stringlengths, 4-122
search_res_url: stringlengths, 22-267
search_res_content: stringlengths, 19-1.92k
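Each record below lists these nineteen fields in the order given above, one value per line, with null marking missing values. As a minimal loading sketch (assuming, hypothetically, that the records are exported to a JSON Lines file named citations.jsonl with these field names as keys), the dump can be inspected like this:

```python
import json
from collections import Counter

# Hypothetical export: one JSON object per line, keyed by the schema fields above
# (parent_paper_title, citation_shorthand, bib_paper_year, ...).
records = []
with open("citations.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        records.append(json.loads(line))

# How many cited papers does each parent paper contribute?
per_parent = Counter(r["parent_paper_title"] for r in records)
print(per_parent.most_common(5))

# Keep only citations that resolved to an arXiv paper with a usable link.
arxiv_cites = [r for r in records if r.get("is_arxiv_paper") and r.get("cited_paper_arxiv_link")]
print(f"{len(arxiv_cites)} of {len(records)} citations have an arXiv link")
```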
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
hubert
\cite{hubert}
HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units
http://arxiv.org/abs/2106.07447v1
Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.
true
true
Wei{-}Ning Hsu and Benjamin Bolte and Yao{-}Hung Hubert Tsai and Kushal Lakhotia and Ruslan Salakhutdinov and Abdelrahman Mohamed
2021
null
null
null
ACM TASLP
HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units
HuBERT: Self-Supervised Speech Representation Learning ... - arXiv
https://arxiv.org/abs/2106.07447
We propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide
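For clarity, the record above maps onto the schema fields as follows, rendered as a Python dict (a sketch; the long text fields are abridged here with a trailing "...", which is not part of the stored data):

```python
# First record of the dump, keyed by the schema fields; long values abridged.
hubert_record = {
    "parent_paper_title": "Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation",
    "parent_paper_arxiv_id": "2505.23290v1",
    "citation_shorthand": "hubert",
    "raw_citation_text": r"\cite{hubert}",
    "cited_paper_title": "HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units",
    "cited_paper_arxiv_link": "http://arxiv.org/abs/2106.07447v1",
    "cited_paper_abstract": "Self-supervised approaches for speech representation learning are challenged by ...",
    "has_metadata": True,
    "is_arxiv_paper": True,
    "bib_paper_authors": "Wei{-}Ning Hsu and Benjamin Bolte and ...",
    "bib_paper_year": 2021.0,
    "bib_paper_month": None,
    "bib_paper_url": None,
    "bib_paper_doi": None,
    "bib_paper_journal": "ACM TASLP",
    "original_title": "HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units",
    "search_res_title": "HuBERT: Self-Supervised Speech Representation Learning ... - arXiv",
    "search_res_url": "https://arxiv.org/abs/2106.07447",
    "search_res_content": "We propose the Hidden-Unit BERT (HuBERT) approach for ...",
}
```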
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
ao2023gesturediffuclip
\cite{ao2023gesturediffuclip}
GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents
http://arxiv.org/abs/2303.14613v4
The automatic generation of stylized co-speech gestures has recently received increasing attention. Previous systems typically allow style control via predefined text labels or example motion clips, which are often not flexible enough to convey user intent accurately. In this work, we present GestureDiffuCLIP, a neural network framework for synthesizing realistic, stylized co-speech gestures with flexible style control. We leverage the power of the large-scale Contrastive-Language-Image-Pre-training (CLIP) model and present a novel CLIP-guided mechanism that extracts efficient style representations from multiple input modalities, such as a piece of text, an example motion clip, or a video. Our system learns a latent diffusion model to generate high-quality gestures and infuses the CLIP representations of style into the generator via an adaptive instance normalization (AdaIN) layer. We further devise a gesture-transcript alignment mechanism that ensures a semantically correct gesture generation based on contrastive learning. Our system can also be extended to allow fine-grained style control of individual body parts. We demonstrate an extensive set of examples showing the flexibility and generalizability of our model to a variety of style descriptions. In a user study, we show that our system outperforms the state-of-the-art approaches regarding human likeness, appropriateness, and style correctness.
true
true
Ao, Tenglong and Zhang, Zeyi and Liu, Libin
2023
null
null
null
ACM TOG
GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents
GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents
http://arxiv.org/pdf/2303.14613v4
The automatic generation of stylized co-speech gestures has recently received increasing attention. Previous systems typically allow style control via predefined text labels or example motion clips, which are often not flexible enough to convey user intent accurately. In this work, we present GestureDiffuCLIP, a neural network framework for synthesizing realistic, stylized co-speech gestures with flexible style control. We leverage the power of the large-scale Contrastive-Language-Image-Pre-training (CLIP) model and present a novel CLIP-guided mechanism that extracts efficient style representations from multiple input modalities, such as a piece of text, an example motion clip, or a video. Our system learns a latent diffusion model to generate high-quality gestures and infuses the CLIP representations of style into the generator via an adaptive instance normalization (AdaIN) layer. We further devise a gesture-transcript alignment mechanism that ensures a semantically correct gesture generation based on contrastive learning. Our system can also be extended to allow fine-grained style control of individual body parts. We demonstrate an extensive set of examples showing the flexibility and generalizability of our model to a variety of style descriptions. In a user study, we show that our system outperforms the state-of-the-art approaches regarding human likeness, appropriateness, and style correctness.
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
liang2024omg
\cite{liang2024omg}
OMG: Towards Open-vocabulary Motion Generation via Mixture of Controllers
http://arxiv.org/abs/2312.08985v3
We have recently seen tremendous progress in realistic text-to-motion generation. Yet, the existing methods often fail or produce implausible motions with unseen text inputs, which limits the applications. In this paper, we present OMG, a novel framework, which enables compelling motion generation from zero-shot open-vocabulary text prompts. Our key idea is to carefully tailor the pretrain-then-finetune paradigm into the text-to-motion generation. At the pre-training stage, our model improves the generation ability by learning the rich out-of-domain inherent motion traits. To this end, we scale up a large unconditional diffusion model up to 1B parameters, so as to utilize the massive unlabeled motion data up to over 20M motion instances. At the subsequent fine-tuning stage, we introduce motion ControlNet, which incorporates text prompts as conditioning information, through a trainable copy of the pre-trained model and the proposed novel Mixture-of-Controllers (MoC) block. MoC block adaptively recognizes various ranges of the sub-motions with a cross-attention mechanism and processes them separately with the text-token-specific experts. Such a design effectively aligns the CLIP token embeddings of text prompts to various ranges of compact and expressive motion features. Extensive experiments demonstrate that our OMG achieves significant improvements over the state-of-the-art methods on zero-shot text-to-motion generation. Project page: https://tr3e.github.io/omg-page.
true
true
Liang, Han and Bao, Jiacheng and Zhang, Ruichi and Ren, Sihan and Xu, Yuecheng and Yang, Sibei and Chen, Xin and Yu, Jingyi and Xu, Lan
2024
null
null
null
null
OMG: Towards Open-vocabulary Motion Generation via Mixture of Controllers
[PDF] OMG: Towards Open-vocabulary Motion Generation via Mixture of ...
https://openaccess.thecvf.com/content/CVPR2024/papers/Liang_OMG_Towards_Open-vocabulary_Motion_Generation_via_Mixture_of_Controllers_CVPR_2024_paper.pdf
We propose a fine-tuning scheme for text conditioning, utilizing a mixture of controllers to effectively improve the alignment between text and motion. 2.
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
zhang2022motiondiffuse
\cite{zhang2022motiondiffuse}
MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model
http://arxiv.org/abs/2208.15001v1
Human motion modeling is important for many modern graphics applications, which typically require professional skills. In order to remove the skill barriers for laymen, recent motion generation methods can directly generate human motions conditioned on natural languages. However, it remains challenging to achieve diverse and fine-grained motion generation with various text inputs. To address this problem, we propose MotionDiffuse, the first diffusion model-based text-driven motion generation framework, which demonstrates several desired properties over existing methods. 1) Probabilistic Mapping. Instead of a deterministic language-motion mapping, MotionDiffuse generates motions through a series of denoising steps in which variations are injected. 2) Realistic Synthesis. MotionDiffuse excels at modeling complicated data distribution and generating vivid motion sequences. 3) Multi-Level Manipulation. MotionDiffuse responds to fine-grained instructions on body parts, and arbitrary-length motion synthesis with time-varied text prompts. Our experiments show MotionDiffuse outperforms existing SoTA methods by convincing margins on text-driven motion generation and action-conditioned motion generation. A qualitative analysis further demonstrates MotionDiffuse's controllability for comprehensive motion generation. Homepage: https://mingyuan-zhang.github.io/projects/MotionDiffuse.html
true
true
Mingyuan Zhang and Zhongang Cai and Liang Pan and Fangzhou Hong and Xinying Guo and Lei Yang and Ziwei Liu
2024
null
null
null
TPAMI
MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model
Text-Driven Human Motion Generation With Diffusion Model
https://dl.acm.org/doi/abs/10.1109/TPAMI.2024.3355414
MotionDiffuse responds to fine-grained instructions on body parts, and arbitrary-length motion synthesis with time-varied text prompts.
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
mughal2024convofusion
\cite{mughal2024convofusion}
ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis
http://arxiv.org/abs/2403.17936v1
Gestures play a key role in human communication. Recent methods for co-speech gesture generation, while managing to generate beat-aligned motions, struggle generating gestures that are semantically aligned with the utterance. Compared to beat gestures that align naturally to the audio signal, semantically coherent gestures require modeling the complex interactions between the language and human motion, and can be controlled by focusing on certain words. Therefore, we present ConvoFusion, a diffusion-based approach for multi-modal gesture synthesis, which can not only generate gestures based on multi-modal speech inputs, but can also facilitate controllability in gesture synthesis. Our method proposes two guidance objectives that allow the users to modulate the impact of different conditioning modalities (e.g. audio vs text) as well as to choose certain words to be emphasized during gesturing. Our method is versatile in that it can be trained either for generating monologue gestures or even the conversational gestures. To further advance the research on multi-party interactive gestures, the DnD Group Gesture dataset is released, which contains 6 hours of gesture data showing 5 people interacting with one another. We compare our method with several recent works and demonstrate effectiveness of our method on a variety of tasks. We urge the reader to watch our supplementary video at our website.
true
true
Mughal, Muhammad Hamza and Dabral, Rishabh and Habibie, Ikhsanul and Donatelli, Lucia and Habermann, Marc and Theobalt, Christian
2024
null
null
null
null
ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis
Multi-Modal Conversational Diffusion for Co-Speech Gesture ... - arXiv
https://arxiv.org/abs/2403.17936
We present ConvoFusion, a diffusion-based approach for multi-modal gesture synthesis, which can not only generate gestures based on multi-modal speech inputs.
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
zhao2024media2face
\cite{zhao2024media2face}
Media2Face: Co-speech Facial Animation Generation With Multi-Modality Guidance
http://arxiv.org/abs/2401.15687v2
The synthesis of 3D facial animations from speech has garnered considerable attention. Due to the scarcity of high-quality 4D facial data and well-annotated abundant multi-modality labels, previous methods often suffer from limited realism and a lack of flexible conditioning. We address this challenge through a trilogy. We first introduce Generalized Neural Parametric Facial Asset (GNPFA), an efficient variational auto-encoder mapping facial geometry and images to a highly generalized expression latent space, decoupling expressions and identities. Then, we utilize GNPFA to extract high-quality expressions and accurate head poses from a large array of videos. This presents the M2F-D dataset, a large, diverse, and scan-level co-speech 3D facial animation dataset with well-annotated emotional and style labels. Finally, we propose Media2Face, a diffusion model in GNPFA latent space for co-speech facial animation generation, accepting rich multi-modality guidances from audio, text, and image. Extensive experiments demonstrate that our model not only achieves high fidelity in facial animation synthesis but also broadens the scope of expressiveness and style adaptability in 3D facial animation.
true
true
Qingcheng Zhao and Pengyu Long and Qixuan Zhang and Dafei Qin and Han Liang and Longwen Zhang and Yingliang Zhang and Jingyi Yu and Lan Xu
2024
null
null
null
null
Media2Face: Co-speech Facial Animation Generation With Multi-Modality Guidance
Co-speech Facial Animation Generation With Multi-Modality Guidance
https://arxiv.org/abs/2401.15687
We propose Media2Face, a diffusion model in GNPFA latent space for co-speech facial animation generation, accepting rich multi-modality guidances from audio,
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
DBLP:conf/cvpr/ChhatreDABPBB24
\cite{DBLP:conf/cvpr/ChhatreDABPBB24}
Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion
http://arxiv.org/abs/2312.04466v2
Existing methods for synthesizing 3D human gestures from speech have shown promising results, but they do not explicitly model the impact of emotions on the generated gestures. Instead, these methods directly output animations from speech without control over the expressed emotion. To address this limitation, we present AMUSE, an emotional speech-driven body animation model based on latent diffusion. Our observation is that content (i.e., gestures related to speech rhythm and word utterances), emotion, and personal style are separable. To account for this, AMUSE maps the driving audio to three disentangled latent vectors: one for content, one for emotion, and one for personal style. A latent diffusion model, trained to generate gesture motion sequences, is then conditioned on these latent vectors. Once trained, AMUSE synthesizes 3D human gestures directly from speech with control over the expressed emotions and style by combining the content from the driving speech with the emotion and style of another speech sequence. Randomly sampling the noise of the diffusion model further generates variations of the gesture with the same emotional expressivity. Qualitative, quantitative, and perceptual evaluations demonstrate that AMUSE outputs realistic gesture sequences. Compared to the state of the art, the generated gestures are better synchronized with the speech content, and better represent the emotion expressed by the input speech. Our code is available at amuse.is.tue.mpg.de.
true
true
Kiran Chhatre and Radek Danecek and Nikos Athanasiou and Giorgio Becherini and Christopher E. Peters and Michael J. Black and Timo Bolkart
2024
null
null
null
null
Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion
[2312.04466] Emotional Speech-driven 3D Body Animation via ...
https://arxiv.org/abs/2312.04466
To account for this, AMUSE maps the driving audio to three disentangled latent vectors: one for content, one for emotion, and one for personal
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
ElizaldeZR19
\cite{ElizaldeZR19}
Cross Modal Audio Search and Retrieval with Joint Embeddings Based on Text and Audio
null
null
true
false
Benjamin Elizalde and Shuayb Zarar and Bhiksha Raj
2019
null
null
null
null
Cross Modal Audio Search and Retrieval with Joint Embeddings Based on Text and Audio
Cross Modal Audio Search and Retrieval with Joint Embeddings ...
https://www.microsoft.com/en-us/research/publication/cross-modal-audio-search-and-retrieval-with-joint-embeddings-based-on-text-and-audio/
Missing: 04/08/2025
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
Yu0L19
\cite{Yu0L19}
Mining Audio, Text and Visual Information for Talking Face Generation
null
null
true
false
Lingyun Yu and Jun Yu and Qiang Ling
2019
null
null
null
null
Mining Audio, Text and Visual Information for Talking Face Generation
Mining Audio, Text and Visual Information for Talking Face Generation
https://ieeexplore.ieee.org/document/8970886
First, a multimodal learning method is proposed to generate accurate mouth landmarks with multimedia inputs (both text and audio). Second, a network named
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
EMAGE
\cite{EMAGE}
EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling
http://arxiv.org/abs/2401.00374v5
We propose EMAGE, a framework to generate full-body human gestures from audio and masked gestures, encompassing facial, local body, hands, and global movements. To achieve this, we first introduce BEAT2 (BEAT-SMPLX-FLAME), a new mesh-level holistic co-speech dataset. BEAT2 combines a MoShed SMPL-X body with FLAME head parameters and further refines the modeling of head, neck, and finger movements, offering a community-standardized, high-quality 3D motion captured dataset. EMAGE leverages masked body gesture priors during training to boost inference performance. It involves a Masked Audio Gesture Transformer, facilitating joint training on audio-to-gesture generation and masked gesture reconstruction to effectively encode audio and body gesture hints. Encoded body hints from masked gestures are then separately employed to generate facial and body movements. Moreover, EMAGE adaptively merges speech features from the audio's rhythm and content and utilizes four compositional VQ-VAEs to enhance the results' fidelity and diversity. Experiments demonstrate that EMAGE generates holistic gestures with state-of-the-art performance and is flexible in accepting predefined spatial-temporal gesture inputs, generating complete, audio-synchronized results. Our code and dataset are available https://pantomatrix.github.io/EMAGE/
true
true
Haiyang Liu and Zihao Zhu and Giorgio Becherini and Yichen Peng and Mingyang Su and You Zhou and Xuefei Zhe and Naoya Iwamoto and Bo Zheng and Michael J. Black
2024
null
null
null
null
EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling
EMAGE - CVPR 2024 Open Access Repository
https://openaccess.thecvf.com/content/CVPR2024/html/Liu_EMAGE_Towards_Unified_Holistic_Co-Speech_Gesture_Generation_via_Expressive_Masked_CVPR_2024_paper.html
EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling. Haiyang Liu, Zihao Zhu, Giorgio Becherini, Yichen
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
RN5318
\cite{RN5318}
Snapshot Compressive Imaging: Principle, Implementation, Theory, Algorithms and Applications
http://arxiv.org/abs/2103.04421v1
Capturing high-dimensional (HD) data is a long-term challenge in signal processing and related fields. Snapshot compressive imaging (SCI) uses a two-dimensional (2D) detector to capture HD ($\ge3$D) data in a {\em snapshot} measurement. Via novel optical designs, the 2D detector samples the HD data in a {\em compressive} manner; following this, algorithms are employed to reconstruct the desired HD data-cube. SCI has been used in hyperspectral imaging, video, holography, tomography, focal depth imaging, polarization imaging, microscopy, \etc.~Though the hardware has been investigated for more than a decade, the theoretical guarantees have only recently been derived. Inspired by deep learning, various deep neural networks have also been developed to reconstruct the HD data-cube in spectral SCI and video SCI. This article reviews recent advances in SCI hardware, theory and algorithms, including both optimization-based and deep-learning-based algorithms. Diverse applications and the outlook of SCI are also discussed.
true
true
Yuan, Xin and Brady, David J. and Katsaggelos, Aggelos K.
2021
null
null
10.1109/msp.2020.3023869
IEEE Signal Processing Magazine
Snapshot Compressive Imaging: Principle, Implementation, Theory, Algorithms and Applications
Snapshot Compressive Imaging: Theory, Algorithms, and ...
https://www.researchgate.net/publication/349697698_Snapshot_Compressive_Imaging_Theory_Algorithms_and_Applications
Snapshot compressive imaging (SCI) uses a 2D detector to capture HD (>3D) data in a snapshot measurement. Via novel optical designs, the 2D detector samples the
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
wang2023full
\cite{wang2023full}
Full-resolution and full-dynamic-range coded aperture compressive temporal imaging
null
null
true
false
Wang, Ping and Wang, Lishun and Qiao, Mu and Yuan, Xin
2023
null
null
null
Optics Letters
Full-resolution and full-dynamic-range coded aperture compressive temporal imaging
Full-resolution and full-dynamic-range coded aperture ...
https://opg.optica.org/abstract.cfm?uri=ol-48-18-4813
by P Wang · 2023 · Cited by 9 — Coded aperture compressive temporal imaging (CACTI) aims to capture a sequence of video frames in a single shot, using an off-the-shelf 2D sensor.
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
wang2024hierarchical
\cite{wang2024hierarchical}
Hierarchical Separable Video Transformer for Snapshot Compressive Imaging
http://arxiv.org/abs/2407.11946v2
Transformers have achieved the state-of-the-art performance on solving the inverse problem of Snapshot Compressive Imaging (SCI) for video, whose ill-posedness is rooted in the mixed degradation of spatial masking and temporal aliasing. However, previous Transformers lack an insight into the degradation and thus have limited performance and efficiency. In this work, we tailor an efficient reconstruction architecture without temporal aggregation in early layers and Hierarchical Separable Video Transformer (HiSViT) as building block. HiSViT is built by multiple groups of Cross-Scale Separable Multi-head Self-Attention (CSS-MSA) and Gated Self-Modulated Feed-Forward Network (GSM-FFN) with dense connections, each of which is conducted within a separate channel portions at a different scale, for multi-scale interactions and long-range modeling. By separating spatial operations from temporal ones, CSS-MSA introduces an inductive bias of paying more attention within frames instead of between frames while saving computational overheads. GSM-FFN further enhances the locality via gated mechanism and factorized spatial-temporal convolutions. Extensive experiments demonstrate that our method outperforms previous methods by $\!>\!0.5$ dB with comparable or fewer parameters and complexity. The source codes and pretrained models are released at https://github.com/pwangcs/HiSViT.
true
true
Wang, Ping and Zhang, Yulun and Wang, Lishun and Yuan, Xin
2024
null
null
null
null
Hierarchical Separable Video Transformer for Snapshot Compressive Imaging
pwangcs/HiSViT: [ECCV 2024] Hierarchical Separable ...
https://github.com/pwangcs/HiSViT
[ECCV 2024] Hierarchical Separable Video Transformer for Snapshot Compressive Imaging · Ping Wang, Yulun Zhang, Lishun Wang, Xin Yuan. Video SCI Reconstruction
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
wang2023deep
\cite{wang2023deep}
Deep Optics for Video Snapshot Compressive Imaging
http://arxiv.org/abs/2404.05274v1
Video snapshot compressive imaging (SCI) aims to capture a sequence of video frames with only a single shot of a 2D detector, whose backbones rest in optical modulation patterns (also known as masks) and a computational reconstruction algorithm. Advanced deep learning algorithms and mature hardware are putting video SCI into practical applications. Yet, there are two clouds in the sunshine of SCI: i) low dynamic range as a victim of high temporal multiplexing, and ii) existing deep learning algorithms' degradation on real system. To address these challenges, this paper presents a deep optics framework to jointly optimize masks and a reconstruction network. Specifically, we first propose a new type of structural mask to realize motion-aware and full-dynamic-range measurement. Considering the motion awareness property in measurement domain, we develop an efficient network for video SCI reconstruction using Transformer to capture long-term temporal dependencies, dubbed Res2former. Moreover, sensor response is introduced into the forward model of video SCI to guarantee end-to-end model training close to real system. Finally, we implement the learned structural masks on a digital micro-mirror device. Experimental results on synthetic and real data validate the effectiveness of the proposed framework. We believe this is a milestone for real-world video SCI. The source code and data are available at https://github.com/pwangcs/DeepOpticsSCI.
true
true
Wang, Ping and Wang, Lishun and Yuan, Xin
2023
null
null
null
null
Deep Optics for Video Snapshot Compressive Imaging
Deep Optics for Video Snapshot Compressive Imaging
http://arxiv.org/pdf/2404.05274v1
Video snapshot compressive imaging (SCI) aims to capture a sequence of video frames with only a single shot of a 2D detector, whose backbones rest in optical modulation patterns (also known as masks) and a computational reconstruction algorithm. Advanced deep learning algorithms and mature hardware are putting video SCI into practical applications. Yet, there are two clouds in the sunshine of SCI: i) low dynamic range as a victim of high temporal multiplexing, and ii) existing deep learning algorithms' degradation on real system. To address these challenges, this paper presents a deep optics framework to jointly optimize masks and a reconstruction network. Specifically, we first propose a new type of structural mask to realize motion-aware and full-dynamic-range measurement. Considering the motion awareness property in measurement domain, we develop an efficient network for video SCI reconstruction using Transformer to capture long-term temporal dependencies, dubbed Res2former. Moreover, sensor response is introduced into the forward model of video SCI to guarantee end-to-end model training close to real system. Finally, we implement the learned structural masks on a digital micro-mirror device. Experimental results on synthetic and real data validate the effectiveness of the proposed framework. We believe this is a milestone for real-world video SCI. The source code and data are available at https://github.com/pwangcs/DeepOpticsSCI.
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
figueiredo2007gradient
\cite{figueiredo2007gradient}
Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems
null
null
true
false
Figueiredo, M{\'a}rio AT and Nowak, Robert D and Wright, Stephen J
2007
null
null
null
IEEE Journal of Selected Topics in Signal Processing
Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems
Gradient Projection for Sparse Reconstruction: Application ...
https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=a5a5f31a9d521db9566db94410b06defbbd40c22
by MAT Figueiredo · Cited by 4600 — Gradient projection (GP) algorithms are proposed for sparse reconstruction in signal processing, using bound-constrained quadratic programming, and are faster
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
4587391
\cite{4587391}
An efficient algorithm for compressed MR imaging using total variation and wavelets
null
null
true
false
Shiqian Ma and Wotao Yin and Yin Zhang and Chakraborty, Amit
2008
null
null
null
null
An efficient algorithm for compressed MR imaging using total variation and wavelets
Compressed MRI reconstruction exploiting a rotation-invariant total ...
https://www.sciencedirect.com/science/article/abs/pii/S0730725X19307507
An efficient algorithm for compressed MR imaging using total variation and wavelets. M. Lustig et al. Compressed sensing MRI. IEEE Signal Processing Magazine.
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
he2009exploiting
\cite{he2009exploiting}
Exploiting structure in wavelet-based Bayesian compressive sensing
null
null
true
false
He, Lihan and Carin, Lawrence
2009
null
null
null
IEEE Transactions on Signal Processing
Exploiting structure in wavelet-based Bayesian compressive sensing
Exploiting structure in wavelet-based Bayesian compressive sensing
https://dl.acm.org/doi/abs/10.1109/tsp.2009.2022003
The structure exploited within the wavelet coefficients is consistent with that used in wavelet-based compression algorithms. A hierarchical Bayesian model is
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
blumensath2009iterative
\cite{blumensath2009iterative}
Iterative Hard Thresholding for Compressed Sensing
http://arxiv.org/abs/0805.0510v1
Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) - It gives near-optimal error guarantees. - It is robust to observation noise. - It succeeds with a minimum number of observations. - It can be used with any sampling operator for which the operator and its adjoint can be computed. - The memory requirement is linear in the problem size. - Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. - It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. - Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.
true
true
Blumensath, Thomas and Davies, Mike E
2009
null
null
null
Applied and Computational Harmonic Analysis
Iterative Hard Thresholding for Compressed Sensing
Iterative Hard Thresholding for Compressed Sensing
http://arxiv.org/pdf/0805.0510v1
Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) - It gives near-optimal error guarantees. - It is robust to observation noise. - It succeeds with a minimum number of observations. - It can be used with any sampling operator for which the operator and its adjoint can be computed. - The memory requirement is linear in the problem size. - Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. - It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. - Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
beck2009fast
\cite{beck2009fast}
A fast iterative shrinkage-thresholding algorithm for linear inverse problems
null
null
true
false
Beck, Amir and Teboulle, Marc
2009
null
null
null
SIAM Journal on Imaging Sciences
A fast iterative shrinkage-thresholding algorithm for linear inverse problems
[PDF] A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse ...
https://www.ceremade.dauphine.fr/~carlier/FISTA
Abstract. We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
kim2010compressed
\cite{kim2010compressed}
Compressed sensing using a Gaussian scale mixtures model in wavelet domain
null
null
true
false
Kim, Yookyung and Nadar, Mariappan S and Bilgin, Ali
2010
null
null
null
null
Compressed sensing using a Gaussian scale mixtures model in wavelet domain
Compressed Sensing With a Gaussian Scale Mixture ...
https://pmc.ncbi.nlm.nih.gov/articles/PMC6207971/
by J Meng · 2018 · Cited by 11 — In this method, the structure dependencies of signals in the wavelet domain were incorporated into the imaging framework through the Gaussian scale mixture
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
yang2011alternating
\cite{yang2011alternating}
Alternating Direction Algorithms for {$\ell_{1}$}-Problems in Compressive Sensing
null
null
true
false
Yang, Junfeng and Zhang, Yin
2011
null
null
null
SIAM Journal on Scientific Computing
Alternating Direction Algorithms for {$\ell_{1}$}-Problems in Compressive Sensing
[PDF] alternating direction algorithms for ℓ1-problems in compressive ...
https://www.cmor-faculty.rice.edu/~zhang/reports/tr0937.pdf
In this paper, we propose and study the use of alternating direction algorithms for several ℓ1-norm minimization problems arising from sparse solution recovery
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
dong2014compressive
\cite{dong2014compressive}
Compressive sensing via nonlocal low-rank regularization
null
null
true
false
Dong, Weisheng and Shi, Guangming and Li, Xin and Ma, Yi and Huang, Feng
2014
null
null
null
IEEE Transactions on Image Processing
Compressive sensing via nonlocal low-rank regularization
[PDF] Compressive Sensing via Nonlocal Low-rank Regularization
http://people.eecs.berkeley.edu/~yima/psfile/CS_low_rank_final.pdf
Experimental results have shown that the proposed NLR-CS algorithm can significantly outperform existing state-of-the-art CS techniques for image recovery.
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
Metzler2016FromDT
\cite{Metzler2016FromDT}
From Denoising to Compressed Sensing
http://arxiv.org/abs/1406.4175v5
A denoising algorithm seeks to remove noise, errors, or perturbations from a signal. Extensive research has been devoted to this arena over the last several decades, and as a result, today's denoisers can effectively remove large amounts of additive white Gaussian noise. A compressed sensing (CS) reconstruction algorithm seeks to recover a structured signal acquired using a small number of randomized measurements. Typical CS reconstruction algorithms can be cast as iteratively estimating a signal from a perturbed observation. This paper answers a natural question: How can one effectively employ a generic denoiser in a CS reconstruction algorithm? In response, we develop an extension of the approximate message passing (AMP) framework, called Denoising-based AMP (D-AMP), that can integrate a wide class of denoisers within its iterations. We demonstrate that, when used with a high performance denoiser for natural images, D-AMP offers state-of-the-art CS recovery performance while operating tens of times faster than competing methods. We explain the exceptional performance of D-AMP by analyzing some of its theoretical features. A key element in D-AMP is the use of an appropriate Onsager correction term in its iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.
true
true
Metzler, Christopher A and Maleki, Arian and Baraniuk, Richard G
2016
null
null
null
IEEE Transactions on Information Theory
From Denoising to Compressed Sensing
From Denoising to Compressed Sensing
http://arxiv.org/pdf/1406.4175v5
A denoising algorithm seeks to remove noise, errors, or perturbations from a signal. Extensive research has been devoted to this arena over the last several decades, and as a result, today's denoisers can effectively remove large amounts of additive white Gaussian noise. A compressed sensing (CS) reconstruction algorithm seeks to recover a structured signal acquired using a small number of randomized measurements. Typical CS reconstruction algorithms can be cast as iteratively estimating a signal from a perturbed observation. This paper answers a natural question: How can one effectively employ a generic denoiser in a CS reconstruction algorithm? In response, we develop an extension of the approximate message passing (AMP) framework, called Denoising-based AMP (D-AMP), that can integrate a wide class of denoisers within its iterations. We demonstrate that, when used with a high performance denoiser for natural images, D-AMP offers state-of-the-art CS recovery performance while operating tens of times faster than competing methods. We explain the exceptional performance of D-AMP by analyzing some of its theoretical features. A key element in D-AMP is the use of an appropriate Onsager correction term in its iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
zhang2021plug
\cite{zhang2021plug}
Deep Plug-and-Play Prior for Hyperspectral Image Restoration
http://arxiv.org/abs/2209.08240v1
Deep-learning-based hyperspectral image (HSI) restoration methods have gained great popularity for their remarkable performance but often demand expensive network retraining whenever the specifics of task changes. In this paper, we propose to restore HSIs in a unified approach with an effective plug-and-play method, which can jointly retain the flexibility of optimization-based methods and utilize the powerful representation capability of deep neural networks. Specifically, we first develop a new deep HSI denoiser leveraging gated recurrent convolution units, short- and long-term skip connections, and an augmented noise level map to better exploit the abundant spatio-spectral information within HSIs. It, therefore, leads to the state-of-the-art performance on HSI denoising under both Gaussian and complex noise settings. Then, the proposed denoiser is inserted into the plug-and-play framework as a powerful implicit HSI prior to tackle various HSI restoration tasks. Through extensive experiments on HSI super-resolution, compressed sensing, and inpainting, we demonstrate that our approach often achieves superior performance, which is competitive with or even better than the state-of-the-art on each task, via a single model without any task-specific training.
true
true
Zhang, Kai and Li, Yawei and Zuo, Wangmeng and Zhang, Lei and Van Gool, Luc and Timofte, Radu
2021
null
null
null
IEEE Transactions on Pattern Analysis and Machine Intelligence
Deep Plug-and-Play Prior for Hyperspectral Image Restoration
Deep Plug-and-Play Prior for Hyperspectral Image Restoration
https://www.researchgate.net/publication/363667470_Deep_Plug-and-Play_Prior_for_Hyperspectral_Image_Restoration
In this paper, we propose to restore HSIs in a unified approach with an effective plug-and-play method, which can jointly retain the flexibility
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
hurault2022gradient
\cite{hurault2022gradient}
Gradient Step Denoiser for convergent Plug-and-Play
http://arxiv.org/abs/2110.03220v2
Plug-and-Play methods constitute a class of iterative algorithms for imaging problems where regularization is performed by an off-the-shelf denoiser. Although Plug-and-Play methods can lead to tremendous visual performance for various image problems, the few existing convergence guarantees are based on unrealistic (or suboptimal) hypotheses on the denoiser, or limited to strongly convex data terms. In this work, we propose a new type of Plug-and-Play methods, based on half-quadratic splitting, for which the denoiser is realized as a gradient descent step on a functional parameterized by a deep neural network. Exploiting convergence results for proximal gradient descent algorithms in the non-convex setting, we show that the proposed Plug-and-Play algorithm is a convergent iterative scheme that targets stationary points of an explicit global functional. Besides, experiments show that it is possible to learn such a deep denoiser while not compromising the performance in comparison to other state-of-the-art deep denoisers used in Plug-and-Play schemes. We apply our proximal gradient algorithm to various ill-posed inverse problems, e.g. deblurring, super-resolution and inpainting. For all these applications, numerical results empirically confirm the convergence results. Experiments also show that this new algorithm reaches state-of-the-art performance, both quantitatively and qualitatively.
true
true
Hurault, Samuel and Leclaire, Arthur and Papadakis, Nicolas
2022
null
null
null
null
Gradient Step Denoiser for convergent Plug-and-Play
[2110.03220] Gradient Step Denoiser for convergent Plug-and-Play
https://arxiv.org/abs/2110.03220
We propose a new type of Plug-and-Play methods, based on half-quadratic splitting, for which the denoiser is realized as a gradient descent step.
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
hurault2022proximal
\cite{hurault2022proximal}
Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization
null
null
true
false
Hurault, Samuel and Leclaire, Arthur and Papadakis, Nicolas
2022
null
null
null
null
Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization
[PDF] Proximal Denoiser for Convergent Plug-and-Play Optimization with ...
https://icml.cc/media/icml-2022/Slides/18135.pdf
Proximal Denoiser for Convergent. Plug-and-Play Optimization with Nonconvex. Regularization. Samuel Hurault, Arthur Leclaire, Nicolas Papadakis. Institut de
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
fangs
\cite{fangs}
What's in a Prior? Learned Proximal Networks for Inverse Problems
http://arxiv.org/abs/2310.14344v2
Proximal operators are ubiquitous in inverse problems, commonly appearing as part of algorithmic strategies to regularize problems that are otherwise ill-posed. Modern deep learning models have been brought to bear for these tasks too, as in the framework of plug-and-play or deep unrolling, where they loosely resemble proximal operators. Yet, something essential is lost in employing these purely data-driven approaches: there is no guarantee that a general deep network represents the proximal operator of any function, nor is there any characterization of the function for which the network might provide some approximate proximal. This not only makes guaranteeing convergence of iterative schemes challenging but, more fundamentally, complicates the analysis of what has been learned by these networks about their training data. Herein we provide a framework to develop learned proximal networks (LPN), prove that they provide exact proximal operators for a data-driven nonconvex regularizer, and show how a new training strategy, dubbed proximal matching, provably promotes the recovery of the log-prior of the true data distribution. Such LPN provide general, unsupervised, expressive proximal operators that can be used for general inverse problems with convergence guarantees. We illustrate our results in a series of cases of increasing complexity, demonstrating that these models not only result in state-of-the-art performance, but provide a window into the resulting priors learned from data.
true
true
Fang, Zhenghan and Buchanan, Sam and Sulam, Jeremias
null
null
null
null
null
What's in a Prior? Learned Proximal Networks for Inverse Problems
What's in a Prior? Learned Proximal Networks for Inverse Problems
http://arxiv.org/pdf/2310.14344v2
Proximal operators are ubiquitous in inverse problems, commonly appearing as part of algorithmic strategies to regularize problems that are otherwise ill-posed. Modern deep learning models have been brought to bear for these tasks too, as in the framework of plug-and-play or deep unrolling, where they loosely resemble proximal operators. Yet, something essential is lost in employing these purely data-driven approaches: there is no guarantee that a general deep network represents the proximal operator of any function, nor is there any characterization of the function for which the network might provide some approximate proximal. This not only makes guaranteeing convergence of iterative schemes challenging but, more fundamentally, complicates the analysis of what has been learned by these networks about their training data. Herein we provide a framework to develop learned proximal networks (LPN), prove that they provide exact proximal operators for a data-driven nonconvex regularizer, and show how a new training strategy, dubbed proximal matching, provably promotes the recovery of the log-prior of the true data distribution. Such LPN provide general, unsupervised, expressive proximal operators that can be used for general inverse problems with convergence guarantees. We illustrate our results in a series of cases of increasing complexity, demonstrating that these models not only result in state-of-the-art performance, but provide a window into the resulting priors learned from data.
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
hu2024stochastic
\cite{hu2024stochastic}
Stochastic Deep Restoration Priors for Imaging Inverse Problems
http://arxiv.org/abs/2410.02057v1
Deep neural networks trained as image denoisers are widely used as priors for solving imaging inverse problems. While Gaussian denoising is thought sufficient for learning image priors, we show that priors from deep models pre-trained as more general restoration operators can perform better. We introduce Stochastic deep Restoration Priors (ShaRP), a novel method that leverages an ensemble of such restoration models to regularize inverse problems. ShaRP improves upon methods using Gaussian denoiser priors by better handling structured artifacts and enabling self-supervised training even without fully sampled data. We prove ShaRP minimizes an objective function involving a regularizer derived from the score functions of minimum mean square error (MMSE) restoration operators, and theoretically analyze its convergence. Empirically, ShaRP achieves state-of-the-art performance on tasks such as magnetic resonance imaging reconstruction and single-image super-resolution, surpassing both denoiser-and diffusion-model-based methods without requiring retraining.
true
true
Hu, Yuyang and Peng, Albert and Gan, Weijie and Milanfar, Peyman and Delbracio, Mauricio and Kamilov, Ulugbek S
2024
null
null
null
arXiv preprint arXiv:2410.02057
Stochastic Deep Restoration Priors for Imaging Inverse Problems
Stochastic Deep Restoration Priors for Imaging Inverse Problems
http://arxiv.org/pdf/2410.02057v1
Deep neural networks trained as image denoisers are widely used as priors for solving imaging inverse problems. While Gaussian denoising is thought sufficient for learning image priors, we show that priors from deep models pre-trained as more general restoration operators can perform better. We introduce Stochastic deep Restoration Priors (ShaRP), a novel method that leverages an ensemble of such restoration models to regularize inverse problems. ShaRP improves upon methods using Gaussian denoiser priors by better handling structured artifacts and enabling self-supervised training even without fully sampled data. We prove ShaRP minimizes an objective function involving a regularizer derived from the score functions of minimum mean square error (MMSE) restoration operators, and theoretically analyze its convergence. Empirically, ShaRP achieves state-of-the-art performance on tasks such as magnetic resonance imaging reconstruction and single-image super-resolution, surpassing both denoiser-and diffusion-model-based methods without requiring retraining.
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
kulkarni2016reconnet
\cite{kulkarni2016reconnet}
ReconNet: Non-Iterative Reconstruction of Images from Compressively Sensed Random Measurements
http://arxiv.org/abs/1601.06892v2
The goal of this paper is to present a non-iterative and more importantly an extremely fast algorithm to reconstruct images from compressively sensed (CS) random measurements. To this end, we propose a novel convolutional neural network (CNN) architecture which takes in CS measurements of an image as input and outputs an intermediate reconstruction. We call this network, ReconNet. The intermediate reconstruction is fed into an off-the-shelf denoiser to obtain the final reconstructed image. On a standard dataset of images we show significant improvements in reconstruction results (both in terms of PSNR and time complexity) over state-of-the-art iterative CS reconstruction algorithms at various measurement rates. Further, through qualitative experiments on real data collected using our block single pixel camera (SPC), we show that our network is highly robust to sensor noise and can recover visually better quality images than competitive algorithms at extremely low sensing rates of 0.1 and 0.04. To demonstrate that our algorithm can recover semantically informative images even at a low measurement rate of 0.01, we present a very robust proof of concept real-time visual tracking application.
true
true
Kulkarni, Kuldeep and Lohit, Suhas and Turaga, Pavan and Kerviche, Ronan and Ashok, Amit
2016
null
null
null
null
ReconNet: Non-Iterative Reconstruction of Images from Compressively Sensed Random Measurements
ReconNet: Non-Iterative Reconstruction of Images From ...
https://openaccess.thecvf.com/content_cvpr_2016/papers/Kulkarni_ReconNet_Non-Iterative_Reconstruction_CVPR_2016_paper.pdf
by K Kulkarni · 2016 · Cited by 941 — ReconNet is a non-iterative, fast CNN algorithm that reconstructs images from compressively sensed measurements, using a novel CNN architecture.
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
shi2019image
\cite{shi2019image}
Image compressed sensing using convolutional neural network
null
null
true
false
Shi, Wuzhen and Jiang, Feng and Liu, Shaohui and Zhao, Debin
2019
null
null
null
IEEE Transactions on Image Processing
Image compressed sensing using convolutional neural network
inofficialamanjha/Image-Compressed-Sensing-using- ...
https://github.com/inofficialamanjha/Image-Compressed-Sensing-using-convolutional-Neural-Network
We have implemented an image CS framework using Convolutional Neural Network (CSNet), that includes a sampling network and a reconstruction network, which are
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
shi2019scalable
\cite{shi2019scalable}
Scalable convolutional neural network for image compressed sensing
null
null
true
false
Shi, Wuzhen and Jiang, Feng and Liu, Shaohui and Zhao, Debin
2019
null
null
null
null
Scalable convolutional neural network for image compressed sensing
Scalable Convolutional Neural Network for Image ...
https://openaccess.thecvf.com/content_CVPR_2019/papers/Shi_Scalable_Convolutional_Neural_Network_for_Image_Compressed_Sensing_CVPR_2019_paper.pdf
by W Shi · 2019 · Cited by 205 — compressed sensing. SCSNet is the first to implement s- calable sampling and scalable reconstruction using CNN, which provides both coarse granular scalability
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
yao2019dr2
\cite{yao2019dr2}
Dr2-net: Deep residual reconstruction network for image compressive sensing
null
null
true
false
Yao, Hantao and Dai, Feng and Zhang, Shiliang and Zhang, Yongdong and Tian, Qi and Xu, Changsheng
2019
null
null
null
Neurocomputing
Dr2-net: Deep residual reconstruction network for image compressive sensing
DR2-Net: Deep Residual Reconstruction Network for Image Compressive Sensing
http://arxiv.org/pdf/1702.05743v4
Most traditional algorithms for compressive sensing image reconstruction suffer from the intensive computation. Recently, deep learning-based reconstruction algorithms have been reported, which dramatically reduce the time complexity than iterative reconstruction algorithms. In this paper, we propose a novel \textbf{D}eep \textbf{R}esidual \textbf{R}econstruction Network (DR$^{2}$-Net) to reconstruct the image from its Compressively Sensed (CS) measurement. The DR$^{2}$-Net is proposed based on two observations: 1) linear mapping could reconstruct a high-quality preliminary image, and 2) residual learning could further improve the reconstruction quality. Accordingly, DR$^{2}$-Net consists of two components, \emph{i.e.,} linear mapping network and residual network, respectively. Specifically, the fully-connected layer in neural network implements the linear mapping network. We then expand the linear mapping network to DR$^{2}$-Net by adding several residual learning blocks to enhance the preliminary image. Extensive experiments demonstrate that the DR$^{2}$-Net outperforms traditional iterative methods and recent deep learning-based methods by large margins at measurement rates 0.01, 0.04, 0.1, and 0.25, respectively. The code of DR$^{2}$-Net has been released on: https://github.com/coldrainyht/caffe\_dr2
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
metzler2017learned
\cite{metzler2017learned}
Learned D-AMP: Principled Neural Network based Compressive Image Recovery
null
null
true
false
Metzler, Chris and Mousavi, Ali and Baraniuk, Richard
2017
null
null
null
null
Learned D-AMP: Principled Neural Network based Compressive Image Recovery
Learned D-AMP: Principled Neural Network based Compressive Image Recovery
http://arxiv.org/pdf/1704.06625v4
Compressive image recovery is a challenging problem that requires fast and accurate algorithms. Recently, neural networks have been applied to this problem with promising results. By exploiting massively parallel GPU processing architectures and oodles of training data, they can run orders of magnitude faster than existing techniques. However, these methods are largely unprincipled black boxes that are difficult to train and often-times specific to a single measurement matrix. It was recently demonstrated that iterative sparse-signal-recovery algorithms can be "unrolled" to form interpretable deep networks. Taking inspiration from this work, we develop a novel neural network architecture that mimics the behavior of the denoising-based approximate message passing (D-AMP) algorithm. We call this new network Learned D-AMP (LDAMP). The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance. Most importantly, it outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. At high resolutions, and when used with sensing matrices that have fast implementations, LDAMP runs over $50\times$ faster than BM3D-AMP and hundreds of times faster than NLR-CS.
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
zhang2018ista
\cite{zhang2018ista}
ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing
null
null
true
false
Zhang, Jian and Ghanem, Bernard
2,018
null
null
null
null
ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing
ISTA-Net: Interpretable Optimization-Inspired Deep Network for ...
https://ieeexplore.ieee.org/iel7/8576498/8578098/08578294.pdf
ISTA-Net is a structured deep network inspired by ISTA for image compressive sensing, combining traditional and network-based methods, with learned parameters.
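The ISTA-Net record above unrolls the classical iterative shrinkage-thresholding algorithm (ISTA) into a trainable network. For reference, a minimal NumPy sketch of plain, non-learned ISTA for the underlying sparse recovery problem is shown below; the Gaussian sensing matrix, sparsity level, and parameter values in the toy example are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam=0.1, n_iter=200):
    """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, lam * step)
    return x

# Toy example: recover a sparse vector from compressive measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / np.sqrt(64)
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
y = A @ x_true
x_hat = ista(A, y)
```

ISTA-Net keeps this gradient-step / shrinkage-step structure but learns the transform domain and thresholding nonlinearity instead of using a fixed soft-threshold.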
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
yang2018admm
\cite{yang2018admm}
ADMM-CSNet: A deep learning approach for image compressive sensing
null
null
true
false
Yang, Yan and Sun, Jian and Li, Huibin and Xu, Zongben
2,018
null
null
null
IEEE Transactions on Pattern Analysis and Machine Intelligence
ADMM-CSNet: A deep learning approach for image compressive sensing
ADMM-CSNet: A Deep Learning Approach for Image Compressive ...
https://ieeexplore.ieee.org/document/8550778/
In this paper, we propose two versions of a novel deep learning architecture, dubbed as ADMM-CSNet, by combining the traditional model-based CS method and data
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
zhang2020optimization
\cite{zhang2020optimization}
Optimization-inspired compact deep compressive sensing
null
null
true
false
Zhang, Jian and Zhao, Chen and Gao, Wen
2,020
null
null
null
IEEE Journal of Selected Topics in Signal Processing
Optimization-inspired compact deep compressive sensing
Optimization-Inspired Compact Deep Compressive Sensing
https://ieeexplore.ieee.org/document/9019857/
In this paper, we propose a novel framework to design an OPtimization-INspired Explicable deep Network, dubbed OPINE-Net, for adaptive sampling and recovery.
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
zhang2020amp
\cite{zhang2020amp}
AMP-Net: Denoising-based deep unfolding for compressive image sensing
null
null
true
false
Zhang, Zhonghao and Liu, Yipeng and Liu, Jiani and Wen, Fei and Zhu, Ce
2,020
null
null
null
IEEE Transactions on Image Processing
AMP-Net: Denoising-based deep unfolding for compressive image sensing
Denoising-Based Deep Unfolding for Compressive Image ...
https://ieeexplore.ieee.org/iel7/83/9263394/09298950.pdf
by Z Zhang · 2020 · Cited by 297 — AMP-Net is a deep unfolding model for compressive image sensing, established by unfolding the denoising process of the approximate message
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
shen2022transcs
\cite{shen2022transcs}
TransCS: a transformer-based hybrid architecture for image compressed sensing
null
null
true
false
Shen, Minghe and Gan, Hongping and Ning, Chao and Hua, Yi and Zhang, Tao
2,022
null
null
null
IEEE Transactions on Image Processing
TransCS: a transformer-based hybrid architecture for image compressed sensing
TransCS: A Transformer-Based Hybrid Architecture for ...
https://www.researchgate.net/publication/364935930_TransCS_A_Transformer-based_Hybrid_Architecture_for_Image_Compressed_Sensing
In this paper, we propose a novel Transformer-based hybrid architecture (dubbed TransCS) to achieve high-quality image CS. In the sampling module, TransCS
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
song2021memory
\cite{song2021memory}
Memory-Augmented Deep Unfolding Network for Compressive Sensing
http://arxiv.org/abs/2110.09766v2
Mapping a truncated optimization method into a deep neural network, deep unfolding network (DUN) has attracted growing attention in compressive sensing (CS) due to its good interpretability and high performance. Each stage in DUNs corresponds to one iteration in optimization. By understanding DUNs from the perspective of the human brain's memory processing, we find there exists two issues in existing DUNs. One is the information between every two adjacent stages, which can be regarded as short-term memory, is usually lost seriously. The other is no explicit mechanism to ensure that the previous stages affect the current stage, which means memory is easily forgotten. To solve these issues, in this paper, a novel DUN with persistent memory for CS is proposed, dubbed Memory-Augmented Deep Unfolding Network (MADUN). We design a memory-augmented proximal mapping module (MAPMM) by combining two types of memory augmentation mechanisms, namely High-throughput Short-term Memory (HSM) and Cross-stage Long-term Memory (CLM). HSM is exploited to allow DUNs to transmit multi-channel short-term memory, which greatly reduces information loss between adjacent stages. CLM is utilized to develop the dependency of deep information across cascading stages, which greatly enhances network representation capability. Extensive CS experiments on natural and MR images show that with the strong ability to maintain and balance information our MADUN outperforms existing state-of-the-art methods by a large margin. The source code is available at https://github.com/jianzhangcs/MADUN/.
true
true
Song, Jiechong and Chen, Bin and Zhang, Jian
2,021
null
null
null
null
Memory-Augmented Deep Unfolding Network for Compressive Sensing
Memory-Augmented Deep Unfolding Network for Compressive ...
https://dl.acm.org/doi/10.1145/3474085.3475562
Learning memory augmented cascading network for compressed sensing of images. In Proceedings of the European Conference on Computer Vision (ECCV)
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
you2021coast
\cite{you2021coast}
COAST: COntrollable Arbitrary-Sampling NeTwork for Compressive Sensing
http://arxiv.org/abs/2107.07225v1
Recent deep network-based compressive sensing (CS) methods have achieved great success. However, most of them regard different sampling matrices as different independent tasks and need to train a specific model for each target sampling matrix. Such practices give rise to inefficiency in computing and suffer from poor generalization ability. In this paper, we propose a novel COntrollable Arbitrary-Sampling neTwork, dubbed COAST, to solve CS problems of arbitrary-sampling matrices (including unseen sampling matrices) with one single model. Under the optimization-inspired deep unfolding framework, our COAST exhibits good interpretability. In COAST, a random projection augmentation (RPA) strategy is proposed to promote the training diversity in the sampling space to enable arbitrary sampling, and a controllable proximal mapping module (CPMM) and a plug-and-play deblocking (PnP-D) strategy are further developed to dynamically modulate the network features and effectively eliminate the blocking artifacts, respectively. Extensive experiments on widely used benchmark datasets demonstrate that our proposed COAST is not only able to handle arbitrary sampling matrices with one single model but also to achieve state-of-the-art performance with fast speed. The source code is available on https://github.com/jianzhangcs/COAST.
true
true
You, Di and Zhang, Jian and Xie, Jingfen and Chen, Bin and Ma, Siwei
2,021
null
null
null
IEEE Transactions on Image Processing
COAST: COntrollable Arbitrary-Sampling NeTwork for Compressive Sensing
COntrollable Arbitrary-Sampling NeTwork for Compressive ...
https://ieeexplore.ieee.org/iel7/83/9263394/09467810.pdf
by D You · 2021 · Cited by 150 — In this paper, we propose a novel COntrollable Arbitrary-Sampling neTwork, dubbed COAST, to solve CS problems of arbitrary-sampling matrices.
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
mou2022deep
\cite{mou2022deep}
Deep Generalized Unfolding Networks for Image Restoration
http://arxiv.org/abs/2204.13348v1
Deep neural networks (DNN) have achieved great success in image restoration. However, most DNN methods are designed as a black box, lacking transparency and interpretability. Although some methods are proposed to combine traditional optimization algorithms with DNN, they usually demand pre-defined degradation processes or handcrafted assumptions, making it difficult to deal with complex and real-world applications. In this paper, we propose a Deep Generalized Unfolding Network (DGUNet) for image restoration. Concretely, without loss of interpretability, we integrate a gradient estimation strategy into the gradient descent step of the Proximal Gradient Descent (PGD) algorithm, driving it to deal with complex and real-world image degradation. In addition, we design inter-stage information pathways across proximal mapping in different PGD iterations to rectify the intrinsic information loss in most deep unfolding networks (DUN) through a multi-scale and spatial-adaptive way. By integrating the flexible gradient descent and informative proximal mapping, we unfold the iterative PGD algorithm into a trainable DNN. Extensive experiments on various image restoration tasks demonstrate the superiority of our method in terms of state-of-the-art performance, interpretability, and generalizability. The source code is available at https://github.com/MC-E/Deep-Generalized-Unfolding-Networks-for-Image-Restoration.
true
true
Mou, Chong and Wang, Qian and Zhang, Jian
2,022
null
null
null
null
Deep Generalized Unfolding Networks for Image Restoration
Deep Generalized Unfolding Networks for Image Restoration
http://arxiv.org/pdf/2204.13348v1
Deep neural networks (DNN) have achieved great success in image restoration. However, most DNN methods are designed as a black box, lacking transparency and interpretability. Although some methods are proposed to combine traditional optimization algorithms with DNN, they usually demand pre-defined degradation processes or handcrafted assumptions, making it difficult to deal with complex and real-world applications. In this paper, we propose a Deep Generalized Unfolding Network (DGUNet) for image restoration. Concretely, without loss of interpretability, we integrate a gradient estimation strategy into the gradient descent step of the Proximal Gradient Descent (PGD) algorithm, driving it to deal with complex and real-world image degradation. In addition, we design inter-stage information pathways across proximal mapping in different PGD iterations to rectify the intrinsic information loss in most deep unfolding networks (DUN) through a multi-scale and spatial-adaptive way. By integrating the flexible gradient descent and informative proximal mapping, we unfold the iterative PGD algorithm into a trainable DNN. Extensive experiments on various image restoration tasks demonstrate the superiority of our method in terms of state-of-the-art performance, interpretability, and generalizability. The source code is available at https://github.com/MC-E/Deep-Generalized-Unfolding-Networks-for-Image-Restoration.
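Several deep unfolding networks cited in this list (ISTA-Net, OPINE-Net, MADUN, DGUNet, OCTUF) share the same skeleton: a data-fidelity gradient step followed by a learned proximal mapping, repeated for a fixed number of stages. The PyTorch sketch below illustrates only that generic skeleton; the `ProxCNN` module, the stage count, and the learnable step sizes are illustrative placeholders, not a reproduction of any specific paper's architecture.

```python
import torch
import torch.nn as nn

class ProxCNN(nn.Module):
    """A small learned proximal operator (illustrative, not a cited paper's module)."""
    def __init__(self, channels=1, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)          # residual refinement

class UnrolledPGD(nn.Module):
    """K stages of: gradient step on 0.5*||Ax - y||^2, then a learned proximal mapping."""
    def __init__(self, stages=5):
        super().__init__()
        self.prox = nn.ModuleList([ProxCNN() for _ in range(stages)])
        self.step = nn.Parameter(torch.full((stages,), 0.5))   # learnable step sizes

    def forward(self, y, A, x0):
        # y: (B, M), A: (M, N), x0: (B, 1, H, W) with N = H*W
        b, _, h, w = x0.shape
        x = x0
        for k, prox in enumerate(self.prox):
            xv = x.reshape(b, -1)                      # flatten image to a vector
            grad = (xv @ A.t() - y) @ A                # gradient of 0.5*||A x - y||^2
            z = xv - self.step[k] * grad
            x = prox(z.reshape(b, 1, h, w))            # proximal / denoising stage
        return x
```

A concrete model would pair this skeleton with a fixed or jointly learned sampling operator A and train the whole pipeline end-to-end against ground-truth images.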
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
ye2023csformer
\cite{ye2023csformer}
CSformer: Bridging Convolution and Transformer for Compressive Sensing
http://arxiv.org/abs/2112.15299v1
Convolution neural networks (CNNs) have succeeded in compressive image sensing. However, due to the inductive bias of locality and weight sharing, the convolution operations demonstrate the intrinsic limitations in modeling the long-range dependency. Transformer, designed initially as a sequence-to-sequence model, excels at capturing global contexts due to the self-attention-based architectures even though it may be equipped with limited localization abilities. This paper proposes CSformer, a hybrid framework that integrates the advantages of leveraging both detailed spatial information from CNN and the global context provided by transformer for enhanced representation learning. The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery. In the sampling module, images are measured block-by-block by the learned sampling matrix. In the reconstruction stage, the measurement is projected into dual stems. One is the CNN stem for modeling the neighborhood relationships by convolution, and the other is the transformer stem for adopting global self-attention mechanism. The dual branches structure is concurrent, and the local features and global representations are fused under different resolutions to maximize the complementary of features. Furthermore, we explore a progressive strategy and window-based transformer block to reduce the parameter and computational complexity. The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing, which achieves superior performance compared to state-of-the-art methods on different datasets.
true
true
Ye, Dongjie and Ni, Zhangkai and Wang, Hanli and Zhang, Jian and Wang, Shiqi and Kwong, Sam
2,023
null
null
null
IEEE Transactions on Image Processing
CSformer: Bridging Convolution and Transformer for Compressive Sensing
CSformer: Bridging Convolution and Transformer for Compressive Sensing
http://arxiv.org/pdf/2112.15299v1
Convolution neural networks (CNNs) have succeeded in compressive image sensing. However, due to the inductive bias of locality and weight sharing, the convolution operations demonstrate the intrinsic limitations in modeling the long-range dependency. Transformer, designed initially as a sequence-to-sequence model, excels at capturing global contexts due to the self-attention-based architectures even though it may be equipped with limited localization abilities. This paper proposes CSformer, a hybrid framework that integrates the advantages of leveraging both detailed spatial information from CNN and the global context provided by transformer for enhanced representation learning. The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery. In the sampling module, images are measured block-by-block by the learned sampling matrix. In the reconstruction stage, the measurement is projected into dual stems. One is the CNN stem for modeling the neighborhood relationships by convolution, and the other is the transformer stem for adopting global self-attention mechanism. The dual branches structure is concurrent, and the local features and global representations are fused under different resolutions to maximize the complementary of features. Furthermore, we explore a progressive strategy and window-based transformer block to reduce the parameter and computational complexity. The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing, which achieves superior performance compared to state-of-the-art methods on different datasets.
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
song2023optimization
\cite{song2023optimization}
Optimization-Inspired Cross-Attention Transformer for Compressive Sensing
http://arxiv.org/abs/2304.13986v1
By integrating certain optimization solvers with deep neural networks, deep unfolding network (DUN) with good interpretability and high performance has attracted growing attention in compressive sensing (CS). However, existing DUNs often improve the visual quality at the price of a large number of parameters and have the problem of feature information loss during iteration. In this paper, we propose an Optimization-inspired Cross-attention Transformer (OCT) module as an iterative process, leading to a lightweight OCT-based Unfolding Framework (OCTUF) for image CS. Specifically, we design a novel Dual Cross Attention (Dual-CA) sub-module, which consists of an Inertia-Supplied Cross Attention (ISCA) block and a Projection-Guided Cross Attention (PGCA) block. ISCA block introduces multi-channel inertia forces and increases the memory effect by a cross attention mechanism between adjacent iterations. And, PGCA block achieves an enhanced information interaction, which introduces the inertia force into the gradient descent step through a cross attention block. Extensive CS experiments manifest that our OCTUF achieves superior performance compared to state-of-the-art methods while training lower complexity. Codes are available at https://github.com/songjiechong/OCTUF.
true
true
Song, Jiechong and Mou, Chong and Wang, Shiqi and Ma, Siwei and Zhang, Jian
2,023
null
null
null
null
Optimization-Inspired Cross-Attention Transformer for Compressive Sensing
Optimization-Inspired Cross-Attention Transformer for ...
https://arxiv.org/abs/2304.13986
by J Song · 2023 · Cited by 70 — In this paper, we propose an Optimization-inspired Cross-attention Transformer (OCT) module as an iterative process, leading to a lightweight OCT-based
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
wang2023saunet
\cite{wang2023saunet}
SAUNet: Spatial-Attention Unfolding Network for Image Compressive Sensing
null
null
true
false
Wang, Ping and Yuan, Xin
2,023
null
null
null
null
SAUNet: Spatial-Attention Unfolding Network for Image Compressive Sensing
Spatial-Attention Unfolding Network for Image Compressive Sensing".
https://github.com/pwangcs/SAUNet
SAUNet has achieved SOTA performance. More importantly, SAUNet contributes to real-world image compressive sensing systems, such as single-pixel cameras.
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
wang2024ufc
\cite{wang2024ufc}
UFC-Net: Unrolling Fixed-point Continuous Network for Deep Compressive Sensing
null
null
true
false
Wang, Xiaoyang and Gan, Hongping
2,024
null
null
null
null
UFC-Net: Unrolling Fixed-point Continuous Network for Deep Compressive Sensing
[PDF] UFC-Net: Unrolling Fixed-point Continuous Network for Deep ...
https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_UFC-Net_Unrolling_Fixed-point_Continuous_Network_for_Deep_Compressive_Sensing_CVPR_2024_paper.pdf
In this paper, we propose Unrolling Fixed-point Continuous Network (UFC-Net), a novel deep CS framework motivated by the traditional fixed-point continuous
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
guo2024cpp
\cite{guo2024cpp}
CPP-Net: Embracing Multi-Scale Feature Fusion into Deep Unfolding CP-PPA Network for Compressive Sensing
null
null
true
false
Guo, Zhen and Gan, Hongping
2,024
null
null
null
null
CPP-Net: Embracing Multi-Scale Feature Fusion into Deep Unfolding CP-PPA Network for Compressive Sensing
[PDF] Embracing Multi-Scale Feature Fusion into Deep Unfolding CP-PPA ...
https://openaccess.thecvf.com/content/CVPR2024/papers/Guo_CPP-Net_Embracing_Multi-Scale_Feature_Fusion_into_Deep_Unfolding_CP-PPA_Network_CVPR_2024_paper.pdf
In this paper, we propose CPP-Net, a novel deep unfolding CS framework, inspired by the primal-dual hybrid strategy of the Chambolle and Pock Proximal Point
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
qu2024dual
\cite{qu2024dual}
Dual-Scale Transformer for Large-Scale Single-Pixel Imaging
http://arxiv.org/abs/2404.05001v1
Single-pixel imaging (SPI) is a potential computational imaging technique which produces image by solving an illposed reconstruction problem from few measurements captured by a single-pixel detector. Deep learning has achieved impressive success on SPI reconstruction. However, previous poor reconstruction performance and impractical imaging model limit its real-world applications. In this paper, we propose a deep unfolding network with hybrid-attention Transformer on Kronecker SPI model, dubbed HATNet, to improve the imaging quality of real SPI cameras. Specifically, we unfold the computation graph of the iterative shrinkagethresholding algorithm (ISTA) into two alternative modules: efficient tensor gradient descent and hybrid-attention multiscale denoising. By virtue of Kronecker SPI, the gradient descent module can avoid high computational overheads rooted in previous gradient descent modules based on vectorized SPI. The denoising module is an encoder-decoder architecture powered by dual-scale spatial attention for high- and low-frequency aggregation and channel attention for global information recalibration. Moreover, we build a SPI prototype to verify the effectiveness of the proposed method. Extensive experiments on synthetic and real data demonstrate that our method achieves the state-of-the-art performance. The source code and pre-trained models are available at https://github.com/Gang-Qu/HATNet-SPI.
true
true
Qu, Gang and Wang, Ping and Yuan, Xin
2,024
null
null
null
null
Dual-Scale Transformer for Large-Scale Single-Pixel Imaging
[PDF] Dual-Scale Transformer for Large-Scale Single-Pixel Imaging
https://openaccess.thecvf.com/content/CVPR2024/papers/Qu_Dual-Scale_Transformer_for_Large-Scale_Single-Pixel_Imaging_CVPR_2024_paper.pdf
In this paper, we propose a deep unfolding network with hybrid-attention Transformer on Kronecker SPI model, dubbed HATNet, to improve the imaging quality of
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
yuan2016generalized
\cite{yuan2016generalized}
Generalized Alternating Projection Based Total Variation Minimization for Compressive Sensing
http://arxiv.org/abs/1511.03890v1
We consider the total variation (TV) minimization problem used for compressive sensing and solve it using the generalized alternating projection (GAP) algorithm. Extensive results demonstrate the high performance of proposed algorithm on compressive sensing, including two dimensional images, hyperspectral images and videos. We further derive the Alternating Direction Method of Multipliers (ADMM) framework with TV minimization for video and hyperspectral image compressive sensing under the CACTI and CASSI framework, respectively. Connections between GAP and ADMM are also provided.
true
true
Yuan, Xin
2,016
null
null
null
null
Generalized Alternating Projection Based Total Variation Minimization for Compressive Sensing
Generalized alternating projection based total variation minimization ...
https://ieeexplore.ieee.org/document/7532817/
We consider the total variation (TV) minimization problem used for compressive sensing and solve it using the generalized alternating projection (GAP)
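GAP-TV, as summarized above, alternates a Euclidean projection onto the measurement-consistent set {x : Ax = y} with a total-variation denoising step. The sketch below is a hedged re-implementation of that alternation, borrowing scikit-image's `denoise_tv_chambolle` as the TV step; the iteration count and TV weight are assumed values, and the published algorithm contains details (e.g., acceleration of the residual) not reproduced here.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def gap_tv(y, A, img_shape, n_iter=60, tv_weight=0.1):
    """GAP-TV sketch: alternate projection onto {x : A x = y} with TV denoising."""
    AAt_inv = np.linalg.inv(A @ A.T)        # small M x M system for the projection
    theta = A.T @ y                         # simple back-projected initialization
    for _ in range(n_iter):
        # Euclidean projection of theta onto the affine set A x = y
        x = theta + A.T @ (AAt_inv @ (y - A @ theta))
        # TV denoising of the projected estimate
        theta = denoise_tv_chambolle(x.reshape(img_shape), weight=tv_weight).ravel()
    return theta.reshape(img_shape)
```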
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
geman1995nonlinear
\cite{geman1995nonlinear}
Nonlinear image recovery with half-quadratic regularization
null
null
true
false
Geman, Donald and Yang, Chengda
1,995
null
null
null
IEEE transactions on Image Processing
Nonlinear image recovery with half-quadratic regularization
Nonlinear image recovery with half-quadratic regularization
https://www.semanticscholar.org/paper/Nonlinear-image-recovery-with-half-quadratic-Geman-Yang/1c99baa92387ead70c668dde6a6ed73b20697a6f
This approach is based on an auxiliary array and an extended objective function in which the original variables appear quadratically and the auxiliary
Proximal Algorithm Unrolling: Flexible and Efficient Reconstruction Networks for Single-Pixel Imaging
2505.23180v1
romano2017little
\cite{romano2017little}
The Little Engine that Could: Regularization by Denoising (RED)
http://arxiv.org/abs/1611.02862v3
Removal of noise from an image is an extensively studied problem in image processing. Indeed, the recent advent of sophisticated and highly effective denoising algorithms lead some to believe that existing methods are touching the ceiling in terms of noise removal performance. Can we leverage this impressive achievement to treat other tasks in image processing? Recent work has answered this question positively, in the form of the Plug-and-Play Prior ($P^3$) method, showing that any inverse problem can be handled by sequentially applying image denoising steps. This relies heavily on the ADMM optimization technique in order to obtain this chained denoising interpretation. Is this the only way in which tasks in image processing can exploit the image denoising engine? In this paper we provide an alternative, more powerful and more flexible framework for achieving the same goal. As opposed to the $P^3$ method, we offer Regularization by Denoising (RED): using the denoising engine in defining the regularization of the inverse problem. We propose an explicit image-adaptive Laplacian-based regularization functional, making the overall objective functional clearer and better defined. With a complete flexibility to choose the iterative optimization procedure for minimizing the above functional, RED is capable of incorporating any image denoising algorithm, treat general inverse problems very effectively, and is guaranteed to converge to the globally optimal result. We test this approach and demonstrate state-of-the-art results in the image deblurring and super-resolution problems.
true
true
Romano, Yaniv and Elad, Michael and Milanfar, Peyman
2,017
null
null
null
SIAM Journal on Imaging Sciences
The Little Engine that Could: Regularization by Denoising (RED)
The Little Engine that Could: Regularization by Denoising (RED)
http://arxiv.org/pdf/1611.02862v3
Removal of noise from an image is an extensively studied problem in image processing. Indeed, the recent advent of sophisticated and highly effective denoising algorithms lead some to believe that existing methods are touching the ceiling in terms of noise removal performance. Can we leverage this impressive achievement to treat other tasks in image processing? Recent work has answered this question positively, in the form of the Plug-and-Play Prior ($P^3$) method, showing that any inverse problem can be handled by sequentially applying image denoising steps. This relies heavily on the ADMM optimization technique in order to obtain this chained denoising interpretation. Is this the only way in which tasks in image processing can exploit the image denoising engine? In this paper we provide an alternative, more powerful and more flexible framework for achieving the same goal. As opposed to the $P^3$ method, we offer Regularization by Denoising (RED): using the denoising engine in defining the regularization of the inverse problem. We propose an explicit image-adaptive Laplacian-based regularization functional, making the overall objective functional clearer and better defined. With a complete flexibility to choose the iterative optimization procedure for minimizing the above functional, RED is capable of incorporating any image denoising algorithm, treat general inverse problems very effectively, and is guaranteed to converge to the globally optimal result. We test this approach and demonstrate state-of-the-art results in the image deblurring and super-resolution problems.
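RED, as described above, builds the regularizer from a denoising engine D(x) and shows that, under its conditions on D, the gradient of the regularization term reduces to x - D(x). The steepest-descent sketch below illustrates that update; the Gaussian filter standing in for the denoiser, the initialization, and all step sizes are illustrative assumptions rather than the paper's recommended choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def red_steepest_descent(y, A, img_shape, lam=0.2, mu=0.1, n_iter=100, sigma=1.5):
    """Steepest-descent RED sketch for 0.5*||A x - y||^2 + 0.5*lam * x^T (x - D(x)).

    Under RED's conditions on the denoiser D, the objective gradient is
    A^T (A x - y) + lam * (x - D(x)); a Gaussian filter stands in for D here.
    """
    x = A.T @ y                                        # back-projected start
    for _ in range(n_iter):
        denoised = gaussian_filter(x.reshape(img_shape), sigma).ravel()
        grad = A.T @ (A @ x - y) + lam * (x - denoised)
        x = x - mu * grad
    return x.reshape(img_shape)
```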
PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization
2505.22616v1
choi2007motion
\cite{choi2007motion}
Motion-compensated frame interpolation using bilateral motion estimation and adaptive overlapped block motion compensation
null
null
true
false
Choi, Byeong-Doo and Han, Jong-Woo and Kim, Chang-Su and Ko, Sung-Jea
2,007
null
null
null
IEEE Transactions on Circuits and Systems for Video Technology
Motion-compensated frame interpolation using bilateral motion estimation and adaptive overlapped block motion compensation
Motion-compensated frame interpolation using bilateral ...
https://pure.korea.ac.kr/en/publications/motion-compensated-frame-interpolation-using-bilateral-motion-est/fingerprints/
Dive into the research topics of 'Motion-compensated frame interpolation using bilateral motion estimation and adaptive overlapped block motion compensation'.
PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization
2505.22616v1
parihar2022comprehensive
\cite{parihar2022comprehensive}
AceVFI: A Comprehensive Survey of Advances in Video Frame Interpolation
http://arxiv.org/abs/2506.01061v1
Video Frame Interpolation (VFI) is a fundamental Low-Level Vision (LLV) task that synthesizes intermediate frames between existing ones while maintaining spatial and temporal coherence. VFI techniques have evolved from classical motion compensation-based approach to deep learning-based approach, including kernel-, flow-, hybrid-, phase-, GAN-, Transformer-, Mamba-, and more recently diffusion model-based approach. We introduce AceVFI, the most comprehensive survey on VFI to date, covering over 250+ papers across these approaches. We systematically organize and describe VFI methodologies, detailing the core principles, design assumptions, and technical characteristics of each approach. We categorize the learning paradigm of VFI methods namely, Center-Time Frame Interpolation (CTFI) and Arbitrary-Time Frame Interpolation (ATFI). We analyze key challenges of VFI such as large motion, occlusion, lighting variation, and non-linear motion. In addition, we review standard datasets, loss functions, evaluation metrics. We examine applications of VFI including event-based, cartoon, medical image VFI and joint VFI with other LLV tasks. We conclude by outlining promising future research directions to support continued progress in the field. This survey aims to serve as a unified reference for both newcomers and experts seeking a deep understanding of modern VFI landscapes.
true
true
Parihar, Anil Singh and Varshney, Disha and Pandya, Kshitija and Aggarwal, Ashray
2,022
null
null
null
The Visual Computer
AceVFI: A Comprehensive Survey of Advances in Video Frame Interpolation
AceVFI: A Comprehensive Survey of Advances in Video Frame Interpolation
http://arxiv.org/pdf/2506.01061v1
Video Frame Interpolation (VFI) is a fundamental Low-Level Vision (LLV) task that synthesizes intermediate frames between existing ones while maintaining spatial and temporal coherence. VFI techniques have evolved from classical motion compensation-based approach to deep learning-based approach, including kernel-, flow-, hybrid-, phase-, GAN-, Transformer-, Mamba-, and more recently diffusion model-based approach. We introduce AceVFI, the most comprehensive survey on VFI to date, covering over 250+ papers across these approaches. We systematically organize and describe VFI methodologies, detailing the core principles, design assumptions, and technical characteristics of each approach. We categorize the learning paradigm of VFI methods namely, Center-Time Frame Interpolation (CTFI) and Arbitrary-Time Frame Interpolation (ATFI). We analyze key challenges of VFI such as large motion, occlusion, lighting variation, and non-linear motion. In addition, we review standard datasets, loss functions, evaluation metrics. We examine applications of VFI including event-based, cartoon, medical image VFI and joint VFI with other LLV tasks. We conclude by outlining promising future research directions to support continued progress in the field. This survey aims to serve as a unified reference for both newcomers and experts seeking a deep understanding of modern VFI landscapes.
PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization
2505.22616v1
DAIN
\cite{DAIN}
Depth-Aware Video Frame Interpolation
http://arxiv.org/abs/1904.00830v1
Video frame interpolation aims to synthesize nonexistent frames in-between the original frames. While significant advances have been made from the recent deep convolutional neural networks, the quality of interpolation is often reduced due to large object motion or occlusion. In this work, we propose a video frame interpolation method which explicitly detects the occlusion by exploring the depth information. Specifically, we develop a depth-aware flow projection layer to synthesize intermediate flows that preferably sample closer objects than farther ones. In addition, we learn hierarchical features to gather contextual information from neighboring pixels. The proposed model then warps the input frames, depth maps, and contextual features based on the optical flow and local interpolation kernels for synthesizing the output frame. Our model is compact, efficient, and fully differentiable. Quantitative and qualitative results demonstrate that the proposed model performs favorably against state-of-the-art frame interpolation methods on a wide variety of datasets.
true
true
Bao, Wenbo and Lai, Wei-Sheng and Ma, Chao and Zhang, Xiaoyun and Gao, Zhiyong and Yang, Ming-Hsuan
2,019
null
null
null
null
Depth-Aware Video Frame Interpolation
[PDF] Depth-Aware Video Frame Interpolation - CVF Open Access
https://openaccess.thecvf.com/content_CVPR_2019/papers/Bao_Depth-Aware_Video_Frame_Interpolation_CVPR_2019_paper.pdf
Video frame interpolation aims to synthesize non-existent frames in-between the original frames. While significant advances have been made from the
PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization
2505.22616v1
RIFE
\cite{RIFE}
Real-Time Intermediate Flow Estimation for Video Frame Interpolation
http://arxiv.org/abs/2011.06294v12
Real-time video frame interpolation (VFI) is very useful in video processing, media players, and display devices. We propose RIFE, a Real-time Intermediate Flow Estimation algorithm for VFI. To realize a high-quality flow-based VFI method, RIFE uses a neural network named IFNet that can estimate the intermediate flows end-to-end with much faster speed. A privileged distillation scheme is designed for stable IFNet training and improve the overall performance. RIFE does not rely on pre-trained optical flow models and can support arbitrary-timestep frame interpolation with the temporal encoding input. Experiments demonstrate that RIFE achieves state-of-the-art performance on several public benchmarks. Compared with the popular SuperSlomo and DAIN methods, RIFE is 4--27 times faster and produces better results. Furthermore, RIFE can be extended to wider applications thanks to temporal encoding. The code is available at https://github.com/megvii-research/ECCV2022-RIFE.
true
true
Huang, Zhewei and Zhang, Tianyuan and Heng, Wen and Shi, Boxin and Zhou, Shuchang
2,022
null
null
null
null
Real-Time Intermediate Flow Estimation for Video Frame Interpolation
Real-Time Intermediate Flow Estimation for Video Frame ...
https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136740608.pdf
Video Frame Interpolation (VFI) aims to synthesize intermediate frames between two consecutive video frames. VFI supports various applications like slow-motion.
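Flow-based interpolation methods such as DAIN, RIFE, and M2M warp the input frames with estimated flows before fusing them. The helper below is a generic backward-warping sketch built on `torch.nn.functional.grid_sample`, not code from any cited paper; the flow convention (pixel offsets pointing from the target frame to the source frame) is an assumption stated in the docstring.

```python
import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    """Warp an image with a backward optical flow field (illustrative helper).

    img:  (B, C, H, W) source frame
    flow: (B, 2, H, W) flow in pixels; flow[:, 0] is the x (width) offset and
          flow[:, 1] the y (height) offset, pointing from target to source.
    """
    b, _, h, w = img.shape
    # Base sampling grid in pixel coordinates
    ys, xs = torch.meshgrid(
        torch.arange(h, device=img.device, dtype=img.dtype),
        torch.arange(w, device=img.device, dtype=img.dtype),
        indexing="ij",
    )
    x_new = xs.unsqueeze(0) + flow[:, 0]
    y_new = ys.unsqueeze(0) + flow[:, 1]
    # Normalize sampling locations to [-1, 1] as required by grid_sample
    grid = torch.stack(
        (2.0 * x_new / (w - 1) - 1.0, 2.0 * y_new / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(img, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)
```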
PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization
2505.22616v1
m2m
\cite{m2m}
Many-to-many Splatting for Efficient Video Frame Interpolation
http://arxiv.org/abs/2204.03513v1
Motion-based video frame interpolation commonly relies on optical flow to warp pixels from the inputs to the desired interpolation instant. Yet due to the inherent challenges of motion estimation (e.g. occlusions and discontinuities), most state-of-the-art interpolation approaches require subsequent refinement of the warped result to generate satisfying outputs, which drastically decreases the efficiency for multi-frame interpolation. In this work, we propose a fully differentiable Many-to-Many (M2M) splatting framework to interpolate frames efficiently. Specifically, given a frame pair, we estimate multiple bidirectional flows to directly forward warp the pixels to the desired time step, and then fuse any overlapping pixels. In doing so, each source pixel renders multiple target pixels and each target pixel can be synthesized from a larger area of visual context. This establishes a many-to-many splatting scheme with robustness to artifacts like holes. Moreover, for each input frame pair, M2M only performs motion estimation once and has a minuscule computational overhead when interpolating an arbitrary number of in-between frames, hence achieving fast multi-frame interpolation. We conducted extensive experiments to analyze M2M, and found that it significantly improves efficiency while maintaining high effectiveness.
true
true
Hu, Ping and Niklaus, Simon and Sclaroff, Stan and Saenko, Kate
2,022
null
null
null
null
Many-to-many Splatting for Efficient Video Frame Interpolation
Many-to-many Splatting for Efficient Video Frame Interpolation
https://ieeexplore.ieee.org/iel7/9878378/9878366/09878793.pdf
In this work, we propose a fully differentiable Many-to-Many (M2M) splatting framework to interpolate frames efficiently. Specifically, given a frame pair, we
PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization
2505.22616v1
EMA
\cite{EMA}
Extracting Motion and Appearance via Inter-Frame Attention for Efficient Video Frame Interpolation
http://arxiv.org/abs/2303.00440v2
Effectively extracting inter-frame motion and appearance information is important for video frame interpolation (VFI). Previous works either extract both types of information in a mixed way or elaborate separate modules for each type of information, which lead to representation ambiguity and low efficiency. In this paper, we propose a novel module to explicitly extract motion and appearance information via a unifying operation. Specifically, we rethink the information process in inter-frame attention and reuse its attention map for both appearance feature enhancement and motion information extraction. Furthermore, for efficient VFI, our proposed module could be seamlessly integrated into a hybrid CNN and Transformer architecture. This hybrid pipeline can alleviate the computational complexity of inter-frame attention as well as preserve detailed low-level structure information. Experimental results demonstrate that, for both fixed- and arbitrary-timestep interpolation, our method achieves state-of-the-art performance on various datasets. Meanwhile, our approach enjoys a lighter computation overhead over models with close performance. The source code and models are available at https://github.com/MCG-NJU/EMA-VFI.
true
true
Zhang, Guozhen and Zhu, Yuhan and Wang, Haonan and Chen, Youxin and Wu, Gangshan and Wang, Limin
2,023
null
null
null
null
Extracting Motion and Appearance via Inter-Frame Attention for Efficient Video Frame Interpolation
Extracting Motion and Appearance via Inter-Frame Attention ...
https://openaccess.thecvf.com/content/CVPR2023/papers/Zhang_Extracting_Motion_and_Appearance_via_Inter-Frame_Attention_for_Efficient_Video_CVPR_2023_paper.pdf
by G Zhang · 2023 · Cited by 157 — We propose to utilize inter-frame attention to extract both motion and appearance information simultaneously for video frame interpolation. • A hybrid CNN
PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization
2505.22616v1
unisim
\cite{unisim}
UniSim: A Neural Closed-Loop Sensor Simulator
http://arxiv.org/abs/2308.01898v1
Rigorously testing autonomy systems is essential for making safe self-driving vehicles (SDV) a reality. It requires one to generate safety critical scenarios beyond what can be collected safely in the world, as many scenarios happen rarely on public roads. To accurately evaluate performance, we need to test the SDV on these scenarios in closed-loop, where the SDV and other actors interact with each other at each timestep. Previously recorded driving logs provide a rich resource to build these new scenarios from, but for closed loop evaluation, we need to modify the sensor data based on the new scene configuration and the SDV's decisions, as actors might be added or removed and the trajectories of existing actors and the SDV will differ from the original log. In this paper, we present UniSim, a neural sensor simulator that takes a single recorded log captured by a sensor-equipped vehicle and converts it into a realistic closed-loop multi-sensor simulation. UniSim builds neural feature grids to reconstruct both the static background and dynamic actors in the scene, and composites them together to simulate LiDAR and camera data at new viewpoints, with actors added or removed and at new placements. To better handle extrapolated views, we incorporate learnable priors for dynamic objects, and leverage a convolutional network to complete unseen regions. Our experiments show UniSim can simulate realistic sensor data with small domain gap on downstream tasks. With UniSim, we demonstrate closed-loop evaluation of an autonomy system on safety-critical scenarios as if it were in the real world.
true
true
Yang, Ze and Chen, Yun and Wang, Jingkang and Manivasagam, Sivabalan and Ma, Wei-Chiu and Yang, Anqi Joyce and Urtasun, Raquel
2,023
null
null
null
null
UniSim: A Neural Closed-Loop Sensor Simulator
[2308.01898] UniSim: A Neural Closed-Loop Sensor Simulator - arXiv
https://arxiv.org/abs/2308.01898
A neural sensor simulator that takes a single recorded log captured by a sensor-equipped vehicle and converts it into a realistic closed-loop multi-sensor
PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization
2505.22616v1
neurad
\cite{neurad}
NeuRAD: Neural Rendering for Autonomous Driving
http://arxiv.org/abs/2311.15260v3
Neural radiance fields (NeRFs) have gained popularity in the autonomous driving (AD) community. Recent methods show NeRFs' potential for closed-loop simulation, enabling testing of AD systems, and as an advanced training data augmentation technique. However, existing methods often require long training times, dense semantic supervision, or lack generalizability. This, in turn, hinders the application of NeRFs for AD at scale. In this paper, we propose NeuRAD, a robust novel view synthesis method tailored to dynamic AD data. Our method features simple network design, extensive sensor modeling for both camera and lidar -- including rolling shutter, beam divergence and ray dropping -- and is applicable to multiple datasets out of the box. We verify its performance on five popular AD datasets, achieving state-of-the-art performance across the board. To encourage further development, we will openly release the NeuRAD source code. See https://github.com/georghess/NeuRAD .
true
true
Tonderski, Adam and Lindstr{\"o}m, Carl and Hess, Georg and Ljungbergh, William and Svensson, Lennart and Petersson, Christoffer
2,024
null
null
null
null
NeuRAD: Neural Rendering for Autonomous Driving
NeuRAD: Neural Rendering for Autonomous Driving
http://arxiv.org/pdf/2311.15260v3
Neural radiance fields (NeRFs) have gained popularity in the autonomous driving (AD) community. Recent methods show NeRFs' potential for closed-loop simulation, enabling testing of AD systems, and as an advanced training data augmentation technique. However, existing methods often require long training times, dense semantic supervision, or lack generalizability. This, in turn, hinders the application of NeRFs for AD at scale. In this paper, we propose NeuRAD, a robust novel view synthesis method tailored to dynamic AD data. Our method features simple network design, extensive sensor modeling for both camera and lidar -- including rolling shutter, beam divergence and ray dropping -- and is applicable to multiple datasets out of the box. We verify its performance on five popular AD datasets, achieving state-of-the-art performance across the board. To encourage further development, we will openly release the NeuRAD source code. See https://github.com/georghess/NeuRAD .
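The neural rendering systems cited here (UniSim, NeuRAD, Lightning NeRF) all composite per-sample densities and colors along camera rays with the standard NeRF volume-rendering quadrature. The NumPy sketch below restates that quadrature for a single ray; it is a textbook formula, not rendering code from any of the cited systems.

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """NeRF-style volume rendering quadrature along one ray (sketch).

    sigmas: (S,) densities at S samples, colors: (S, 3) per-sample RGB,
    deltas: (S,) distances between consecutive samples. Returns composited RGB.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas))[:-1])   # transmittance T_i
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)
```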
PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization
2505.22616v1
cao2024lightning
\cite{cao2024lightning}
Lightning NeRF: Efficient Hybrid Scene Representation for Autonomous Driving
http://arxiv.org/abs/2403.05907v1
Recent studies have highlighted the promising application of NeRF in autonomous driving contexts. However, the complexity of outdoor environments, combined with the restricted viewpoints in driving scenarios, complicates the task of precisely reconstructing scene geometry. Such challenges often lead to diminished quality in reconstructions and extended durations for both training and rendering. To tackle these challenges, we present Lightning NeRF. It uses an efficient hybrid scene representation that effectively utilizes the geometry prior from LiDAR in autonomous driving scenarios. Lightning NeRF significantly improves the novel view synthesis performance of NeRF and reduces computational overheads. Through evaluations on real-world datasets, such as KITTI-360, Argoverse2, and our private dataset, we demonstrate that our approach not only exceeds the current state-of-the-art in novel view synthesis quality but also achieves a five-fold increase in training speed and a ten-fold improvement in rendering speed. Codes are available at https://github.com/VISION-SJTU/Lightning-NeRF .
true
true
Cao, Junyi and Li, Zhichao and Wang, Naiyan and Ma, Chao
2,024
null
null
null
arXiv preprint arXiv:2403.05907
Lightning NeRF: Efficient Hybrid Scene Representation for Autonomous Driving
Efficient Hybrid Scene Representation for Autonomous Driving - arXiv
https://arxiv.org/abs/2403.05907
We present Lightning NeRF. It uses an efficient hybrid scene representation that effectively utilizes the geometry prior from LiDAR in autonomous driving
PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization
2505.22616v1
jiang2023alignerf
\cite{jiang2023alignerf}
AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training
http://arxiv.org/abs/2211.09682v1
Neural Radiance Fields (NeRFs) are a powerful representation for modeling a 3D scene as a continuous function. Though NeRF is able to render complex 3D scenes with view-dependent effects, few efforts have been devoted to exploring its limits in a high-resolution setting. Specifically, existing NeRF-based methods face several limitations when reconstructing high-resolution real scenes, including a very large number of parameters, misaligned input data, and overly smooth details. In this work, we conduct the first pilot study on training NeRF with high-resolution data and propose the corresponding solutions: 1) marrying the multilayer perceptron (MLP) with convolutional layers which can encode more neighborhood information while reducing the total number of parameters; 2) a novel training strategy to address misalignment caused by moving objects or small camera calibration errors; and 3) a high-frequency aware loss. Our approach is nearly free without introducing obvious training/testing costs, while experiments on different datasets demonstrate that it can recover more high-frequency details compared with the current state-of-the-art NeRF models. Project page: \url{https://yifanjiang.net/alignerf.}
true
true
Jiang, Yifan and Hedman, Peter and Mildenhall, Ben and Xu, Dejia and Barron, Jonathan T and Wang, Zhangyang and Xue, Tianfan
2,023
null
null
null
null
AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training
[PDF] High-Fidelity Neural Radiance Fields via Alignment-Aware Training
https://openaccess.thecvf.com/content/CVPR2023/papers/Jiang_AligNeRF_High-Fidelity_Neural_Radiance_Fields_via_Alignment-Aware_Training_CVPR_2023_paper.pdf
AligNeRF uses staged training: starting with an initial "normal" pre-training stage, followed by an alignment-aware fine-tuning stage. We choose mip-NeRF 360
PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization
2505.22616v1
wynn2023diffusionerf
\cite{wynn2023diffusionerf}
DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models
http://arxiv.org/abs/2302.12231v3
Under good conditions, Neural Radiance Fields (NeRFs) have shown impressive results on novel view synthesis tasks. NeRFs learn a scene's color and density fields by minimizing the photometric discrepancy between training views and differentiable renderings of the scene. Once trained from a sufficient set of views, NeRFs can generate novel views from arbitrary camera positions. However, the scene geometry and color fields are severely under-constrained, which can lead to artifacts, especially when trained with few input views. To alleviate this problem we learn a prior over scene geometry and color, using a denoising diffusion model (DDM). Our DDM is trained on RGBD patches of the synthetic Hypersim dataset and can be used to predict the gradient of the logarithm of a joint probability distribution of color and depth patches. We show that, these gradients of logarithms of RGBD patch priors serve to regularize geometry and color of a scene. During NeRF training, random RGBD patches are rendered and the estimated gradient of the log-likelihood is backpropagated to the color and density fields. Evaluations on LLFF, the most relevant dataset, show that our learned prior achieves improved quality in the reconstructed geometry and improved generalization to novel views. Evaluations on DTU show improved reconstruction quality among NeRF methods.
true
true
Wynn, Jamie and Turmukhambetov, Daniyar
2,023
null
null
null
null
DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models
Regularizing Neural Radiance Fields with Denoising Diffusion Models
https://arxiv.org/abs/2302.12231
NeRFs learn a scene's color and density fields by minimizing the photometric discrepancy between training views and differentiable renderings of the scene.
PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization
2505.22616v1
3dgsEh
\cite{3dgsEh}
3DGS-Enhancer: Enhancing Unbounded 3D Gaussian Splatting with View-consistent 2D Diffusion Priors
null
null
true
false
Liu, Xi and Zhou, Chaoyi and Huang, Siyu
2,024
null
null
null
arXiv preprint arXiv:2410.16266
3DGS-Enhancer: Enhancing Unbounded 3D Gaussian Splatting with View-consistent 2D Diffusion Priors
Enhancing Unbounded 3D Gaussian Splatting with View- ...
https://arxiv.org/abs/2410.16266
arXiv:2410.16266 (cs): 3DGS-Enhancer: Enhancing Unbounded 3D Gaussian Splatting with View-consistent 2D Diffusion Priors, by Xi Liu and 2 other authors.
PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization
2505.22616v1
yu2024viewcrafter
\cite{yu2024viewcrafter}
ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis
http://arxiv.org/abs/2409.02048v1
Despite recent advancements in neural 3D reconstruction, the dependence on dense multi-view captures restricts their broader applicability. In this work, we propose \textbf{ViewCrafter}, a novel method for synthesizing high-fidelity novel views of generic scenes from single or sparse images with the prior of video diffusion model. Our method takes advantage of the powerful generation capabilities of video diffusion model and the coarse 3D clues offered by point-based representation to generate high-quality video frames with precise camera pose control. To further enlarge the generation range of novel views, we tailored an iterative view synthesis strategy together with a camera trajectory planning algorithm to progressively extend the 3D clues and the areas covered by the novel views. With ViewCrafter, we can facilitate various applications, such as immersive experiences with real-time rendering by efficiently optimizing a 3D-GS representation using the reconstructed 3D points and the generated novel views, and scene-level text-to-3D generation for more imaginative content creation. Extensive experiments on diverse datasets demonstrate the strong generalization capability and superior performance of our method in synthesizing high-fidelity and consistent novel views.
true
true
Yu, Wangbo and Xing, Jinbo and Yuan, Li and Hu, Wenbo and Li, Xiaoyu and Huang, Zhipeng and Gao, Xiangjun and Wong, Tien-Tsin and Shan, Ying and Tian, Yonghong
2,024
null
null
null
arXiv preprint arXiv:2409.02048
ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis
Taming Video Diffusion Models for High-fidelity Novel View ...
https://github.com/Drexubery/ViewCrafter
ViewCrafter can generate high-fidelity novel views from a single or sparse reference image, while also supporting highly precise pose control.
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
long2015fully
\cite{long2015fully}
Fully Convolutional Networks for Semantic Segmentation
http://arxiv.org/abs/1411.4038v2
Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build "fully convolutional" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.
true
true
Long, Jonathan and Shelhamer, Evan and Darrell, Trevor
2,015
null
null
null
null
Fully Convolutional Networks for Semantic Segmentation
Fully Convolutional Networks for Semantic Segmentation
http://arxiv.org/pdf/1411.4038v2
Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build "fully convolutional" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
chen2017deeplab
\cite{chen2017deeplab}
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
http://arxiv.org/abs/1606.00915v2
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
true
true
Chen, Liang-Chieh and Papandreou, George and Kokkinos, Iasonas and Murphy, Kevin and Yuille, Alan L
2,017
null
null
null
IEEE transactions on pattern analysis and machine intelligence
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
[PDF] DeepLab: Semantic Image Segmentation with Deep Convolutional ...
http://arxiv.org/pdf/1606.00915
A deep convolutional neural network (VGG-16 [4] or ResNet-101 [11] in this work) trained in the task of image classification is re-purposed to the task of semantic segmentation by (1) transforming all the fully connected layers to convolutional layers (i.e., fully convolutional network [14]) and (2) increasing feature resolution through atrous convolutional layers, allowing us to compute feature responses every 8 pixels instead of every 32 pixels in the original network.
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
liu2015parsenet
\cite{liu2015parsenet}
ParseNet: Looking Wider to See Better
http://arxiv.org/abs/1506.04579v2
We present a technique for adding global context to deep convolutional networks for semantic segmentation. The approach is simple, using the average feature for a layer to augment the features at each location. In addition, we study several idiosyncrasies of training, significantly increasing the performance of baseline networks (e.g. from FCN). When we add our proposed global feature, and a technique for learning normalization parameters, accuracy increases consistently even over our improved versions of the baselines. Our proposed approach, ParseNet, achieves state-of-the-art performance on SiftFlow and PASCAL-Context with small additional computational cost over baselines, and near current state-of-the-art performance on PASCAL VOC 2012 semantic segmentation with a simple approach. Code is available at https://github.com/weiliu89/caffe/tree/fcn .
true
true
Liu, Wei and Rabinovich, Andrew and Berg, Alexander C
2,015
null
null
null
arXiv preprint arXiv:1506.04579
ParseNet: Looking Wider to See Better
ParseNet: Looking Wider to See Better
http://arxiv.org/pdf/1506.04579v2
We present a technique for adding global context to deep convolutional networks for semantic segmentation. The approach is simple, using the average feature for a layer to augment the features at each location. In addition, we study several idiosyncrasies of training, significantly increasing the performance of baseline networks (e.g. from FCN). When we add our proposed global feature, and a technique for learning normalization parameters, accuracy increases consistently even over our improved versions of the baselines. Our proposed approach, ParseNet, achieves state-of-the-art performance on SiftFlow and PASCAL-Context with small additional computational cost over baselines, and near current state-of-the-art performance on PASCAL VOC 2012 semantic segmentation with a simple approach. Code is available at https://github.com/weiliu89/caffe/tree/fcn .
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
zhao2017pyramid
\cite{zhao2017pyramid}
Pyramid Scene Parsing Network
http://arxiv.org/abs/1612.01105v2
Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction tasks. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.
true
true
Zhao, Hengshuang and Shi, Jianping and Qi, Xiaojuan and Wang, Xiaogang and Jia, Jiaya
2,017
null
null
null
null
Pyramid Scene Parsing Network
Pyramid Scene Parsing Network
http://arxiv.org/pdf/1612.01105v2
Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction tasks. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
zhao2018psanet
\cite{zhao2018psanet}
PSANet: Point-wise spatial attention network for scene parsing
null
null
true
false
Zhao, Hengshuang and Zhang, Yi and Liu, Shu and Shi, Jianping and Loy, Chen Change and Lin, Dahua and Jia, Jiaya
2,018
null
null
null
null
PSANet: Point-wise spatial attention network for scene parsing
[PDF] PSANet: Point-wise Spatial Attention Network for Scene Parsing
https://hszhao.github.io/paper/eccv18_psanet.pdf
In this paper, we propose the point-wise spatial attention network (PSANet) to aggregate long-range contextual information in a flexible and adaptive manner.
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
zhu2019asymmetric
\cite{zhu2019asymmetric}
Asymmetric Non-local Neural Networks for Semantic Segmentation
http://arxiv.org/abs/1908.07678v5
The non-local module works as a particularly useful technique for semantic segmentation while criticized for its prohibitive computation and GPU memory occupation. In this paper, we present Asymmetric Non-local Neural Network to semantic segmentation, which has two prominent components: Asymmetric Pyramid Non-local Block (APNB) and Asymmetric Fusion Non-local Block (AFNB). APNB leverages a pyramid sampling module into the non-local block to largely reduce the computation and memory consumption without sacrificing the performance. AFNB is adapted from APNB to fuse the features of different levels under a sufficient consideration of long range dependencies and thus considerably improves the performance. Extensive experiments on semantic segmentation benchmarks demonstrate the effectiveness and efficiency of our work. In particular, we report the state-of-the-art performance of 81.3 mIoU on the Cityscapes test set. For a 256x128 input, APNB is around 6 times faster than a non-local block on GPU while 28 times smaller in GPU running memory occupation. Code is available at: https://github.com/MendelXu/ANN.git.
true
true
Zhu, Zhen and Xu, Mengde and Bai, Song and Huang, Tengteng and Bai, Xiang
2,019
null
null
null
null
Asymmetric Non-local Neural Networks for Semantic Segmentation
Asymmetric Non-Local Neural Networks for Semantic ...
https://openaccess.thecvf.com/content_ICCV_2019/papers/Zhu_Asymmetric_Non-Local_Neural_Networks_for_Semantic_Segmentation_ICCV_2019_paper.pdf
In this paper, we present Asymmetric Non-local Neural Network to semantic segmentation, which has two prominent components: Asymmetric Pyramid Non-local Block (APNB) and Asymmetric Fusion Non-local Block (AFNB). Motivated by the spatial pyramid pooling [12, 16, 46] strategy, we propose to embed a pyramid sampling module into non-local blocks, which could largely reduce the computation overhead of matrix multiplications yet provide substantial semantic feature statistics. Different from these works, our network uniquely incorporates pyramid sampling strategies with non-local blocks to capture the semantic statistics of different scales with only a minor budget of computation, while maintaining the excellent performance as the original non-local modules.
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
xie2021segformer
\cite{xie2021segformer}
SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers
http://arxiv.org/abs/2105.15203v3
We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perception (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, and thus combining both local attention and global attention to render powerful representations. We show that this simple and lightweight design is the key to efficient segmentation on Transformers. We scale our approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, reaching significantly better performance and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters, being 5x smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C. Code will be released at: github.com/NVlabs/SegFormer.
true
true
Xie, Enze and Wang, Wenhai and Yu, Zhiding and Anandkumar, Anima and Alvarez, Jose M and Luo, Ping
2,021
null
null
null
Advances in Neural Information Processing Systems
SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers
[PDF] SegFormer: Simple and Efficient Design for Semantic Segmentation ...
https://proceedings.neurips.cc/paper/2021/file/64f1f27bf1b4ec22924fd0acb550c235-Paper.pdf
We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perceptron.
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
zheng2021rethinking
\cite{zheng2021rethinking}
Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers
http://arxiv.org/abs/2012.15840v3
Most recent semantic segmentation methods adopt a fully-convolutional network (FCN) with an encoder-decoder architecture. The encoder progressively reduces the spatial resolution and learns more abstract/semantic visual concepts with larger receptive fields. Since context modeling is critical for segmentation, the latest efforts have been focused on increasing the receptive field, through either dilated/atrous convolutions or inserting attention modules. However, the encoder-decoder based FCN architecture remains unchanged. In this paper, we aim to provide an alternative perspective by treating semantic segmentation as a sequence-to-sequence prediction task. Specifically, we deploy a pure transformer (ie, without convolution and resolution reduction) to encode an image as a sequence of patches. With the global context modeled in every layer of the transformer, this encoder can be combined with a simple decoder to provide a powerful segmentation model, termed SEgmentation TRansformer (SETR). Extensive experiments show that SETR achieves new state of the art on ADE20K (50.28% mIoU), Pascal Context (55.83% mIoU) and competitive results on Cityscapes. Particularly, we achieve the first position in the highly competitive ADE20K test server leaderboard on the day of submission.
true
true
Zheng, Sixiao and Lu, Jiachen and Zhao, Hengshuang and Zhu, Xiatian and Luo, Zekun and Wang, Yabiao and Fu, Yanwei and Feng, Jianfeng and Xiang, Tao and Torr, Philip HS and others
2,021
null
null
null
null
Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers
[PDF] Rethinking Semantic Segmentation From a Sequence-to-Sequence ...
https://openaccess.thecvf.com/content/CVPR2021/papers/Zheng_Rethinking_Semantic_Segmentation_From_a_Sequence-to-Sequence_Perspective_With_Transformers_CVPR_2021_paper.pdf
In this paper, we aim to provide an alternative perspective by treating semantic segmentation as a sequence-to-sequence prediction task. Specifically, we
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
tsai2018learning
\cite{tsai2018learning}
Learning to Adapt Structured Output Space for Semantic Segmentation
http://arxiv.org/abs/1802.10349v3
Convolutional neural network-based approaches for semantic segmentation rely on supervision with pixel-level ground truth, but may not generalize well to unseen image domains. As the labeling process is tedious and labor intensive, developing algorithms that can adapt source ground truth labels to the target domain is of great interest. In this paper, we propose an adversarial learning method for domain adaptation in the context of semantic segmentation. Considering semantic segmentations as structured outputs that contain spatial similarities between the source and target domains, we adopt adversarial learning in the output space. To further enhance the adapted model, we construct a multi-level adversarial network to effectively perform output space domain adaptation at different feature levels. Extensive experiments and ablation study are conducted under various domain adaptation settings, including synthetic-to-real and cross-city scenarios. We show that the proposed method performs favorably against the state-of-the-art methods in terms of accuracy and visual quality.
true
true
Tsai, Yi-Hsuan and Hung, Wei-Chih and Schulter, Samuel and Sohn, Kihyuk and Yang, Ming-Hsuan and Chandraker, Manmohan
2,018
null
null
null
null
Learning to Adapt Structured Output Space for Semantic Segmentation
Learning to Adapt Structured Output Space for Semantic Segmentation
http://arxiv.org/pdf/1802.10349v3
Convolutional neural network-based approaches for semantic segmentation rely on supervision with pixel-level ground truth, but may not generalize well to unseen image domains. As the labeling process is tedious and labor intensive, developing algorithms that can adapt source ground truth labels to the target domain is of great interest. In this paper, we propose an adversarial learning method for domain adaptation in the context of semantic segmentation. Considering semantic segmentations as structured outputs that contain spatial similarities between the source and target domains, we adopt adversarial learning in the output space. To further enhance the adapted model, we construct a multi-level adversarial network to effectively perform output space domain adaptation at different feature levels. Extensive experiments and ablation study are conducted under various domain adaptation settings, including synthetic-to-real and cross-city scenarios. We show that the proposed method performs favorably against the state-of-the-art methods in terms of accuracy and visual quality.
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
hong2018conditional
\cite{hong2018conditional}
Conditional generative adversarial network for structured domain adaptation
null
null
true
false
Hong, Weixiang and Wang, Zhenzhen and Yang, Ming and Yuan, Junsong
2,018
null
null
null
null
Conditional generative adversarial network for structured domain adaptation
Conditional Generative Adversarial Network for Structured Domain ...
https://weixianghong.github.io/publications/2018-10-04-CVPR/
Conditional Generative Adversarial Network for Structured Domain Adaptation. Published in IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
kim2020learning
\cite{kim2020learning}
Learning Texture Invariant Representation for Domain Adaptation of Semantic Segmentation
http://arxiv.org/abs/2003.00867v2
Since annotating pixel-level labels for semantic segmentation is laborious, leveraging synthetic data is an attractive solution. However, due to the domain gap between synthetic domain and real domain, it is challenging for a model trained with synthetic data to generalize to real data. In this paper, considering the fundamental difference between the two domains as the texture, we propose a method to adapt to the texture of the target domain. First, we diversity the texture of synthetic images using a style transfer algorithm. The various textures of generated images prevent a segmentation model from overfitting to one specific (synthetic) texture. Then, we fine-tune the model with self-training to get direct supervision of the target texture. Our results achieve state-of-the-art performance and we analyze the properties of the model trained on the stylized dataset with extensive experiments.
true
true
Kim, Myeongjin and Byun, Hyeran
2,020
null
null
null
null
Learning Texture Invariant Representation for Domain Adaptation of Semantic Segmentation
Learning Texture Invariant Representation for Domain ...
https://openaccess.thecvf.com/content_CVPR_2020/papers/Kim_Learning_Texture_Invariant_Representation_for_Domain_Adaptation_of_Semantic_Segmentation_CVPR_2020_paper.pdf
by M Kim · 2020 · Cited by 351 — We design a method to adapt to the target domain's texture for domain adaptation of semantic segmentation, combining pixel-level method and self-training.
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
pan2020unsupervised
\cite{pan2020unsupervised}
Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision
http://arxiv.org/abs/2004.07703v4
Convolutional neural network-based approaches have achieved remarkable progress in semantic segmentation. However, these approaches heavily rely on annotated data which are labor intensive. To cope with this limitation, automatically annotated data generated from graphic engines are used to train segmentation models. However, the models trained from synthetic data are difficult to transfer to real images. To tackle this issue, previous works have considered directly adapting models from the source data to the unlabeled target data (to reduce the inter-domain gap). Nonetheless, these techniques do not consider the large distribution gap among the target data itself (intra-domain gap). In this work, we propose a two-step self-supervised domain adaptation approach to minimize the inter-domain and intra-domain gap together. First, we conduct the inter-domain adaptation of the model; from this adaptation, we separate the target domain into an easy and hard split using an entropy-based ranking function. Finally, to decrease the intra-domain gap, we propose to employ a self-supervised adaptation technique from the easy to the hard split. Experimental results on numerous benchmark datasets highlight the effectiveness of our method against existing state-of-the-art approaches. The source code is available at https://github.com/feipan664/IntraDA.git.
true
true
Pan, Fei and Shin, Inkyu and Rameau, Francois and Lee, Seokju and Kweon, In So
2,020
null
null
null
null
Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision
[PDF] Unsupervised Intra-Domain Adaptation for Semantic Segmentation ...
https://openaccess.thecvf.com/content_CVPR_2020/papers/Pan_Unsupervised_Intra-Domain_Adaptation_for_Semantic_Segmentation_Through_Self-Supervision_CVPR_2020_paper.pdf
In this work, we propose a two-step self-supervised domain adaptation approach to minimize the inter-domain and intra-domain gap together. First, we conduct
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
tsai2019domain
\cite{tsai2019domain}
Domain Adaptation for Structured Output via Discriminative Patch Representations
http://arxiv.org/abs/1901.05427v4
Predicting structured outputs such as semantic segmentation relies on expensive per-pixel annotations to learn supervised models like convolutional neural networks. However, models trained on one data domain may not generalize well to other domains without annotations for model finetuning. To avoid the labor-intensive process of annotation, we develop a domain adaptation method to adapt the source data to the unlabeled target domain. We propose to learn discriminative feature representations of patches in the source domain by discovering multiple modes of patch-wise output distribution through the construction of a clustered space. With such representations as guidance, we use an adversarial learning scheme to push the feature representations of target patches in the clustered space closer to the distributions of source patches. In addition, we show that our framework is complementary to existing domain adaptation techniques and achieves consistent improvements on semantic segmentation. Extensive ablations and results are demonstrated on numerous benchmark datasets with various settings, such as synthetic-to-real and cross-city scenarios.
true
true
Tsai, Yi-Hsuan and Sohn, Kihyuk and Schulter, Samuel and Chandraker, Manmohan
2,019
null
null
null
null
Domain Adaptation for Structured Output via Discriminative Patch Representations
Domain Adaptation for Structured Output via Discriminative ...
https://www.computer.org/csdl/proceedings-article/iccv/2019/480300b456/1hVlpOKL1FC
by YH Tsai · 2019 · Cited by 417 — We propose to learn discriminative feature representations of patches in the source domain by discovering multiple modes of patch-wise output distribution ...
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
chen2019synergistic
\cite{chen2019synergistic}
Synergistic Image and Feature Adaptation: Towards Cross-Modality Domain Adaptation for Medical Image Segmentation
http://arxiv.org/abs/1901.08211v4
This paper presents a novel unsupervised domain adaptation framework, called Synergistic Image and Feature Adaptation (SIFA), to effectively tackle the problem of domain shift. Domain adaptation has become an important and hot topic in recent studies on deep learning, aiming to recover performance degradation when applying the neural networks to new testing domains. Our proposed SIFA is an elegant learning diagram which presents synergistic fusion of adaptations from both image and feature perspectives. In particular, we simultaneously transform the appearance of images across domains and enhance domain-invariance of the extracted features towards the segmentation task. The feature encoder layers are shared by both perspectives to grasp their mutual benefits during the end-to-end learning procedure. Without using any annotation from the target domain, the learning of our unified model is guided by adversarial losses, with multiple discriminators employed from various aspects. We have extensively validated our method with a challenging application of cross-modality medical image segmentation of cardiac structures. Experimental results demonstrate that our SIFA model recovers the degraded performance from 17.2% to 73.0%, and outperforms the state-of-the-art methods by a significant margin.
true
true
Chen, Cheng and Dou, Qi and Chen, Hao and Qin, Jing and Heng, Pheng-Ann
2,019
null
null
null
null
Synergistic Image and Feature Adaptation: Towards Cross-Modality Domain Adaptation for Medical Image Segmentation
Synergistic Image and Feature Adaptation: Towards Cross-Modality ...
https://aaai.org/papers/00865-synergistic-image-and-feature-adaptation-towards-cross-modality-domain-adaptation-for-medical-image-segmentation/
This paper presents a novel unsupervised domain adaptation framework, called Synergistic Image and Feature Adaptation (SIFA), to effectively
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
du2019ssf
\cite{du2019ssf}
SSF-DAN: Separated semantic feature based domain adaptation network for semantic segmentation
null
null
true
false
Du, Liang and Tan, Jingang and Yang, Hongye and Feng, Jianfeng and Xue, Xiangyang and Zheng, Qibao and Ye, Xiaoqing and Zhang, Xiaolin
2,019
null
null
null
null
SSF-DAN: Separated semantic feature based domain adaptation network for semantic segmentation
ICCV 2019 Open Access Repository
https://openaccess.thecvf.com/content_ICCV_2019/html/Du_SSF-DAN_Separated_Semantic_Feature_Based_Domain_Adaptation_Network_for_Semantic_ICCV_2019_paper.html
by L Du · 2019 · Cited by 213 — In this work, we propose a Separated Semantic Feature based domain adaptation network, named SSF-DAN, for semantic segmentation. First, a Semantic-wise
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
melas2021pixmatch
\cite{melas2021pixmatch}
PixMatch: Unsupervised Domain Adaptation via Pixelwise Consistency Training
http://arxiv.org/abs/2105.08128v1
Unsupervised domain adaptation is a promising technique for semantic segmentation and other computer vision tasks for which large-scale data annotation is costly and time-consuming. In semantic segmentation, it is attractive to train models on annotated images from a simulated (source) domain and deploy them on real (target) domains. In this work, we present a novel framework for unsupervised domain adaptation based on the notion of target-domain consistency training. Intuitively, our work is based on the idea that in order to perform well on the target domain, a model's output should be consistent with respect to small perturbations of inputs in the target domain. Specifically, we introduce a new loss term to enforce pixelwise consistency between the model's predictions on a target image and a perturbed version of the same image. In comparison to popular adversarial adaptation methods, our approach is simpler, easier to implement, and more memory-efficient during training. Experiments and extensive ablation studies demonstrate that our simple approach achieves remarkably strong results on two challenging synthetic-to-real benchmarks, GTA5-to-Cityscapes and SYNTHIA-to-Cityscapes. Code is available at: https://github.com/lukemelas/pixmatch
true
true
Melas-Kyriazi, Luke and Manrai, Arjun K
2,021
null
null
null
null
PixMatch: Unsupervised Domain Adaptation via Pixelwise Consistency Training
Unsupervised Domain Adaptation via Pixelwise Consistency Training
https://arxiv.org/abs/2105.08128
PixMatch is an unsupervised domain adaptation method using target-domain consistency training, enforcing pixelwise consistency between predictions and
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
hoyer2022daformer
\cite{hoyer2022daformer}
DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation
http://arxiv.org/abs/2111.14887v2
As acquiring pixel-wise annotations of real-world images for semantic segmentation is a costly process, a model can instead be trained with more accessible synthetic data and adapted to real images without requiring their annotations. This process is studied in unsupervised domain adaptation (UDA). Even though a large number of methods propose new adaptation strategies, they are mostly based on outdated network architectures. As the influence of recent network architectures has not been systematically studied, we first benchmark different network architectures for UDA and newly reveal the potential of Transformers for UDA semantic segmentation. Based on the findings, we propose a novel UDA method, DAFormer. The network architecture of DAFormer consists of a Transformer encoder and a multi-level context-aware feature fusion decoder. It is enabled by three simple but crucial training strategies to stabilize the training and to avoid overfitting to the source domain: While (1) Rare Class Sampling on the source domain improves the quality of the pseudo-labels by mitigating the confirmation bias of self-training toward common classes, (2) a Thing-Class ImageNet Feature Distance and (3) a learning rate warmup promote feature transfer from ImageNet pretraining. DAFormer represents a major advance in UDA. It improves the state of the art by 10.8 mIoU for GTA-to-Cityscapes and 5.4 mIoU for Synthia-to-Cityscapes and enables learning even difficult classes such as train, bus, and truck well. The implementation is available at https://github.com/lhoyer/DAFormer.
true
true
Hoyer, Lukas and Dai, Dengxin and Van Gool, Luc
2,022
null
null
null
null
DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation
lhoyer/DAFormer: [CVPR22] Official Implementation of ...
https://github.com/lhoyer/DAFormer
DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation. by Lukas Hoyer, Dengxin Dai, and Luc Van Gool.
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
hoyer2022hrda
\cite{hoyer2022hrda}
HRDA: Context-Aware High-Resolution Domain-Adaptive Semantic Segmentation
http://arxiv.org/abs/2204.13132v2
Unsupervised domain adaptation (UDA) aims to adapt a model trained on the source domain (e.g. synthetic data) to the target domain (e.g. real-world data) without requiring further annotations on the target domain. This work focuses on UDA for semantic segmentation as real-world pixel-wise annotations are particularly expensive to acquire. As UDA methods for semantic segmentation are usually GPU memory intensive, most previous methods operate only on downscaled images. We question this design as low-resolution predictions often fail to preserve fine details. The alternative of training with random crops of high-resolution images alleviates this problem but falls short in capturing long-range, domain-robust context information. Therefore, we propose HRDA, a multi-resolution training approach for UDA, that combines the strengths of small high-resolution crops to preserve fine segmentation details and large low-resolution crops to capture long-range context dependencies with a learned scale attention, while maintaining a manageable GPU memory footprint. HRDA enables adapting small objects and preserving fine segmentation details. It significantly improves the state-of-the-art performance by 5.5 mIoU for GTA-to-Cityscapes and 4.9 mIoU for Synthia-to-Cityscapes, resulting in unprecedented 73.8 and 65.8 mIoU, respectively. The implementation is available at https://github.com/lhoyer/HRDA.
true
true
Hoyer, Lukas and Dai, Dengxin and Van Gool, Luc
2,022
null
null
null
null
HRDA: Context-Aware High-Resolution Domain-Adaptive Semantic Segmentation
[PDF] HRDA: Context-Aware High-Resolution Domain-Adaptive Semantic ...
https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136900370.pdf
HRDA is a multi-resolution training approach for UDA, using high-resolution crops for details and low-resolution for context, with a learned scale attention.
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
zou2018unsupervised
\cite{zou2018unsupervised}
Unsupervised domain adaptation for semantic segmentation via class-balanced self-training
null
null
true
false
Zou, Yang and Yu, Zhiding and Kumar, BVK and Wang, Jinsong
2,018
null
null
null
null
Unsupervised domain adaptation for semantic segmentation via class-balanced self-training
Unsupervised Domain Adaptation for Semantic ...
https://openaccess.thecvf.com/content_ECCV_2018/papers/Yang_Zou_Unsupervised_Domain_Adaptation_ECCV_2018_paper.pdf
by Y Zou · 2018 · Cited by 1832 — A class-balanced self-training (CBST) is introduced to overcome the imbalance issue of transferring difficulty among classes via generating pseudo-labels with
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
chen2019domain
\cite{chen2019domain}
Domain adaptation for semantic segmentation with maximum squares loss
null
null
true
false
Chen, Minghao and Xue, Hongyang and Cai, Deng
2,019
null
null
null
null
Domain adaptation for semantic segmentation with maximum squares loss
Domain Adaptation for Semantic Segmentation with Maximum Squares Loss
http://arxiv.org/pdf/1909.13589v1
Deep neural networks for semantic segmentation always require a large number of samples with pixel-level labels, which becomes the major difficulty in their real-world applications. To reduce the labeling cost, unsupervised domain adaptation (UDA) approaches are proposed to transfer knowledge from labeled synthesized datasets to unlabeled real-world datasets. Recently, some semi-supervised learning methods have been applied to UDA and achieved state-of-the-art performance. One of the most popular approaches in semi-supervised learning is the entropy minimization method. However, when applying the entropy minimization to UDA for semantic segmentation, the gradient of the entropy is biased towards samples that are easy to transfer. To balance the gradient of well-classified target samples, we propose the maximum squares loss. Our maximum squares loss prevents the training process being dominated by easy-to-transfer samples in the target domain. Besides, we introduce the image-wise weighting ratio to alleviate the class imbalance in the unlabeled target domain. Both synthetic-to-real and cross-city adaptation experiments demonstrate the effectiveness of our proposed approach. The code is released at https://github.com/ZJULearning/MaxSquareLoss.
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
zou2019confidence
\cite{zou2019confidence}
Confidence Regularized Self-Training
http://arxiv.org/abs/1908.09822v3
Recent advances in domain adaptation show that deep self-training presents a powerful means for unsupervised domain adaptation. These methods often involve an iterative process of predicting on target domain and then taking the confident predictions as pseudo-labels for retraining. However, since pseudo-labels can be noisy, self-training can put overconfident label belief on wrong classes, leading to deviated solutions with propagated errors. To address the problem, we propose a confidence regularized self-training (CRST) framework, formulated as regularized self-training. Our method treats pseudo-labels as continuous latent variables jointly optimized via alternating optimization. We propose two types of confidence regularization: label regularization (LR) and model regularization (MR). CRST-LR generates soft pseudo-labels while CRST-MR encourages the smoothness on network output. Extensive experiments on image classification and semantic segmentation show that CRSTs outperform their non-regularized counterpart with state-of-the-art performance. The code and models of this work are available at https://github.com/yzou2/CRST.
true
true
Zou, Yang and Yu, Zhiding and Liu, Xiaofeng and Kumar, BVK and Wang, Jinsong
2,019
null
null
null
null
Confidence Regularized Self-Training
[1908.09822] Confidence Regularized Self-Training - arXiv
https://arxiv.org/abs/1908.09822
We propose a confidence regularized self-training (CRST) framework, formulated as regularized self-training. Our method treats pseudo-labels as continuous
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
wang2021domain
\cite{wang2021domain}
Domain adaptive semantic segmentation with self-supervised depth estimation
null
null
true
false
Wang, Qin and Dai, Dengxin and Hoyer, Lukas and Van Gool, Luc and Fink, Olga
2,021
null
null
null
null
Domain adaptive semantic segmentation with self-supervised depth estimation
[PDF] Domain Adaptive Semantic Segmentation With Self-Supervised ...
https://openaccess.thecvf.com/content/ICCV2021/papers/Wang_Domain_Adaptive_Semantic_Segmentation_With_Self-Supervised_Depth_Estimation_ICCV_2021_paper.pdf
Domain adaptation for semantic segmentation aims to improve the model performance in the presence of a distribution shift between source and target domain. We propose to use self-supervised depth estimation to improve semantic segmentation performance under the unsupervised domain adaptation setup. The additional self-supervised depth estimation can facilitate us to explicitly learn the correlation between tasks to improve the final semantic segmentation performance. By exploiting the supervision from self-supervised depth estimation and learning the correlation between semantics and depth, the proposed method achieves 55.0% mIoU (stereo depth) on this task.
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
lian2019constructing
\cite{lian2019constructing}
Constructing Self-motivated Pyramid Curriculums for Cross-Domain Semantic Segmentation: A Non-Adversarial Approach
http://arxiv.org/abs/1908.09547v1
We propose a new approach, called self-motivated pyramid curriculum domain adaptation (PyCDA), to facilitate the adaptation of semantic segmentation neural networks from synthetic source domains to real target domains. Our approach draws on an insight connecting two existing works: curriculum domain adaptation and self-training. Inspired by the former, PyCDA constructs a pyramid curriculum which contains various properties about the target domain. Those properties are mainly about the desired label distributions over the target domain images, image regions, and pixels. By enforcing the segmentation neural network to observe those properties, we can improve the network's generalization capability to the target domain. Motivated by the self-training, we infer this pyramid of properties by resorting to the semantic segmentation network itself. Unlike prior work, we do not need to maintain any additional models (e.g., logistic regression or discriminator networks) or to solve minmax problems which are often difficult to optimize. We report state-of-the-art results for the adaptation from both GTAV and SYNTHIA to Cityscapes, two popular settings in unsupervised domain adaptation for semantic segmentation.
true
true
Lian, Qing and Lv, Fengmao and Duan, Lixin and Gong, Boqing
2,019
null
null
null
null
Constructing Self-motivated Pyramid Curriculums for Cross-Domain Semantic Segmentation: A Non-Adversarial Approach
lianqing11/PyCDA - A Non-Adversarial Approach
https://github.com/lianqing11/PyCDA
PyCDA. Code for Constructing Self-motivated Pyramid Curriculums for Cross-Domain Semantic Segmentation: A Non-Adversarial Approach.
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
li2019bidirectional
\cite{li2019bidirectional}
Bidirectional Learning for Domain Adaptation of Semantic Segmentation
http://arxiv.org/abs/1904.10620v1
Domain adaptation for semantic image segmentation is very necessary since manually labeling large datasets with pixel-level labels is expensive and time consuming. Existing domain adaptation techniques either work on limited datasets, or yield not so good performance compared with supervised learning. In this paper, we propose a novel bidirectional learning framework for domain adaptation of segmentation. Using the bidirectional learning, the image translation model and the segmentation adaptation model can be learned alternatively and promote to each other. Furthermore, we propose a self-supervised learning algorithm to learn a better segmentation adaptation model and in return improve the image translation model. Experiments show that our method is superior to the state-of-the-art methods in domain adaptation of segmentation with a big margin. The source code is available at https://github.com/liyunsheng13/BDL.
true
true
Li, Yunsheng and Yuan, Lu and Vasconcelos, Nuno
2,019
null
null
null
null
Bidirectional Learning for Domain Adaptation of Semantic Segmentation
Bidirectional Learning for Domain Adaptation of Semantic Segmentation
http://arxiv.org/pdf/1904.10620v1
Domain adaptation for semantic image segmentation is very necessary since manually labeling large datasets with pixel-level labels is expensive and time consuming. Existing domain adaptation techniques either work on limited datasets, or yield not so good performance compared with supervised learning. In this paper, we propose a novel bidirectional learning framework for domain adaptation of segmentation. Using the bidirectional learning, the image translation model and the segmentation adaptation model can be learned alternatively and promote to each other. Furthermore, we propose a self-supervised learning algorithm to learn a better segmentation adaptation model and in return improve the image translation model. Experiments show that our method is superior to the state-of-the-art methods in domain adaptation of segmentation with a big margin. The source code is available at https://github.com/liyunsheng13/BDL.
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
wang2021uncertainty
\cite{wang2021uncertainty}
Uncertainty-aware pseudo label refinery for domain adaptive semantic segmentation
null
null
true
false
Wang, Yuxi and Peng, Junran and Zhang, ZhaoXiang
2,021
null
null
null
null
Uncertainty-aware pseudo label refinery for domain adaptive semantic segmentation
[PDF] Uncertainty-Aware Pseudo Label Refinery for Domain Adaptive ...
https://openaccess.thecvf.com/content/ICCV2021/papers/Wang_Uncertainty-Aware_Pseudo_Label_Refinery_for_Domain_Adaptive_Semantic_Segmentation_ICCV_2021_paper.pdf
Domain Adaptation for Semantic Segmentation (DASS) aims to train a network that can assign pixel-level labels to unlabeled target data by learning from labeled
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
zhang2021prototypical
\cite{zhang2021prototypical}
Prototypical Pseudo Label Denoising and Target Structure Learning for Domain Adaptive Semantic Segmentation
http://arxiv.org/abs/2101.10979v2
Self-training is a competitive approach in domain adaptive segmentation, which trains the network with the pseudo labels on the target domain. However inevitably, the pseudo labels are noisy and the target features are dispersed due to the discrepancy between source and target domains. In this paper, we rely on representative prototypes, the feature centroids of classes, to address the two issues for unsupervised domain adaptation. In particular, we take one step further and exploit the feature distances from prototypes that provide richer information than mere prototypes. Specifically, we use it to estimate the likelihood of pseudo labels to facilitate online correction in the course of training. Meanwhile, we align the prototypical assignments based on relative feature distances for two different views of the same target, producing a more compact target feature space. Moreover, we find that distilling the already learned knowledge to a self-supervised pretrained model further boosts the performance. Our method shows tremendous performance advantage over state-of-the-art methods. We will make the code publicly available.
true
true
Zhang, Pan and Zhang, Bo and Zhang, Ting and Chen, Dong and Wang, Yong and Wen, Fang
2,021
null
null
null
null
Prototypical Pseudo Label Denoising and Target Structure Learning for Domain Adaptive Semantic Segmentation
Prototypical Pseudo Label Denoising and Target Structure ...
https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Prototypical_Pseudo_Label_Denoising_and_Target_Structure_Learning_for_Domain_CVPR_2021_paper.pdf
by P Zhang · 2021 · Cited by 674 — This paper uses prototypes to address noisy pseudo labels in unsupervised domain adaptation, online correcting them and aligning soft assignments for a compact
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
tranheden2021dacs
\cite{tranheden2021dacs}
DACS: Domain Adaptation via Cross-domain Mixed Sampling
http://arxiv.org/abs/2007.08702v2
Semantic segmentation models based on convolutional neural networks have recently displayed remarkable performance for a multitude of applications. However, these models typically do not generalize well when applied on new domains, especially when going from synthetic to real data. In this paper we address the problem of unsupervised domain adaptation (UDA), which attempts to train on labelled data from one domain (source domain), and simultaneously learn from unlabelled data in the domain of interest (target domain). Existing methods have seen success by training on pseudo-labels for these unlabelled images. Multiple techniques have been proposed to mitigate low-quality pseudo-labels arising from the domain shift, with varying degrees of success. We propose DACS: Domain Adaptation via Cross-domain mixed Sampling, which mixes images from the two domains along with the corresponding labels and pseudo-labels. These mixed samples are then trained on, in addition to the labelled data itself. We demonstrate the effectiveness of our solution by achieving state-of-the-art results for GTA5 to Cityscapes, a common synthetic-to-real semantic segmentation benchmark for UDA.
true
true
Tranheden, Wilhelm and Olsson, Viktor and Pinto, Juliano and Svensson, Lennart
2,021
null
null
null
null
DACS: Domain Adaptation via Cross-domain Mixed Sampling
DACS: Domain Adaptation via Cross-domain Mixed Sampling - arXiv
https://arxiv.org/abs/2007.08702
We propose DACS: Domain Adaptation via Cross-domain mixed Sampling, which mixes images from the two domains along with the corresponding labels and pseudo-
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
you2019universal
\cite{you2019universal}
Universal Multi-Source Domain Adaptation
http://arxiv.org/abs/2011.02594v1
Unsupervised domain adaptation enables intelligent models to transfer knowledge from a labeled source domain to a similar but unlabeled target domain. Recent study reveals that knowledge can be transferred from one source domain to another unknown target domain, called Universal Domain Adaptation (UDA). However, in the real-world application, there are often more than one source domain to be exploited for domain adaptation. In this paper, we formally propose a more general domain adaptation setting, universal multi-source domain adaptation (UMDA), where the label sets of multiple source domains can be different and the label set of target domain is completely unknown. The main challenges in UMDA are to identify the common label set between each source domain and target domain, and to keep the model scalable as the number of source domains increases. To address these challenges, we propose a universal multi-source adaptation network (UMAN) to solve the domain adaptation problem without increasing the complexity of the model in various UMDA settings. In UMAN, we estimate the reliability of each known class in the common label set via the prediction margin, which helps adversarial training to better align the distributions of multiple source domains and target domain in the common label set. Moreover, the theoretical guarantee for UMAN is also provided. Massive experimental results show that existing UDA and multi-source DA (MDA) methods cannot be directly applied to UMDA and the proposed UMAN achieves the state-of-the-art performance in various UMDA settings.
true
true
You, Kaichao and Long, Mingsheng and Cao, Zhangjie and Wang, Jianmin and Jordan, Michael I
2,019
null
null
null
null
Universal Multi-Source Domain Adaptation
[2011.02594] Universal Multi-Source Domain Adaptation - arXiv
https://arxiv.org/abs/2011.02594
In this paper, we formally propose a more general domain adaptation setting, universal multi-source domain adaptation (UMDA), where the label sets of multiple
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
fu2020learning
\cite{fu2020learning}
Learning to detect open classes for universal domain adaptation
null
null
true
false
Fu, Bo and Cao, Zhangjie and Long, Mingsheng and Wang, Jianmin
2,020
null
null
null
null
Learning to detect open classes for universal domain adaptation
Learning to Detect Open Classes for Universal Domain ...
https://paperswithcode.com/paper/learning-to-detect-open-classes-for-universal
Universal domain adaptation (UDA) transfers knowledge between domains without any constraint on the label sets, extending the applicability of domain
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
bucci2020effectiveness
\cite{bucci2020effectiveness}
On the Effectiveness of Image Rotation for Open Set Domain Adaptation
http://arxiv.org/abs/2007.12360v1
Open Set Domain Adaptation (OSDA) bridges the domain gap between a labeled source domain and an unlabeled target domain, while also rejecting target classes that are not present in the source. To avoid negative transfer, OSDA can be tackled by first separating the known/unknown target samples and then aligning known target samples with the source data. We propose a novel method to addresses both these problems using the self-supervised task of rotation recognition. Moreover, we assess the performance with a new open set metric that properly balances the contribution of recognizing the known classes and rejecting the unknown samples. Comparative experiments with existing OSDA methods on the standard Office-31 and Office-Home benchmarks show that: (i) our method outperforms its competitors, (ii) reproducibility for this field is a crucial issue to tackle, (iii) our metric provides a reliable tool to allow fair open set evaluation.
true
true
Bucci, Silvia and Loghmani, Mohammad Reza and Tommasi, Tatiana
2,020
null
null
null
null
On the Effectiveness of Image Rotation for Open Set Domain Adaptation
On the Effectiveness of Image Rotation for Open Set Domain Adaptation
http://arxiv.org/pdf/2007.12360v1
Open Set Domain Adaptation (OSDA) bridges the domain gap between a labeled source domain and an unlabeled target domain, while also rejecting target classes that are not present in the source. To avoid negative transfer, OSDA can be tackled by first separating the known/unknown target samples and then aligning known target samples with the source data. We propose a novel method to addresses both these problems using the self-supervised task of rotation recognition. Moreover, we assess the performance with a new open set metric that properly balances the contribution of recognizing the known classes and rejecting the unknown samples. Comparative experiments with existing OSDA methods on the standard Office-31 and Office-Home benchmarks show that: (i) our method outperforms its competitors, (ii) reproducibility for this field is a crucial issue to tackle, (iii) our metric provides a reliable tool to allow fair open set evaluation.
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
saito2020universal
\cite{saito2020universal}
Universal Domain Adaptation through Self Supervision
http://arxiv.org/abs/2002.07953v3
Unsupervised domain adaptation methods traditionally assume that all source categories are present in the target domain. In practice, little may be known about the category overlap between the two domains. While some methods address target settings with either partial or open-set categories, they assume that the particular setting is known a priori. We propose a more universally applicable domain adaptation framework that can handle arbitrary category shift, called Domain Adaptative Neighborhood Clustering via Entropy optimization (DANCE). DANCE combines two novel ideas: First, as we cannot fully rely on source categories to learn features discriminative for the target, we propose a novel neighborhood clustering technique to learn the structure of the target domain in a self-supervised way. Second, we use entropy-based feature alignment and rejection to align target features with the source, or reject them as unknown categories based on their entropy. We show through extensive experiments that DANCE outperforms baselines across open-set, open-partial and partial domain adaptation settings. Implementation is available at https://github.com/VisionLearningGroup/DANCE.
true
true
Saito, Kuniaki and Kim, Donghyun and Sclaroff, Stan and Saenko, Kate
2,020
null
null
null
Advances in neural information processing systems
Universal Domain Adaptation through Self Supervision
Universal Domain Adaptation through Self Supervision
http://arxiv.org/pdf/2002.07953v3
Unsupervised domain adaptation methods traditionally assume that all source categories are present in the target domain. In practice, little may be known about the category overlap between the two domains. While some methods address target settings with either partial or open-set categories, they assume that the particular setting is known a priori. We propose a more universally applicable domain adaptation framework that can handle arbitrary category shift, called Domain Adaptative Neighborhood Clustering via Entropy optimization (DANCE). DANCE combines two novel ideas: First, as we cannot fully rely on source categories to learn features discriminative for the target, we propose a novel neighborhood clustering technique to learn the structure of the target domain in a self-supervised way. Second, we use entropy-based feature alignment and rejection to align target features with the source, or reject them as unknown categories based on their entropy. We show through extensive experiments that DANCE outperforms baselines across open-set, open-partial and partial domain adaptation settings. Implementation is available at https://github.com/VisionLearningGroup/DANCE.
Universal Domain Adaptation for Semantic Segmentation
2505.22458v1
saito2021ovanet
\cite{saito2021ovanet}
OVANet: One-vs-All Network for Universal Domain Adaptation
http://arxiv.org/abs/2104.03344v4
Universal Domain Adaptation (UNDA) aims to handle both domain-shift and category-shift between two datasets, where the main challenge is to transfer knowledge while rejecting unknown classes which are absent in the labeled source data but present in the unlabeled target data. Existing methods manually set a threshold to reject unknown samples based on validation or a pre-defined ratio of unknown samples, but this strategy is not practical. In this paper, we propose a method to learn the threshold using source samples and to adapt it to the target domain. Our idea is that a minimum inter-class distance in the source domain should be a good threshold to decide between known or unknown in the target. To learn the inter-and intra-class distance, we propose to train a one-vs-all classifier for each class using labeled source data. Then, we adapt the open-set classifier to the target domain by minimizing class entropy. The resulting framework is the simplest of all baselines of UNDA and is insensitive to the value of a hyper-parameter yet outperforms baselines with a large margin.
true
true
Saito, Kuniaki and Saenko, Kate
2,021
null
null
null
null
OVANet: One-vs-All Network for Universal Domain Adaptation
One-vs-All Network for Universal Domain Adaptation
https://arxiv.org/abs/2104.03344
by K Saito · 2021 · Cited by 203 — We propose to train a one-vs-all classifier for each class using labeled source data. Then, we adapt the open-set classifier to the target domain by minimizing
RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network
2505.22427v1
sugimoto2004obstacle
\cite{sugimoto2004obstacle}
Obstacle detection using millimeter-wave radar and its visualization on image sequence
null
null
true
false
Sugimoto, Shigeki and Tateda, Hayato and Takahashi, Hidekazu and Okutomi, Masatoshi
2,004
null
null
null
null
Obstacle detection using millimeter-wave radar and its visualization on image sequence
Obstacle detection using millimeter-wave radar and its visualization ...
https://ieeexplore.ieee.org/iel5/9258/29387/01334537.pdf
This section presents a calibration result between the sensors along with segmentation and visualization results using real radar/image frame sequences.
RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network
2505.22427v1
wang2011integrating
\cite{wang2011integrating}
Integrating millimeter wave radar with a monocular vision sensor for on-road obstacle detection applications
null
null
true
false
Wang, Tao and Zheng, Nanning and Xin, Jingmin and Ma, Zheng
2,011
null
null
null
Sensors
Integrating millimeter wave radar with a monocular vision sensor for on-road obstacle detection applications
Integrating millimeter wave radar with a monocular vision sensor for ...
https://pubmed.ncbi.nlm.nih.gov/22164117/
This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection.
RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network
2505.22427v1
kim2014data
\cite{kim2014data}
Data fusion of radar and image measurements for multi-object tracking via Kalman filtering
null
null
true
false
Kim, Du Yong and Jeon, Moongu
2,014
null
null
null
Information Sciences
Data fusion of radar and image measurements for multi-object tracking via Kalman filtering
(PDF) Data fusion of radar and image measurements for multi-object ...
https://www.researchgate.net/publication/278072957_Data_fusion_of_radar_and_image_measurements_for_multi-object_tracking_via_Kalman_filtering
Data fusion of radar and image measurements for multi-object tracking via Kalman filtering. September 2014; Information Sciences 278:641-652.
RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network
2505.22427v1
kim2018radar
\cite{kim2018radar}
Radar and vision sensor fusion for object detection in autonomous vehicle surroundings
null
null
true
false
Kim, Jihun and Han, Dong Seog and Senouci, Benaoumeur
2,018
null
null
null
null
Radar and vision sensor fusion for object detection in autonomous vehicle surroundings
Radar and Vision Sensor Fusion for Object Detection ... - IEEE Xplore
https://ieeexplore.ieee.org/document/8436959
Multi-sensor data fusion for advanced driver assistance systems (ADAS) in the automotive industry has received much attention recently due to the emergence of self-driving vehicles and road traffic safety applications.
RC-AutoCalib: An End-to-End Radar-Camera Automatic Calibration Network
2505.22427v1
kim2017comparative
\cite{kim2017comparative}
Comparative analysis of RADAR-IR sensor fusion methods for object detection
null
null
true
false
Kim, Taehwan and Kim, Sungho and Lee, Eunryung and Park, Miryong
2,017
null
null
null
null
Comparative analysis of RADAR-IR sensor fusion methods for object detection
Comparative analysis of RADAR-IR sensor fusion methods for ...
https://ieeexplore.ieee.org/document/8204237/
This paper presents the Radar and IR sensor fusion method for objection detection. The infrared camera parameter calibration with Levenberg-Marquardt (LM)