column                   dtype           values / lengths
parent_paper_title       stringclasses   63 values
parent_paper_arxiv_id    stringclasses   63 values
citation_shorthand       stringlengths   2–56
raw_citation_text        stringlengths   9–63
cited_paper_title        stringlengths   5–161
cited_paper_arxiv_link   stringlengths   32–37
cited_paper_abstract     stringlengths   406–1.92k
has_metadata             bool            1 class
is_arxiv_paper           bool            2 classes
bib_paper_authors        stringlengths   2–2.44k
bib_paper_year           float64         1.97k–2.03k
bib_paper_month          stringclasses   16 values
bib_paper_url            stringlengths   20–116
bib_paper_doi            stringclasses   269 values
bib_paper_journal        stringlengths   3–148
original_title           stringlengths   5–161
search_res_title         stringlengths   4–122
search_res_url           stringlengths   22–267
search_res_content       stringlengths   19–1.92k
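Each record below lists its values in the column order given above. As a minimal usage sketch, the snippet that follows shows how records with this schema could be loaded and queried; the JSON Lines export and the file name citations.jsonl are illustrative assumptions, not part of the dataset, and pandas is just one convenient way to consume these columns.

```python
# Minimal sketch, assuming the records are exported as JSON Lines
# ("citations.jsonl" is a hypothetical file name) with one object per
# record and the column names listed in the schema above.
import pandas as pd

df = pd.read_json("citations.jsonl", lines=True)

# Inspect the inferred column types; they should roughly match the schema.
print(df.dtypes)

# Keep citations whose cited paper resolved to an arXiv entry with metadata.
arxiv_rows = df[df["is_arxiv_paper"] & df["has_metadata"]]

# Count distinct cited arXiv papers per citing (parent) paper.
counts = (
    arxiv_rows.groupby("parent_paper_title")["citation_shorthand"]
    .nunique()
    .sort_values(ascending=False)
)
print(counts.head())
```

Any other tabular loader (for example the Hugging Face datasets library, if the table is published there) would work the same way against these column names.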
Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding
2506.00434v1
ronneberger_unet_miccai_2015
\cite{ronneberger_unet_miccai_2015}
U-net: Convolutional networks for biomedical image segmentation
null
null
true
false
Ronneberger, Olaf and Fischer, Philipp and Brox, Thomas
2015
null
null
null
null
U-net: Convolutional networks for biomedical image segmentation
U-Net: Convolutional Networks for Biomedical Image Segmentation
http://arxiv.org/pdf/1505.04597v1
There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .
Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding
2506.00434v1
menze_tmi_2015
\cite{menze_tmi_2015}
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)
null
null
true
false
Menze, Bjoern H and Jakab, Andras and Bauer, Stefan and Kalpathy-Cramer, Jayashree and Farahani, Keyvan and Kirby, Justin and Burren, Yuliya and Porz, Nicole and Slotboom, Johannes and Wiest, Roland and others
2015
null
null
null
IEEE TMI
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)
The Multimodal Brain Tumor Image Segmentation Benchmark ...
https://pmc.ncbi.nlm.nih.gov/articles/PMC4833122/
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) - PMC
Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding
2506.00434v1
bakas_arxiv_2019
\cite{bakas_arxiv_2019}
Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge
http://arxiv.org/abs/1811.02629v3
Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.
true
true
Bakas, Spyridon and Reyes, Mauricio and Jakab, Andras and Bauer, Stefan and Rempfler, Markus and Crimi, Alessandro and Shinohara, Russell Takeshi and Berger, Christoph and Ha, Sung Min and Rozycki, Martin and others
2018
null
null
null
arXiv preprint arXiv:1811.02629
Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge
Identifying the Best Machine Learning Algorithms for Brain Tumor ...
https://arxiv.org/abs/1811.02629
View a PDF of the paper titled Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge, by Spyridon Bakas and 426 other authors
Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding
2506.00434v1
baid_arxiv_2021
\cite{baid_arxiv_2021}
The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification
http://arxiv.org/abs/2107.02314v2
The BraTS 2021 challenge celebrates its 10th anniversary and is jointly organized by the Radiological Society of North America (RSNA), the American Society of Neuroradiology (ASNR), and the Medical Image Computing and Computer Assisted Interventions (MICCAI) society. Since its inception, BraTS has been focusing on being a common benchmarking venue for brain glioma segmentation algorithms, with well-curated multi-institutional multi-parametric magnetic resonance imaging (mpMRI) data. Gliomas are the most common primary malignancies of the central nervous system, with varying degrees of aggressiveness and prognosis. The RSNA-ASNR-MICCAI BraTS 2021 challenge targets the evaluation of computational algorithms assessing the same tumor compartmentalization, as well as the underlying tumor's molecular characterization, in pre-operative baseline mpMRI data from 2,040 patients. Specifically, the two tasks that BraTS 2021 focuses on are: a) the segmentation of the histologically distinct brain tumor sub-regions, and b) the classification of the tumor's O[6]-methylguanine-DNA methyltransferase (MGMT) promoter methylation status. The performance evaluation of all participating algorithms in BraTS 2021 will be conducted through the Sage Bionetworks Synapse platform (Task 1) and Kaggle (Task 2), concluding in distributing to the top ranked participants monetary awards of $60,000 collectively.
true
true
Baid, Ujjwal and Ghodasara, Satyam and Mohan, Suyash and Bilello, Michel and Calabrese, Evan and Colak, Errol and Farahani, Keyvan and Kalpathy-Cramer, Jayashree and Kitamura, Felipe C and Pati, Sarthak and others
2021
null
null
null
arXiv preprint arXiv:2107.02314
The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification
BraTS-Lighthouse 2025 Challenge - syn64153130 - Wiki
https://www.synapse.org/Synapse:syn64153130/wiki/631064
[1] U.Baid, et al., The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification, arXiv:2107.02314, 2021.
Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding
2506.00434v1
myronenko_miccai_2019
\cite{myronenko_miccai_2019}
3D MRI brain tumor segmentation using autoencoder regularization
http://arxiv.org/abs/1810.11654v3
Automated segmentation of brain tumors from 3D magnetic resonance images (MRIs) is necessary for the diagnosis, monitoring, and treatment planning of the disease. Manual delineation practices require anatomical knowledge, are expensive, time consuming and can be inaccurate due to human error. Here, we describe a semantic segmentation network for tumor subregion segmentation from 3D MRIs based on encoder-decoder architecture. Due to a limited training dataset size, a variational auto-encoder branch is added to reconstruct the input image itself in order to regularize the shared decoder and impose additional constraints on its layers. The current approach won 1st place in the BraTS 2018 challenge.
true
true
Myronenko, Andriy
2019
null
null
null
null
3D MRI brain tumor segmentation using autoencoder regularization
3D MRI brain tumor segmentation using autoencoder regularization
http://arxiv.org/pdf/1810.11654v3
Automated segmentation of brain tumors from 3D magnetic resonance images (MRIs) is necessary for the diagnosis, monitoring, and treatment planning of the disease. Manual delineation practices require anatomical knowledge, are expensive, time consuming and can be inaccurate due to human error. Here, we describe a semantic segmentation network for tumor subregion segmentation from 3D MRIs based on encoder-decoder architecture. Due to a limited training dataset size, a variational auto-encoder branch is added to reconstruct the input image itself in order to regularize the shared decoder and impose additional constraints on its layers. The current approach won 1st place in the BraTS 2018 challenge.
Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding
2506.00434v1
jiang_cascaded_unet_miccai_2020
\cite{jiang_cascaded_unet_miccai_2020}
Two-stage cascaded u-net: 1st place solution to brats challenge 2019 segmentation task
null
null
true
false
Jiang, Zeyu and Ding, Changxing and Liu, Minfeng and Tao, Dacheng
2020
null
null
null
null
Two-stage cascaded u-net: 1st place solution to brats challenge 2019 segmentation task
Two-Stage Cascaded U-Net: 1st Place Solution to BraTS Challenge ...
https://www.semanticscholar.org/paper/Two-Stage-Cascaded-U-Net%3A-1st-Place-Solution-to-Jiang-Ding/6eead90d63cc679263ef608121db075b78e03960
A novel two-stage cascaded U-Net to segment the substructures of brain tumors from coarse to fine is devised and won the 1st place in the BraTS 2019 challenge segmentation task.
Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding
2506.00434v1
isensee_nnunet_miccai_2021
\cite{isensee_nnunet_miccai_2021}
nnU-Net for Brain Tumor Segmentation
http://arxiv.org/abs/2011.00848v1
We apply nnU-Net to the segmentation task of the BraTS 2020 challenge. The unmodified nnU-Net baseline configuration already achieves a respectable result. By incorporating BraTS-specific modifications regarding postprocessing, region-based training, a more aggressive data augmentation as well as several minor modifications to the nnUNet pipeline we are able to improve its segmentation performance substantially. We furthermore re-implement the BraTS ranking scheme to determine which of our nnU-Net variants best fits the requirements imposed by it. Our final ensemble took the first place in the BraTS 2020 competition with Dice scores of 88.95, 85.06 and 82.03 and HD95 values of 8.498,17.337 and 17.805 for whole tumor, tumor core and enhancing tumor, respectively.
true
true
Isensee, Fabian and Jäger, Paul F and Full, Peter M and Vollmuth, Philipp and Maier-Hein, Klaus H
2021
null
null
null
null
nnU-Net for Brain Tumor Segmentation
Brain tumor segmentation with advanced nnU-Net - ScienceDirect.com
https://www.sciencedirect.com/science/article/pii/S2772528624000013
This paper introduces an extended version of the nnU-Net architecture for brain tumor segmentation, addressing both adult (Glioma) and pediatric tumors.
Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding
2506.00434v1
luu_miccai_2022
\cite{luu_miccai_2022}
Extending nn-UNet for brain tumor segmentation
http://arxiv.org/abs/2112.04653v1
Brain tumor segmentation is essential for the diagnosis and prognosis of patients with gliomas. The brain tumor segmentation challenge has continued to provide a great source of data to develop automatic algorithms to perform the task. This paper describes our contribution to the 2021 competition. We developed our methods based on nn-UNet, the winning entry of last year competition. We experimented with several modifications, including using a larger network, replacing batch normalization with group normalization, and utilizing axial attention in the decoder. Internal 5-fold cross validation as well as online evaluation from the organizers showed the effectiveness of our approach, with minor improvement in quantitative metrics when compared to the baseline. The proposed models won first place in the final ranking on unseen test data. The codes, pretrained weights, and docker image for the winning submission are publicly available at https://github.com/rixez/Brats21_KAIST_MRI_Lab
true
true
Luu, Huan Minh and Park, Sung-Hong
2021
null
null
null
null
Extending nn-UNet for brain tumor segmentation
Extending nn-UNet for Brain Tumor Segmentation
https://link.springer.com/chapter/10.1007/978-3-031-09002-8_16
by HM Luu · 2021 · Cited by 185 — We extended the nn-UNet framework by using a larger network, replacing batch normalization with group normalization, and using axial attention in the decoder.
Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding
2506.00434v1
zeineldin_miccai_2022
\cite{zeineldin_miccai_2022}
Multimodal CNN Networks for Brain Tumor Segmentation in MRI: A BraTS 2022 Challenge Solution
http://arxiv.org/abs/2212.09310v1
Automatic segmentation is essential for the brain tumor diagnosis, disease prognosis, and follow-up therapy of patients with gliomas. Still, accurate detection of gliomas and their sub-regions in multimodal MRI is very challenging due to the variety of scanners and imaging protocols. Over the last years, the BraTS Challenge has provided a large number of multi-institutional MRI scans as a benchmark for glioma segmentation algorithms. This paper describes our contribution to the BraTS 2022 Continuous Evaluation challenge. We propose a new ensemble of multiple deep learning frameworks namely, DeepSeg, nnU-Net, and DeepSCAN for automatic glioma boundaries detection in pre-operative MRI. It is worth noting that our ensemble models took first place in the final evaluation on the BraTS testing dataset with Dice scores of 0.9294, 0.8788, and 0.8803, and Hausdorf distance of 5.23, 13.54, and 12.05, for the whole tumor, tumor core, and enhancing tumor, respectively. Furthermore, the proposed ensemble method ranked first in the final ranking on another unseen test dataset, namely Sub-Saharan Africa dataset, achieving mean Dice scores of 0.9737, 0.9593, and 0.9022, and HD95 of 2.66, 1.72, 3.32 for the whole tumor, tumor core, and enhancing tumor, respectively. The docker image for the winning submission is publicly available at (https://hub.docker.com/r/razeineldin/camed22).
true
true
Zeineldin, Ramy A and Karar, Mohamed E and Burgert, Oliver and Mathis-Ullrich, Franziska
2022
null
null
null
arXiv preprint arXiv:2212.09310
Multimodal CNN Networks for Brain Tumor Segmentation in MRI: A BraTS 2022 Challenge Solution
Multimodal CNN Networks for Brain Tumor Segmentation in MRI
https://link.springer.com/chapter/10.1007/978-3-031-33842-7_11
The BraTS challenge is designed to encourage research in the field of medical image segmentation, with a focus on segmenting brain tumors in MRI
Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding
2506.00434v1
isensee_nnunet_nature_2021
\cite{isensee_nnunet_nature_2021}
nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation
null
null
true
false
Isensee, Fabian and Jaeger, Paul F and Kohl, Simon AA and Petersen, Jens and Maier-Hein, Klaus H
2021
null
null
null
Nature methods
nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation
nnU-Net: a self-configuring method for deep learning-based ... - Nature
https://www.nature.com/articles/s41592-020-01008-z
nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task.
Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding
2506.00434v1
wang_transbts_miccai_2021
\cite{wang_transbts_miccai_2021}
TransBTS: Multimodal Brain Tumor Segmentation Using Transformer
http://arxiv.org/abs/2103.04430v2
Transformer, which can benefit from global (long-range) information modeling using self-attention mechanisms, has been successful in natural language processing and 2D image classification recently. However, both local and global features are crucial for dense prediction tasks, especially for 3D medical image segmentation. In this paper, we for the first time exploit Transformer in 3D CNN for MRI Brain Tumor Segmentation and propose a novel network named TransBTS based on the encoder-decoder structure. To capture the local 3D context information, the encoder first utilizes 3D CNN to extract the volumetric spatial feature maps. Meanwhile, the feature maps are reformed elaborately for tokens that are fed into Transformer for global feature modeling. The decoder leverages the features embedded by Transformer and performs progressive upsampling to predict the detailed segmentation map. Extensive experimental results on both BraTS 2019 and 2020 datasets show that TransBTS achieves comparable or higher results than previous state-of-the-art 3D methods for brain tumor segmentation on 3D MRI scans. The source code is available at https://github.com/Wenxuan-1119/TransBTS
true
true
Wang, Wenxuan and Chen, Chen and Ding, Meng and Yu, Hong and Zha, Sen and Li, Jiangyun
2021
null
null
null
null
TransBTS: Multimodal Brain Tumor Segmentation Using Transformer
TransBTS: Multimodal Brain Tumor Segmentation Using Transformer
http://arxiv.org/pdf/2103.04430v2
Transformer, which can benefit from global (long-range) information modeling using self-attention mechanisms, has been successful in natural language processing and 2D image classification recently. However, both local and global features are crucial for dense prediction tasks, especially for 3D medical image segmentation. In this paper, we for the first time exploit Transformer in 3D CNN for MRI Brain Tumor Segmentation and propose a novel network named TransBTS based on the encoder-decoder structure. To capture the local 3D context information, the encoder first utilizes 3D CNN to extract the volumetric spatial feature maps. Meanwhile, the feature maps are reformed elaborately for tokens that are fed into Transformer for global feature modeling. The decoder leverages the features embedded by Transformer and performs progressive upsampling to predict the detailed segmentation map. Extensive experimental results on both BraTS 2019 and 2020 datasets show that TransBTS achieves comparable or higher results than previous state-of-the-art 3D methods for brain tumor segmentation on 3D MRI scans. The source code is available at https://github.com/Wenxuan-1119/TransBTS
Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding
2506.00434v1
swinunetr
\cite{swinunetr}
Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images
http://arxiv.org/abs/2201.01266v1
Semantic segmentation of brain tumors is a fundamental medical image analysis task involving multiple MRI imaging modalities that can assist clinicians in diagnosing the patient and successively studying the progression of the malignant entity. In recent years, Fully Convolutional Neural Networks (FCNNs) approaches have become the de facto standard for 3D medical image segmentation. The popular "U-shaped" network architecture has achieved state-of-the-art performance benchmarks on different 2D and 3D semantic segmentation tasks and across various imaging modalities. However, due to the limited kernel size of convolution layers in FCNNs, their performance of modeling long-range information is sub-optimal, and this can lead to deficiencies in the segmentation of tumors with variable sizes. On the other hand, transformer models have demonstrated excellent capabilities in capturing such long-range information in multiple domains, including natural language processing and computer vision. Inspired by the success of vision transformers and their variants, we propose a novel segmentation model termed Swin UNEt TRansformers (Swin UNETR). Specifically, the task of 3D brain tumor semantic segmentation is reformulated as a sequence to sequence prediction problem wherein multi-modal input data is projected into a 1D sequence of embedding and used as an input to a hierarchical Swin transformer as the encoder. The swin transformer encoder extracts features at five different resolutions by utilizing shifted windows for computing self-attention and is connected to an FCNN-based decoder at each resolution via skip connections. We have participated in BraTS 2021 segmentation challenge, and our proposed model ranks among the top-performing approaches in the validation phase. Code: https://monai.io/research/swin-unetr
true
true
Hatamizadeh, Ali and Nath, Vishwesh and Tang, Yucheng and Yang, Dong and Roth, Holger R and Xu, Daguang
2021
null
null
null
null
Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images
Swin Transformers for Semantic Segmentation of Brain Tumors in ...
https://arxiv.org/abs/2201.01266
We propose a novel segmentation model termed Swin UNEt TRansformers (Swin UNETR). Specifically, the task of 3D brain tumor semantic segmentation is reformulated as a sequence to sequence prediction problem.
Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding
2506.00434v1
chen_med3d_arxiv_2019
\cite{chen_med3d_arxiv_2019}
Med3D: Transfer Learning for 3D Medical Image Analysis
http://arxiv.org/abs/1904.00625v4
The performance on deep learning is significantly affected by volume of training data. Models pre-trained from massive dataset such as ImageNet become a powerful weapon for speeding up training convergence and improving accuracy. Similarly, models based on large dataset are important for the development of deep learning in 3D medical images. However, it is extremely challenging to build a sufficiently large dataset due to difficulty of data acquisition and annotation in 3D medical imaging. We aggregate the dataset from several medical challenges to build 3DSeg-8 dataset with diverse modalities, target organs, and pathologies. To extract general medical three-dimension (3D) features, we design a heterogeneous 3D network called Med3D to co-train multi-domain 3DSeg-8 so as to make a series of pre-trained models. We transfer Med3D pre-trained models to lung segmentation in LIDC dataset, pulmonary nodule classification in LIDC dataset and liver segmentation on LiTS challenge. Experiments show that the Med3D can accelerate the training convergence speed of target 3D medical tasks 2 times compared with model pre-trained on Kinetics dataset, and 10 times compared with training from scratch as well as improve accuracy ranging from 3% to 20%. Transferring our Med3D model on state-the-of-art DenseASPP segmentation network, in case of single model, we achieve 94.6\% Dice coefficient which approaches the result of top-ranged algorithms on the LiTS challenge.
true
true
Chen, Sihong and Ma, Kai and Zheng, Yefeng
2019
null
null
null
arXiv preprint arXiv:1904.00625
Med3D: Transfer Learning for 3D Medical Image Analysis
Med3D: Transfer Learning for 3D Medical Image Analysis
http://arxiv.org/pdf/1904.00625v4
The performance on deep learning is significantly affected by volume of training data. Models pre-trained from massive dataset such as ImageNet become a powerful weapon for speeding up training convergence and improving accuracy. Similarly, models based on large dataset are important for the development of deep learning in 3D medical images. However, it is extremely challenging to build a sufficiently large dataset due to difficulty of data acquisition and annotation in 3D medical imaging. We aggregate the dataset from several medical challenges to build 3DSeg-8 dataset with diverse modalities, target organs, and pathologies. To extract general medical three-dimension (3D) features, we design a heterogeneous 3D network called Med3D to co-train multi-domain 3DSeg-8 so as to make a series of pre-trained models. We transfer Med3D pre-trained models to lung segmentation in LIDC dataset, pulmonary nodule classification in LIDC dataset and liver segmentation on LiTS challenge. Experiments show that the Med3D can accelerate the training convergence speed of target 3D medical tasks 2 times compared with model pre-trained on Kinetics dataset, and 10 times compared with training from scratch as well as improve accuracy ranging from 3% to 20%. Transferring our Med3D model on state-the-of-art DenseASPP segmentation network, in case of single model, we achieve 94.6\% Dice coefficient which approaches the result of top-ranged algorithms on the LiTS challenge.
Efficient 3D Brain Tumor Segmentation with Axial-Coronal-Sagittal Embedding
2506.00434v1
zhu_modelgenesis_mia_2021
\cite{zhu_modelgenesis_mia_2021}
Models Genesis
http://arxiv.org/abs/2004.07882v4
Transfer learning from natural images to medical images has been established as one of the most practical paradigms in deep learning for medical image analysis. To fit this paradigm, however, 3D imaging tasks in the most prominent imaging modalities (e.g., CT and MRI) have to be reformulated and solved in 2D, losing rich 3D anatomical information, thereby inevitably compromising its performance. To overcome this limitation, we have built a set of models, called Generic Autodidactic Models, nicknamed Models Genesis, because they are created ex nihilo (with no manual labeling), self-taught (learnt by self-supervision), and generic (served as source models for generating application-specific target models). Our extensive experiments demonstrate that our Models Genesis significantly outperform learning from scratch and existing pre-trained 3D models in all five target 3D applications covering both segmentation and classification. More importantly, learning a model from scratch simply in 3D may not necessarily yield performance better than transfer learning from ImageNet in 2D, but our Models Genesis consistently top any 2D/2.5D approaches including fine-tuning the models pre-trained from ImageNet as well as fine-tuning the 2D versions of our Models Genesis, confirming the importance of 3D anatomical information and significance of Models Genesis for 3D medical imaging. This performance is attributed to our unified self-supervised learning framework, built on a simple yet powerful observation: the sophisticated and recurrent anatomy in medical images can serve as strong yet free supervision signals for deep models to learn common anatomical representation automatically via self-supervision. As open science, all codes and pre-trained Models Genesis are available at https://github.com/MrGiovanni/ModelsGenesis.
true
true
Zhou, Zongwei and Sodha, Vatsal and Pang, Jiaxuan and Gotway, Michael B and Liang, Jianming
2021
null
null
null
Medical image analysis
Models Genesis
Models Genesis
http://arxiv.org/pdf/2004.07882v4
Transfer learning from natural images to medical images has been established as one of the most practical paradigms in deep learning for medical image analysis. To fit this paradigm, however, 3D imaging tasks in the most prominent imaging modalities (e.g., CT and MRI) have to be reformulated and solved in 2D, losing rich 3D anatomical information, thereby inevitably compromising its performance. To overcome this limitation, we have built a set of models, called Generic Autodidactic Models, nicknamed Models Genesis, because they are created ex nihilo (with no manual labeling), self-taught (learnt by self-supervision), and generic (served as source models for generating application-specific target models). Our extensive experiments demonstrate that our Models Genesis significantly outperform learning from scratch and existing pre-trained 3D models in all five target 3D applications covering both segmentation and classification. More importantly, learning a model from scratch simply in 3D may not necessarily yield performance better than transfer learning from ImageNet in 2D, but our Models Genesis consistently top any 2D/2.5D approaches including fine-tuning the models pre-trained from ImageNet as well as fine-tuning the 2D versions of our Models Genesis, confirming the importance of 3D anatomical information and significance of Models Genesis for 3D medical imaging. This performance is attributed to our unified self-supervised learning framework, built on a simple yet powerful observation: the sophisticated and recurrent anatomy in medical images can serve as strong yet free supervision signals for deep models to learn common anatomical representation automatically via self-supervision. As open science, all codes and pre-trained Models Genesis are available at https://github.com/MrGiovanni/ModelsGenesis.
Test-time Vocabulary Adaptation for Language-driven Object Detection
2506.00333v1
zhu2023survey
\cite{zhu2023survey}
A Survey on Open-Vocabulary Detection and Segmentation: Past, Present, and Future
http://arxiv.org/abs/2307.09220v2
As the most fundamental scene understanding tasks, object detection and segmentation have made tremendous progress in deep learning era. Due to the expensive manual labeling cost, the annotated categories in existing datasets are often small-scale and pre-defined, i.e., state-of-the-art fully-supervised detectors and segmentors fail to generalize beyond the closed vocabulary. To resolve this limitation, in the last few years, the community has witnessed an increasing attention toward Open-Vocabulary Detection (OVD) and Segmentation (OVS). By ``open-vocabulary'', we mean that the models can classify objects beyond pre-defined categories. In this survey, we provide a comprehensive review on recent developments of OVD and OVS. A taxonomy is first developed to organize different tasks and methodologies. We find that the permission and usage of weak supervision signals can well discriminate different methodologies, including: visual-semantic space mapping, novel visual feature synthesis, region-aware training, pseudo-labeling, knowledge distillation, and transfer learning. The proposed taxonomy is universal across different tasks, covering object detection, semantic/instance/panoptic segmentation, 3D and video understanding. The main design principles, key challenges, development routes, methodology strengths, and weaknesses are thoroughly analyzed. In addition, we benchmark each task along with the vital components of each method in appendix and updated online at https://github.com/seanzhuh/awesome-open-vocabulary-detection-and-segmentation. Finally, several promising directions are provided and discussed to stimulate future research.
true
true
Zhu, Chaoyang and Chen, Long
2023
null
null
null
null
A Survey on Open-Vocabulary Detection and Segmentation: Past, Present, and Future
Awesome OVD-OVS - A Survey on Open-Vocabulary ...
https://github.com/seanzhuh/Awesome-Open-Vocabulary-Detection-and-Segmentation
Awesome OVD-OVS - A Survey on Open-Vocabulary Detection and Segmentation: Past, Present, and Future
Test-time Vocabulary Adaptation for Language-driven Object Detection
2506.00333v1
radford2021learning
\cite{radford2021learning}
Learning Transferable Visual Models From Natural Language Supervision
http://arxiv.org/abs/2103.00020v1
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
true
true
Radford, Alec and Kim, Jong Wook and Hallacy, Chris and Ramesh, Aditya and Goh, Gabriel and Agarwal, Sandhini and Sastry, Girish and Askell, Amanda and Mishkin, Pamela and Clark, Jack and Krueger, Gretchen and Sutskever, Ilya
2021
null
null
null
null
Learning Transferable Visual Models From Natural Language Supervision
Learning Transferable Visual Models From Natural Language Supervision
http://arxiv.org/pdf/2103.00020v1
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
Test-time Vocabulary Adaptation for Language-driven Object Detection
2506.00333v1
lin2014microsoft
\cite{lin2014microsoft}
Microsoft COCO: Common Objects in Context
http://arxiv.org/abs/1405.0312v3
We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
true
true
Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Dollár, Piotr and Zitnick, C. Lawrence
2014
null
null
null
null
Microsoft COCO: Common Objects in Context
Microsoft COCO: Common Objects in Context
http://arxiv.org/pdf/1405.0312v3
We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.
Test-time Vocabulary Adaptation for Language-driven Object Detection
2506.00333v1
gupta2019lvis
\cite{gupta2019lvis}
LVIS: A Dataset for Large Vocabulary Instance Segmentation
http://arxiv.org/abs/1908.03195v2
Progress on object detection is enabled by datasets that focus the research community's attention on open challenges. This process led us from simple images to complex scenes and from bounding boxes to segmentation masks. In this work, we introduce LVIS (pronounced `el-vis'): a new dataset for Large Vocabulary Instance Segmentation. We plan to collect ~2 million high-quality instance segmentation masks for over 1000 entry-level object categories in 164k images. Due to the Zipfian distribution of categories in natural images, LVIS naturally has a long tail of categories with few training samples. Given that state-of-the-art deep learning methods for object detection perform poorly in the low-sample regime, we believe that our dataset poses an important and exciting new scientific challenge. LVIS is available at http://www.lvisdataset.org.
true
true
Gupta, Agrim and Dollar, Piotr and Girshick, Ross
2019
null
null
null
null
LVIS: A Dataset for Large Vocabulary Instance Segmentation
LVIS: A Dataset for Large Vocabulary Instance Segmentation
http://arxiv.org/pdf/1908.03195v2
Progress on object detection is enabled by datasets that focus the research community's attention on open challenges. This process led us from simple images to complex scenes and from bounding boxes to segmentation masks. In this work, we introduce LVIS (pronounced `el-vis'): a new dataset for Large Vocabulary Instance Segmentation. We plan to collect ~2 million high-quality instance segmentation masks for over 1000 entry-level object categories in 164k images. Due to the Zipfian distribution of categories in natural images, LVIS naturally has a long tail of categories with few training samples. Given that state-of-the-art deep learning methods for object detection perform poorly in the low-sample regime, we believe that our dataset poses an important and exciting new scientific challenge. LVIS is available at http://www.lvisdataset.org.
Test-time Vocabulary Adaptation for Language-driven Object Detection
2506.00333v1
deng2009imagenet
\cite{deng2009imagenet}
ImageNet: a Large-Scale Hierarchical Image Database
null
null
true
false
Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li
2009
null
null
null
null
ImageNet: a Large-Scale Hierarchical Image Database
(PDF) ImageNet: a Large-Scale Hierarchical Image Database
https://www.researchgate.net/publication/221361415_ImageNet_a_Large-Scale_Hierarchical_Image_Database
This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total.
Test-time Vocabulary Adaptation for Language-driven Object Detection
2506.00333v1
zhou2022detecting
\cite{zhou2022detecting}
Detecting Twenty-thousand Classes using Image-level Supervision
http://arxiv.org/abs/2201.02605v3
Current object detectors are limited in vocabulary size due to the small scale of detection datasets. Image classifiers, on the other hand, reason about much larger vocabularies, as their datasets are larger and easier to collect. We propose Detic, which simply trains the classifiers of a detector on image classification data and thus expands the vocabulary of detectors to tens of thousands of concepts. Unlike prior work, Detic does not need complex assignment schemes to assign image labels to boxes based on model predictions, making it much easier to implement and compatible with a range of detection architectures and backbones. Our results show that Detic yields excellent detectors even for classes without box annotations. It outperforms prior work on both open-vocabulary and long-tail detection benchmarks. Detic provides a gain of 2.4 mAP for all classes and 8.3 mAP for novel classes on the open-vocabulary LVIS benchmark. On the standard LVIS benchmark, Detic obtains 41.7 mAP when evaluated on all classes, or only rare classes, hence closing the gap in performance for object categories with few samples. For the first time, we train a detector with all the twenty-one-thousand classes of the ImageNet dataset and show that it generalizes to new datasets without finetuning. Code is available at \url{https://github.com/facebookresearch/Detic}.
true
true
Zhou, Xingyi and Girdhar, Rohit and Joulin, Armand and Krähenbühl, Philipp and Misra, Ishan
2022
null
null
null
null
Detecting Twenty-thousand Classes using Image-level Supervision
[PDF] Detecting Twenty-thousand Classes using Image-level Supervision
https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136690344.pdf
We propose Detic, which simply trains the classifiers of a detector on image classification data and thus expands the vocabulary of detectors to tens of thousands of concepts.
Test-time Vocabulary Adaptation for Language-driven Object Detection
2506.00333v1
zhong2022regionclip
\cite{zhong2022regionclip}
RegionCLIP: Region-based Language-Image Pretraining
http://arxiv.org/abs/2112.09106v1
Contrastive language-image pretraining (CLIP) using image-text pairs has achieved impressive results on image classification in both zero-shot and transfer learning settings. However, we show that directly applying such models to recognize image regions for object detection leads to poor performance due to a domain shift: CLIP was trained to match an image as a whole to a text description, without capturing the fine-grained alignment between image regions and text spans. To mitigate this issue, we propose a new method called RegionCLIP that significantly extends CLIP to learn region-level visual representations, thus enabling fine-grained alignment between image regions and textual concepts. Our method leverages a CLIP model to match image regions with template captions and then pretrains our model to align these region-text pairs in the feature space. When transferring our pretrained model to the open-vocabulary object detection tasks, our method significantly outperforms the state of the art by 3.8 AP50 and 2.2 AP for novel categories on COCO and LVIS datasets, respectively. Moreoever, the learned region representations support zero-shot inference for object detection, showing promising results on both COCO and LVIS datasets. Our code is available at https://github.com/microsoft/RegionCLIP.
true
true
Zhong, Yiwu and Yang, Jianwei and Zhang, Pengchuan and Li, Chunyuan and Codella, Noel and Li, Liunian Harold and Zhou, Luowei and Dai, Xiyang and Yuan, Lu and Li, Yin and Gao, Jianfeng
2022
null
null
null
null
RegionCLIP: Region-based Language-Image Pretraining
RegionCLIP: Region-based Language-Image Pretraining - arXiv
https://arxiv.org/abs/2112.09106
We propose a new method called RegionCLIP that significantly extends CLIP to learn region-level visual representations, thus enabling fine-grained alignment.
Test-time Vocabulary Adaptation for Language-driven Object Detection
2506.00333v1
ma2024codet
\cite{ma2024codet}
CoDet: Co-Occurrence Guided Region-Word Alignment for Open-Vocabulary Object Detection
http://arxiv.org/abs/2310.16667v1
Deriving reliable region-word alignment from image-text pairs is critical to learn object-level vision-language representations for open-vocabulary object detection. Existing methods typically rely on pre-trained or self-trained vision-language models for alignment, which are prone to limitations in localization accuracy or generalization capabilities. In this paper, we propose CoDet, a novel approach that overcomes the reliance on pre-aligned vision-language space by reformulating region-word alignment as a co-occurring object discovery problem. Intuitively, by grouping images that mention a shared concept in their captions, objects corresponding to the shared concept shall exhibit high co-occurrence among the group. CoDet then leverages visual similarities to discover the co-occurring objects and align them with the shared concept. Extensive experiments demonstrate that CoDet has superior performances and compelling scalability in open-vocabulary detection, e.g., by scaling up the visual backbone, CoDet achieves 37.0 $\text{AP}^m_{novel}$ and 44.7 $\text{AP}^m_{all}$ on OV-LVIS, surpassing the previous SoTA by 4.2 $\text{AP}^m_{novel}$ and 9.8 $\text{AP}^m_{all}$. Code is available at https://github.com/CVMI-Lab/CoDet.
true
true
Ma, Chuofan and Jiang, Yi and Wen, Xin and Yuan, Zehuan and Qi, Xiaojuan
2023
null
null
null
null
CoDet: Co-Occurrence Guided Region-Word Alignment for Open-Vocabulary Object Detection
(NeurIPS2023) CoDet: Co-Occurrence Guided Region ...
https://github.com/CVMI-Lab/CoDet
Train an open-vocabulary detector with web-scale image-text pairs; Align regions and words by co-occurrence instead of region-text similarity
Test-time Vocabulary Adaptation for Language-driven Object Detection
2506.00333v1
liu2024shine
\cite{liu2024shine}
SHiNe: Semantic Hierarchy Nexus for Open-vocabulary Object Detection
http://arxiv.org/abs/2405.10053v1
Open-vocabulary object detection (OvOD) has transformed detection into a language-guided task, empowering users to freely define their class vocabularies of interest during inference. However, our initial investigation indicates that existing OvOD detectors exhibit significant variability when dealing with vocabularies across various semantic granularities, posing a concern for real-world deployment. To this end, we introduce Semantic Hierarchy Nexus (SHiNe), a novel classifier that uses semantic knowledge from class hierarchies. It runs offline in three steps: i) it retrieves relevant super-/sub-categories from a hierarchy for each target class; ii) it integrates these categories into hierarchy-aware sentences; iii) it fuses these sentence embeddings to generate the nexus classifier vector. Our evaluation on various detection benchmarks demonstrates that SHiNe enhances robustness across diverse vocabulary granularities, achieving up to +31.9% mAP50 with ground truth hierarchies, while retaining improvements using hierarchies generated by large language models. Moreover, when applied to open-vocabulary classification on ImageNet-1k, SHiNe improves the CLIP zero-shot baseline by +2.8% accuracy. SHiNe is training-free and can be seamlessly integrated with any off-the-shelf OvOD detector, without incurring additional computational overhead during inference. The code is open source.
true
true
Liu, Mingxuan and Hayes, Tyler L. and Ricci, Elisa and Csurka, Gabriela and Volpi, Riccardo
2024
null
null
null
null
SHiNe: Semantic Hierarchy Nexus for Open-vocabulary Object Detection
[PDF] Semantic Hierarchy Nexus for Open-vocabulary Object Detection
https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_SHiNe_Semantic_Hierarchy_Nexus_for_Open-vocabulary_Object_Detection_CVPR_2024_paper.pdf
SHiNe is training-free and can be seamlessly integrated with any off-the-shelf OvOD detector, without incurring additional computational overhead during inference.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ssl_2
\cite{ssl_2}
Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks
null
null
true
false
Lee, Dong-Hyun
2013
null
null
null
null
Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks
Pseudo-Label : The Simple and Efficient Semi-Supervised ...
https://www.researchgate.net/publication/280581078_Pseudo-Label_The_Simple_and_Efficient_Semi-Supervised_Learning_Method_for_Deep_Neural_Networks
We propose the simple and efficient method of semi-supervised learning for deep neural networks. Basically, the proposed network is trained in a supervised
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ssl_9
\cite{ssl_9}
Semi-supervised Learning by Entropy Minimization
null
null
true
false
Yves Grandvalet and Yoshua Bengio
2004
null
null
null
null
Semi-supervised Learning by Entropy Minimization
Semi-supervised Learning by Entropy Minimization - NIPS
https://papers.nips.cc/paper/2740-semi-supervised-learning-by-entropy-minimization
We consider the semi-supervised learning problem, where a decision rule is to be learned from labeled and unlabeled data. In this framework, we motivate minimum entropy regularization, which enables to incorporate unlabeled data in the standard supervised learning. In the terminology used here, semi-supervised learning refers to learning a decision rule on X from labeled and unlabeled data. In the probabilistic framework, semi-supervised learning can be modeled as a missing data problem, which can be addressed by generative models such as mixture models thanks to the EM algorithm and extensions thereof. Generative models apply to the joint density of patterns and class (X, Y).
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ssl_10
\cite{ssl_10}
Curriculum Labeling: Revisiting Pseudo-Labeling for Semi-Supervised Learning
http://arxiv.org/abs/2001.06001v2
In this paper we revisit the idea of pseudo-labeling in the context of semi-supervised learning where a learning algorithm has access to a small set of labeled samples and a large set of unlabeled samples. Pseudo-labeling works by applying pseudo-labels to samples in the unlabeled set by using a model trained on the combination of the labeled samples and any previously pseudo-labeled samples, and iteratively repeating this process in a self-training cycle. Current methods seem to have abandoned this approach in favor of consistency regularization methods that train models under a combination of different styles of self-supervised losses on the unlabeled samples and standard supervised losses on the labeled samples. We empirically demonstrate that pseudo-labeling can in fact be competitive with the state-of-the-art, while being more resilient to out-of-distribution samples in the unlabeled set. We identify two key factors that allow pseudo-labeling to achieve such remarkable results (1) applying curriculum learning principles and (2) avoiding concept drift by restarting model parameters before each self-training cycle. We obtain 94.91% accuracy on CIFAR-10 using only 4,000 labeled samples, and 68.87% top-1 accuracy on Imagenet-ILSVRC using only 10% of the labeled samples. The code is available at https://github.com/uvavision/Curriculum-Labeling
true
true
Paola Cascante-Bonilla and Fuwen Tan and Yanjun Qi and Vicente Ordonez
2021
null
null
null
null
Curriculum Labeling: Revisiting Pseudo-Labeling for Semi-Supervised Learning
Revisiting Pseudo-Labeling for Semi-Supervised Learning
https://arxiv.org/abs/2001.06001
by P Cascante-Bonilla · 2020 · Cited by 409 — In this paper we revisit the idea of pseudo-labeling in the context of semi-supervised learning where a learning algorithm has access to a small set of labeled samples and a large set of unlabeled samples.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ssl_11
\cite{ssl_11}
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
http://arxiv.org/abs/1703.01780v6
The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%.
true
true
Antti Tarvainen and Harri Valpola
2,017
null
null
null
null
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
[PDF] Weight-averaged consistency targets improve semi-supervised ...
https://arxiv.org/pdf/1703.01780
Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on.
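Since the Mean Teacher record above turns on one mechanical detail (averaging model weights rather than predictions), a minimal hedged sketch of that mechanism follows; the decay value, the MSE consistency target, and the `student`/`teacher` names are illustrative assumptions, with the teacher assumed to share the student's architecture and to be updated only by the EMA step, never by gradients.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    """Move each teacher parameter toward the student parameter (weight averaging)."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

def consistency_loss(student, teacher, x_unlabeled):
    """Penalize disagreement between student and (frozen) teacher predictions."""
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x_unlabeled), dim=-1)
    student_probs = F.softmax(student(x_unlabeled), dim=-1)
    return F.mse_loss(student_probs, teacher_probs)

# Typical wiring: initialize the teacher as a deep copy of the student and call
# ema_update(teacher, student) once after every optimizer step.
```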
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ssl_12
\cite{ssl_12}
Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning
http://arxiv.org/abs/1606.04586v1
Effective convolutional neural networks are trained on large sets of labeled data. However, creating large labeled datasets is a very costly and time-consuming task. Semi-supervised learning uses unlabeled data to train a model with higher accuracy when there is a limited set of labeled data available. In this paper, we consider the problem of semi-supervised learning with convolutional neural networks. Techniques such as randomized data augmentation, dropout and random max-pooling provide better generalization and stability for classifiers that are trained using gradient descent. Multiple passes of an individual sample through the network might lead to different predictions due to the non-deterministic behavior of these techniques. We propose an unsupervised loss function that takes advantage of the stochastic nature of these methods and minimizes the difference between the predictions of multiple passes of a training sample through the network. We evaluate the proposed method on several benchmark datasets.
true
true
Mehdi Sajjadi and Mehran Javanmardi and Tolga Tasdizen
2,016
null
null
null
null
Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning
Regularization With Stochastic Transformations and Perturbations ...
https://arxiv.org/abs/1606.04586
Abstract page for arXiv paper 1606.04586: Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ssl_13
\cite{ssl_13}
Temporal Ensembling for Semi-Supervised Learning
http://arxiv.org/abs/1610.02242v3
In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44% to 7.05% in SVHN with 500 labels and from 18.63% to 16.55% in CIFAR-10 with 4000 labels, and further to 5.12% and 12.16% by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.
true
true
Samuli Laine and Timo Aila
2,017
null
null
null
null
Temporal Ensembling for Semi-Supervised Learning
Review — Π-Model, Temporal Ensembling ... - Sik-Ho Tsang
https://sh-tsang.medium.com/review-%CF%80-model-temporal-ensembling-temporal-ensembling-for-semi-supervised-learning-9cb6eea6865e
Temporal Ensembling for Semi-Supervised Learning. Stochastic Augmentation, Network Dropout, & Momentum Encoder are Used.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ssl_14
\cite{ssl_14}
Unsupervised Data Augmentation for Consistency Training
http://arxiv.org/abs/1904.12848v6
Semi-supervised learning lately has shown much promise in improving deep learning models when labeled data is scarce. Common among recent approaches is the use of consistency training on a large amount of unlabeled data to constrain model predictions to be invariant to input noise. In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning. By substituting simple noising operations with advanced data augmentation methods such as RandAugment and back-translation, our method brings substantial improvements across six language and three vision tasks under the same consistency training framework. On the IMDb text classification dataset, with only 20 labeled examples, our method achieves an error rate of 4.20, outperforming the state-of-the-art model trained on 25,000 labeled examples. On a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms all previous approaches and achieves an error rate of 5.43 with only 250 examples. Our method also combines well with transfer learning, e.g., when finetuning from BERT, and yields improvements in high-data regime, such as ImageNet, whether when there is only 10% labeled data or when a full labeled set with 1.3M extra unlabeled examples is used. Code is available at https://github.com/google-research/uda.
true
true
Qizhe Xie and Zihang Dai and Eduard H. Hovy and Thang Luong and Quoc Le
2,020
null
null
null
null
Unsupervised Data Augmentation for Consistency Training
Unsupervised Data Augmentation for Consistency Training
http://arxiv.org/pdf/1904.12848v6
Semi-supervised learning lately has shown much promise in improving deep learning models when labeled data is scarce. Common among recent approaches is the use of consistency training on a large amount of unlabeled data to constrain model predictions to be invariant to input noise. In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning. By substituting simple noising operations with advanced data augmentation methods such as RandAugment and back-translation, our method brings substantial improvements across six language and three vision tasks under the same consistency training framework. On the IMDb text classification dataset, with only 20 labeled examples, our method achieves an error rate of 4.20, outperforming the state-of-the-art model trained on 25,000 labeled examples. On a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms all previous approaches and achieves an error rate of 5.43 with only 250 examples. Our method also combines well with transfer learning, e.g., when finetuning from BERT, and yields improvements in high-data regime, such as ImageNet, whether when there is only 10% labeled data or when a full labeled set with 1.3M extra unlabeled examples is used. Code is available at https://github.com/google-research/uda.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
tnnls_2
\cite{tnnls_2}
MutexMatch: Semi-Supervised Learning with Mutex-Based Consistency Regularization
http://arxiv.org/abs/2203.14316v2
The core issue in semi-supervised learning (SSL) lies in how to effectively leverage unlabeled data, whereas most existing methods tend to put a great emphasis on the utilization of high-confidence samples yet seldom fully explore the usage of low-confidence samples. In this paper, we aim to utilize low-confidence samples in a novel way with our proposed mutex-based consistency regularization, namely MutexMatch. Specifically, the high-confidence samples are required to exactly predict "what it is" by conventional True-Positive Classifier, while the low-confidence samples are employed to achieve a simpler goal -- to predict with ease "what it is not" by True-Negative Classifier. In this sense, we not only mitigate the pseudo-labeling errors but also make full use of the low-confidence unlabeled data by consistency of dissimilarity degree. MutexMatch achieves superior performance on multiple benchmark datasets, i.e., CIFAR-10, CIFAR-100, SVHN, STL-10, mini-ImageNet and Tiny-ImageNet. More importantly, our method further shows superiority when the amount of labeled data is scarce, e.g., 92.23% accuracy with only 20 labeled data on CIFAR-10. Our code and model weights have been released at https://github.com/NJUyued/MutexMatch4SSL.
true
true
Yue Duan and Zhen Zhao and Lei Qi and Lei Wang and Luping Zhou and Yinghuan Shi and Yang Gao
2,024
null
null
null
{IEEE} Trans. on Neural Networks and Learning Systems
MutexMatch: Semi-Supervised Learning with Mutex-Based Consistency Regularization
MutexMatch: Semi-Supervised Learning with Mutex-Based ... - arXiv
https://arxiv.org/abs/2203.14316
In this paper, we aim to utilize low-confidence samples in a novel way with our proposed mutex-based consistency regularization, namely MutexMatch.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ssl_3
\cite{ssl_3}
MixMatch: A Holistic Approach to Semi-Supervised Learning
http://arxiv.org/abs/1905.02249v2
Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets. In this work, we unify the current dominant approaches for semi-supervised learning to produce a new algorithm, MixMatch, that works by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using MixUp. We show that MixMatch obtains state-of-the-art results by a large margin across many datasets and labeled data amounts. For example, on CIFAR-10 with 250 labels, we reduce error rate by a factor of 4 (from 38% to 11%) and by a factor of 2 on STL-10. We also demonstrate how MixMatch can help achieve a dramatically better accuracy-privacy trade-off for differential privacy. Finally, we perform an ablation study to tease apart which components of MixMatch are most important for its success.
true
true
David Berthelot and Nicholas Carlini and Ian J. Goodfellow and Nicolas Papernot and Avital Oliver and Colin Raffel
2,019
null
null
null
null
MixMatch: A Holistic Approach to Semi-Supervised Learning
MixMatch: a holistic approach to semi-supervised learning
https://dl.acm.org/doi/10.5555/3454287.3454741
A new algorithm, MixMatch, that guesses low-entropy labels for data-augmented un-labeled examples and mixes labeled and unlabeled data using MixUp.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ssl_4
\cite{ssl_4}
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
http://arxiv.org/abs/2001.07685v2
Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance. In this paper, we demonstrate the power of a simple combination of two common SSL methods: consistency regularization and pseudo-labeling. Our algorithm, FixMatch, first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10 with 250 labels and 88.61% accuracy with 40 -- just 4 labels per class. Since FixMatch bears many similarities to existing SSL methods that achieve worse performance, we carry out an extensive ablation study to tease apart the experimental factors that are most important to FixMatch's success. We make our code available at https://github.com/google-research/fixmatch.
true
true
Kihyuk Sohn and David Berthelot and Nicholas Carlini and Zizhao Zhang and Han Zhang and Colin Raffel and Ekin Dogus Cubuk and Alexey Kurakin and Chun{-}Liang Li
2,020
null
null
null
null
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
FixMatch: simplifying semi-supervised learning with consistency and ...
https://dl.acm.org/doi/abs/10.5555/3495724.3495775
In this paper we propose FixMatch, an algorithm that is a significant simplification of existing SSL methods.
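The FixMatch abstract above is essentially a two-step recipe: confidence-filtered pseudo-labels from a weakly augmented view, then cross-entropy against a strongly augmented view. The sketch below illustrates that recipe under stated assumptions; `weak_aug`, `strong_aug`, and the 0.95 threshold are stand-ins taken from the abstract's description, not a verified reimplementation.

```python
import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, x_unlabeled, weak_aug, strong_aug, threshold=0.95):
    """Consistency plus pseudo-labeling on an unlabeled batch, FixMatch style.

    Pseudo-labels come from the model's prediction on a weakly augmented view
    and are kept only when the confidence exceeds `threshold`; the model is then
    trained to predict those labels on a strongly augmented view.
    """
    with torch.no_grad():
        weak_logits = model(weak_aug(x_unlabeled))
        probs = F.softmax(weak_logits, dim=-1)
        confidence, pseudo_labels = probs.max(dim=-1)
        mask = (confidence >= threshold).float()   # 1 for retained pseudo-labels

    strong_logits = model(strong_aug(x_unlabeled))
    per_sample = F.cross_entropy(strong_logits, pseudo_labels, reduction="none")
    return (per_sample * mask).mean()
```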
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ssl_16
\cite{ssl_16}
ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring
http://arxiv.org/abs/1911.09785v2
We improve the recently-proposed "MixMatch" semi-supervised learning algorithm by introducing two new techniques: distribution alignment and augmentation anchoring. Distribution alignment encourages the marginal distribution of predictions on unlabeled data to be close to the marginal distribution of ground-truth labels. Augmentation anchoring feeds multiple strongly augmented versions of an input into the model and encourages each output to be close to the prediction for a weakly-augmented version of the same input. To produce strong augmentations, we propose a variant of AutoAugment which learns the augmentation policy while the model is being trained. Our new algorithm, dubbed ReMixMatch, is significantly more data-efficient than prior work, requiring between $5\times$ and $16\times$ less data to reach the same accuracy. For example, on CIFAR-10 with 250 labeled examples we reach $93.73\%$ accuracy (compared to MixMatch's accuracy of $93.58\%$ with $4{,}000$ examples) and a median accuracy of $84.92\%$ with just four labels per class. We make our code and data open-source at https://github.com/google-research/remixmatch.
true
true
David Berthelot and Nicholas Carlini and Ekin D. Cubuk and Alex Kurakin and Kihyuk Sohn and Han Zhang and Colin Raffel
2,020
null
null
null
null
ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring
ReMixMatch: Semi-Supervised Learning with Distribution Alignment ...
https://arxiv.org/abs/1911.09785
We improve the recently-proposed "MixMatch" semi-supervised learning algorithm by introducing two new techniques: distribution alignment and augmentation
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ssl_19
\cite{ssl_19}
FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling
http://arxiv.org/abs/2110.08263v3
The recently proposed FixMatch achieved state-of-the-art results on most semi-supervised learning (SSL) benchmarks. However, like other modern SSL algorithms, FixMatch uses a pre-defined constant threshold for all classes to select unlabeled data that contribute to the training, thus failing to consider different learning status and learning difficulties of different classes. To address this issue, we propose Curriculum Pseudo Labeling (CPL), a curriculum learning approach to leverage unlabeled data according to the model's learning status. The core of CPL is to flexibly adjust thresholds for different classes at each time step to let pass informative unlabeled data and their pseudo labels. CPL does not introduce additional parameters or computations (forward or backward propagation). We apply CPL to FixMatch and call our improved algorithm FlexMatch. FlexMatch achieves state-of-the-art performance on a variety of SSL benchmarks, with especially strong performances when the labeled data are extremely limited or when the task is challenging. For example, FlexMatch achieves 13.96% and 18.96% error rate reduction over FixMatch on CIFAR-100 and STL-10 datasets respectively, when there are only 4 labels per class. CPL also significantly boosts the convergence speed, e.g., FlexMatch can use only 1/5 training time of FixMatch to achieve even better performance. Furthermore, we show that CPL can be easily adapted to other SSL algorithms and remarkably improve their performances. We open-source our code at https://github.com/TorchSSL/TorchSSL.
true
true
Zhang, Bowen and Wang, Yidong and Hou, Wenxin and Wu, Hao and Wang, Jindong and Okumura, Manabu and Shinozaki, Takahiro
2,021
null
null
null
null
FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling
Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling
https://arxiv.org/abs/2110.08263
We propose Curriculum Pseudo Labeling (CPL), a curriculum learning approach to leverage unlabeled data according to the model's learning status.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ssl_20
\cite{ssl_20}
FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning
http://arxiv.org/abs/2205.07246v3
Semi-supervised Learning (SSL) has witnessed great success owing to the impressive performances brought by various methods based on pseudo labeling and consistency regularization. However, we argue that existing methods might fail to utilize the unlabeled data more effectively since they either use a pre-defined / fixed threshold or an ad-hoc threshold adjusting scheme, resulting in inferior performance and slow convergence. We first analyze a motivating example to obtain intuitions on the relationship between the desirable threshold and model's learning status. Based on the analysis, we hence propose FreeMatch to adjust the confidence threshold in a self-adaptive manner according to the model's learning status. We further introduce a self-adaptive class fairness regularization penalty to encourage the model for diverse predictions during the early training stage. Extensive experiments indicate the superiority of FreeMatch especially when the labeled data are extremely rare. FreeMatch achieves 5.78%, 13.59%, and 1.28% error rate reduction over the latest state-of-the-art method FlexMatch on CIFAR-10 with 1 label per class, STL-10 with 4 labels per class, and ImageNet with 100 labels per class, respectively. Moreover, FreeMatch can also boost the performance of imbalanced SSL. The codes can be found at https://github.com/microsoft/Semi-supervised-learning.
true
true
Yidong Wang and Hao Chen and Qiang Heng and Wenxin Hou and Yue Fan and Zhen Wu and Jindong Wang and Marios Savvides and Takahiro Shinozaki and Bhiksha Raj and Bernt Schiele and Xing Xie
2,023
null
null
null
null
FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning
FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning
https://openreview.net/forum?id=PDrUPTXJI_A
We propose FreeMatch to define and adjust the confidence threshold in a self-adaptive manner for semi-supervised learning.
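FreeMatch's key idea, per the abstract above, is letting the confidence threshold track the model's learning status instead of fixing it. The sketch below shows only a simplified global, EMA-tracked threshold; the paper's class-wise adjustment and fairness regularization are omitted, and all names and constants here are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

class AdaptiveThreshold:
    """Track a confidence threshold as an EMA of the model's own confidence.

    A rough sketch of self-adaptive thresholding: early in training the average
    confidence is low, so more unlabeled samples pass; as the model improves,
    the threshold rises automatically.
    """

    def __init__(self, momentum=0.999, init=0.5):
        self.momentum = momentum
        self.threshold = init

    @torch.no_grad()
    def update(self, unlabeled_logits):
        # EMA of the mean max-probability over the current unlabeled batch.
        probs = F.softmax(unlabeled_logits, dim=-1)
        mean_conf = probs.max(dim=-1).values.mean().item()
        self.threshold = self.momentum * self.threshold + (1 - self.momentum) * mean_conf
        return self.threshold

    def mask(self, unlabeled_logits):
        # Binary mask selecting samples confident enough to be pseudo-labeled.
        probs = F.softmax(unlabeled_logits, dim=-1)
        return (probs.max(dim=-1).values >= self.threshold).float()
```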
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ssl_8
\cite{ssl_8}
SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning
http://arxiv.org/abs/2301.10921v2
The critical challenge of Semi-Supervised Learning (SSL) is how to effectively leverage the limited labeled data and massive unlabeled data to improve the model's generalization performance. In this paper, we first revisit the popular pseudo-labeling methods via a unified sample weighting formulation and demonstrate the inherent quantity-quality trade-off problem of pseudo-labeling with thresholding, which may prohibit learning. To this end, we propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training, effectively exploiting the unlabeled data. We derive a truncated Gaussian function to weight samples based on their confidence, which can be viewed as a soft version of the confidence threshold. We further enhance the utilization of weakly-learned classes by proposing a uniform alignment approach. In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
true
true
Hao Chen and Ran Tao and Yue Fan and Yidong Wang and Jindong Wang and Bernt Schiele and Xing Xie and Bhiksha Raj and Marios Savvides
2,023
null
null
null
null
SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning
Addressing the Quantity-Quality Tradeoff in Semi-supervised Learning
https://openreview.net/forum?id=ymt1zQXBDiF
This paper proposes SoftMatch to improve both the quantity and quality of pseudo-labels in semi-supervised learning. Basically, the authors
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ssl_6
\cite{ssl_6}
SimMatch: Semi-supervised Learning with Similarity Matching
http://arxiv.org/abs/2203.06915v2
Learning with few labeled data has been a longstanding problem in the computer vision and machine learning research community. In this paper, we introduced a new semi-supervised learning framework, SimMatch, which simultaneously considers semantic similarity and instance similarity. In SimMatch, the consistency regularization will be applied on both semantic-level and instance-level. The different augmented views of the same instance are encouraged to have the same class prediction and similar similarity relationship respected to other instances. Next, we instantiated a labeled memory buffer to fully leverage the ground truth labels on instance-level and bridge the gaps between the semantic and instance similarities. Finally, we proposed the \textit{unfolding} and \textit{aggregation} operation which allows these two similarities be isomorphically transformed with each other. In this way, the semantic and instance pseudo-labels can be mutually propagated to generate more high-quality and reliable matching targets. Extensive experimental results demonstrate that SimMatch improves the performance of semi-supervised learning tasks across different benchmark datasets and different settings. Notably, with 400 epochs of training, SimMatch achieves 67.2\%, and 74.4\% Top-1 Accuracy with 1\% and 10\% labeled examples on ImageNet, which significantly outperforms the baseline methods and is better than previous semi-supervised learning frameworks. Code and pre-trained models are available at https://github.com/KyleZheng1997/simmatch.
true
true
Mingkai Zheng and Shan You and Lang Huang and Fei Wang and Chen Qian and Chang Xu
2,022
null
null
null
null
SimMatch: Semi-supervised Learning with Similarity Matching
SimMatch: Semi-supervised Learning with Similarity ...
https://arxiv.org/abs/2203.06915
by M Zheng · 2022 · Cited by 309 — In this paper, we introduced a new semi-supervised learning framework, SimMatch, which simultaneously considers semantic similarity and instance similarity.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ssl_7
\cite{ssl_7}
SimMatchV2: Semi-Supervised Learning with Graph Consistency
http://arxiv.org/abs/2308.06692v1
Semi-Supervised image classification is one of the most fundamental problem in computer vision, which significantly reduces the need for human labor. In this paper, we introduce a new semi-supervised learning algorithm - SimMatchV2, which formulates various consistency regularizations between labeled and unlabeled data from the graph perspective. In SimMatchV2, we regard the augmented view of a sample as a node, which consists of a label and its corresponding representation. Different nodes are connected with the edges, which are measured by the similarity of the node representations. Inspired by the message passing and node classification in graph theory, we propose four types of consistencies, namely 1) node-node consistency, 2) node-edge consistency, 3) edge-edge consistency, and 4) edge-node consistency. We also uncover that a simple feature normalization can reduce the gaps of the feature norm between different augmented views, significantly improving the performance of SimMatchV2. Our SimMatchV2 has been validated on multiple semi-supervised learning benchmarks. Notably, with ResNet-50 as our backbone and 300 epochs of training, SimMatchV2 achieves 71.9\% and 76.2\% Top-1 Accuracy with 1\% and 10\% labeled examples on ImageNet, which significantly outperforms the previous methods and achieves state-of-the-art performance. Code and pre-trained models are available at \href{https://github.com/mingkai-zheng/SimMatchV2}{https://github.com/mingkai-zheng/SimMatchV2}.
true
true
Mingkai Zheng and Shan You and Lang Huang and Chen Luo and Fei Wang and Chen Qian and Chang Xu
2,023
null
null
null
null
SimMatchV2: Semi-Supervised Learning with Graph Consistency
Semi-Supervised Learning with Graph Consistency
https://arxiv.org/abs/2308.06692
by M Zheng · 2023 · Cited by 17 — In this paper, we introduce a new semi-supervised learning algorithm - SimMatchV2, which formulates various consistency regularizations between labeled and
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ssl_17
\cite{ssl_17}
Label Propagation for Deep Semi-supervised Learning
http://arxiv.org/abs/1904.04717v1
Semi-supervised learning is becoming increasingly important because it can combine data carefully labeled by humans with abundant unlabeled data to train deep neural networks. Classic methods on semi-supervised learning that have focused on transductive learning have not been fully exploited in the inductive framework followed by modern deep learning. The same holds for the manifold assumption---that similar examples should get the same prediction. In this work, we employ a transductive label propagation method that is based on the manifold assumption to make predictions on the entire dataset and use these predictions to generate pseudo-labels for the unlabeled data and train a deep neural network. At the core of the transductive method lies a nearest neighbor graph of the dataset that we create based on the embeddings of the same network. Therefore our learning process iterates between these two steps. We improve performance on several datasets especially in the few labels regime and show that our work is complementary to current state of the art.
true
true
Ahmet Iscen and Giorgos Tolias and Yannis Avrithis and Ondrej Chum
2,019
null
null
null
null
Label Propagation for Deep Semi-supervised Learning
[PDF] Label Propagation for Deep Semi-Supervised Learning
https://openaccess.thecvf.com/content_CVPR_2019/papers/Iscen_Label_Propagation_for_Deep_Semi-Supervised_Learning_CVPR_2019_paper.pdf
Label propagation uses a transductive method to generate pseudo-labels for unlabeled data, using a graph based on network embeddings, to train a deep neural
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
tnnls_3
\cite{tnnls_3}
Graph-Based Semi-Supervised Learning: {A} Comprehensive Review
null
null
true
false
Zixing Song and Xiangli Yang and Zenglin Xu and Irwin King
2,023
null
null
null
{IEEE} Trans. on Neural Networks and Learning Systems
Graph-Based Semi-Supervised Learning: {A} Comprehensive Review
Graph-Based Semi-Supervised Learning
https://ieeexplore.ieee.org/document/9737635
An essential class of SSL methods, referred to as graph-based semi-supervised learning (GSSL) methods in the literature, is to first represent each sample as a node in an affinity graph, and then, the label information of unlabeled samples can be inferred based on the structure of the constructed graph. A similarity graph is constructed based on the given data, including both the labeled and unlabeled samples.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ssl_5
\cite{ssl_5}
CoMatch: Semi-supervised Learning with Contrastive Graph Regularization
http://arxiv.org/abs/2011.11183v2
Semi-supervised learning has been an effective paradigm for leveraging unlabeled data to reduce the reliance on labeled data. We propose CoMatch, a new semi-supervised learning method that unifies dominant approaches and addresses their limitations. CoMatch jointly learns two representations of the training data, their class probabilities and low-dimensional embeddings. The two representations interact with each other to jointly evolve. The embeddings impose a smoothness constraint on the class probabilities to improve the pseudo-labels, whereas the pseudo-labels regularize the structure of the embeddings through graph-based contrastive learning. CoMatch achieves state-of-the-art performance on multiple datasets. It achieves substantial accuracy improvements on the label-scarce CIFAR-10 and STL-10. On ImageNet with 1% labels, CoMatch achieves a top-1 accuracy of 66.0%, outperforming FixMatch by 12.6%. Furthermore, CoMatch achieves better representation learning performance on downstream tasks, outperforming both supervised learning and self-supervised learning. Code and pre-trained models are available at https://github.com/salesforce/CoMatch.
true
true
Junnan Li and Caiming Xiong and Steven C. H. Hoi
2,021
null
null
null
null
CoMatch: Semi-supervised Learning with Contrastive Graph Regularization
CoMatch: Semi-Supervised Learning With Contrastive ...
https://openaccess.thecvf.com/content/ICCV2021/papers/Li_CoMatch_Semi-Supervised_Learning_With_Contrastive_Graph_Regularization_ICCV_2021_paper.pdf
by J Li · 2021 · Cited by 384 — We propose CoMatch, a new semi-supervised learning method that unifies dominant approaches and addresses their limitations. CoMatch jointly learns two
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
rep_3
\cite{rep_3}
Big Self-Supervised Models are Strong Semi-Supervised Learners
http://arxiv.org/abs/2006.10029v2
One paradigm for learning from few labeled examples while making best use of a large amount of unlabeled data is unsupervised pretraining followed by supervised fine-tuning. Although this paradigm uses unlabeled data in a task-agnostic way, in contrast to common approaches to semi-supervised learning for computer vision, we show that it is surprisingly effective for semi-supervised learning on ImageNet. A key ingredient of our approach is the use of big (deep and wide) networks during pretraining and fine-tuning. We find that, the fewer the labels, the more this approach (task-agnostic use of unlabeled data) benefits from a bigger network. After fine-tuning, the big network can be further improved and distilled into a much smaller one with little loss in classification accuracy by using the unlabeled examples for a second time, but in a task-specific way. The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2, supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge. This procedure achieves 73.9% ImageNet top-1 accuracy with just 1% of the labels ($\le$13 labeled images per class) using ResNet-50, a $10\times$ improvement in label efficiency over the previous state-of-the-art. With 10% of labels, ResNet-50 trained with our method achieves 77.5% top-1 accuracy, outperforming standard supervised training with all of the labels.
true
true
Ting Chen and Simon Kornblith and Kevin Swersky and Mohammad Norouzi and Geoffrey E. Hinton
2,020
null
null
null
null
Big Self-Supervised Models are Strong Semi-Supervised Learners
[2006.10029] Big Self-Supervised Models are Strong Semi ...
https://arxiv.org/abs/2006.10029
by T Chen · 2020 · Cited by 2883 — We show that it is surprisingly effective for semi-supervised learning on ImageNet. A key ingredient of our approach is the use of big (deep and wide) networks.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ssl_1
\cite{ssl_1}
Realistic Evaluation of Deep Semi-Supervised Learning Algorithms
http://arxiv.org/abs/1804.09170v4
Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain. SSL algorithms based on deep neural networks have recently proven successful on standard benchmark tasks. However, we argue that these benchmarks fail to address many issues that these algorithms would face in real-world applications. After creating a unified reimplementation of various widely-used SSL techniques, we test them in a suite of experiments designed to address these issues. We find that the performance of simple baselines which do not use unlabeled data is often underreported, that SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and that performance can degrade substantially when the unlabeled dataset contains out-of-class examples. To help guide SSL research towards real-world applicability, we make our unified reimplemention and evaluation platform publicly available.
true
true
Avital Oliver and Augustus Odena and Colin Raffel and Ekin Dogus Cubuk and Ian J. Goodfellow
2,018
null
null
null
null
Realistic Evaluation of Deep Semi-Supervised Learning Algorithms
Realistic Evaluation of Deep Semi-Supervised Learning Algorithms
https://arxiv.org/abs/1804.09170
Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ossl_2
\cite{ossl_2}
Semi-Supervised Learning under Class Distribution Mismatch
null
null
true
false
Yanbei Chen and Xiatian Zhu and Wei Li and Shaogang Gong
2,020
null
null
null
null
Semi-Supervised Learning under Class Distribution Mismatch
[PDF] Semi-Supervised Learning under Class Distribution Mismatch
https://ojs.aaai.org/index.php/AAAI/article/view/5763/5619
Class distribution mismatch in semi-supervised learning occurs when labeled and unlabeled data come from different class distributions, unlike conventional SSL.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ossl_14
\cite{ossl_14}
SCOMatch: Alleviating Overtrusting in Open-set Semi-supervised Learning
http://arxiv.org/abs/2409.17512v1
Open-set semi-supervised learning (OSSL) leverages practical open-set unlabeled data, comprising both in-distribution (ID) samples from seen classes and out-of-distribution (OOD) samples from unseen classes, for semi-supervised learning (SSL). Prior OSSL methods initially learned the decision boundary between ID and OOD with labeled ID data, subsequently employing self-training to refine this boundary. These methods, however, suffer from the tendency to overtrust the labeled ID data: the scarcity of labeled data caused the distribution bias between the labeled samples and the entire ID data, which misleads the decision boundary to overfit. The subsequent self-training process, based on the overfitted result, fails to rectify this problem. In this paper, we address the overtrusting issue by treating OOD samples as an additional class, forming a new SSL process. Specifically, we propose SCOMatch, a novel OSSL method that 1) selects reliable OOD samples as new labeled data with an OOD memory queue and a corresponding update strategy and 2) integrates the new SSL process into the original task through our Simultaneous Close-set and Open-set self-training. SCOMatch refines the decision boundary of ID and OOD classes across the entire dataset, thereby leading to improved results. Extensive experimental results show that SCOMatch significantly outperforms the state-of-the-art methods on various benchmarks. The effectiveness is further verified through ablation studies and visualization.
true
true
Wang, Zerun and Xiang, Liuyu and Huang, Lang and Mao, Jiafeng and Xiao, Ling and Yamasaki, Toshihiko
2,025
null
null
null
null
SCOMatch: Alleviating Overtrusting in Open-set Semi-supervised Learning
Alleviating Overtrusting in Open-set Semi-supervised Learning - arXiv
https://arxiv.org/abs/2409.17512
We propose SCOMatch, a novel OSSL method that 1) selects reliable OOD samples as new labeled data with an OOD memory queue and a corresponding update strategy.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ossl_12
\cite{ossl_12}
Rethinking safe semi-supervised learning: Transferring the open-set problem to a close-set one
null
null
true
false
Ma, Qiankun and Gao, Jiyao and Zhan, Bo and Guo, Yunpeng and Zhou, Jiliu and Wang, Yan
2,023
null
null
null
null
Rethinking safe semi-supervised learning: Transferring the open-set problem to a close-set one
[PDF] Rethinking Safe Semi-supervised Learning - CVF Open Access
https://openaccess.thecvf.com/content/ICCV2023/supplemental/Ma_Rethinking_Safe_Semi-supervised_ICCV_2023_supplemental.pdf
Rethinking Safe Semi-supervised Learning: Transferring the Open-set Problem to A Close-set One. Supplementary Material, Section 1: Detailed Datasets.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ossl_16
\cite{ossl_16}
Semi-Supervised Learning via Weight-aware Distillation under Class Distribution Mismatch
http://arxiv.org/abs/2308.11874v1
Semi-Supervised Learning (SSL) under class distribution mismatch aims to tackle a challenging problem wherein unlabeled data contain lots of unknown categories unseen in the labeled ones. In such mismatch scenarios, traditional SSL suffers severe performance damage due to the harmful invasion of the instances with unknown categories into the target classifier. In this study, by strict mathematical reasoning, we reveal that the SSL error under class distribution mismatch is composed of pseudo-labeling error and invasion error, both of which jointly bound the SSL population risk. To alleviate the SSL error, we propose a robust SSL framework called Weight-Aware Distillation (WAD) that, by weights, selectively transfers knowledge beneficial to the target task from unsupervised contrastive representation to the target classifier. Specifically, WAD captures adaptive weights and high-quality pseudo labels to target instances by exploring point mutual information (PMI) in representation space to maximize the role of unlabeled data and filter unknown categories. Theoretically, we prove that WAD has a tight upper bound of population risk under class distribution mismatch. Experimentally, extensive results demonstrate that WAD outperforms five state-of-the-art SSL approaches and one standard baseline on two benchmark datasets, CIFAR10 and CIFAR100, and an artificial cross-dataset. The code is available at https://github.com/RUC-DWBI-ML/research/tree/main/WAD-master.
true
true
Du, Pan and Zhao, Suyun and Sheng, Zisen and Li, Cuiping and Chen, Hong
2,023
null
null
null
null
Semi-Supervised Learning via Weight-aware Distillation under Class Distribution Mismatch
Semi-Supervised Learning via Weight-Aware Distillation ...
https://openaccess.thecvf.com/content/ICCV2023/papers/Du_Semi-Supervised_Learning_via_Weight-Aware_Distillation_under_Class_Distribution_Mismatch_ICCV_2023_paper.pdf
by P Du · 2023 · Cited by 11 — Semi-Supervised Learning (SSL) under class distribution mismatch aims to tackle a challenging problem wherein unlabeled data contain lots of unknown
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ossl_5
\cite{ossl_5}
Safe-Student for Safe Deep Semi-Supervised Learning with Unseen-Class Unlabeled Data
null
null
true
false
Rundong He and Zhongyi Han and Xiankai Lu and Yilong Yin
2,022
null
null
null
null
Safe-Student for Safe Deep Semi-Supervised Learning with Unseen-Class Unlabeled Data
SAFER-STUDENT for Safe Deep Semi-Supervised Learning With...
https://openreview.net/forum?id=j8i42Lrh0Z
Missing: 04/08/2025
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ossl_6
\cite{ossl_6}
{SAFER-STUDENT} for Safe Deep Semi-Supervised Learning With Unseen-Class Unlabeled Data
null
null
true
false
Rundong He and Zhongyi Han and Xiankai Lu and Yilong Yin
2,024
null
null
null
{IEEE} Trans. on Knowledge and Data Engineering
{SAFER-STUDENT} for Safe Deep Semi-Supervised Learning With Unseen-Class Unlabeled Data
SAFER-STUDENT for Safe Deep Semi-Supervised Learning With ...
https://www.researchgate.net/publication/371000311_SAFER-STUDENT_for_Safe_Deep_Semi-Supervised_Learning_With_Unseen-Class_Unlabeled_Data
Deep semi-supervised learning (SSL) methods aim to utilize abundant unlabeled data to improve the seen-class classification. Several similar definitions have emerged to describe this scenario, including safe SSL [9], open-set SSL [22,24,31,45], and the challenge of managing UnLabeled data from Unseen Classes in Semi-Supervised Learning (ULUC-SSL) [14]. In particular, we note that existing open-set SSL methods rely on prediction discrepancies between inliers and outliers from a single model trained on labeled data.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ossl_3
\cite{ossl_3}
Multi-Task Curriculum Framework for Open-Set Semi-Supervised Learning
http://arxiv.org/abs/2007.11330v1
Semi-supervised learning (SSL) has been proposed to leverage unlabeled data for training powerful models when only limited labeled data is available. While existing SSL methods assume that samples in the labeled and unlabeled data share the classes of their samples, we address a more complex novel scenario named open-set SSL, where out-of-distribution (OOD) samples are contained in unlabeled data. Instead of training an OOD detector and SSL separately, we propose a multi-task curriculum learning framework. First, to detect the OOD samples in unlabeled data, we estimate the probability of the sample belonging to OOD. We use a joint optimization framework, which updates the network parameters and the OOD score alternately. Simultaneously, to achieve high performance on the classification of in-distribution (ID) data, we select ID samples in unlabeled data having small OOD scores, and use these data with labeled data for training the deep neural networks to classify ID samples in a semi-supervised manner. We conduct several experiments, and our method achieves state-of-the-art results by successfully eliminating the effect of OOD samples.
true
true
Qing Yu and Daiki Ikami and Go Irie and Kiyoharu Aizawa
2,020
null
null
null
null
Multi-Task Curriculum Framework for Open-Set Semi-Supervised Learning
YU1ut/Multi-Task-Curriculum-Framework-for-Open-Set-SSL
https://github.com/YU1ut/Multi-Task-Curriculum-Framework-for-Open-Set-SSL
This is the official PyTorch implementation of Multi-Task Curriculum Framework for Open-Set Semi-Supervised Learning.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ossl_9
\cite{ossl_9}
Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning
http://arxiv.org/abs/2108.05617v1
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data. While the mainstream technique seeks to completely filter out the OOD samples for semi-supervised learning (SSL), we propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning while avoiding its adverse impact on the SSL. We achieve this goal by first introducing a warm-up training that leverages all the unlabeled data, including both the in-distribution (ID) and OOD samples. Specifically, we perform a pretext task that enforces our feature extractor to obtain a high-level semantic understanding of the training images, leading to more discriminative features that can benefit the downstream tasks. Since the OOD samples are inevitably detrimental to SSL, we propose a novel cross-modal matching strategy to detect OOD samples. Instead of directly applying binary classification, we train the network to predict whether the data sample is matched to an assigned one-hot class label. The appeal of the proposed cross-modal matching over binary classification is the ability to generate a compatible feature space that aligns with the core classification task. Extensive experiments show that our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
true
true
Junkai Huang and Chaowei Fang and Weikai Chen and Zhenhua Chai and Xiaolin Wei and Pengxu Wei and Liang Lin and Guanbin Li
2,021
null
null
null
null
Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning
[PDF] Harvesting OOD Data With Cross-Modal Matching for Open-Set ...
https://guanbinli.com/papers/4-Huang_Trash_To_Treasure_Harvesting_OOD_Data_With_Cross-Modal_Matching_for_ICCV_2021_paper.pdf
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ossl_10
\cite{ossl_10}
Out-of-Distributed Semantic Pruning for Robust Semi-Supervised Learning
http://arxiv.org/abs/2305.18158v2
Recent advances in robust semi-supervised learning (SSL) typically filter out-of-distribution (OOD) information at the sample level. We argue that an overlooked problem of robust SSL is its corrupted information on semantic level, practically limiting the development of the field. In this paper, we take an initial step to explore and propose a unified framework termed OOD Semantic Pruning (OSP), which aims at pruning OOD semantics out from in-distribution (ID) features. Specifically, (i) we propose an aliasing OOD matching module to pair each ID sample with an OOD sample with semantic overlap. (ii) We design a soft orthogonality regularization, which first transforms each ID feature by suppressing its semantic component that is collinear with paired OOD sample. It then forces the predictions before and after soft orthogonality decomposition to be consistent. Being practically simple, our method shows a strong performance in OOD detection and ID classification on challenging benchmarks. In particular, OSP surpasses the previous state-of-the-art by 13.7% on accuracy for ID classification and 5.9% on AUROC for OOD detection on TinyImageNet dataset. The source codes are publicly available at https://github.com/rain305f/OSP.
true
true
Wang, Yu and Qiao, Pengchong and Liu, Chang and Song, Guoli and Zheng, Xiawu and Chen, Jie
2,023
null
null
null
null
Out-of-Distributed Semantic Pruning for Robust Semi-Supervised Learning
[PDF] Out-of-Distributed Semantic Pruning for Robust Semi-Supervised ...
https://openaccess.thecvf.com/content/CVPR2023/papers/Wang_Out-of-Distributed_Semantic_Pruning_for_Robust_Semi-Supervised_Learning_CVPR_2023_paper.pdf
Recent advances in robust semi-supervised learning (SSL) typically filter out-of-distribution (OOD) information at the sample level.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ossl_8
\cite{ossl_8}
Unknown-Aware Graph Regularization for Robust Semi-supervised Learning from Uncurated Data
null
null
true
false
Heejo Kong and Suneung Kim and Ho{-}Joong Kim and Seong{-}Whan Lee
2,024
null
null
null
null
Unknown-Aware Graph Regularization for Robust Semi-supervised Learning from Uncurated Data
Unknown-Aware Graph Regularization for Robust Semi- ...
https://www.researchgate.net/publication/379297624_Unknown-Aware_Graph_Regularization_for_Robust_Semi-supervised_Learning_from_Uncurated_Data
In this paper, we propose a robust SSL method for learning from uncurated real-world data within the context of open-set semi-supervised learning (OSSL). Unlike
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ossl_4
\cite{ossl_4}
OpenMatch: Open-set Consistency Regularization for Semi-supervised Learning with Outliers
http://arxiv.org/abs/2105.14148v2
Semi-supervised learning (SSL) is an effective means to leverage unlabeled data to improve a model's performance. Typical SSL methods like FixMatch assume that labeled and unlabeled data share the same label space. However, in practice, unlabeled data can contain categories unseen in the labeled set, i.e., outliers, which can significantly harm the performance of SSL algorithms. To address this problem, we propose a novel Open-set Semi-Supervised Learning (OSSL) approach called OpenMatch. Learning representations of inliers while rejecting outliers is essential for the success of OSSL. To this end, OpenMatch unifies FixMatch with novelty detection based on one-vs-all (OVA) classifiers. The OVA-classifier outputs the confidence score of a sample being an inlier, providing a threshold to detect outliers. Another key contribution is an open-set soft-consistency regularization loss, which enhances the smoothness of the OVA-classifier with respect to input transformations and greatly improves outlier detection. OpenMatch achieves state-of-the-art performance on three datasets, and even outperforms a fully supervised model in detecting outliers unseen in unlabeled data on CIFAR10.
true
true
Saito, Kuniaki and Kim, Donghyun and Saenko, Kate
2,021
null
null
null
null
OpenMatch: Open-set Consistency Regularization for Semi-supervised Learning with Outliers
VisionLearningGroup/OP_Match
https://github.com/VisionLearningGroup/OP_Match
OpenMatch: Open-set Consistency Regularization for Semi-supervised Learning with Outliers (NeurIPS 2021) ... This is an PyTorch implementation of OpenMatch. This
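OpenMatch, as summarized above, attaches one-vs-all (OVA) classifiers that score how likely a sample is to be an inlier. The sketch below is one common way such an OVA head and inlier score can be realized, under assumptions of our own (a single linear head over precomputed features); it is not a verified reimplementation of OpenMatch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OVAHead(nn.Module):
    """One-vs-all head: for each class, a binary (inlier vs. not) pair of logits.

    A hedged sketch of an OVA-based outlier score: the inlier score of a sample
    is taken as the inlier probability of its most likely closed-set class, and
    samples scoring below a chosen threshold can be flagged as outliers.
    """

    def __init__(self, feature_dim, num_classes):
        super().__init__()
        self.fc = nn.Linear(feature_dim, 2 * num_classes)
        self.num_classes = num_classes

    def forward(self, features):
        # Reshape to (batch, 2, num_classes): channel 0 = "not this class", 1 = "this class".
        logits = self.fc(features).view(-1, 2, self.num_classes)
        return F.softmax(logits, dim=1)

    def inlier_score(self, features, closed_set_logits):
        ova_probs = self.forward(features)         # (B, 2, K)
        pred = closed_set_logits.argmax(dim=-1)    # most likely closed-set class
        idx = torch.arange(features.size(0), device=features.device)
        return ova_probs[idx, 1, pred]             # p(inlier) for the predicted class
```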
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ossl_7
\cite{ossl_7}
IOMatch: Simplifying Open-Set Semi-Supervised Learning with Joint Inliers and Outliers Utilization
http://arxiv.org/abs/2308.13168v1
Semi-supervised learning (SSL) aims to leverage massive unlabeled data when labels are expensive to obtain. Unfortunately, in many real-world applications, the collected unlabeled data will inevitably contain unseen-class outliers not belonging to any of the labeled classes. To deal with the challenging open-set SSL task, the mainstream methods tend to first detect outliers and then filter them out. However, we observe a surprising fact that such approach could result in more severe performance degradation when labels are extremely scarce, as the unreliable outlier detector may wrongly exclude a considerable portion of valuable inliers. To tackle with this issue, we introduce a novel open-set SSL framework, IOMatch, which can jointly utilize inliers and outliers, even when it is difficult to distinguish exactly between them. Specifically, we propose to employ a multi-binary classifier in combination with the standard closed-set classifier for producing unified open-set classification targets, which regard all outliers as a single new class. By adopting these targets as open-set pseudo-labels, we optimize an open-set classifier with all unlabeled samples including both inliers and outliers. Extensive experiments have shown that IOMatch significantly outperforms the baseline methods across different benchmark datasets and different settings despite its remarkable simplicity. Our code and models are available at https://github.com/nukezil/IOMatch.
true
true
Zekun Li and Lei Qi and Yinghuan Shi and Yang Gao
2,023
null
null
null
null
IOMatch: Simplifying Open-Set Semi-Supervised Learning with Joint Inliers and Outliers Utilization
[ICCV 2023 Oral] IOMatch: Simplifying Open-Set Semi-Supervised ...
https://github.com/nukezil/IOMatch
This is the official repository for our ICCV 2023 paper: IOMatch: Simplifying Open-Set Semi-Supervised Learning with Joint Inliers and Outliers Utilization.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ossl_11
\cite{ossl_11}
SSB: Simple but Strong Baseline for Boosting Performance of Open-Set Semi-Supervised Learning
http://arxiv.org/abs/2311.10572v1
Semi-supervised learning (SSL) methods effectively leverage unlabeled data to improve model generalization. However, SSL models often underperform in open-set scenarios, where unlabeled data contain outliers from novel categories that do not appear in the labeled set. In this paper, we study the challenging and realistic open-set SSL setting, where the goal is to both correctly classify inliers and to detect outliers. Intuitively, the inlier classifier should be trained on inlier data only. However, we find that inlier classification performance can be largely improved by incorporating high-confidence pseudo-labeled data, regardless of whether they are inliers or outliers. Also, we propose to utilize non-linear transformations to separate the features used for inlier classification and outlier detection in the multi-task learning framework, preventing adverse effects between them. Additionally, we introduce pseudo-negative mining, which further boosts outlier detection performance. The three ingredients lead to what we call Simple but Strong Baseline (SSB) for open-set SSL. In experiments, SSB greatly improves both inlier classification and outlier detection performance, outperforming existing methods by a large margin. Our code will be released at https://github.com/YUE-FAN/SSB.
true
true
Fan, Yue and Kukleva, Anna and Dai, Dengxin and Schiele, Bernt
2,023
null
null
null
null
SSB: Simple but Strong Baseline for Boosting Performance of Open-Set Semi-Supervised Learning
SSB: Simple but Strong Baseline for Boosting Performance ...
https://ieeexplore.ieee.org/iel7/10376473/10376477/10377450.pdf
by Y Fan · 2023 · Cited by 17 — Semi-supervised learning (SSL) aims to improve model performance by exploiting both labeled and unlabeled data. As one of the most widely used techniques,
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ossl_1
\cite{ossl_1}
Safe Deep Semi-Supervised Learning for Unseen-Class Unlabeled Data
null
null
true
false
Lan{-}Zhe Guo and Zhenyu Zhang and Yuan Jiang and Yufeng Li and Zhi{-}Hua Zhou
2,020
null
null
null
null
Safe Deep Semi-Supervised Learning for Unseen-Class Unlabeled Data
[PDF] Safe Deep Semi-Supervised Learning for Unseen-Class Unlabeled ...
http://proceedings.mlr.press/v119/guo20i/guo20i.pdf
Deep semi-supervised learning (SSL) is proposed to utilize a large number of cheap unlabeled data to help deep neural networks improve performance, reducing
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ossl_13
\cite{ossl_13}
Binary Decomposition: A Problem Transformation Perspective for Open-Set Semi-Supervised Learning
null
null
true
false
Hang, Jun-Yi and Zhang, Min-Ling
2,024
null
null
null
null
Binary Decomposition: A Problem Transformation Perspective for Open-Set Semi-Supervised Learning
Binary decomposition | Proceedings of the 41st International ...
https://dl.acm.org/doi/10.5555/3692070.3692767
Binary decomposition: a problem transformation perspective for open-set semi-supervised learning. Computing methodologies · Machine learning.
Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers
2505.24443v1
ossl_17
\cite{ossl_17}
They are Not Completely Useless: Towards Recycling Transferable Unlabeled Data for Class-Mismatched Semi-Supervised Learning
http://arxiv.org/abs/2011.13529v4
Semi-Supervised Learning (SSL) with mismatched classes deals with the problem that the classes-of-interests in the limited labeled data is only a subset of the classes in massive unlabeled data. As a result, the classes only possessed by the unlabeled data may mislead the classifier training and thus hindering the realistic landing of various SSL methods. To solve this problem, existing methods usually divide unlabeled data to in-distribution (ID) data and out-of-distribution (OOD) data, and directly discard or weaken the OOD data to avoid their adverse impact. In other words, they treat OOD data as completely useless and thus the potential valuable information for classification contained by them is totally ignored. To remedy this defect, this paper proposes a "Transferable OOD data Recycling" (TOOR) method which properly utilizes ID data as well as the "recyclable" OOD data to enrich the information for conducting class-mismatched SSL. Specifically, TOOR firstly attributes all unlabeled data to ID data or OOD data, among which the ID data are directly used for training. Then we treat the OOD data that have a close relationship with ID data and labeled data as recyclable, and employ adversarial domain adaptation to project them to the space of ID data and labeled data. In other words, the recyclability of an OOD datum is evaluated by its transferability, and the recyclable OOD data are transferred so that they are compatible with the distribution of known classes-of-interests. Consequently, our TOOR method extracts more information from unlabeled data than existing approaches, so it can achieve the improved performance which is demonstrated by the experiments on typical benchmark datasets.
true
true
Huang, Zhuo and Yang, Jian and Gong, Chen
2,022
null
null
null
{IEEE} Trans. on Multimedia
They are Not Completely Useless: Towards Recycling Transferable Unlabeled Data for Class-Mismatched Semi-Supervised Learning
Towards Recycling Transferable Unlabeled Data for Class ... - arXiv
https://arxiv.org/abs/2011.13529
They are Not Completely Useless: Towards Recycling Transferable Unlabeled Data for Class-Mismatched Semi-Supervised Learning. Authors:Zhuo Huang
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
liu2024deep
\cite{liu2024deep}
Deep Industrial Image Anomaly Detection: A Survey
http://arxiv.org/abs/2301.11514v5
The recent rapid development of deep learning has laid a milestone in industrial Image Anomaly Detection (IAD). In this paper, we provide a comprehensive review of deep learning-based image anomaly detection techniques, from the perspectives of neural network architectures, levels of supervision, loss functions, metrics and datasets. In addition, we extract the new setting from industrial manufacturing and review the current IAD approaches under our proposed new setting. Moreover, we highlight several open challenges for image anomaly detection. The merits and downsides of representative network architectures under varying supervision are discussed. Finally, we summarize the research findings and point out future research directions. More resources are available at https://github.com/M-3LAB/awesome-industrial-anomaly-detection.
true
true
Liu, Jiaqi and Xie, Guoyang and Wang, Jinbao and Li, Shangnian and Wang, Chengjie and Zheng, Feng and Jin, Yaochu
2,024
null
null
10.1109/cvpr52688.2022.01392
Machine Intelligence Research
Deep Industrial Image Anomaly Detection: A Survey
Deep Industrial Image Anomaly Detection: A Survey
http://arxiv.org/pdf/2301.11514v5
The recent rapid development of deep learning has laid a milestone in industrial Image Anomaly Detection (IAD). In this paper, we provide a comprehensive review of deep learning-based image anomaly detection techniques, from the perspectives of neural network architectures, levels of supervision, loss functions, metrics and datasets. In addition, we extract the new setting from industrial manufacturing and review the current IAD approaches under our proposed new setting. Moreover, we highlight several open challenges for image anomaly detection. The merits and downsides of representative network architectures under varying supervision are discussed. Finally, we summarize the research findings and point out future research directions. More resources are available at https://github.com/M-3LAB/awesome-industrial-anomaly-detection.
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
bergmann2019mvtec
\cite{bergmann2019mvtec}
{MVTec AD — A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection}
null
null
true
false
Bergmann, Paul and Fauser, Michael and Sattlegger, David and Steger, Carsten
2,019
null
null
10.1007/978-3-031-20056-4_23
null
{MVTec AD — A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection}
The MVTec Anomaly Detection Dataset - ACM Digital Library
https://dl.acm.org/doi/abs/10.1007/s11263-020-01400-4
(2019a). MVTec AD: A comprehensive real-world dataset for unsupervised anomaly detection. In Proceedings of the IEEE conference on computer vision and pattern
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
bergmann2018improving
\cite{bergmann2018improving}
Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders
http://arxiv.org/abs/1807.02011v3
Convolutional autoencoders have emerged as popular methods for unsupervised defect segmentation on image data. Most commonly, this task is performed by thresholding a pixel-wise reconstruction error based on an $\ell^p$ distance. This procedure, however, leads to large residuals whenever the reconstruction encompasses slight localization inaccuracies around edges. It also fails to reveal defective regions that have been visually altered when intensity values stay roughly consistent. We show that these problems prevent these approaches from being applied to complex real-world scenarios and that it cannot be easily avoided by employing more elaborate architectures such as variational or feature matching autoencoders. We propose to use a perceptual loss function based on structural similarity which examines inter-dependencies between local image regions, taking into account luminance, contrast and structural information, instead of simply comparing single pixel values. It achieves significant performance gains on a challenging real-world dataset of nanofibrous materials and a novel dataset of two woven fabrics over the state of the art approaches for unsupervised defect segmentation that use pixel-wise reconstruction error metrics.
true
true
Bergmann, Paul and Löwe, Sindy and Fauser, Michael and Sattlegger, David and Steger, Carsten
2,019
null
null
null
null
Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders
(PDF) Improving Unsupervised Defect Segmentation by Applying ...
https://www.researchgate.net/publication/331779705_Improving_Unsupervised_Defect_Segmentation_by_Applying_Structural_Similarity_to_Autoencoders
Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders; Paul Bergmann, Technical University of Munich.
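The record above proposes replacing the per-pixel l^p reconstruction error of a defect-segmentation autoencoder with a structural-similarity (SSIM) loss. Below is a minimal, illustrative NumPy/SciPy sketch of a windowed SSIM map between an image and its reconstruction; the window size, constants, and the random stand-in images are assumptions, not the paper's exact configuration.

```python
# Illustrative sketch of an SSIM-based reconstruction error map for defect
# segmentation, assuming grayscale images scaled to [0, 1].
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_map(x, y, win=11, c1=0.01**2, c2=0.03**2):
    """Per-pixel SSIM between two images; low SSIM suggests a defect."""
    mu_x = uniform_filter(x, win)
    mu_y = uniform_filter(y, win)
    var_x = uniform_filter(x * x, win) - mu_x**2
    var_y = uniform_filter(y * y, win) - mu_y**2
    cov = uniform_filter(x * y, win) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    return num / den

x = np.random.rand(128, 128)                                   # stand-in input image
x_hat = np.clip(x + 0.05 * np.random.randn(128, 128), 0, 1)    # stand-in reconstruction
anomaly = 1.0 - ssim_map(x, x_hat)                             # 1 - SSIM as anomaly score
print(anomaly.shape, float(anomaly.mean()))
```

In training, the negative mean of this map (or 1 minus it) would serve as the reconstruction loss in place of MSE; thresholding the map at test time yields the defect segmentation.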
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
liu2020towards
\cite{liu2020towards}
Towards Visually Explaining Variational Autoencoders
http://arxiv.org/abs/1911.07389v7
Recent advances in Convolutional Neural Network (CNN) model interpretability have led to impressive progress in visualizing and understanding model predictions. In particular, gradient-based visual attention methods have driven much recent effort in using visual attention maps as a means for visual explanations. A key problem, however, is these methods are designed for classification and categorization tasks, and their extension to explaining generative models, e.g. variational autoencoders (VAE) is not trivial. In this work, we take a step towards bridging this crucial gap, proposing the first technique to visually explain VAEs by means of gradient-based attention. We present methods to generate visual attention from the learned latent space, and also demonstrate such attention explanations serve more than just explaining VAE predictions. We show how these attention maps can be used to localize anomalies in images, demonstrating state-of-the-art performance on the MVTec-AD dataset. We also show how they can be infused into model training, helping bootstrap the VAE into learning improved latent space disentanglement, demonstrated on the Dsprites dataset.
true
true
Liu, Wenqian and Li, Runze and Zheng, Meng and Karanam, Srikrishna and Wu, Ziyan and Bhanu, Bir and Radke, Richard J. and Camps, Octavia
2,020
null
null
10.1007/978-3-030-20893-6_39
null
Towards Visually Explaining Variational Autoencoders
Towards Visually Explaining Variational Autoencoders
http://arxiv.org/pdf/1911.07389v7
Recent advances in Convolutional Neural Network (CNN) model interpretability have led to impressive progress in visualizing and understanding model predictions. In particular, gradient-based visual attention methods have driven much recent effort in using visual attention maps as a means for visual explanations. A key problem, however, is these methods are designed for classification and categorization tasks, and their extension to explaining generative models, e.g. variational autoencoders (VAE) is not trivial. In this work, we take a step towards bridging this crucial gap, proposing the first technique to visually explain VAEs by means of gradient-based attention. We present methods to generate visual attention from the learned latent space, and also demonstrate such attention explanations serve more than just explaining VAE predictions. We show how these attention maps can be used to localize anomalies in images, demonstrating state-of-the-art performance on the MVTec-AD dataset. We also show how they can be infused into model training, helping bootstrap the VAE into learning improved latent space disentanglement, demonstrated on the Dsprites dataset.
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
akcay2019ganomaly
\cite{akcay2019ganomaly}
GANomaly: Semi-Supervised Anomaly Detection via Adversarial Training
http://arxiv.org/abs/1805.06725v3
Anomaly detection is a classical problem in computer vision, namely the determination of the normal from the abnormal when datasets are highly biased towards one class (normal) due to the insufficient sample size of the other class (abnormal). While this can be addressed as a supervised learning problem, a significantly more challenging problem is that of detecting the unknown/unseen anomaly case that takes us instead into the space of a one-class, semi-supervised learning paradigm. We introduce such a novel anomaly detection model, by using a conditional generative adversarial network that jointly learns the generation of high-dimensional image space and the inference of latent space. Employing encoder-decoder-encoder sub-networks in the generator network enables the model to map the input image to a lower dimension vector, which is then used to reconstruct the generated output image. The use of the additional encoder network maps this generated image to its latent representation. Minimizing the distance between these images and the latent vectors during training aids in learning the data distribution for the normal samples. As a result, a larger distance metric from this learned data distribution at inference time is indicative of an outlier from that distribution - an anomaly. Experimentation over several benchmark datasets, from varying domains, shows the model efficacy and superiority over previous state-of-the-art approaches.
true
true
Akcay, Samet and Atapour-Abarghouei, Amir and Breckon, Toby P.
2,019
null
null
null
null
GANomaly: Semi-Supervised Anomaly Detection via Adversarial Training
GANomaly Paper Review: Semi-Supervised Anomaly Detection via ...
https://towardsdatascience.com/ganomaly-paper-review-semi-supervised-anomaly-detection-via-adversarial-training-a6f7a64a265f/
GANomaly is an anomaly detection model that employs adversarial training to capture the data distribution.
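The GANomaly abstract above describes an encoder-decoder-encoder generator whose anomaly score is the distance between the latent code of the input and the latent code of its reconstruction. The PyTorch sketch below shows only that scoring rule, with toy fully connected networks standing in for the paper's convolutional sub-networks and without the adversarial training losses.

```python
# Toy sketch of GANomaly-style inference: z = E1(x), x_hat = D(z), z_hat = E2(x_hat),
# anomaly score = mean |z - z_hat|. Architectures and sizes are placeholders.
import torch
import torch.nn as nn

d_in, d_z = 784, 64  # placeholder input and latent dimensions
enc1 = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(), nn.Linear(256, d_z))
dec = nn.Sequential(nn.Linear(d_z, 256), nn.ReLU(), nn.Linear(256, d_in))
enc2 = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(), nn.Linear(256, d_z))

@torch.no_grad()
def anomaly_score(x):
    z = enc1(x)          # latent code of the input
    x_hat = dec(z)       # reconstruction
    z_hat = enc2(x_hat)  # latent code of the reconstruction
    return (z - z_hat).abs().mean(dim=1)  # larger = more anomalous

scores = anomaly_score(torch.randn(8, d_in))
print(scores.shape)  # one score per image in the batch
```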
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
damm2024anomalydino
\cite{damm2024anomalydino}
AnomalyDINO: Boosting Patch-based Few-shot Anomaly Detection with DINOv2
http://arxiv.org/abs/2405.14529v3
Recent advances in multimodal foundation models have set new standards in few-shot anomaly detection. This paper explores whether high-quality visual features alone are sufficient to rival existing state-of-the-art vision-language models. We affirm this by adapting DINOv2 for one-shot and few-shot anomaly detection, with a focus on industrial applications. We show that this approach does not only rival existing techniques but can even outmatch them in many settings. Our proposed vision-only approach, AnomalyDINO, follows the well-established patch-level deep nearest neighbor paradigm, and enables both image-level anomaly prediction and pixel-level anomaly segmentation. The approach is methodologically simple and training-free and, thus, does not require any additional data for fine-tuning or meta-learning. Despite its simplicity, AnomalyDINO achieves state-of-the-art results in one- and few-shot anomaly detection (e.g., pushing the one-shot performance on MVTec-AD from an AUROC of 93.1% to 96.6%). The reduced overhead, coupled with its outstanding few-shot performance, makes AnomalyDINO a strong candidate for fast deployment, e.g., in industrial contexts.
true
true
Damm, Simon and Laszkiewicz, Mike and Lederer, Johannes and Fischer, Asja
2,024
null
null
10.1561/0600000110
null
AnomalyDINO: Boosting Patch-based Few-shot Anomaly Detection with DINOv2
[PDF] Boosting Patch-Based Few-Shot Anomaly Detection with DINOv2
https://openaccess.thecvf.com/content/WACV2025/papers/Damm_AnomalyDINO_Boosting_Patch-Based_Few-Shot_Anomaly_Detection_with_DINOv2_WACV_2025_paper.pdf
Our approach, termed AnomalyDINO, follows the well-established AD framework of patch-level deep nearest neighbor [34, 46], and leverages DINOv2 [30] as a back-.
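AnomalyDINO, as summarized above, scores test patches by their distance to a memory of nominal patch features. A minimal NumPy sketch of that patch-level nearest-neighbor rule on precomputed features follows; the DINOv2 feature extractor is assumed to have run already, and the array shapes are stand-ins.

```python
# Patch-level deep nearest neighbor scoring on precomputed patch features.
import numpy as np

rng = np.random.default_rng(0)
memory = rng.normal(size=(4096, 384))      # stand-in nominal (training) patch features
test_feats = rng.normal(size=(256, 384))   # stand-in patch features of one test image

def patch_nn_scores(test_feats, memory):
    """Distance of each test patch to its nearest nominal patch."""
    d2 = ((test_feats ** 2).sum(1)[:, None]
          + (memory ** 2).sum(1)[None, :]
          - 2.0 * test_feats @ memory.T)
    return np.sqrt(np.maximum(d2.min(axis=1), 0.0))

scores = patch_nn_scores(test_feats, memory)   # patch-level anomaly scores
image_score = scores.max()                     # image-level score = worst patch
print(scores.shape, float(image_score))
```

Reshaping the patch scores back to the feature grid and upsampling gives the pixel-level anomaly map described in the abstract.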
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
roth2022towards
\cite{roth2022towards}
Towards Total Recall in Industrial Anomaly Detection
http://arxiv.org/abs/2106.08265v2
Being able to spot defective parts is a critical component in large-scale industrial manufacturing. A particular challenge that we address in this work is the cold-start problem: fit a model using nominal (non-defective) example images only. While handcrafted solutions per class are possible, the goal is to build systems that work well simultaneously on many different tasks automatically. The best performing approaches combine embeddings from ImageNet models with an outlier detection model. In this paper, we extend on this line of work and propose PatchCore, which uses a maximally representative memory bank of nominal patch-features. PatchCore offers competitive inference times while achieving state-of-the-art performance for both detection and localization. On the challenging, widely used MVTec AD benchmark PatchCore achieves an image-level anomaly detection AUROC score of up to 99.6%, more than halving the error compared to the next best competitor. We further report competitive results on two additional datasets and also find competitive results in the few samples regime. (Work done during a research internship at Amazon AWS.) Code: github.com/amazon-research/patchcore-inspection.
true
true
Roth, Karsten and Pemula, Latha and Zepeda, Joaquin and Scholkopf, Bernhard and Brox, Thomas and Gehler, Peter
2,022
null
null
10.1109/cvpr52688.2022.00951
null
Towards Total Recall in Industrial Anomaly Detection
Towards Total Recall in Industrial Anomaly Detection
http://arxiv.org/pdf/2106.08265v2
Being able to spot defective parts is a critical component in large-scale industrial manufacturing. A particular challenge that we address in this work is the cold-start problem: fit a model using nominal (non-defective) example images only. While handcrafted solutions per class are possible, the goal is to build systems that work well simultaneously on many different tasks automatically. The best performing approaches combine embeddings from ImageNet models with an outlier detection model. In this paper, we extend on this line of work and propose PatchCore, which uses a maximally representative memory bank of nominal patch-features. PatchCore offers competitive inference times while achieving state-of-the-art performance for both detection and localization. On the challenging, widely used MVTec AD benchmark PatchCore achieves an image-level anomaly detection AUROC score of up to 99.6%, more than halving the error compared to the next best competitor. We further report competitive results on two additional datasets and also find competitive results in the few samples regime. (Work done during a research internship at Amazon AWS.) Code: github.com/amazon-research/patchcore-inspection.
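PatchCore, per the abstract above, keeps a "maximally representative" memory bank by subsampling nominal patch features. Below is a small NumPy sketch of the greedy k-center (farthest-point) selection commonly used for such coresets; PatchCore additionally speeds this up with random projections, which is omitted here, and the data are stand-ins.

```python
# Greedy k-center (farthest-point) coreset selection over patch features.
import numpy as np

def greedy_coreset(feats, m, seed=0):
    """Pick m indices so that the selected points cover the feature set well."""
    rng = np.random.default_rng(seed)
    n = feats.shape[0]
    selected = [int(rng.integers(n))]
    # distance of every point to its closest already-selected point
    min_d = np.linalg.norm(feats - feats[selected[0]], axis=1)
    for _ in range(m - 1):
        idx = int(min_d.argmax())          # farthest point from the current coreset
        selected.append(idx)
        min_d = np.minimum(min_d, np.linalg.norm(feats - feats[idx], axis=1))
    return np.array(selected)

feats = np.random.default_rng(1).normal(size=(2000, 128))  # stand-in patch features
coreset_idx = greedy_coreset(feats, m=200)
memory_bank = feats[coreset_idx]                           # subsampled memory bank
print(memory_bank.shape)
```

At test time the memory bank is queried with the same nearest-neighbor rule sketched for the AnomalyDINO record above.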
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
jiang2022softpatch
\cite{jiang2022softpatch}
SoftPatch: Unsupervised Anomaly Detection with Noisy Data
http://arxiv.org/abs/2403.14233v1
Although mainstream unsupervised anomaly detection (AD) algorithms perform well in academic datasets, their performance is limited in practical application due to the ideal experimental setting of clean training data. Training with noisy data is an inevitable problem in real-world anomaly detection but is seldom discussed. This paper considers label-level noise in image sensory anomaly detection for the first time. To solve this problem, we proposed a memory-based unsupervised AD method, SoftPatch, which efficiently denoises the data at the patch level. Noise discriminators are utilized to generate outlier scores for patch-level noise elimination before coreset construction. The scores are then stored in the memory bank to soften the anomaly detection boundary. Compared with existing methods, SoftPatch maintains a strong modeling ability of normal data and alleviates the overconfidence problem in coreset. Comprehensive experiments in various noise scenes demonstrate that SoftPatch outperforms the state-of-the-art AD methods on the MVTecAD and BTAD benchmarks and is comparable to those methods under the setting without noise.
true
true
Jiang, Xi and Liu, Jianlin and Wang, Jinbao and Nie, Qiang and Wu, Kai and Liu, Yong and Wang, Chengjie and Zheng, Feng
2,022
null
null
null
null
SoftPatch: Unsupervised Anomaly Detection with Noisy Data
SoftPatch: Unsupervised Anomaly Detection with Noisy Data
http://arxiv.org/pdf/2403.14233v1
Although mainstream unsupervised anomaly detection (AD) algorithms perform well in academic datasets, their performance is limited in practical application due to the ideal experimental setting of clean training data. Training with noisy data is an inevitable problem in real-world anomaly detection but is seldom discussed. This paper considers label-level noise in image sensory anomaly detection for the first time. To solve this problem, we proposed a memory-based unsupervised AD method, SoftPatch, which efficiently denoises the data at the patch level. Noise discriminators are utilized to generate outlier scores for patch-level noise elimination before coreset construction. The scores are then stored in the memory bank to soften the anomaly detection boundary. Compared with existing methods, SoftPatch maintains a strong modeling ability of normal data and alleviates the overconfidence problem in coreset. Comprehensive experiments in various noise scenes demonstrate that SoftPatch outperforms the state-of-the-art AD methods on the MVTecAD and BTAD benchmarks and is comparable to those methods under the setting without noise.
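SoftPatch, as described above, assigns outlier scores to training patches so that noisy (possibly anomalous) patches are down-weighted rather than trusted blindly. The sketch below uses a simple k-nearest-neighbor distance among training patches as the outlier score and turns it into a soft weight; the paper's actual noise discriminators and the thresholding rule differ in detail, so this is only an assumed simplification.

```python
# Sketch: per-patch outlier scores from kNN distances among training patches,
# mapped to soft weights in (0, 1]; outlier-like patches receive small weights.
import numpy as np

def knn_outlier_scores(feats, k=5):
    sq = (feats ** 2).sum(1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * feats @ feats.T
    np.fill_diagonal(d2, np.inf)                     # exclude self-distance
    knn_d = np.sort(np.sqrt(np.maximum(d2, 0.0)), axis=1)[:, :k]
    return knn_d.mean(axis=1)                        # larger = more outlier-like

feats = np.random.default_rng(2).normal(size=(1000, 64))  # stand-in training patches
scores = knn_outlier_scores(feats)
tau = np.quantile(scores, 0.85)                      # illustrative cut-off
weights = np.exp(-np.maximum(scores - tau, 0.0))     # soften instead of hard-dropping
print(float(weights.min()), float(weights.max()))
```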
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
li2024sam
\cite{li2024sam}
A SAM-guided Two-stream Lightweight Model for Anomaly Detection
http://arxiv.org/abs/2402.19145v2
In industrial anomaly detection, model efficiency and mobile-friendliness become the primary concerns in real-world applications. Simultaneously, the impressive generalization capabilities of Segment Anything (SAM) have garnered broad academic attention, making it an ideal choice for localizing unseen anomalies and diverse real-world patterns. In this paper, considering these two critical factors, we propose a SAM-guided Two-stream Lightweight Model for unsupervised anomaly detection (STLM) that not only aligns with the two practical application requirements but also harnesses the robust generalization capabilities of SAM. We employ two lightweight image encoders, i.e., our two-stream lightweight module, guided by SAM's knowledge. To be specific, one stream is trained to generate discriminative and general feature representations in both normal and anomalous regions, while the other stream reconstructs the same images without anomalies, which effectively enhances the differentiation of two-stream representations when facing anomalous regions. Furthermore, we employ a shared mask decoder and a feature aggregation module to generate anomaly maps. Our experiments conducted on MVTec AD benchmark show that STLM, with about 16M parameters and achieving an inference time in 20ms, competes effectively with state-of-the-art methods in terms of performance, 98.26% on pixel-level AUC and 94.92% on PRO. We further experiment on more difficult datasets, e.g., VisA and DAGM, to demonstrate the effectiveness and generalizability of STLM.
true
true
Li, Chenghao and Qi, Lei and Geng, Xin
2,025
null
null
10.1109/cvpr.2019.00982
ACM Transactions on Multimedia Computing, Communications, and Applications
A SAM-guided Two-stream Lightweight Model for Anomaly Detection
A SAM-guided Two-stream Lightweight Model for Anomaly Detection
https://arxiv.org/html/2402.19145v1
In this paper, we propose a novel framework called SAM-guided Two-stream Lightweight Model for unsupervised anomaly detection tasks.
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
li2024multimodal
\cite{li2024multimodal}
Multimodal Foundation Models: From Specialists to General-Purpose Assistants
http://arxiv.org/abs/2309.10020v1
This paper presents a comprehensive survey of the taxonomy and evolution of multimodal foundation models that demonstrate vision and vision-language capabilities, focusing on the transition from specialist models to general-purpose assistants. The research landscape encompasses five core topics, categorized into two classes. (i) We start with a survey of well-established research areas: multimodal foundation models pre-trained for specific purposes, including two topics -- methods of learning vision backbones for visual understanding and text-to-image generation. (ii) Then, we present recent advances in exploratory, open research areas: multimodal foundation models that aim to play the role of general-purpose assistants, including three topics -- unified vision models inspired by large language models (LLMs), end-to-end training of multimodal LLMs, and chaining multimodal tools with LLMs. The target audiences of the paper are researchers, graduate students, and professionals in computer vision and vision-language multimodal communities who are eager to learn the basics and recent advances in multimodal foundation models.
true
true
Li, Chunyuan and Gan, Zhe and Yang, Zhengyuan and Yang, Jianwei and Li, Linjie and Wang, Lijuan and Gao, Jianfeng
2,024
null
null
null
Foundations and Trends in Computer Graphics and Vision
Multimodal Foundation Models: From Specialists to General-Purpose Assistants
Multimodal Foundation Models: From Specialists to ...
https://www.nowpublishers.com/article/Details/CGV-110
by C Li · 2024 · Cited by 316 — This monograph presents a comprehensive survey of the taxonomy and evolution of multimodal foundation models that demonstrate vision and vision-language
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
radford2021learning
\cite{radford2021learning}
Learning Transferable Visual Models From Natural Language Supervision
http://arxiv.org/abs/2103.00020v1
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
true
true
Radford, Alec and Kim, Jong Wook and Hallacy, Chris and Ramesh, Aditya and Goh, Gabriel and Agarwal, Sandhini and Sastry, Girish and Askell, Amanda and Mishkin, Pamela and Clark, Jack and Krueger, Gretchen and Sutskever, Ilya
2,021
null
null
null
null
Learning Transferable Visual Models From Natural Language Supervision
Learning Transferable Visual Models From Natural Language Supervision
http://arxiv.org/pdf/2103.00020v1
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
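CLIP's pretraining task, as the abstract explains, is to predict which caption goes with which image via a symmetric contrastive loss over a batch of image and text embeddings. The NumPy sketch below computes that loss on stand-in embeddings; the encoders that would produce the embeddings are assumed and not shown, and the temperature is illustrative.

```python
# Symmetric contrastive (InfoNCE) loss over a batch of paired image/text embeddings.
import numpy as np

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def clip_loss(img_emb, txt_emb, temperature=0.07):
    img, txt = l2_normalize(img_emb), l2_normalize(txt_emb)
    logits = img @ txt.T / temperature            # [batch, batch] similarity matrix
    labels = np.arange(len(img))                  # matching pairs lie on the diagonal

    def xent(lg, lb):
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(lb)), lb].mean()

    return 0.5 * (xent(logits, labels) + xent(logits.T, labels))

rng = np.random.default_rng(0)
img_emb = rng.normal(size=(32, 512))              # stand-in image-encoder outputs
txt_emb = rng.normal(size=(32, 512))              # stand-in text-encoder outputs
print(float(clip_loss(img_emb, txt_emb)))
```

Zero-shot classification then reuses the same cosine-similarity logits, comparing one image embedding against embeddings of class-name prompts.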
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
kirillov2023segment
\cite{kirillov2023segment}
Segment Anything
http://arxiv.org/abs/2304.02643v1
We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision.
true
true
Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Dollar, Piotr and Girshick, Ross
2,023
null
null
10.1109/tip.2023.3293772
null
Segment Anything
Segment Anything
http://arxiv.org/pdf/2304.02643v1
We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision.
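SAM, as introduced above, is promptable: given an image and, for example, a point prompt, it returns candidate masks zero-shot. The snippet below sketches the typical usage of the released `segment_anything` package; the checkpoint path, image, and prompt coordinates are placeholders, and the exact API should be checked against the repository.

```python
# Hedged usage sketch of the released Segment Anything package.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # assumed local checkpoint file
predictor = SamPredictor(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in RGB image (H, W, C)
predictor.set_image(image)

point = np.array([[320, 240]])                    # (x, y) prompt on the object of interest
label = np.array([1])                             # 1 = foreground point
masks, scores, _ = predictor.predict(point_coords=point,
                                     point_labels=label,
                                     multimask_output=True)
print(masks.shape, scores)                        # candidate masks and their quality scores
```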
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
caron2021emerging
\cite{caron2021emerging}
Emerging Properties in Self-Supervised Vision Transformers
http://arxiv.org/abs/2104.14294v2
In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs, nor with convnets. Second, these features are also excellent k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study also underlines the importance of momentum encoder, multi-crop training, and the use of small patches with ViTs. We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels. We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.
true
true
Caron, Mathilde and Touvron, Hugo and Misra, Ishan and J\'egou, Herv\'e and Mairal, Julien and Bojanowski, Piotr and Joulin, Armand
2,021
null
null
null
null
Emerging Properties in Self-Supervised Vision Transformers
[PDF] Emerging Properties in Self-Supervised Vision Transformers
https://openaccess.thecvf.com/content/ICCV2021/papers/Caron_Emerging_Properties_in_Self-Supervised_Vision_Transformers_ICCV_2021_paper.pdf
Self-supervised ViT features contain semantic segmentation, scene layout, object boundaries, and perform well with k-NN classifiers, unlike supervised ViTs or
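The DINO record above reports that frozen self-supervised ViT features are "excellent k-NN classifiers". A compact NumPy sketch of the temperature-weighted k-NN evaluation protocol on precomputed features is shown below; the feature extractor itself is assumed, and the data, k, and temperature are stand-ins.

```python
# Weighted k-NN classification on frozen features (DINO-style evaluation sketch).
import numpy as np

def knn_predict(test_f, train_f, train_y, k=20, T=0.07, n_classes=10):
    train_f = train_f / np.linalg.norm(train_f, axis=1, keepdims=True)
    test_f = test_f / np.linalg.norm(test_f, axis=1, keepdims=True)
    sim = test_f @ train_f.T                      # cosine similarities
    topk = np.argsort(-sim, axis=1)[:, :k]
    preds = []
    for i, idx in enumerate(topk):
        w = np.exp(sim[i, idx] / T)               # temperature-weighted votes
        votes = np.zeros(n_classes)
        np.add.at(votes, train_y[idx], w)         # accumulate votes per class
        preds.append(int(votes.argmax()))
    return np.array(preds)

rng = np.random.default_rng(3)
train_f, train_y = rng.normal(size=(500, 128)), rng.integers(0, 10, 500)
test_f = rng.normal(size=(50, 128))
print(knn_predict(test_f, train_f, train_y)[:10])
```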
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
oquab2023dinov2
\cite{oquab2023dinov2}
DINOv2: Learning Robust Visual Features without Supervision
http://arxiv.org/abs/2304.07193v2
The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision. These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature. In terms of models, we train a ViT model (Dosovitskiy et al., 2020) with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP (Ilharco et al., 2021) on most of the benchmarks at image and pixel levels.
true
true
Maxime Oquab and Timoth{\'e}e Darcet and Th{\'e}o Moutakanni and Huy V. Vo and Marc Szafraniec and Vasil Khalidov and Pierre Fernandez and Daniel HAZIZA and Francisco Massa and Alaaeldin El-Nouby and Mido Assran and Nicolas Ballas and Wojciech Galuba and Russell Howes and Po-Yao Huang and Shang-Wen Li and Ishan Misra and Michael Rabbat and Vasu Sharma and Gabriel Synnaeve and Hu Xu and Herve Jegou and Julien Mairal and Patrick Labatut and Armand Joulin and Piotr Bojanowski
2,024
null
null
null
Transactions on Machine Learning Research
DINOv2: Learning Robust Visual Features without Supervision
DINOv2: Learning Robust Visual Features without Supervision
http://arxiv.org/pdf/2304.07193v2
The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision. These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature. In terms of models, we train a ViT model (Dosovitskiy et al., 2020) with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP (Ilharco et al., 2021) on most of the benchmarks at image and pixel levels.
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
zhang2023faster
\cite{zhang2023faster}
Faster Segment Anything: Towards Lightweight SAM for Mobile Applications
http://arxiv.org/abs/2306.14289v2
Segment Anything Model (SAM) has attracted significant attention due to its impressive zero-shot transfer performance and high versatility for numerous vision applications (like image editing with fine-grained control). Many such applications need to be run on resource-constrained edge devices, like mobile phones. In this work, we aim to make SAM mobile-friendly by replacing the heavyweight image encoder with a lightweight one. A naive way to train such a new SAM as in the original SAM paper leads to unsatisfactory performance, especially when limited training sources are available. We find that this is mainly caused by the coupled optimization of the image encoder and mask decoder, motivated by which we propose decoupled distillation. Concretely, we distill the knowledge from the heavy image encoder (ViT-H in the original SAM) to a lightweight image encoder, which can be automatically compatible with the mask decoder in the original SAM. The training can be completed on a single GPU within less than one day, and the resulting lightweight SAM is termed MobileSAM which is more than 60 times smaller yet performs on par with the original SAM. For inference speed, with a single GPU, MobileSAM runs around 10ms per image: 8ms on the image encoder and 4ms on the mask decoder. With superior performance, our MobileSAM is around 5 times faster than the concurrent FastSAM and 7 times smaller, making it more suitable for mobile applications. Moreover, we show that MobileSAM can run relatively smoothly on CPU. The code for our project is provided at https://github.com/ChaoningZhang/MobileSAM, with a demo showing that MobileSAM can run relatively smoothly on CPU.
true
true
Zhang, Chaoning and Han, Dongshen and Qiao, Yu and Kim, Jung Uk and Bae, Sung-Ho and Lee, Seungkyu and Hong, Choong Seon
2,023
null
null
10.1109/iccv48922.2021.00822
arXiv preprint arXiv:2306.14289
Faster Segment Anything: Towards Lightweight SAM for Mobile Applications
Faster Segment Anything: Towards Lightweight SAM for Mobile Applications
http://arxiv.org/pdf/2306.14289v2
Segment Anything Model (SAM) has attracted significant attention due to its impressive zero-shot transfer performance and high versatility for numerous vision applications (like image editing with fine-grained control). Many such applications need to be run on resource-constrained edge devices, like mobile phones. In this work, we aim to make SAM mobile-friendly by replacing the heavyweight image encoder with a lightweight one. A naive way to train such a new SAM as in the original SAM paper leads to unsatisfactory performance, especially when limited training sources are available. We find that this is mainly caused by the coupled optimization of the image encoder and mask decoder, motivated by which we propose decoupled distillation. Concretely, we distill the knowledge from the heavy image encoder (ViT-H in the original SAM) to a lightweight image encoder, which can be automatically compatible with the mask decoder in the original SAM. The training can be completed on a single GPU within less than one day, and the resulting lightweight SAM is termed MobileSAM which is more than 60 times smaller yet performs on par with the original SAM. For inference speed, with a single GPU, MobileSAM runs around 10ms per image: 8ms on the image encoder and 4ms on the mask decoder. With superior performance, our MobileSAM is around 5 times faster than the concurrent FastSAM and 7 times smaller, making it more suitable for mobile applications. Moreover, we show that MobileSAM can run relatively smoothly on CPU. The code for our project is provided at https://github.com/ChaoningZhang/MobileSAM, with a demo showing that MobileSAM can run relatively smoothly on CPU.
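MobileSAM's key idea, per the abstract, is decoupled distillation: train only a lightweight image encoder to mimic the heavy SAM encoder's embeddings, then reuse SAM's mask decoder unchanged. The PyTorch sketch below shows that feature-matching objective with toy encoder stand-ins; it is not the authors' training code, and the architectures, image sizes, and loop length are placeholders.

```python
# Decoupled distillation sketch: match student image embeddings to a frozen teacher's.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Conv2d(3, 256, 16, stride=16))  # stand-in for SAM's heavy ViT encoder
student = nn.Sequential(nn.Conv2d(3, 256, 16, stride=16))  # stand-in lightweight encoder
for p in teacher.parameters():
    p.requires_grad_(False)                                 # the teacher stays frozen

opt = torch.optim.Adam(student.parameters(), lr=1e-4)
images = torch.randn(4, 3, 256, 256)                        # stand-in image batch

for step in range(3):                                       # tiny illustrative loop
    with torch.no_grad():
        t_emb = teacher(images)
    s_emb = student(images)
    loss = nn.functional.mse_loss(s_emb, t_emb)             # embedding-matching loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(step, float(loss))
# The distilled student is then paired with SAM's original prompt-conditioned mask decoder.
```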
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
capogrosso2024machine
\cite{capogrosso2024machine}
A Machine Learning-oriented Survey on Tiny Machine Learning
http://arxiv.org/abs/2309.11932v2
The emergence of Tiny Machine Learning (TinyML) has positively revolutionized the field of Artificial Intelligence by promoting the joint design of resource-constrained IoT hardware devices and their learning-based software architectures. TinyML carries an essential role within the fourth and fifth industrial revolutions in helping societies, economies, and individuals employ effective AI-infused computing technologies (e.g., smart cities, automotive, and medical robotics). Given its multidisciplinary nature, the field of TinyML has been approached from many different angles: this comprehensive survey wishes to provide an up-to-date overview focused on all the learning algorithms within TinyML-based solutions. The survey is based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodological flow, allowing for a systematic and complete literature survey. In particular, firstly we will examine the three different workflows for implementing a TinyML-based system, i.e., ML-oriented, HW-oriented, and co-design. Secondly, we propose a taxonomy that covers the learning panorama under the TinyML lens, examining in detail the different families of model optimization and design, as well as the state-of-the-art learning techniques. Thirdly, this survey will present the distinct features of hardware devices and software tools that represent the current state-of-the-art for TinyML intelligent edge applications. Finally, we discuss the challenges and future directions.
true
true
Capogrosso, Luigi and Cunico, Federico and Cheng, Dong Seon and Fummi, Franco and Cristani, Marco
2,024
null
null
10.1109/access.2022.3182659
IEEE Access
A Machine Learning-oriented Survey on Tiny Machine Learning
(PDF) A Machine Learning-Oriented Survey on Tiny Machine Learning
https://www.researchgate.net/publication/378163073_A_Machine_Learning-oriented_Survey_on_Tiny_Machine_Learning
This comprehensive survey wishes to provide an up-to-date overview focused on all the learning algorithms within TinyML-based solutions.
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
vadera2022methods
\cite{vadera2022methods}
Methods for Pruning Deep Neural Networks
http://arxiv.org/abs/2011.00241v2
This paper presents a survey of methods for pruning deep neural networks. It begins by categorising over 150 studies based on the underlying approach used and then focuses on three categories: methods that use magnitude based pruning, methods that utilise clustering to identify redundancy, and methods that use sensitivity analysis to assess the effect of pruning. Some of the key influencing studies within these categories are presented to highlight the underlying approaches and results achieved. Most studies present results which are distributed in the literature as new architectures, algorithms and data sets have developed with time, making comparison across different studies difficult. The paper therefore provides a resource for the community that can be used to quickly compare the results from many different methods on a variety of data sets, and a range of architectures, including AlexNet, ResNet, DenseNet and VGG. The resource is illustrated by comparing the results published for pruning AlexNet and ResNet50 on ImageNet and ResNet56 and VGG16 on the CIFAR10 data to reveal which pruning methods work well in terms of retaining accuracy whilst achieving good compression rates. The paper concludes by identifying some promising directions for future research.
true
true
Vadera, Sunil and Ameen, Salem
2,022
null
null
10.1201/9781003162810-13
IEEE Access
Methods for Pruning Deep Neural Networks
Methods for Pruning Deep Neural Networks
http://arxiv.org/pdf/2011.00241v2
This paper presents a survey of methods for pruning deep neural networks. It begins by categorising over 150 studies based on the underlying approach used and then focuses on three categories: methods that use magnitude based pruning, methods that utilise clustering to identify redundancy, and methods that use sensitivity analysis to assess the effect of pruning. Some of the key influencing studies within these categories are presented to highlight the underlying approaches and results achieved. Most studies present results which are distributed in the literature as new architectures, algorithms and data sets have developed with time, making comparison across different studies difficult. The paper therefore provides a resource for the community that can be used to quickly compare the results from many different methods on a variety of data sets, and a range of architectures, including AlexNet, ResNet, DenseNet and VGG. The resource is illustrated by comparing the results published for pruning AlexNet and ResNet50 on ImageNet and ResNet56 and VGG16 on the CIFAR10 data to reveal which pruning methods work well in terms of retaining accuracy whilst achieving good compression rates. The paper concludes by identifying some promising directions for future research.
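Among the pruning families surveyed above, magnitude-based pruning is the simplest: zero out the weights with the smallest absolute values at a chosen sparsity level. A minimal NumPy sketch follows; layer-wise versus global thresholding and the usual fine-tuning after pruning are left out, and the weight tensor is a stand-in.

```python
# Magnitude pruning sketch: zero the smallest |w| until the target sparsity is reached.
import numpy as np

def magnitude_prune(weights, sparsity=0.8):
    """Return a pruned copy of one weight tensor and its binary mask."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)                 # number of weights to remove
    thresh = np.partition(flat, k)[k] if k > 0 else -np.inf
    mask = (np.abs(weights) > thresh).astype(weights.dtype)
    return weights * mask, mask

w = np.random.default_rng(4).normal(size=(256, 128))
w_pruned, mask = magnitude_prune(w, sparsity=0.8)
print("kept fraction:", float(mask.mean()))       # roughly 0.2 of the weights survive
```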
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
gholami2022survey
\cite{gholami2022survey}
A Survey of Quantization Methods for Efficient Neural Network Inference
http://arxiv.org/abs/2103.13630v3
As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
true
true
Gholami, Amir and Kim, Sehoon and Dong, Zhen and Yao, Zhewei and Mahoney, Michael W. and Keutzer, Kurt
2,022
null
null
10.1007/s11263-021-01453-z
null
A Survey of Quantization Methods for Efficient Neural Network Inference
A Survey of Quantization Methods for Efficient Neural Network Inference
http://arxiv.org/pdf/2103.13630v3
As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
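The quantization survey above is ultimately about mapping continuous weights and activations onto a small set of integers. The sketch below implements asymmetric uniform (affine) quantization to int8 and the corresponding dequantization, the basic building block that most of the surveyed methods refine; the scale/zero-point choices here are the textbook min-max scheme, not any specific method from the survey.

```python
# Asymmetric uniform quantization of a float tensor to int8 and back.
import numpy as np

def quantize_int8(x):
    qmin, qmax = -128, 127
    scale = max((x.max() - x.min()) / (qmax - qmin), 1e-12)   # avoid division by zero
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

x = np.random.default_rng(5).normal(size=(1000,)).astype(np.float32)
q, s, z = quantize_int8(x)
x_hat = dequantize(q, s, z)
print("max abs error:", float(np.abs(x - x_hat).max()))       # bounded by roughly scale / 2
```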
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
gou2021knowledge
\cite{gou2021knowledge}
Knowledge Distillation: A Survey
http://arxiv.org/abs/2006.05525v7
In recent years, deep neural networks have been successful in both industry and academia, especially for computer vision tasks. The great success of deep learning is mainly due to its scalability to encode large-scale data and to maneuver billions of model parameters. However, it is a challenge to deploy these cumbersome deep models on devices with limited resources, e.g., mobile phones and embedded devices, not only because of the high computational complexity but also the large storage requirements. To this end, a variety of model compression and acceleration techniques have been developed. As a representative type of model compression and acceleration, knowledge distillation effectively learns a small student model from a large teacher model. It has received rapid increasing attention from the community. This paper provides a comprehensive survey of knowledge distillation from the perspectives of knowledge categories, training schemes, teacher-student architecture, distillation algorithms, performance comparison and applications. Furthermore, challenges in knowledge distillation are briefly reviewed and comments on future research are discussed and forwarded.
true
true
Gou, Jianping and Yu, Baosheng and Maybank, Stephen J. and Tao, Dacheng
2,021
null
null
null
International Journal of Computer Vision
Knowledge Distillation: A Survey
Knowledge Distillation: A Survey
http://arxiv.org/pdf/2006.05525v7
In recent years, deep neural networks have been successful in both industry and academia, especially for computer vision tasks. The great success of deep learning is mainly due to its scalability to encode large-scale data and to maneuver billions of model parameters. However, it is a challenge to deploy these cumbersome deep models on devices with limited resources, e.g., mobile phones and embedded devices, not only because of the high computational complexity but also the large storage requirements. To this end, a variety of model compression and acceleration techniques have been developed. As a representative type of model compression and acceleration, knowledge distillation effectively learns a small student model from a large teacher model. It has received rapid increasing attention from the community. This paper provides a comprehensive survey of knowledge distillation from the perspectives of knowledge categories, training schemes, teacher-student architecture, distillation algorithms, performance comparison and applications. Furthermore, challenges in knowledge distillation are briefly reviewed and comments on future research are discussed and forwarded.
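The canonical teacher-student objective in the survey above combines a cross-entropy term on ground-truth labels with a temperature-softened KL term that pulls student logits toward the teacher's. A short PyTorch sketch of that combined loss is given below; the temperature and weighting are illustrative values and the logits are random stand-ins.

```python
# Classic knowledge-distillation loss: CE on labels + T^2-scaled KL(teacher || student).
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return alpha * ce + (1 - alpha) * kl

student_logits = torch.randn(16, 10, requires_grad=True)   # stand-in student outputs
teacher_logits = torch.randn(16, 10)                       # stand-in (frozen) teacher outputs
labels = torch.randint(0, 10, (16,))
loss = kd_loss(student_logits, teacher_logits, labels)
loss.backward()                                            # gradients flow only to the student
print(float(loss))
```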
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
ren2021comprehensive
\cite{ren2021comprehensive}
A Comprehensive Survey of Neural Architecture Search: Challenges and Solutions
http://arxiv.org/abs/2006.02903v3
Deep learning has made substantial breakthroughs in many fields due to its powerful automatic representation capabilities. It has been proven that neural architecture design is crucial to the feature representation of data and the final performance. However, the design of the neural architecture heavily relies on the researchers' prior knowledge and experience. And due to the limitations of humans' inherent knowledge, it is difficult for people to jump out of their original thinking paradigm and design an optimal model. Therefore, an intuitive idea would be to reduce human intervention as much as possible and let the algorithm automatically design the neural architecture. Neural Architecture Search (NAS) is just such a revolutionary algorithm, and the related research work is complicated and rich. Therefore, a comprehensive and systematic survey on the NAS is essential. Previously related surveys have begun to classify existing work mainly based on the key components of NAS: search space, search strategy, and evaluation strategy. While this classification method is more intuitive, it is difficult for readers to grasp the challenges and the landmark work involved. Therefore, in this survey, we provide a new perspective: beginning with an overview of the characteristics of the earliest NAS algorithms, summarizing the problems in these early NAS algorithms, and then providing solutions for subsequent related research work. Besides, we conduct a detailed and comprehensive analysis, comparison, and summary of these works. Finally, we provide some possible future research directions.
true
true
Ren, Pengzhen and Xiao, Yun and Chang, Xiaojun and Huang, Po-yao and Li, Zhihui and Chen, Xiaojiang and Wang, Xin
2,021
null
null
10.1109/tkde.2021.3126456
ACM Computing Surveys
A Comprehensive Survey of Neural Architecture Search: Challenges and Solutions
A quick look at NAS (Neural Architecture Search) - Welcome
https://gachiemchiep.github.io/machine%20learning/NAS-survey-2020/
2020 NAS survey: A Comprehensive Survey of Neural Architecture Search: Challenges and Solutions. The current research results
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
brauwers2021general
\cite{brauwers2021general}
A General Survey on Attention Mechanisms in Deep Learning
http://arxiv.org/abs/2203.14263v1
Attention is an important mechanism that can be employed for a variety of deep learning models across many different domains and tasks. This survey provides an overview of the most important attention mechanisms proposed in the literature. The various attention mechanisms are explained by means of a framework consisting of a general attention model, uniform notation, and a comprehensive taxonomy of attention mechanisms. Furthermore, the various measures for evaluating attention models are reviewed, and methods to characterize the structure of attention models based on the proposed framework are discussed. Last, future work in the field of attention models is considered.
true
true
Brauwers, Gianni and Frasincar, Flavius
2,023
null
null
null
IEEE Transactions on Knowledge and Data Engineering
A General Survey on Attention Mechanisms in Deep Learning
A General Survey on Attention Mechanisms in Deep Learning
http://arxiv.org/pdf/2203.14263v1
Attention is an important mechanism that can be employed for a variety of deep learning models across many different domains and tasks. This survey provides an overview of the most important attention mechanisms proposed in the literature. The various attention mechanisms are explained by means of a framework consisting of a general attention model, uniform notation, and a comprehensive taxonomy of attention mechanisms. Furthermore, the various measures for evaluating attention models are reviewed, and methods to characterize the structure of attention models based on the proposed framework are discussed. Last, future work in the field of attention models is considered.
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
vaswani2017attention
\cite{vaswani2017attention}
Attention Is All You Need
http://arxiv.org/abs/1706.03762v7
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
true
true
Vaswani, Ashish and Shazeer, Noam and Parmar, Niki and Uszkoreit, Jakob and Jones, Llion and Gomez, Aidan N and Kaiser, {\L}ukasz and Polosukhin, Illia
2,017
null
null
10.1145/3505244
null
Attention Is All You Need
Attention Is All You Need
http://arxiv.org/pdf/1706.03762v7
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
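The Transformer described above is built around scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, usually split across several heads. A NumPy sketch of the single-head case is shown below; the tensor shapes are stand-ins and the optional mask is illustrative.

```python
# Scaled dot-product attention (single head) as used in the Transformer.
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)      # [..., seq_q, seq_k]
    if mask is not None:
        scores = np.where(mask, scores, -1e9)           # block masked positions
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights = weights / weights.sum(-1, keepdims=True)  # softmax over the key axis
    return weights @ V, weights

rng = np.random.default_rng(6)
Q = rng.normal(size=(2, 5, 64))   # [batch, sequence length, d_k]
K = rng.normal(size=(2, 5, 64))
V = rng.normal(size=(2, 5, 64))
out, attn = scaled_dot_product_attention(Q, K, V)
print(out.shape, attn.shape)
```

Multi-head attention simply applies this operation in parallel on learned linear projections of Q, K, and V and concatenates the results.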
KairosAD: A SAM-Based Model for Industrial Anomaly Detection on Embedded Devices
2505.24334v1
khan2022transformers
\cite{khan2022transformers}
Transformers in Vision: A Survey
http://arxiv.org/abs/2101.01169v5
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequence as compared to recurrent networks e.g., Long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers i.e., self-attention, large-scale pre-training, and bidirectional encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works.
true
true
Khan, Salman and Naseer, Muzammal and Hayat, Munawar and Zamir, Syed Waqas and Khan, Fahad Shahbaz and Shah, Mubarak
2,022
null
null
10.1145/3505244
ACM Computing Surveys
Transformers in Vision: A Survey
Transformers in Vision: A Survey
http://arxiv.org/pdf/2101.01169v5
Astounding results from Transformer models on natural language tasks have intrigued the vision community to study their application to computer vision problems. Among their salient benefits, Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequence as compared to recurrent networks e.g., Long short-term memory (LSTM). Different from convolutional networks, Transformers require minimal inductive biases for their design and are naturally suited as set-functions. Furthermore, the straightforward design of Transformers allows processing multiple modalities (e.g., images, videos, text and speech) using similar processing blocks and demonstrates excellent scalability to very large capacity networks and huge datasets. These strengths have led to exciting progress on a number of vision tasks using Transformer networks. This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline. We start with an introduction to fundamental concepts behind the success of Transformers i.e., self-attention, large-scale pre-training, and bidirectional encoding. We then cover extensive applications of transformers in vision including popular recognition tasks (e.g., image classification, object detection, action recognition, and segmentation), generative modeling, multi-modal tasks (e.g., visual-question answering, visual reasoning, and visual grounding), video processing (e.g., activity recognition, video forecasting), low-level vision (e.g., image super-resolution, image enhancement, and colorization) and 3D analysis (e.g., point cloud classification and segmentation). We compare the respective advantages and limitations of popular techniques both in terms of architectural design and their experimental value. Finally, we provide an analysis on open research directions and possible future works.
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
TaylorKYMKRHM17
\cite{TaylorKYMKRHM17}
A deep learning approach for generalized speech animation
null
null
true
false
Sarah L. Taylor and Taehwan Kim and Yisong Yue and Moshe Mahler and James Krahe and Anastasio Garcia Rodriguez and Jessica K. Hodgins and Iain A. Matthews
2,017
null
null
null
TOG
A deep learning approach for generalized speech animation
[PDF] A Deep Learning Approach for Generalized Speech Animation - TTIC
https://home.ttic.edu/~taehwan/taylor_etal_siggraph2017.pdf
We introduce a simple and effective deep learning approach to automatically generate natural looking speech animation that synchronizes to input speech.
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
cao2005expressive
\cite{cao2005expressive}
Expressive Speech-driven Facial Animation with controllable emotions
http://arxiv.org/abs/2301.02008v2
It is in high demand to generate facial animation with high realism, but it remains a challenging task. Existing approaches of speech-driven facial animation can produce satisfactory mouth movement and lip synchronization, but show weakness in dramatic emotional expressions and flexibility in emotion control. This paper presents a novel deep learning-based approach for expressive facial animation generation from speech that can exhibit wide-spectrum facial expressions with controllable emotion type and intensity. We propose an emotion controller module to learn the relationship between the emotion variations (e.g., types and intensity) and the corresponding facial expression parameters. It enables emotion-controllable facial animation, where the target expression can be continuously adjusted as desired. The qualitative and quantitative evaluations show that the animation generated by our method is rich in facial emotional expressiveness while retaining accurate lip movement, outperforming other state-of-the-art methods.
true
true
Cao, Yong and Tien, Wen C and Faloutsos, Petros and Pighin, Fr{\'e}d{\'e}ric
2,005
null
null
null
ACM TOG
Expressive Speech-driven Facial Animation with controllable emotions
Expressive Speech-driven Facial Animation with ...
https://github.com/on1262/facialanimation
EXPRESSIVE SPEECH-DRIVEN FACIAL ANIMATION WITH CONTROLLABLE EMOTIONS. Source code for: Expressive Speech-driven Facial Animation with controllable emotions.
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
FaceFormer
\cite{FaceFormer}
FaceFormer: Speech-Driven 3D Facial Animation with Transformers
http://arxiv.org/abs/2112.05329v4
Speech-driven 3D facial animation is challenging due to the complex geometry of human faces and the limited availability of 3D audio-visual data. Prior works typically focus on learning phoneme-level features of short audio windows with limited context, occasionally resulting in inaccurate lip movements. To tackle this limitation, we propose a Transformer-based autoregressive model, FaceFormer, which encodes the long-term audio context and autoregressively predicts a sequence of animated 3D face meshes. To cope with the data scarcity issue, we integrate the self-supervised pre-trained speech representations. Also, we devise two biased attention mechanisms well suited to this specific task, including the biased cross-modal multi-head (MH) attention and the biased causal MH self-attention with a periodic positional encoding strategy. The former effectively aligns the audio-motion modalities, whereas the latter offers abilities to generalize to longer audio sequences. Extensive experiments and a perceptual user study show that our approach outperforms the existing state-of-the-arts. The code will be made available.
true
true
Yingruo Fan and Zhaojiang Lin and Jun Saito and Wenping Wang and Taku Komura
2,022
null
null
null
null
FaceFormer: Speech-Driven 3D Facial Animation with Transformers
[PDF] FaceFormer: Speech-Driven 3D Facial Animation With Transformers
https://openaccess.thecvf.com/content/CVPR2022/papers/Fan_FaceFormer_Speech-Driven_3D_Facial_Animation_With_Transformers_CVPR_2022_paper.pdf
An autoregressive transformer-based architecture for speech-driven 3D facial animation. FaceFormer encodes the long-term audio context and the history of face motions to autoregressively predict a sequence of animated 3D face meshes.
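The FaceFormer abstract above mentions a periodic positional encoding strategy for generalizing to longer audio sequences. The sketch below illustrates one plausible reading of that idea, recycling a standard sinusoidal encoding with a fixed period; the exact formulation in the paper may differ, and `period`, `d_model`, and the sequence length are assumed values.

```python
import torch

def periodic_positional_encoding(seq_len, d_model, period=25):
    """Sinusoidal encoding whose positions repeat every `period` steps (d_model assumed even)."""
    pos = (torch.arange(seq_len) % period).float()                  # recycled positions
    i = torch.arange(d_model // 2).float()
    angle = pos[:, None] / (10000.0 ** (2 * i[None, :] / d_model))  # (seq_len, d_model//2)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(angle)                                  # even channels
    pe[:, 1::2] = torch.cos(angle)                                  # odd channels
    return pe

print(periodic_positional_encoding(seq_len=100, d_model=64).shape)  # torch.Size([100, 64])
```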
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
CodeTalker
\cite{CodeTalker}
CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior
http://arxiv.org/abs/2301.02379v2
Speech-driven 3D facial animation has been widely studied, yet there is still a gap to achieving realism and vividness due to the highly ill-posed nature and scarcity of audio-visual data. Existing works typically formulate the cross-modal mapping into a regression task, which suffers from the regression-to-mean problem leading to over-smoothed facial motions. In this paper, we propose to cast speech-driven facial animation as a code query task in a finite proxy space of the learned codebook, which effectively promotes the vividness of the generated motions by reducing the cross-modal mapping uncertainty. The codebook is learned by self-reconstruction over real facial motions and thus embedded with realistic facial motion priors. Over the discrete motion space, a temporal autoregressive model is employed to sequentially synthesize facial motions from the input speech signal, which guarantees lip-sync as well as plausible facial expressions. We demonstrate that our approach outperforms current state-of-the-art methods both qualitatively and quantitatively. Also, a user study further justifies our superiority in perceptual quality.
true
true
Jinbo Xing and Menghan Xia and Yuechen Zhang and Xiaodong Cun and Jue Wang and Tien{-}Tsin Wong
2,023
null
null
null
null
CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior
Speech-Driven 3D Facial Animation with Discrete Motion Prior - arXiv
https://arxiv.org/abs/2301.02379
In this paper, we propose to cast speech-driven facial animation as a code query task in a finite proxy space of the learned codebook.
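The CodeTalker abstract above casts facial animation as a code query in a learned codebook. The following hedged sketch shows the core vector-quantization lookup such a formulation relies on: each frame's continuous feature is replaced by its nearest codebook entry. The codebook size, feature dimension, and frame count are assumptions, not the paper's settings.

```python
import torch

def quantize(features, codebook):
    """features: (T, d) per-frame motion features; codebook: (K, d) learned codes."""
    dists = torch.cdist(features, codebook)   # (T, K) pairwise L2 distances
    idx = dists.argmin(dim=-1)                # index of the nearest code per frame
    return idx, codebook[idx]                 # discrete codes and their embeddings

codebook = torch.randn(256, 64)               # K=256 motion codes (assumed size)
motion_feats = torch.randn(30, 64)            # 30 frames of encoder output
idx, quantized = quantize(motion_feats, codebook)
print(idx.shape, quantized.shape)             # torch.Size([30]) torch.Size([30, 64])
```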
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
FaceDiffuser
\cite{FaceDiffuser}
FaceDiffuser: Speech-Driven 3D Facial Animation Synthesis Using Diffusion
http://arxiv.org/abs/2309.11306v1
Speech-driven 3D facial animation synthesis has been a challenging task both in industry and research. Recent methods mostly focus on deterministic deep learning methods meaning that given a speech input, the output is always the same. However, in reality, the non-verbal facial cues that reside throughout the face are non-deterministic in nature. In addition, majority of the approaches focus on 3D vertex based datasets and methods that are compatible with existing facial animation pipelines with rigged characters is scarce. To eliminate these issues, we present FaceDiffuser, a non-deterministic deep learning model to generate speech-driven facial animations that is trained with both 3D vertex and blendshape based datasets. Our method is based on the diffusion technique and uses the pre-trained large speech representation model HuBERT to encode the audio input. To the best of our knowledge, we are the first to employ the diffusion method for the task of speech-driven 3D facial animation synthesis. We have run extensive objective and subjective analyses and show that our approach achieves better or comparable results in comparison to the state-of-the-art methods. We also introduce a new in-house dataset that is based on a blendshape based rigged character. We recommend watching the accompanying supplementary video. The code and the dataset will be publicly available.
true
true
Stefan Stan and Kazi Injamamul Haque and Zerrin Yumak
2,023
null
null
null
null
FaceDiffuser: Speech-Driven 3D Facial Animation Synthesis Using Diffusion
Speech-Driven 3D Facial Animation Synthesis Using Diffusion
https://dl.acm.org/doi/10.1145/3623264.3624447
We present FaceDiffuser, a non-deterministic deep learning model to generate speech-driven facial animations that is trained with both 3D vertex and blendshape based datasets.
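The FaceDiffuser abstract above combines a diffusion model with pretrained speech features. The sketch below shows one possible training step for such a setup: facial-motion targets are noised according to a simple schedule and a small network is trained to predict that noise conditioned on audio features. The MLP denoiser, linear beta schedule, and feature sizes are placeholders rather than the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000                                                   # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)                      # simple linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

# placeholder denoiser: noisy motion + audio feature + normalized timestep -> noise estimate
denoiser = nn.Sequential(nn.Linear(128 + 64 + 1, 256), nn.ReLU(), nn.Linear(256, 128))

def training_step(motion, audio_feat):
    """motion: (B, 128) flattened per-frame offsets; audio_feat: (B, 64), e.g. HuBERT-style."""
    t = torch.randint(0, T, (motion.size(0),))
    a = alphas_cumprod[t].unsqueeze(-1)
    noise = torch.randn_like(motion)
    noisy = a.sqrt() * motion + (1 - a).sqrt() * noise     # forward (noising) process
    inp = torch.cat([noisy, audio_feat, t.float().unsqueeze(-1) / T], dim=-1)
    return F.mse_loss(denoiser(inp), noise)                # train to predict the noise

print(training_step(torch.randn(4, 128), torch.randn(4, 64)).item())
```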
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
li2023mask
\cite{li2023mask}
Mask-FPAN: Semi-supervised face parsing in the wild with de-occlusion and UV GAN
null
null
true
false
Li, Lei and Zhang, Tianfang and Kang, Zhongfeng and Jiang, Xikun
2,023
null
null
null
Computers \& Graphics
Mask-FPAN: Semi-supervised face parsing in the wild with de-occlusion and UV GAN
Mask-FPAN: Semi-Supervised Face Parsing in the Wild ...
https://arxiv.org/abs/2212.09098
We propose a novel framework termed Mask-FPAN. It uses a de-occlusion module that learns to parse occluded faces in a semi-supervised way.
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
haque2023facexhubert
\cite{haque2023facexhubert}
FaceXHuBERT: Text-less Speech-driven E(X)pressive 3D Facial Animation Synthesis Using Self-Supervised Speech Representation Learning
http://arxiv.org/abs/2303.05416v1
This paper presents FaceXHuBERT, a text-less speech-driven 3D facial animation generation method that allows to capture personalized and subtle cues in speech (e.g. identity, emotion and hesitation). It is also very robust to background noise and can handle audio recorded in a variety of situations (e.g. multiple people speaking). Recent approaches employ end-to-end deep learning taking into account both audio and text as input to generate facial animation for the whole face. However, scarcity of publicly available expressive audio-3D facial animation datasets poses a major bottleneck. The resulting animations still have issues regarding accurate lip-synching, expressivity, person-specific information and generalizability. We effectively employ self-supervised pretrained HuBERT model in the training process that allows us to incorporate both lexical and non-lexical information in the audio without using a large lexicon. Additionally, guiding the training with a binary emotion condition and speaker identity distinguishes the tiniest subtle facial motion. We carried out extensive objective and subjective evaluation in comparison to ground-truth and state-of-the-art work. A perceptual user study demonstrates that our approach produces superior results with respect to the realism of the animation 78% of the time in comparison to the state-of-the-art. In addition, our method is 4 times faster eliminating the use of complex sequential models such as transformers. We strongly recommend watching the supplementary video before reading the paper. We also provide the implementation and evaluation codes with a GitHub repository link.
true
true
Haque, Kazi Injamamul and Yumak, Zerrin
2,023
null
null
null
null
FaceXHuBERT: Text-less Speech-driven E(X)pressive 3D Facial Animation Synthesis Using Self-Supervised Speech Representation Learning
Text-less Speech-driven E(X)pressive 3D Facial Animation ...
https://www.researchgate.net/publication/372492333_FaceXHuBERT_Text-less_Speech-driven_EXpressive_3D_Facial_Animation_Synthesis_Using_Self-Supervised_Speech_Representation_Learning
This paper presents FaceXHuBERT, a text-less speech-driven 3D facial animation generation method that allows us to capture facial cues related to emotional
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
EMOTE
\cite{EMOTE}
Emotional Speech-Driven Animation with Content-Emotion Disentanglement
http://arxiv.org/abs/2306.08990v2
To be widely adopted, 3D facial avatars must be animated easily, realistically, and directly from speech signals. While the best recent methods generate 3D animations that are synchronized with the input audio, they largely ignore the impact of emotions on facial expressions. Realistic facial animation requires lip-sync together with the natural expression of emotion. To that end, we propose EMOTE (Expressive Model Optimized for Talking with Emotion), which generates 3D talking-head avatars that maintain lip-sync from speech while enabling explicit control over the expression of emotion. To achieve this, we supervise EMOTE with decoupled losses for speech (i.e., lip-sync) and emotion. These losses are based on two key observations: (1) deformations of the face due to speech are spatially localized around the mouth and have high temporal frequency, whereas (2) facial expressions may deform the whole face and occur over longer intervals. Thus, we train EMOTE with a per-frame lip-reading loss to preserve the speech-dependent content, while supervising emotion at the sequence level. Furthermore, we employ a content-emotion exchange mechanism in order to supervise different emotions on the same audio, while maintaining the lip motion synchronized with the speech. To employ deep perceptual losses without getting undesirable artifacts, we devise a motion prior in the form of a temporal VAE. Due to the absence of high-quality aligned emotional 3D face datasets with speech, EMOTE is trained with 3D pseudo-ground-truth extracted from an emotional video dataset (i.e., MEAD). Extensive qualitative and perceptual evaluations demonstrate that EMOTE produces speech-driven facial animations with better lip-sync than state-of-the-art methods trained on the same data, while offering additional, high-quality emotional control.
true
true
Dan{\v{e}}{\v{c}}ek, Radek and Chhatre, Kiran and Tripathi, Shashank and Wen, Yandong and Black, Michael and Bolkart, Timo
2,023
null
null
null
null
Emotional Speech-Driven Animation with Content-Emotion Disentanglement
Emotional Speech-Driven Animation with Content-Emotion ...
https://dl.acm.org/doi/10.1145/3610548.3618183
We propose EMOTE (Expressive Model Optimized for Talking with Emotion), which generates 3D talking-head avatars that maintain lip-sync from speech.
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
peng2023emotalk
\cite{peng2023emotalk}
EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation
http://arxiv.org/abs/2303.11089v2
Speech-driven 3D face animation aims to generate realistic facial expressions that match the speech content and emotion. However, existing methods often neglect emotional facial expressions or fail to disentangle them from speech content. To address this issue, this paper proposes an end-to-end neural network to disentangle different emotions in speech so as to generate rich 3D facial expressions. Specifically, we introduce the emotion disentangling encoder (EDE) to disentangle the emotion and content in the speech by cross-reconstructed speech signals with different emotion labels. Then an emotion-guided feature fusion decoder is employed to generate a 3D talking face with enhanced emotion. The decoder is driven by the disentangled identity, emotional, and content embeddings so as to generate controllable personal and emotional styles. Finally, considering the scarcity of the 3D emotional talking face data, we resort to the supervision of facial blendshapes, which enables the reconstruction of plausible 3D faces from 2D emotional data, and contribute a large-scale 3D emotional talking face dataset (3D-ETF) to train the network. Our experiments and user studies demonstrate that our approach outperforms state-of-the-art methods and exhibits more diverse facial movements. We recommend watching the supplementary video: https://ziqiaopeng.github.io/emotalk
true
true
Peng, Ziqiao and Wu, Haoyu and Song, Zhenbo and Xu, Hao and Zhu, Xiangyu and He, Jun and Liu, Hongyan and Fan, Zhaoxin
2,023
null
null
null
null
EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation
Speech-Driven Emotional Disentanglement for 3D Face Animation
https://arxiv.org/abs/2303.11089
This paper proposes an end-to-end neural network to disentangle different emotions in speech so as to generate rich 3D facial expressions.
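The EmoTalk abstract above disentangles emotion from content by cross-reconstructing speech signals with different emotion labels. A minimal, hedged sketch of that cross-reconstruction idea follows; the linear encoders, blendshape output size, and loss are illustrative assumptions only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

content_enc = nn.Linear(256, 64)    # speech feature -> content code (placeholder)
emotion_enc = nn.Linear(256, 16)    # speech feature -> emotion code (placeholder)
decoder = nn.Linear(64 + 16, 52)    # codes -> 52 blendshape coefficients (assumed)

def cross_reconstruction_loss(speech_a, speech_b, target_a_content_b_emotion):
    """Decode clip A's content with clip B's emotion and match the expected motion."""
    c_a = content_enc(speech_a)                        # content from clip A
    e_b = emotion_enc(speech_b)                        # emotion from clip B
    pred = decoder(torch.cat([c_a, e_b], dim=-1))
    return F.mse_loss(pred, target_a_content_b_emotion)

loss = cross_reconstruction_loss(torch.randn(8, 256), torch.randn(8, 256), torch.randn(8, 52))
print(loss.item())
```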
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
thambiraja20233diface
\cite{thambiraja20233diface}
3DiFACE: Diffusion-based Speech-driven 3D Facial Animation and Editing
http://arxiv.org/abs/2312.00870v1
We present 3DiFACE, a novel method for personalized speech-driven 3D facial animation and editing. While existing methods deterministically predict facial animations from speech, they overlook the inherent one-to-many relationship between speech and facial expressions, i.e., there are multiple reasonable facial expression animations matching an audio input. It is especially important in content creation to be able to modify generated motion or to specify keyframes. To enable stochasticity as well as motion editing, we propose a lightweight audio-conditioned diffusion model for 3D facial motion. This diffusion model can be trained on a small 3D motion dataset, maintaining expressive lip motion output. In addition, it can be finetuned for specific subjects, requiring only a short video of the person. Through quantitative and qualitative evaluations, we show that our method outperforms existing state-of-the-art techniques and yields speech-driven animations with greater fidelity and diversity.
true
true
Balamurugan Thambiraja and Sadegh Aliakbarian and Darren Cosker and Justus Thies
2,023
null
null
null
CoRR
3DiFACE: Diffusion-based Speech-driven 3D Facial Animation and Editing
[2312.00870] 3DiFACE: Diffusion-based Speech-driven 3D ...
https://arxiv.org/abs/2312.00870
We present 3DiFACE, a novel method for personalized speech-driven 3D facial animation and editing.
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
VOCA
\cite{VOCA}
Capture, Learning, and Synthesis of 3D Speaking Styles
http://arxiv.org/abs/1905.03079v1
Audio-driven 3D facial animation has been widely explored, but achieving realistic, human-like performance is still unsolved. This is due to the lack of available 3D datasets, models, and standard evaluation metrics. To address this, we introduce a unique 4D face dataset with about 29 minutes of 4D scans captured at 60 fps and synchronized audio from 12 speakers. We then train a neural network on our dataset that factors identity from facial motion. The learned model, VOCA (Voice Operated Character Animation) takes any speech signal as input - even speech in languages other than English - and realistically animates a wide range of adult faces. Conditioning on subject labels during training allows the model to learn a variety of realistic speaking styles. VOCA also provides animator controls to alter speaking style, identity-dependent facial shape, and pose (i.e. head, jaw, and eyeball rotations) during animation. To our knowledge, VOCA is the only realistic 3D facial animation model that is readily applicable to unseen subjects without retargeting. This makes VOCA suitable for tasks like in-game video, virtual reality avatars, or any scenario in which the speaker, speech, or language is not known in advance. We make the dataset and model available for research purposes at http://voca.is.tue.mpg.de.
true
true
Daniel Cudeiro and Timo Bolkart and Cassidy Laidlaw and Anurag Ranjan and Michael J. Black
2,019
null
null
null
null
Capture, Learning, and Synthesis of 3D Speaking Styles
Capture, Learning, and Synthesis of 3D Speaking Styles
http://arxiv.org/pdf/1905.03079v1
Audio-driven 3D facial animation has been widely explored, but achieving realistic, human-like performance is still unsolved. This is due to the lack of available 3D datasets, models, and standard evaluation metrics. To address this, we introduce a unique 4D face dataset with about 29 minutes of 4D scans captured at 60 fps and synchronized audio from 12 speakers. We then train a neural network on our dataset that factors identity from facial motion. The learned model, VOCA (Voice Operated Character Animation) takes any speech signal as input - even speech in languages other than English - and realistically animates a wide range of adult faces. Conditioning on subject labels during training allows the model to learn a variety of realistic speaking styles. VOCA also provides animator controls to alter speaking style, identity-dependent facial shape, and pose (i.e. head, jaw, and eyeball rotations) during animation. To our knowledge, VOCA is the only realistic 3D facial animation model that is readily applicable to unseen subjects without retargeting. This makes VOCA suitable for tasks like in-game video, virtual reality avatars, or any scenario in which the speaker, speech, or language is not known in advance. We make the dataset and model available for research purposes at http://voca.is.tue.mpg.de.
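The VOCA abstract above notes that conditioning on subject labels lets the model learn distinct speaking styles. The sketch below illustrates that conditioning pattern with a one-hot identity vector concatenated to per-frame audio features; the tiny MLP decoder and feature sizes are placeholders, and the 5023-vertex output merely mirrors a FLAME-style mesh assumption.

```python
import torch
import torch.nn as nn

num_subjects, audio_dim, num_vertices = 12, 64, 5023
decoder = nn.Sequential(
    nn.Linear(audio_dim + num_subjects, 256), nn.ReLU(),
    nn.Linear(256, num_vertices * 3),          # per-vertex displacement field
)

def animate(audio_feat, subject_id):
    """audio_feat: (T, audio_dim); subject_id: int in [0, num_subjects)."""
    one_hot = torch.zeros(audio_feat.size(0), num_subjects)
    one_hot[:, subject_id] = 1.0               # condition every frame on the speaking style
    offsets = decoder(torch.cat([audio_feat, one_hot], dim=-1))
    return offsets.view(-1, num_vertices, 3)   # offsets added to a template mesh

print(animate(torch.randn(30, 64), subject_id=3).shape)  # torch.Size([30, 5023, 3])
```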
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
LG-LDM
\cite{LG-LDM}
Expressive 3D Facial Animation Generation Based on Local-to-global Latent Diffusion
null
null
true
false
Song, Wenfeng and Wang, Xuan and Jiang, Yiming and Li, Shuai and Hao, Aimin and Hou, Xia and Qin, Hong
2,024
null
null
null
TVCG
Expressive 3D Facial Animation Generation Based on Local-to-global Latent Diffusion
wangxuanx/Face-Diffusion-Model: The official pytorch code ...
https://github.com/wangxuanx/Face-Diffusion-Model
Expressive 3D Facial Animation Generation Based on Local-to-global Latent Diffusion ... Our method generates realistic facial animations by syncing lips with
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
fu2024mimic
\cite{fu2024mimic}
Mimic: Speaking Style Disentanglement for Speech-Driven 3D Facial Animation
http://arxiv.org/abs/2312.10877v1
Speech-driven 3D facial animation aims to synthesize vivid facial animations that accurately synchronize with speech and match the unique speaking style. However, existing works primarily focus on achieving precise lip synchronization while neglecting to model the subject-specific speaking style, often resulting in unrealistic facial animations. To the best of our knowledge, this work makes the first attempt to explore the coupled information between the speaking style and the semantic content in facial motions. Specifically, we introduce an innovative speaking style disentanglement method, which enables arbitrary-subject speaking style encoding and leads to a more realistic synthesis of speech-driven facial animations. Subsequently, we propose a novel framework called \textbf{Mimic} to learn disentangled representations of the speaking style and content from facial motions by building two latent spaces for style and content, respectively. Moreover, to facilitate disentangled representation learning, we introduce four well-designed constraints: an auxiliary style classifier, an auxiliary inverse classifier, a content contrastive loss, and a pair of latent cycle losses, which can effectively contribute to the construction of the identity-related style space and semantic-related content space. Extensive qualitative and quantitative experiments conducted on three publicly available datasets demonstrate that our approach outperforms state-of-the-art methods and is capable of capturing diverse speaking styles for speech-driven 3D facial animation. The source code and supplementary video are publicly available at: https://zeqing-wang.github.io/Mimic/
true
true
Hui Fu and Zeqing Wang and Ke Gong and Keze Wang and Tianshui Chen and Haojie Li and Haifeng Zeng and Wenxiong Kang
2,024
null
null
null
null
Mimic: Speaking Style Disentanglement for Speech-Driven 3D Facial Animation
[PDF] Speaking Style Disentanglement for Speech-Driven 3D Facial ...
https://ojs.aaai.org/index.php/AAAI/article/view/27945/27910
We propose Mimic for style-content disentanglement and synthesizing facial animations matching an identity-specific speaking style, as illustrated in Figure 2.
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
wav2lip
\cite{wav2lip}
A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild
http://arxiv.org/abs/2008.10010v1
In this work, we investigate the problem of lip-syncing a talking face video of an arbitrary identity to match a target speech segment. Current works excel at producing accurate lip movements on a static image or videos of specific people seen during the training phase. However, they fail to accurately morph the lip movements of arbitrary identities in dynamic, unconstrained talking face videos, resulting in significant parts of the video being out-of-sync with the new audio. We identify key reasons pertaining to this and hence resolve them by learning from a powerful lip-sync discriminator. Next, we propose new, rigorous evaluation benchmarks and metrics to accurately measure lip synchronization in unconstrained videos. Extensive quantitative evaluations on our challenging benchmarks show that the lip-sync accuracy of the videos generated by our Wav2Lip model is almost as good as real synced videos. We provide a demo video clearly showing the substantial impact of our Wav2Lip model and evaluation benchmarks on our website: \url{cvit.iiit.ac.in/research/projects/cvit-projects/a-lip-sync-expert-is-all-you-need-for-speech-to-lip-generation-in-the-wild}. The code and models are released at this GitHub repository: \url{github.com/Rudrabha/Wav2Lip}. You can also try out the interactive demo at this link: \url{bhaasha.iiit.ac.in/lipsync}.
true
true
K. R. Prajwal and Rudrabha Mukhopadhyay and Vinay P. Namboodiri and C. V. Jawahar
2,020
null
null
null
null
A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild
[2008.10010] A Lip Sync Expert Is All You Need for Speech ...
https://arxiv.org/abs/2008.10010
arXiv:2008.10010 (cs.CV): A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild, by K. R. Prajwal and 3 other authors.
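The Wav2Lip abstract above attributes its gains to a powerful lip-sync discriminator. The hedged sketch below shows the general SyncNet-style pattern such a discriminator follows: audio and mouth-crop windows are embedded separately and their cosine similarity is read as a sync probability. The MLP encoders and input sizes are placeholders, not the paper's convolutional networks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

audio_enc = nn.Sequential(nn.Linear(80 * 16, 256), nn.ReLU(), nn.Linear(256, 128))
video_enc = nn.Sequential(nn.Linear(96 * 96 * 5, 256), nn.ReLU(), nn.Linear(256, 128))

def sync_probability(mel_window, mouth_frames):
    """mel_window: (B, 80, 16) mel-spectrogram chunk; mouth_frames: (B, 5, 96, 96) crops."""
    a = F.normalize(audio_enc(mel_window.flatten(1)), dim=-1)
    v = F.normalize(video_enc(mouth_frames.flatten(1)), dim=-1)
    return (F.cosine_similarity(a, v, dim=-1) + 1) / 2      # map [-1, 1] -> [0, 1]

p = sync_probability(torch.randn(4, 80, 16), torch.randn(4, 5, 96, 96))
print(p.shape)   # torch.Size([4])
```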
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
DBLP:conf/bmvc/ChenLLYW21
\cite{DBLP:conf/bmvc/ChenLLYW21}
Talking Head Generation with Audio and Speech Related Facial Action Units
null
null
true
false
Sen Chen and Zhilei Liu and Jiaxing Liu and Zhengxiang Yan and Longbiao Wang
2,021
null
null
null
null
Talking Head Generation with Audio and Speech Related Facial Action Units
Talking Head Generation with Audio and Speech Related Facial ...
https://arxiv.org/abs/2110.09951
In this paper, we propose a novel recurrent generative network that uses both audio and speech-related facial action units (AUs) as the driving information.
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
DeepSpeech
\cite{DeepSpeech}
Deep Speech: Scaling up end-to-end speech recognition
http://arxiv.org/abs/1412.5567v2
We present a state-of-the-art speech recognition system developed using end-to-end deep learning. Our architecture is significantly simpler than traditional speech systems, which rely on laboriously engineered processing pipelines; these traditional systems also tend to perform poorly when used in noisy environments. In contrast, our system does not need hand-designed components to model background noise, reverberation, or speaker variation, but instead directly learns a function that is robust to such effects. We do not need a phoneme dictionary, nor even the concept of a "phoneme." Key to our approach is a well-optimized RNN training system that uses multiple GPUs, as well as a set of novel data synthesis techniques that allow us to efficiently obtain a large amount of varied data for training. Our system, called Deep Speech, outperforms previously published results on the widely studied Switchboard Hub5'00, achieving 16.0% error on the full test set. Deep Speech also handles challenging noisy environments better than widely used, state-of-the-art commercial speech systems.
true
true
Awni Y. Hannun and Carl Case and Jared Casper and Bryan Catanzaro and Greg Diamos and Erich Elsen and Ryan Prenger and Sanjeev Satheesh and Shubho Sengupta and Adam Coates and Andrew Y. Ng
2,014
null
null
null
CoRR
Deep Speech: Scaling up end-to-end speech recognition
[PDF] Deep Speech: Scaling up end-to-end speech recognition - arXiv
https://arxiv.org/pdf/1412.5567
Deep Speech is an end-to-end speech recognition system using deep learning, a simpler architecture, and a large RNN trained with multiple GPUs.
Wav2Sem: Plug-and-Play Audio Semantic Decoupling for 3D Speech-Driven Facial Animation
2505.23290v1
wav2vec
\cite{wav2vec}
wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations
http://arxiv.org/abs/2006.11477v3
We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
true
true
Alexei Baevski and Yuhao Zhou and Abdelrahman Mohamed and Michael Auli
2,020
null
null
null
null
wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations
wav2vec 2.0: A Framework for Self-Supervised Learning of Speech ...
https://arxiv.org/abs/2006.11477
wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations
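The wav2vec 2.0 abstract above describes masking latent speech representations and solving a contrastive task over quantized targets. The sketch below gives a simplified version of that objective: for each masked step, the context vector should be more similar to the true quantized latent than to distractors sampled from other steps. The temperature, the naive negative sampling (which may occasionally collide with the target), and the feature sizes are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(context, quantized, num_negatives=10, temperature=0.1):
    """context, quantized: (T, d) representations at masked time steps."""
    T_len = context.size(0)
    # sample distractor indices from other time steps (collisions ignored for brevity)
    neg_idx = torch.randint(0, T_len, (T_len, num_negatives))
    candidates = torch.cat([quantized.unsqueeze(1), quantized[neg_idx]], dim=1)  # (T, 1+N, d)
    sims = F.cosine_similarity(context.unsqueeze(1), candidates, dim=-1) / temperature
    targets = torch.zeros(T_len, dtype=torch.long)   # the true latent sits at index 0
    return F.cross_entropy(sims, targets)

loss = contrastive_loss(torch.randn(50, 256), torch.randn(50, 256))
print(loss.item())
```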