
dc.contributor.author: Huynh, M. Thanh
dc.contributor.author: Truong, Q. H. Steven
dc.contributor.author: Nguyen, D. T. Chanh
dc.contributor.author: Ta, Duc Huy
dc.contributor.author: Hoang, Cao Huyen
dc.contributor.author: Bui, Trung
dc.date.accessioned: 2025-03-24T05:40:09Z
dc.date.available: 2025-03-24T05:40:09Z
dc.date.issued: 2020
dc.identifier.uri: https://vinspace.edu.vn/handle/VIN/618
dc.description.abstract: Unsupervised pretraining is an approach that leverages a large unlabeled data pool to learn data features. However, it requires billion-scale datasets and month-long training to surpass its supervised counterpart when fine-tuned on many computer vision tasks. In this study, we propose a novel method, Diffeomorphism Matching (DM), to overcome those challenges. The proposed method combines self-supervised learning and knowledge distillation to equivalently map the feature space of a student model to that of a large pretrained teacher model. On the Chest X-ray dataset, our method removes the need to acquire billions of radiographs and reduces pretraining time by 95%. In addition, our pretrained model outperforms other pretrained models by at least 4.2% in F1 score on the CheXpert dataset and 0.7% in Dice score on the SIIM Pneumothorax dataset. Code and the pretrained model are available at https://github.com/jokingbear/DM.git.
dc.language.iso: en_US
dc.title: Diffeomorphism matching for fast unsupervised pretraining on radiographs
dc.type: Article
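
The abstract describes distilling a large pretrained teacher's feature space into a smaller student by combining self-supervised learning with knowledge distillation. The sketch below is a minimal, generic feature-space distillation loop in PyTorch, not the authors' DM implementation (which is in the linked repository); the names FeatureDistiller, projector, student_dim, and teacher_dim, as well as the normalized mean-squared feature loss, are illustrative assumptions.

# Minimal sketch of feature-space knowledge distillation.
# NOT the authors' DM code; names and the loss are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureDistiller(nn.Module):
    """Trains a student so its (projected) features match a frozen teacher's."""

    def __init__(self, student: nn.Module, teacher: nn.Module,
                 student_dim: int, teacher_dim: int):
        super().__init__()
        self.student = student
        self.teacher = teacher.eval()
        for p in self.teacher.parameters():   # teacher stays frozen
            p.requires_grad_(False)
        # learnable map from the student feature space to the teacher feature space
        self.projector = nn.Linear(student_dim, teacher_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            t_feat = F.normalize(self.teacher(x), dim=-1)   # target features
        s_feat = F.normalize(self.projector(self.student(x)), dim=-1)
        # mean squared error between unit-normalized feature vectors
        return (s_feat - t_feat).pow(2).sum(dim=-1).mean()


if __name__ == "__main__":
    # toy backbones and random "radiograph" batches, purely for illustration
    teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512))
    student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
    distiller = FeatureDistiller(student, teacher, student_dim=128, teacher_dim=512)
    opt = torch.optim.Adam(
        [p for p in distiller.parameters() if p.requires_grad], lr=1e-3)
    for step in range(5):
        batch = torch.randn(8, 3, 32, 32)
        loss = distiller(batch)
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(f"step {step}: distillation loss = {loss.item():.4f}")

The paper's title suggests the mapping between feature spaces is constrained to be diffeomorphism-like (invertible), whereas this sketch uses a plain linear projection for brevity; only the student and projector receive gradients, which is what keeps the pretraining cost on the small model.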

