
dc.contributor.author: Nguyen, M. Tan
dc.contributor.author: Nguyen, Tam
dc.contributor.author: Bui, Long
dc.contributor.author: Do, Hai
dc.contributor.author: Nguyen, Duy Khuong
dc.contributor.author: Le, Duy Dung
dc.contributor.author: Tran, The Hung
dc.contributor.author: Ho, Nhat
dc.contributor.author: Osher, Stan J.
dc.contributor.author: Baraniuk, Richard G.
dc.date.accessioned: 2025-02-22T18:42:35Z
dc.date.available: 2025-02-22T18:42:35Z
dc.date.issued: 2023-04-11
dc.identifier.uri: https://vinspace.edu.vn/handle/VIN/567
dc.description.abstract: Pairwise dot product-based self-attention is key to the success of transformers, which achieve state-of-the-art performance across a variety of applications in language and vision but are costly to compute. It has been shown that most attention scores and keys in transformers are redundant and can be removed without loss of accuracy. In this paper, we develop a novel probabilistic framework for pruning attention scores and keys in transformers. We first formulate an admixture model of attention keys in which the input data to be clustered are the attention queries. We show that attention scores in self-attention correspond to the posterior distribution of this model when the attention keys admit a uniform prior distribution. We then relax this uniform prior constraint and let the model learn these priors from data, resulting in a new Finite Admixture of Keys (FiAK). The learned priors are used to prune away redundant attention scores and keys in the baseline transformers, improving the diversity of attention patterns that the models capture. We corroborate the efficiency of transformers pruned with FiAK on the ImageNet object classification and WikiText-103 language modeling tasks. Our experiments demonstrate that transformers pruned with FiAK yield accuracy similar to or better than that of the baseline dense transformers while being much more efficient in memory and computational cost. [en_US]
dc.language.iso: en_US [en_US]
dc.subject: transformers [en_US]
dc.subject: admixture models [en_US]
dc.subject: pruning [en_US]
dc.title: A probabilistic framework for pruning transformers via a finite admixture of keys [en_US]
dc.type: Article [en_US]
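
The abstract above sketches the mechanism FiAK builds on: attention scores can be read as posterior probabilities under a prior over the keys, and learning non-uniform priors identifies redundant scores and keys to prune. Below is a minimal, illustrative sketch of that idea in PyTorch. It is not the authors' implementation; the prior parameterization (a linear map from each key to a prior logit), the `keep_frac` pruning ratio, and the class and parameter names are assumptions made for this example.

```python
# Illustrative sketch of prior-guided attention pruning (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PriorPrunedAttention(nn.Module):
    def __init__(self, dim: int, keep_frac: float = 0.7):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        # Hypothetical parameterization: one prior logit per key, computed from the key.
        self.prior = nn.Linear(dim, 1)
        self.keep_frac = keep_frac   # fraction of keys kept after pruning (assumed)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)

        # Learned prior (mixture weights) over key positions.
        pi = F.softmax(self.prior(k).squeeze(-1), dim=-1)             # (batch, seq)

        # Posterior-style attention: fold the log-prior into the dot-product logits;
        # a uniform prior recovers standard softmax attention.
        logits = torch.einsum("bqd,bkd->bqk", q, k) * self.scale
        logits = logits + torch.log(pi + 1e-9).unsqueeze(1)

        # Prune: keep only the keys carrying the largest prior mass.
        num_keep = max(1, int(self.keep_frac * k.shape[1]))
        kept = pi.topk(num_keep, dim=-1).indices                       # (batch, num_keep)
        mask = torch.full_like(pi, float("-inf"))
        mask.scatter_(-1, kept, 0.0)
        logits = logits + mask.unsqueeze(1)                            # drop pruned keys

        attn = F.softmax(logits, dim=-1)
        return torch.einsum("bqk,bkd->bqd", attn, v)

# Tiny usage example with random inputs.
if __name__ == "__main__":
    layer = PriorPrunedAttention(dim=64)
    out = layer(torch.randn(2, 16, 64))
    print(out.shape)  # torch.Size([2, 16, 64])
```

The top-k rule here simply stands in for whatever prior-mass threshold the learned admixture suggests; in practice the pruned key set could be fixed after training so the corresponding score computations are skipped entirely, saving memory and compute.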



This item appears in the following Collection(s)

  • Le Duy Dung, PhD [4]
    Assistant Professor, Computer Science program, College of Engineering and Computer Science
