Improving transformers with probabilistic attention keys

View/Open
Improving Transformers with Probabilistic Attention Keys.pdf (3.770Mb)
Publication year
2022
Authors
Le, Duy Dung
Tran, Viet Anh
Nguyen, M. Tan
Nguyen, Tam
Nguyen, Duy Khuong
Baraniuk, Richard G.
Ho, Nhat
Osher, Stanley J.
Abstract
Multi-head attention is a driving force behind state-of-the-art transformers, which achieve remarkable performance across a variety of natural language processing (NLP) and computer vision tasks. It has been observed that for many applications, those attention heads learn redundant embeddings, and most of them can be removed without degrading the performance of the model. Inspired by this observation, we propose Transformer with a Mixture of Gaussian Keys (Transformer-MGK), a novel transformer architecture that replaces redundant heads in transformers with a mixture of keys at each head. These mixtures of keys follow a Gaussian mixture model and allow each attention head to focus on different parts of the input sequence efficiently. Compared to its conventional transformer counterpart, Transformer-MGK accelerates training and inference, has fewer parameters, and requires fewer FLOPs to compute while achieving comparable or better accuracy across tasks. Transformer-MGK can also be easily extended for use with linear attention. We empirically demonstrate the advantage of Transformer-MGK in a range of practical applications, including language modeling and tasks that involve very long sequences. On the Wikitext-103 and Long Range Arena benchmarks, Transformer-MGKs with 4 heads attain performance comparable to or better than baseline transformers with 8 heads.
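The abstract describes the mixture-of-Gaussian-keys idea only at a high level. The sketch below is a rough illustration, not the authors' implementation: it shows one way attention weights could be formed when each key position carries several Gaussian sub-keys mixed by learned weights. The function name mgk_attention, the tensor layouts, and the shared variance sigma are assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def mgk_attention(q, k, v, pi, sigma=1.0):
    """
    Hypothetical shapes (assumptions, not taken from the paper's code):
      q:  (B, T, D)    queries
      k:  (B, T, M, D) M Gaussian sub-keys per key position
      v:  (B, T, D)    values
      pi: (B, T, M)    mixing weights per key position (each row sums to 1)
    """
    # Gaussian log-score between every query and every sub-key:
    # -||q_i - k_{j,m}||^2 / (2 sigma^2); constants are absorbed by the softmax.
    diff = q[:, :, None, None, :] - k[:, None, :, :, :]            # (B, T, T, M, D)
    logits = -(diff ** 2).sum(-1) / (2.0 * sigma ** 2)             # (B, T, T, M)

    # Mix the M sub-key scores of each key position with weights pi,
    # working in log space for numerical stability.
    scores = torch.logsumexp(logits + pi.log()[:, None], dim=-1)   # (B, T, T)

    # Normalize over key positions and aggregate values as in standard attention.
    attn = F.softmax(scores, dim=-1)                               # (B, T, T)
    return attn @ v                                                # (B, T, D)

# Toy usage with random tensors:
B, T, M, D = 2, 16, 2, 32
q, v = torch.randn(B, T, D), torch.randn(B, T, D)
k = torch.randn(B, T, M, D)
pi = torch.softmax(torch.randn(B, T, M), dim=-1)
out = mgk_attention(q, k, v, pi)   # shape (2, 16, 32)
```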
Identifier
https://vinspace.edu.vn/handle/VIN/297
Collections
  • Le Duy Dung, PhD [5]
