Show simple item record

dc.contributor.author    Mac, C. Khoi Nguyen
dc.contributor.author    Do, N. Minh
dc.contributor.author    Vo, P. Minh
dc.date.accessioned      2025-02-22T19:07:25Z
dc.date.available        2025-02-22T19:07:25Z
dc.date.issued           2022-07-14
dc.identifier.uri        https://vinspace.edu.vn/handle/VIN/577
dc.description.abstract  Adaptive sampling that exploits the spatiotemporal redundancy in videos is critical for always-on action recognition on wearable devices with limited computing and battery resources. The commonly used fixed sampling strategy is not context-aware and may under-sample the visual content, and thus adversely impacts both computation efficiency and accuracy. Inspired by the concepts of foveal vision and pre-attentive processing from the human visual perception mechanism, we introduce a novel adaptive spatiotemporal sampling scheme for efficient action recognition. Our system pre-scans the global scene context at low-resolution and decides to skip or request high-resolution features at salient regions for further processing. We validate the system on EPIC-KITCHENS and UCF-101 datasets for action recognition, and show that our proposed approach can greatly speed up inference with a tolerable loss of accuracy compared with those from state-of-the-art baselines. Source code is available at https://github.com/knmac/adaptive_spatiotemporal.  [en_US]
dc.language.iso          en_US  [en_US]
dc.title                 Efficient human vision inspired action recognition using adaptive spatiotemporal sampling  [en_US]
dc.type                  Article  [en_US]
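
The abstract describes a two-stage pipeline: a cheap low-resolution pre-scan of the global scene context, followed by selective high-resolution processing only where the pre-scan finds salient content. The sketch below illustrates that gating idea in Python (PyTorch). It is not the authors' implementation (see the linked GitHub repository for the actual source code); the module definitions, the frame-level saliency score, and the skip threshold are illustrative assumptions, and the paper's region-level sampling is simplified here to a per-frame decision.

import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveSpatiotemporalSampler(nn.Module):
    """Illustrative sketch: pre-scan each frame at low resolution and run
    the expensive high-resolution branch only on frames judged salient."""

    def __init__(self, low_res=64, saliency_threshold=0.5):
        super().__init__()
        self.low_res = low_res
        self.saliency_threshold = saliency_threshold
        # Lightweight "pre-attentive" branch applied to every frame.
        self.low_res_net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )
        # Heavier "foveal" branch applied only to salient frames.
        self.high_res_net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, frames):
        # frames: (T, 3, H, W) video clip.
        feats = []
        for frame in frames:
            # Pre-scan at low resolution to estimate saliency.
            low = F.interpolate(frame.unsqueeze(0), size=self.low_res,
                                mode="bilinear", align_corners=False)
            saliency = torch.sigmoid(self.low_res_net(low))
            if saliency.item() < self.saliency_threshold:
                # Skip the expensive branch; use a placeholder feature.
                feats.append(torch.zeros(1, 128))
            else:
                # Salient frame: pay for high-resolution processing.
                feats.append(self.high_res_net(frame.unsqueeze(0)))
        return torch.stack(feats).mean(dim=0)  # clip-level feature


if __name__ == "__main__":
    clip = torch.rand(8, 3, 224, 224)  # 8 dummy frames
    model = AdaptiveSpatiotemporalSampler()
    print(model(clip).shape)  # torch.Size([1, 128])

The efficiency gain in such a scheme comes from the gap in cost between the two branches: frames (or, in the paper, regions) rejected by the low-resolution pre-scan never reach the expensive feature extractor, trading a small amount of accuracy for reduced computation.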

