Abstract
This work presents Adaptive Local-then-Global Merging (ALGM), a token reduction method for semantic segmentation networks that use plain Vision Transformers. ALGM merges tokens in two stages: (1) in the first network layer, it merges similar tokens within a small local window, and (2) halfway through the network, it merges similar tokens across the entire image. This is motivated by an analysis in which we found that, in those situations, tokens with a high cosine similarity can likely be merged without a drop in segmentation quality. With extensive experiments across multiple datasets and network configurations, we show that ALGM not only significantly improves throughput by up to 100%, but can also enhance the mean IoU by up to +1.1, thereby achieving a better trade-off between segmentation quality and efficiency than existing methods. Moreover, our approach is adaptive during inference, meaning that the same model can be used for optimal efficiency or accuracy, depending on the application. Code is available at https://tue-mps.github.io/ALGM.
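To make the two-stage idea concrete, the snippet below is a minimal, illustrative PyTorch sketch of cosine-similarity token merging: first within small local windows, then across the whole image. It is not the released ALGM implementation; the function names (`local_window_merge`, `global_merge`), window size, thresholds, and the averaging-based fusion are all assumptions for illustration only.

```python
# Illustrative sketch of local-then-global token merging by cosine similarity.
# All names, window sizes, and thresholds are hypothetical, not the authors' code.
import torch
import torch.nn.functional as F


def local_window_merge(tokens, grid_hw, window=2, threshold=0.9):
    """Merge highly similar tokens inside non-overlapping local windows.

    tokens: (B, N, C) patch tokens laid out row-major on an (H, W) grid.
    Returns one tensor per batch element, since the number of surviving
    tokens can differ per image.
    """
    B, N, C = tokens.shape
    H, W = grid_hw
    x = tokens.view(B, H // window, window, W // window, window, C)
    x = x.permute(0, 1, 3, 2, 4, 5).reshape(B, -1, window * window, C)  # (B, nWin, w*w, C)

    win_mean = x.mean(dim=2, keepdim=True)                               # (B, nWin, 1, C)
    sim = F.cosine_similarity(x, win_mean, dim=-1)                       # (B, nWin, w*w)
    mergeable = sim.min(dim=-1).values > threshold                       # (B, nWin)

    merged = []
    for b in range(B):
        keep = x[b][~mergeable[b]].reshape(-1, C)    # windows kept as individual tokens
        fused = x[b][mergeable[b]].mean(dim=1)       # similar windows collapsed to one token
        merged.append(torch.cat([fused, keep], dim=0))
    return merged


def global_merge(tokens, threshold=0.95):
    """Greedily merge token pairs across the whole image when their cosine
    similarity exceeds the threshold. tokens: (M, C) for a single image."""
    sim = F.cosine_similarity(tokens.unsqueeze(1), tokens.unsqueeze(0), dim=-1)
    sim.fill_diagonal_(-1.0)
    best_sim, best_idx = sim.max(dim=-1)

    merged_away = torch.zeros(tokens.shape[0], dtype=torch.bool)
    out = []
    for i in range(tokens.shape[0]):
        if merged_away[i]:
            continue
        j = best_idx[i].item()
        if j > i and best_sim[i] > threshold and not merged_away[j]:
            out.append((tokens[i] + tokens[j]) / 2)  # fuse the pair by averaging
            merged_away[j] = True
        else:
            out.append(tokens[i])
    return torch.stack(out)


if __name__ == "__main__":
    feats = torch.randn(1, 32 * 32, 192)             # e.g. ViT tokens on a 32x32 grid
    stage1 = local_window_merge(feats, grid_hw=(32, 32))
    stage2 = [global_merge(t) for t in stage1]
    print(stage1[0].shape, stage2[0].shape)          # token counts after each stage
```

In a real segmentation transformer, the local stage would sit after the first block and the global stage roughly halfway through the network, with merged tokens unmerged (or their predictions broadcast back) before the decoder; those integration details are outside this sketch.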
| Original language | English |
| --- | --- |
| Title of host publication | 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024 |
| Publisher | Institute of Electrical and Electronics Engineers |
| Pages | 15773-15782 |
| Number of pages | 10 |
| ISBN (Electronic) | 979-8-3503-5300-6 |
| DOIs | |
| Publication status | Published - 16 Sept 2024 |
| Event | 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024 - Seattle, United States; Duration: 17 Jun 2024 → 21 Jun 2024 |
Conference
| Conference | 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024 |
| --- | --- |
| Abbreviated title | CVPR 2024 |
| Country/Territory | United States |
| City | Seattle |
| Period | 17/06/24 → 21/06/24 |
Keywords
- Computer vision
- Adaptation models
- Codes
- Adaptive systems
- Computational modeling
- Merging
- Semantic Segmentation
- Token Merging
- Efficient Vision Transformers