Gyutae Park, Sungjoon Son, Jaeyoung Yoo, SeHo Kim, Nojun Kwak
In this paper, we propose a transformer-based image matting model called MatteFormer, which takes full advantage of trimap information in the transformer block. Our method first introduces a prior-token, a global representation of each trimap region (i.e., foreground, background, and unknown). These prior-tokens serve as global priors and participate in the self-attention mechanism of each block. Each stage of the encoder is composed of PAST (Prior-Attentive Swin Transformer) blocks, which are based on the Swin Transformer block but differ in two respects: 1) each has a PA-WSA (Prior-Attentive Window Self-Attention) layer that performs self-attention not only over spatial-tokens but also over prior-tokens, and 2) each has a prior-memory that accumulates prior-tokens from the previous blocks and passes them on to the next block. We evaluate MatteFormer on the commonly used image matting datasets Composition-1k and Distinctions-646. Experimental results show that our proposed method achieves state-of-the-art performance by a large margin. Our code is available at https://github.com/webtoon/matteformer.
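To make the mechanism concrete, below is a minimal PyTorch sketch of the two ideas in the abstract: prior-tokens obtained by masked average-pooling over each trimap region, and a window self-attention whose keys/values are extended with those prior-tokens. All names here (`prior_tokens`, `PAWSA`) are illustrative, single-head simplifications, not the authors' API; the official implementation is at the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def prior_tokens(x, trimap):
    """Masked average-pool spatial tokens into one token per trimap region.

    x:      (B, N, C) flattened spatial tokens
    trimap: (B, N, 3) one-hot masks for foreground / background / unknown
    returns (B, 3, C) prior-tokens, one global token per region
    """
    masks = trimap.transpose(1, 2).float()                 # (B, 3, N)
    denom = masks.sum(dim=-1, keepdim=True).clamp(min=1.0)  # region sizes
    return (masks @ x) / denom                             # (B, 3, C)


class PAWSA(nn.Module):
    """Prior-Attentive Window Self-Attention (single-head sketch).

    Spatial tokens in each window attend to the window's own tokens plus
    the prior-tokens, including those accumulated in the prior-memory
    from earlier blocks.
    """

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.scale = dim ** -0.5

    def forward(self, windows, priors):
        # windows: (num_windows*B, W, C) window-partitioned spatial tokens
        # priors:  (num_windows*B, P, C) prior-tokens broadcast per window;
        #          P grows as the prior-memory accumulates across blocks
        q = self.q(windows)                             # queries: spatial only
        kv_in = torch.cat([windows, priors], dim=1)     # keys/values: + priors
        k, v = self.kv(kv_in).chunk(2, dim=-1)
        attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v                                 # (num_windows*B, W, C)
```

Under this reading, each PAST block would compute fresh prior-tokens from its own features, append them to the prior-memory, and pass the enlarged memory to the next block, so later blocks can attend to global priors from every earlier stage.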
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image Matting | Composition-1k | SAD | 23.8 | MatteFormer |
| Image Matting | Composition-1k | MSE (×10⁻³) | 4.0 | MatteFormer |
| Image Matting | Composition-1k | Grad | 8.7 | MatteFormer |
| Image Matting | Composition-1k | Conn | 18.9 | MatteFormer |