Add Mask2Former to SMP #1044

Open
@caxel-ap

Description


The Mask2Former model was introduced in the paper Masked-attention Mask Transformer for Universal Image Segmentation and first released in this repository.

Mask2Former addresses instance, semantic, and panoptic segmentation with the same paradigm: predicting a set of masks and corresponding labels. Hence, all three tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA, MaskFormer, in both performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.
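To make the set-prediction paradigm concrete, here is a minimal NumPy sketch (illustrative only, not the actual Mask2Former implementation; the query count and class count are hypothetical): each of N learned queries predicts a class distribution and a soft binary mask, and a semantic segmentation map falls out by combining them per pixel.

```python
import numpy as np

rng = np.random.default_rng(0)

num_queries, num_classes = 100, 19   # hypothetical sizes for illustration
height, width = 64, 64

# Per-query class probabilities over num_classes + 1 (last slot = "no object").
class_logits = rng.normal(size=(num_queries, num_classes + 1))
class_probs = np.exp(class_logits) / np.exp(class_logits).sum(-1, keepdims=True)

# Per-query mask probabilities (sigmoid of mask logits).
mask_logits = rng.normal(size=(num_queries, height, width))
mask_probs = 1.0 / (1.0 + np.exp(-mask_logits))

# Semantic map: weight each query's mask by its class scores (dropping the
# "no object" slot), sum over queries, then take the per-pixel argmax.
semantic_scores = np.einsum("qc,qhw->chw", class_probs[:, :-1], mask_probs)
semantic_map = semantic_scores.argmax(axis=0)

print(semantic_map.shape)  # (64, 64)
```

Instance and panoptic outputs come from the same (class, mask) pairs, just post-processed differently, which is why one architecture covers all three tasks.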

Papers with Code:
https://paperswithcode.com/paper/masked-attention-mask-transformer-for

Paper:
https://arxiv.org/abs/2112.01527

HF Reference implementation:
https://huggingface.co/docs/transformers/main/en/model_doc/mask2former
https://github.com/huggingface/transformers/blob/main/src/transformers/models/mask2former/modeling_mask2former.py
