allencbzhang committed on
Commit d7107d7 · verified · 1 Parent(s): e6a8186

Update README.md

Files changed (1)
  1. README.md +4 -16
README.md CHANGED
@@ -9,27 +9,17 @@ library_name: detectron2
  [![Conference](https://img.shields.io/badge/CVPR-2025-blue)]()
  [![Paper](https://img.shields.io/badge/arXiv-2412.10028-brightgreen)](https://arxiv.org/abs/2412.10028)
  [![Project](https://img.shields.io/badge/Project-red)](https://visual-ai.github.io/mrdetr/)
+ [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/mr-detr-instructive-multi-route-training-for/object-detection-on-coco-2017-val)](https://paperswithcode.com/sota/object-detection-on-coco-2017-val?p=mr-detr-instructive-multi-route-training-for)
+

- **Paper:** [Mr. DETR: Instructive Multi-Route Training for Detection Transformers](https://huggingface.co/papers/2412.10028)
+ **Paper:** [Mr. DETR: Instructive Multi-Route Training for Detection Transformers](https://huggingface.co/papers/2412.10028)
  **Project Page:** https://visual-ai.github.io/mrdetr
+ **Code:** https://github.com/Visual-AI/Mr.DETR

  ## Abstract

  Existing methods enhance the training of detection transformers by incorporating an auxiliary one-to-many assignment. In this work, we treat the model as a multi-task framework, simultaneously performing one-to-one and one-to-many predictions. We investigate the roles of each component in the transformer decoder across these two training targets, including self-attention, cross-attention, and feed-forward network. Our empirical results demonstrate that any independent component in the decoder can effectively learn both targets simultaneously, even when other components are shared. This finding leads us to propose a multi-route training mechanism, featuring a primary route for one-to-one prediction and two auxiliary training routes for one-to-many prediction. We enhance the training mechanism with a novel instructive self-attention that dynamically and flexibly guides object queries for one-to-many prediction. The auxiliary routes are removed during inference, ensuring no impact on model architecture or inference cost. We conduct extensive experiments on various baselines, achieving consistent improvements as shown in Figure 1.

- **(Content of the original README below)**
-
- # Mr. DETR
- **<center><font size=4>[CVPR 2025] Mr. DETR: Instructive Multi-Route Training for Detection Transformers</font></center>**
- [Chang-Bin Zhang](https://zhangchbin.github.io)<sup>1</sup>, Yujie Zhong<sup>2</sup>, Kai Han<sup>1</sup>
- <sup>1</sup> <sub>The University of Hong Kong</sub>
- <sup>2</sup> <sub>Meituan Inc.</sub>
-
- [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/mr-detr-instructive-multi-route-training-for/object-detection-on-coco-2017-val)](https://paperswithcode.com/sota/object-detection-on-coco-2017-val?p=mr-detr-instructive-multi-route-training-for)
- <a href="mailto: [email protected]">
- <img alt="email" src="https://img.shields.io/badge/contact_me-email-yellow">
- </a>
-

  ## Updates
  - [04/25] Mr. DETR now supports instance segmentation. We release the code and pre-trained weights.
@@ -43,5 +33,3 @@ Existing methods enhance the training of detection transformers by incorporating

  ## Method
  <img width="1230" alt="" src="assets/mrdetrmethod.png">
-
- ...(rest of the original README content)
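To ground the abstract's contrast between the two training targets, here is a minimal sketch of the two label-assignment styles. The random cost matrix, `k`, and all variable names are illustrative assumptions, not the matcher used in this repository.

```python
# Hedged sketch: one-to-one vs. one-to-many label assignment.
# The cost matrix here is random; a real detector would combine
# classification and box-regression costs between predictions and ground truth.
import torch
from scipy.optimize import linear_sum_assignment

num_preds, num_gt, k = 6, 2, 3
cost = torch.rand(num_preds, num_gt)  # toy (predictions x ground-truth) costs

# One-to-one assignment (primary route): bipartite Hungarian matching,
# so each ground-truth box supervises exactly one query.
pred_idx, gt_idx = linear_sum_assignment(cost.numpy())

# One-to-many assignment (auxiliary routes): each ground-truth box recruits
# its k lowest-cost predictions, densifying the training signal.
topk_idx = cost.topk(k, dim=0, largest=False).indices  # shape (k, num_gt)
```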
 
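The multi-route mechanism itself can be sketched the same way. The toy layer below shares cross-attention and the feed-forward network across routes, gives each route its own self-attention (a plain stand-in for the paper's instructive self-attention), and runs the auxiliary route only in training mode, so inference is untouched. It shows a single auxiliary route rather than the paper's two, and every name is hypothetical, not the released detectron2 implementation.

```python
# Hedged sketch of multi-route training with shared decoder components.
import torch
import torch.nn as nn


class MultiRouteDecoderLayer(nn.Module):
    """Toy decoder layer: a primary one-to-one route plus one auxiliary
    one-to-many route that is dropped at inference (illustrative only)."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Shared components: cross-attention and FFN serve both routes.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim)
        )
        # Route-specific self-attention modules (stand-ins for the paper's
        # instructive self-attention).
        self.self_attn_o2o = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.self_attn_o2m = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def _route(self, queries, memory, self_attn):
        q, _ = self_attn(queries, queries, queries)  # route-specific
        q, _ = self.cross_attn(q, memory, memory)    # shared
        return q + self.ffn(q)                       # shared

    def forward(self, queries, memory):
        out_o2o = self._route(queries, memory, self.self_attn_o2o)
        if not self.training:
            # Auxiliary routes exist only at training time, so model
            # architecture and inference cost are unchanged.
            return out_o2o, None
        out_o2m = self._route(queries, memory, self.self_attn_o2m)
        return out_o2o, out_o2m


layer = MultiRouteDecoderLayer()
queries = torch.randn(2, 300, 256)  # (batch, num_queries, dim)
memory = torch.randn(2, 1024, 256)  # flattened encoder features
layer.train()
o2o, o2m = layer(queries, memory)   # both routes active during training
layer.eval()
o2o, aux = layer(queries, memory)   # only the primary route at inference
```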