---
license: cc-by-4.0
task_categories:
- object-detection
tags:
- computer vision
- amodal-tracking
- object-tracking
- amodal-perception
---

# Segment-Object Dataset

<!-- Provide a quick summary of the dataset. -->
This dataset is collected from [LVIS](https://www.lvisdataset.org/) and [COCO](https://cocodataset.org/#home). Its segments are used to implement the [PasteNOcclude](https://github.com/WesleyHsieh0806/Amodal-Expander?tab=readme-ov-file#rabbit2-pastenocclude) augmentation proposed in [Tracking Any Object Amodally](https://tao-amodal.github.io/).

[**Project Page**](https://tao-amodal.github.io/) | [**Code**](https://github.com/WesleyHsieh0806/TAO-Amodal) | [**Paper**](https://arxiv.org/abs/2312.12433) | [**Citation**](#citation)
22 |
+
|
23 |
+
<div align="center">
|
24 |
+
<a href="https://tao-amodal.github.io/"><img width="95%" alt="TAO-Amodal" src="https://tao-amodal.github.io/static/images/webpage_preview.png"></a>
|
25 |
+
</div>
|
26 |
+
|
27 |
+
</br>
|
28 |
+
|
29 |
+
Contact: [ππ»ββοΈCheng-Yen (Wesley) Hsieh](https://wesleyhsieh0806.github.io/)
|
30 |
+
|
31 |
+
### Dataset Download
|
32 |
+
|
33 |
+
```bash
|
34 |
+
git lfs install
|
35 |
+
git clone [email protected]:datasets/chengyenhsieh/TAO-Amodal-Segment-Object-Large
|
36 |
+
```
|
37 |
+
|
38 |
+
After downloading this dataset, check [here](https://github.com/WesleyHsieh0806/Amodal-Expander/tree/main?tab=readme-ov-file#running-training-and-inference) to see how to train our Amodal Expander with PasteNOcclude.
|
39 |
+
|
40 |
+
|
41 |
+
|
42 |
+
## π Dataset Structure
|
43 |
+
|
44 |
+
The dataset should be structured like this:
|
45 |
+
```bash
|
46 |
+
TAO-Amodal-Segment-Object-Large
|
47 |
+
βββ train-2017
|
48 |
+
β βββ OOOOOO_XXX.jpg
|
49 |
+
βββ segment_object.json
|
50 |
+
```
|
51 |
+
|
52 |
+
## π File Descriptions
|
53 |
+
|
54 |
+
| File Name | Description |
|
55 |
+
| -------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
56 |
+
| segment_object.json | Mask annotations of each segment object |
|
57 |
+
|
58 |
+
|
59 |
+
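The layout of `segment_object.json` is not documented here. Assuming it follows the COCO-style annotation format of its LVIS/COCO sources (an assumption worth verifying against the file itself), the segment annotations can be loaded with a few lines of Python. The `load_segments` helper and the example file name are hypothetical, for illustration only:

```python
import json

def load_segments(json_path):
    """Load segment annotations from segment_object.json.

    Assumes a COCO-style layout with an "annotations" list; this is an
    assumption based on the dataset's LVIS/COCO origin, not a documented fact.
    """
    with open(json_path) as f:
        data = json.load(f)
    # COCO-style files wrap annotations in a dict; fall back to a bare list.
    return data["annotations"] if isinstance(data, dict) else data

# Tiny self-contained example with the assumed structure:
example = {
    "annotations": [
        {"id": 1, "image_id": 7, "segmentation": [[10, 10, 50, 10, 50, 50]]}
    ]
}
with open("example_segment_object.json", "w") as f:
    json.dump(example, f)

anns = load_segments("example_segment_object.json")
print(len(anns))  # -> 1
```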

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
```bibtex
@misc{hsieh2023tracking,
      title={Tracking Any Object Amodally},
      author={Cheng-Yen Hsieh and Tarasha Khurana and Achal Dave and Deva Ramanan},
      year={2023},
      eprint={2312.12433},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```