---
tags:
- image-feature-extraction
- birder
- pytorch
library_name: birder
license: apache-2.0
base_model:
- facebook/PE-Core-B16-224
---

# Model Card for rope_i_vit_b16_pn_ap_c1_pe-core

A ViT-B16 image encoder from the PE-Core model by Bolya et al., converted to the Birder format for image feature extraction.
This version retains the original weights and architecture, except that the CLIP projection layer has been removed to expose the raw image embeddings.
It is a general-purpose visual backbone.

See <https://huggingface.co/facebook/PE-Core-B16-224> for further details.

## Model Details

- **Model Type:** Image classification and detection backbone
- **Model Stats:**
    - Params (M): 92.9
    - Input image size: 224 x 224

- **Papers:**
    - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: <https://arxiv.org/abs/2010.11929>
    - Rotary Position Embedding for Vision Transformer: <https://arxiv.org/abs/2403.13298>
    - Perception Encoder: The best visual embeddings are not at the output of the network: <https://arxiv.org/abs/2504.13181>

## Model Usage

### Image Embeddings

```python
import birder
from birder.inference.classification import infer_image

(net, model_info) = birder.load_pretrained_model("rope_i_vit_b16_pn_ap_c1_pe-core", inference=True)

# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)

# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)

image = "path/to/image.jpeg"  # or a PIL image
(out, embedding) = infer_image(net, image, transform, return_embedding=True)
# embedding is a NumPy array with shape (1, 768)
```
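
Because the returned embedding is a plain NumPy array, it plugs directly into standard similarity search. Below is a minimal sketch that compares two images by cosine similarity; the image paths are placeholders, and nothing beyond the `infer_image` API above and NumPy is assumed.

```python
import numpy as np

import birder
from birder.inference.classification import infer_image

(net, model_info) = birder.load_pretrained_model("rope_i_vit_b16_pn_ap_c1_pe-core", inference=True)
size = birder.get_size_from_signature(model_info.signature)
transform = birder.classification_transform(size, model_info.rgb_stats)


def embed(path: str) -> np.ndarray:
    # Return an L2-normalized (768,) embedding for a single image
    (_, embedding) = infer_image(net, path, transform, return_embedding=True)
    vec = embedding[0]
    return vec / np.linalg.norm(vec)


# Placeholder paths - replace with real images
emb_a = embed("path/to/image_a.jpeg")
emb_b = embed("path/to/image_b.jpeg")

# For unit vectors, cosine similarity is just the dot product
print(float(emb_a @ emb_b))
```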

### Detection Feature Map

```python
from PIL import Image
import birder

(net, model_info) = birder.load_pretrained_model("rope_i_vit_b16_pn_ap_c1_pe-core", inference=True)

# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)

# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)

image = Image.open("path/to/image.jpeg")
features = net.detection_features(transform(image).unsqueeze(0))
# features is a dict (stage name -> torch.Tensor)
print([(k, v.size()) for k, v in features.items()])
# Output example:
# [('neck', torch.Size([1, 768, 14, 14]))]
```
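
When a task calls for one compact vector per image rather than a spatial map, each stage output can be average-pooled over its spatial dimensions. This is a sketch, not part of the Birder API; it assumes only the `detection_features` output layout shown above.

```python
import torch
from PIL import Image
import birder

(net, model_info) = birder.load_pretrained_model("rope_i_vit_b16_pn_ap_c1_pe-core", inference=True)
size = birder.get_size_from_signature(model_info.signature)
transform = birder.classification_transform(size, model_info.rgb_stats)

image = Image.open("path/to/image.jpeg")
with torch.inference_mode():
    features = net.detection_features(transform(image).unsqueeze(0))

# Average-pool each stage map: (1, C, H, W) -> (1, C)
pooled = {k: v.mean(dim=(2, 3)) for k, v in features.items()}
print([(k, v.size()) for k, v in pooled.items()])
# e.g. [('neck', torch.Size([1, 768]))]
```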

## Citation

```bibtex
@misc{dosovitskiy2021imageworth16x16words,
      title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
      author={Alexey Dosovitskiy and Lucas Beyer and Alexander Kolesnikov and Dirk Weissenborn and Xiaohua Zhai and Thomas Unterthiner and Mostafa Dehghani and Matthias Minderer and Georg Heigold and Sylvain Gelly and Jakob Uszkoreit and Neil Houlsby},
      year={2021},
      eprint={2010.11929},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2010.11929},
}

@misc{heo2024rotarypositionembeddingvision,
      title={Rotary Position Embedding for Vision Transformer},
      author={Byeongho Heo and Song Park and Dongyoon Han and Sangdoo Yun},
      year={2024},
      eprint={2403.13298},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2403.13298},
}

@misc{bolya2025perceptionencoderbestvisual,
      title={Perception Encoder: The best visual embeddings are not at the output of the network},
      author={Daniel Bolya and Po-Yao Huang and Peize Sun and Jang Hyun Cho and Andrea Madotto and Chen Wei and Tengyu Ma and Jiale Zhi and Jathushan Rajasegaran and Hanoona Rasheed and Junke Wang and Marco Monteiro and Hu Xu and Shiyu Dong and Nikhila Ravi and Daniel Li and Piotr Dollár and Christoph Feichtenhofer},
      year={2025},
      eprint={2504.13181},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.13181},
}
```