# Docling Model for Layout

This is the **Docling model for layout detection**, designed so it can be loaded and used like any other Hugging Face model.

This model is part of the [Docling repository](https://huggingface.co/ds4sd/docling-models), which provides document layout analysis tools.

## **Usage Example**
Here's how you can load and use the model:

```python
import torch
from PIL import Image
from transformers import RTDetrForObjectDetection, RTDetrImageProcessor

# Load the model and processor
image_processor = RTDetrImageProcessor.from_pretrained("your-username/your-model-name")
model = RTDetrForObjectDetection.from_pretrained("your-username/your-model-name")

# Load an image
image = Image.open("your-image.png")

# Preprocess the image
inputs = image_processor(images=image, return_tensors="pt")

# Perform inference
with torch.no_grad():
    outputs = model(**inputs)

# Post-process results
results = image_processor.post_process_object_detection(
    outputs,
    target_sizes=torch.tensor([(image.height, image.width)]),
    threshold=0.3
)

# Print detected objects
for result in results:
    for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
        score, label = score.item(), label_id.item()
        box = [round(i, 2) for i in box.tolist()]
        print(f"{model.config.id2label[label]}: {score:.2f} {box}")
```
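To sanity-check the detections visually, here is a small follow-on sketch (continuing from the snippet above; the output filename and colors are arbitrary) that draws each detected region and its label onto the page image:

```python
from PIL import ImageDraw

# Draw the detected layout regions onto a copy of the page image
annotated = image.convert("RGB")
draw = ImageDraw.Draw(annotated)
for result in results:
    for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
        x0, y0, x1, y1 = box.tolist()
        label = model.config.id2label[label_id.item()]
        draw.rectangle((x0, y0, x1, y1), outline="red", width=2)
        draw.text((x0, max(y0 - 12, 0)), f"{label} {score.item():.2f}", fill="red")

annotated.save("layout-detections.png")
```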
## **Model Information**
- **Base Model:** RT-DETR (Real-Time Detection Transformer)
- **Intended Use:** Layout detection for documents
- **Framework:** [Hugging Face Transformers](https://huggingface.co/docs/transformers/index)
- **Dataset Used:** Internal dataset for document structure recognition
- **License:** Apache 2.0
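Since this is a standard Transformers object-detection checkpoint, it should also work with the high-level `pipeline` API. A minimal sketch, reusing the placeholder repo id and image path from the example above:

```python
from transformers import pipeline

# "your-username/your-model-name" is a placeholder, as in the usage example above
detector = pipeline("object-detection", model="your-username/your-model-name")

# Each detection is a dict with "label", "score", and "box" (xmin/ymin/xmax/ymax)
for detection in detector("your-image.png", threshold=0.3):
    print(detection["label"], round(detection["score"], 2), detection["box"])
```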
## **Citing This Model**
If you use this model in your work, please cite the main **Docling repository**:

```
@misc{docling2024,
  title        = {Docling Models for Document Layout Analysis},
  author       = {DS4SD Team},
  year         = {2024},
  howpublished = {Hugging Face Repository},
  url          = {https://huggingface.co/ds4sd/docling-models}
}
```
For more details, visit the main repo: [ds4sd/docling-models](https://huggingface.co/ds4sd/docling-models).

## **Contact**
For questions or issues, please open a discussion on Hugging Face or contact [[email protected]].