nielsr (HF Staff) committed
Commit 7e9f419 · verified · 1 Parent(s): 7d4b740

Improve model card: Add metadata, update paper & add GitHub links


This Pull Request improves the model card for POINTS-Reader.

Key improvements include:
- **Metadata addition**: Added `license: apache-2.0`, `library_name: transformers`, and `pipeline_tag: image-text-to-text`. This makes the model easier to discover on the Hugging Face Hub and lets the Hub show the correct "how to use" widget (a minimal usage sketch follows this description).
- **Paper Link Update**: The existing `arXiv` badge link has been fixed to point to the official Hugging Face paper page: https://huggingface.co/papers/2509.01215.
- **GitHub Link Inclusion**: A prominent GitHub badge linking to https://github.com/Tencent/POINTS-Reader has been added to improve code discoverability.
- **Introduction**: Updated the introductory sentence to include a direct link to the paper.

These changes will make the model card more informative and user-friendly.
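
For context on the `pipeline_tag` metadata, here is a minimal sketch of the generic Transformers usage it advertises. It assumes the checkpoint's custom code is compatible with the built-in `image-text-to-text` pipeline; the snippets in the README itself (see the diff below) remain the authoritative way to run the model.

```python
# Hedged sketch only: assumes tencent/POINTS-Reader works through the generic
# `image-text-to-text` pipeline declared by the new `pipeline_tag`. The README's
# own snippets use the model's dedicated interface and should be preferred.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",           # task declared by the new pipeline_tag
    model="tencent/POINTS-Reader",  # repository this PR updates
    trust_remote_code=True,         # the checkpoint ships custom modelling code
)

# Hypothetical local path; replace with a real document image.
result = pipe(
    images="/path/to/your/local/image",
    text="Please extract all the text from the image.",
)
print(result)
```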

Files changed (1)
  1. README.md +23 −10

README.md CHANGED
@@ -1,3 +1,9 @@
+---
+license: apache-2.0
+library_name: transformers
+pipeline_tag: image-text-to-text
+---
+
 <p align="center">
   <img src="images/logo.png" width="700"/>
 <p>
@@ -8,17 +14,20 @@ POINTS-Reader: Distillation-Free Adaptation of Vision-Language Models for Docume

 <p align="center">
   <a href="https://huggingface.co/tencent/POINTS-Reader">
-    <img src="https://img.shields.io/badge/HuggingFace%20Weights-black.svg?logo=HuggingFace" alt="HuggingFace">
+    <img src="https://img.shields.io/badge/%F0%9F%A4%97_HuggingFace-Model-ffbd45.svg" alt="HuggingFace">
+  </a>
+  <a href="https://github.com/Tencent/POINTS-Reader">
+    <img src="https://img.shields.io/badge/GitHub-Code-blue.svg?logo=github&" alt="GitHub Code">
   </a>
-  <a href="">
-    <img src="https://img.shields.io/badge/arXiv__-POINTS--Reader-d4333f?logo=arxiv&logoColor=white&colorA=cccccc&colorB=d4333f&style=flat" alt="arXiv">
+  <a href="https://huggingface.co/papers/2509.01215">
+    <img src="https://img.shields.io/badge/Paper-POINTS--Reader-d4333f?logo=arxiv&logoColor=white&colorA=cccccc&colorB=d4333f&style=flat" alt="Paper">
   </a>
-  <a href="">
+  <a href="https://komarev.com/ghpvc/?username=tencent&repo=POINTS-Reader&color=brightgreen&label=Views" alt="view">
     <img src="https://komarev.com/ghpvc/?username=tencent&repo=POINTS-Reader&color=brightgreen&label=Views" alt="view">
   </a>
 </p>

-We are delighted to announce that the WePOINTS family has welcomed a new member: POINTS-Reader, a vision-language model for end-to-end document conversion.
+We are delighted to announce that the WePOINTS family has welcomed a new member: POINTS-Reader, a vision-language model for end-to-end document conversion, as introduced in the paper [POINTS-Reader: Distillation-Free Adaptation of Vision-Language Models for Document Conversion](https://huggingface.co/papers/2509.01215).

 ## News

@@ -578,8 +587,10 @@ import torch
 # We recommend using the following prompt to better performance,
 # since it is used throughout the training process.
 prompt = (
-    'Please extract all the text from the image with the following requirements:\n'
-    '1. Return tables in HTML format.\n'
+    'Please extract all the text from the image with the following requirements:
+'
+    '1. Return tables in HTML format.
+'
     '2. Return all other text in Markdown format.'
 )
 image_path = '/path/to/your/local/image'
@@ -712,8 +723,10 @@ def call_wepoints(messages: List[dict],
     return response

 prompt = (
-    'Please extract all the text from the image with the following requirements:\n'
-    '1. Return tables in HTML format.\n'
+    'Please extract all the text from the image with the following requirements:
+'
+    '1. Return tables in HTML format.
+'
     '2. Return all other text in Markdown format.'
 )

@@ -772,4 +785,4 @@ If you use this model in your work, please cite the following paper:
   journal={arXiv preprint arXiv:2405.11850},
   year={2024}
 }
-```
+```
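
For anyone copying the prompt out of the hunks above: a plain single-quoted Python string cannot span physical lines, so the escaped form shown on the removed lines is the one that parses. A self-contained sketch for reference, using only content taken from the diff (the image path is the README's own placeholder):

```python
# The recommended extraction prompt as a valid Python literal, with explicit \n
# escapes; adjacent string literals are concatenated by the parser.
prompt = (
    'Please extract all the text from the image with the following requirements:\n'
    '1. Return tables in HTML format.\n'
    '2. Return all other text in Markdown format.'
)

# Placeholder path, as in the README snippet.
image_path = '/path/to/your/local/image'
print(prompt)
```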