size_categories:
- 10K<n<100K
---

# The ArGiMi Ardian datasets: text and images

The ArGiMi project is committed to open-source principles and data sharing.
Thanks to our generous partners, we are releasing several valuable datasets to the public.

## Dataset description

This dataset comprises 11,000 financial annual reports, written in English, meticulously extracted from their original PDF format to provide a valuable resource for researchers and developers in financial analysis and natural language processing (NLP). The reports were published from the late 1990s to 2023.

This dataset provides an image of each document page. A lighter, **text-only version** is also available at [`artefactory/Argimi-Ardian-Finance-10k-text`](https://huggingface.co/datasets/artefactory/Argimi-Ardian-Finance-10k-text).

You can load the dataset with:

```python
from datasets import load_dataset

ds = load_dataset("artefactory/Argimi-Ardian-Finance-10k-text-image", split="train")

# Or stream the dataset to save memory:
ds = load_dataset("artefactory/Argimi-Ardian-Finance-10k-text-image", split="train", streaming=True)
```

## Dataset composition

* Each PDF was divided into **individual pages** to facilitate granular analysis.
* For each page, the following data points were extracted:
  * **Raw text:** the complete textual content of the page.
  * **Screenshot:** a high-resolution image of the page, preserving its visual layout and formatting.
  * **Cells:** each table cell was identified and represented as a `Cell` object in the `docling` framework. Each `Cell` object encapsulates:
    * `id`: a unique identifier assigned to each cell, ensuring unambiguous referencing.
    * `text`: the textual content of the cell.
    * `bbox`: the bounding box coordinates of the cell, defining its location and dimensions on the page.
    * When OCR is employed, cells are instead represented as `OcrCell` objects, which add a `confidence` attribute quantifying how confident the OCR engine is in the recognized text.
  * **Segments:** beyond individual cells, `docling` segments each page into distinct content units, each represented as a `Segment` object. Segments provide a structured view of the document's layout, covering tables, headers, paragraphs, and other structural components. Each `Segment` object contains:
    * `text`: the textual content of the segment.
    * `bbox`: the bounding box coordinates, specifying the segment's position and size on the page.
    * `label`: a categorical label indicating the type of content (e.g., "table", "header", "paragraph").
* To guarantee unique identification, each document is assigned a distinct identifier derived from the hash of its content.
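As an illustration, a page's segments can be filtered by their `label` to pull out just the tables. The record below is a hypothetical stand-in mirroring the fields described above, not the dataset's exact schema:

```python
# Hypothetical page record mirroring the fields described above;
# the actual dataset schema may differ.
page = {
    "segments": [
        {"text": "Annual Report 2023", "bbox": [72, 40, 540, 80], "label": "header"},
        {"text": "Revenues increased year over year.", "bbox": [72, 100, 540, 300], "label": "paragraph"},
        {"text": "Revenue 2022 2023", "bbox": [72, 320, 540, 500], "label": "table"},
    ],
}

def segments_by_label(page: dict, label: str) -> list[dict]:
    """Return the page's segments carrying the given content label."""
    return [seg for seg in page["segments"] if seg["label"] == label]

tables = segments_by_label(page, "table")
print(len(tables))  # → 1
```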

## Parsing description

The dataset was built with the `docling` library ([documentation](https://ds4sd.github.io/docling/)).

* PDFs were processed with the `DocumentConverter` class, using the `PyPdfiumDocumentBackend` to handle the PDF format.
* To ensure high-quality extraction, the following `PdfPipelineOptions` were configured:
```python
pipeline_options = PdfPipelineOptions(ocr_options=EasyOcrOptions(use_gpu=True))
pipeline_options.images_scale = 2.0  # Scale image resolution by a factor of 2
pipeline_options.generate_page_images = True  # Generate page images
pipeline_options.do_ocr = True  # Perform OCR
pipeline_options.do_table_structure = True  # Extract table structure
pipeline_options.table_structure_options.do_cell_matching = True  # Match cells in tables
pipeline_options.table_structure_options.mode = TableFormerMode.ACCURATE  # Accurate mode for table structure extraction
```
* These options collectively enable:
  * GPU-accelerated optical character recognition (OCR) via `EasyOcr`.
  * Upscaling of image resolution by a factor of 2, enhancing the clarity of visual elements.
  * Generation of page images, providing a visual representation of each page within the document.
  * Comprehensive table structure extraction, including cell matching, to accurately capture tabular data within the reports.
  * The "accurate" mode for table structure extraction, prioritizing precision in identifying and delineating tables.
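For reference, these options can be wired into a converter roughly as follows. This is a minimal configuration sketch assuming a recent `docling` release; exact import paths and option names may differ between versions, and `report.pdf` is a placeholder path:

```python
from docling.backend.pypdfium2_backend import PyPdfiumDocumentBackend
from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import (
    EasyOcrOptions,
    PdfPipelineOptions,
    TableFormerMode,
)
from docling.document_converter import DocumentConverter, PdfFormatOption

pipeline_options = PdfPipelineOptions(ocr_options=EasyOcrOptions(use_gpu=True))
pipeline_options.images_scale = 2.0
pipeline_options.generate_page_images = True
pipeline_options.do_ocr = True
pipeline_options.do_table_structure = True
pipeline_options.table_structure_options.do_cell_matching = True
pipeline_options.table_structure_options.mode = TableFormerMode.ACCURATE

# Route PDFs through the pypdfium2 backend with the options above.
converter = DocumentConverter(
    format_options={
        InputFormat.PDF: PdfFormatOption(
            pipeline_options=pipeline_options,
            backend=PyPdfiumDocumentBackend,
        )
    }
)

result = converter.convert("report.pdf")  # placeholder input path
```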

## Disclaimer

This dataset, made available for experimental purposes as part of the ArGiMi research project, is provided "as is" for informational purposes only. The original publicly available data was provided by Ardian; Artefact has processed it and now releases it publicly with Ardian's agreement. None of ArGiMi, Artefact, or Ardian makes any representations or warranties of any kind (express or implied) regarding the completeness, accuracy, reliability, suitability, or availability of the dataset or its contents. Any reliance you place on such information is strictly at your own risk. In no event shall ArGiMi, Artefact, or Ardian be liable for any loss or damage, including without limitation indirect or consequential loss or damage, or any loss or damage whatsoever arising from loss of data or profits arising out of, or in connection with, the use of this dataset. This disclaimer includes, but is not limited to, claims relating to intellectual property infringement, negligence, breach of contract, and defamation.

## Acknowledgement

The ArGiMi consortium gratefully acknowledges Ardian for their invaluable contribution in gathering the documents that comprise this dataset. Their effort and collaboration were essential in enabling the creation and release of this dataset for public use. The ArGiMi project is a collaboration with Giskard, Mistral, INA, and BnF, and is sponsored by the France 2030 program of the French Government.

## Citation

If you find our datasets useful for your research, consider citing us in your work:

```bibtex
@misc{argimi2024Datasets,
  title={The ArGiMi datasets},
  author={Hicham Randrianarivo and Charles Moslonka and Emmanuel Malherbe},
  year={2024},
}
```