---
dataset_info:
- config_name: full
  features:
  - name: doc_key
    dtype: string
  - name: gutenberg_key
    dtype: string
  - name: sentences
    sequence:
      sequence: string
  - name: clusters
    sequence:
      sequence:
        sequence: int64
  - name: characters
    list:
    - name: name
      dtype: string
    - name: mentions
      sequence:
        sequence: int64
  splits:
  - name: train
    num_bytes: 118643409
    num_examples: 45
  - name: validation
    num_bytes: 5893208
    num_examples: 5
  - name: test
    num_bytes: 2732407
    num_examples: 3
  download_size: 317560335
  dataset_size: 127269024
- config_name: split
  features:
  - name: doc_key
    dtype: string
  - name: gutenberg_key
    dtype: string
  - name: sentences
    sequence:
      sequence: string
  - name: clusters
    sequence:
      sequence:
        sequence: int64
  - name: characters
    list:
    - name: name
      dtype: string
    - name: mentions
      sequence:
        sequence: int64
  splits:
  - name: train
    num_bytes: 118849212
    num_examples: 7544
  - name: validation
    num_bytes: 5905814
    num_examples: 398
  - name: test
    num_bytes: 2758250
    num_examples: 152
  download_size: 317560335
  dataset_size: 127513276
language:
- en
pretty_name: BOOKCOREF
size_categories:
- 10M<n<100M
tags:
- coreference-resolution
license: cc-by-sa-4.0
---
<div align="center">
<img src="assets/bookcoref.png" width="700">
</div>
<div style="display: flex; justify-content: center; align-items: center; gap: 8px;">
<a href="https://2025.aclweb.org/" style="line-height: 0;"><img src="http://img.shields.io/badge/ACL-2025-4b44ce.svg" style="display: block; margin: 0;"/></a>
<a href="https://aclanthology.org/2025.acl-long.1197/" style="line-height: 0;"><img src="http://img.shields.io/badge/paper-ACL--anthology-B31B1B.svg" style="display: block; margin: 0;"/></a>
<a href="https://arxiv.org/abs/2507.12075" style="line-height: 0;"><img src="https://img.shields.io/badge/arXiv-2507.12075-008080.svg" style="display: block; margin: 0;"/></a>
</div>
This data repository contains the <span style="font-variant: small-caps;">BookCoref</span> dataset, introduced in the paper <a href="https://aclanthology.org/2025.acl-long.1197/"><span style="font-variant: small-caps;">BookCoref</span>: Coreference Resolution at Book Scale</a> by G. Martinelli, T. Bonomo, P. Huguet Cabot and R. Navigli, presented at the <a href="https://2025.aclweb.org/">ACL 2025</a> conference.
We release both the manually-annotated `test` split (<span style="font-variant: small-caps;">BookCoref</span><sub>gold</sub>) and the pipeline-generated `train` and `validation` splits (<span style="font-variant: small-caps;">BookCoref</span><sub>silver</sub>).
To enable the replication of our results, we also release the `train`, `validation`, and `test` partitions under the configuration name `split`, in which each book is chunked into contiguous windows of 1,500 tokens, retaining the coreference clusters of each window.
## ⚠️ Project Gutenberg license disclaimer
<span style="font-variant: small-caps;">BookCoref</span> is based on books from Project Gutenberg, which are publicly available under the [Project Gutenberg License](https://www.gutenberg.org/policy/license.html).
This license holds for users located in the United States, where the books are in the public domain.
We do not distribute the original text of the books; rather, our dataset consists of a script that downloads and preprocesses the books from an archived version of Project Gutenberg through the [Wayback Machine](https://web.archive.org/).
Users are responsible for checking the copyright status of each book in their country.
## 📚 Quickstart
To use the <span style="font-variant: small-caps;">BookCoref</span> dataset, you need to install the following Python packages in your environment:
```bash
pip install "datasets==3.6.0" "deepdiff==8.5.0" "spacy==3.8.7" "nltk==3.9.1"
```
You can then load each configuration through Hugging Face's `datasets` library:
```python
from datasets import load_dataset
bookcoref = load_dataset("sapienzanlp/bookcoref")
bookcoref_split = load_dataset("sapienzanlp/bookcoref", name="split")
```
These commands will download and preprocess the books, add the coreference annotations, and return a `DatasetDict` according to the requested configuration.
```python
>>> bookcoref
DatasetDict({
    train: Dataset({
        features: ['doc_key', 'gutenberg_key', 'sentences', 'clusters', 'characters'],
        num_rows: 45
    })
    validation: Dataset({
        features: ['doc_key', 'gutenberg_key', 'sentences', 'clusters', 'characters'],
        num_rows: 5
    })
    test: Dataset({
        features: ['doc_key', 'gutenberg_key', 'sentences', 'clusters', 'characters'],
        num_rows: 3
    })
})
>>> bookcoref_split
DatasetDict({
    train: Dataset({
        features: ['doc_key', 'gutenberg_key', 'sentences', 'clusters', 'characters'],
        num_rows: 7544
    })
    validation: Dataset({
        features: ['doc_key', 'gutenberg_key', 'sentences', 'clusters', 'characters'],
        num_rows: 398
    })
    test: Dataset({
        features: ['doc_key', 'gutenberg_key', 'sentences', 'clusters', 'characters'],
        num_rows: 152
    })
})
```
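If you only need the manually annotated <span style="font-variant: small-caps;">BookCoref</span><sub>gold</sub> annotations, you can request the `test` split directly. This is a usage sketch relying on the standard `split` argument of `load_dataset`, not to be confused with the `split` configuration name:
```python
from datasets import load_dataset

# Load only the gold, manually annotated test split of the full-book configuration
bookcoref_gold = load_dataset("sapienzanlp/bookcoref", split="test")
print(bookcoref_gold.num_rows)  # 3 full books
```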
## ℹ️ Data format
<span style="font-variant: small-caps;">BookCoref</span> is a collection of annotated books.
Each item contains the annotations of one book following the structure of OntoNotes:
```python
{
    doc_key: "pride_and_prejudice_1342",  # (str) i.e., ID of the document
    gutenberg_key: "1342",  # (str) i.e., key of the book in Project Gutenberg
    sentences: [["CHAPTER", "I."], ["It", "is", "a", "truth", "universally", "acknowledged", ...], ...],  # list[list[str]] i.e., list of word-tokenized sentences
    clusters: [[[79, 80], [81, 82], ...], [[2727, 2728], ...], ...],  # list[list[list[int]]] i.e., list of clusters' mention offsets
    characters: [
        {
            name: "Mr Bennet",
            mentions: [[79, 80], ...],
        },
        {
            name: "Mr. Darcy",
            mentions: [[2727, 2728], [2729, 2730], ...],
        },
    ],  # list[character] i.e., list of character objects, each consisting of a name and mention offsets: dict[name: str, mentions: list[list[int]]]
}
```
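To make these offsets concrete, the following minimal sketch recovers the surface form of each mention in a cluster. It assumes, as in OntoNotes-style corpora, that offsets index the book's flattened token sequence and that end offsets are inclusive; adjust the slice if your reading of the data differs:
```python
from datasets import load_dataset

book = load_dataset("sapienzanlp/bookcoref", split="test")[0]

# Flatten the word-tokenized sentences into a single token sequence
tokens = [token for sentence in book["sentences"] for token in sentence]

# Print the text of every mention in the first cluster
# (assuming inclusive [start, end] offsets into the flattened tokens)
for start, end in book["clusters"][0]:
    print(" ".join(tokens[start : end + 1]))
```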
We also include character names, which are not exploited in traditional coreference settings but could inspire future directions in Coreference Resolution.
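As an illustration, one hypothetical use of these annotations is ranking characters by how often they are mentioned (a minimal sketch, assuming the `mentions` field holds one offset pair per mention):
```python
from datasets import load_dataset

book = load_dataset("sapienzanlp/bookcoref", split="test")[0]

# Sort characters by their number of annotated mentions, most frequent first
by_mentions = sorted(book["characters"], key=lambda c: len(c["mentions"]), reverse=True)
for character in by_mentions[:5]:
    print(character["name"], len(character["mentions"]))
```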
## 📊 Dataset statistics
<span style="font-variant: small-caps;">BookCoref</span> has distinctly book-scale characteristics, as summarized in the following table:
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64f85270ceabf1e6fc524bb8/DgYU_2yKlZuwDTV-duGWh.png" width=1000/>
</div>
## 🖋️ Cite this work
This work has been published at ACL 2025 (main conference). If you use any artifact of this dataset, please consider citing our paper as follows:
```bibtex
@inproceedings{martinelli-etal-2025-bookcoref,
    title = "{BOOKCOREF}: Coreference Resolution at Book Scale",
    author = "Martinelli, Giuliano and
      Bonomo, Tommaso and
      Huguet Cabot, Pere-Llu{\'i}s and
      Navigli, Roberto",
    editor = "Che, Wanxiang and
      Nabende, Joyce and
      Shutova, Ekaterina and
      Pilehvar, Mohammad Taher",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.acl-long.1197/",
    pages = "24526--24544",
    ISBN = "979-8-89176-251-0",
}
```
## Authors
- [Giuliano Martinelli](https://www.linkedin.com/in/giuliano-martinelli-20a9b2193/)
- [Tommaso Bonomo](https://www.linkedin.com/in/tommaso-bonomo/)
- [Pere-Lluís Huguet Cabot](https://www.linkedin.com/in/perelluis/)
- [Roberto Navigli](https://www.linkedin.com/in/robertonavigli/)
## ©️ License information
All the annotations provided by this repository are licensed under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
<!-- The tokenized text of books is a modification of books from Project Gutenberg, following [their license](https://www.gutenberg.org/policy/license.html). -->