---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-MS^2
- extended|other-Cochrane
task_categories:
- summarization
- text2text-generation
task_ids:
- summarization-other-query-based-summarization
- summarization-other-query-based-multi-document-summarization
- summarization-other-scientific-documents-summarization
paperswithcode_id: multi-document-summarization
pretty_name: MSLR Shared Task
---

This is a copy of the MS^2 dataset, except that the input source documents of its `validation` split have been replaced with documents retrieved by a sparse retriever. The retrieval pipeline used the following components (sketched in code after the list):

- query: the `background` field of each example
- corpus: the union of all documents in the `train`, `validation`, and `test` splits, where a document is the concatenation of its `title` and `abstract`
- retriever: BM25 via PyTerrier with default settings
- top-k strategy: `"oracle"`, i.e. the number of documents retrieved, `k`, is set to the original number of input documents for each example
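A minimal sketch of such a pipeline is shown below. It is not the exact code used to build this dataset; `all_documents`, `validation_examples`, and `original_num_docs` are placeholder names for the corpus, the validation queries, and the per-example oracle `k`.

```python
import re

import pandas as pd
import pyterrier as pt

if not pt.started():
    pt.init()

# Corpus: every document from the train, validation, and test splits, where a
# document is the concatenation of its title and abstract.
# `all_documents` is a placeholder iterable of (doc_id, title, abstract) tuples.
corpus = [
    {"docno": str(doc_id), "text": f"{title} {abstract}"}
    for doc_id, title, abstract in all_documents
]

# Index the corpus and build a BM25 retriever with PyTerrier's default settings.
index_ref = pt.IterDictIndexer("./mslr_index").index(corpus)
bm25 = pt.BatchRetrieve(index_ref, wmodel="BM25")

# Queries: the `background` field of each validation example. PyTerrier's query
# parser rejects most punctuation, so the text is lightly sanitised first.
# `validation_examples` is a placeholder iterable of (example_id, background) pairs.
queries = pd.DataFrame(
    [
        {"qid": str(qid), "query": re.sub(r"[^\w\s]", " ", background)}
        for qid, background in validation_examples
    ]
)

results = bm25.transform(queries)

# "Oracle" top-k: keep exactly as many retrieved documents per query as the
# example originally had. `original_num_docs` is a placeholder dict qid -> k.
results = (
    results.sort_values(["qid", "rank"])
           .groupby("qid", group_keys=False)
           .apply(lambda group: group.head(original_num_docs[group.name]))
)
```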
Retrieval results on the `test` set:

| ndcg | recall@100 | recall@1000 | Rprec |
|---|---|---|---|
| 0.4012 | 0.3780 | 0.6601 | 0.1833 |
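Scores of this kind can be computed with PyTerrier's standard evaluation utilities. A hedged sketch, reusing `bm25` and `queries` from the snippet above and assuming a `qrels` DataFrame (columns `qid`, `docno`, `label`) that marks each example's original input documents as relevant:

```python
import pyterrier as pt

# `qrels` is an assumed DataFrame of relevance judgements: one row per
# (qid, docno) pair for the documents originally attached to each example.
report = pt.Experiment(
    [bm25],
    queries,
    qrels,
    eval_metrics=["ndcg", "recall_100", "recall_1000", "Rprec"],
    names=["BM25"],
)
print(report)
```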
Note: The `abstract` field of the `validation` split contains both the title and the abstract. Accordingly, the `title` field contains empty strings. This decision was made in order to simplify the retrieval pipeline.
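To inspect the effect of this choice, the `validation` split can be loaded with the Hugging Face `datasets` library. A minimal sketch, where the repository path is a placeholder and the field layout is assumed to follow the MS^2 schema (one list entry per input document):

```python
from datasets import load_dataset

# Placeholder repository path: substitute the actual name of this dataset.
validation = load_dataset("owner/mslr-ms2-sparse", split="validation")

example = validation[0]
print(example["background"])    # the query used for retrieval
print(example["title"][:3])     # empty strings in this split
print(example["abstract"][:3])  # each entry holds title + abstract of a retrieved document
```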