---
dataset_info:
  features:
    - name: instanceID
      dtype: string
    - name: dataID1
      dtype: string
    - name: dataID2
      dtype: string
    - name: lemma
      dtype: string
    - name: context1
      dtype: string
    - name: context2
      dtype: string
    - name: indices_target_token1
      dtype: string
    - name: indices_target_sentence1
      dtype: string
    - name: indices_target_sentence2
      dtype: string
    - name: indices_target_token2
      dtype: string
    - name: dataIDs
      dtype: string
    - name: label_set
      dtype: string
    - name: non_label
      dtype: string
    - name: label
      dtype: float64
    - name: fold1
      dtype: string
    - name: fold2
      dtype: string
    - name: fold3
      dtype: string
    - name: fold4
      dtype: string
    - name: fold5
      dtype: string
    - name: fold6
      dtype: string
    - name: fold7
      dtype: string
    - name: fold8
      dtype: string
    - name: fold9
      dtype: string
    - name: fold10
      dtype: string
    - name: __index_level_0__
      dtype: int64
  splits:
    - name: train
      num_bytes: 2863071
      num_examples: 3823
  download_size: 783700
  dataset_size: 2863071
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: apache-2.0
task_categories:
  - text-classification
  - sentence-similarity
language:
  - en
tags:
  - Topic Relatedness
  - Semantic Relatedness
pretty_name: TRoTR
---

# TRoTR

This is the training dataset used in our work *TRoTR: A Framework for Evaluating the Re-contextualization of Text Reuse* by Francesco Periti, Pierluigi Cassotti, Stefano Montanelli, Nina Tahmasebi, and Dominik Schlechtweg. See our paper for training details.

The original human-annotated judgments are available in the repository for our project: https://github.com/FrancescoPeriti/TRoTR.
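
The dataset can be loaded with the Hugging Face `datasets` library. The sketch below assumes the repository ID `FrancescoPeriti/TRoTR`, inferred from this card's location on the Hub; each example pairs two contexts of the same reused text with a graded topic-relatedness judgment.

```python
from datasets import load_dataset

# Repository ID assumed from this card's location on the Hub.
dataset = load_dataset("FrancescoPeriti/TRoTR", split="train")

example = dataset[0]
print(example["lemma"])     # the reused text span
print(example["context1"])  # first context of reuse
print(example["context2"])  # second context of reuse
print(example["label"])     # graded topic-relatedness judgment (float)
```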

## Citation

Francesco Periti, Pierluigi Cassotti, Stefano Montanelli, Nina Tahmasebi, and Dominik Schlechtweg. 2024. TRoTR: A Framework for Evaluating the Re-contextualization of Text Reuse. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 13972–13990, Miami, Florida, USA. Association for Computational Linguistics.

BibTeX:

```bibtex
@inproceedings{periti2024trotr,
    title = {{TRoTR: A Framework for Evaluating the Re-contextualization of Text Reuse}},
    author = "Periti, Francesco  and Cassotti, Pierluigi  and Montanelli, Stefano  and Tahmasebi, Nina  and Schlechtweg, Dominik",
    editor = "Al-Onaizan, Yaser  and Bansal, Mohit  and Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.774",
    pages = "13972--13990",
    abstract = "Current approaches for detecting text reuse do not focus on recontextualization, i.e., how the new context(s) of a reused text differs from its original context(s). In this paper, we propose a novel framework called TRoTR that relies on the notion of topic relatedness for evaluating the diachronic change of context in which text is reused. TRoTR includes two NLP tasks: TRiC and TRaC. TRiC is designed to evaluate the topic relatedness between a pair of recontextualizations. TRaC is designed to evaluate the overall topic variation within a set of recontextualizations. We also provide a curated TRoTR benchmark of biblical text reuse, human-annotated with topic relatedness. The benchmark exhibits an inter-annotator agreement of .811. We evaluate multiple, established SBERT models on the TRoTR tasks and find that they exhibit greater sensitivity to textual similarity than topic relatedness. Our experiments show that fine-tuning these models can mitigate such a kind of sensitivity.",
}
```