This dataset is gated: the repository is publicly accessible, but you must agree to share your contact information and accept the access conditions before you can view its files and content.


ScholaWrite: A Dataset of End-to-End Scholarly Writing Process

Linghe Wang, Minhwa Lee, Ross Volkov, Luan Tuyen Chau, Dongyeop Kang

Minnesota NLP, University of Minnesota Twin Cities

Equal Contribution

arXiv

Project Page

Dataset Summary and Purpose

ScholaWrite: A Dataset of End-to-End Cognitive Writing Process in Scholarly Manuscripts.

The purpose of this dataset is to improve the scholarly writing capabilities of language models.

Languages

Scholarly writing data in the NLP field, written in English.

Dataset Structure

Data Instances

{
    "Project": 1,
    "timestamp": 1702958491535,
    "author": "1",
    "before text": "One important expct of studying LLMs is ..",
    "after text": "One important aspect of studying LLMs is ..",
    "label": "fluency",
    "high-level": "REVISION"
}

Data Fields

  • Project: Overleaf project ID (int64)
  • timestamp: Time at which the change was made, recorded as a Unix timestamp (int64)
  • author: Author ID within the scope of the project (int64)
  • before text: The text in the Overleaf editor visible to the author before the edit (string)
  • after text: The text in the Overleaf editor visible to the author after the edit (string; see the sketch after this list for one way to recover the edited span from the two snapshots)
  • label: One of the 15 intention labels defined in the paper, representing the author's intention when making the edit (string)
  • high-level: The high-level intention category to which the label belongs (string)
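
Because each entry stores the full visible text before and after an edit, the edited span itself has to be recovered by diffing the two snapshots. The snippet below is a minimal sketch using Python's standard difflib; the example strings come from the data instance above, and it is only an illustration rather than official tooling.

import difflib

# One edit, using the before/after snapshots from the data instance above
before = "One important expct of studying LLMs is .."
after = "One important aspect of studying LLMs is .."

# Align the two snapshots and print only the spans that changed
matcher = difflib.SequenceMatcher(None, before, after)
for op, i1, i2, j1, j2 in matcher.get_opcodes():
    if op != "equal":
        print(op, repr(before[i1:i2]), "->", repr(after[j1:j2]))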

Data Splits

The dataset is divided into four splits:

  • train: 49,212 entries
  • test: 12,292 entries
  • test_small: 3,238 entries
    Derived from the test set with a cap of 300 entries per label; this subset is used for the classification comparison in the paper. (A sketch of how such a cap can be reproduced follows below.)
  • all_sorted: 61,504 entries
    All train and test entries combined, as loaded in the How to Access example below.
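
For reference, a capped subset like test_small can be approximated from the test split by keeping at most 300 entries per label. The snippet below is an illustrative sketch only, not the authors' exact sampling script, and assumes you have already authenticated as described in the How to Access section below.

import pandas as pd
from datasets import load_dataset

# Load the test split (requires prior Hugging Face authentication; see How to Access)
test_df = pd.DataFrame(load_dataset("minnesotanlp/scholawrite")["test"])

# Keep at most 300 entries per intention label, mirroring the test_small description above
capped = test_df.groupby("label").head(300).reset_index(drop=True)
print(capped["label"].value_counts())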

Label-wise Data Distribution

Label                   train    test  test_small  all_sorted
Text Production        28,258   7,065         300      35,323
Clarity                 5,682   1,414         300       7,096
Idea Generation         3,460     865         300       4,325
Object Insertion        2,256     558         300       2,814
Structural              1,816     453         300       2,269
Coherence               1,600     404         300       2,004
Visual Formatting       1,557     395         300       1,952
Section Planning        1,061     263         263       1,324
Cross-reference           816     200         200       1,016
Linguistic Style          771     191         191         962
Fluency                   691     172         172         863
Citation Integration      532     135         135         667
Scientific Accuracy       356      87          87         443
Idea Organization         247      63          63         310
Macro Insertion           109      27          27         136

How to Access

import os
from huggingface_hub import login
from datasets import load_dataset
import pandas as pd
from dotenv import load_dotenv

# Read the Hugging Face access token from a local .env file (HF_TOKEN=...)
load_dotenv()
HUGGINGFACE_TOKEN = os.getenv("HF_TOKEN")
login(token=HUGGINGFACE_TOKEN)

# Download the gated dataset and convert each split to a pandas DataFrame
dataset = load_dataset("minnesotanlp/scholawrite")
train_df = pd.DataFrame(dataset["train"])
test_df = pd.DataFrame(dataset["test"])
test_small_df = pd.DataFrame(dataset["test_small"])
all_sorted_df = pd.DataFrame(dataset["all_sorted"])

Dataset Creation

Curation Rationale

Scholarly writing requires researchers to produce texts that deliver novel findings concisely yet precisely and to follow the systematized structure and style of their target venue. To address this need, researchers have leveraged large language models (LLMs) to develop intelligent support systems for several writing tasks, such as revision and feedback generation.

LLMs are generally trained to progress autoregressively (i.e., generating text sequentially according to predicted probability distributions). In contrast, human writing generally involves multiple iterations of complex and non-linear cognitive actions to refine the main messages. Therefore, considering the distinct patterns in the human writing process, it is imperative to understand the underlying cognitive processes of how humans form entire texts in a scholarly setting.

While much literature examines the patterns and complexities of cognitive processes in distinct writing actions, scholarly writing has been explored far less. Our work observes the end-to-end process of scholarly writing, inspired by keystroke collection, which has long been a major methodology for observing individual writing processes in cognitive science and has appeared at a small scale in a few recent NLP works.

To the best of our knowledge, this is the first work to present a keystroke corpus of scholarly writing with annotations regarding cognitive processes, which were collected over multiple months and produced by early-career researchers. We also present a comprehensive taxonomy of cognitive writing processes specific to the scholarly writing domain, based on the annotated keystrokes.

We present ScholaWrite, a curated dataset of 63K LaTeX-based keystrokes from manuscripts that became publications in the computer science domain, annotated by experts in linguistics and computer science. We also develop a taxonomy of scholarly writing intentions, providing an overall understanding of how scholars tend to develop their manuscripts.

Source Data

Initial Data Collection and Normalization

We designed and implemented a Chrome extension that enables real-time collection of keystroke trajectories on the Overleaf platform. Participants create their account credentials, and after they log into the system, the extension silently monitors their keystrokes in the background without disrupting the typical writing process.

The extension collects the viewable text in the code editor for every keyup event fired in the browser. When one of the following actions occurs:

  1. Inserting or deleting a space/newline
  2. Copy/paste
  3. Undo/redo
  4. Switching files
  5. Scrolling a page

the extension uses the diff_match_patch package to generate an array of differences between the two subsequent text snapshots. It then sends the difference array, along with other metadata (e.g., timestamp, author ID, and file name), to the backend server.
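
The extension code itself is not distributed with this dataset, but the shape of the difference arrays can be reproduced with the Python port of diff_match_patch. The snippet below is a minimal sketch for illustration; the pip package name diff-match-patch and the example strings are our assumptions, not part of the official pipeline.

from diff_match_patch import diff_match_patch  # pip install diff-match-patch

before = "One important expct of studying LLMs is .."
after = "One important aspect of studying LLMs is .."

dmp = diff_match_patch()
diffs = dmp.diff_main(before, after)  # list of (op, text) pairs: -1 = delete, 0 = equal, 1 = insert
dmp.diff_cleanupSemantic(diffs)       # merge tiny fragments into more readable chunks
print(diffs)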

Who are the source language producers?

We recruited 10 graduate students from the computer science department of a four-year university in the U.S., all proficient in English, who were actively preparing manuscripts in the Overleaf LaTeX editor for submission to peer-reviewed conferences. We collected data from November 2023 to February 2024, a period of roughly four months.

Annotations

Who are the annotators?

Due to privacy concerns, we did not hire external expert freelancers; instead, the two corresponding authors of this paper annotated the data. Both are graduate students with extensive scholarly writing experience in natural language processing and strong data annotation skills. The raw keystroke data collected by our Chrome extension could potentially contain personally identifiable information, such as specific content edits or metadata that could reveal the identity of the authors. To ensure the confidentiality and ethical handling of sensitive information, we restricted access to the data to the authors only. This annotation process was also authorized by the IRB of the authors' institution. Please note that the final public dataset, the one you are currently viewing, was post-processed to ensure it does not contain any personally identifiable information.

Post-processing for annotation

For use during the annotation phase, each keystroke entry from the raw collection includes the following fields:

  1. A valid file name.
  2. A valid writing action that triggered keystroke logging (e.g., type, paste, etc.).
  3. A valid array of differences to enable visualization of writing trajectories.
  4. The line numbers in the Overleaf editor.

Annotation process

The two authors (the annotators) collaborated with a cognitive linguist to develop a codebook and review the annotation results. The annotators conducted an iterative open-coding process to identify unique writing intentions from the keystrokes and developed a codebook of intention labels ("ground-truth labels") within each high-level process (Planning, Implementation, and Revision), based on the findings of Flower and Hayes and of Koo et al. Using this codebook, the annotators re-labeled each span of keystrokes with the corresponding label during the annotation process.

The annotators were fully informed about all the labels and had complete access to them when annotating each data point. The annotation process is the same for every label: first, the annotators read through multiple consecutive data points and identify which high-level stage is taking place (Planning, Implementation, or Revision). Once they have identified the current high-level label, they determine where it ends. They then decide on the low-level label within that high-level stage (e.g., idea generation or idea organization under Planning). Finally, they identify the interval covered by the low-level label and annotate every data point in that interval with it. If a keystroke does not deliver any insight, it is labeled as an artifact.

Post-processing for public use

Based on the annotated data, we performed the following additional processing for model training and public release (an illustrative sketch follows the list):

  1. Anonymize personally identifiable information (PII) such as affiliations, names, and email addresses using regular expressions.
  2. Remove data entries labeled as artifact.
  3. Filter out data entries annotated with multiple intention labels.
  4. Filter out data entries whose difference array is longer than 300 items.
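
For illustration, the sketch below approximates steps 1-3 on a DataFrame of annotated entries. The regex pattern, the placeholder token, and the assumption that multi-label entries are comma-separated are ours rather than the authors' actual pipeline; step 4 operates on the raw difference arrays, which are not part of the public release.

import re
import pandas as pd

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # example PII pattern (emails only)

def anonymize(text: str) -> str:
    # Step 1: mask PII with a placeholder token
    return EMAIL_RE.sub("<EMAIL>", text)

def postprocess(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["before text"] = df["before text"].map(anonymize)
    df["after text"] = df["after text"].map(anonymize)
    df = df[df["label"] != "artifact"]        # Step 2: drop entries labeled as artifact
    df = df[~df["label"].str.contains(",")]   # Step 3: drop multi-label entries (assumed comma-separated)
    return df.reset_index(drop=True)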

Personal and Sensitive Information

Author names and affiliations have been anonymized, but the draft content in the dataset might still enable people to find the actual papers through an online search.

Considerations for Using the Data

Limitations

First, the ScholaWrite dataset is currently limited to the computer science domain, as LaTeX is predominantly used in computer science journals and conferences. This domain-specific focus may restrict the dataset's generalizability to other scientific disciplines. Future work could address this limitation by collecting keystroke data from a broader range of fields with diverse writing conventions and tools, such as the humanities or biological sciences.

Second, our dataset includes contributions from only 10 participants, resulting in five final preprints on arXiv. This small sample size is partly due to privacy concerns, as the dataset captures raw keystrokes that transparently reflect real-time human reasoning. To mitigate these concerns, we removed all PII during post-processing and obtained full IRB approval for the study's procedures. However, the highly transparent nature of keystroke data may still have discouraged broader participation. Future studies could explore more robust data collection protocols, such as advanced anonymization or de-identification techniques, to better address privacy concerns and enable larger-scale participation.

Furthermore, all participants were early-career researchers (e.g., PhD students) at an R1 university in the United States. Expanding the dataset to include senior researchers, such as post-doctoral fellows and professors, could offer valuable insights into how writing strategies and revision behaviors evolve with research experience and expertise.

Third, collaborative writing is underrepresented in our dataset, as only one Overleaf project involved multiple authors. This limits our ability to analyze co-authorship dynamics and collaborative writing practices, which are common in scientific writing. Future work should prioritize collecting multi-author projects to better capture these dynamics. Additionally, the dataset is exclusive to English-language writing, which restricts its applicability to multilingual or non-English writing contexts. Expanding to multilingual settings could reveal unique cognitive and linguistic insights into writing across languages.

Despite these limitations, our study captured an end-to-end writing process for 10 unique authors, resulting in a diverse range of writing styles and revision patterns. The dataset contains approximately 62,000 keystrokes, offering fine-grained insights into the human writing process, including detailed editing and drafting actions over time. While the number of articles is limited, the granularity and volume of the data provide a rich resource for understanding writing behaviors. Prior research has shown that detailed keystroke logs, even from small datasets, can effectively model writing processes. Unlike studies focused on final outputs, our dataset enables a process-oriented analysis, emphasizing the cognitive and behavioral patterns underlying scholarly writing.

Terms of Use

By using this dataset, you hereby acknowledge and agree to abide by these terms of use, including all restrictions and responsibilities outlined herein, and understand that any violation of these terms may result in the revocation of access to the dataset and potential legal consequences.

  1. You will not use this dataset, in whole or in part, to conduct reverse searches or other methods to identify the authors, papers, projects, or applications associated with it. This includes, but is not limited to, direct or indirect efforts to deduce personal identities or project affiliations.

  2. You will not disclose any contents of this dataset on public or private platforms, publications, or presentations in a manner that could identify or lead to the identification of authors, papers, projects, or applications. Aggregated or anonymized data derived from this dataset may be disclosed only if it cannot be used to reverse identify the original sources.

  3. You are prohibited from modifying, streamlining, or adding to this dataset in ways that include or generate Personally Identifiable Information (PII). Any derivative work must comply with these terms and ensure that no PII is included or introduced.

  4. If any PII is discovered within the dataset:

  • You must not make it public under any circumstances.
  • You must immediately notify the dataset authors and provide them with details of the discovered PII.
  5. Use of this dataset is strictly limited to the purposes explicitly permitted by the dataset authors. Any use beyond the intended scope must receive prior written approval.

Additional Information

Contributions

Linghe Wang, Minhwa Lee, Ross Volkov, Luan Chau, Dongyeop Kang

BibTeX

@misc{wang2025scholawritedatasetendtoendscholarly,
      title={ScholaWrite: A Dataset of End-to-End Scholarly Writing Process},
      author={Linghe Wang and Minhwa Lee and Ross Volkov and Luan Tuyen Chau and Dongyeop Kang},
      year={2025},
      eprint={2502.02904},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.02904},
      }