gsarti committed on
Commit 79c18e8 · 1 Parent(s): 5d71c53

Corrected MQM annotations

README.md CHANGED
@@ -11,6 +11,10 @@ tags:
 - post-editing
 - translation
 - behavioral-data
+ - multidimensional-quality-metric
+ - mqm
+ - comet
+ - qe
 language_creators:
 - machine-generated
 - expert-generated
@@ -68,12 +72,14 @@ configs:
 
 ### Dataset Summary
 
- This dataset provides a convenient access to the processed `main` and `pretask` splits of the QE4PE dataset. A sample of challenging documents extracted from WMT23 evaluation data were machine translated from English to Italian and Dutch using [NLLB 3.3B](https://huggingface.co/facebook/nllb-200-3.3B), and post-edited by 12 translators per direction across 4 highlighting modalities employing various word-level quality estimation (QE) strategies to present translators with potential errors during the editing. Additional details are provided in the [main task readme](./task/main/README.md) and in our paper. During the post-editing, behavioral data (keystrokes, pauses and editing times) were collected using the [GroTE](https://github.com/gsarti/grote) online platform.
+ This dataset provides convenient access to the processed `pretask`, `main` and `posttask` splits and the questionnaires for the QE4PE study. A sample of challenging documents extracted from WMT23 evaluation data was machine translated from English to Italian and Dutch using [NLLB 3.3B](https://huggingface.co/facebook/nllb-200-3.3B) and post-edited by 12 translators per direction across 4 highlighting modalities employing various word-level quality estimation (QE) strategies to present translators with potential errors during editing. Additional details are provided in the [main task readme](./task/main/README.md) and in our paper. During post-editing, behavioral data (keystrokes, pauses and editing times) were collected using the [GroTE](https://github.com/gsarti/grote) online platform. For the main task, a subset of the data was annotated with Multidimensional Quality Metrics (MQM) by professional annotators.
 
 We publicly release the granular editing logs alongside the processed dataset to foster new research on the usability of word-level QE strategies in modern post-editing workflows.
 
 ### News 📢
 
+ **January 2025**: MQM annotations are now available for the `main` task.
+
 **October 2024**: The QE4PE dataset is released on the HuggingFace Hub! 🎉
 
 ### Repository Structure
@@ -89,6 +95,7 @@ qe4pe/
 │ └── ... # Configurations reporting the exact questionnaire questions and options.
 ├── setup/
 │ ├── highlights/ # Outputs of word-level QE strategies used to set up highlighted spans in the tasks
+ │ ├── mqm/ # MQM annotations for the main task
 │ ├── processed/ # Intermediate outputs of the selection process for the main task
 │ └── wmt23/ # Original collection of WMT23 sources and machine-translated outputs
 └── task/
@@ -115,7 +122,7 @@ The language data of QE4PE is in English (BCP-47 `en`), Italian (BCP-47 `it`) an
 
 ### Data Instances
 
- The dataset contains two configurations, corresponding to the two tasks: `main` and `pretask`. `main` contains the full data collected during the main task and analyzed during our experiments. `pretask` contains the data collected in the initial verification phase, before the main task begins.
+ The dataset contains three configurations, corresponding to the three tasks: `pretask`, `main` and `posttask`. `main` contains the full data collected during the main task and analyzed in our experiments. `pretask` contains the data collected in the initial verification phase before the main task, in which all translators worked on texts highlighted in the `supervised` modality. `posttask` contains the data collected in the final phase, in which all translators worked on texts in the `no_highlight` modality.
 
 ### Data Fields
 
@@ -221,25 +228,25 @@ A single entry in the dataframe represents a segment (~sentence) in the dataset,
 |`mt_pe_word_aligned` | Aligned visual representation of word-level edit operations (I = Insertion, D = Deletion, S = Substitution) (replace `\\n` with `\n` to show the three aligned rows). |
 |`mt_pe_char_aligned` | Aligned visual representation of character-level edit operations (I = Insertion, D = Deletion, S = Substitution) (replace `\\n` with `\n` to show the three aligned rows). |
 |`highlights` | List of dictionaries for highlighted spans with error severity and position, matching XCOMET format for word-level error annotations. |
- |**MQM annotations (`main` config only)**| |
- |`mqm_mt_annotator_id` | Annotator ID for the MQM evaluation of `mqm_mt_annotated_text`. |
- |`mqm_pe_annotator_id` | Annotator ID for the MQM evaluation of `mqm_pe_annotated_text`. |
- |`mqm_mt_rating` | 0-100 quality rating for the `mqm_mt_annotated_text` translation. |
- |`mqm_pe_rating` | 0-100 quality rating for the `mqm_pe_annotated_text` translation. |
- |`mqm_mt_annotated_text` | Version of `mt_text` annotated with MQM errors. Might differ (only slightly) from `mt_text`, included since `mqm_mt_errors` indices are computed on this string. |
- |`mqm_pe_annotated_text` | Version of `pe_text` annotated with MQM errors. Might differ (only slightly) from `pe_text`, included since `mqm_pe_errors` indices are computed on this string. |
- |`mqm_mt_fixed_text` | Proposed correction of `mqm_mt_annotated_text` following MQM annotation. |
- |`mqm_pe_fixed_text` | Proposed correction of `mqm_pe_annotated_text` following MQM annotation. |
- |`mqm_mt_errors` | List of error spans detected by the MQM annotator for the `mqm_mt_annotated_text`. Each error span dictionary contains the following fields: `text`: the span in `mqm_mt_annotated_text` containing an error. `text_start`: the start index of the error span in `mqm_mt_annotated_text`. -1 if no annotated span is present (e.g. for omissions) `text_end`: the end index of the error span in `mqm_mt_annotated_text`. -1 if no annotated span is present (e.g. for omissions) `correction`: the proposed correction in `mqm_mt_fixed_text` for the error span in `mqm_mt_annotated_text`. `correction_start`: the start index of the error span in `mqm_mt_fixed_text`. -1 if no corrected span is present (e.g. for additions) `correction_end`: the end index of the error span in `mqm_mt_fixed_text`. -1 if no corrected span is present (e.g. for additions) `description`: an optional error description provided by the annotator. `mqm_category`: the error category assigned by the annotator for the current span. One of: Addition, Omission, Mistranslation, Inconsistency, Untranslated, Punctuation, Spelling, Grammar, Inconsistent Style, Readability, Wrong Register. `severity`: the error severity for the current span. One of: Minor, Major, Neutral. `comment`: an optional comment provided by the annotator for the current span. `edit_order`: index of the edit in the current segment edit order (starting from 1). |
- |`mqm_pe_errors` | List of error spans detected by the MQM annotator for the `mqm_pe_annotated_text`. Each error span dictionary contains the following fields: `text`: the span in `mqm_pe_annotated_text` containing an error. `text_start`: the start index of the error span in `mqm_pe_annotated_text`. -1 if no annotated span is present (e.g. for omissions) `text_end`: the end index of the error span in `mqm_pe_annotated_text`. -1 if no annotated span is present (e.g. for omissions) `correction`: the proposed correction in `mqm_pe_fixed_text` for the error span in `mqm_pe_annotated_text`. `correction_start`: the start index of the error span in `mqm_pe_fixed_text`. -1 if no corrected span is present (e.g. for additions) `correction_end`: the end index of the error span in `mqm_pe_fixed_text`. -1 if no corrected span is present (e.g. for additions) `description`: an optional error description provided by the annotator. `mqm_category`: the error category assigned by the annotator for the current span. One of: Addition, Omission, Mistranslation, Inconsistency, Untranslated, Punctuation, Spelling, Grammar, Inconsistent Style, Readability, Wrong Register. `severity`: the error severity for the current span. One of: Minor, Major, Neutral. `comment`: an optional comment provided by the annotator for the current span. `edit_order`: index of the edit in the current segment edit order (starting from 1). |
-
+ |**MQM annotations (`main` config only)**| |
+ |`qa_mt_annotator_id` | Annotator ID for the MQM evaluation of `qa_mt_annotated_text`. |
+ |`qa_pe_annotator_id` | Annotator ID for the MQM evaluation of `qa_pe_annotated_text`. |
+ |`qa_mt_esa_rating` | 0-100 quality rating for the `qa_mt_annotated_text` translation, following the [ESA framework](https://aclanthology.org/2024.wmt-1.131/). |
+ |`qa_pe_esa_rating` | 0-100 quality rating for the `qa_pe_annotated_text` translation, following the [ESA framework](https://aclanthology.org/2024.wmt-1.131/). |
+ |`qa_mt_annotated_text` | Version of `mt_text` annotated with MQM errors. Might differ (only slightly) from `mt_text`, included since `qa_mt_mqm_errors` indices are computed on this string. |
+ |`qa_pe_annotated_text` | Version of `pe_text` annotated with MQM errors. Might differ (only slightly) from `pe_text`, included since `qa_pe_mqm_errors` indices are computed on this string. |
+ |`qa_mt_fixed_text` | Proposed correction of `qa_mt_annotated_text` following MQM annotation. |
+ |`qa_pe_fixed_text` | Proposed correction of `qa_pe_annotated_text` following MQM annotation. |
+ |`qa_mt_mqm_errors` | List of error spans detected by the MQM annotator for the `qa_mt_annotated_text`. Each error span dictionary contains the following fields: `text`: the span in `qa_mt_annotated_text` containing an error. `text_start`: the start index of the error span in `qa_mt_annotated_text` (-1 if no annotated span is present, e.g. for omissions). `text_end`: the end index of the error span in `qa_mt_annotated_text` (-1 if no annotated span is present, e.g. for omissions). `correction`: the proposed correction in `qa_mt_fixed_text` for the error span in `qa_mt_annotated_text`. `correction_start`: the start index of the corrected span in `qa_mt_fixed_text` (-1 if no corrected span is present, e.g. for additions). `correction_end`: the end index of the corrected span in `qa_mt_fixed_text` (-1 if no corrected span is present, e.g. for additions). `description`: an optional error description provided by the annotator. `mqm_category`: the error category assigned by the annotator for the current span. One of: Addition, Omission, Mistranslation, Inconsistency, Untranslated, Punctuation, Spelling, Grammar, Inconsistent Style, Readability, Wrong Register. `severity`: the error severity for the current span. One of: Minor, Major, Neutral. `comment`: an optional comment provided by the annotator for the current span. `edit_order`: index of the edit in the current segment edit order (starting from 1). |
+ |`qa_pe_mqm_errors` | List of error spans detected by the MQM annotator for the `qa_pe_annotated_text`. Each error span dictionary contains the following fields: `text`: the span in `qa_pe_annotated_text` containing an error. `text_start`: the start index of the error span in `qa_pe_annotated_text` (-1 if no annotated span is present, e.g. for omissions). `text_end`: the end index of the error span in `qa_pe_annotated_text` (-1 if no annotated span is present, e.g. for omissions). `correction`: the proposed correction in `qa_pe_fixed_text` for the error span in `qa_pe_annotated_text`. `correction_start`: the start index of the corrected span in `qa_pe_fixed_text` (-1 if no corrected span is present, e.g. for additions). `correction_end`: the end index of the corrected span in `qa_pe_fixed_text` (-1 if no corrected span is present, e.g. for additions). `description`: an optional error description provided by the annotator. `mqm_category`: the error category assigned by the annotator for the current span. One of: Addition, Omission, Mistranslation, Inconsistency, Untranslated, Punctuation, Spelling, Grammar, Inconsistent Style, Readability, Wrong Register. `severity`: the error severity for the current span. One of: Minor, Major, Neutral. `comment`: an optional comment provided by the annotator for the current span. `edit_order`: index of the edit in the current segment edit order (starting from 1). |
+
 ### Data Splits
 
 |`config` | `split`| |
 |------------------------------------:|-------:|--------------------------------------------------------------:|
 |`main` | `train`| 8100 (51 docs i.e. 324 sents x 25 translators) |
- |`pretask` | `train`| 950 (6 docs i.e. 38 sents x 25 translators) |
- |`posttask` | `train`| 1200 (8 docs i.e. 50 sents x 24 translators) |
+ |`pretask` | `train`| 950 (6 docs i.e. 38 sents x 25 translators) |
+ |`posttask` | `train`| 1200 (8 docs i.e. 50 sents x 24 translators) |
 |`pretask_questionnaire` | `train`| 26 (all translators, including replaced/replacements) |
 |`posttask_highlight_questionnaire` | `train`| 19 (all translators for highlight modalities + 1 replacement) |
 |`posttask_no_highlight_questionnaire`| `train`| 6 (all translators for `no_highlight` modality) |
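
The sketch below shows one way the `main` config and its MQM/ESA fields could be inspected with the `datasets` library. It is a minimal example, assuming the dataset is published on the Hub as `gsarti/qe4pe` and that list-valued fields such as `highlights` or `qa_*_mqm_errors` may be stored as stringified Python literals (as the example entry further below suggests); neither assumption is guaranteed by this card.

```python
# Minimal usage sketch (assumptions: Hub ID "gsarti/qe4pe"; list-valued fields
# may arrive either as parsed lists or as stringified Python literals).
import ast

from datasets import load_dataset

main = load_dataset("gsarti/qe4pe", "main", split="train")

def as_list(value):
    # Normalize a field that may be None, "", a stringified literal, or a list.
    if value is None or value == "":
        return []
    return ast.literal_eval(value) if isinstance(value, str) else value

row = main[0]
print("ESA rating (MT):", row.get("qa_mt_esa_rating"))
print("ESA rating (PE):", row.get("qa_pe_esa_rating"))
for err in as_list(row.get("qa_pe_mqm_errors")):
    print(err["severity"], err["mqm_category"], repr(err["text"]), "->", repr(err["correction"]))
```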
@@ -349,7 +356,47 @@ The following is an example of the subject `oracle_t1` post-editing for segment
 "mt_pe_char_aligned": "MT: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.\n" \
 "PE: De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.\n" \
 " SS SS SS ",
- "highlights": "[{'text': 'sneller', 'severity': 'minor', 'start': 43, 'end': 50}, {'text': 'onderwijs.', 'severity': 'major', 'start': 96, 'end': 106}]"
+ "highlights": """[
+     {
+         'text': 'sneller',
+         'severity': 'minor',
+         'start': 43,
+         'end': 50
+     },
+     {
+         'text': 'onderwijs.',
+         'severity': 'major',
+         'start': 96,
+         'end': 106
+     }
+ ]""",
+ # QA annotations
+ "qa_mt_annotator_id": 'qa_nld_3',
+ "qa_pe_annotator_id": 'qa_nld_1',
+ "qa_mt_esa_rating": 100.0,
+ "qa_pe_esa_rating": 80.0,
+ "qa_mt_annotated_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.",
+ "qa_pe_annotated_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en opleiding.",
+ "qa_mt_fixed_text": "De snelheid van de opkomende groei is vaak sneller dan de ontwikkeling van kwaliteitsborging en onderwijs.",
+ "qa_pe_fixed_text": "De snelheid van de ontluikende groei overtreft vaak de ontwikkeling van kwaliteitsborging en onderwijs.",
+ "qa_mt_mqm_errors": [],
+ "qa_pe_mqm_errors": [
+     {
+         "text": "opkomende",
+         "text_start": 19,
+         "text_end": 28,
+         "correction": "ontluikende",
+         "correction_start": 19,
+         "correction_end": 30,
+         "description": "Mistranslation - not the correct word",
+         "mqm_category": "Mistranslation",
+         "severity": "Minor",
+         "comment": "",
+         "edit_order": 1
+     }
+ ]
 }
 ```
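
As the field descriptions above note, `mt_pe_word_aligned` and `mt_pe_char_aligned` store literal `\n` escapes. A small sketch (reusing `row` from the loading example above) to display the three aligned rows:

```python
# Display the aligned MT/PE/edit-operation rows.
# Per the field description, the stored string contains literal "\n" escapes
# that must be replaced with real newlines before printing.
aligned = row["mt_pe_char_aligned"]
print(aligned.replace("\\n", "\n"))
```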
 
@@ -359,6 +406,10 @@ The text is provided as-is, without further preprocessing or tokenization.
 
 The datasets were parsed from GroTE inputs, logs and outputs for the QE4PE study, available in this repository. Processed dataframes were created using the `qe4pe process_task_data` command. Refer to the [QE4PE Github repository](https://github.com/gsarti/qe4pe) for additional details. The overall structure and processing of the dataset were inspired by the [DivEMT dataset](https://huggingface.co/datasets/GroNLP/divemt).
 
+ ### MQM Annotations
+
+ MQM annotations were collected using Google Sheets, and error highlights were parsed from the exported HTML output, ensuring compliance with well-formedness checks. Out of the original 51 docs (324 segments) in `main`, 24 docs (10 biomedical, 14 social, totaling 148 segments) were sampled at random and annotated by professional translators.
+
 ## Additional Information
 
 ### Metric signatures
 
task/main/doc_id_map.json CHANGED
@@ -6,6 +6,7 @@
   "unsupervised": "../../setup/highlights/unsupervised/grote_files"
  },
  "original_config": "../../setup/wmt23/wmttest2023.eng.jsonl",
+ "qa_path": "../../setup/qa/qa_df.csv",
  "map":{
   "doc1": "doc39",
   "doc2": "doc33",
task/main/processed_main.csv CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:fbae8d32a5fe4aca324c71b6be655242b486a07094274f60375029d2a001e5a0
- size 22813556
+ oid sha256:228b65c4fd8adf1ebf5f7a7cf4dc7e8d748d01751735a956bb851ecc207853e5
+ size 23222322
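
`processed_main.csv` is tracked with Git LFS, so the diff above only updates the pointer file. A quick check that a locally downloaded copy matches the new pointer's hash and size:

```python
# Verify a local task/main/processed_main.csv against the updated LFS pointer.
import hashlib
from pathlib import Path

path = Path("task/main/processed_main.csv")
digest = hashlib.sha256(path.read_bytes()).hexdigest()

print("size matches:", path.stat().st_size == 23222322)
print("sha256 matches:", digest == "228b65c4fd8adf1ebf5f7a7cf4dc7e8d748d01751735a956bb851ecc207853e5")
```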