ArielUW committed on
Commit 4e78b8f · verified · 1 Parent(s): f6d979a

Add jobtitles dataset.

dataset_creation.py ADDED
@@ -0,0 +1,102 @@
+ import os
+
+ import pandas as pd
+ from huggingface_hub import HfApi
+
+
+ def merge_and_sort(tables, output_files, output_folder, columns):
+     """Merges the source tables and writes shuffled type-0 and type-1 examples to separate CSV files."""
+     frames = []
+     for table in tables:
+         # The source files use different separators; try each candidate and
+         # accept the first parse that yields the expected columns (a wrong
+         # separator usually does not raise, it just mangles the column names).
+         for sep in (";", ",", "\t"):
+             try:
+                 df = pd.read_csv(table, sep=sep)
+             except Exception as e:
+                 print(f"Unable to read file '{table}' with sep='{sep}': {e}")
+                 continue
+             if set(columns).issubset(df.columns):
+                 frames.append(df)
+                 print(f"File '{table}' successfully prepared for merging.")
+                 break
+         else:
+             print(f"Unable to merge file '{table}': no separator produced the expected columns.")
+     full_df = pd.concat(frames).filter(items=columns, axis=1)
+     os.makedirs(output_folder, exist_ok=True)
+     zero = full_df.loc[full_df['type'] == 0]
+     zero = zero.sample(frac=1)  # shuffle examples
+     zero.to_csv(f"{output_folder}/{output_files[0]}", index=False)
+     one = full_df.loc[full_df['type'] == 1]
+     one = one.sample(frac=1)  # shuffle examples
+     one.to_csv(f"{output_folder}/{output_files[1]}", index=False)
+
+
+ def validate_csv(file_path: str, columns):
+     """
+     Validates the structure of the CSV file to ensure it contains the required columns.
+     """
+     df = pd.read_csv(file_path, sep=',')
+     required_columns = set(columns)
+     if not required_columns.issubset(df.columns):
+         raise ValueError(f"The CSV file must contain the following columns: {required_columns}")
+     print(f"CSV file '{file_path}' is valid with {len(df)} rows.")
+
+
+ def create_splits(output_folder, output_files, dataset_name, dataset_structure):
+     """Carves the shuffled type-0 and type-1 pools into the requested splits."""
+     zero = pd.read_csv(f"{output_folder}/{output_files[0]}")
+     one = pd.read_csv(f"{output_folder}/{output_files[1]}")
+     os.makedirs(f"{output_folder}/{dataset_name}", exist_ok=True)
+     for split, structure in dataset_structure.items():
+         try:
+             rows_zero = rows_one = None
+             for key, value in structure.items():
+                 if key == "zero":
+                     rows_zero = zero.iloc[:value]
+                     zero.drop(rows_zero.index, inplace=True)
+                 elif key == "one":
+                     rows_one = one.iloc[:value]
+                     one.drop(rows_one.index, inplace=True)
+                 else:
+                     print(f"Invalid key in dataset structure: {key} in {split} part.")
+             df = pd.concat([rows_zero, rows_one])
+             df = df.sample(frac=1)  # shuffle examples
+             df.to_csv(f"{output_folder}/{dataset_name}/{split}.csv", index=False)
+             print(f"Created {split} split with {len(df)} rows.")
+         except Exception as e:
+             print(f"Failure while creating the {split} split: {e}")
+
+
+ def push_dataset_to_HF(folder, dataset_name, user):
+     try:
+         # Initialize Hugging Face Hub API
+         api = HfApi()
+         repo_id = f"{user}/{dataset_name}"
+         # Specify repo_type="dataset" here
+         api.create_repo(
+             repo_id=repo_id,
+             exist_ok=True,
+             repo_type="dataset"
+         )
+         # Push files to the Hub
+         api.upload_folder(
+             folder_path=folder,
+             repo_id=repo_id,
+             repo_type="dataset",
+             commit_message=f"Add {dataset_name} dataset."
+         )
+         print(f"Dataset '{folder}/{dataset_name}' has been uploaded to {user}'s Hugging Face repo.")
+     except Exception as e:
+         print(f"Error occurred during upload: {e}")
+         raise
+
+
+ if __name__ == "__main__":
+     # os.chdir is a function; it must be called, not assigned to.
+     os.chdir("/Users/arieldrozd/Downloads/IMLLA-FinalProject")
+     tables = [
+         "./examples_monika/final_table_together.csv",
+         "./cleaned_examples_ariel/nkjp_ariel.csv",
+         "./cleaned_examples_ariel/wikipedia.csv",
+         "./cleaned_examples_ariel/nowela.csv",
+     ]
+     output_folder = "./dataset"
+     output_files = ["zero.csv", "one.csv"]
+     dataset_name = "jobtitles"
+     columns = ['type', 'source_sentence', 'target_sentence']
+     merge_and_sort(tables, output_files, output_folder, columns)
+     for file in output_files:
+         validate_csv(f"{output_folder}/{file}", columns)
+     # test split -> zero: 250, one: 250
+     # validation split -> zero: 500, one: 50
+     # train split -> zero: 4221, one: 610 -> actually: all that is left
+     dataset_structure = {
+         "test": {"zero": 250, "one": 250},
+         "validation": {"zero": 500, "one": 50},
+         "train": {"zero": 4221, "one": 610},
+     }
+     create_splits(output_folder, output_files, dataset_name, dataset_structure)
+     user = "ArielUW"
+     push_dataset_to_HF(output_folder, dataset_name, user)
jobtitles/readme.md ADDED
@@ -0,0 +1,56 @@
+ ## Dataset structure
+
+ The dataset has a simple structure. It contains CSV files with 3 comma-separated columns each:
+ type,source_sentence,target_sentence
+
+ There are 2 distinct example types (1 and 0), which are described below.
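+
+ A split can be inspected with pandas, for example (a minimal sketch; the path assumes a local checkout of this repo):
+
+ ```python
+ import pandas as pd
+
+ # Load one split; the columns are: type, source_sentence, target_sentence.
+ train = pd.read_csv("jobtitles/train.csv")
+ print(train.columns.tolist())
+ print(train["type"].value_counts())  # how many type-0 vs. type-1 examples
+ ```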
+
+ ### Type 1
+
+ These are sentences that contain nouns designating job titles. In the [source_sentence] column, the job title is gendered. In the [target_sentence] column, the gendered job title has been replaced with a so-called "personative" (see below).
+
+ An effort has been made to pick examples containing only one job title per sentence, preferably in the nominative singular. However, those criteria are not met for every sentence in the set.
+
+ Semantic and morphological edge cases have for the most part been removed from type 1.
+ _Semantic_ edge cases include personal nouns that are similar to job titles but are not job titles per se, e.g. [terrorysta], [kolekcjoner], [anarchista], [zwolennik]. In general, these are nouns which describe people in relation to their activities, interests, or political/business/organizational affiliations. Some semantic edge cases have been kept where a homonymous job title exists, e.g. [kierowca].
+ _Morphological_ edge cases include job titles that are very hard to neutralise, e.g. [listonosz].
+
+ ### Type 0
+
+ These are sentences that *do not* contain any job titles. Both sentences in the same row (the [source_sentence] and [target_sentence] columns) are exactly the same.
+
+ Sentences with job titles or semantic/morphological edge cases have been removed or edited so that they do not contain words such as:
+ * [gitarzyst(k)a], [pianist(k)a], [wokalist(k)a] and other names of musicians and hobbyists
+ * [partner(ka)], [członek/członkini], [przedstawiciel(k)a] and other names denoting people from the point of view of political or business relationships
+ * [mieszkaniec/mieszkanka] and names denoting people by ethnicity, nationality or locality
+
+ Some nouns designating persons were kept. Type 0 sentences may still contain some gendered and non-gendered personal nouns, such as:
+ * [kobieta], [mężczyzna]
+ * [matka], [ojciec], [rodzic], [syn], [córka], [dziecko]
+ * [mąż], [żona], [małżonek], [małżonka]
+ * [osoba], [człowiek], [ludzie]
+ * [pan], [pani], [państwo]
+ * [bóg], [bogini], [bóstwo]
+
+ ### Splits
+
+ The dataset is split into 3 parts with the following numbers of sentences (the type-0 and TOTAL counts for train follow the targets set in dataset_creation.py):
+
+ | split          | type 1 | type 0 | TOTAL |
+ | -------------- | ------ | ------ | ----- |
+ | train.csv      | 610    | 4221   | 4831  |
+ | validation.csv | 50     | 500    | 550   |
+ | test.csv       | 250    | 250    | 500   |
+
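+ The splits can also be loaded with the `datasets` library. A minimal sketch, assuming the repo layout produced by `dataset_creation.py` above (split files under `jobtitles/`):
+
+ ```python
+ from datasets import load_dataset
+
+ # Point the generic CSV loader at the split files inside the Hub repo.
+ data_files = {
+     "train": "jobtitles/train.csv",
+     "validation": "jobtitles/validation.csv",
+     "test": "jobtitles/test.csv",
+ }
+ ds = load_dataset("ArielUW/jobtitles", data_files=data_files)
+ print(ds)
+ ```
+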
+ ## Sources
+
+ Type 1 examples were found in the NKJP corpus or with targeted Google searches. Some sentences have been altered so that they better meet the criteria.
+
+ ### Splitting source texts into sentences
+
+ Most of the rows contain single sentences. However, some of the automatic or manual delimitations may contain errors. In other cases, sentence delimitation may be controversial (especially for literary texts or spoken-language transcripts).
+
+ Sentences from _Wikipedia_ and from _WolneLektury_ have been split into sentences using the _Sentence Splitter_ Python library: [https://github.com/mediacloud/sentence-splitter]. Results from _WolneLektury_ were additionally corrected manually.
+ Sentences from _NKJP_ have been split into sentences using the _spaCy_ library (without corrections) or with an ad-hoc heuristic method combined with plenty of manual corrections.
+ Sentences found with the help of the _Google Search Engine_ have been copied manually.
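+
+ For reference, a minimal sketch of the _Sentence Splitter_ step (Polish is among the languages the library ships splitting rules for):
+
+ ```python
+ from sentence_splitter import SentenceSplitter
+
+ # 'pl' selects the Polish non-breaking-prefix rules bundled with the library.
+ splitter = SentenceSplitter(language="pl")
+ text = "Ala jest nauczycielką. Pracuje w szkole."
+ for sentence in splitter.split(text):
+     print(sentence)
+ ```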
jobtitles/test ADDED
The diff for this file is too large to render. See raw diff
 
jobtitles/test.csv ADDED
The diff for this file is too large to render. See raw diff
 
jobtitles/train ADDED
The diff for this file is too large to render. See raw diff
 
jobtitles/train.csv ADDED
The diff for this file is too large to render. See raw diff
 
jobtitles/validation ADDED
The diff for this file is too large to render. See raw diff
 
jobtitles/validation.csv ADDED
The diff for this file is too large to render. See raw diff
 
one.csv ADDED
The diff for this file is too large to render. See raw diff
 
zero.csv ADDED
The diff for this file is too large to render. See raw diff