hhoangphuoc committed on
Commit 974526a · verified · 1 Parent(s): 09b4186

Update README.md

Files changed (1):
  1. README.md +57 -44

README.md CHANGED
@@ -1,47 +1,60 @@
- ---
- dataset_info:
-   features:
-   - name: audio
-     dtype:
-       audio:
-         sampling_rate: 16000
-   - name: sampling_rate
-     dtype: int64
-   - name: transcript
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 26537763371.78
-     num_examples: 185402
-   - name: validation
-     num_bytes: 2948998696.305
-     num_examples: 20601
-   - name: test
-     num_bytes: 7390220553.37
-     num_examples: 51501
-   download_size: 29378895903
-   dataset_size: 36876982621.455
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
-   - split: validation
-     path: data/validation-*
-   - split: test
-     path: data/test-*
- task_categories:
- - automatic-speech-recognition
- tags:
- - paralinguistic
- pretty_name: a
- size_categories:
- - 100K<n<1M
- ---

  A preprocessed version of `Switchboard Corpus`. The corpus audio has been upsampled to 16kHz with separated channels, and the transcripts have been processed
  with special treatment for paralinguistic events, particularly laughter and speech-laughs.
- This preprocessed dataset has been processed for ASR task. For the original dataset, please check out the original link: https://catalog.ldc.upenn.edu/LDC97S62

  The dataset has been split into train, test and validation sets with a 70/20/10 ratio, summarized as follows:

  ```python
@@ -63,12 +76,12 @@ An example of the content is this dataset:
  ```
  ```

- Regarding the total amount of laughter and speech-laugh existing in the dataset, here is the overview:
  ```bash
  Train Dataset (swb_train): {'laughter': 16044, 'speechlaugh': 9586}

  Validation Dataset (swb_val): {'laughter': 1845, 'speechlaugh': 1133}

  Test Dataset (swb_test): {'laughter': 4335, 'speechlaugh': 2775}
- ```
-

+ ---
+ dataset_info:
+   features:
+   - name: audio
+     dtype:
+       audio:
+         sampling_rate: 16000
+   - name: sampling_rate
+     dtype: int64
+   - name: transcript
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 26537763371.78
+     num_examples: 185402
+   - name: validation
+     num_bytes: 2948998696.305
+     num_examples: 20601
+   - name: test
+     num_bytes: 7390220553.37
+     num_examples: 51501
+   download_size: 29378895903
+   dataset_size: 36876982621.455
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/train-*
+   - split: validation
+     path: data/validation-*
+   - split: test
+     path: data/test-*
+ task_categories:
+ - automatic-speech-recognition
+ tags:
+ - paralinguistic
+ - laughter
+ pretty_name: switchboard-speechlaugh
+ size_categories:
+ - 100K<n<1M
+ ---
+ ## Corpus Overview
  A preprocessed version of `Switchboard Corpus`. The corpus audio has been upsampled to 16kHz with separated channels, and the transcripts have been processed
  with special treatment for paralinguistic events, particularly laughter and speech-laughs.
+ This preprocessed dataset has been prepared for the ASR task. The original dataset, from its original authors, is available at: https://catalog.ldc.upenn.edu/LDC97S62

+ The dataset can also be downloaded from:
+
+ https://drive.google.com/drive/folders/1YhpWgzCwc4cVhYJPcjuLWy84s-L0hJbf
+
+ or using `gdown` (note the `--folder` flag, since the link points to a Drive folder):
+
+ ```bash
+ gdown --folder 1YhpWgzCwc4cVhYJPcjuLWy84s-L0hJbf -O /path/to/dataset/switchboard
+ ```
+
+ ## Corpus Structure
  The dataset has been split into train, test and validation sets with a 70/20/10 ratio, summarized as follows:

  ```python
 
  ```
  ```

+ ## Specifications
+ Regarding the total number of `laughter` and `speech-laugh` events in the dataset, which are used for the specific task of laughter and speech-laugh recognition, here is an additional overview:
  ```bash
  Train Dataset (swb_train): {'laughter': 16044, 'speechlaugh': 9586}

  Validation Dataset (swb_val): {'laughter': 1845, 'speechlaugh': 1133}

  Test Dataset (swb_test): {'laughter': 4335, 'speechlaugh': 2775}
+ ```
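
Once the dataset is downloaded, per-split event totals like the ones above can be reproduced by scanning the `transcript` field. The marker tokens below (`[LAUGHTER]`, `[SPEECH_LAUGH]`) and the dataset repo id in the commented loading lines are assumptions for illustration only; the card does not state the exact annotation scheme. A minimal sketch:

```python
from collections import Counter

# Hypothetical markers: the exact tokens used to annotate laughter and
# speech-laugh in these transcripts are an assumption, not taken from the card.
LAUGHTER_TOKEN = "[LAUGHTER]"
SPEECHLAUGH_TOKEN = "[SPEECH_LAUGH]"

def count_paralinguistic_events(transcripts):
    """Tally laughter and speech-laugh markers over an iterable of transcripts."""
    counts = Counter(laughter=0, speechlaugh=0)
    for text in transcripts:
        counts["laughter"] += text.count(LAUGHTER_TOKEN)
        counts["speechlaugh"] += text.count(SPEECHLAUGH_TOKEN)
    return dict(counts)

# Loading sketch (the repo id is hypothetical -- replace with the actual one):
# from datasets import load_dataset
# swb_train = load_dataset("hhoangphuoc/switchboard", split="train")
# print(count_paralinguistic_events(ex["transcript"] for ex in swb_train))

sample = [
    "yeah [LAUGHTER] that was funny",
    "i know [SPEECH_LAUGH] right [LAUGHTER]",
]
print(count_paralinguistic_events(sample))
# {'laughter': 2, 'speechlaugh': 1}
```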