# Faces in Event Streams (FES): An Annotated Face Dataset for Event Cameras

![image/png](https://cdn-uploads.huggingface.co/production/uploads/67056b2e6409e548690b1b6f/B1ZLZEdrgFUXVImVi_rLO.png)

<h6><p align="center"> Figure 1: Faces and facial landmarks detected using the model produced by this project.</p></h6>

<p align="center"> Our dataset contains 689 minutes of recorded event streams and 1.6 million annotated faces with bounding boxes and five-point facial landmarks. </p>

## Dataset description

![image/png](https://cdn-uploads.huggingface.co/production/uploads/67056b2e6409e548690b1b6f/5EsGAZHrPTcacR8DEW2W7.png)

<h6>Figure 2: File structure of the FES dataset, with green representing an event stream and blue representing annotations: a) The preprocessed data are divided into three folders, containing bounding box annotations only, both bounding box and facial landmark annotations, and event streams in the h5 format. The raw dataset contains lab and wild folders with raw videos and annotations. b) Each controlled-experiment (Lab) file has an individual subject ID and an experiment ID. Each file in the uncontrolled (Wild) dataset contains a scene ID identifying the recording location and the number (ID) of an experiment.</h6>

The final dataset contains both the originally collected raw files and the preprocessed data. To produce the preprocessed data from the raw files, the reader can refer to the preprocessing folder of this repo. The raw files contain video in the “raw” format that can be rendered, and annotations in the “xml” format. Meanwhile, the converted files contain a training-ready dataset in the “npy” format, annotations for bounding boxes and facial landmarks, and “h5” files, a binary format for working with the event stream data as arrays in Python.
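
As an example, a preprocessed recording can be inspected with `h5py` and `numpy`. The snippet below is a minimal sketch: the file names and the `events` key are illustrative assumptions, not guaranteed by the dataset, so list the keys of your own copy of the files first.

```python
import h5py
import numpy as np

# Minimal sketch for inspecting one preprocessed recording.
# NOTE: the file names and the "events" key are assumptions for illustration;
# print the actual keys of your files before relying on them.
with h5py.File("recording.h5", "r") as f:
    print(list(f.keys()))         # discover how this file is laid out
    events = f["events"][:]       # hypothetical key holding the event array

# Bounding-box and landmark annotations ship as "npy" arrays.
labels = np.load("recording_labels.npy", allow_pickle=True)
print(events.shape, labels.shape)
```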

The integration of event streams with annotated labels was based on the time dimension. Since events were recorded at microsecond precision, the timeline of the labels was also converted to microseconds, although it originally had millisecond precision and was derived from the frame number at a frame rate of 30 Hz.
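
As a concrete illustration of this conversion (a sketch of the arithmetic, not code from the repository): at 30 Hz, frame n starts at n / 30 seconds, i.e. n * 1,000,000 / 30 microseconds.

```python
def frame_to_microseconds(frame_idx: int, fps: float = 30.0) -> int:
    """Map a frame index on the label timeline to event-stream microseconds."""
    return round(frame_idx * 1_000_000 / fps)

# Frame 90 of a 30 Hz recording falls 3 s, i.e. 3,000,000 us, into the stream.
assert frame_to_microseconds(90) == 3_000_000
```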

### If you use the dataset/source code/pre-trained models in your research, please cite our work:
```bibtex
@Article{s24051409,
  AUTHOR = {Bissarinova, Ulzhan and Rakhimzhanova, Tomiris and Kenzhebalin, Daulet and Varol, Huseyin Atakan},
  TITLE = {Faces in Event Streams (FES): An Annotated Face Dataset for Event Cameras},
  JOURNAL = {Sensors},
  VOLUME = {24},
  YEAR = {2024},
  NUMBER = {5},
  ARTICLE-NUMBER = {1409},
  URL = {https://www.mdpi.com/1424-8220/24/5/1409},
  ISSN = {1424-8220},
  DOI = {10.3390/s24051409}
}
```