Update README.md
README.md CHANGED
@@ -14,6 +14,91 @@ tags:
217,646,487 images with lat/lon coordinates from Flickr. This repo is a filtered version of https://huggingface.co/datasets/bigdata-pw/Flickr, keeping all rows that contain a valid lat/lon pair.

## Filter Process

It was harder than expected. The main problem is that the connection to the HF Hub sometimes breaks, and as far as I know DuckDB cannot deal with this very well, so this code does not work:
```python
import duckdb

duckdb.sql("INSTALL httpfs; LOAD httpfs;")  # required extension
df = duckdb.sql(
    "SELECT latitude, longitude "
    "FROM 'hf://datasets/bigdata-pw/Flickr/*.parquet' "
    "WHERE latitude IS NOT NULL AND longitude IS NOT NULL"
).df()
```
Instead, I used a more granular process to make sure I really got all the files.
```python
import duckdb
import pandas as pd  # duckdb's .df() returns pandas DataFrames
from tqdm import tqdm
import os

# Create a directory to store the reduced parquet files
output_dir = "reduced_flickr_data"
if not os.path.exists(output_dir):
    os.makedirs(output_dir)

# Install and load the httpfs extension (only needs to be done once)
try:
    duckdb.sql("INSTALL httpfs; LOAD httpfs;")  # required extension
except Exception as e:
    print(f"httpfs most likely already installed/loaded: {e}")

# Suppress DuckDB's built-in progress bar; tqdm reports progress instead
duckdb.sql("SET enable_progress_bar = false;")

# Get the set of already-downloaded files to make the script idempotent
downloaded_files = set(os.listdir(output_dir))

for i in tqdm(range(0, 5150), desc="Downloading and processing files"):
    part_number = str(i).zfill(5)  # pad with zeros to get the 00000 format
    file_name = f"part-{part_number}.parquet"
    output_path = os.path.join(output_dir, file_name)

    # Skip if the file already exists
    if file_name in downloaded_files:
        # print(f"Skipping {file_name} (already downloaded)")  # optional, more verbose output
        continue

    try:
        # Keep only rows with a usable, non-zero lat/lon pair
        query = f"""
            SELECT *
            FROM 'hf://datasets/bigdata-pw/Flickr/{file_name}'
            WHERE latitude IS NOT NULL
              AND longitude IS NOT NULL
              AND (
                    (
                        TRY_CAST(latitude AS DOUBLE) IS NOT NULL AND
                        TRY_CAST(longitude AS DOUBLE) IS NOT NULL AND
                        (TRY_CAST(latitude AS DOUBLE) != 0.0 OR TRY_CAST(longitude AS DOUBLE) != 0.0)
                    )
                    OR
                    (
                        TRY_CAST(latitude AS VARCHAR) IS NOT NULL AND
                        TRY_CAST(longitude AS VARCHAR) IS NOT NULL AND
                        (latitude != '0' AND latitude != '0.0' AND longitude != '0' AND longitude != '0.0')
                    )
              )
        """
        df = duckdb.sql(query).df()

        # Save the filtered rows to a local parquet file
        df.to_parquet(output_path)
        # print(f"saved part {part_number}")  # optional, for logging purposes

    except Exception as e:
        print(f"Error processing {file_name}: {e}")
        continue  # continue with the next file even if an error occurs

print("Finished processing all files.")
```
The first run took roughly 15 hours on my connection. When it finished, a handful of files had failed, fewer than 5 in my case. Rerun the script to add the missing files; it will finish in a minute.
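
To check which parts are still missing before (or after) a rerun, a minimal sketch like the one below can help. It is not part of the original workflow; it only reuses the `reduced_flickr_data` directory, the `part-XXXXX.parquet` naming, and the `range(0, 5150)` part numbering from the script above, and the final count is an optional sanity check.

```python
import os

import duckdb

output_dir = "reduced_flickr_data"  # same directory as in the download script

# Which of the expected part files have not been written yet?
expected = {f"part-{str(i).zfill(5)}.parquet" for i in range(0, 5150)}
missing = sorted(expected - set(os.listdir(output_dir)))
print(f"{len(missing)} parts missing")
print(missing[:10])  # show the first few missing file names, if any

# Optional sanity check once everything is downloaded:
# count the rows across all reduced parquet files with a DuckDB glob query.
print(duckdb.sql(f"SELECT count(*) AS row_count FROM '{output_dir}/*.parquet'"))
```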
---
Below is the rest of the original dataset card.

---