---
license: odc-by
pretty_name: Flickr
size_categories:
  - 100M<n<1B
task_categories:
  - image-classification
  - image-to-text
  - text-to-image
tags:
  - geospatial
---

# Dataset Card for Flickr

217,646,487 images with latitude/longitude coordinates from Flickr. This repo is a filtered version of https://huggingface.co/datasets/bigdata-pw/Flickr containing all rows with a valid lat/lon pair.
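To get started, you can query the filtered data directly from the Hub with DuckDB. This is a minimal sketch; the repo path and glob pattern are assumptions about this repo's file layout, so adjust them to the actual file names:

```python
import duckdb

duckdb.sql("INSTALL httpfs; LOAD httpfs;")  # required for hf:// URLs

# Note: repo path and glob are assumptions; adjust to the actual layout
df = duckdb.sql("""
    SELECT latitude, longitude
    FROM 'hf://datasets/do-me/Flickr-Geo/*.parquet'
    LIMIT 1000
""").df()
print(df.head())
```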

## Filter Process

It was harder than expected. The main problem is that the connection to the HF hub occasionally breaks mid-query. As far as I know, DuckDB cannot recover from this gracefully, so the following one-shot query does not work reliably:

```python
import duckdb

duckdb.sql("INSTALL httpfs; LOAD httpfs;")  # required extension
df = duckdb.sql("""
    SELECT latitude, longitude
    FROM 'hf://datasets/bigdata-pw/Flickr/*.parquet'
    WHERE latitude IS NOT NULL AND longitude IS NOT NULL
""").df()
```

Instead, I used a more granular process to make sure I really got all the files.

```python
import duckdb
from tqdm import tqdm
import os

# Create a directory to store the reduced parquet files
output_dir = "reduced_flickr_data"
os.makedirs(output_dir, exist_ok=True)

# Install and load the httpfs extension (only needs to be done once)
try:
    duckdb.sql("INSTALL httpfs; LOAD httpfs;")  # required extension
except Exception as e:
    print(f"httpfs most likely already installed/loaded: {e}")

duckdb.sql("SET enable_progress_bar = false;")

# Get a list of already downloaded files to make the script idempotent
downloaded_files = set(os.listdir(output_dir))

for i in tqdm(range(0, 5150), desc="Downloading and processing files"):
    part_number = str(i).zfill(5)  # pad with zeros to get the 00000 format
    file_name = f"part-{part_number}.parquet"
    output_path = os.path.join(output_dir, file_name)

    # Skip if the file already exists
    if file_name in downloaded_files:
        continue

    try:
        # Keep only rows with a usable, non-(0, 0) coordinate pair
        query = f"""
            SELECT *
            FROM 'hf://datasets/bigdata-pw/Flickr/{file_name}'
            WHERE latitude IS NOT NULL
                AND longitude IS NOT NULL
                AND (
                    (
                        TRY_CAST(latitude AS DOUBLE) IS NOT NULL AND
                        TRY_CAST(longitude AS DOUBLE) IS NOT NULL AND
                        (TRY_CAST(latitude AS DOUBLE) != 0.0 OR TRY_CAST(longitude AS DOUBLE) != 0.0)
                    )
                    OR
                    (
                        TRY_CAST(latitude AS VARCHAR) IS NOT NULL AND
                        TRY_CAST(longitude AS VARCHAR) IS NOT NULL AND
                        (latitude != '0' AND latitude != '0.0' AND longitude != '0' AND longitude != '0.0')
                    )
                )
        """
        df = duckdb.sql(query).df()

        # Save the filtered rows for this part to a local parquet file
        df.to_parquet(output_path)

    except Exception as e:
        print(f"Error processing {file_name}: {e}")
        continue  # continue to the next file even if this one fails

print("Finished processing all files.")
```

The first run took roughly 15 hours on my connection. When it finishes, a handful of files will have failed; for me it was fewer than 5. Rerun the script to fetch the missing files; it will finish in a minute.
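If you want to check which parts are still missing before rerunning, a quick sketch like this (using the same `output_dir` and part count as the script above) will list them:

```python
import os

output_dir = "reduced_flickr_data"
# Expected part file names: part-00000.parquet through part-05149.parquet
expected = {f"part-{str(i).zfill(5)}.parquet" for i in range(0, 5150)}
missing = sorted(expected - set(os.listdir(output_dir)))
print(f"{len(missing)} files missing: {missing}")
```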


Below is the rest of the original dataset card.


## Dataset Details

### Dataset Description

Approximately 5 billion images from Flickr. Entries include URLs to images at various resolutions and available metadata such as license, geolocation, dates, description and machine tags (camera info).
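To see exactly which metadata columns a given part contains, you can inspect the schema of a single file with DuckDB. This is just a sketch; `part-00000.parquet` is one example file name from the range used above:

```python
import duckdb

duckdb.sql("INSTALL httpfs; LOAD httpfs;")  # required for hf:// URLs

# Print the column names and types of one part file
duckdb.sql(
    "DESCRIBE SELECT * FROM 'hf://datasets/bigdata-pw/Flickr/part-00000.parquet'"
).show()
```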

- **Curated by:** hlky
- **License:** Open Data Commons Attribution License (ODC-By) v1.0

## Citation Information

```bibtex
@misc{flickr,
  author = {hlky},
  title = {Flickr},
  year = {2024},
  publisher = {hlky},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/datasets/bigdata-pw/Flickr}}
}
```

## Attribution Information

Contains information from [Flickr](https://huggingface.co/datasets/bigdata-pw/Flickr) which is made available
under the [ODC Attribution License](https://opendatacommons.org/licenses/by/1-0/).