Dataset Card for Flickr

217,646,487 images with latitude/longitude coordinates from Flickr. This repo is a filtered version of https://huggingface.co/datasets/bigdata-pw/Flickr, keeping only the rows that contain a valid latitude/longitude pair.
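
To query the filtered data directly from the Hub, DuckDB can read the parquet files over hf:// just like the filter script below does. This is a minimal sketch; the repository path is a placeholder and should be replaced with this repo's id (and adjusted to its file layout).

import duckdb

duckdb.sql("INSTALL httpfs; LOAD httpfs;")  # required for hf:// paths

# NOTE: '<this-repo>' is a placeholder; substitute this repository's id.
df = duckdb.sql(
    "SELECT latitude, longitude FROM 'hf://datasets/<this-repo>/*.parquet' LIMIT 10"
).df()
print(df)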

Filter Process

Filtering was harder than expected. The main problem is that the connection to the HF Hub occasionally drops, and as far as I know DuckDB cannot recover from that gracefully, so the straightforward single-query approach does not work:

import duckdb

duckdb.sql("INSTALL httpfs; LOAD httpfs;")  # required extension
# Fails partway through once the connection to the Hub drops.
df = duckdb.sql("SELECT latitude, longitude FROM 'hf://datasets/bigdata-pw/Flickr/*.parquet' WHERE latitude IS NOT NULL AND longitude IS NOT NULL").df()
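
Querying a single part file at a time is much more robust, and it also makes it easy to inspect the schema before writing a filter. A minimal sketch (DESCRIBE against one part of the original dataset):

import duckdb

duckdb.sql("INSTALL httpfs; LOAD httpfs;")

# Show column names and types for one part file of the original dataset.
print(duckdb.sql("DESCRIBE SELECT * FROM 'hf://datasets/bigdata-pw/Flickr/part-00000.parquet'"))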

Instead, I used a more granular process to make sure I really got all the files.

import duckdb
import pandas as pd
from tqdm import tqdm
import os

# Create a directory to store the reduced parquet files
output_dir = "reduced_flickr_data"
if not os.path.exists(output_dir):
    os.makedirs(output_dir)

# Install and load the httpfs extension (only needs to be done once)
try:
    duckdb.sql("INSTALL httpfs; LOAD httpfs;")  # required extension
except Exception as e:
    print(f"httpfs most likely already installed/loaded: {e}")

duckdb.sql("SET enable_progress_bar = false;")
# Get a list of already downloaded files to make the script idempotent
downloaded_files = set(os.listdir(output_dir))

for i in tqdm(range(0, 5150), desc="Downloading and processing files"):
    part_number = str(i).zfill(5)  # Pad with zeros to get 00000 format
    file_name = f"part-{part_number}.parquet"
    output_path = os.path.join(output_dir, file_name)

    # Skip if the file already exists
    if file_name in downloaded_files:
        #print(f"Skipping {file_name} (already downloaded)")  # Optional:  Uncomment for more verbose output.
        continue

    try:
        # Build the DuckDB query: keep only rows with a usable, non-zero lat/lon pair
        query = f"""
            SELECT *
            FROM 'hf://datasets/bigdata-pw/Flickr/{file_name}'
            WHERE latitude IS NOT NULL
                AND longitude IS NOT NULL
                AND (
                (
                    TRY_CAST(latitude AS DOUBLE) IS NOT NULL AND
                    TRY_CAST(longitude AS DOUBLE) IS NOT NULL AND
                    (TRY_CAST(latitude AS DOUBLE) != 0.0 OR TRY_CAST(longitude AS DOUBLE) != 0.0)
                )
                OR
                (
                    TRY_CAST(latitude AS VARCHAR) IS NOT NULL AND
                    TRY_CAST(longitude AS VARCHAR) IS NOT NULL AND
                    (latitude != '0' AND latitude != '0.0' AND longitude != '0' AND longitude != '0.0')
                )
                )
        """
        df = duckdb.sql(query).df()

        # Save the filtered rows for this part to a local parquet file.
        df.to_parquet(output_path)
        #print(f"saved part {part_number}")  # optional, for logging purposes

    except Exception as e:
        print(f"Error processing {file_name}: {e}")
        continue  # Continue to the next iteration even if an error occurs

print("Finished processing all files.")

The first run took roughly 15 hours on my connection. When it finishes, a handful of files will likely have failed; for me it was fewer than 5. Rerun the script to fetch the missing files; the second pass finishes in about a minute.
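
To see which parts are still missing before (or after) a rerun, a small check like the following helps. It is a sketch based on the naming scheme above; the merged filename at the end is just an example.

import os

output_dir = "reduced_flickr_data"
expected = {f"part-{str(i).zfill(5)}.parquet" for i in range(0, 5150)}
missing = sorted(expected - set(os.listdir(output_dir)))
print(f"{len(missing)} parts still missing: {missing[:10]}")

# Once nothing is missing, the parts can be merged locally with DuckDB, e.g.:
# import duckdb
# duckdb.sql("COPY (SELECT * FROM 'reduced_flickr_data/*.parquet') TO 'flickr_geo.parquet' (FORMAT PARQUET)")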


Below is the rest of the original dataset card.


Dataset Details

Dataset Description

Approximately 5 billion images from Flickr. Entries include URLs to images at various resolutions and available metadata such as license, geolocation, dates, description and machine tags (camera info).

  • Curated by: hlky
  • License: Open Data Commons Attribution License (ODC-By) v1.0

Citation Information

@misc{flickr,
  author = {hlky},
  title = {Flickr},
  year = {2024},
  publisher = {hlky},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/datasets/bigdata-pw/Flickr}}
}

Attribution Information

Contains information from [Flickr](https://huggingface.co/datasets/bigdata-pw/Flickr), which is made available under the [ODC Attribution License](https://opendatacommons.org/licenses/by/1-0/).