
Accessing the font-square-v2 Dataset on Hugging Face

The font-square-v2 dataset is hosted on Hugging Face at blowing-up-groundhogs/font-square-v2. It is stored in WebDataset format, with tar files organized as follows:
- tars/train/: Contains {000..499}.tar shards for the main training split.
- tars/fine_tune/: Contains {000..049}.tar shards for fine-tuning.
Each tar file contains multiple samples, where each sample includes:
- An RGB image (.rgb.png)
- A black-and-white image (.bw.png)
- A JSON file (.json) with metadata (e.g. text and writer ID)
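To sanity-check the layout of a downloaded shard before building a pipeline, you can inspect it with Python's standard tarfile module. This is a minimal sketch; the shard path is an example and should point at any tar file you have downloaded:

import json
import tarfile

# Open one shard (example path) and list the first few members
with tarfile.open("tars/train/000.tar") as tar:
    for member in tar.getmembers()[:6]:
        print(member.name)  # <key>.rgb.png, <key>.bw.png, <key>.json
        if member.name.endswith(".json"):
            metadata = json.load(tar.extractfile(member))
            print(metadata)  # e.g. text and writer ID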
For details on how the synthetic dataset was generated, please refer to our paper: Synthetic Dataset Generation.
You can access the dataset either by downloading it locally or by streaming it directly over HTTP.
1. Downloading the Dataset Locally
You can download the dataset locally using either Git LFS or the huggingface_hub Python library.
Using Git LFS
Clone the repository (ensure Git LFS is installed):
git lfs clone https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2
This creates a local directory font-square-v2 containing the tars/ folder with the subdirectories train/ and fine_tune/.
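Note that recent versions of Git LFS deprecate the git lfs clone subcommand; with Git LFS installed, a plain git clone of the same URL also fetches the LFS-tracked tar files.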
Using the huggingface_hub Python Library
Alternatively, download a snapshot of the dataset:
from huggingface_hub import snapshot_download
# Download the repository; the local path is returned
local_dir = snapshot_download(repo_id="blowing-up-groundhogs/font-square-v2", repo_type="dataset")
print("Dataset downloaded to:", local_dir)
After downloading, the tar shards are located in:
local_dir/tars/train/{000..499}.tar
local_dir/tars/fine_tune/{000..049}.tar
Using WebDataset with Local Files
Once downloaded, you can load the dataset using WebDataset. For example, to load the training split:
import webdataset as wds
import os
local_dir = "path/to/font-square-v2" # Update as needed
# Load training shards
train_pattern = os.path.join(local_dir, "tars", "train", "{000..499}.tar")
train_dataset = wds.WebDataset(train_pattern).decode("pil")
for sample in train_dataset:
    rgb_image = sample["rgb.png"]  # PIL image
    bw_image = sample["bw.png"]    # PIL image
    metadata = sample["json"]      # dict with text, writer ID, etc.
    print("Training sample metadata:", metadata)
    break
And similarly for the fine-tune split:
fine_tune_pattern = os.path.join(local_dir, "tars", "fine_tune", "{000..049}.tar")
fine_tune_dataset = wds.WebDataset(fine_tune_pattern).decode("pil")
2. Streaming the Dataset Directly Over HTTP
If you prefer not to download the shards, you can stream them directly from Hugging Face using the CDN (provided the tar files are public). For example:
import webdataset as wds
url_pattern = (
    "https://huggingface.co/datasets/blowing-up-groundhogs/font-square-v2"
    "/resolve/main/tars/train/{000..499}.tar"
)
dataset = wds.WebDataset(url_pattern).decode("pil")
for sample in dataset:
    rgb_image = sample["rgb.png"]
    bw_image = sample["bw.png"]
    metadata = sample["json"]
    print("Sample metadata:", metadata)
    break
(Adjust the shard range accordingly for the fine-tune split.)
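For training, you typically want shuffled samples rather than strict shard order. WebDataset's shuffle stage keeps an in-memory buffer while streaming; the buffer size below is an illustrative choice, not a value from the dataset card:

# Shuffle with a 1000-sample buffer while streaming
dataset = wds.WebDataset(url_pattern).shuffle(1000).decode("pil")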
Additional Considerations
Decoding:
The .decode("pil") method in WebDataset converts image bytes into PIL images. To use PyTorch tensors, add a transform step:
import torchvision.transforms as transforms

transform = transforms.ToTensor()
dataset = (
    wds.WebDataset(train_pattern)
    .decode("pil")
    .map(lambda sample: {
        "rgb": transform(sample["rgb.png"]),
        "bw": transform(sample["bw.png"]),
        "metadata": sample["json"],
    })
)
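From there, the mapped dataset can be wrapped in a standard PyTorch DataLoader. The sketch below assumes the image tensors share a common size (add a Resize transform to the map step otherwise); the batch size and worker count are illustrative:

import torch
from torch.utils.data import DataLoader

def collate(samples):
    # Stack image tensors; keep metadata as a plain list of dicts
    return {
        "rgb": torch.stack([s["rgb"] for s in samples]),
        "bw": torch.stack([s["bw"] for s in samples]),
        "metadata": [s["metadata"] for s in samples],
    }

# batch_size=None because batching is handled by WebDataset's .batched() stage
loader = DataLoader(
    dataset.batched(8, collation_fn=collate),
    batch_size=None,
    num_workers=4,
)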
Shard Naming:
Ensure your WebDataset pattern matches the following structure:
tars/
├── train/
│   └── {000..499}.tar
└── fine_tune/
    └── {000..049}.tar
By following these instructions, you can easily integrate the font-square-v2 dataset into your project for training and fine-tuning.