---
license: mit
language:
- cs
---
# Czech Synthetic Multiline Text Recognition Dataset
A large-scale synthetic dataset for Czech multiline text recognition, containing 100,000 text images with corresponding transcriptions. Created using [SynthTiger](https://github.com/clovaai/synthtiger).
## Dataset Description
This dataset consists of synthetically generated images of Czech text with multiple lines per image, designed for training optical character recognition (OCR) models that can handle complex multiline text layouts. Each image contains 3 lines of Czech text rendered with various fonts and alignments.
### Dataset Statistics
- **Total samples**: 100,000 image-text pairs
- **Language**: Czech (cs_CZ)
- **Lines per image**: 3 lines
- **Image format**: JPEG
- **Storage format**: Parquet files (10 files total)
- **Total size**: ~6.3 GB
## Dataset Structure
The dataset is organized into 10 Parquet files for efficient storage and loading:
```
data/
├── train-00000-of-00010.parquet # 10,000 samples
├── train-00001-of-00010.parquet # 10,000 samples
├── ...
├── train-00008-of-00010.parquet # 10,000 samples
└── train-00009-of-00010.parquet # 10,000 samples
```
Each Parquet file contains two columns:
- `image`: JPEG image, decoded to a PIL Image by the `datasets` library's `Image` feature
- `text`: Ground-truth transcription, with individual lines separated by newline characters
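If you want to inspect a single shard without loading the whole dataset, one Parquet file can be downloaded and read on its own. The snippet below is a minimal sketch; it assumes the `huggingface_hub` and `pyarrow` packages are installed.
```python
from huggingface_hub import hf_hub_download
import pyarrow.parquet as pq
# Fetch one shard from the dataset repository
path = hf_hub_download(
    repo_id="Empatixx/synth-text-recognition-multilines-cs",
    filename="data/train-00000-of-00010.parquet",
    repo_type="dataset",
)
table = pq.read_table(path)
print(table.num_rows)      # expected: 10,000 rows per shard
print(table.column_names)  # expected: ['image', 'text']
# The `image` column stores the encoded JPEG bytes; the `datasets` library
# decodes them to PIL Images on access.
```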
## Usage
### Loading with Hugging Face Datasets
```python
from datasets import load_dataset
# Load the entire dataset
dataset = load_dataset("Empatixx/synth-text-recognition-multilines-cs")
# Access samples
sample = dataset['train'][0]
image = sample['image'] # PIL Image object
text = sample['text'] # Multiline text transcription
# Load specific splits or streaming
dataset = load_dataset("Empatixx/synth-text-recognition-multilines-cs", split="train[:1000]") # First 1000 samples
dataset = load_dataset("Empatixx/synth-text-recognition-multilines-cs", streaming=True) # Stream the dataset
```
### Working with Multiline Text
```python
# Split text into individual lines
sample = dataset['train'][0]
lines = sample['text'].split('\n')
print(f"Number of lines: {len(lines)}")
for i, line in enumerate(lines):
    print(f"Line {i+1}: {line}")
```
### PyTorch DataLoader Example
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torchvision import transforms
import torch
# Load dataset
dataset = load_dataset("Empatixx/synth-text-recognition-multilines-cs")
# Define transforms
transform = transforms.Compose([
    transforms.Resize((512, 512)),  # Larger size for multiline text
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
# Create DataLoader
def collate_fn(batch):
    # Convert to RGB so Normalize always receives 3 channels
    images = [transform(sample['image'].convert('RGB')) for sample in batch]
    texts = [sample['text'] for sample in batch]
    return torch.stack(images), texts
dataloader = DataLoader(
    dataset['train'],
    batch_size=16,  # Smaller batch size due to larger images
    shuffle=True,
    collate_fn=collate_fn
)
```
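A quick smoke test of the loader; the shapes reflect the transform settings above.
```python
# Pull one batch and check its contents
images, texts = next(iter(dataloader))
print(images.shape)  # torch.Size([16, 3, 512, 512]) with the settings above
print(texts[0])      # a newline-separated Czech transcription
```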
### TrOCR Fine-tuning Example
```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from datasets import load_dataset
import torch
# Load model and processor
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")
# Load dataset
dataset = load_dataset("Empatixx/synth-text-recognition-multilines-cs")
# Preprocessing function
def preprocess_function(examples):
    # Process images
    pixel_values = processor(images=examples['image'], return_tensors="pt").pixel_values
    # Process text labels
    labels = processor.tokenizer(
        examples['text'],
        padding=True,
        truncation=True,
        max_length=512  # Longer for multiline text
    ).input_ids
    # Note: for training, pad token ids in `labels` are usually replaced with -100
    # so that the cross-entropy loss ignores the padding positions
    return {
        'pixel_values': pixel_values,
        'labels': labels
    }
# Apply preprocessing
processed_dataset = dataset.map(preprocess_function, batched=True, remove_columns=['image', 'text'])
```
## Generation Details
The dataset was generated using SynthTiger with the following characteristics:
### Text Sources
- Czech words from Czech corpus (524,474 unique words)
- Text lengths: 1-25 characters per line
- Lines per image: 10 lines (configurable)
- Character set: Full Czech alphabet including diacritics
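If you need the concrete character set, e.g. to build the vocabulary for a CTC-style recognizer, it can be derived directly from the transcriptions. A minimal sketch using streaming mode, so nothing has to be fully downloaded:
```python
from datasets import load_dataset
# Stream the training split and collect the characters seen in the transcriptions
stream = load_dataset("Empatixx/synth-text-recognition-multilines-cs", split="train", streaming=True)
charset = set()
for i, sample in enumerate(stream):
    charset.update(sample["text"])
    if i >= 999:  # a 1,000-sample probe is usually enough to cover the alphabet
        break
print(sorted(charset))  # Czech letters with diacritics (á, č, ď, é, ě, ...), plus '\n'
```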
### Visual Variations
- **Image size**: 1024x1024 pixel canvas
- **Fonts**: Multiple font families with sizes 32-64px
- **Font weight**: 50% probability of bold text
- **Colors**: Black text on white background
- **Text alignment**: Left, center, right, and justify
- **Line spacing**: 0-16 pixels between lines
- **Text case**: Lowercase, uppercase, and capitalized variations
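These rendered properties can be spot-checked on a few samples. A quick inspection sketch, assuming `dataset` was loaded as in the Usage section:
```python
# Print size and colour mode for a few samples
# (the canvas size described above is 1024x1024)
for sample in dataset["train"].select(range(3)):
    img = sample["image"]
    print(img.size, img.mode)
```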
### Layout Properties
- **Orientation**: Horizontal text (left-to-right)
- **Line arrangement**: Top-to-bottom
- **Text positioning**: Centered on canvas with variable line heights
- **Real-world simulation**: Natural text flow with proper line breaks
## Dataset Creation
The dataset was created using the following process:
1. **Text Selection**: Random selection of Czech words from the corpus
2. **Line Generation**: Combining words to create 10 lines per image
3. **Layout Computation**: Calculating positions for multiline text rendering
4. **Visual Rendering**: Rendering text with random fonts and alignments
5. **Image Generation**: Creating final images with proper spacing and layout
6. **Format Conversion**: Converting to Hugging Face-compatible Parquet format (a conversion sketch follows the generation command below)
### Generation Command
The dataset was generated using:
```bash
synthtiger -o results_100k_lines -c 100000 -w 8 -v examples/multiline/template.py Multiline examples/multiline/config.yaml
```
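The final conversion step (step 6 above) is not part of SynthTiger itself. The sketch below shows one way to do it with the `datasets` library; the `results_100k_lines/gt.txt` layout (tab-separated image path and transcription) is an assumption about the generator output, not something documented here.
```python
from datasets import Dataset, Features, Image, Value
# Assumed output layout: results_100k_lines/images/... plus a tab-separated
# ground-truth file "gt.txt" with "<image path>\t<transcription>" per line.
records = {"image": [], "text": []}
with open("results_100k_lines/gt.txt", encoding="utf-8") as f:
    for line in f:
        path, text = line.rstrip("\n").split("\t", 1)
        records["image"].append(f"results_100k_lines/{path}")
        records["text"].append(text.replace("\\n", "\n"))  # restore escaped newlines, if any
features = Features({"image": Image(), "text": Value("string")})
ds = Dataset.from_dict(records, features=features)
# Push as 10 Parquet shards, matching the layout described in "Dataset Structure"
# (requires `huggingface-cli login`)
ds.push_to_hub("Empatixx/synth-text-recognition-multilines-cs", num_shards=10)
```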
## Use Cases
This multiline dataset is particularly suitable for:
1. **Document OCR**: Training models for document text extraction
2. **Multiline Recognition**: Models that need to handle multiple text lines
3. **Layout Analysis**: Understanding text structure and organization
4. **Czech Language Models**: Specialized OCR for Czech text
5. **Benchmark Dataset**: Evaluating multiline OCR performance
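For the benchmark use case, character error rate (CER) is the usual metric. A minimal sketch, assuming the `evaluate` and `jiwer` packages are installed and that `dataset` was loaded as in the Usage section; `predictions` is a placeholder for your model's outputs:
```python
import evaluate
cer_metric = evaluate.load("cer")
# Score predictions against the ground-truth transcriptions of a small slice
references = [sample["text"] for sample in dataset["train"].select(range(100))]
predictions = references  # placeholder: substitute your model's outputs here
print(cer_metric.compute(predictions=predictions, references=references))  # 0.0 for the placeholder
```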
## Citation
If you use this dataset, please cite:
```bibtex
@misc{czech-synth-multiline-text-2025,
  title={Czech Synthetic Multiline Text Recognition Dataset},
  author={Empatixx},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/Empatixx/synth-text-recognition-multilines-cs}
}
```
Also cite the SynthTiger paper:
```bibtex
@inproceedings{yim2021synthtiger,
  title={SynthTIGER: Synthetic Text Image GEneratoR Towards Better Text Recognition Models},
  author={Yim, Moonbin and Kim, Yoonsik and Cho, Han-Cheol and Park, Sungrae},
  booktitle={International Conference on Document Analysis and Recognition},
  pages={109--124},
  year={2021},
  organization={Springer}
}
```
## License
This dataset is released under the MIT license, following the SynthTiger licensing terms.
## Acknowledgments
- Dataset generated using [SynthTiger](https://github.com/clovaai/synthtiger)
- Czech word corpus containing 524,474 unique words