# OCR UV Scripts

Part of uv-scripts - ready-to-run ML tools powered by UV.

Ready-to-run OCR scripts that work with `uv run` - no setup required!
## 🚀 Quick Start with HuggingFace Jobs
Run OCR on any dataset without needing your own GPU:
```bash
# Quick test with 10 samples
hf jobs uv run --flavor l4x1 \
  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
  your-input-dataset your-output-dataset \
  --max-samples 10
```
That's it! The script will:

- ✅ Process the first 10 images from your dataset
- ✅ Add OCR results as a new `markdown` column
- ✅ Push the results to a new dataset
- 📊 View results at: https://huggingface.co/datasets/[your-output-dataset]
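Once the job finishes, you can check the output with the `datasets` library. A minimal sketch, assuming your output dataset has a `train` split and that you used `nanonets-ocr.py` (which adds the `markdown` column):

```python
from datasets import load_dataset

# Load the dataset the job pushed to the Hub
ds = load_dataset("your-output-dataset", split="train")

# Inspect the OCR output for the first sample
print(ds[0]["markdown"])
```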
## 📋 Available Scripts
### RolmOCR (`rolm-ocr.py`)
Fast general-purpose OCR using reducto/RolmOCR based on Qwen2.5-VL-7B:
- 🚀 Fast extraction - Optimized for speed and efficiency
- 📄 Plain text output - Clean, natural text representation
- 💪 General-purpose - Works well on various document types
- 🔥 Large context - Handles up to 16K tokens
- ⚡ Batch optimized - Efficient processing with vLLM
### Nanonets OCR (`nanonets-ocr.py`)
State-of-the-art document OCR using nanonets/Nanonets-OCR-s that handles:
- 📐 LaTeX equations - Mathematical formulas preserved
- 📊 Tables - Extracted as HTML format
- 📝 Document structure - Headers, lists, formatting maintained
- 🖼️ Images - Captions and descriptions included
- ☑️ Forms - Checkboxes rendered as ☐/☑
### SmolDocling (`smoldocling-ocr.py`)
Ultra-compact document understanding using ds4sd/SmolDocling-256M-preview with only 256M parameters:
- 🏷️ DocTags format - Efficient XML-like representation
- 💻 Code blocks - Preserves indentation and syntax
- 🔢 Formulas - Mathematical expressions with layout
- 📊 Tables & charts - Structured data extraction
- 📐 Layout preservation - Bounding boxes and spatial info
- ⚡ Ultra-fast - Tiny model size for quick inference
### NuMarkdown (`numarkdown-ocr.py`)
Advanced reasoning-based OCR using numind/NuMarkdown-8B-Thinking that analyzes documents before converting to markdown:
- 🧠 Reasoning Process - Thinks through document layout before generation
- 📊 Complex Tables - Superior table extraction and formatting
- 📐 Mathematical Formulas - Accurate LaTeX/math notation preservation
- 🔍 Multi-column Layouts - Handles complex document structures
- ✨ Thinking Traces - Optional inclusion of the reasoning process with `--include-thinking`
## 🆕 New Features
### Multi-Model Comparison Support
All scripts now include `inference_info` tracking for comparing multiple OCR models:
```bash
# First model
uv run rolm-ocr.py my-dataset my-dataset --max-samples 100

# Second model (appends to same dataset)
uv run nanonets-ocr.py my-dataset my-dataset --max-samples 100

# View all models used
python -c "import json; from datasets import load_dataset; ds = load_dataset('my-dataset', split='train'); print(json.loads(ds[0]['inference_info']))"
```
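The same check as a short script, if you prefer it to the one-liner. A sketch assuming a `train` split; `inference_info` is stored as a JSON string, and its exact fields may vary between script versions:

```python
import json

from datasets import load_dataset

ds = load_dataset("my-dataset", split="train")

# inference_info records every OCR run that has been applied to this dataset
runs = json.loads(ds[0]["inference_info"])
print(json.dumps(runs, indent=2))

# Each script writes its output into its own column, so both results sit side by side
print(ds.column_names)
```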
### Random Sampling
Get representative samples with the new `--shuffle` flag:
```bash
# Random 50 samples instead of first 50
uv run rolm-ocr.py ordered-dataset output --max-samples 50 --shuffle

# Reproducible random sampling
uv run nanonets-ocr.py dataset output --max-samples 100 --shuffle --seed 42
```
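For reference, this presumably mirrors the standard `datasets` shuffle-then-select pattern, so you can reproduce the same kind of selection locally. A sketch assuming a `train` split (the scripts' internal sampling may differ in detail):

```python
from datasets import load_dataset

ds = load_dataset("dataset", split="train")

# Roughly what --shuffle --seed 42 --max-samples 100 selects
sample = ds.shuffle(seed=42).select(range(100))
print(sample)
```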
### Automatic Dataset Cards
Every OCR run now generates comprehensive dataset documentation including:
- Model configuration and parameters
- Processing statistics
- Column descriptions
- Reproduction instructions
## 💻 Usage Examples
### Run on HuggingFace Jobs (Recommended)
No GPU? No problem! Run on HF infrastructure:
```bash
# Basic OCR job
hf jobs uv run --flavor l4x1 \
  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
  your-input-dataset your-output-dataset

# Real example with UFO dataset 🛸
hf jobs uv run \
  --flavor a10g-large \
  --image vllm/vllm-openai:latest \
  -s HF_TOKEN=$(python3 -c "from huggingface_hub import get_token; print(get_token())") \
  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
  davanstrien/ufo-ColPali \
  your-username/ufo-ocr \
  --image-column image \
  --max-model-len 16384 \
  --batch-size 128

# NuMarkdown with reasoning traces for complex documents
hf jobs uv run \
  --image vllm/vllm-openai:latest \
  --flavor l4x4 \
  -s HF_TOKEN=$(python3 -c "from huggingface_hub import get_token; print(get_token())") \
  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/numarkdown-ocr.py \
  your-input-dataset your-output-dataset \
  --max-samples 50 \
  --include-thinking \
  --shuffle

# Private dataset with custom settings
hf jobs uv run --flavor l40sx1 \
  -s HF_TOKEN=$(python3 -c "from huggingface_hub import get_token; print(get_token())") \
  https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
  private-input private-output \
  --private \
  --batch-size 32
```
### Python API
```python
from huggingface_hub import run_uv_job

job = run_uv_job(
    "https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py",
    args=["input-dataset", "output-dataset", "--batch-size", "16"],
    flavor="l4x1",
)
```
### Run Locally (Requires GPU)
```bash
# Clone and run
git clone https://huggingface.co/datasets/uv-scripts/ocr
cd ocr
uv run nanonets-ocr.py input-dataset output-dataset

# Or run directly from URL
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/nanonets-ocr.py \
  input-dataset output-dataset

# RolmOCR for fast text extraction
uv run rolm-ocr.py documents extracted-text
uv run rolm-ocr.py images texts --shuffle --max-samples 100  # Random sample
```
## 📁 Works With
Any HuggingFace dataset containing images - documents, forms, receipts, books, handwriting.
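If your images are still sitting in a local folder, you can turn them into a Hub dataset first and then point any of the scripts at it. A sketch using the `datasets` ImageFolder loader; `your-username/my-documents` is a placeholder repo name:

```python
from datasets import load_dataset

# Build a dataset with an "image" column from a local folder of scans
ds = load_dataset("imagefolder", data_dir="./my-documents")

# Push it to the Hub so the OCR scripts can use it as an input dataset
ds.push_to_hub("your-username/my-documents")
```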
## 🎛️ Configuration Options
### Common Options (All Scripts)
| Option | Default | Description |
|---|---|---|
| `--image-column` | `image` | Column containing images |
| `--batch-size` | `32` / `16`\* | Images processed together |
| `--max-model-len` | `8192` / `16384`\*\* | Max context length |
| `--max-tokens` | `4096` / `8192`\*\* | Max output tokens |
| `--gpu-memory-utilization` | `0.8` | GPU memory usage (0.0-1.0) |
| `--split` | `train` | Dataset split to process |
| `--max-samples` | None | Limit samples (for testing) |
| `--private` | False | Make output dataset private |
| `--shuffle` | False | Shuffle dataset before processing |
| `--seed` | `42` | Random seed for shuffling |
\*RolmOCR uses a batch size of 16. \*\*RolmOCR uses a 16384-token context length and 8192 max output tokens.
### RolmOCR Specific
- Output column name is auto-generated from the model name (e.g., `rolmocr_text`)
- Use `--output-column` to override the default name
💡 Performance tip: Increase batch size for faster processing (e.g., `--batch-size 128` for A10G GPUs).
More OCR VLM Scripts coming soon! Stay tuned for updates!