INT8 Table Transformer

INT8 models produced with post-training static quantization, in ONNX format.

This repo contains INT8 ONNX models for:

  1. Table detection
  2. Table structure recognition

The original FP32 PyTorch model comes from bsmock/tatr-pubtables1m-v1.0. The INT8 ONNX models are quantized with Intel® Neural Compressor.

Refer to this link for the model preparation, quantization, and benchmark scripts.
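To give a feel for what post-training static quantization does, the sketch below derives an INT8 scale and zero-point from a calibration tensor and round-trips values through INT8. This is an illustrative NumPy toy, not the exact Intel® Neural Compressor algorithm; all names and the synthetic calibration data are made up for the example.

```python
import numpy as np

def calibrate_scale_zp(calib_data, qmin=-128, qmax=127):
    # Derive scale/zero-point from calibration min/max, the core idea of
    # post-training *static* quantization (simplified; INC's real recipe
    # may use different range estimators).
    rmin, rmax = float(calib_data.min()), float(calib_data.max())
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)  # range must contain 0
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = int(round(qmin - rmin / scale))
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax)
    return q.astype(np.int8)

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
calib = rng.normal(0.0, 1.0, size=4096).astype(np.float32)  # stand-in calibration set
scale, zp = calibrate_scale_zp(calib)

x = rng.normal(0.0, 1.0, size=16).astype(np.float32)
err = np.abs(dequantize(quantize(x, scale, zp), scale, zp) - x).max()
```

Because the scale and zero-point are fixed ahead of time from calibration data, inference needs no runtime range tracking, which is what distinguishes static from dynamic quantization.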

Test results

Table detection:

|                   | INT8   | FP32   |
|-------------------|--------|--------|
| COCO metrics (AP) | 0.9691 | 0.9706 |
| Model size (MB)   | 56     | 111    |
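For context, the detection numbers above amount to roughly a 2x on-disk size reduction for about a 0.15% relative AP drop; a quick arithmetic check:

```python
# Figures taken from the table-detection results above.
ap_fp32, ap_int8 = 0.9706, 0.9691
mb_fp32, mb_int8 = 111, 56

size_ratio = mb_fp32 / mb_int8                      # how much smaller INT8 is on disk
ap_drop_pct = (ap_fp32 - ap_int8) / ap_fp32 * 100   # relative AP loss in percent

print(f"{size_ratio:.2f}x smaller, {ap_drop_pct:.2f}% relative AP drop")
```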

Table structure recognition:

|                 | INT8 | FP32 |
|-----------------|------|------|
| Model size (MB) | 56   | 111  |
