LayoutLMv2

Multimodal (text + layout/format + image) pre-training for document AI

The documentation of this model in the Transformers library can be found here: https://huggingface.co/docs/transformers/model_doc/layoutlmv2

Microsoft Document AI | GitHub

Introduction

LayoutLMv2 is an improved version of LayoutLM with new pre-training tasks that model the interaction among text, layout, and image in a single multi-modal framework. It outperforms strong baselines and achieves new state-of-the-art results on a wide variety of downstream visually-rich document understanding tasks, including FUNSD (0.7895 → 0.8420), CORD (0.9493 → 0.9601), SROIE (0.9524 → 0.9781), Kleister-NDA (0.834 → 0.852), RVL-CDIP (0.9443 → 0.9564), and DocVQA (0.7295 → 0.8672).

LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding. Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. ACL 2021.
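
Below is a minimal usage sketch with the Transformers library, assuming the optional dependencies the model relies on are installed (detectron2 for the visual backbone, pytesseract for the processor's default OCR). The file name `document.png` is a placeholder for your own document image.

```python
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2Model

# The processor combines image preprocessing + OCR (pytesseract by default)
# with the tokenizer, producing text tokens, bounding boxes, and the image tensor.
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2Model.from_pretrained("microsoft/layoutlmv2-base-uncased")

image = Image.open("document.png").convert("RGB")  # placeholder document scan
encoding = processor(image, return_tensors="pt")

outputs = model(**encoding)
# Hidden states cover both the text tokens and the visual patch tokens.
last_hidden_state = outputs.last_hidden_state
```

For downstream tasks, the same processor can be paired with task heads such as LayoutLMv2ForTokenClassification (e.g. FUNSD-style form understanding) or LayoutLMv2ForSequenceClassification (e.g. RVL-CDIP document classification).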

