---
license: mit
language:
  - en
library_name: open_clip
---

This is a CLIP (Contrastive Language-Image Pre-training) model trained from scratch on the LivingThings-10M subset of the EntityNet-33M dataset.

See the project page for the paper, code, usage examples, and evaluation metrics.

During training, the model saw ~0.2B image samples at a batch size of 8k.
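
A minimal zero-shot classification sketch using open_clip. The repo id and image path below are placeholders, not this model's confirmed identifiers; substitute the actual Hugging Face path when loading.

```python
import torch
from PIL import Image
import open_clip

# Hypothetical repo id -- replace with this model's actual Hugging Face path.
repo_id = "hf-hub:ORG/MODEL_NAME"

# Loading via the hf-hub: prefix fetches the checkpoint and config automatically.
model, _, preprocess = open_clip.create_model_and_transforms(repo_id)
tokenizer = open_clip.get_tokenizer(repo_id)
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # placeholder image
text = tokenizer(["a photo of a dog", "a photo of a cat", "a photo of a bird"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # L2-normalize so the dot product below is a cosine similarity.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # Scaled similarities softmaxed into a distribution over the prompts.
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)
```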