---
license: mit
language:
- en
library_name: open_clip
---
A CLIP (Contrastive Language-Image Pre-training) model trained from scratch on EntityNet-33M.
See the [project page](https://github.com/lmb-freiburg/entitynet) for the paper, code, usage examples, metrics, and more.
The model has seen ~0.6B images during training at a batch size of 8k.
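Since the card lists `open_clip` as the library, loading the checkpoint would typically go through open_clip's Hugging Face hub support. The sketch below uses a placeholder hub ID (`hf-hub:org/model-name` is an assumption, not the real repository); see the project page for the exact usage.

```python
def load_entitynet_clip(hub_id: str = "hf-hub:org/model-name"):
    """Load model, preprocessing transform, and tokenizer via open_clip.

    `hub_id` is a placeholder here -- replace it with the actual
    Hugging Face repository name, prefixed with "hf-hub:".
    """
    import open_clip  # lazy import: requires `pip install open_clip_torch`

    # create_model_and_transforms returns (model, train_preprocess, val_preprocess);
    # the validation transform is the one used for inference.
    model, _, preprocess = open_clip.create_model_and_transforms(hub_id)
    tokenizer = open_clip.get_tokenizer(hub_id)
    model.eval()
    return model, preprocess, tokenizer
```

After loading, images go through `preprocess` and text through `tokenizer` before calling `model.encode_image` / `model.encode_text`, as in any open_clip model.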