CLIPSeg model

CLIPSeg model with reduced dimension 64, refined (using a more complex convolution). It was introduced in the paper Image Segmentation Using Text and Image Prompts by Lüddecke et al. and first released in this repository.

Intended use cases

This model is intended for zero-shot and one-shot image segmentation.

Usage

For code examples, refer to the documentation. A minimal sketch is shown below.
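
The following is a minimal sketch of zero-shot segmentation with this checkpoint, assuming the transformers, torch, Pillow, and requests packages are installed; the COCO image URL and the text prompts are illustrative only.

```python
import requests
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

# Load the processor and model from this checkpoint.
processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

# Example image and prompts (illustrative; replace with your own).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompts = ["a cat", "a remote", "a blanket"]

# One image-text pair per prompt; the processor handles resizing and tokenization.
inputs = processor(
    text=prompts, images=[image] * len(prompts), padding=True, return_tensors="pt"
)

with torch.no_grad():
    outputs = model(**inputs)

# One low-resolution segmentation logit map per prompt; apply a sigmoid to
# obtain per-pixel probabilities, then upsample to the original image size.
preds = outputs.logits
print(preds.shape)
```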

