GlobalWheat/GWFSS_model_v1.0

Tags: Image Segmentation · Transformers · Safetensors · segformer

Usage

```python
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation
import torch, torch.nn.functional as F
from PIL import Image

repo = "GlobalWheat/GWFSS_model_v1.0"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo).eval()

img = Image.open("example.jpg").convert("RGB")
inputs = processor(images=img, return_tensors="pt")
with torch.no_grad():
    # SegFormer predicts logits at 1/4 of the processed input resolution
    logits = model(**inputs).logits
    # Upsample to the original image size before taking the argmax
    up = F.interpolate(logits, size=(img.height, img.width), mode="bilinear", align_corners=False)
pred = up.argmax(1)[0].cpu().numpy()  # (H, W) array of class IDs
```
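
To sanity-check the result, you can colorize the predicted mask and overlay it on the input image. This is a minimal sketch, assuming the `model`, `img`, and `pred` variables from the snippet above; the palette is arbitrary, and the class names are read from the model config if it ships an `id2label` mapping:

```python
import numpy as np
from PIL import Image

num_classes = model.config.num_labels
# Arbitrary but reproducible palette: one RGB colour per class ID
rng = np.random.default_rng(0)
palette = rng.integers(0, 255, size=(num_classes, 3), dtype=np.uint8)

color_mask = palette[pred]  # (H, W, 3) uint8 colour image
overlay = Image.blend(img, Image.fromarray(color_mask), alpha=0.5)
overlay.save("example_overlay.png")

# Class names, if the config provides them
print(model.config.id2label)
```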

This version is based on the Hugging Face SegFormer implementation, which may differ slightly from the one used in our paper. The paper version was implemented with mmsegmentation; the corresponding mmsegmentation model weights are also available in this repo.
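
If you want the mmsegmentation checkpoint instead, you can list the repo's files and download the one you need with `huggingface_hub`. A sketch under the assumption that the mmsegmentation checkpoint is stored as a `.pth` file; check the file listing to confirm the exact filename:

```python
from huggingface_hub import list_repo_files, hf_hub_download

repo = "GlobalWheat/GWFSS_model_v1.0"
files = list_repo_files(repo)
print(files)  # inspect which file is the mmsegmentation checkpoint

# Assumption: mmsegmentation checkpoints are typically saved with a .pth extension
pth_files = [f for f in files if f.endswith(".pth")]
if pth_files:
    ckpt_path = hf_hub_download(repo_id=repo, filename=pth_files[0])
    print("mmsegmentation checkpoint downloaded to:", ckpt_path)
```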

Safetensors · Model size: 13.7M params · Tensor type: F32
