Vision Transformer for Tumor Classification

This model is a fine-tuned version of google/vit-base-patch16-224 for binary tumor classification in medical images.

Model Details

  • Model Type: Vision Transformer (ViT)
  • Base Model: google/vit-base-patch16-224
  • Task: Binary Image Classification
  • Training Data: Medical image dataset with tumor/non-tumor annotations
  • Input: Medical images (224x224 pixels)
  • Output: Binary classification (tumor/non-tumor)
  • Model Size: 85.8M parameters
  • Framework: PyTorch
  • License: Apache 2.0

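These details can be checked programmatically. Below is a minimal sketch, assuming the transformers library is installed and the model ID above is reachable; the label mapping stored in the config may use generic names (e.g. LABEL_0/LABEL_1) rather than "tumor"/"non-tumor".

from transformers import AutoConfig, AutoModelForImageClassification

# Load only the configuration to inspect the expected input size and label mapping
config = AutoConfig.from_pretrained("SIATCN/vit_tumor_classifier")
print(config.image_size)  # expected input resolution (224)
print(config.id2label)    # index-to-label mapping used by the classification head

# Load the full model to confirm the parameter count (~85.8M)
model = AutoModelForImageClassification.from_pretrained("SIATCN/vit_tumor_classifier")
print(sum(p.numel() for p in model.parameters()))
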
Intended Use

This model is designed for tumor classification in medical imaging. It should be used as part of a larger medical diagnostic system and not as a standalone diagnostic tool.

Usage

import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image

# Load model and processor
processor = AutoImageProcessor.from_pretrained("SIATCN/vit_tumor_classifier")
model = AutoModelForImageClassification.from_pretrained("SIATCN/vit_tumor_classifier")
model.eval()

# Load and preprocess the image (convert to RGB in case the source is grayscale)
image = Image.open("path_to_your_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Run inference without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)
predictions = outputs.logits.softmax(dim=-1)
predicted_label = predictions.argmax().item()
confidence = predictions[0][predicted_label].item()

# Map the predicted index to a human-readable class name
class_names = ["non-tumor", "tumor"]
print(f"Predicted: {class_names[predicted_label]} (confidence: {confidence:.2f})")