ExPLoRA: Parameter-Efficient Extended Pre-Training

Paper | Code | Website | Video

This repository contains pre-trained checkpoints from the ICML 2025 paper:
"ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers under Domain Shifts"

Overview

ExPLoRA is a parameter-efficient method for adapting pre-trained Vision Transformers (ViTs) to new domains via LoRA-based extended pre-training. Instead of training the full backbone, ExPLoRA freezes most of it and trains low-rank adapters plus a small subset of ViT blocks during self-supervised pre-training on target-domain data.
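
As a concrete illustration, below is a minimal, hypothetical sketch of this pattern (not the paper's actual implementation): a LoRALinear wrapper that freezes a pre-trained projection and adds a trainable low-rank update, of the kind applied to ViT attention weights while the rest of the backbone stays frozen.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update (illustrative)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weight stays frozen
        # Low-rank factors; B starts at zero so training begins from the pre-trained function.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # y = base(x) + scale * x A^T B^T
        return self.base(x) + self.scale * ((x @ self.lora_a.T) @ self.lora_b.T)

# Example: wrap the qkv projection of a (hypothetical) ViT-L block.
qkv = nn.Linear(1024, 3072)
qkv_with_lora = LoRALinear(qkv, rank=8)
out = qkv_with_lora(torch.randn(2, 197, 1024))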


📁 Checkpoints

Note: All checkpoints have LoRA adapters already merged into the weights. The full checkpoints retain the separate q_proj, k_proj, v_proj layers (with merged LoRA) alongside the combined qkv weights for reference. The encoder-only checkpoints contain just the merged qkv weights, ready for downstream use.
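
Because the full checkpoints store both layouts, you can verify that the combined qkv weight is just the row-wise concatenation of the separate projections. A hedged sketch (the path and key names below are illustrative; inspect sd.keys() for the exact naming in a given checkpoint):

import torch

ckpt = torch.load("path/to/full_checkpoint.pth", map_location="cpu")
sd = ckpt["model"]

# Illustrative key names for the first transformer block; adjust to the actual naming.
prefix = "blocks.0.attn."
qkv = torch.cat([sd[prefix + "q_proj.weight"],
                 sd[prefix + "k_proj.weight"],
                 sd[prefix + "v_proj.weight"]], dim=0)
assert torch.allclose(qkv, sd[prefix + "qkv.weight"])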

explora_dinov2_fmow_rgb/

ExPLoRA checkpoints using DINOv2 self-supervised pre-training on fMoW high-resolution RGB satellite imagery.

| Description | ViT-B | ViT-L |
| --- | --- | --- |
| DINOv2 teacher encoder & decoder weights + ExPLoRA adapters | ViT-B/14 | ViT-L/14 |
| Encoder-only weights | ViT-B/14 | ViT-L/14 |

Usage:

import torch

# Load encoder-only checkpoint (recommended for fine-tuning)
ckpt = torch.load("explora_dinov2_fmow_rgb/explora_dinov2_vit_large_fmow_rgb_encoder_only.pth", map_location="cpu")
state_dict = ckpt["model"]
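
Before fine-tuning, it can help to sanity-check the state dict. The loop below is a small, generic continuation of the snippet above; for ViT-L/14 we'd expect merged keys like blocks.0.attn.qkv.weight with shape (3072, 1024), though exact names depend on the checkpoint:

# Print parameter names and shapes to confirm the merged qkv layout.
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))

# Then load into a matching ViT-L/14 implementation (constructor is up to you), e.g.:
# missing, unexpected = model.load_state_dict(state_dict, strict=False)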

explora_mae_multispectral/

ExPLoRA checkpoints using MAE self-supervised pre-training on fMoW Sentinel-2 multispectral imagery.

| Description | ViT-L |
| --- | --- |
| MAE encoder & decoder weights + ExPLoRA adapters | ViT-L/16 |
| Encoder-only weights | ViT-L/16 |

Usage:

import torch

# Load encoder-only checkpoint (recommended for fine-tuning)
ckpt = torch.load("explora_mae_multispectral/explora_mae_fmow_sentinel_encoder_only.pth", map_location="cpu")
state_dict = ckpt["model"]
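
Unlike the RGB checkpoints, this encoder expects multispectral input, so it is worth confirming the patch-embedding shape before building a data pipeline. A hedged sketch (patch_embed is the usual MAE/timm naming; the multispectral model may instead use grouped channel embeddings):

# A patch-embedding conv weight has shape (embed_dim, in_channels, patch, patch).
for name, tensor in state_dict.items():
    if "patch_embed" in name and name.endswith("weight"):
        print(name, tuple(tensor.shape))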

Loading Checkpoints

These checkpoints are compatible with the ExPLoRA codebase.

For fine-tuning, use the finetune/finetune.py script:

python finetune/finetune.py \
    --finetune path/to/explora_checkpoint.pth \
    --model vit_large_patch16 \
    --dataset_type rgb \
    ...

Reference fine-tuning scripts are also provided under scripts/ in the codebase and can be used with these checkpoints.


Citation

If you find these checkpoints useful, please cite our paper:

@inproceedings{khanna2025explora,
  title={Ex{PL}o{RA}: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers under Domain Shifts},
  author={Samar Khanna and Medhanie Irgau and David B. Lobell and Stefano Ermon},
  booktitle={Forty-second International Conference on Machine Learning},
  year={2025},
  url={https://openreview.net/forum?id=OtxLhobhwb}
}

License

Apache 2.0
