Add pipeline tag, paper link, and usage instructions

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +25 -80
README.md CHANGED
@@ -1,8 +1,14 @@
  ---
- license: apache-2.0
  language:
  - en
  ---
  # STEP: A Unified Spiking Transformer Evaluation Platform for Fair and Reproducible Benchmarking

  <p align="center">
@@ -16,106 +22,45 @@ language:
  <img src="https://img.shields.io/badge/-continuous_integration-red" alt="Continuous Integration"/>
  </p>

- ## Introduction
- This repository is the official **checkpoint repository** for STEP. It stores the model checkpoints, training logs, and corresponding configuration files (.yaml) used in STEP, so that researchers and interested users can easily use, reproduce, and compare Spiking Transformer models.
-
- For the complete STEP framework, including source code and tutorials, please refer to the [official GitHub repository](https://github.com/Fancyssc/STEP).
-
- <!--
- Built on top of **[BrainCog](https://github.com/BrainCog-X/Brain-Cog)**, this repository reproduces state-of-the-art Spiking Transformer models and offers a unified pipeline for **classification, segmentation, and object detection**. By standardizing data loaders, training routines, and logging, it enables fair, reproducible comparisons while remaining easy to extend with new models or tasks.
-
- - **Modular Design** – Swap neuron models, encodings, or attention blocks with a few lines of code.
- - **Multi-Task Ready** – Shared backbone, task-specific heads; evaluate *once*, report *everywhere*.
- - **Cross-Framework Compatibility** – Runs on BrainCog, SpikingJelly, or BrainPy with a thin adapter layer.
- - **End-to-End Reproducibility** – Version-locked configs and CI scripts guarantee “one-command” reruns. -->

- <!-- ### 📂 Task-Specific READMEs
-
- | Task | Documentation |
- |------|---------------|
- | Classification | [cls/Readme.md](cls/Readme.md) |
- | Segmentation | [seg/Readme.md](seg/Readme.md) |
- | Detection | [det/Readme.md](det/Readme.md) |
- -->
- <!-- ## 🔑 Key Features of STEP
-
- <p align="center">
- <img src="imgs/bench.png" alt="mp" style="width: 75%; max-width: 600px; min-width: 200px;" />
- </p>

- - **Unified Benchmark for Spiking Transformers**
- STEP offers a single, coherent platform for evaluating classification, segmentation, and detection models, removing fragmented evaluation pipelines and simplifying comparison across studies.
-
- - **Highly Modular Architecture**
- All major blocks—neuron models, input encodings, attention variants, surrogate gradients, and task heads—are implemented as swappable modules. Researchers can prototype new ideas by mixing and matching components without rewriting the training loop.
-
- - **Broad Dataset Compatibility**
- Out-of-the-box support spans static vision (ImageNet, CIFAR10/100), event-based neuromorphic data (DVS-CIFAR10, N-Caltech101), and sequential benchmarks. Data loaders follow a common interface, so adding a new dataset is typically a ~50-line effort.
-
- - **Multi-Task Adaptation**
- Built-in pipelines extend beyond image classification to dense prediction tasks. STEP seamlessly plugs Spiking Transformers into MMSeg (segmentation) and MMDet (object detection) heads such as FCN and FPN, enabling fair cross-task studies with minimal glue code.
-
- - **Backend-Agnostic Implementation**
- A thin abstraction layer makes the same model definition runnable on SpikingJelly, BrainCog, or BrainPy. This widens hardware and software coverage while promoting reproducible results across laboratories.
-
- - **Reproducibility & Best-Practice Templates**
- Every experiment ships with version-locked configs, deterministic seeds, and logging utilities. CI scripts validate that reported numbers can be reproduced with a single command, fostering transparent comparison and faster iteration.
-
- > **TL;DR** STEP lowers the barrier to building, training, and fairly benchmarking Spiking Transformers, accelerating progress toward practical neuromorphic vision systems.
- ## Repository Structure
-
- <!-- ```plaintext
- Spiking-Transformer-Benchmark/
- ├── cls/   # Classification submodule
- │   ├── README.md
- │   ├── configs/
- │   ├── datasets/
- │   └── ...
- ├── seg/   # Segmentation submodule
- │   ├── README.md
- │   ├── configs/
- │   ├── mmseg
- │   └── ...
- ├── det/   # Object detection submodule
- │   ├── README.md
- │   ├── configs/
- │   ├── mmdet
- │   └── ...
- └── README.md
- ``` -->

  ## 🚀 Quick Start

- To get started, clone the repository and install the required dependencies:

  ```bash
  git clone https://github.com/Fancyssc/STEP.git
- ```
- <!--
- ### BrainCog Installation
- For the BrainCog framework, we recommend installing it via GitHub. You can use the following command in your terminal to install it from GitHub:
- ```angular2html
  pip install git+https://github.com/braincog-X/Brain-Cog.git
  ```

- For the **seg** and **cls** tasks, different environment requirements apply. Please refer to the corresponding README files in each subdirectory for details.
-
- > **Prerequisites**: Python 3.8 or above, PyTorch, and BrainCog.
- -->

  ## Contact & Collaboration

  - **Questions or Feedback**
- If you run into any issues, have questions about STEP, or simply want to share suggestions, please open a GitHub Issue or start a discussion thread. We monitor the repository regularly and aim to respond within a few business days.

  - **Integrate Your Model**
- Have an exciting Spiking Transformer variant or related module you’d like to see supported? We welcome external contributions! Open an Issue describing your model, its licensing, and any specific requirements, or email the maintainers. We’ll coordinate with you to add the necessary adapters, documentation, and tests.
-
- We look forward to working with the community to make STEP an ever-stronger platform for neuromorphic research.

- ## 📝Citation
- ```angular2html
  @misc{shen2025stepunifiedspikingtransformer,
  title={STEP: A Unified Spiking Transformer Evaluation Platform for Fair and Reproducible Benchmarking},
  author={Sicheng Shen and Dongcheng Zhao and Linghao Feng and Zeyang Yue and Jindong Li and Tenglong Li and Guobin Shen and Yi Zeng},

  ---
  language:
  - en
+ license: apache-2.0
+ pipeline_tag: image-classification
+ tags:
+ - spiking-neural-networks
+ - spiking-transformer
+ - brain-inspired-computing
  ---
+
  # STEP: A Unified Spiking Transformer Evaluation Platform for Fair and Reproducible Benchmarking

  <p align="center">

  <img src="https://img.shields.io/badge/-continuous_integration-red" alt="Continuous Integration"/>
  </p>

+ This repository contains the official model checkpoints for the paper [STEP: A Unified Spiking Transformer Evaluation Platform for Fair and Reproducible Benchmarking](https://huggingface.co/papers/2505.11151).
+
+ STEP is a unified benchmark framework for Spiking Transformers that supports a wide range of tasks, including classification, segmentation, and detection across static, event-based, and sequential datasets.
+
+ - **Paper:** [Hugging Face Papers](https://huggingface.co/papers/2505.11151)
+ - **Code:** [GitHub - Fancyssc/STEP](https://github.com/Fancyssc/STEP)

+ ## Introduction
+ This repository is the official **checkpoint repository** for STEP. It stores the model checkpoints, training logs, and corresponding configuration files (.yaml) used in STEP, so that researchers and interested users can easily use, reproduce, and compare Spiking Transformer models.
+
+ For the complete STEP framework, including source code and tutorials, please refer to the [official GitHub repository](https://github.com/Fancyssc/STEP).
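+
+ As a minimal sketch of fetching files from this repository with `huggingface_hub` (the repo id and file paths below are assumptions for illustration; browse this repository's file listing for the actual checkpoint and .yaml names):
+
+ ```python
+ from huggingface_hub import hf_hub_download
+
+ # NOTE: repo_id and the filenames are hypothetical examples; replace them
+ # with real paths from this repository's "Files and versions" tab.
+ ckpt_path = hf_hub_download(repo_id="Fancyssc/STEP", filename="spikformer/cifar10.pth")
+ cfg_path = hf_hub_download(repo_id="Fancyssc/STEP", filename="spikformer/cifar10.yaml")
+ print(ckpt_path, cfg_path)
+ ```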
  ## 🚀 Quick Start

+ To use these checkpoints, clone the STEP repository and install the required dependencies (including [BrainCog](https://github.com/BrainCog-X/Brain-Cog)):

  ```bash
  git clone https://github.com/Fancyssc/STEP.git
+ cd STEP
  pip install git+https://github.com/braincog-X/Brain-Cog.git
  ```
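+
+ As a quick sanity check that the dependency installed correctly (assuming the package imports under the module name `braincog`):
+
+ ```python
+ # Minimal import check; prints the location of the installed package.
+ import braincog
+ print(braincog.__file__)
+ ```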

+ ### Sample Usage
+ You can start Spikformer training on CIFAR10 as a "hello world" demo using the provided configurations:
+
+ ```bash
+ python train.py --config configs/spikformer/cifar10.yml
+ ```
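+
+ After training, or after downloading one of the checkpoints stored here, the file can be inspected directly. This is a minimal sketch under stated assumptions: the checkpoint filename is a hypothetical example, and the key layout (e.g. a top-level `model` or `state_dict` entry) may differ between files.
+
+ ```python
+ import torch
+
+ # Load on CPU so no GPU is needed just to inspect the file.
+ ckpt = torch.load("spikformer_cifar10.pth", map_location="cpu")
+
+ # Checkpoints are often dicts wrapping the weights; unwrap if needed.
+ if isinstance(ckpt, dict):
+     state_dict = ckpt.get("model", ckpt.get("state_dict", ckpt))
+ else:
+     state_dict = ckpt
+ for name in list(state_dict)[:5]:  # peek at the first few parameter names
+     print(name)
+ ```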

  ## Contact & Collaboration

  - **Questions or Feedback**
+ If you run into any issues, have questions about STEP, or simply want to share suggestions, please open a GitHub Issue or start a discussion thread on the main repository.

  - **Integrate Your Model**
+ Have an exciting Spiking Transformer variant or related module you’d like to see supported? We welcome external contributions! Open an Issue describing your model, its licensing, and any specific requirements.

+ ## 📝 Citation
+ ```bibtex
  @misc{shen2025stepunifiedspikingtransformer,
  title={STEP: A Unified Spiking Transformer Evaluation Platform for Fair and Reproducible Benchmarking},
  author={Sicheng Shen and Dongcheng Zhao and Linghao Feng and Zeyang Yue and Jindong Li and Tenglong Li and Guobin Shen and Yi Zeng},