---
license: bsd-3-clause
tags:
- optics
- microscopic imaging
- physics
task_categories:
- image-to-image
---

<div align="center">

<h1>OpticalNet🔬: An Optical Imaging Dataset and Benchmark Beyond the Diffraction Limit🔍</h1>

<p><b>Dataset Repository</b></p>

[CVPR 2025](https://cvpr.thecvf.com/)
[Paper](https://cvpr.thecvf.com/virtual/2025/poster/34146)
[Project Page](https://Deep-See.github.io/OpticalNet)

<div>
Benquan Wang
Ruyi An
Jin-Kyu So
Sergei Kurdiumov
Eng Aik Chan
Giorgio Adamo
Yuhan Peng
Yewen Li
Bo An
</div>

<div>
🎈 <strong>Accepted to CVPR 2025</strong>
</div>

<div>
<h4 align="center">
• <a href="https://cvpr.thecvf.com/virtual/2025/poster/34146" target='_blank'>[pdf]</a> •
</h4>
</div>

</div>

# NOTICE

<span style="color:red">

Due to technical difficulties with Hugging Face, we are temporarily hosting our dataset on Google Drive. You can find the experiment dataset below:

</span>

[Experiment](https://drive.google.com/file/d/1DdLGEMwKCIecf8l_bMux3mX6KPzFUVpk/view?usp=sharing)

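If you prefer to script the download, here is a minimal sketch using the `gdown` package (an assumption on our part, not a tool shipped with this repository) applied to the sharing link above; downloading through your browser works just as well. The output file name is hypothetical.

```python
# Minimal download sketch. Assumes `pip install gdown`; the output file name
# below is a placeholder, not the official archive name.
import gdown

url = "https://drive.google.com/file/d/1DdLGEMwKCIecf8l_bMux3mX6KPzFUVpk/view?usp=sharing"
output = "opticalnet_experiment.zip"  # hypothetical file name

# fuzzy=True lets gdown extract the file ID from the full sharing URL.
gdown.download(url, output, quiet=False, fuzzy=True)
```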
Download the dataset locally and set `dir_path` in the running script to start your optical exploration! A quick path check is sketched below.

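For example, a sanity check that `dir_path` points at the extracted data might look like the following sketch. The directory name and image file extensions are assumptions; adapt them to the archive you extracted and to the running script you use.

```python
# Sanity-check sketch: verify that `dir_path` points at the extracted dataset.
# The folder name and image extensions below are assumptions, not the official
# layout -- adjust them to match the extracted archive.
from pathlib import Path

dir_path = Path("./OpticalNet_Experiment")  # hypothetical extraction path

image_files = sorted(
    p for p in dir_path.rglob("*")
    if p.suffix.lower() in {".png", ".jpg", ".jpeg", ".tif", ".tiff"}
)
print(f"Found {len(image_files)} image files under {dir_path}")
for p in image_files[:5]:
    print(p.relative_to(dir_path))
```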
We are doing our best to migrate the dataset to Hugging Face. Stay tuned⚙️!