Learning to Downsample for Segmentation of Ultra-High Resolution Images in PyTorch

This is a PyTorch implementation of Learning to Downsample for Segmentation of Ultra-High Resolution Images, published at ICLR 2022.

Updates

  • Apologies for the long-delayed code cleaning, which is now done! Please let me know if you would like clarification on any part :)
  • ICLR 2022 talk available HERE
  • For more details/examples/video demos visit our project page HERE

Table of Contents

  1. Environment Setup
  2. Data preparation
  3. Reproduce
  4. Citation

Environment Setup

Install dependencies

Install dependencies via conda (the environment below was built with miniconda3 installed under /home/miniconda3/):

conda env create -f deform_seg_env.yml
conda activate deform_seg_env

The above environment was built with conda version 4.7.11.
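
Optionally, as a quick sanity check (not part of the original setup instructions), confirm that PyTorch imports and can see your GPUs before training:

python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"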

Data preparation

  1. Download the Cityscapes, DeepGlobe and PCa-histo datasets.

  2. Your directory tree should look like this:

$SEG_ROOT/data
├── cityscapes
│   ├── annotations
│   │   ├── testing
│   │   ├── training
│   │   └── validation
│   └── images
│       ├── testing
│       ├── training
│       └── validation
├── histomri
│   ├── train
│   │   ├── images
│   │   └── labels
│   └── val
│       ├── images
│       └── labels
└── deepglob
    ├── land-train
    └── land_train_gt_processed

Note: histomri is the PCa-histo dataset.

  3. Data list .odgt files are provided under ./data for each dataset; adjust them to match your local data layout. (Note: for Cityscapes, check ./data/Cityscape/*.odgt. In my example I removed the city subfolders and put all images under one folder; if your data tree is different, please modify the paths accordingly, e.g. change "images/training/tubingen_000025_000019_leftImg8bit.png" to "images/training/tubingen/000025_000019_leftImg8bit.png".) An illustrative entry is sketched below.
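
Each .odgt file is a plain-text list with one JSON object per line. The snippet below is only a hedged sketch: the field names assume the common .odgt convention (as used in the CSAIL semantic-segmentation-pytorch codebase), and the label filename and output list name are hypothetical, so verify the exact keys against the provided ./data/*.odgt files.

# append one hypothetical entry to a data list (one JSON object per line)
echo '{"fpath_img": "images/training/tubingen_000025_000019_leftImg8bit.png", "fpath_segm": "annotations/training/tubingen_000025_000019_gtFine_labelIds.png", "width": 2048, "height": 1024}' >> my_train_list.odgt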

Reproduce

Full-configuration bash scripts are provided to reproduce the paper results. They are suitable for large-scale experiments in a multi-GPU environment; synchronized batch normalization is deployed.

Training

Train a model by selecting the GPUs ($GPUS) and configuration file ($CFG) to use. During training, the latest checkpoints are saved in the folder ckpt by default.

python3 train_deform.py --gpus $GPUS --cfg $CFG
  • To choose which GPUs to use, you can pass either --gpus 0-7 or --gpus 0,2,4,6.
  • Bash scripts and configurations are provided to reproduce our results:
  • Note: you will need to set your root path SEG_ROOT for the DATASET.root_dataset option in those scripts.
bash quick_start_bash/cityscape_64_128_ours.sh
bash quick_start_bash/cityscape_64_128_uniform.sh
bash quick_start_bash/deepglob_300_300_ours.sh
bash quick_start_bash/deepglob_300_300_uniform.sh
bash quick_start_bash/pcahisto_80_800_ours.sh
bash quick_start_bash/pcahisto_80_800_uniform.sh
  • You can also override options on the command line, for example: python3 train_deform.py TRAIN.num_epoch 10 (a fuller invocation is sketched below).
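
For example, here is a hedged sketch of a full run that combines GPU selection, a configuration file, and inline overrides; it assumes overrides can be appended after --gpus/--cfg as the quick-start scripts do, and the DATASET.root_dataset path is a placeholder you should replace with your own SEG_ROOT:

# train on two GPUs with a chosen config, overriding the dataset root and epoch count
python3 train_deform.py --gpus 0,1 --cfg $CFG \
    DATASET.root_dataset /path/to/SEG_ROOT/data \
    TRAIN.num_epoch 10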

Evaluation

  1. To evaluate a trained model on the validation set, simply override the following options: TRAIN.start_epoch 125 TRAIN.num_epoch 126 TRAIN.eval_per_epoch 1 TRAIN.skip_train_for_eval True (a full command is sketched after the script list below).
  • Alternatively, you can quick-start with the provided bash scripts:
bash quick_start_bash/eval/cityscape_64_128_ours.sh
bash quick_start_bash/eval/cityscape_64_128_uniform.sh
bash quick_start_bash/eval/deepglob_300_300_ours.sh
bash quick_start_bash/eval/deepglob_300_300_uniform.sh
bash quick_start_bash/eval/pcahisto_80_800_ours.sh
bash quick_start_bash/eval/pcahisto_80_800_uniform.sh
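
Putting the overrides from step 1 together, the command below is a sketch only; it assumes a checkpoint trained for 125 epochs under ckpt and the same $CFG used for training:

# run a single evaluation pass on the validation set without further training
python3 train_deform.py --gpus 0 --cfg $CFG \
    TRAIN.start_epoch 125 TRAIN.num_epoch 126 \
    TRAIN.eval_per_epoch 1 TRAIN.skip_train_for_eval True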

Citation

If you use this code for your research, please cite our paper:

@article{jin2021learning,
  title={Learning to Downsample for Segmentation of Ultra-High Resolution Images},
  author={Jin, Chen and Tanno, Ryutaro and Mertzanidou, Thomy and Panagiotaki, Eleftheria and Alexander, Daniel C},
  journal={arXiv preprint arXiv:2109.11071},
  year={2021}
}

@inproceedings{jin2022learning,
  title={Learning to Downsample for Segmentation of Ultra-High Resolution Images},
  author={Chen Jin and Ryutaro Tanno and Thomy Mertzanidou and Eleftheria Panagiotaki and Daniel C. Alexander},
  booktitle={International Conference on Learning Representations},
  year={2022},
  url={https://openreview.net/forum?id=HndgQudNb91}
}