Test Time Augmentation (TTA) wrapper for computer vision tasks: segmentation, classification, super-resolution, etc.
Edafa is a simple wrapper that implements Test Time Augmentation (TTA) on images for computer vision problems such as segmentation, classification, super-resolution, and pansharpening. TTA improves results in most of these tasks by applying different transformations to test images and then averaging the predictions for more robust results.
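The idea can be sketched in a few lines of NumPy. This is a conceptual illustration, not edafa's implementation; `model_fn` and `tta_predict` are hypothetical names standing in for any model and for the wrapper's transform-predict-average loop:

```python
import numpy as np

def tta_predict(model_fn, image):
    """Average predictions over flipped copies of the image,
    undoing each flip before averaging (mirrors NO / FLIP_UD / FLIP_LR)."""
    augs = [
        (lambda x: x, lambda x: x),  # "NO": identity transform
        (np.flipud, np.flipud),      # "FLIP_UD": a flip is its own inverse
        (np.fliplr, np.fliplr),      # "FLIP_LR"
    ]
    preds = [undo(model_fn(apply(image))) for apply, undo in augs]
    return np.mean(preds, axis=0)
```

Because each prediction is mapped back to the original orientation before averaging, this works for spatially aligned outputs such as segmentation masks.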
Install via pip:

```
pip install edafa
```
The easiest way to get up and running is to follow the example notebooks for segmentation and classification, which show the effect of TTA on performance.
The whole process can be done in 4 steps:

1. Import the Predictor class for your task: segmentation (`SegPredictor`) or classification (`ClassPredictor`):

```python
from edafa import SegPredictor
```

2. Inherit the Predictor class and implement the main function `predict_patches(self, patches)`, where your model takes image patches (`numpy.ndarray`) and returns predictions (`numpy.ndarray`):

```python
class myPredictor(SegPredictor):
    def __init__(self, model, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.model = model

    def predict_patches(self, patches):
        return self.model.predict(patches)
```

3. Instantiate your class:

```python
p = myPredictor(model, patch_size, model_output_channels, conf_file_path)
```

4. Call `predict_images()` to run the prediction process:

```python
p.predict_images(images, overlap=0)
```
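Conceptually, `predict_images` runs the model on patches of each image; with `overlap=0` this amounts to non-overlapping tiling. A minimal sketch of that tiling step (an illustration, not edafa's actual code; `tile` is a hypothetical helper):

```python
import numpy as np

def tile(image, patch_size):
    """Split an image into non-overlapping square patches (overlap=0)."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)
```

The resulting batch of patches is what a `predict_patches` implementation like the one above would receive.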
The configuration file specifies the augmentations to apply (`augs`), the type of mean used to combine their predictions (`mean`), and the image bit depth (`bits`).
Example of a conf file in JSON format:

```json
{
    "augs": ["NO", "FLIP_UD", "FLIP_LR"],
    "mean": "ARITH",
    "bits": 8
}
```
Example of a conf file in YAML format:

```yaml
augs: [NO, FLIP_UD, FLIP_LR]
mean: ARITH
bits: 8
```
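The `mean` key selects how the per-augmentation predictions are combined. A sketch of what `"ARITH"` (arithmetic mean) computes, with a geometric mean shown for contrast (whether a geometric-mean option exists in edafa is an assumption here, not stated above):

```python
import numpy as np

# Predictions for the same two pixels from two different augmentations
preds = np.array([[0.9, 0.1],
                  [0.5, 0.3]])

arith = preds.mean(axis=0)                # "ARITH": arithmetic mean
geo = np.exp(np.log(preds).mean(axis=0))  # geometric mean (assumed alternative)
```

The geometric mean down-weights predictions that any single augmentation scores near zero, which is why some TTA setups prefer it for probabilities.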
You can pass either a file path (JSON or YAML) or the raw JSON text to the `conf` parameter.
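Accepting either form could look like the following hypothetical helper (`load_conf` is not part of edafa, and YAML handling is omitted for brevity):

```python
import json
import os

def load_conf(conf):
    """Return a config dict from either a file path or a raw JSON string."""
    if os.path.isfile(conf):
        with open(conf) as f:
            return json.load(f)  # a real version would also handle .yaml files
    return json.loads(conf)
```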
All contributions are welcome. Please make sure all tests pass before opening a pull request. To run the tests:

```
nosetests
```