Visualization toolkit for neural networks in PyTorch!
```
pip install flashtorch
pip install flashtorch -U
```
`flashtorch.saliency.Backprop` can now handle models with mono-channel/grayscale input images.
`flashtorch.saliency.Backprop.visualize` now correctly passes the `use_gpu` flag down to `calculate_gradients`.
`README.md` is no longer read in `setup.py`: this avoids a Unicode decoding error (reported in #14). `setup.py` now gets the `long_description` from its docstring.
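The docstring-as-long-description pattern can be sketched as follows; the package name and docstring text here are illustrative, not FlashTorch's actual metadata.

```python
"""An example package.

Everything in this module docstring doubles as the package's long
description, so setup.py never has to open README.md and risk a
platform-dependent Unicode decoding error.
"""

# In setup.py, the docstring is passed straight to setuptools, e.g.:
#   setup(name="example", version="0.0.1", long_description=__doc__)
long_description = __doc__

print(long_description.splitlines()[0])
```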
`flashtorch.utils.visualize`: this functionality was specific to creating saliency maps, and has therefore been moved to a class method of `flashtorch.saliency.Backprop`. Refer to the notebooks below for details and how to use it:
`flashtorch.activmax.GradientAscent`: a new API which implements activation maximization via gradient ascent. It has three public-facing APIs:

- `GradientAscent.optimize`: generates an image that maximally activates the target filter.
- `GradientAscent.visualize`: optimizes for the target layer/filter and visualizes the output.
- `GradientAscent.deepdream`: creates DeepDream images.

Refer to the notebooks below for details and how to use it:
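The core idea behind activation maximization, gradient ascent on an input to maximize a scalar response, can be sketched in plain Python on a toy objective. The function, step size, and step count below are illustrative; FlashTorch applies the same loop to a CNN filter's activation with an image as the input.

```python
def gradient_ascent(grad_fn, x, lr=0.1, steps=200):
    """Repeatedly step x in the direction that increases the objective."""
    for _ in range(steps):
        x = x + lr * grad_fn(x)  # ascend (plus sign), not descend
    return x

# Toy objective f(x) = -(x - 3)^2, maximized at x = 3; its gradient is -2(x - 3).
x_opt = gradient_ascent(lambda x: -2.0 * (x - 3.0), x=0.0)
print(round(x_opt, 4))
```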
`flashtorch.utils.standardize_and_clip`: users can optionally set the `saturation` and `brightness`.
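Roughly, standardize-and-clip shifts values to a target mean (brightness) and scale (saturation), then clips them to a displayable range. The pure-Python sketch below illustrates that idea only; the actual signature and defaults of `flashtorch.utils.standardize_and_clip` may differ.

```python
def standardize_and_clip(values, brightness=0.5, saturation=0.1,
                         lo=0.0, hi=1.0):
    """Normalize values to mean=brightness, std=saturation, clip to [lo, hi]."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5 or 1.0  # guard std=0
    return [min(hi, max(lo, brightness + saturation * (v - mean) / std))
            for v in values]

out = standardize_and_clip([-10.0, 0.0, 10.0])
print(out)
```

Raising `saturation` spreads the values further from the mean; raising `brightness` shifts them all toward the top of the range.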
Users can explicitly set the device to use when calculating gradients with an instance of `Backprop`, by passing `use_gpu=True`. If it is `True` and `torch.cuda.is_available()` returns `True`, the computation is moved to the GPU. It defaults to `False` if not provided.
```python
from flashtorch.saliency import Backprop
from torchvision import models

# ... prepare input and target_class

model = models.alexnet()  # any PyTorch model; AlexNet is just an example
backprop = Backprop(model)
gradients = backprop.calculate_gradients(input, target_class, use_gpu=True)
```
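The device selection described above boils down to a guard like the following. `resolve_device` is a hypothetical helper written here to illustrate the behaviour; it is not part of FlashTorch's API.

```python
def resolve_device(use_gpu=False, cuda_available=False):
    """Use the GPU only when it is both requested and actually available."""
    return "cuda" if use_gpu and cuda_available else "cpu"

# use_gpu defaults to False, so computation stays on the CPU unless asked for.
print(resolve_device())                                    # cpu
print(resolve_device(use_gpu=True, cuda_available=True))   # cuda
print(resolve_device(use_gpu=True, cuda_available=False))  # cpu: no GPU present
```

In real code the `cuda_available` argument would be the result of `torch.cuda.is_available()`.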
`setup.py` now better indicates the supported Python versions.
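Supported versions are typically declared in `setup.py` via `python_requires` and trove classifiers, sketched below. The version numbers shown are illustrative, not FlashTorch's actual support matrix.

```python
# Sketch of setup.py metadata declaring supported Python versions.
metadata = {
    # pip refuses to install on interpreters older than this:
    "python_requires": ">=3.5",
    # PyPI displays these classifiers on the project page:
    "classifiers": [
        "Programming Language :: Python :: 3.5",
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
    ],
}
# These keyword arguments would be passed to setuptools.setup(**metadata).
print(sorted(metadata))
```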