PyTorch interface for the IPU
- ``poptorch.set_overlap_for_input`` and ``poptorch.set_overlap_for_output`` can now be applied to tuples, lists, and dicts of tensors.
- Handle ``aten::lstm`` directly when compiling with dispatch for PopART, allowing ``set_available_memory`` to work with it.
- Support for ``aten::index_fill_.int_Scalar``.
- Support for the ``tanh`` approximate for GELU.
- Support for the ``torch.scatter_reduce`` operation.
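As a CPU-side sketch of two of the ops listed above (the tanh-approximate GELU and ``torch.scatter_reduce``), assuming PyTorch 1.12 or later; on the IPU these would run inside a module compiled with PopTorch, which is omitted here:

```python
import torch

# GELU with the tanh approximation vs. the exact (erf-based) form.
x = torch.linspace(-2, 2, 5)
g_tanh = torch.nn.functional.gelu(x, approximate="tanh")
g_exact = torch.nn.functional.gelu(x)

# scatter_reduce: reduce src values into positions given by index.
src = torch.tensor([1.0, 2.0, 3.0, 4.0])
index = torch.tensor([0, 0, 1, 1])
out = torch.zeros(2).scatter_reduce(0, index, src, reduce="sum")
print(out)  # tensor([3., 7.])
```

The tanh form is a cheaper approximation of GELU; the two curves agree to roughly three decimal places over this range.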
- Fixed ``clamp_max`` in cases where the max is large.
- Fixed ``torch.addmm``.
- Fixed ``BCEWithLogitsLoss`` with a dtype of ``half``.
- Use the IPU ``DispatchKey`` instead of the XLA ``DispatchKey``, which means that error messages will now mention IPU rather than XLA.
- Removed support for ``torch.jit.trace()``. For help on migration issues when using the dispatcher frontend, see the `Legacy tracing frontend <https://docs.graphcore.ai/projects/poptorch-user-guide/en/3.0.0/tracing.html>`__ section in the 3.0.0 version of the PopTorch user guide.
- Removed the ``Autocast`` API (this was only available when using the tracing frontend).
- Fixed ``index_put`` when the indices are a one-dimensional vector.
- Use the dispatcher instead of ``torch.jit.trace()`` by default to capture the graph on supported platforms (see :numref:`dispatcher-support`). Use ``poptorch.Options.Jit.traceModel(True)`` to revert to the previous behaviour.
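The ``index_put`` fix above concerns the case where the indices form a one-dimensional vector. A minimal sketch of that pattern in plain PyTorch on CPU (no IPU required):

```python
import torch

# index_put_ with a one-dimensional index vector: write values
# at positions 1 and 3 of a length-5 tensor, in place.
t = torch.zeros(5)
idx = torch.tensor([1, 3])            # 1-D vector of indices
t.index_put_((idx,), torch.tensor([10.0, 20.0]))
print(t.tolist())  # [0.0, 10.0, 0.0, 20.0, 0.0]
```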
- Fixed ``running_var`` at training time.
- Fixed ``expand`` when the desired shape contains both added dimensions and ``-1``.
- Fixed ``torch.gather`` in some cases where the index tensor has come from an ``expand`` or ``expand_as``.
- Support ``zero_infinity`` in ``torch.nn.CTCLoss``.
- Deprecated ``torch.jit.trace()``.
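A CPU illustration of what the ``zero_infinity`` flag does in ``torch.nn.CTCLoss`` (plain PyTorch; the changelog entry adds the same flag on the IPU). When a target is too long to align with the input, the per-sample loss is infinite; ``zero_infinity=True`` replaces it with zero:

```python
import torch

torch.manual_seed(0)
T, C, N = 4, 5, 1                    # input length, classes (0 = blank), batch
log_probs = torch.randn(T, N, C).log_softmax(2)
targets = torch.tensor([[1, 2, 3, 4, 1, 2]])   # length 6 > T: unalignable
input_lengths = torch.tensor([T])
target_lengths = torch.tensor([6])

loss_inf = torch.nn.CTCLoss(zero_infinity=False)(
    log_probs, targets, input_lengths, target_lengths)
loss_zero = torch.nn.CTCLoss(zero_infinity=True)(
    log_probs, targets, input_lengths, target_lengths)
print(loss_inf.item(), loss_zero.item())  # inf 0.0
```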
- Support for ``torch.Tensor.exponential_`` and ``torch.distributions.Exponential``.
- Support for ``torch.int16`` tensors.
- Ignore missing values when reloading an Optimizer state.
- Support saving Optimizer states when compiling offline.
- Also save the random number generator's state and the seed when saving a model.
- Support for ``col2im`` (used by ``torch.nn.Fold``).
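A CPU sketch of ``torch.nn.Fold`` (which is backed by ``col2im``, the op the entry above adds). Fold is the inverse of Unfold: it sums sliding local blocks back into an image tensor, so overlapping positions accumulate:

```python
import torch

unfold = torch.nn.Unfold(kernel_size=2)
fold = torch.nn.Fold(output_size=(3, 3), kernel_size=2)

x = torch.arange(9.0).reshape(1, 1, 3, 3)
blocks = unfold(x)   # shape (1, 4, 4): four flattened 2x2 patches
y = fold(blocks)     # overlapping positions are summed back together
print(y.shape)       # torch.Size([1, 1, 3, 3])
```

The centre pixel belongs to all four 2x2 patches, so it is counted four times in the folded result, while each corner is counted once.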
- Improve error message of ``aten::index`` and ``aten::index_put_`` when indexing with boolean tensor masks.
- Support for ``torch.argsort``.
- Support for ``torch.nn.RNN``.
- Add support for ``__repr__`` in ``PoplarExecutor``.
- For models annotated with ``BeginBlock``, show the IPU blocks in ``repr(model)``.
- Improve implementation of ``torch.scatter_add``.
- Support for ``torch.nn.utils.weight_norm``.
- Support for ``torch.randperm``.
- Support for ``torch.nn.functional.cosine_similarity`` and ``torch.nn.CosineSimilarity``.
- Support for ``torch.all``, ``torch.any``, ``torch.Tensor.all`` and ``torch.Tensor.any``.
- Support for ``torch.Tensor.exponential_`` and ``torch.distributions.Exponential``.
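CPU illustrations of a few of the ops listed above (plain PyTorch; on the IPU they would run inside a compiled PopTorch model, omitted here):

```python
import torch

# Row-wise cosine similarity between two batches of vectors.
a = torch.tensor([[1.0, 0.0], [0.0, 1.0]])
b = torch.tensor([[1.0, 0.0], [1.0, 0.0]])
cos = torch.nn.functional.cosine_similarity(a, b, dim=1)
print(cos)  # tensor([1., 0.])

# Boolean reductions over a mask.
mask = torch.tensor([True, False])
print(mask.any().item(), mask.all().item())  # True False

# randperm: a random permutation of 0..4.
perm = torch.randperm(5)
print(sorted(perm.tolist()))  # [0, 1, 2, 3, 4]
```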
- Removed ``poptorch.AnchorMode`` and ``poptorch.Options.anchorMode``, which were deprecated in favour of ``poptorch.OutputMode`` and ``poptorch.Options.outputMode`` respectively.
- Fixed ``torch.clamp`` with integer tensors.
- Fixed ``torch.index_put_`` when operating on slices.
- Fixed ``torch.chunk`` when the dim size is indivisible by the specified number of chunks.
- Fixed a bug where ``tensor.half()`` was in-place.
- Fixed ``torch.flip`` with negative indices.
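The expected CPU behaviour of two of the fixed ops, shown in plain PyTorch: ``chunk`` with an indivisible dim size returns unevenly sized chunks, and ``flip`` accepts negative dims:

```python
import torch

# chunk: 7 elements into 3 chunks -> sizes [3, 3, 1].
t = torch.arange(7)
chunks = torch.chunk(t, 3)
print([c.numel() for c in chunks])  # [3, 3, 1]

# flip along the last dimension using a negative dim index.
m = torch.arange(6).reshape(2, 3)
flipped = torch.flip(m, dims=[-1])
print(flipped[0].tolist())  # [2, 1, 0]
```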
- Improvements to the ``poptorch.Options`` class.
- Added ``poptorch.Options.disableModuleNamescope``.
- Improvements to the ``poptorch.optim`` optimizers.
- Support for ``optimizer.state_dict()`` and ``optimizer.load_state_dict()``.
- Added a ``removeBlocks`` function to remove block annotations from a Model / Layer.
- Added ``poptorch.CPU``.
- Support for ``im2col``.
- Deprecated ``poptorch.Options.anchorMode`` in favour of ``poptorch.Options.outputMode``.
- Deprecated ``poptorch.Options.defaultAnchorMode`` in favour of ``poptorch.Options.defaultOutputMode``.
- Deprecated ``poptorch.AnchorMode`` in favour of ``poptorch.OutputMode``.
- Support for ``torch.nn.Embedding`` with ``padding_idx``.
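A CPU sketch of ``padding_idx`` in ``torch.nn.Embedding`` (plain PyTorch; the entry above adds the same behaviour on the IPU). The padding row is initialised to zeros and receives no gradient updates:

```python
import torch

emb = torch.nn.Embedding(num_embeddings=4, embedding_dim=3, padding_idx=0)
out = emb(torch.tensor([0, 2]))
print(out[0])  # the padding embedding: all zeros
```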