Accurate Neural Network Potential on PyTorch
Full Changelog: https://github.com/aiqm/torchani/compare/2.2.3...2.2.4
Full Changelog: https://github.com/aiqm/torchani/compare/2.2.2...2.2.3
- `unique_consecutive` is now supported by TorchScript, so the workaround for it is removed from TorchANI (#471)
- `requests` (#486)
- `torchani.data` now allows using custom padding values (#489)
- `super`: known to have issues on some Python builds (#496)
- Fix `torchani.data` returning `species` with the wrong dtype (#502)
- `pip` (#500)

Edit: This release is not on PyPI because it exceeds PyPI's maximum file size limit. We will make a new release, 2.1.1, that moves the models outside TorchANI; models will then be downloaded automatically the first time they are used.
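The `unique_consecutive` item above refers to `torch.unique_consecutive`, which collapses runs of equal adjacent values (handy for species tensors grouped per molecule). A pure-Python sketch of the same behavior, not TorchANI code:

```python
from itertools import groupby

def unique_consecutive(seq):
    # Keep one representative per run of equal adjacent values,
    # mirroring what torch.unique_consecutive returns (values only).
    return [key for key, _ in groupby(seq)]

print(unique_consecutive([1, 1, 6, 6, 6, 1, 8]))  # [1, 6, 1, 8]
```

Note that, unlike a set, repeated values are kept when the runs are not adjacent.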
- `torchani.data` has been rewritten. In the new dataset API we no longer split batches into chunks: splitting batches into chunks was an optimization for an old implementation of `AEVComputer`, and it has since become a deoptimization. (#428, #405, #404, #456, #434, #433, #432, #431)
- `AEVComputer` performance improvements and bug fixes (#451, #449, #447, #440, #438, #437, #436, #429, #420, #419, #418, #446)

Please update your PyTorch to the latest nightly build!
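The batching change above can be illustrated with a hedged pure-Python sketch (not the actual `torchani.data` implementation, and the padding value -1 is a hypothetical choice): the old pipeline split each batch into chunks of equal molecule size, while the rewritten one keeps a single padded, rectangular batch.

```python
from itertools import groupby

molecules = [[1, 6], [6], [1, 1, 8], [8, 8]]  # toy per-molecule species lists

# Old scheme (sketch): sort by size and split the batch into chunks of
# equal molecule size, which requires one forward pass per chunk.
chunks = [list(group) for _, group in groupby(sorted(molecules, key=len), key=len)]

# New scheme (sketch): pad every molecule to the batch's widest size and
# run a single forward pass over one rectangular batch.
pad, width = -1, max(map(len, molecules))
batch = [mol + [pad] * (width - len(mol)) for mol in molecules]
```

With a fast vectorized `AEVComputer`, one pass over the padded batch beats many passes over small chunks.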
- `torchani.SpeciesConverter` (#396)
- Support `periodic_table_index=True` when constructing. (#399)
- Networks in an `ANIModel` can now have a name. To use this feature, pass an `OrderedDict` instead of a `list` to its constructor. (#398)
- `torchani.utils.hessian` is now supported by JIT. (#397)

Please update your PyTorch to the latest nightly build!
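The `OrderedDict` naming above can be illustrated with a toy container (a sketch, not the real `ANIModel`): a plain list yields positional names, while an `OrderedDict` lets each network be looked up by a name of your choosing, e.g. an element symbol.

```python
from collections import OrderedDict

class ToyModel:
    # Stand-in for a module container: stores per-species callables by name.
    def __init__(self, modules):
        if isinstance(modules, OrderedDict):
            self.networks = dict(modules)
        else:  # a plain list falls back to positional names "0", "1", ...
            self.networks = {str(i): m for i, m in enumerate(modules)}

positional = ToyModel([abs, len])
named = ToyModel(OrderedDict([("H", abs), ("C", len)]))
print(list(positional.networks))  # ['0', '1']
print(list(named.networks))       # ['H', 'C']
```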
- `EnergyShifter` now always uses float64 as its datatype (#338, #347)

Previously we supported Python 2, which limited the language features we could use. PyTorch has now started dropping Python 2 support in its nightly builds, so TorchANI has also dropped Python 2 support, which enables lots of new language features that improve our code quality:

- the `@` operator for matrix multiplication (#371)

In TorchANI 1.0 we added TorchScript support, but due to bugs and missing features in PyTorch we had to make many workarounds, which introduced some problems. PyTorch has improved a lot since then, so we removed some of those workarounds to make TorchANI great again:

- `enumerate` is now correctly supported by JIT (#358)
- `new_zeros` and related methods are now correctly supported by JIT (#353, #362)
- `ModuleList` is now supported by JIT (#385)
- `torch.arange` is now fixed (#357)
- `__constants__` is deprecated by torch.jit (#378)
- `nan` as a value in the NeuroChem parser (#383)
- Pass `pbc` and `cell` to `torchani.nn.Sequential` (#386)
- Use `torch.triu_indices` to simplify code (#367, #368)

This is just a dummy release that triggers deployment. See https://github.com/aiqm/torchani/releases/tag/1.0 for the changelog.
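The `torch.triu_indices` simplification above concerns enumerating unordered atom pairs. A pure-Python sketch of what those upper-triangular indices are (illustrative only, not TorchANI's actual code):

```python
def triu_indices(n, offset=1):
    # All index pairs (i, j) with j >= i + offset. With offset=1 this is
    # every unordered pair of distinct atoms, excluding i-i self pairs,
    # analogous to torch.triu_indices(n, n, offset).
    return [(i, j) for i in range(n) for j in range(i + offset, n)]

print(triu_indices(4))  # [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
```

Generating these pairs in one call replaces hand-rolled double loops or masking tricks.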
- `torch.jit` support: users can now use the C++ API for deployment. (#303, #305, #306, #307, #308, #326, #327)
- The `AEVComputer` input has changed: `cell` and `pbc` are now keyword arguments. (#303)
- `Ensemble` is now hardcoded to have a size of 8. (#307)
- `torchani.nn.Sequential` is added to include type annotations for JIT. (#307)
- Fix `self_energies` for a dataset containing only one element (#302)
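The keyword-argument change for `cell` and `pbc` can be sketched with a hypothetical stand-in function (not the real `AEVComputer.forward` signature): making the two optional inputs keyword-only prevents them from being swapped silently at call sites.

```python
def forward(species_coordinates, *, cell=None, pbc=None):
    # cell and pbc are keyword-only: forward(x, c, p) now raises TypeError,
    # so callers must spell out forward(x, cell=c, pbc=p).
    return species_coordinates, cell, pbc

print(forward("input"))                     # ('input', None, None)
print(forward("input", cell="c", pbc="p"))  # ('input', 'c', 'p')
```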