A highly efficient implementation of Gaussian Processes in PyTorch
- `dist` by @esantorella in https://github.com/cornellius-gp/gpytorch/pull/2336
Full Changelog: https://github.com/cornellius-gp/gpytorch/compare/v1.10...v1.11
Full Changelog: https://github.com/cornellius-gp/gpytorch/compare/v1.9.1...v1.10
- `step` by @dannyfriar in https://github.com/cornellius-gp/gpytorch/pull/2118
- `psd_safe_cholesky`, `NotPSDError`, and `assert_allclose` by @SebastianAment in https://github.com/cornellius-gp/gpytorch/pull/2130
- `Kernel.covar_dist` by @Balandat in https://github.com/cornellius-gp/gpytorch/pull/2138
- `_sq_dist` when `x1_eq_x2` by @SebastianAment in https://github.com/cornellius-gp/gpytorch/pull/2204
- `expand_batch` by @dannyfriar in https://github.com/cornellius-gp/gpytorch/pull/2185
- `postprocess` by @SebastianAment in https://github.com/cornellius-gp/gpytorch/pull/2205
- `LazyEvaluatedKernelTensor` recall the grad state at instantiation by @SebastianAment in https://github.com/cornellius-gp/gpytorch/pull/2229
- `device` property to `Kernel`s, add unit tests by @Balandat in https://github.com/cornellius-gp/gpytorch/pull/2234
Full Changelog: https://github.com/cornellius-gp/gpytorch/compare/v1.9.0...v1.9.1
Starting with this release, the `LazyTensor` functionality of GPyTorch has been pulled out into its own separate Python package, called `linear_operator`. Most users won't notice the difference (at the moment), but power users will notice a few changes.
If you have your own custom LazyTensor code, don't worry: this release is backwards compatible! However, you'll see a lot of annoying deprecation warnings 😄
- `gpytorch.lazy.*LazyTensor` classes now live in the `linear_operator` repo, and are now called `linear_operator.operators.*LinearOperator`.
  - `gpytorch.lazy.DiagLazyTensor` is now `linear_operator.operators.DiagLinearOperator`
  - `NonLazyTensor` is now `DenseLinearOperator`
- `gpytorch.lazify` and `gpytorch.delazify` are now `linear_operator.to_linear_operator` and `linear_operator.to_dense`, respectively.
- The `_quad_form_derivative` method has been renamed to `_bilinear_derivative` (a more accurate name!)
- `LinearOperator` method names now reflect their corresponding PyTorch names. This includes:
  - `add_diag` -> `add_diagonal`
  - `diag` -> `diagonal`
  - `inv_matmul` -> `solve`
  - `symeig` -> `eigh` and `eigvalsh`
- `LinearOperator` now has the `mT` property.
- `LinearOperator`s are now compatible with the torch API! For example, the following code works:

```python
diag_linear_op = linear_operator.operators.DiagLinearOperator(torch.randn(10))
torch.matmul(diag_linear_op, torch.randn(10, 2))  # returns a torch.Tensor!
```
- `gpytorch.functions` - all of the core functions used by `LazyTensor`s now live in the LinearOperator repo. This includes: `diagonalization`, `dsmm`, `inv_quad`, `inv_quad_logdet`, `matmul`, `pivoted_cholesky`, `root_decomposition`, `solve` (formerly `inv_matmul`), and `sqrt_inv_matmul`.
- `gpytorch.utils` - a few have moved to the LinearOperator repo. This includes: `broadcasting`, `cholesky`, `contour_integral_quad`, `getitem`, `interpolation`, `lanczos`, `linear_cg`, `minres`, `permutation`, `stable_pinverse`, `qr`, `sparse`, `StochasticLQ`, and `toeplitz`.

Full Changelog: https://github.com/cornellius-gp/gpytorch/compare/v1.8.1...v1.9.0
Full Changelog: https://github.com/cornellius-gp/gpytorch/compare/v1.8.0...v1.8.1
Full Changelog: https://github.com/cornellius-gp/gpytorch/compare/v1.7.0...v1.8.0
Important: This release requires Python 3.7 (up from 3.6) and PyTorch 1.10 (up from 1.9).
This release contains several bug fixes and performance improvements.
- `gpytorch.kernels.PiecewisePolynomialKernel` (#1738)
- `fast_computations` flags are turned off (#1709)
- `stable_qr` function (#1714)
- `num_classes` in `gpytorch.likelihoods.DirichletLikelihood` should be an integer (#1728)

This release adds 2 new model classes, as well as a number of bug fixes: