Deep universal probabilistic programming with Python and PyTorch
* `param_store.py` type hints by @ordabayevy in https://github.com/pyro-ppl/pyro/pull/3271
* `pyro.poutine.handlers` by @ordabayevy in https://github.com/pyro-ppl/pyro/pull/3283
* `functools.wraps` to preserve handler signature by @ordabayevy in https://github.com/pyro-ppl/pyro/pull/3287
* `pyro.poutine.runtime` by @ordabayevy in https://github.com/pyro-ppl/pyro/pull/3288
* `pyro.poutine.messenger` by @ordabayevy in https://github.com/pyro-ppl/pyro/pull/3290
* `pyro.primitives` & `poutine.block_messenger` by @ordabayevy in https://github.com/pyro-ppl/pyro/pull/3292
* `Trace`, `TraceMessenger`, & `pyro.poutine.guide` by @ordabayevy in https://github.com/pyro-ppl/pyro/pull/3299
* `gate` & `gate_logits` in `ZeroInflatedDistribution` by @ordabayevy in https://github.com/pyro-ppl/pyro/pull/3303
* `warn_unreachable=True` by @ordabayevy in https://github.com/pyro-ppl/pyro/pull/3312
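For context on the `gate` / `gate_logits` parameters of `ZeroInflatedDistribution`: a zero-inflated distribution mixes a point mass at zero (with probability `gate`) into a base distribution, and `gate_logits` is the same quantity on the log-odds scale. A minimal plain-Python sketch of the density, using a Poisson base for illustration (this is not Pyro's implementation):

```python
import math

# Sketch of a zero-inflated density: with probability `gate` emit a
# structural zero, otherwise draw from a base distribution (here Poisson).
# `gate_logits` is just gate on the log-odds scale: gate = sigmoid(gate_logits).
# Illustration only, not Pyro's ZeroInflatedDistribution implementation.


def poisson_log_prob(k: int, rate: float) -> float:
    return k * math.log(rate) - rate - math.lgamma(k + 1)


def zero_inflated_log_prob(k: int, rate: float, gate: float) -> float:
    if k == 0:
        # P(0) = gate + (1 - gate) * base.P(0): a zero can come from either component
        return math.log(gate + (1.0 - gate) * math.exp(poisson_log_prob(0, rate)))
    # P(k) = (1 - gate) * base.P(k) for k > 0: only the base emits nonzero values
    return math.log(1.0 - gate) + poisson_log_prob(k, rate)


gate_logits = 0.0                            # log-odds of a structural zero
gate = 1.0 / (1.0 + math.exp(-gate_logits))  # sigmoid -> gate = 0.5

p0 = math.exp(zero_inflated_log_prob(0, rate=2.0, gate=gate))
p1 = math.exp(zero_inflated_log_prob(1, rate=2.0, gate=gate))
print(round(p0, 6), round(p1, 6))  # 0.567668 0.135335
```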
Full Changelog: https://github.com/pyro-ppl/pyro/compare/1.8.6...1.9.0
* `ProvenanceTensor` to use pytree by @ordabayevy in https://github.com/pyro-ppl/pyro/pull/3223
* `torch>=1.11.0` by @francois-rozet in https://github.com/pyro-ppl/pyro/pull/3242
Full Changelog: https://github.com/pyro-ppl/pyro/compare/1.8.5...1.8.6
This release includes a number of fixes to support PyTorch 2.
* `infer.inspect` by @ordabayevy in https://github.com/pyro-ppl/pyro/pull/3198
* `CorrLCholeskyTransform` in favor of upstream `CorrCholeskyTransform` by @ordabayevy in https://github.com/pyro-ppl/pyro/pull/3199
* `pl.Trainer` args to argparse by @ordabayevy in https://github.com/pyro-ppl/pyro/pull/3217
Full Changelog: https://github.com/pyro-ppl/pyro/compare/1.8.4...1.8.5
* `log_prob` for corr < -1e-8 in `SineBivariateVonMises` by @OlaRonning in https://github.com/pyro-ppl/pyro/pull/3165
* `GroupedNormalNormal` distribution by @martinjankowiak in https://github.com/pyro-ppl/pyro/pull/3163
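As background for the `GroupedNormalNormal` distribution: its name refers to the standard Normal-Normal conjugacy, in which a normal prior over a latent mean combined with normal observations yields a closed-form normal posterior (precisions add). A refresher on that textbook identity in plain Python (not Pyro's code):

```python
# Normal-Normal conjugacy refresher (standard textbook identity, not Pyro's code):
#   prior      mu ~ Normal(mu0, tau^2)
#   likelihood x_i ~ Normal(mu, sigma^2),  i = 1..n
#   posterior  mu | x ~ Normal(post_mean, post_var), where precisions add.


def normal_normal_posterior(xs, mu0, tau2, sigma2):
    n = len(xs)
    prior_prec = 1.0 / tau2
    lik_prec = 1.0 / sigma2
    post_prec = prior_prec + n * lik_prec  # precisions of prior and data add
    # posterior mean is the precision-weighted average of prior mean and data
    post_mean = (prior_prec * mu0 + lik_prec * sum(xs)) / post_prec
    return post_mean, 1.0 / post_prec


post_mean, post_var = normal_normal_posterior([1.0, 3.0], mu0=0.0, tau2=1.0, sigma2=1.0)
print(post_mean, post_var)  # 1.3333333333333333 0.3333333333333333
```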
Full Changelog: https://github.com/pyro-ppl/pyro/compare/1.8.3...1.8.4
Full Changelog: https://github.com/pyro-ppl/pyro/compare/1.8.2...1.8.3
* `contrib.funsor.TraceEnum_ELBO` model enumeration by @ordabayevy in https://github.com/pyro-ppl/pyro/pull/3063
* `model_args` and `model_kwargs` of `render_model` by @dilaragokay in https://github.com/pyro-ppl/pyro/pull/3083
* `batch_expand` helper function in air example by @ordabayevy in https://github.com/pyro-ppl/pyro/pull/3086
* `TraceGraph_ELBO` by @ordabayevy in https://github.com/pyro-ppl/pyro/pull/3081
* `sample` in tutorial intro_long by @fraterenz in https://github.com/pyro-ppl/pyro/pull/3112
* `examples/contrib/funsor/hmm.py` by @ordabayevy in https://github.com/pyro-ppl/pyro/pull/3126
Full Changelog: https://github.com/pyro-ppl/pyro/compare/1.8.1...1.8.2
* `render_model()` by @karm-patel in https://github.com/pyro-ppl/pyro/pull/3039
* `minipyro.py` to fix #3003 by @luiarthur in https://github.com/pyro-ppl/pyro/pull/3004
Full Changelog: https://github.com/pyro-ppl/pyro/compare/1.8.0...1.8.1
* `ProvenanceTensor` and `numpyro.render_model()`
* Like `AutoMultivariateNormal`, but with sparse precision matrix factorization based on dependency structure in the model.
* `StreamingMCMC` is a drop-in replacement for `MCMC` that avoids storing samples during inference by computing statistics such as mean, variance, and `r_hat` in a streaming fashion. You can define your own statistics using the `pyro.ops.streaming` module, either by composing existing statistics or by defining your own subclass of `StreamingStats` #2856.
* `poutine.reparam` is now compatible with initialization logic in autoguides and MCMC #2876. Previously you needed to manually transform the value in `init_to_value()` when using a reparametrizer. In Pyro 1.7 you can specify a single `init_to_value()` output that should work regardless of whether your model is transformed by a reparametrizer. Note that this involves a major refactoring of the `Reparam` interface, namely replacing `.__call__()` with `.apply()`. If you have defined custom reparametrizers using `.__call__()`, you should refactor them before the next Pyro release.
* Like `AutoNormal`, this guide is interpretable and structured. Like `NeuTraReparam`, this guide is flexible and can be used to improve geometry for subsequent inference via HMC or NUTS.
* `save_params` option, which can save memory #2793
* `pyro.contrib.funsor.infer_discrete` #2789
* `poutine.do` to avoid duplicate entries in `cond_indep_stack` #2846
* `infer.csis` to ignore unused gradients, thanks to @fshipy #2828
* `mypy` for type checking, thanks to @kamathhrishi #2853 #2858
* `black` code formatter #2891
* `Normal(loc, scale, validate_args=False)`
* `LKJCorrCholesky` distribution to upstream `LKJCholesky` distribution #2771
* `VonMises` #2736
* `.mode`, `.edge_mean` #2727
* `positive_ordered_vector`, `corr_matrix` #2762
* `sphere` #2736
* `softplus_positive` and `softplus_lower_cholesky` constraints with numerically stable `SoftplusTransform` and `SoftplusLowerCholeskyTransform` #2767
* `VectorizedMarkovMessenger` for parallel scan enumeration #2703 by @ordabayevy
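The streaming-statistics idea behind `StreamingMCMC` can be illustrated with a plain-Python sketch: update O(1)-memory running summaries per sample instead of storing the whole chain. This is Welford's online algorithm, not Pyro's actual `pyro.ops.streaming` API:

```python
# Sketch of streaming (one-pass) statistics, the idea behind StreamingMCMC:
# update running summaries per sample so the chain itself never needs to be
# stored. This is Welford's online algorithm, not Pyro's pyro.ops.streaming API.


class StreamingMeanVariance:
    """Accumulate count, mean, and unbiased variance in O(1) memory."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self._m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x: float) -> None:
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self._m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        return self._m2 / (self.count - 1) if self.count > 1 else 0.0


samples = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
stats = StreamingMeanVariance()
for x in samples:
    stats.update(x)  # each sample can be discarded after this call

print(stats.count, round(stats.mean, 6), round(stats.variance, 6))  # 8 5.0 4.571429
```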
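The "numerically stable" point about `SoftplusTransform` above can be made concrete in plain Python (an illustration of the standard stability trick, not Pyro's implementation): the naive `log(1 + exp(x))` overflows for large `x`, while an algebraically equivalent rewrite does not.

```python
import math

# Why a naive softplus is numerically unstable, and the standard stable
# rewrite used by softplus-based transforms. Illustration only, not
# Pyro's SoftplusTransform implementation.


def naive_softplus(x: float) -> float:
    return math.log(1.0 + math.exp(x))  # math.exp overflows for large x


def stable_softplus(x: float) -> float:
    # softplus(x) = max(x, 0) + log1p(exp(-|x|)); exp(-|x|) <= 1 never overflows
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))


def softplus_inverse(y: float) -> float:
    # x = log(exp(y) - 1) rewritten as y + log1p(-exp(-y)), stable for large y
    return y + math.log1p(-math.exp(-y))


print(stable_softplus(1000.0))         # 1000.0 (naive version raises OverflowError)
print(round(stable_softplus(0.0), 6))  # 0.693147 (= log 2)
print(round(softplus_inverse(stable_softplus(3.0)), 6))  # 3.0 (round trip)
```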