GPax Versions

Gaussian Processes for Experimental Sciences

0.0.6

1 year ago
  • Add utility functions for hypothesis learning based on arXiv:2112.06649. The exploration phase of hypothesis learning can now be implemented as follows:
import gpax
import jax.numpy as jnp
import numpy as np

# Lists with physical models and probabilistic priors over their parameters
models = [model1, model2, model3]
model_priors = [model1_priors, model2_priors, model3_priors]

# Initialize the reward and predictive uncertainty records
record = np.zeros((len(models), 2))
obj_history = []

def compute_reward(obj_history):
    """Simple reward function"""
    r = 1 if obj_history[-1] < obj_history[-2] else -1
    return r

# Run active hypothesis learning for 15 steps
for e in range(15):
  
    # Sample model according to softmax or epsilon-greedy selection policy
    idx = gpax.hypo.sample_next(
        rewards=record[:, 1], method="softmax", temperature=1.2)
    
    # Derive fully Bayesian predictive uncertainty with the selected model
    obj, _ = gpax.hypo.step(
        models[idx], model_priors[idx],
        X_measured, y_measured, X_unmeasured,
        gp_wrap=True, gp_kernel='Matern'  # wrap the sampled model into a Gaussian process
    )
    
    # Update predictive uncertainty records
    obj_history.append(jnp.nanmedian(obj).item())
    if e < 1:
        continue  # need at least two objective values before computing a reward

    # Compute reward and update reward records
    r = compute_reward(obj_history)
    record = gpax.hypo.update_record(record, idx, r)
    
    # Evaluate the function at the point suggested by the acquisition objective
    next_point_idx = obj.argmax()
    measured_point = measure(next_point_idx)  # your actual measurement function goes here

    # Update arrays with measured and unmeasured data (user-defined helper)
    X_measured, y_measured, X_unmeasured = update_datapoints(
        X_measured, y_measured, X_unmeasured, next_point_idx, measured_point)
  • Minor bug fixes
  • Documentation updates
  • Test updates

0.0.5

1 year ago
  • Allow specifying the CPU or GPU device on which to perform training/prediction via the device keyword argument. This can be useful in small-data regimes, where model inference with NUTS runs faster on the CPU but the computation of predictive means and variances is faster on the GPU. Example:
import gpax
import jax

# Get random keys for training and prediction
rng_key, rng_key_predict = gpax.utils.get_keys()
# Specify devices for training and prediction
device_train = jax.devices("cpu")[0]  # training on CPU
device_predict = jax.devices("gpu")[0]  # prediction on GPU
# Initialize model
gp_model = gpax.ExactGP(input_dim=1, kernel='Matern')
# Run HMC with the iterative No-U-Turn Sampler (NUTS) on the CPU to infer GP model parameters
gp_model.fit(rng_key, X, y, device=device_train)  # X and y are small arrays
# Make a prediction on new inputs using the GPU
y_pred, y_sampled = gp_model.predict(rng_key_predict, X_new, device=device_predict)
  • Add a utility function to visualize NumPyro's distributions. Example:
import gpax
import numpyro

d = numpyro.distributions.Gamma(2, 5)
gpax.utils.dviz(d, samples=10000)

[Figure: visualization of samples from the Gamma(2, 5) distribution]

  • Add the option to pass a custom jitter value (a small positive term added to the diagonal of the covariance matrix for better numerical stability) to all models. Example:
# Get random keys for training and prediction
rng_key, rng_key_predict = gpax.utils.get_keys()
gp_model = gpax.ExactGP(input_dim=1, kernel='Matern')
gp_model.fit(rng_key, X, y, jitter=1e-5)
y_pred, y_sampled = gp_model.predict(rng_key_predict, X_new, jitter=1e-5)
  • Add an example of Bayesian optimization and expand the descriptions in the markdown cells of the existing examples
  • Improve documentation

0.0.4

2 years ago
  • Allow using custom kernel functions (see the sketch after this list)
  • Add the option to make noiseless predictions with a trained GP model
  • Add the option to sample from prior predictive distribution of the GP and DKL models
  • Add the option for running fit_predict in viDKL in parallel on multiple GPUs
  • Add a wrapper for NumPyro’s Bayesian Inference on structured probabilistic models
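
A minimal sketch of the custom-kernel and noiseless-prediction options above, assuming the kernel follows the (X, Z, params) calling convention of GPax's built-in kernels (a params dict with 'k_length' and 'k_scale' entries) and that noiseless predictions are requested via a noiseless keyword; both details are illustrative assumptions rather than a confirmed API:

import gpax
import jax.numpy as jnp

def custom_rbf_kernel(X, Z, params):
    """Hypothetical RBF-like kernel; the (X, Z, params) signature and the
    'k_length'/'k_scale' parameter names mirror GPax's built-in kernels"""
    X, Z = X / params["k_length"], Z / params["k_length"]
    sq_dist = jnp.sum(X**2, -1)[:, None] + jnp.sum(Z**2, -1)[None, :] - 2 * X @ Z.T
    return params["k_scale"] * jnp.exp(-0.5 * sq_dist)

# X, y, and X_new are assumed to be user-supplied arrays, as in the examples above
rng_key, rng_key_predict = gpax.utils.get_keys()
gp_model = gpax.ExactGP(input_dim=1, kernel=custom_rbf_kernel)
gp_model.fit(rng_key, X, y)
# 'noiseless=True' is assumed to switch off the observation-noise term in the prediction
y_pred, y_sampled = gp_model.predict(rng_key_predict, X_new, noiseless=True)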

0.0.3

2 years ago

New functionalities

  • Extended all GP and DKL models to vector-valued targets
  • Added an ensemble mode to viDKL for more accurate predictions and better uncertainty estimates
  • Added a 'batch update' mode to the Thompson and UCB acquisition functions for the ExactGP, vExactGP, and DKL models

Usage examples:

Ensemble deep kernel learning for vector-valued targets and multi-modal inputs with a custom neural network (best run on a GPU):

import numpy as np
import gpax
import haiku as hk
import jax

# Define a custom feature extractor for DKL
class MLP2(hk.Module):
    """Simple custom MLP"""
    def __init__(self, embedim=2):
        super().__init__()
        self._embedim = embedim

    def __call__(self, x):
        x = hk.Linear(128)(x)
        x = jax.nn.tanh(x)
        x = hk.Linear(64)(x)
        x = jax.nn.tanh(x)
        x = hk.Linear(32)(x)
        x = jax.nn.tanh(x)
        x = hk.Linear(self._embedim)(x)
        return x

# Multi-modal high-dimensional inputs
X_train = np.random.randn(2, 32, 144)  # n_modes x n_samples x n_features
X_unmeasured = np.random.randn(2, 100, 144)
# Vector-valued targets
y_train = np.random.randn(2, 32)

# Initialize and run an ensemble of 10 DKL models
key, _ = gpax.utils.get_keys() 
dkl = gpax.viDKL(input_dim=X_train.shape[-1], z_dim=2, kernel='RBF', nn=MLP2)
y_means, y_vars = dkl.fit_predict(
    key, X_train, y_train, X_unmeasured,
    n_models=10, num_steps=1000, step_size=0.005)
# Average ensemble predictions
y_mean = y_means.mean(0)
y_var = y_vars.mean(0)

The next example illustrates the 'batch mode' of the UCB acquisition function with a fully Bayesian DKL. Here, y_train holds the values of a scalar physical property measured in image patches X_train, and X_unmeasured represents the image patches for which the physical property of interest has not been measured yet. The indices describe the locations of the image patches in the experimental field of view.

# Initialize DKL model
data_dim = X_train.shape[-1]
key1, key2 = gpax.utils.get_keys()
dkl = gpax.DKL(data_dim, z_dim=2, kernel='RBF')
# Obtain posterior samples for model parameters 
dkl.fit(key1, X_train, y_train, num_warmup=333, num_samples=333, num_chains=3, chain_method='vectorized')
# Batch mode of UCB = alpha*mu + sqrt(beta*var)
# Generate the next 5 points to probe that are maximally apart from each other
obj = gpax.acquisition.bUCB(
    key2, dkl, X_unmeasured, indices=indices,
    alpha=1, beta=0, n_restarts=10, batch_size=5)
next_points_idx = obj.argmax(-1)

0.0.2

2 years ago

Minor updates and bug fixes

0.0.1

2 years ago

Initial release (beta)