Comparison of Variational Autoencoders with Bayesian Neural Networks: accuracy, latent space, reconstruction, and white-noise filtering.
In this project we compare several autoencoder architectures. We measure each model's reconstruction error, with and without white noise added to the test inputs, using the test log-likelihood.
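A minimal sketch of how such an evaluation could be set up, assuming Bernoulli pixel likelihoods and Gaussian white noise (the function names and noise level here are illustrative, not taken from this repository):

```python
import numpy as np

def bernoulli_log_likelihood(x, x_hat, eps=1e-7):
    """Per-image Bernoulli log-likelihood of inputs x under the
    pixel-wise reconstruction probabilities x_hat (both in [0, 1])."""
    x_hat = np.clip(x_hat, eps, 1 - eps)  # avoid log(0)
    ll = x * np.log(x_hat) + (1 - x) * np.log(1 - x_hat)
    return ll.reshape(len(x), -1).sum(axis=1)  # sum over pixels

def add_white_noise(x, std=0.1, seed=0):
    """Corrupt inputs with Gaussian white noise, clipped back to [0, 1]."""
    rng = np.random.default_rng(seed)
    return np.clip(x + rng.normal(0.0, std, size=x.shape), 0.0, 1.0)
```

The test log-likelihood is then the mean of `bernoulli_log_likelihood` over the (clean or noise-corrupted) test set and the model's reconstructions.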
Furthermore, we look at each model's latent space in the case of 2-dimensional encodings, which gives us a scatter plot of the latent representations. We also decode uniform samples drawn from the latent space, which helps us spot encodings that the decoder struggles to decode.
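For the uniform-sampling visualisation, one way to cover a 2-dimensional latent space is a regular grid of codes that is then fed to the decoder. A small sketch, assuming a latent range of roughly [-3, 3] (the range and function name are assumptions, not fixed by this project):

```python
import numpy as np

def latent_grid(n=15, low=-3.0, high=3.0):
    """Return an (n*n, 2) array of uniformly spaced 2-D latent codes,
    ready to be passed to a decoder in one batch."""
    zs = np.linspace(low, high, n)
    zx, zy = np.meshgrid(zs, zs)          # n x n grid over both latent axes
    return np.stack([zx.ravel(), zy.ravel()], axis=1)
```

Decoding every row of this grid and tiling the resulting images side by side makes it easy to see which regions of the latent space produce blurry or implausible digits.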
All our experiments are run on the MNIST dataset of handwritten digits, so we encode 2-dimensional grayscale images: each image consists of 28x28 real-valued inputs between 0 and 1. The dataset can be downloaded directly via the tensorflow Python library.
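In TensorFlow the raw data comes back as uint8 arrays (for example from `tf.keras.datasets.mnist.load_data()`), so a small preprocessing step produces the [0, 1] inputs described above. A sketch, assuming a dense encoder that takes flattened 784-dimensional vectors (the flattening is an assumption; convolutional models would keep the 28x28 shape):

```python
import numpy as np

def preprocess(images):
    """Scale uint8 MNIST images of shape (N, 28, 28) to float32 values
    in [0, 1] and flatten them to (N, 784) vectors."""
    x = images.astype(np.float32) / 255.0  # map pixel range 0..255 to [0, 1]
    return x.reshape(len(x), 28 * 28)
```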
Each notebook contains the runs for one specific model from the models folder. The runs use aligned architectures and include plots of the latent space.