Quantized Variational Auto Encoders (QVAE) (ongoing)
Quantized Variational Auto Encoders (QVAE) are a type of generative model that combines the principles of variational autoencoders (VAEs) with quantization techniques.
VAE paper: https://arxiv.org/abs/1312.6114
β-VAE paper: https://openreview.net/pdf?id=Sy2fzU9gl
They are very useful for generation, and for combining with transformers and LLMs to generate other types of data.
We first need to understand auto-encoders and variational auto encoders.
An auto-encoder is composed of two networks, an encoder and a decoder, that are trained together. The encoder's goal is to take an input (for example an image) with a large dimension (lots of pixels) and compress it into a smaller representation that uses less data (a smaller vector). The decoder's goal is the inverse process: given a compressed representation, it is supposed to invert the operation and recover the original object (an image in our case).
The loss function for the auto-encoder network is just an MSE between the decoder's output and the original input. Auto-encoders can be trained in an unsupervised way without labelled data, and the compressed representations can later be used as inputs for other networks, since they are supposed to contain meaningful information about the original input in a more compact format.
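As a tiny sketch of this setup (a purely linear encoder/decoder on toy data, with hand-written gradients; all sizes and the learning rate here are arbitrary choices, not anything from a real model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 points in 10-D that actually lie on a 2-D subspace.
latent_true = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 10))
X = latent_true @ mixing

# Linear encoder (10 -> 2) and decoder (2 -> 10), trained jointly on MSE.
W_enc = rng.normal(scale=0.1, size=(10, 2))
W_dec = rng.normal(scale=0.1, size=(2, 10))

lr = 0.05
losses = []
for step in range(1000):
    Z = X @ W_enc                # encoder: compress 10-D -> 2-D
    X_hat = Z @ W_dec            # decoder: reconstruct 2-D -> 10-D
    err = X_hat - X
    losses.append(np.mean(err ** 2))
    # Manual gradients of the MSE w.r.t. both weight matrices.
    g = 2 * err / err.size
    grad_dec = Z.T @ g
    grad_enc = X.T @ (g @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Since the data is exactly rank 2, the 2-D bottleneck is enough and the reconstruction error drops toward zero; with real images the encoder/decoder would be deep networks, but the loss is the same MSE.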
Now, Variational Auto-Encoders (VAEs) are essentially the same thing; the only difference is that the encoder outputs the parameters of a distribution rather than the compressed version directly. To get the compressed version we take a sample from the distribution described by those parameters, and the distribution's parameters are different for each input. The decoder works the same way: it takes the compact version and reconstructs the original image. The loss is different from the standard auto-encoder's: it is the sum of the original MSE loss and a KL divergence term, which is responsible for making sure that the distribution produced by the encoder stays close to a standard normal distribution. The reason for this is that we want the latent space to be smooth and continuous, so that small changes in the input result in small changes in the output, and well structured so that it can easily be sampled from. So the main advantage of a VAE over a standard auto-encoder is that it allows us to generate new data by sampling from the latent space, which is not possible with standard auto-encoders; and the compact representation is supposedly more meaningful and structured than the one obtained from a standard auto-encoder.
Let $z$ be the latent (compressed) variable and $x$ the input data. We first choose a prior (what we think / believe before seeing the data) over the latent variable; we usually take a standard normal:

$$p(z) = \mathcal{N}(0, I)$$
We now define the likelihood model $p_\theta(x \mid z)$, which represents our decoder (parametrized by $\theta$).
Now if we want a generative model we need the marginal likelihood of a datapoint, meaning the probability $p_\theta(x)$ of a given data point; this way we can generate new data points from this distribution:

$$p_\theta(x) = \int p_\theta(x \mid z)\, p(z)\, dz$$
And now to train our generative model we need this to be accurate given our dataset, so we are going to maximize it (in log form) over our whole dataset of size $N$:

$$\max_\theta \sum_{i=1}^{N} \log p_\theta(x_i)$$
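To make the marginal-likelihood integral concrete, here is a toy 1-D sketch where the "decoder" is linear-Gaussian, so the integral actually has a closed form and we can check a naive Monte Carlo estimate against it (the values of `a`, `s2`, and `x` below are arbitrary; in a real VAE the decoder is a deep network and neither the exact integral nor naive sampling is usable):

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Toy decoder: p(x|z) = N(a*z, s2), prior p(z) = N(0, 1).
a, s2 = 2.0, 0.5
x = 1.3

# Monte Carlo estimate of p(x) = E_{z ~ p(z)}[ p(x|z) ].
z = rng.normal(size=200_000)
p_mc = gauss_pdf(x, a * z, s2).mean()

# For this linear-Gaussian model the integral is tractable: x ~ N(0, a^2 + s2).
p_exact = gauss_pdf(x, 0.0, a * a + s2)
print(p_mc, p_exact)
```

In high dimensions, sampling $z$ from the prior almost never lands on latents that explain a given $x$, which is exactly why the posterior (and variational inference) enters next.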
But the issue is that the integral is intractable, and the same goes for the posterior (which we need, as it tells us which latent variables are plausible explanations for the observed data under our current model $\theta$):

$$p_\theta(z \mid x) = \frac{p_\theta(x \mid z)\, p(z)}{p_\theta(x)}$$

The posterior represents our encoder.
To get around this issue, we'll use variational inference: we introduce an approximate posterior (encoder) $q_\phi(z \mid x)$, which is typically a Gaussian whose parameters come from some neural network or anything else.
So what we need to do now is optimize the parameters $\phi$ of our approximate posterior (which can be a neural network or anything else) so that the output distribution is as close as possible to the ground-truth posterior $p_\theta(z \mid x)$.
So we are going to get back to the quantity we are interested in for our objective (generating new samples): the log marginal likelihood $\log p_\theta(x)$.
Now we want to optimize this quantity over all our datapoints $x_i$. It is hard to do directly, so instead we are going to optimize another quantity that is a lower bound on this one, meaning that if we push the bound up we are also pushing the original quantity up; this is called the evidence lower bound (ELBO). We get this quantity by applying Jensen's inequality, which states that for a convex function $f$ and a random variable $X$ we have:

$$f(\mathbb{E}[X]) \le \mathbb{E}[f(X)]$$
And in our case we are going to take $f$ as the logarithm, which is a concave function, so the inequality flips:

$$\log \mathbb{E}[X] \ge \mathbb{E}[\log X]$$

Applying it to the marginal likelihood (after multiplying and dividing by $q_\phi(z \mid x)$ inside the integral), we get:

$$\log p_\theta(x) = \log \mathbb{E}_{q_\phi(z \mid x)}\!\left[\frac{p_\theta(x \mid z)\, p(z)}{q_\phi(z \mid x)}\right] \ge \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log \frac{p_\theta(x \mid z)\, p(z)}{q_\phi(z \mid x)}\right]$$
The right-hand side of this bound is the ELBO, and it splits into a reconstruction term and a KL term:

$$\mathrm{ELBO} = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big)$$

because

$$\log \frac{p_\theta(x \mid z)\, p(z)}{q_\phi(z \mid x)} = \log p_\theta(x \mid z) - \log \frac{q_\phi(z \mid x)}{p(z)}$$

We add a negative sign to get a quantity that we want to minimize, and this is the loss function for our VAE: we minimize the negative ELBO over all our datapoints $x_i$. It is better than the original quantity because it is easier to compute and optimize, and it also has a nice interpretation as a trade-off between the reconstruction error (the first term) and the KL divergence (the second term), which encourages the approximate posterior to be close to the prior.
A variant of the VAE is the $\beta$-VAE, where we add a hyperparameter $\beta$ that scales the KL divergence term, to control the trade-off between the reconstruction error and the KL divergence; this allows us to learn more disentangled representations in the latent space.
Also, fortunately, for the case of Gaussian distributions we can compute the KL divergence in closed form, so we don't need to do any approximation for this term and can just compute it directly. With

$$q_\phi(z \mid x) = \mathcal{N}\big(\mu, \operatorname{diag}(\sigma^2)\big), \qquad p(z) = \mathcal{N}(0, I)$$

the KL has a closed form:

$$\mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big) = \frac{1}{2} \sum_{j=1}^{d} \left( \mu_j^2 + \sigma_j^2 - \log \sigma_j^2 - 1 \right)$$
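A quick numerical sanity check of this closed form (the example `mu` and `log_var` values below are arbitrary stand-ins for an encoder's output), comparing it to a Monte Carlo estimate of the same KL:

```python
import numpy as np

rng = np.random.default_rng(2)

def kl_closed_form(mu, log_var):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dimensions.
    return 0.5 * np.sum(mu ** 2 + np.exp(log_var) - log_var - 1.0)

# Example encoder output for one datapoint (arbitrary values).
mu = np.array([0.5, -1.0, 0.1])
log_var = np.array([0.2, -0.3, 0.0])

kl = kl_closed_form(mu, log_var)

# Monte Carlo check: KL = E_{z ~ q}[ log q(z) - log p(z) ].
sigma = np.exp(0.5 * log_var)
z = mu + sigma * rng.normal(size=(500_000, 3))
log_q = -0.5 * np.sum((z - mu) ** 2 / sigma ** 2 + log_var + np.log(2 * np.pi), axis=1)
log_p = -0.5 * np.sum(z ** 2 + np.log(2 * np.pi), axis=1)
kl_mc = np.mean(log_q - log_p)
print(kl, kl_mc)
```

Note the convention: encoders usually output $\log \sigma^2$ rather than $\sigma$ directly, so that the variance is positive by construction.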
And then there is a final crucial part, which is the sampling. A rule for training neural networks is that we need to be able to backpropagate through all the operations, but sampling is a non-differentiable operation, so we need to use a trick called the reparameterization trick, which allows us to backpropagate through the sampling operation by expressing the sampled variable as a deterministic function of the parameters and some noise. So instead of sampling $z$ directly from $q_\phi(z \mid x)$, we sample $\epsilon$ from a standard normal distribution and then compute $z$ as:

$$z = \mu + \sigma \odot \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I)$$
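A small numpy sketch of the trick (`mu` and `sigma` are arbitrary values): $z$ is now a deterministic function of $(\mu, \sigma)$, so derivatives pass through it; here we check the pathwise gradient of $\mathbb{E}[z^2]$ with respect to $\mu$ against its known value $2\mu$.

```python
import numpy as np

rng = np.random.default_rng(3)

mu, sigma = 1.5, 0.8

# Reparameterization: z = mu + sigma * eps with eps ~ N(0, 1).
eps = rng.normal(size=1_000_000)
z = mu + sigma * eps

# Because z is differentiable in mu, we can estimate d/dmu E[z^2] by
# averaging the per-sample derivative d(z^2)/dmu = 2*z (pathwise estimator).
grad_pathwise = np.mean(2 * z)
print(grad_pathwise)  # close to 2 * mu = 3.0
```

In a deep-learning framework this is exactly what lets autograd carry the reconstruction loss's gradient back through the sampled $z$ into the encoder's $\mu$ and $\sigma$ outputs.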
On the ELBO: the reconstruction term pushes $z$ to retain information about $x$ so the decoder can explain it, while the KL term pushes the approximate posterior toward $\mathcal{N}(0, I)$, indirectly limiting the amount of information $z$ can retain about $x$ and thus encouraging the model to learn a more compact representation of the data.
- Do a toy example where we generate images of clocks with a given time.
Original Article: https://arxiv.org/pdf/1711.00937
Quantization is the process of mapping values from a continuous space onto a finite, discrete set of values (here, a learned codebook of embedding vectors).
Now on to VQ-VAEs, Vector Quantized Variational Auto-Encoders. The idea is the same, a generative model, but the latent space is discrete and represented by indices into a learned codebook: a sort of learned set of tokens. In a VQ-VAE we won't have a Gaussian posterior anymore, and we will have a differentiation issue again due to the quantization part.
The encoder will produce a continuous latent vector $z_e(x)$. We'll have a codebook (a learned set of embeddings) $\{e_k\}_{k=1}^{K}$ with $e_k \in \mathbb{R}^D$. So we'll quantize by choosing the codebook vector nearest to $z_e(x)$:

$$z_q(x) = e_k, \qquad k = \arg\min_j \big\| z_e(x) - e_j \big\|_2$$
So the latent is the discrete index $k$ (or a grid of indices for images, if we process the image patch by patch).
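A minimal sketch of this quantization step (the codebook size, dimension, and random values are arbitrary; this is only the nearest-neighbour lookup, not a full VQ-VAE):

```python
import numpy as np

rng = np.random.default_rng(4)

K, D = 8, 4                          # codebook size and embedding dimension
codebook = rng.normal(size=(K, D))   # learned embeddings e_1 .. e_K

def quantize(z_e, codebook):
    """Map each continuous encoder output to its nearest codebook vector."""
    # Pairwise squared L2 distances: (batch, K).
    dists = np.sum((z_e[:, None, :] - codebook[None, :, :]) ** 2, axis=-1)
    idx = np.argmin(dists, axis=-1)  # the discrete latents (indices)
    z_q = codebook[idx]              # the quantized continuous vectors
    return idx, z_q

# A batch of continuous encoder outputs (stand-ins for z_e(x)).
z_e = rng.normal(size=(5, D))
idx, z_q = quantize(z_e, codebook)
print(idx)
```

The argmin itself has no gradient, which is the differentiation issue mentioned above; the original VQ-VAE paper handles it by copying the decoder's gradient straight through the quantization to the encoder (a straight-through estimator) and training the codebook with separate codebook and commitment loss terms.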
- TODO: organize notes.