Machine Learning — Hard
Key points
- The KL divergence term pushes the encoder's approximate posterior q(z|x) toward a standard Gaussian prior N(0, I) on the latent space
- It prevents the encoder from learning arbitrary, unstructured latent representations that place each input in an isolated region
- This regularization keeps the latent space smooth and densely packed, which makes interpolation between encodings meaningful
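The points above can be made concrete with the closed-form KL term used when the encoder outputs a diagonal Gaussian. This is a minimal NumPy sketch (the function name and the assumption that the encoder returns a mean and a log-variance per latent dimension are illustrative, not from the source):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ),
    summed over latent dimensions, then averaged over the batch."""
    kl_per_sample = -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=-1)
    return kl_per_sample.mean()

# When the encoder already matches the prior (mu = 0, log_var = 0), the KL is 0;
# any deviation of the posterior from N(0, I) adds a positive penalty.
mu = np.zeros((4, 8))       # batch of 4, latent dimension 8
log_var = np.zeros((4, 8))
print(kl_to_standard_normal(mu, log_var))              # → 0.0
print(kl_to_standard_normal(mu + 2.0, log_var) > 0.0)  # → True
```

The penalty grows as the encoder's means drift from zero or its variances drift from one, which is exactly the pressure that keeps encodings clustered near the origin rather than scattered.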
