
Deep Generative Models

Academic year:
2024/2025
Language of instruction:
English
ECTS credits:
6
Delivered at:
Department of Applied Mathematics and Informatics (Faculty of Informatics, Mathematics, and Computer Science (HSE Nizhny Novgorod))
Course type:
Compulsory course
When:
2nd year, 3rd module

Course Syllabus

Abstract

Generative models in machine learning aim to learn the entire distribution of the inputs and to generate new instances from that distribution. Modern deep generative models draw pictures, write text, compose music, and much more, and that is exactly what we will see in this course. We begin with basic definitions and proceed through GANs, VAEs, and Transformers, up to the latest state-of-the-art research results. The course requires an understanding of basic machine learning and deep learning.
Learning Objectives

  • The objective of this course is to study generative models based on deep neural networks, starting from basic definitions and reaching the current state of the art in several different directions.
Expected Learning Outcomes

  • to understand the difference between discriminative and generative models
  • to understand the relation between naive Bayes and logistic regression
  • to understand the concept of generative-discriminative pairs
  • to understand the difference between various deep generative models
  • to understand the structure of explicit density models from PixelCNN to WaveNet
  • to understand the basic structure of GANs and the idea of adversarial training
  • to understand various loss functions used in modern GANs, including LSGAN and WGAN
  • to understand modern GAN-based architectures for high-resolution generation
  • to understand the paired style transfer problem setting and its solutions (Gatys et al., pix2pix)
  • to understand the unpaired style transfer problem setting and its solutions (CycleGAN, AdaIN, StyleGAN)
  • to understand the idea of the latent space of a deep autoencoder-based model and sampling from it
  • to understand the structure and training of variational autoencoders
  • to understand quantized versions of variational autoencoders
  • to have a basic understanding of attention mechanisms in deep learning
  • to understand the operations of a self-attention layer in Transformers
  • to understand modern Transformers, including the BERT and GPT families
Course Contents

  • Introduction to generative models: motivation and a naive example
  • Deep generative models: general taxonomy and autoregressive models
  • Generative adversarial networks I: introduction, basic ideas, and loss functions in GANs
  • GANs II: modern examples of GANs; case study: GANs for style transfer
  • Variational autoencoders: from the basics to VQ-VAE
  • Transformers: basic idea, BERT and GPT; Transformer + VQ-VAE = DALL-E
Assessment Elements

  • Weekly tests (non-blocking)
  • Final test (non-blocking)
Interim Assessment

  • 2024/2025 3rd module
    0.6 * Final test + 0.4 * Weekly tests
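    For illustration, with hypothetical scores not taken from this syllabus: a student who averages 7 out of 10 on the weekly tests and scores 8 out of 10 on the final test would receive 0.6 * 8 + 0.4 * 7 = 7.6 as the interim assessment grade.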
Bibliography

Recommended Core Bibliography

  • Goodfellow, I. (2016). NIPS 2016 Tutorial: Generative Adversarial Networks. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&site=eds-live&db=edsarx&AN=edsarx.1701.00160
  • Integrating deep learning algorithms to overcome challenges in big data analytics (2022).

Recommended Additional Bibliography

  • Mescheder, L., Nowozin, S., & Geiger, A. (2017). Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&site=eds-live&db=edsarx&AN=edsarx.1701.04722

Authors

  • Trekhleb Olga Yurievna
  • Savchenko Andrei Vladimirovich