
Deep Generative Models

2022/2023 academic year
ENG (instruction is in English)
6 credits

Instructor

Course Syllabus

Abstract

Generative models in machine learning try to learn the entire distribution of the inputs and to generate new instances from that distribution. Modern deep generative models draw pictures, write text, compose music, and much more, and that is exactly what we will see in this course. We will begin with basic definitions and proceed through GANs, VAEs, and Transformers up to the latest state-of-the-art research results. The course requires an understanding of basic machine learning and deep learning.
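As a concrete illustration of that premise, the sketch below fits a simple density model to toy data and then samples new instances from it. This is an aside of ours, not course material: the Gaussian mixture and the invented two-cluster dataset stand in for the deep models covered later.

    # Fit a density model to toy data, then sample new instances from it.
    # (Illustrative only: a Gaussian mixture stands in for deep generative models.)
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Toy "dataset": two clusters in 2D.
    data = np.vstack([
        rng.normal(loc=(-2.0, 0.0), scale=0.5, size=(500, 2)),
        rng.normal(loc=(2.0, 1.0), scale=0.5, size=(500, 2)),
    ])

    model = GaussianMixture(n_components=2, random_state=0).fit(data)
    new_samples, _ = model.sample(100)  # generate 100 new instances
    print(new_samples[:3])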
Learning Objectives

  • The objective of this course is to learn generative models based on deep neural networks, starting from basic definitions and reaching the current state of the art in several different directions.
Expected Learning Outcomes

  • understand the difference between discriminative and generative models
  • understand the relation between naive Bayes and logistic regression
  • understand the concept of generative-discriminative pairs
  • understand the differences between various deep generative models
  • understand the structure of explicit density models, from PixelCNN to WaveNet
  • understand the basic structure of GANs and the idea of adversarial training (a minimal training-step sketch follows this list)
  • understand the loss functions used in modern GANs, including LSGAN and WGAN
  • understand modern GAN-based architectures for high-resolution generation
  • understand the paired style transfer problem setting and its solutions (Gatys et al., pix2pix)
  • understand the unpaired style transfer problem setting and its solutions (CycleGAN, AdaIN, StyleGAN)
  • understand the idea of the latent space of a deep autoencoder-based model and of sampling from it
  • understand the structure and training of variational autoencoders
  • understand quantized versions of variational autoencoders
  • have a basic understanding of attention mechanisms in deep learning
  • understand the operation of a self-attention layer in Transformers
  • understand modern Transformers, including the BERT and GPT families
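The adversarial-training outcomes above are easier to grasp with code in hand, so here is a minimal sketch of a single GAN training step with the original non-saturating BCE losses. The toy 1-D data, the tiny networks G and D, and the hyperparameters are illustrative assumptions of ours, far smaller than anything used in practice.

    # One adversarial training step (non-saturating BCE losses).
    # Toy setup: 1-D "real" data, tiny generator G and discriminator D.
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> sample
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> logit
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    real = torch.randn(64, 1) * 0.5 + 3.0  # stand-in for a batch of real data
    z = torch.randn(64, 8)                 # latent noise

    # Discriminator step: push real towards label 1, generated towards 0.
    fake = G(z).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make D label generated samples as real.
    loss_g = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()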
Course Contents

  • Introduction to generative models: motivation and a naive example
  • Deep generative models: general taxonomy and autoregressive models
  • Generative adversarial networks I: introduction, basic ideas, loss functions in GANs
  • GANs II: modern examples of GANs; case study: GANs for style transfer
  • Variational autoencoders: from the basics to VQ-VAE
  • Transformers: basic idea, BERT and GPT; Transformer + VQ-VAE = DALL-E (a self-attention sketch follows this list)
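Because the final topic hinges on self-attention, here is a minimal single-head sketch of scaled dot-product attention. The function name, the dimensions, and the random (rather than learned) projection matrices are our own simplifications.

    # Scaled dot-product self-attention for a single head (no masking).
    import math
    import torch

    def self_attention(x, w_q, w_k, w_v):
        # x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projections.
        q, k, v = x @ w_q, x @ w_k, x @ w_v
        scores = q @ k.T / math.sqrt(k.shape[-1])  # scaled dot products
        weights = torch.softmax(scores, dim=-1)    # per-token attention distribution
        return weights @ v                         # weighted mix of value vectors

    d_model, d_k, seq_len = 16, 8, 5
    x = torch.randn(seq_len, d_model)
    w_q, w_k, w_v = (torch.randn(d_model, d_k) for _ in range(3))
    print(self_attention(x, w_q, w_k, w_v).shape)  # torch.Size([5, 8])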
Assessment Elements

  • Test, weeks 1-2 (non-blocking)
  • Test, weeks 2-3 (non-blocking)
  • Test, weeks 4-5 (non-blocking)
  • Test, week 6 (non-blocking)
  • Final programming assignment (non-blocking)
Interim Assessment

  • 2022/2023 3rd module
    0.1 * test weeks 1-2 + 0.1 * test weeks 2-3 + 0.1 * test weeks 4-5 + 0.1 * test week 6 + 0.6 * final programming assignment
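As a worked example (the scores below are hypothetical, and the usual 10-point scale is our assumption):

    # Hypothetical scores: four tests 7, 8, 6, 9; final programming assignment 8.
    tests = [7, 8, 6, 9]
    grade = 0.1 * sum(tests) + 0.6 * 8
    print(grade)  # 0.1 * 30 + 0.6 * 8 = 3.0 + 4.8 = 7.8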
Bibliography

Recommended Core Bibliography

  • Goodfellow, I. (2016). NIPS 2016 Tutorial: Generative Adversarial Networks. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&site=eds-live&db=edsarx&AN=edsarx.1701.00160
  • Integrating Deep Learning Algorithms to Overcome Challenges in Big Data Analytics. 2022.

Recommended Additional Bibliography

  • Mescheder, L., Nowozin, S., & Geiger, A. (2017). Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&site=eds-live&db=edsarx&AN=edsarx.1701.04722