EEE: Generative Models for Signal Processing

Bahman Moraffah, Spring 2024

General Information

Instructor: Professor Bahman Moraffah
Office: GWC 333
Office Hours: TTh 10:30-11:30 am or by appointment
Class Meet: TTh 12:00-1:15 pm in SS105
Email: bahman.moraffah@asu.edu

Course Description

This graduate-level course explores deep generative models with an emphasis on diffusion models. It covers the theoretical foundations of these models, the mathematics underlying them, practical implementations, and their applications in signal processing.

Course Policies

  1. Prerequisites:

    1. Basic knowledge of linear algebra, probability, and programming.

    2. Prior coursework in signal processing or data analysis is beneficial but not required.

  2. Collaboration: You are encouraged to work on homework problems in study groups of no more than 3 people; however, you must always write up the solutions on your own, and you must never read or copy the solutions of other students. Similarly, you may use books or online resources to help solve homework problems, but you must always credit all such sources in your writeup and you must never copy material verbatim. Offering and accepting solutions from others is an act of plagiarism, which is a serious offense and all involved parties will be penalized according to the Academic Honesty Policy.

  3. Quizzes and Exams: All quizzes and exams are closed-book and closed-notes; however, a single letter-sized note sheet (front and back) is allowed.

  4. Scribe: We need volunteers to take notes each class, type them up, and send them to me so they can be uploaded for the entire class. Each student may scribe at most 2 lectures. Scribing is NOT mandatory, but it is highly encouraged. To earn extra credit, you must take notes and type them up using the provided template. Extra points are at the instructor's discretion and depend on the student's effort. You can download the template here.

Syllabus

  • Week 1-2: Introduction to Generative Models

    • Lecture 1: Overview of Generative Models

      • Definition and types (VAEs, GANs, Flow-based models, Diffusion Models)

      • Applications in various fields

    • Lecture 2: Mathematical Foundations

      • Probability theory refresher

      • Information theory basics (entropy, KL divergence, mutual information)

      • Linear algebra and matrix calculus essentials

  • Week 3-4: Fundamental Generative Models

    • Lecture 3: Variational Autoencoders (VAEs)

      • Encoder-decoder architecture

      • ELBO and reparameterization trick

    • Lecture 4: Generative Adversarial Networks (GANs)

      • Discriminator-generator framework

      • Loss functions and training stability issues

    • Lecture 5: Flow-based Models

      • Normalizing flows

      • Invertibility and Jacobian determinants

  • Week 5-7: Diffusion Models

    • Lecture 6: Introduction to Diffusion Models

      • Historical context and basic principles

      • Diffusion and reverse processes

    • Lecture 7: Mathematical Foundations of Diffusion Models

      • Stochastic differential equations (SDEs)

      • Score matching and score-based generative modeling

    • Lecture 8: Training Diffusion Models

      • Noise schedules

      • Parameterization techniques

    • Lecture 9: Sampling Techniques

      • Ancestral sampling

      • Accelerated sampling methods

  • Week 8-9: Advanced Topics in Diffusion Models

    • Lecture 10: Accelerating Diffusion Models

      • Techniques for reducing computational cost

      • Approximations and optimizations

    • Lecture 11: Improving Image Quality in Diffusion Models

      • Enhancements and refinements in model architecture

      • Loss functions and perceptual metrics

  • Week 10: Mathematics for Generative Models

    • Lecture 12: Advanced Probability and Statistics

      • Bayesian inference basics

      • Markov Chain Monte Carlo (MCMC)

    • Lecture 13: Optimization Techniques

      • Gradient-based optimization

      • Variational inference methods

  • Week 11-12: Other Generative Models

    • Lecture 14: Energy-based Models (EBMs)

      • Concept and training methods

      • Applications and challenges

    • Lecture 15: Autoregressive Models

      • PixelCNN, PixelRNN

      • Sequential data modeling

  • Week 13-14: Applications in Signal Processing

    • Lecture 16: Generative Models for Signal Processing

      • Denoising, inpainting, and super-resolution

      • Time-series generation and forecasting

    • Lecture 17: Case Studies and Applications

      • Specific applications in biomedical signal processing

      • Case studies on speech and audio processing

  • Week 15: Special Topics and Current Research

    • Lecture 18: Recent Advances and Research Directions

      • Latest papers and breakthroughs

      • Open problems and future directions

    • Lecture 19: Student Presentations and Discussions

      • Research projects and presentations by students

      • Peer feedback and discussions

  • Week 16: Review and Exam Preparation

    • Lecture 20: Course Review and Q&A

      • Summary of key concepts

      • Exam preparation tips and practice questions
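Since diffusion models are the course's emphasis (Lectures 6-9), here is a minimal sketch of the forward (noising) process that Lecture 6 introduces: the closed-form marginal q(x_t | x_0) = N(sqrt(ᾱ_t) x_0, (1 − ᾱ_t) I), sampled via the reparameterization trick from Lecture 3. The linear beta schedule and step count below are illustrative assumptions, not values specified by the course.

```python
import numpy as np

# Illustrative linear beta (noise) schedule -- values are assumptions.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # cumulative product: alpha_bar_t


def q_sample(x0, t, rng):
    """Draw x_t ~ q(x_t | x_0) in closed form using the reparameterization trick:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps


rng = np.random.default_rng(0)
x0 = rng.standard_normal(8)        # a toy "clean signal"
x_early = q_sample(x0, 10, rng)    # small t: mostly signal
x_late = q_sample(x0, T - 1, rng)  # large t: nearly pure Gaussian noise
```

A reverse-process model (Lectures 7-8) would then be trained to predict the injected noise eps from x_t and t; sampling (Lecture 9) runs that predictor backward from x_T ~ N(0, I).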

Textbooks

  • "Diffusion Generative Models: From Theory to Practice" by Bahman Moraffah

  • "Deep Learning" by Ian Goodfellow, Yoshua Bengio, and Aaron Courville

  • "Probabilistic Machine Learning: Advanced Topics" by Kevin P. Murphy

  • "Denoising Diffusion Probabilistic Models" by Jonathan Ho, Ajay Jain, and Pieter Abbeel (paper)

  • "Score-Based Generative Modeling through Stochastic Differential Equations" by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole (paper)

Assessment

  • Quizzes and Class Participation: 10%

  • Homework: 20%

  • Midterm: 20%

  • Project: 30%

  • Final Exam: 20%