Variational Inference

Variational Bayesian methods

Variational Bayesian methods are a family of techniques for approximating the intractable integrals that arise in Bayesian inference and machine learning. They are used to approximate posterior distributions over parameters and latent variables, and can be seen as an extension of the expectation-maximization algorithm. The core idea is to posit a tractable family of distributions and find the member closest to the true posterior, typically by maximizing a lower bound on the marginal likelihood (the evidence lower bound, or ELBO). Compared to Gibbs sampling, variational Bayes is often faster but requires more work to derive the equations used to update the parameters.
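
As a concrete illustration of those update equations, below is a minimal Python sketch of coordinate-ascent variational inference for the classic conjugate model of a Gaussian with unknown mean and precision (the textbook example in Bishop's PRML, section 10.1.3). The synthetic data, prior hyperparameters, and variable names are illustrative assumptions, not taken from either course.

```python
import numpy as np

# Mean-field VB for a Gaussian with unknown mean mu and precision tau.
# Model (conjugate):
#   x_i ~ N(mu, 1/tau),  mu | tau ~ N(mu0, 1/(lambda0*tau)),  tau ~ Gamma(a0, b0)
# Factorized approximation: q(mu, tau) = q(mu) q(tau), where the optimal
# factors turn out to be q(mu) = N(mu_n, 1/lambda_n), q(tau) = Gamma(a_n, b_n).

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=0.5, size=200)   # synthetic data (assumed)
n, xbar = len(x), x.mean()

# Weakly informative prior hyperparameters (assumed)
mu0, lambda0, a0, b0 = 0.0, 1.0, 1e-3, 1e-3

e_tau = 1.0  # initial guess for E[tau]
for _ in range(50):
    # Update q(mu): Gaussian, given the current E[tau].
    mu_n = (lambda0 * mu0 + n * xbar) / (lambda0 + n)
    lambda_n = (lambda0 + n) * e_tau
    # Update q(tau): Gamma, given the current moments of q(mu).
    a_n = a0 + (n + 1) / 2
    e_sq = np.sum((x - mu_n) ** 2) + n / lambda_n   # E[sum_i (x_i - mu)^2]
    e_prior = (mu_n - mu0) ** 2 + 1 / lambda_n      # E[(mu - mu0)^2]
    b_n = b0 + 0.5 * (e_sq + lambda0 * e_prior)
    e_tau = a_n / b_n

print(f"q(mu):  mean={mu_n:.3f}, var={1/lambda_n:.5f}")
print(f"q(tau): E[tau]={e_tau:.3f}  (true precision = {1/0.5**2:.1f})")
```

Each step is the closed-form optimum of one factor given the other, so every sweep is guaranteed not to decrease the ELBO; deriving these closed-form updates is the extra work that the summary above contrasts with Gibbs sampling.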

2 courses cover this concept

CS 330 Deep Multi-Task and Meta Learning

Stanford University

Fall 2022

This course emphasizes leveraging structure shared across multiple tasks to improve learning efficiency in deep learning. It provides a thorough treatment of multi-task and meta-learning algorithms, with a focus on topics such as self-supervised pre-training, few-shot learning, and lifelong learning. Prerequisites include an introductory machine learning course; the course is designed for graduate students.

CS 228 - Probabilistic Graphical Models

Stanford University

Winter 2023

An in-depth study of probabilistic graphical models, which combine graph theory and probability theory. The course equips students with the skills to design, implement, and apply these models to real-world problems, covering Bayesian networks as well as exact and approximate inference methods.