Fall 2022
Stanford University
This course emphasizes leveraging shared structure across multiple tasks to enhance learning efficiency in deep learning. It provides a thorough understanding of multi-task and meta-learning algorithms, with a focus on topics such as self-supervised pre-training, few-shot learning, and lifelong learning. An introductory machine learning course is a prerequisite. The course is designed for graduate-level students.
While deep learning has achieved remarkable success in many problems such as image classification, natural language processing, and speech recognition, these models are, to a large degree, specialized for the single task they are trained for. This course covers the setting where there are multiple tasks to be solved, and studies how the structure arising from multiple tasks can be leveraged to learn more efficiently and effectively.
This is a graduate-level course. By the end of the course, students will be able to understand and implement state-of-the-art multi-task learning and meta-learning algorithms, and will be prepared to conduct research on these topics.
CS 229 or an equivalent introductory machine learning course is required.
Lecture slides, homework, notes, and optional readings are available at Course Schedule and Materials.
Videos of previous offerings, including the 2019 offering, are available on YouTube at Previous Offerings.