Spring 2019

Princeton University

This introductory course focuses on machine learning, probabilistic reasoning, and decision-making in uncertain environments. Blending theory and practice, it asks how systems can learn from experience and manage real-world uncertainty.

This course provides a broad introduction to machine learning, probabilistic reasoning and decision making in uncertain environments. The course should be of interest to undergraduate students in computer science, applied mathematics, sciences and engineering, and lower-level graduate students looking to gain an introduction to the tools of machine learning and probabilistic reasoning with applications to data-intensive problems in the applied sciences, natural sciences and social sciences.

For students with interests in the fundamentals of machine learning and probabilistic artificial intelligence, this course will address three central, related questions in the design and engineering of intelligent systems. How can a system process its perceptual inputs in order to obtain a reasonable picture of the world? How can we build programs that learn from experience? How can we design systems to deal with the inherent uncertainty in the real world?

Our approach to these questions will be both theoretical and practical. We will develop a mathematical underpinning for the methods of machine learning and probabilistic reasoning. We will look at a variety of successful algorithms and applications. We will also discuss the motivations behind the algorithms, and the properties that determine whether or not they will work well for a particular task.

Students should be comfortable writing non-trivial programs in Python, and should have a background in basic probability theory along with some mathematical sophistication, including calculus and linear algebra.


There is no required textbook for the course. This course has its own notes that are considered the required reading. Nevertheless, people learn in different ways and seeing the material presented in different formats can be valuable. To that end, additional optional material is linked on the course website and several books provide useful additional reading:

- Kevin Murphy. *Machine Learning: A Probabilistic Perspective*. MIT Press. 2012.
- Christopher M. Bishop. *Pattern Recognition and Machine Learning*. Springer. 2006.
- David J.C. MacKay. *Information Theory, Inference, and Learning Algorithms*. Cambridge University Press. 2003. Freely available online at http://www.inference.org.uk/itila/book.html
- Trevor Hastie, Robert Tibshirani, and Jerome Friedman. *The Elements of Statistical Learning*. Springer. 2001. Freely available online at http://www-stat.stanford.edu/~tibs/ElemStatLearn/
- Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. *An Introduction to Statistical Learning*. Springer. 2013. Freely available online at http://www-bcf.usc.edu/~gareth/ISL/
- Richard S. Sutton and Andrew G. Barto. *Reinforcement Learning: An Introduction*. MIT Press. 1998. The second edition is freely available online at http://incompleteideas.net/book/the-book-2nd.html

Course notes are available on the Schedule page.


Assignments are available on the Assignments page.


Topics covered: Basis Functions, Cross-validation (statistics), Feature (machine learning), Hierarchical Clustering, K-means clustering, Kernel-based Classification, Latent Factor Models, Linear classification, Linear regression, Markov Decision Process (MDP), Neural network, Overfitting, Planning, Policy Iteration, Principal Component Analysis, Regularization (mathematics), Reinforcement learning (RL), Singular Value Decomposition (SVD), Supervised learning, Unsupervised learning, Value iteration
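As a small taste of the level of the topics above, here is a minimal sketch of supervised learning via linear regression, using only NumPy. This example is illustrative and not taken from the course materials; the data and coefficients are made up.

```python
import numpy as np

# Toy supervised-learning problem: recover y = 2 + 3x from noisy samples.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * x + 0.01 * rng.standard_normal(50)

# Design matrix with a bias column of ones (the simplest basis expansion).
X = np.column_stack([np.ones_like(x), x])

# Least-squares fit: w minimizes ||Xw - y||^2.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)  # approximately [2.0, 3.0]
```

With a richer design matrix (polynomials, kernels) the same least-squares machinery gives the basis-function and regularized models listed above.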