Regularization (mathematics)

Regularization is a process used in mathematics, statistics, finance, computer science, and machine learning to solve ill-posed problems or to prevent overfitting. A regularized objective combines a data-fitting term with a penalty (regularization) term that discourages overly complex solutions and stabilizes the estimate; in the Bayesian interpretation, the penalty corresponds to a prior and the regularized solution to a posterior mode. Regularization is also used in machine learning to reduce generalization error, and one of its earliest forms is Tikhonov regularization.
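
The standard concrete example is Tikhonov (ridge) regularization for least squares, which adds a squared-norm penalty to the data-fitting term. The short NumPy sketch below is illustrative only: the function name ridge_fit, the toy data, and the penalty weight lam are assumptions, not taken from any of the courses listed here. It solves min_w ||Xw - y||^2 + lam * ||w||^2 in closed form and shows how the penalty stabilizes an otherwise ill-conditioned fit.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Tikhonov / ridge regression: minimize ||Xw - y||^2 + lam * ||w||^2.

    Closed-form solution: w = (X^T X + lam * I)^{-1} X^T y.
    lam = 0 recovers ordinary least squares.
    """
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Toy ill-conditioned problem: two nearly collinear columns make the
# unregularized normal equations unstable; the penalty damps the blow-up.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
X[:, 4] = X[:, 3] + 1e-6 * rng.normal(size=20)
true_w = np.array([1.0, 0.0, -2.0, 0.5, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=20)

print(ridge_fit(X, y, lam=0.0))  # unregularized: large, unstable coefficients
print(ridge_fit(X, y, lam=1.0))  # regularized: small, stable coefficients
```

In practice the strength lam is chosen by model selection such as cross-validation; larger values shrink the coefficients more.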

6 courses cover this concept

CS 230 Deep Learning

Stanford University

Fall 2022

An in-depth course focused on building neural networks and leading successful machine learning projects. It covers convolutional networks, RNNs, LSTMs, Adam, Dropout, BatchNorm, Xavier/He initialization, and more. Students are expected to have basic computer science skills and familiarity with probability theory and linear algebra.

CS 229: Machine Learning

Stanford University

Winter 2023

This comprehensive course covers machine learning principles ranging from supervised and unsupervised learning to reinforcement learning. Topics also include neural networks, support vector machines, the bias-variance tradeoff, and many real-world applications. It requires a background in computer science, probability, multivariable calculus, and linear algebra.

COS 324: Introduction to Machine Learning

Princeton University

Spring 2019

This introductory course focuses on machine learning, probabilistic reasoning, and decision-making in uncertain environments. Blending theory and practice, the course examines how systems can learn from experience and cope with real-world uncertainty.

CS 168: The Modern Algorithmic Toolbox

Stanford University

Spring 2022

CS 168 provides a comprehensive introduction to modern algorithmic techniques, covering hashing, dimension reduction, programming, gradient descent, and regression. It emphasizes both theoretical understanding and practical application, with each topic complemented by a mini-project. It is suitable for students who have taken CS107 and CS161.

11-785 Introduction to Deep Learning

Carnegie Mellon University

Spring 2020

This course provides a comprehensive introduction to deep learning, starting from foundational concepts and moving towards complex topics such as sequence-to-sequence models. Students gain hands-on experience with PyTorch and can fine-tune models through practical assignments. A basic understanding of calculus, linear algebra, and Python programming is required.

CSCI 1470/2470 Deep Learning

Brown University

Spring 2022

Brown University's Deep Learning course acquaints students with the transformative capabilities of deep neural networks in computer vision, NLP, and reinforcement learning. Using the TensorFlow framework, topics like CNNs, RNNs, deepfakes, and reinforcement learning are addressed, with an emphasis on ethical applications and potential societal impacts.
