Backpropagation is an algorithm for computing the gradient of a model's loss function (for example, the mean squared error) with respect to its parameters. It applies the chain rule of calculus (attributed to Leibniz) and is organized as a form of dynamic programming, reusing intermediate gradients as it works backward through the network. Gradient descent then uses these gradients to update the weights. It was popularized in 1986 by David E. Rumelhart et al.
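Below is a minimal sketch of backpropagation paired with gradient descent for a two-layer network under a mean squared error loss. The network shape and all names (x, y, W1, W2, lr) are illustrative assumptions, not drawn from any of the courses listed here.

```python
import numpy as np

# Minimal sketch: two-layer network, MSE loss, plain gradient descent.
# All shapes and names are illustrative assumptions.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # 4 samples, 3 input features
y = rng.normal(size=(4, 1))          # regression targets
W1 = rng.normal(size=(3, 5)) * 0.1   # first-layer weights
W2 = rng.normal(size=(5, 1)) * 0.1   # second-layer weights
lr = 0.1                             # learning rate

for step in range(100):
    # Forward pass: cache the intermediates the backward pass will reuse.
    h = np.tanh(x @ W1)              # hidden activations
    y_hat = h @ W2                   # predictions
    loss = np.mean((y_hat - y) ** 2) # mean squared error

    # Backward pass: chain rule applied layer by layer,
    # reusing the upstream gradient at each step (dynamic programming).
    d_yhat = 2 * (y_hat - y) / y.shape[0]   # dL/dy_hat
    dW2 = h.T @ d_yhat                      # dL/dW2
    d_h = d_yhat @ W2.T                     # dL/dh, reused below
    dW1 = x.T @ (d_h * (1 - h ** 2))        # dL/dW1 through tanh

    # Gradient descent update of the weights.
    W1 -= lr * dW1
    W2 -= lr * dW2
```

The key point the sketch illustrates is that each layer's gradient is built from the gradient already computed for the layer above it, so the full gradient is obtained in a single backward sweep rather than by differentiating each weight independently.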
Stanford University
Winter 2023
CS 224N provides an in-depth introduction to neural networks for NLP, focusing on end-to-end neural models. The course covers topics such as word vectors, recurrent neural networks, and transformer models.
Stanford University
Winter 2023
This comprehensive course covers machine learning principles ranging from supervised and unsupervised learning to reinforcement learning. Topics also include neural networks, support vector machines, bias-variance tradeoffs, and many real-world applications. It requires a background in computer science, probability, multivariable calculus, and linear algebra.
Stanford University
Autumn 2022-2023
Stanford's CS 221 course teaches foundational principles and practical implementation of AI systems. It covers machine learning, game playing, constraint satisfaction, graphical models, and logic. It is a rigorous course requiring solid foundational skills in programming, math, and probability.
Carnegie Mellon University
Spring 2018
A comprehensive exploration of machine learning theory and practical algorithms. Covers a broad spectrum of topics such as decision tree learning, neural networks, statistical learning, and reinforcement learning. Encourages hands-on learning via programming assignments.
Stanford University
Spring 2022
This is a deep dive into the details of deep learning architectures for visual recognition tasks. The course gives students the ability to implement and train their own neural networks and to understand state-of-the-art computer vision research. It requires Python proficiency and familiarity with calculus, linear algebra, probability, and statistics.
Brown University
Spring 2022
Brown University's Deep Learning course acquaints students with the transformative capabilities of deep neural networks in computer vision, NLP, and reinforcement learning. Using the TensorFlow framework, the course addresses topics like CNNs, RNNs, deepfakes, and reinforcement learning, with an emphasis on ethical applications and potential societal impacts.