Overfitting is a common problem in mathematical modeling in which a model fits a particular set of data too closely, capturing noise rather than the underlying pattern and therefore performing poorly when predicting future observations. Techniques such as model comparison, cross-validation, regularization, early stopping, pruning, Bayesian priors, and dropout can be used to reduce the chance or extent of overfitting. Underfitting, by contrast, occurs when a model cannot adequately capture the underlying structure of the data.
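As a minimal sketch (not drawn from any of the courses listed below), the following Python example shows overfitting and one of the mitigations named above, L2 regularization: a high-degree polynomial fit without a penalty tracks the noise in the training data, while the same model with a ridge penalty generalizes better. The synthetic data, polynomial degree, and alpha value are illustrative assumptions.

```python
# Minimal overfitting illustration with and without L2 (ridge) regularization.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 30)).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel() + rng.normal(0, 0.2, 30)   # noisy training samples
x_test = np.linspace(0, 1, 200).reshape(-1, 1)
y_test = np.sin(2 * np.pi * x_test).ravel()                   # clean test targets

# Degree-15 polynomial with no penalty: very low training error, poor test error.
overfit = make_pipeline(PolynomialFeatures(15), LinearRegression()).fit(x, y)
# Same model class with an L2 penalty: slightly higher training error, better test error.
regularized = make_pipeline(PolynomialFeatures(15), Ridge(alpha=1e-3)).fit(x, y)

for name, model in [("unregularized", overfit), ("ridge", regularized)]:
    train_mse = mean_squared_error(y, model.predict(x))
    test_mse = mean_squared_error(y_test, model.predict(x_test))
    print(f"{name:14s} train MSE: {train_mse:.4f}   test MSE: {test_mse:.4f}")
```

The gap between training and test error is the usual symptom of overfitting; regularization narrows it by penalizing extreme coefficient values.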
Princeton University
Spring 2019
This introductory course focuses on machine learning, probabilistic reasoning, and decision-making in uncertain environments. Blending theory and practice, the course explores how systems can learn from experience and cope with real-world uncertainty.
Carnegie Mellon University
Spring 2020
This course provides a comprehensive introduction to deep learning, starting from foundational concepts and moving towards complex topics such as sequence-to-sequence models. Students gain hands-on experience with PyTorch and can fine-tune models through practical assignments. A basic understanding of calculus, linear algebra, and Python programming is required.
Carnegie Mellon University
Spring 2018
A comprehensive exploration of machine learning theory and practical algorithms. Covers a broad spectrum of topics, including decision tree learning, neural networks, statistical learning, and reinforcement learning, and encourages hands-on learning via programming assignments.
Brown University
Spring 2022
Brown University's Deep Learning course acquaints students with the transformative capabilities of deep neural networks in computer vision, NLP, and reinforcement learning. Using the TensorFlow framework, the course addresses topics such as CNNs, RNNs, deepfakes, and reinforcement learning, with an emphasis on ethical applications and potential societal impacts.