Computer Vision (CV) is the field of developing algorithms and systems that enable computers to understand visual data from the world. Its applications include image and video recognition, object detection, visual tracking, and surveillance. It draws on techniques from other fields of Computer Science such as Machine Learning and Deep Learning.
Computer Vision courses typically require knowledge of Linear Algebra, Probability Theory, and Computer Programming.
Stanford University
Spring 2022
This is a deep dive into the details of deep learning architectures for visual recognition tasks. The course gives students the ability to implement and train their own neural networks and to understand state-of-the-art computer vision research. It requires Python proficiency and familiarity with calculus, linear algebra, probability, and statistics.
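To give a flavor of what "implement and train your own neural networks" involves at its simplest, here is a minimal NumPy sketch of a gradient-descent training loop. It is purely illustrative (a single linear layer on toy data, not course material); all names and the toy task are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = x1 + x2 with a single linear layer and MSE loss.
X = rng.normal(size=(64, 2))
y = X.sum(axis=1, keepdims=True)

W = rng.normal(scale=0.1, size=(2, 1))  # weights
b = np.zeros((1, 1))                    # bias

lr = 0.1
for _ in range(200):
    pred = X @ W + b                    # forward pass
    grad = 2 * (pred - y) / len(X)      # dLoss/dpred for MSE
    W -= lr * (X.T @ grad)              # backprop through the linear layer
    b -= lr * grad.sum(axis=0, keepdims=True)

loss = float(np.mean((X @ W + b - y) ** 2))
```

Real coursework replaces the single layer with stacked nonlinear layers and convolutional architectures, but the forward/backward/update structure of the loop is the same.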
Stanford University
Winter 2023
This course introduces concepts and applications in computer vision, focusing on geometry and 3D understanding. It covers topics like filtering, edge detection, segmentation, clustering, shape reconstruction from stereo, and high-level visual topics. Knowledge of linear algebra, basic probability, and statistics is required.
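As a small taste of the filtering and edge-detection topics listed above, the sketch below convolves an image with Sobel kernels to estimate gradient magnitude. It is a naive illustrative implementation (the helper names `convolve2d` and `edge_magnitude` are mine, not course code).

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2D convolution, 'valid' mode: slide the flipped kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    flipped = kernel[::-1, ::-1]
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

# Sobel kernels approximate horizontal and vertical intensity gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def edge_magnitude(image):
    gx = convolve2d(image, SOBEL_X)
    gy = convolve2d(image, SOBEL_Y)
    return np.hypot(gx, gy)  # per-pixel gradient magnitude

# Toy image: dark left half, bright right half -> strong vertical edge in the middle.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = edge_magnitude(img)
```

The output is large only at the columns straddling the brightness transition and zero in the flat regions, which is exactly the behavior an edge detector should have.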
University of Washington
Winter 2022
A general introduction to computer vision, this course covers traditional image processing techniques and newer, machine-learning-based approaches. It discusses topics such as filtering, edge detection, stereo, flow, and neural network architectures.
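Stereo, one of the topics mentioned, is often first taught via block matching: for each pixel in the left image, search along the same row of the right image for the horizontal shift (disparity) with the lowest matching cost. A deliberately simplified sketch, using single-pixel absolute difference rather than real patch windows (the function `block_match` is an illustrative name, not course code):

```python
import numpy as np

def block_match(left, right, max_disp):
    """For each left-image pixel, pick the disparity d in [0, max_disp]
    minimizing |left[y, x] - right[y, x - d]| along the same scanline."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            best_d, best_cost = 0, float("inf")
            for d in range(min(max_disp, x) + 1):  # stay inside the image
                cost = abs(left[y, x] - right[y, x - d])
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Toy scene: a horizontal intensity ramp shifted by 2 pixels between views,
# so the true disparity is 2 wherever the search range allows it.
left = np.tile(np.arange(8.0), (4, 1))
right = left + 2.0
disp = block_match(left, right, max_disp=3)
```

Real implementations compare patch windows (SAD/SSD/NCC) instead of single pixels to make the match robust to noise and ambiguity, but the search structure is the same.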
Carnegie Mellon University
Spring 2022
This course gives an expansive introduction to computer vision, focusing on image processing, recognition, geometry-based and physics-based vision, and video analysis. Students will gain practical experience solving real-life vision problems. It requires a good understanding of linear algebra, calculus, and programming.