Self-attention is a mechanism that relates different positions of a single sequence in order to compute a representation of that same sequence. It has been used effectively in machine reading, abstractive summarization, and image description generation.
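The idea can be sketched as scaled dot-product self-attention: each position's output is a weighted sum of value vectors from every position, with weights derived from query-key similarity. This is a minimal NumPy sketch; the projection matrices `Wq`, `Wk`, `Wv` and the dimensions are illustrative assumptions, not tied to any particular course's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a single sequence.

    X:          (seq_len, d_model) input embeddings
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices (hypothetical here)
    Returns     (seq_len, d_k): each position is a weighted sum of
                value vectors from all positions in the same sequence.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # relate every pair of positions
    weights = softmax(scores, axis=-1)       # one attention distribution per position
    return weights @ V

rng = np.random.default_rng(0)
d_model, d_k, seq_len = 8, 4, 5
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4)
```

Because the queries, keys, and values all come from the same sequence, every output position can attend to every input position, which is what lets the mechanism compute a representation of the sequence from itself.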
Stanford University
Winter 2023
CS 224N provides an in-depth introduction to neural networks for NLP, focusing on end-to-end neural models. The course covers topics such as word vectors, recurrent neural networks, and transformer models, among others.
Stanford University
Spring 2022
This is a deep dive into the details of deep learning architectures for visual recognition tasks. The course gives students the ability to implement and train their own neural networks and to understand state-of-the-art computer vision research. It requires Python proficiency and familiarity with calculus, linear algebra, probability, and statistics.