Richard Socher, Chris Manning and Yoshua Bengio have created a tutorial on “Deep Learning for NLP (without Magic)”.
The tutorial includes slides and two videos of talks held on the subject. It deals with how deep learning algorithms can be applied in natural language processing. Deep learning is a set of algorithms and models which work under the assumption that observed data is generated from multiple layers of hidden representations that interact with each other.
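The idea of stacked hidden representations can be illustrated with a toy forward pass. The following is a minimal sketch, not taken from the tutorial itself; the layer sizes and the tanh/softmax choices are assumptions made purely for illustration.

```python
import numpy as np

# Illustrative sketch: a two-layer feed-forward network, showing how
# each layer transforms the previous one into a new hidden representation.
# Dimensions and activation functions are arbitrary assumptions.

rng = np.random.default_rng(0)

def layer(x, W, b, activation):
    # One layer: affine transform followed by a nonlinearity.
    return activation(W @ x + b)

def softmax(z):
    # Numerically stable softmax for the output layer.
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.normal(size=4)                         # input features
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)  # first hidden layer
W2, b2 = rng.normal(size=(3, 5)), np.zeros(3)  # output layer

h = layer(x, W1, b1, np.tanh)   # hidden representation of the input
y = layer(h, W2, b2, softmax)   # probabilities over 3 hypothetical classes

print(y.shape)
```

In practice such layers are learned from data (via backpropagation) rather than randomly initialised and left fixed, but the composition of transformations is the essence of the "multiple layers of hidden representations" assumption.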
Although not really new, and for some time disregarded in favour of simpler and seemingly more robust models such as support vector machines (SVMs), deep learning has gained considerable traction in recent years due to its success in areas such as automatic speech recognition (think Siri) and image processing (think Google's self-driving cars). Although 'deep learning' has in many cases come to be used as a hype term for anything 'artificial intelligence', it is indeed an important approach for modern AI applications.
The authors' goal is to present the complex mathematics behind these algorithms in a transparent and intuitive manner, which is a highly commendable endeavour, especially given that the maths behind these models is far from easy.