NeuralMT / Background
Introduction to Neural MT (by Kyunghyun Cho, May 2015)
- Part 1: Background
- Part 2: Encoder-decoder architectures
- Part 3: Attention and more
- Transformers
- Lecture notes from a course on natural language understanding with distributed representations
NMT tutorials
- seq2seq tf tutorial
- ACL 2016 tutorial slides (Chris Manning, Kyunghyun Cho, Thang Luong)
- dl4mt software and tutorial (Kyunghyun Cho)
- NMT tips (lamtram + CNN, Graham Neubig)
- NMT with TensorFlow
References
- ... should add essential references here ...
- Describing Multimedia Content using Attention-based Encoder–Decoder Networks (Kyunghyun Cho, Aaron Courville, Yoshua Bengio, 2015)
- Publication list at the SMT Research Survey
Background on deep learning
- Yoav Goldberg's Primer on NN models for NLP
- Kyunghyun Cho's Lecture Notes on NLU with distributed representations
- The deep learning book
- http://neuralnetworksanddeeplearning.com/
- Deep learning in a nutshell: core concepts, history and training, sequence learning
- Understanding natural languages using Torch
- speech recognition
- http://deeplearning.net reading list, software, research groups
- training, interpreting seq2seq models
Links
- NMT Seminar in Munich (Alexander Fraser)
- Deep Munich (links and info)
- SMT survey (Adam Lopez)