A repository for sharing the best resources for learning the current state of the art in machine learning. Suggested by my friend, Tommy Unger.
- The Deep End of Deep Learning | Hugo Larochelle | TEDxBoston
- How we teach computers to understand pictures | Fei-Fei Li
- Deep Learning and the Future of AI | Yann LeCun | Talk 1/2
- Deep Learning and the Future of AI | Yann LeCun | Q&A 2/2
- MIT OCW - 6.034 lecture 12a
- MIT OCW - 6.034 lecture 12b
- Deep Learning SIMPLIFIED
- Neural Networks Demystified
- Deep Learning | Udacity
- Understanding the Bias-Variance Tradeoff
- A Tutorial on Deep Learning Part 1: Nonlinear Classifiers and The Backpropagation Algorithm
- A Tutorial on Deep Learning Part 2: Autoencoders, Convolutional Neural Networks and Recurrent Neural Networks
- Sparse Autoencoders
- Neural Network Zoo
- A guide to convolution arithmetic for deep learning
- Convolutional Neural Networks (CNNs): An Illustrated Explanation
- Calculus on Computational Graphs
- Linear Transformations
- Notes on Minsky's Perceptrons
- Visual Information Theory
- Automatic Differentiation
- AD on Wikipedia
- Non-convex Optimization Blog
- Yes you should understand backprop
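The backprop and automatic-differentiation resources above all build on one idea: apply the chain rule in reverse over a computational graph. Here is a minimal sketch of that idea in Python (the `Value` class and all names are illustrative inventions for this note, not the API of any linked library):

```python
# Minimal reverse-mode automatic differentiation on scalars.
# Each Value remembers how it was computed so gradients can flow back.
class Value:
    def __init__(self, data, parents=()):
        self.data = data              # forward value
        self.grad = 0.0               # accumulated d(output)/d(self)
        self._parents = parents       # nodes this value was computed from
        self._backward = lambda: None # applies the local chain rule

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad     # d(a+b)/da = 1
            other.grad += out.grad    # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply local rules in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x, y = Value(2.0), Value(3.0)
z = x * y + y          # z = xy + y
z.backward()
print(x.grad, y.grad)  # dz/dx = y = 3.0, dz/dy = x + 1 = 3.0
```

Every deep learning framework in the list below is, at its core, this bookkeeping scaled up to tensors.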
There is no single optimization algorithm that is best for every problem; which one works well depends on the model and the data.
A feedforward network with one hidden layer and a nonlinear (non-polynomial) activation can approximate any continuous function on a compact domain to arbitrary accuracy, given enough hidden units (the universal approximation theorem).
A neural network is a differentiable function of arbitrarily many parameters, which is what makes gradient-based training possible at any scale.
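As a small illustration of the approximation claim above, a one-hidden-layer tanh network trained with hand-written gradient descent can fit sin(x) on a compact interval. The layer size, seed, learning rate, and step count below are arbitrary illustrative choices, not taken from any linked resource:

```python
import numpy as np

# One hidden layer, tanh activation, full-batch gradient descent.
# Target: sin(x) on [-pi, pi]. All hyperparameters are illustrative.
rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
Y = np.sin(X)

H = 32                                            # hidden units
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)
lr = 0.02

for step in range(10000):
    # forward pass
    h = np.tanh(X @ W1 + b1)                      # hidden activations, (200, H)
    pred = h @ W2 + b2                            # network output, (200, 1)
    err = pred - Y
    loss = float(np.mean(err ** 2))
    if step == 0:
        first_loss = loss
    # backward pass: chain rule, layer by layer
    g_pred = 2.0 * err / len(X)                   # dL/d(pred)
    g_W2, g_b2 = h.T @ g_pred, g_pred.sum(0)
    g_h = g_pred @ W2.T
    g_z = g_h * (1.0 - h ** 2)                    # tanh'(z) = 1 - tanh(z)^2
    g_W1, g_b1 = X.T @ g_z, g_z.sum(0)
    # gradient-descent update
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print(f"MSE: {first_loss:.3f} -> {loss:.3f}")     # loss should drop sharply
```

With more hidden units the fit can be made as tight as desired, which is exactly what the theorem promises.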
- Gentle Tensorflow Intro
- Hacker's guide to Neural Networks
- A Neural Network in 11 lines of Python
- Stanford's Unsupervised Feature Learning and Deep Learning
- Hugo Larochelle's Neural Network video series
- Neural Networks and Deep Learning: Awesome Ebook
- Raúl Rojas's Neural Networks: A Systematic Introduction
- Andrej Karpathy's CS231n Convolutional Neural Networks for Visual Recognition
- Wild ML: Lots of deep learning writeups
- Anyone Can Learn To Code an LSTM-RNN in Python
- How to Code and Understand DeepMind's Neural Stack Machine
- The Unreasonable Effectiveness of Recurrent Neural Networks
- Char RNN
- Montréal Institute for Learning Algorithms
- University of Central Florida Evolutionary Complexity Research Group
- University of Wyoming Evolving AI Lab
- Stanford NLP
- Stanford Vision Lab
- NYU Computational Intelligence, Learning, Vision, and Robotics
- Harvard NLP
- DeepMind
- IBM DeepQA
- Baidu: Institute of Deep Learning
- Baidu: Big Data Lab
- Baidu: Silicon Valley AI Lab
- Open Review
- ICLR (International Conference on Learning Representations)
- NIPS (Neural Information Processing Systems) Conference
- TensorFlow
- Torch Demos
- Theano
- Keras
- Caffe
- MXNET
- Microsoft Cognitive Toolkit
- Chainer
- Stanford CoreNLP
- Scikit-learn
- DeepMind Lab
- OpenAI Gym
- OpenAI Universe
- DeepLearning4J
- Brief Lua Tutorial + Softmax Classifier Hello World
- Minimal Hello World NN Tutorials
- Torch Overview Slides, Many Topics
- Torch Documentation Template
- Torch Documentation Template - Tensor
- An Intro to Convolutional Networks in Torch: 1d
- Torch Master Documentation
- WordNet
- ImageNet
- CIFAR (Canadian Institute for Advanced Research)
- LSUN (Large-scale Scene Understanding Challenge)
- LSUN 20 Objects
- MS COCO - Common Objects in Context
- Microsoft MAchine Reading COmprehension Dataset (MS MARCO)
- Kaggle
- DeepLearning.net
- UCI Machine Learning Datasets
- awesome-public-datasets aggregation
- USA Government Open Datasets
- TensorFlow Playground
- ConvNetJS Deep Learning in your browser
- Harvard's LSTMVis
- Deep Visualization
- Crazy 3D Visualization
- Google Brain's Distill Blog
- Colah's Blog
- 大トロのブログ (Otoro's Blog)
- Chain Rule + Dynamic Programming = Neural Networks
- Justin Domke's Weblog
- Tomasz Malisiewicz's Computer Vision Blog
- Denny Britz's WildML blog
- r2rt
- Andrej Karpathy Blog
- Michael Nielsen
- i am trask blog
- Sebastian Ruder
- Neural Perspective