Topic outline

  • Course description

    Recent developments in neural network approaches (now better known as "deep learning") have dramatically changed the landscape of several research fields, such as image classification, object detection, speech recognition, machine translation, self-driving cars, and many more. Due to its promise of leveraging large (and sometimes even small) amounts of data in an end-to-end manner, i.e. training a model to extract features by itself and to learn from them, deep learning is increasingly appealing to other fields as well: medicine, time series analysis, biology, and simulation.

    This course is a deep dive into the practical details of deep learning architectures, in which we attempt to demystify deep learning and kick-start you into using it in your own field of research. During this course, you will gain a better understanding of the basics of deep learning and become familiar with its applications. We will show how to set up, train, debug, and visualize your own neural network. Along the way, we will provide practical engineering tricks for training neural networks or adapting them to new tasks.

    By the end of this class, you will have an overview of the deep learning landscape and its applications to traditional fields, as well as some ideas for applying it to new ones. You should also be able to train a multi-million-parameter deep neural network by yourself. For the implementations, we will use the PyTorch library in Python.
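
    To give a flavor of what "training a network in PyTorch" looks like in practice, here is a minimal sketch of a typical training loop. The task, data, layer sizes, and hyperparameters are all made up for illustration; they are not material from the course itself.

    ```python
    # Illustrative sketch only: a tiny fully connected network trained on
    # random data, to show the shape of a typical PyTorch training loop.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Hypothetical toy task: 100 samples, 20 features, 3 classes.
    x = torch.randn(100, 20)
    y = torch.randint(0, 3, (100,))

    model = nn.Sequential(
        nn.Linear(20, 64),   # input features -> hidden layer
        nn.ReLU(),
        nn.Linear(64, 3),    # hidden layer -> class logits
    )
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    losses = []
    for epoch in range(20):
        optimizer.zero_grad()        # clear gradients from the previous step
        loss = loss_fn(model(x), y)  # forward pass + loss computation
        loss.backward()              # backpropagation
        optimizer.step()             # gradient descent update
        losses.append(loss.item())
    ```

    A real, multi-million-parameter model would add minibatching via `DataLoader`, a validation split, and GPU placement, but the loop structure stays the same.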

    The topics covered in this course include:

    • Neural network approaches: feedforward networks, convolutional networks (CNNs), recurrent networks (RNNs)
    • Modern practices: backpropagation, regularization, optimization, fine-tuning
    • Deep Learning research: autoencoders, deep generative models, long short-term memory (LSTM) modules
    • CNN architectures: VGG, ResNet, fully convolutional networks, multi-input and multi-output nets
    • RNN architectures: bidirectional RNNs, encoder-decoder sequence-to-sequence models, LSTMs, GRUs
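
    As a small preview of the CNN material above, the sketch below composes convolution, pooling, and a linear classifier in PyTorch. The network, image size, and class count are invented for the example and are far smaller than anything like VGG or ResNet.

    ```python
    # Illustrative sketch: a minimal convolutional classifier, to show how
    # CNN layers compose in PyTorch. All sizes here are arbitrary choices.
    import torch
    import torch.nn as nn

    tiny_cnn = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3-channel image -> 16 feature maps
        nn.ReLU(),
        nn.MaxPool2d(2),                             # halve the spatial resolution
        nn.Flatten(),                                # 16 maps of 16x16 -> 4096 features
        nn.Linear(16 * 16 * 16, 10),                 # 10-way classifier logits
    )

    batch = torch.randn(8, 3, 32, 32)  # batch of 8 fake 32x32 RGB images
    logits = tiny_cnn(batch)
    print(logits.shape)  # torch.Size([8, 10])
    ```

    The course's deeper architectures (VGG, ResNet) follow the same compositional pattern, just with many more layers plus ideas like skip connections.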