Lectures



Mathematics for Deep Learning

Abstract: TBA



Introduction to Deep Learning

Introductory course on deep learning methods and algorithms.

Prerequisites
1. Go over the installation instructions at https://github.com/Atcold/pytorch-Deep-Learning
2. Successfully complete https://github.com/Atcold/pytorch-Deep-Learning/blob/master/01-tensor_tutorial.ipynb

Legend
T: theory (slides and animations)
P: practice (Jupyter Notebooks)

Schedule
1. Time slot 1 (2h) M 9:00–10:40. Introduction and motivation.
T) Learning paradigms: supervised, unsupervised, and reinforcement learning
P) Getting started with the tools: Jupyter notebooks, PyTorch tensors, and automatic differentiation (see the first sketch after this schedule)

2. Time slot 2 (2h) M 11:20–13:00. Classification and regression.
T+P) A neural net’s forward and backward propagation for classification and regression (see the first sketch after this schedule)

3. Time slot 3 (2h) M 15:00–16:40. Energy-based models.
T) Latent variable generative energy-based models (LV-GEBMs) part I: foundations

4. Time slot 4 (3h) M 17:20–19:50. Geometric deep learning (grid and set).
T+P) Convolutional neural nets improve performance by exploiting the nature of the data (see the second sketch after this schedule)
T+P) Recurrent nets natively support sequential data
T+P) Self/cross and soft/hard attention: a building block for learning from sets

5. Time slot 5 (2h) T 15:00–16:40. Generative models.
T+P) LV-GEBMs part II: autoencoders, adversarial nets
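
A minimal sketch of what the first two practice sessions touch, using plain PyTorch; the names and sizes are invented for illustration and are not the course's actual notebook code:

    import torch

    # Tensors and automatic differentiation (time slot 1).
    x = torch.randn(4, 3)                       # a 4x3 tensor of random inputs
    w = torch.zeros(3, 1, requires_grad=True)   # parameters tracked by autograd

    # Forward pass (time slot 2): a linear model with a
    # mean-squared-error loss against an arbitrary target.
    y_true = torch.ones(4, 1)
    y_pred = x @ w
    loss = torch.nn.functional.mse_loss(y_pred, y_true)

    # Backward pass: autograd fills w.grad with dloss/dw.
    loss.backward()

    # One step of plain gradient descent, outside the autograd tape.
    with torch.no_grad():
        w -= 0.1 * w.grad
        w.grad.zero_()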
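
And a similarly hedged sketch for the geometric deep learning slot: a toy convolutional net (all layer sizes invented for the example) showing how convolutions exploit the grid structure of image data through local, weight-shared filters:

    import torch

    # A tiny convolutional net: local 3x3 filters shared across the image grid.
    net = torch.nn.Sequential(
        torch.nn.Conv2d(1, 8, kernel_size=3, padding=1),  # 1 input channel -> 8 feature maps
        torch.nn.ReLU(),
        torch.nn.MaxPool2d(2),                            # halve resolution: 28x28 -> 14x14
        torch.nn.Flatten(),
        torch.nn.Linear(8 * 14 * 14, 10),                 # 10 class scores
    )
    logits = net(torch.randn(1, 1, 28, 28))               # e.g. one MNIST-sized image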



Voting: axiomatic and algorithmic challenges
The goal of this lecture is to offer a general introduction to preference aggregation, the desiderata for voting rules, and the computational complexity of classic preference aggregation problems.
Voting over restricted preference domains
In this lecture we discuss domain restrictions that make voting-related problems computationally tractable and avoid some of the classic social-choice paradoxes. We focus on the classic domain of single-peaked preferences and consider the complexity of preference elicitation and of learning the preference structure.
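
As a concrete illustration of the single-peaked domain (a sketch written for this page, not material from the lecture), the following Python function tests single-peakedness using the standard characterization that a ranking is single-peaked on a left-to-right axis iff every prefix of the ranking occupies a contiguous interval of positions on that axis:

    def is_single_peaked(ranking, axis):
        # `ranking` lists candidates from best to worst; `axis` is the
        # left-to-right societal axis. Every prefix of a single-peaked
        # ranking must form a contiguous interval on the axis.
        pos = {c: i for i, c in enumerate(axis)}
        lo = hi = pos[ranking[0]]          # start at the voter's peak
        for c in ranking[1:]:
            p = pos[c]
            if p == lo - 1:
                lo = p                     # extend the interval to the left
            elif p == hi + 1:
                hi = p                     # extend the interval to the right
            else:
                return False               # candidate breaks the interval
        return True

    axis = ["a", "b", "c", "d"]
    print(is_single_peaked(["b", "c", "a", "d"], axis))  # True
    print(is_single_peaked(["b", "d", "a", "c"], axis))  # False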


Towards Developmental Machine Learning

By and large, studies of machine learning and pattern recognition are rooted in the framework of statistics. This is primarily due to the way machine learning is traditionally posed, namely as the extraction of regularities from a sample of a probability distribution. This lecture promotes a genuinely different way of interpreting learning, one that relies on system dynamics. We promote a view of learning as the outcome of laws of nature that govern the interactions of intelligent agents with their own environment. This leads to an in-depth interpretation of causality, along with the definition of principles and methods for learning to store events without the long-term forgetting that characterizes state-of-the-art recurrent neural networks. Finally, we reinforce the underlying principle that the acquisition of cognitive skills by learning obeys information-based laws derived from variational principles, which hold regardless of biology.



State representation learning and evaluation in robotic interaction tasks 1/2

Efficient representations of observed input data have been shown to significantly improve the performance of subsequent learning tasks in numerous domains. To obtain such representations automatically, we need to design both i) models that identify useful patterns in the input data and encode them into structured low-dimensional representations, and ii) evaluation measures that accurately assess the quality of the resulting representations. We present work that addresses both of these requirements. We give a short overview of representation learning techniques and of the different structures that can be imposed on representation spaces, and we show how these can be applied in complex robotics tasks involving physical interaction with the environment.

State representation learning and evaluation in robotic interaction tasks 2/2

Efficient representations of observed input data have been shown to significantly improve the performance of subsequent learning tasks in numerous domains. To obtain such representations automatically, we need to design both i) models that identify useful patterns in the input data and encode them into structured low-dimensional representations, and ii) evaluation measures that accurately assess the quality of the resulting representations. We present work that addresses both of these requirements. We give a short overview of representation learning techniques and of the different structures that can be imposed on representation spaces, and we show how these can be applied in complex robotics tasks involving physical interaction with the environment.
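
A hedged illustration of the kind of model both parts refer to (a sketch with invented layer sizes, not the presenters' code): a small PyTorch autoencoder whose encoder compresses observations into a structured low-dimensional representation, trained here with a plain reconstruction loss:

    import torch

    class AutoEncoder(torch.nn.Module):
        # Encoder compresses an observation; decoder reconstructs it.
        def __init__(self, obs_dim=64, latent_dim=8):
            super().__init__()
            self.encoder = torch.nn.Sequential(
                torch.nn.Linear(obs_dim, 32), torch.nn.ReLU(),
                torch.nn.Linear(32, latent_dim),
            )
            self.decoder = torch.nn.Sequential(
                torch.nn.Linear(latent_dim, 32), torch.nn.ReLU(),
                torch.nn.Linear(32, obs_dim),
            )

        def forward(self, obs):
            z = self.encoder(obs)            # the low-dimensional representation
            return self.decoder(z), z

    model = AutoEncoder()
    obs = torch.randn(16, 64)                # a batch of flattened observations
    recon, z = model(obs)
    loss = torch.nn.functional.mse_loss(recon, obs)  # reconstruction training signal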




 

Tutorials


PyTorch 1/5

Abstract: TBA

PyTorch 2/5

Abstract: TBA

PyTorch 3/5

Abstract: TBA

PyTorch 4/5

Abstract: TBA

PyTorch 5/5

Abstract: TBA