Lectures
Reading the Genome Sequence using Neural Networks: Introduction
Reading the Genome Sequence using Neural Networks: Improving Resolution, Interpretability and Accuracy
Voting: axiomatic and algorithmic challenges
Voting over restricted preference domains
Towards Developmental Machine Learning
By and large, most studies of machine learning and pattern recognition are rooted in the framework of statistics. This is primarily due to the way machine learning is traditionally posed, namely as a problem of extracting regularities from a sample of a probability distribution. This lecture promotes a genuinely different interpretation of learning, one that relies on system dynamics. We promote a view of learning as the outcome of laws of nature that govern the interactions of intelligent agents with their own environment. This leads to an in-depth interpretation of causality, along with the definition of principles and methods for learning to store events without the long-term forgetting that characterizes state-of-the-art recurrent neural networks. Finally, we reinforce the underlying principle that the acquisition of cognitive skills by learning obeys information-based laws derived from variational principles, which hold regardless of biology.
State representation learning and evaluation in robotic interaction tasks
Efficient representations of observed input data have been shown to significantly accelerate the performance of subsequent learning tasks in numerous domains. To obtain such representations automatically, we need to design both i) models that identify useful patterns in the input data and encode them into structured, low-dimensional representations, and ii) evaluation measures that accurately assess the quality of the resulting representations. We present work that addresses both these requirements. We give a short overview of representation learning techniques and the different structures that can be imposed on representation spaces. We show how these can be applied in complex robotics tasks involving physical interaction with the environment.
AI as a Tool for Investigating Human Intelligence 1/2
Artificial intelligence has great potential to uncover the underlying mechanisms of human intelligence. Neural networks inspired by the brain can simulate how humans acquire cognitive abilities and thus reveal what enables/disables cognitive development. My lecture introduces a neuroscience theory called predictive coding. We have been designing neural networks based on predictive coding and investigating to what extent the theory accounts for cognitive development. The key idea is that the brain works as a predictive machine and perceives the world and acts on it to minimize prediction errors. Our robot experiments demonstrate that the process of minimizing prediction errors leads to sensorimotor and social cognitive development and that aberrant predictive processing produces atypical development such as developmental disorders. We discuss how these findings facilitate the understanding of human intelligence and provide a new principle for cognitive development.
AI as a Tool for Investigating Human Intelligence 2/2
Artificial intelligence has great potential to uncover the underlying mechanisms of human intelligence. Neural networks inspired by the brain can simulate how humans acquire cognitive abilities and thus reveal what enables/disables cognitive development. My lecture introduces a neuroscience theory called predictive coding. We have been designing neural networks based on predictive coding and investigating to what extent the theory accounts for cognitive development. The key idea is that the brain works as a predictive machine and perceives the world and acts on it to minimize prediction errors. Our robot experiments demonstrate that the process of minimizing prediction errors leads to sensorimotor and social cognitive development and that aberrant predictive processing produces atypical development such as developmental disorders. We discuss how these findings facilitate the understanding of human intelligence and provide a new principle for cognitive development.
Computational Approaches for Solving Systems of Nonlinear Equations
We will show how a system of nonlinear equations can be reformulated into an optimization problem, and we will introduce techniques that can be utilized to search for solutions to the global optimization problem that arises when the most common reformulation is performed.
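As a toy illustration of this reformulation (our own sketch, not the lecture's material), the single equation x² − 2 = 0 can be recast as minimizing the squared residual f(x) = (x² − 2)², which plain gradient descent can then attack:

```python
# Recast root-finding x**2 - 2 == 0 as minimizing the squared
# residual f(x) = (x**2 - 2)**2, then apply gradient descent.
def solve_by_optimization(x0, lr=0.02, steps=500):
    x = x0
    for _ in range(steps):
        grad = 4 * x * (x * x - 2)  # derivative of (x**2 - 2)**2
        x -= lr * grad
    return x

root = solve_by_optimization(1.5)
print(root)  # close to sqrt(2) ~ 1.41421
```

Note that plain gradient descent only finds a local minimum; as the abstract indicates, the general case requires global optimization techniques.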
Automated Machine Learning: the state of the art
Automated machine learning is the science of learning how to build machine learning models in a data-driven, efficient, and objective way. It replaces manual (and often frustrating) trial-and-error with automated, principled processes. It also democratizes machine learning, allowing many more people to build high-quality machine learning systems.
In the first lecture, we will explore the state of the art in automated machine learning. We will cover the best techniques for neural architecture search, as well as learning complete machine learning pipelines. We explain how to design model search spaces, and how to efficiently search for the best models within them. We’ll also cover useful tips and tricks to speed up the search for good models, as well as pitfalls and best practices.
Automated Machine Learning: learning to learn
In the second lecture, we’ll cover techniques to continually learn how to build better machine learning models. Just as human experts get ever better at building better models, automated machine learning systems should also get better every time they run. We’ll cover research on the intersection of automated machine learning, meta-learning, and continual learning that enables us to learn and capture which models work well, and transfer that knowledge to build better machine learning models, faster.
Tutorials
Mathematics for Deep Learning 1/2
Mathematics for Deep Learning 2/2
Introduction to Deep Learning 1/6
Introductory course on deep learning methods and algorithms.
1. Go over the installation instructions at https://github.com/Atcold/pytorch-Deep-Learning
2. Successfully complete https://github.com/Atcold/pytorch-Deep-Learning/blob/master/01-tensor_tutorial.ipynb
T: theory (slides and animations)
P: practice (Jupyter Notebooks)
T) Learning paradigms: supervised, unsupervised, and reinforcement learning
P) Getting started with the tools: Jupyter notebook, PyTorch tensors and auto differentiation
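To give a flavor of the practice session, here is a minimal (unofficial) example of PyTorch tensors and automatic differentiation:

```python
import torch

# A scalar tensor that tracks operations for automatic differentiation
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x      # y = x^2 + 2x
y.backward()            # populate x.grad with dy/dx

print(x.grad)           # dy/dx = 2x + 2 = 8 at x = 3
```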
Introduction to Deep Learning 2/6
Introductory course on deep learning methods and algorithms.
1. Go over the installation instructions at https://github.com/Atcold/pytorch-Deep-Learning
2. Successfully complete https://github.com/Atcold/pytorch-Deep-Learning/blob/master/01-tensor_tutorial.ipynb
T: theory (slides and animations)
P: practice (Jupyter Notebooks)
T+P) Neural net’s forward and backward propagation for classification and regression
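To give a feel for what forward and backward propagation compute, here is a hand-rolled sketch (ours, not the course notebook) of a single linear neuron trained with squared-error loss:

```python
# One linear neuron y = w*x + b trained on a single (x, target) pair
# with squared-error loss, using hand-derived gradients.
w, b, lr = 0.5, 0.0, 0.1
x, target = 2.0, 2.0

for _ in range(50):
    # forward pass
    pred = w * x + b
    loss = (pred - target) ** 2
    # backward pass (chain rule by hand)
    dpred = 2 * (pred - target)
    dw, db = dpred * x, dpred
    # gradient step
    w -= lr * dw
    b -= lr * db

print(w * x + b)  # prediction approaches the target 2.0
```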
Introduction to Deep Learning 3/6
Introductory course on deep learning methods and algorithms.
1. Go over the installation instructions at https://github.com/Atcold/pytorch-Deep-Learning
2. Successfully complete https://github.com/Atcold/pytorch-Deep-Learning/blob/master/01-tensor_tutorial.ipynb
T: theory (slides and animations)
P: practice (Jupyter Notebooks)
T) Latent variable generative energy-based models (LV-GEBMs) part I: foundations
Introduction to Deep Learning 4/6
Introductory course on deep learning methods and algorithms.
1. Go over the installation instructions at https://github.com/Atcold/pytorch-Deep-Learning
2. Successfully complete https://github.com/Atcold/pytorch-Deep-Learning/blob/master/01-tensor_tutorial.ipynb
T: theory (slides and animations)
P: practice (Jupyter Notebooks)
T+P) Convolutional neural nets improve performance by exploiting the nature of the data
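As an unofficial teaser of the convolutional session: a 3×3 convolution with padding preserves spatial size while transforming channels, exploiting the grid structure and local correlations of images:

```python
import torch
import torch.nn as nn

# 3-channel 32x32 input (e.g. a CIFAR-like image), batch of one
x = torch.randn(1, 3, 32, 32)

# 16 filters of size 3x3; padding=1 keeps the 32x32 spatial size
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
y = conv(x)

print(y.shape)  # torch.Size([1, 16, 32, 32])
```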
Introduction to Deep Learning 5/6
Introductory course on deep learning methods and algorithms.
1. Go over the installation instructions at https://github.com/Atcold/pytorch-Deep-Learning
2. Successfully complete https://github.com/Atcold/pytorch-Deep-Learning/blob/master/01-tensor_tutorial.ipynb
T: theory (slides and animations)
P: practice (Jupyter Notebooks)
T+P) Recurrent nets natively support sequential data
T+P) Self/cross and soft/hard attention: a building block for learning from sets
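A minimal sketch of the scaled dot-product (soft) attention mentioned above, assuming tensors shaped as (batch, sequence length, feature dimension); this is our illustration, not the course notebook:

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Soft attention: each query attends to all keys."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = torch.softmax(scores, dim=-1)  # each row sums to 1
    return weights @ v

q = k = v = torch.randn(1, 4, 8)  # batch=1, 4 tokens, 8 features
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 4, 8])
```

Setting q = k = v gives self-attention; using queries from one set and keys/values from another gives cross-attention.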
Introduction to Deep Learning 6/6
Introductory course on deep learning methods and algorithms.
1. Go over the installation instructions at https://github.com/Atcold/pytorch-Deep-Learning
2. Successfully complete https://github.com/Atcold/pytorch-Deep-Learning/blob/master/01-tensor_tutorial.ipynb
T: theory (slides and animations)
P: practice (Jupyter Notebooks)
T+P) LV-GEBMs part II: autoencoders, adversarial nets
Statistical Physics view of theory of Machine Learning 1/2
Statistical Physics view of theory of Machine Learning 2/2
PyTorch 1/5
Introduction.
PyTorch 2/5
Introduction.
PyTorch 3/5
Optimization.
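As a hint of what the optimization sessions involve, here is a minimal (unofficial) torch.optim loop minimizing a quadratic:

```python
import torch

# Minimize (w - 1)^2 with stochastic gradient descent
w = torch.tensor([2.0], requires_grad=True)
optimizer = torch.optim.SGD([w], lr=0.1)

for _ in range(100):
    optimizer.zero_grad()          # clear gradients from the last step
    loss = (w - 1.0).pow(2).sum()  # forward pass
    loss.backward()                # backward pass: compute w.grad
    optimizer.step()               # w <- w - lr * w.grad

print(w.item())  # converges to the minimizer 1.0
```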
PyTorch 4/5
Optimization.
PyTorch 5/5
Deployment.