Lecturers
Each Lecturer will hold three or four lessons on a specific topic.
Topics
Deep Learning, Genomics
Biography
Ziga Avsec is a research scientist at DeepMind, where he leads the genomics initiative within DeepMind’s Science program. He obtained his Ph.D. from the Technical University of Munich, supervised by Julien Gagneur. His past research has focused on the development of sequence-based predictive models to better understand the human genome.
Lectures
Biography
Edith Elkind is a Professor of Computer Science at the University of Oxford. She obtained her PhD from Princeton in 2005 and worked in the UK, Israel, and Singapore before joining Oxford in 2013. She works in algorithmic game theory, with a focus on algorithms for collective decision making and coalition formation. Edith has published over 100 papers in leading AI conferences and journals, and has served as a program chair of WINE, AAMAS, ACM EC and COMSOC; she will serve as a program chair of IJCAI in 2023.
Lectures
Topics
Constraint-Based Approaches to Machine Learning
Biography
Marco Gori received the Ph.D. degree in 1990 from Università di Bologna, Italy, while working partly as a visiting student at the School of Computer Science, McGill University, Montréal. In 1992, he became an associate professor of Computer Science at Università di Firenze and, in November 1995, he joined the Università di Siena, where he is currently full professor of computer science. His main interests are in machine learning, computer vision, and natural language processing. He was the leader of the WebCrow project, supported by Google, for the automatic solving of crosswords, which outperformed human competitors in an official competition at the ECAI-06 conference. He has recently published the book “Machine Learning: A Constraint-Based Approach,” where you can find his view on the field.
He has been an Associate Editor of a number of journals in his area of expertise, including the IEEE Transactions on Neural Networks and Neural Networks, and he has been the Chairman of the Italian Chapter of the IEEE Computational Intelligence Society and the President of the Italian Association for Artificial Intelligence. He is a fellow of ECCAI (EurAI, the European Coordinating Committee for Artificial Intelligence), a fellow of the IEEE, and of IAPR. He is in the list of top Italian scientists kept by VIA-Academy.
Lectures
By and large, most studies of machine learning and pattern recognition are rooted in the framework of statistics. This is primarily due to the way machine learning is traditionally posed, namely as a problem of extracting regularities from a sample of a probability distribution. This lecture promotes a truly different way of interpreting learning, one that relies on system dynamics. We promote a view of learning as the outcome of laws of nature that govern the interactions of intelligent agents with their own environment. This leads to an in-depth interpretation of causality, along with the definition of principles and methods for learning to store events without the long-term forgetting that characterizes state-of-the-art recurrent neural network technologies. Finally, we reinforce the underlying principle that the acquisition of cognitive skills by learning obeys information-based laws rooted in variational principles, which hold regardless of biology.
Topics
Robotics, Robot Vision, Robot Learning
Biography
Danica Kragic is a Professor at the School of Computer Science and Communication at the Royal Institute of Technology, KTH. She received her MSc in Mechanical Engineering from the Technical University of Rijeka, Croatia in 1995 and her PhD in Computer Science from KTH in 2001. She has been a visiting researcher at Columbia University, Johns Hopkins University and INRIA Rennes. She is the Director of the Centre for Autonomous Systems. Danica received the 2007 IEEE Robotics and Automation Society Early Academic Career Award. She is a member of the Royal Swedish Academy of Sciences, the Royal Swedish Academy of Engineering Sciences and the Young Academy of Sweden. She holds an Honorary Doctorate from the Lappeenranta University of Technology. She chaired the IEEE RAS Technical Committee on Computer and Robot Vision and served as an IEEE RAS AdCom member. Her research is in the area of robotics, computer vision and machine learning. In 2012, she received an ERC Starting Grant. Her research is supported by the EU, the Knut and Alice Wallenberg Foundation, the Swedish Foundation for Strategic Research and the Swedish Research Council. She is an IEEE Fellow.
https://en.wikipedia.org/wiki/Danica_Kragic
Lectures
Efficient representations of observed input data have been shown to significantly accelerate the performance of subsequent learning tasks in numerous domains. To obtain such representations automatically, we need to design both i) models that identify useful patterns in the input data and encode them into structured low-dimensional representations, and ii) evaluation measures that accurately assess the quality of the resulting representations. We present work that addresses both these requirements. We present a short overview of representation learning techniques and different structures that can be imposed on representation spaces. We show how these can be applied to complex robotics tasks involving physical interaction with the environment.
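As a minimal illustration of both requirements, the toy sketch below (not from the lecture itself) learns a low-dimensional representation via PCA and evaluates it by explained variance:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 points that live near a 2-D plane embedded in 10-D space
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 10))

# Centre the data and take the top-2 principal directions via SVD
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T          # structured low-dimensional representation, shape (200, 2)

# Evaluation measure: fraction of variance the representation retains
explained = (S[:2] ** 2).sum() / (S ** 2).sum()
```

Here the evaluation measure (explained variance) confirms that two dimensions suffice, because the data were constructed to be nearly planar.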
Topics
Computational neuroscience, Human-Robot Interaction, Cognitive Developmental Robotics
Biography
Dr. Yukie Nagai has been investigating the neural mechanisms underlying social cognitive development by means of computational approaches. She designs neural network models that enable robots to learn to acquire cognitive functions such as self-other cognition, estimation of others’ intention and emotion, and altruism, based on the theory of predictive coding. The simulator reproducing atypical perception in autism spectrum disorder (ASD) developed by her group has had a great impact on society, as it enables people with and without ASD to better understand potential causes of social difficulties. She was named one of the “30 women in robotics you need to know about” in 2019 and one of the “World’s 50 Most Renowned Women in Robotics” in 2020. She has served as the principal investigator of JST CREST “Cognitive Mirroring” since December 2016 and of CREST “Cognitive Feeling” since October 2021.
She has also been a member of the International Research Center for Neurointelligence at the University of Tokyo since 2019, and of the Next Generation Artificial Intelligence Research Center and the Forefront Physics and Mathematics Program to Drive Transformation at the University of Tokyo since 2020.
Lectures
Artificial intelligence has a great potential to uncover the underlying mechanisms of human intelligence. Neural networks inspired by the brain can simulate how humans acquire cognitive abilities and thus reveal what enables/disables cognitive development. My lecture introduces a neuroscience theory called predictive coding. We have been designing neural networks based on predictive coding and investigating to what extent the theory accounts for cognitive development. The key idea is that the brain works as a predictive machine and perceives the world and acts on it to minimize prediction errors. Our robot experiments demonstrate that the process of minimizing prediction errors leads to sensorimotor and social cognitive development and that aberrant predictive processing produces atypical development such as developmental disorders. We discuss how these findings facilitate the understanding of human intelligence and provide a new principle for cognitive development.
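The core idea — an agent acting on the world while updating an internal model to minimize prediction error — can be sketched in a few lines. This is a hypothetical toy, not one of the lecture's actual neural network models:

```python
import numpy as np

rng = np.random.default_rng(1)

def world(action):
    # Hidden sensory consequence of an action, which the agent must predict
    return 3.0 * action

w = 0.0        # internal predictive model: predicted sensation = w * action
lr = 0.1
errors = []
for _ in range(300):
    action = rng.uniform(-1.0, 1.0)   # act on the world
    sensed = world(action)
    error = sensed - w * action       # prediction error
    w += lr * error * action          # update the model to reduce future errors
    errors.append(abs(error))
```

The prediction errors shrink as the internal model comes to mirror the world — the same minimization principle that, at scale, the lecture argues drives sensorimotor and social cognitive development.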
Topics
Global Optimization, Mathematical Modeling, Energy Systems, Financial Applications, and Data Sciences
Biography
Panos Pardalos was born in Drosato (Mezilo) Argitheas in 1954 and graduated from Athens University (Department of Mathematics). He received his PhD (Computer and Information Sciences) from the University of Minnesota. He is a Distinguished Emeritus Professor in the Department of Industrial and Systems Engineering at the University of Florida, and an affiliated faculty member of the Biomedical Engineering and Computer & Information Science & Engineering departments.
Panos Pardalos is a world-renowned leader in Global Optimization, Mathematical Modeling, Energy Systems, Financial applications, and Data Sciences. He is a Fellow of AAAS, AAIA, AIMBE, EUROPT, and INFORMS and was awarded the 2013 Constantin Caratheodory Prize of the International Society of Global Optimization. In addition, Panos Pardalos has been awarded the 2013 EURO Gold Medal prize bestowed by the Association for European Operational Research Societies. This medal is the preeminent European award given to Operations Research (OR) professionals for “scientific contributions that stand the test of time.”
Panos Pardalos has been awarded a prestigious Humboldt Research Award (2018-2019). The Humboldt Research Award is granted in recognition of a researcher’s entire achievements to date – fundamental discoveries, new theories, insights that have had significant impact on their discipline.
Panos Pardalos is also a Member of several Academies of Sciences, and he holds several honorary PhD degrees and affiliations. He is the Founding Editor of Optimization Letters and Energy Systems, and Co-Founder of the Journal of Global Optimization, Computational Management Science, and Springer Nature Operations Research Forum. He has published over 600 journal papers and edited/authored over 200 books. He is one of the most cited authors and has graduated 71 PhD students so far. Details can be found at www.ise.ufl.edu/pardalos
Panos Pardalos has lectured and given invited keynote addresses worldwide in countries including Austria, Australia, Azerbaijan, Belgium, Brazil, Canada, Chile, China, Czech Republic, Denmark, Egypt, England, France, Finland, Germany, Greece, Holland, Hong Kong, Hungary, Iceland, Ireland, Italy, Japan, Lithuania, Mexico, Mongolia, Montenegro, New Zealand, Norway, Peru, Portugal, Russia, South Korea, Singapore, Serbia, South Africa, Spain, Sweden, Switzerland, Taiwan, Turkey, Ukraine, United Arab Emirates, and the USA.
Lectures
We will show how learning problems can be cast into an optimization problem, and we will introduce techniques that can be utilized to search for solutions to the global optimization problem that arises when the most common reformulation is performed.
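One standard technique of this kind is multistart local search: run a local descent from many random initial points and keep the best result. The sketch below is purely illustrative (a made-up nonconvex objective, not an algorithm from the lecture):

```python
import random
from math import cos

def loss(x):
    # Made-up nonconvex objective with many local minima; global minimum at x = 0
    return x * x + 10 * (1 - cos(3 * x))

def local_descent(x, step=1e-3, iters=2000):
    # Plain gradient descent with a finite-difference gradient estimate
    for _ in range(iters):
        g = (loss(x + 1e-6) - loss(x - 1e-6)) / 2e-6
        x -= step * g
    return x

random.seed(0)
starts = [random.uniform(-5, 5) for _ in range(20)]
candidates = [local_descent(x0) for x0 in starts]
# Multistart often, though not always, recovers the global optimum
best = min(candidates, key=loss)
```

Each restart converges to whichever local minimum its basin contains; taking the best over many restarts is what gives the method its (heuristic) global reach.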
Topics
Automatic machine learning
Biography
Joaquin Vanschoren is Assistant Professor in Machine Learning at the Eindhoven University of Technology. His research focuses on machine learning, meta-learning, and understanding and automating learning. He founded and leads OpenML.org, an open science platform for machine learning. He received several demo and open data awards, has been a tutorial speaker at NeurIPS and ECMLPKDD, and an invited speaker at ECDA, StatComp, AutoML@ICML, CiML@NIPS, DEEM@SIGMOD, AutoML@PRICAI, MLOSS@NIPS, and many other occasions. He was general chair at LION 2016, program chair of Discovery Science 2018, demo chair at ECMLPKDD 2013, and he co-organizes the AutoML and meta-learning workshop series at NIPS and ICML. He is also co-editor of the book ‘Automatic Machine Learning: Methods, Systems, Challenges’.
Lectures
Automated machine learning is the science of learning how to build machine learning models in a data-driven, efficient, and objective way. It replaces manual (and often frustrating) trial-and-error with automated, principled processes. It also democratizes machine learning, allowing many more people to build high-quality machine learning systems.
In the first lecture, we will explore the state of the art in automated machine learning. We will cover the best techniques for neural architecture search, as well as for learning complete machine learning pipelines. We will explain how to design model search spaces and how to efficiently search for the best models within them. We’ll also cover useful tips and tricks to speed up the search for good models, as well as pitfalls and best practices.
In the second lecture, we’ll cover techniques to continually learn how to build better machine learning models. Just as human experts get ever better at building better models, automated machine learning systems should also get better every time they run. We’ll cover research on the intersection of automated machine learning, meta-learning, and continual learning that enables us to learn and capture which models work well, and transfer that knowledge to build better machine learning models, faster.
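At its simplest, such a data-driven process is a search over a configuration space scored by validation performance. The sketch below is a hypothetical toy random search, with a made-up `validation_score` standing in for actually training and evaluating models:

```python
import random

# Made-up "validation score" peaking at depth=6, lr=0.1
# (a stand-in for training a model and evaluating it on held-out data)
def validation_score(depth, lr):
    return -((depth - 6) ** 2) - 100 * (lr - 0.1) ** 2

search_space = {
    "depth": list(range(1, 11)),
    "lr": [0.001, 0.01, 0.1, 1.0],
}

random.seed(42)
best_cfg, best_score = None, float("-inf")
for _ in range(50):
    # Sample a random configuration; keep the best-scoring one seen so far
    cfg = {name: random.choice(values) for name, values in search_space.items()}
    score = validation_score(**cfg)
    if score > best_score:
        best_cfg, best_score = cfg, score
```

Real AutoML systems replace this blind sampling with smarter strategies (e.g. Bayesian optimization or meta-learned priors), but the structure — define a search space, score candidates, keep the best — is the same.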
Tutorial Speakers
Each Tutorial Speaker will hold more than five lessons on a specific topic.
Topics
Information Theory, Mathematics for Machine Learning
Biography
Roman Belavkin is a Reader in Informatics at the Department of Computer Science, Middlesex University, UK. He holds an MSc degree in Physics from Moscow State University and a PhD in Computer Science from the University of Nottingham, UK. In his PhD thesis, Roman combined cognitive science and information theory to study the role of emotion in decision-making, learning and problem solving. His main research interests are in the mathematical theory of dynamics of information and the optimization of learning, adaptive and evolving systems. He used information value theory to give novel explanations of some common decision-making paradoxes. His work on optimal transition kernels showed the non-existence of optimal deterministic strategies in a broad class of problems with information constraints.
Roman’s theoretical work on optimal parameter control in algorithms has found applications in computer science and biology. From 2009, Roman led a collaboration between four UK universities, spanning mathematics, computer science and experimental biology, on optimal mutation rate control, which led to the discovery in 2014 of mutation rate control in bacteria (reported in Nature Communications http://doi.org/skb and PLOS Biology http://doi.org/cb9s). He also contributed to research projects on neural cell-assemblies, independent component analysis and anomaly detection, such as the detection of cyber attacks.
Lectures
Topics
Deep Learning, Artificial Intelligence
Biography
Alfredo Canziani is an Assistant Teaching Professor of Computer Science and a Deep Learning Research Scientist at the NYU Courant Institute of Mathematical Sciences, under the supervision of professors Kyunghyun Cho and Yann LeCun. His research mainly focuses on machine learning for autonomous driving. He has been exploring deep policy networks, action uncertainty estimation and failure detection, and long-term planning based on latent forward models, which deal nicely with the stochasticity and multimodality of the surrounding environment. Alfredo obtained both his Bachelor’s (2009) and Master’s (2011) degrees in Electrical Engineering cum laude at Trieste University, his MSc (2012) at Cranfield University, and his PhD (2017) at Purdue University.
Lectures
Introductory course on deep learning methods and algorithms. Prerequisites:
1. Go over the installation instructions at https://github.com/Atcold/pytorch-Deep-Learning
2. Successfully complete https://github.com/Atcold/pytorch-Deep-Learning/blob/master/01-tensor_tutorial.ipynb
T: theory (slides and animations)
P: practice (Jupyter Notebooks)
T) Learning paradigms: supervised, unsupervised, and reinforcement learning
P) Getting started with the tools: Jupyter notebook, PyTorch tensors and auto differentiation
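What auto differentiation does can be illustrated with a tiny scalar reverse-mode sketch — pedagogical only, not PyTorch’s actual implementation:

```python
class Var:
    """A scalar that records how it was computed, so gradients can flow back."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents        # pairs of (parent Var, local gradient)
        self.grad = 0.0

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __add__(self, other):
        # d(a+b)/da = d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def backward(self, upstream=1.0):
        # Chain rule: accumulate upstream gradient, then push it to parents
        self.grad += upstream
        for parent, local in self.parents:
            parent.backward(upstream * local)

x = Var(3.0)
y = Var(2.0)
z = x * y + x      # z = x*y + x, so dz/dx = y + 1 = 3 and dz/dy = x = 3
z.backward()
```

PyTorch’s `autograd` generalizes exactly this bookkeeping to tensors and a much larger operator set.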
T+P) Neural net’s forward and backward propagation for classification and regression
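The forward and backward passes for regression can be sketched by hand in NumPy; this is an illustrative stand-in for the course’s PyTorch notebooks, not code from them:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                  # 64 samples, 3 features
y = X @ np.array([[1.0], [-2.0], [0.5]])      # target the net should regress

W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(1000):
    # Forward pass
    h = np.maximum(0.0, X @ W1 + b1)          # ReLU hidden layer
    y_hat = h @ W2 + b2
    loss = float(np.mean((y_hat - y) ** 2))
    # Backward pass: chain rule, written out by hand
    d_yhat = 2.0 * (y_hat - y) / len(X)
    dW2, db2 = h.T @ d_yhat, d_yhat.sum(axis=0)
    dh = d_yhat @ W2.T
    dh[h <= 0] = 0.0                          # gradient of ReLU
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    # Gradient descent step
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The hand-written backward pass is exactly what `loss.backward()` automates in the PyTorch version.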
T) Latent variable generative energy-based models (LV-GEBMs) part I: foundations
T+P) Convolutional neural nets improve performance by exploiting data nature
T+P) Recurrent nets natively support sequential data
T+P) Self/cross and soft/hard attention: a building block for learning from sets
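Soft attention boils down to a softmax-weighted average of value vectors; a minimal scaled dot-product sketch (illustrative only, not course material):

```python
import numpy as np

def soft_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # each query's weights sum to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))   # 2 queries over a set of...
K = rng.normal(size=(5, 4))   # ...5 keys,
V = rng.normal(size=(5, 3))   # ...each with a 3-dim value
out, weights = soft_attention(Q, K, V)
```

Because the output is a weighted average over the whole set of values, the operation is permutation-equivariant — which is why attention is a natural building block for learning from sets.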
T+P) LV-GEBMs part II: autoencoders, adversarial nets
Topics
Statistical physics for optimization & learning; Machine Learning; Statistical Mechanics; Disordered Systems
Biography
Bruno Loureiro is currently a research scientist at the “Information, Learning and Physics” (IdePHICS) laboratory at EPFL, working at the crossroads of Machine Learning and Statistical Physics. Before moving to EPFL, he was a postdoctoral researcher at the Institut de Physique Théorique (IPhT) in Paris, and he received his PhD from the University of Cambridge. He is interested in Bayesian inference, theoretical machine learning and, more broadly, high-dimensional statistics. His research aims at understanding how data structure, optimisation algorithms and architecture design come together in successful learning.
Lectures
Topics
PyTorch
Biography
Thomas Viehmann is a PyTorch and Machine Learning trainer and consultant. In 2018 he founded the boutique R&D consultancy MathInf, based in Munich, Germany. His work spans from low-level optimizations that enable efficient AI to developing cutting-edge deep-learning models for clients ranging from startups to large multinational corporations. He is a PyTorch core developer with contributions across almost all parts of PyTorch, and co-author of Deep Learning with PyTorch, to appear this summer with Manning Publications. Thomas’ education in computer science included a class in Neural Networks and Pattern Recognition at the turn of the millennium. He went on to do research in pen-and-paper Calculus of Variations and Partial Differential Equations, obtaining a Ph.D. from Bonn University.
Lectures
Introduction.
Introduction.
Optimization.
Optimization.
Deployment.