B024317 - Machine Learning - Fall 2021

MSc degree in Computer Engineering, University of Florence

Contacts

Paolo Frasconi, DINFO, via di S. Marta 3, 50139 Firenze

email: .

Office Hours

Wednesday, 10:45-12:45.

Until further notice, office hours will be held via teleconference. Please write to me on the day before and I will reply with a tentative meeting time.

Learning Objectives

The course covers some of the most important aspects of modern machine learning.

You will be able to understand, design, and apply several machine learning techniques for supervised learning and (to a lesser extent) for unsupervised learning. You will be able to reproduce some of the innovative solutions described in the recent literature and apply them to closely related problems. The course will serve as a foundation for further study in the many engineering and scientific areas where state-of-the-art solutions are based on (deep) learning algorithms.

Prerequisites

Good knowledge of a programming language (preferably Python), and a solid background in mathematics (calculus, linear algebra, and probability theory) are necessary prerequisites to this course. Previous knowledge of fundamental ideas in supervised learning, probabilistic graphical models, optimization and statistics would be very useful but not strictly necessary.

Suggested readings

[GBC16]
I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016 (free PDF).
[HTF09]
T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd edition. Springer, 2009 (free PDF).
[A18]
Charu C. Aggarwal. Neural Networks and Deep Learning. Springer, 2018 (free PDF from Unifi IP).
[B06]
Chris Bishop. Pattern Recognition and Machine Learning. Springer, 2006 (free PDF).
[SSBD14]
Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014 (free PDF).
[MRT18]
Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. 2nd edition. MIT Press, 2018 (free PDF).
[W13]
L. Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer Science & Business Media, 2013 (very useful if you need to improve your general background in statistics).
[B12]
D. Barber. Bayesian Reasoning and Machine Learning. Cambridge University Press, 2012 (free PDF).
[D17]
Hal Daumé III. A Course in Machine Learning. 2017 (free PDF).
Misc
Additional reading materials (papers or book chapters) are often listed on the side of each lecture.

Assessment

9 credits:

There is a single oral final exam. You can choose the exam topic, but you are strongly advised to discuss it with me before you begin working on it. Typically, you will be assigned a set of papers to read and will be asked to reproduce some experimental results. You will be required to give a short (30 min) presentation during the exam. Please ensure that your presentation includes an introduction to the problem being addressed, a brief review of the relevant literature, a technical derivation of the methods, and, if appropriate, a detailed description of the experimental work. You are allowed to use multimedia tools to prepare your presentation. You are responsible for understanding all the relevant concepts, the underlying theory, and the necessary background, which you will usually find in the textbooks.

You can work in groups of two to carry out the experimental work (groups of three are exceptional and must be clearly motivated). If you do so, please ensure that the personal contributions to the overall work are clearly identifiable.

6 credits:

Same as above, except that topics are limited to those covered in the first two thirds of the course, and you will not be asked to reimplement the methods or to reproduce experimental results.

Schedule and reading materials

For videos please go to the Moodle-WebEx connector (UniFI credentials required).

Date Topics Readings/Handouts
2021-09-20 Administrivia. Introduction to Machine Learning and the fundamental paradigms.
  • HTF09 1, 2.1, 2.2.
2021-09-22 Linear regression as a supervised learning problem. Loss functions for regression. Ordinary least squares. Unbiasedness. Gauss-Markov theorem.
  • HTF09 3.1, 3.2
2021-09-23 Bias-variance decomposition. Regularization. Ridge regression.
  • B06 3.1, 3.2; HTF09 3.4.1, 7.1, 7.2, 7.3
2021-09-27 Lasso. Regularization paths. Maximum likelihood principle. MLE and KL-Divergence. Very short introduction to Bayesian learning. Ridge regression and Lasso as MAP learning.
  • B12 Ch. 8; HTF09 4.1-4.4
2021-09-29 Classification. Fisher (linear) discriminant analysis. Bayes optimal classifier. Limitations of linear discriminant analysis. Discriminant vs. generative classifiers. Motivation for the logistic function.
  • HTF09 4.4, 6.6.3; A18 2.2.3; B06 4.1.4, 4.2, 4.3.2
2021-09-30 Logistic regression and log-loss (cross-entropy loss). Gradient computation for logistic regression. Optimization. Newton's method. Discriminative/generative conjugate pairs. Learning curves.
  • HTF09 4.4, 6.6.3; A18 2.2.3
2021-10-04 Generalized linear models. Logistic regression and least squares regression as special cases. Softmax regression.
  • GBC16 6.2
2021-10-06 Gradient calculations for softmax regression. Poisson regression. Maximum margin hyperplane. Dual form of the maximum-margin hyperplane optimization problem. KKT conditions and support vectors. Soft constraints (support vector machine). Dual SVM problem. Hinge loss.
  • GBC16 6.2.2.3; HTF09 4.5
2021-10-07 Kernel methods. Feature maps. Polynomial and RBF kernels. Mercer's theorem. Closure properties of kernels. Representer theorem. Kernel ridge regression.
  • HTF09 5.8
2021-10-11 Multiclass SVM. Support vector regression. Multi-instance learning.
  • HTF09 5.8
2021-10-13 Kernel density estimation. Novelty detection and the ν-trick. Learning curves. Qualitative behavior of the excess error and its decomposition.
2021-10-14 Classic learning theory. PAC learning. Agnostic learning and bounds for the estimation error. Learning with infinite function classes: VC-dimension, VC bounds.
2021-10-18 No free lunch theorem. Weak learners. Boosting. Adaboost.
  • HTF09 10.1, 10.4
2021-10-20 Analysis of Adaboost and exponential loss. Boosting decision stumps. Example, face detection. Bagging. Random forests.
  • HTF09 10.1, 10.4, 15
2021-10-21 Gradient boosting. Introduction to representations and representation learning.
  • HTF09 10; GBC16 5.11, 13.4
2021-10-28 Artificial neurons and their biological inspiration. Activation functions. Expressiveness of shallow and deep neural networks. Universality for Boolean functions. Universal approximation.
  • GBC16 6.1, 6.3, 6.4; A18 Ch. 1.
2021-11-04 VC-dimension of neural networks. Training of neural networks, losses and objective function. Gradient computations. Algorithmic differentiation (forward mode).
  • GBC16 6.5, 8.1, 8.2, 8.3.1
2021-11-10 Reverse mode AD and Backpropagation. Gradient descent and stochastic gradient descent. Tradeoffs under a time-budget constraint.
  • GBC16 8
2021-11-11 Momentum. Adagrad. RMSProp. Adam. Gradient clipping. Global minima and non-convexity of overparameterized systems.
  • GBC16 8.5, 10.11.1; A18 3.5
2021-11-15 Overparameterized systems. Polyak-Lojasiewicz condition and tangent kernel. Convergence of GD. Implicit regularization.
2021-11-18 Regularizers (weight decay, dropout, data augmentation). Batch normalization. Convolutions.
2021-11-22 Convolutional networks. N-d signals. Variants of the convolutional operator (strides, dilation, transposed). Pooling. Modules (subnetworks). Highway and residual networks. U-net.
  • GBC16 9.
2021-11-25 Sequence learning. Overview of problems and methods. Recurrent networks. Latching. Vanishing gradients. Gated recurrent units. Long short-term memory networks.
  • GBC16 10
2021-11-29 Temporal convolutional networks. Time-delayed neural networks. Attention mechanisms. Recurrent encoder-decoder with attention.
  • GBC16 10
2021-12-06 Attention and self-attention layers. Multi-headed attention. Transformers. Introduction to hyperparameter optimization. Grid and random search. Sequential model-based optimization.
2021-12-13 Bayesian optimization for hyperparameter optimization. Gaussian Processes. Expected improvement. Examples. Supervised learning on graphs. Graph convolutional networks.

Note

Full text of linked papers is normally accessible when connecting from a UNIFI IP address. Use the proxy proxy-auth.unifi.it:8888 (with your credentials) if you are connecting from outside the campus network.
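
If you prefer to fetch papers from a script rather than configuring the proxy in your browser or operating system, here is a minimal Python sketch (not part of the course material). It assumes the proxy accepts basic authentication with your UNIFI credentials; the paper URL below is a placeholder.

    # Hypothetical example: download a paper through the UNIFI proxy from off campus.
    # Assumes the proxy accepts basic authentication with your UNIFI credentials.
    import requests

    # Replace username/password with your own UNIFI credentials.
    PROXY = "http://username:password@proxy-auth.unifi.it:8888"

    response = requests.get(
        "https://link.springer.com/some-paper.pdf",  # placeholder paper URL
        proxies={"http": PROXY, "https": PROXY},
        timeout=30,
    )
    response.raise_for_status()
    with open("paper.pdf", "wb") as f:
        f.write(response.content)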