B024317 - Machine Learning - Fall 2020

MSc degree in Computer Engineering, University of Florence

Contacts

Paolo Frasconi, DINFO, via di S. Marta 3, 50139 Firenze

email: (please do not use my @unifi.it address: it was forcibly moved to Gmail by the central administration and has all sorts of problems; messages may be answered with a delay or not at all).

Office Hours

Tuesday, 10:45-12:45

Until further notice, office hours will be held on Skype. Please contact me with your Skype ID on the day before and I will reply with a tentative meeting time.

Learning Objectives

In this class you will learn about some fundamental statistical learning algorithms and a number of deep learning techniques. You will learn about the basics of computational learning theory, and will be able to design state-of-the-art solutions to application problems. Broad topics that are covered include: Generalized linear models, kernel methods, ensemble techniques and boosting, core deep learning methodologies, sequence learning and recurrent networks, relational learning.

Prerequisites

A good knowledge of a programming language (preferably Python), and a solid background in mathematics (calculus, linear algebra, and probability theory) are necessary prerequisites to this course. Previous knowledge of fundamental ideas in supervised learning, probabilistic graphical models, optimization and statistics would be very useful but not strictly necessary.

Suggested readings

Textbooks

[GBC16] I. Goodfellow, Y. Bengio, A. Courville. Deep Learning. MIT Press, 2016 (free PDF).
[A18] C. C. Aggarwal. Neural Networks and Deep Learning. Springer, 2018 (free PDF from a Unifi IP address).
[HTF09] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Data Mining, Inference, and Prediction. 2nd edition. Springer, 2009 (free PDF).
[B12] D. Barber. Bayesian Reasoning and Machine Learning. Cambridge University Press, 2012.

Other texts

[W13] L. Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer Science & Business Media, 2013 (very useful if you need to improve your general background in statistics).
[B06] C. Bishop. Pattern Recognition and Machine Learning. Springer, 2006 (free PDF).
[SSBD14] S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014 (free PDF).
[MRT18] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. 2nd edition. MIT Press, 2018 (free PDF).
[D17] H. Daumé III. A Course in Machine Learning. 2017 (free PDF).

Assessment

9 credits:

There is a single oral final exam. You can choose the exam topic, but you are strongly advised to discuss it with me before you begin working on it. Typically, you will be assigned a set of papers to read and will be asked to reproduce some experimental results. You will be required to give a short (30 min) presentation during the exam. Please ensure that your presentation includes an introduction to the problem being addressed, a brief review of the relevant literature, a technical derivation of the methods, and, if appropriate, a detailed description of the experimental work. You are allowed to use multimedia tools to prepare your presentation. You are responsible for understanding all the relevant concepts, the underlying theory, and the necessary background, which you will usually find in the textbooks.

You can work in groups of two to carry out the experimental work (groups of three are exceptional and must be clearly motivated). If you do so, please ensure that individual contributions to the overall work are clearly identifiable.

6 credits:

Same as above, except that topics are limited to those covered in the first two thirds of the course and you will not be asked to reimplement the methods or to reproduce experimental results.

Schedule and reading materials

For videos please go to the Moodle-WebEx connector (UniFI credentials required).

Date Topics Readings/Handouts
2020-09-21 Administrivia. Introduction to the discipline of Machine Learning. Supervised learning.
  • HTF09 1, 2.1, 2.2.
2020-09-22 Linear regression as a supervised learning problem. Loss functions for regression. Ordinary least squares and its statistical analysis. Gauss-Markov theorem.
  • HTF09 3.1, 3.2
2020-09-25 Bias-variance decomposition. Regularization. Ridge regression. Lasso. Regularization paths. Maximum likelihood principle.
  • B06 3.2; HTF09 3.4, 7.1, 7.2, 7.3, 8.2.2; B12 Ch. 8.
2020-09-28 MLE and KL-Divergence. Very short introduction to Bayesian learning. Ridge regression and Lasso as MAP learning.
  • B12 Ch. 8, HTF09 4.1-4.4
2020-09-28 Practice on ridge regression, tuning the ridge regularizer, bias-variance decomposition.
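  For reference, a minimal sketch of this kind of exercise, assuming scikit-learn and a synthetic dataset (not the official lab material):

    # Illustrative only: ridge regression with the regularizer tuned by
    # cross-validation on a synthetic regression problem.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import GridSearchCV, train_test_split

    X, y = make_regression(n_samples=200, n_features=30, noise=10.0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Search over the ridge regularizer alpha (the lambda of the lectures).
    search = GridSearchCV(Ridge(), {"alpha": np.logspace(-3, 3, 13)}, cv=5)
    search.fit(X_train, y_train)
    print("best alpha:", search.best_params_["alpha"])
    print("test R^2:", search.best_estimator_.score(X_test, y_test))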
2020-09-29 Classification. Fisher (linear) discriminant analysis. Bayes optimal classifier. Limitations of linear discriminant analysis. Discriminant vs. generative classifiers. Motivation for the logistic function. Logistic regression and log-loss (cross-entropy loss).
  • HTF09 4.4, 6.6.3; A18 2.2.3; B06 4.1.4, 4.2, 4.3.2
2020-10-02 Gradient computation for logistic regression. Optimization with the Newton method. Naive Bayes classifier (Bernoulli/Gaussian). Naive Bayes and logistic regression are a discriminative/generative conjugate pair. Learning curves.
  • HTF09 4.4, 6.6.3; A18 2.2.3; B12 10.1, 10.2
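  To make the lecture concrete, a small NumPy sketch of the Newton (IRLS) update for logistic regression; this is an illustration written for these notes, not course code, and the data are made up:

    # Illustrative only: Newton-Raphson (IRLS) for logistic regression.
    import numpy as np

    def newton_logreg(X, y, n_iter=10):
        """Fit logistic-regression weights by repeated Newton steps."""
        X = np.hstack([np.ones((len(X), 1)), X])    # prepend a bias column
        w = np.zeros(X.shape[1])
        for _ in range(n_iter):
            p = 1 / (1 + np.exp(-X @ w))            # predicted probabilities
            grad = X.T @ (p - y)                    # gradient of the log-loss
            H = X.T @ (X * (p * (1 - p))[:, None])  # Hessian of the log-loss
            w -= np.linalg.solve(H, grad)           # Newton step
        return w

    # Toy usage on noisy synthetic data (the noise keeps the MLE finite).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X @ np.array([1.5, -2.0]) + rng.normal(size=200) > 0).astype(float)
    print(newton_logreg(X, y))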
2020-10-05 Generalized linear models. Logistic regression and least squares regression as special cases. Softmax regression. Gradient calculations.
  • S18 3.2, 4.4, 7.1; GBC16 6.2.2.3
2020-10-05 Practice on cross-entropy and learning curves
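  A minimal sketch of the kind of learning-curve experiment covered in the session, assuming scikit-learn and a synthetic classification dataset (not the official lab notebook):

    # Illustrative only: cross-entropy (log-loss) of logistic regression as a
    # function of the training set size, i.e. a simple learning curve.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import log_loss
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for m in (50, 100, 200, 500, 1000, len(X_tr)):
        clf = LogisticRegression(max_iter=1000).fit(X_tr[:m], y_tr[:m])
        train_ce = log_loss(y_tr[:m], clf.predict_proba(X_tr[:m]))
        test_ce = log_loss(y_te, clf.predict_proba(X_te))
        print(f"m={m:4d}  train CE={train_ce:.3f}  test CE={test_ce:.3f}")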
2020-10-06 Maximum margin hyperplane as a constrained optimization problem. Ordinary convex problems and Karush-Kuhn-Tucker theory. Dual form for the maximum-margin hyperplane. KKT conditions and support vectors. Soft constraints (support vector machine). Dual SVM problem. Hinge loss.
  • GBC16 6.2.2.3; HTF09 4.5
2020-10-09 Recap on SVM. Kernel methods. Feature maps. Polynomial and RBF kernels. Mercer's theorem.
  • HTF09 5.8
2020-10-12 More on kernel methods. Representer theorem. Kernel ridge regression. Support vector regression. Reproducing kernel Hilbert spaces. Multiclass SVM.
  • HTF09 5.8
2020-10-13 No class today
2020-10-16 Learning theory. PAC learning. Agnostic learning and bounds for the estimation error. Learning with infinite function classes: VC-dimension, VC bounds.
2020-10-19 No class today
2020-10-20 Weak learners. Boosting. Adaboost. Boosting decision stumps. Analysis of Adaboost and exponential loss.
  • HTF09 10.1, 10.4
2020-10-23 Practice on boosted decision stumps
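  As a rough idea of the exercise, a sketch assuming scikit-learn and a synthetic dataset (not the official lab notebook):

    # Illustrative only: AdaBoost over decision stumps. scikit-learn's
    # AdaBoostClassifier uses depth-1 trees (stumps) as its default weak learner.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    for n_stumps in (1, 10, 50, 200):
        booster = AdaBoostClassifier(n_estimators=n_stumps, random_state=0)
        acc = cross_val_score(booster, X, y, cv=5).mean()
        print(f"{n_stumps:4d} stumps: cross-validated accuracy = {acc:.3f}")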
2020-10-23 Bagging. Random forests. Out-of-bag estimate of the prediction loss. Attribute relevance. CART. Introduction to additive models and gradient boosting.
  • HTF09 9.2.2, 10.3-10.10, 10.12.1, 15
2020-10-26 Gradient boosting. Introduction to representations and representation learning.
  • GBC16 5.11, 13.4
2020-10-31 Artificial neurons and their biological inspiration. Expressiveness of shallow and deep neural networks. Universality for Boolean functions. Universal approximation. Activation functions. VC dimension. Defining the optimization problem for learning.
  • GBC16 Ch. 6; A18 1.5.
2020-11-02 Maximum likelihood training of neural networks. Gradient computations. Algorithmic differentiation (forward and reverse mode). Backpropagation. The structure of the optimization problem for neural networks. Saddle points.
  • GBC16 6.5, 8.1, 8.2, 8.3.1
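  To illustrate the gradient computations, a small NumPy sketch of manual backpropagation for a one-hidden-layer network with a logistic output and cross-entropy loss (illustration only; the network, data, and constants are made up for these notes):

    # Illustrative only: forward pass and manual backpropagation for a tiny
    # network, trained by plain gradient descent on a synthetic problem.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(float)       # a simple nonlinear target

    W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

    for step in range(2000):
        # Forward pass
        h = np.tanh(X @ W1 + b1)
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))        # sigmoid output
        # Reverse pass: gradients of the mean cross-entropy loss
        d2 = (p - y[:, None]) / len(X)              # error at the output layer
        gW2, gb2 = h.T @ d2, d2.sum(0)
        d1 = (d2 @ W2.T) * (1 - h ** 2)             # chain rule through tanh
        gW1, gb1 = X.T @ d1, d1.sum(0)
        # Gradient descent step
        for param, grad in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
            param -= 0.5 * grad

    print("training accuracy:", ((p[:, 0] > 0.5) == y).mean())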
2020-11-03 No class today
2020-11-06 Tradeoffs of large scale learning. Stochastic gradient descent. Weight initialization for neural networks. Momentum. Nesterov accelerated gradient.
  • GBC16 8
2020-11-09 No class today
2020-11-10 Adagrad. RMSProp. Adam. Gradient clipping. Batch Normalization. Effects of L2 regularization. Early stopping. Dropout.
  • GBC16 8.5, 8.7, 7.1, 7.8, 7.12; A18 3.5, 4.4, 4.6
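  For concreteness, the Adam update rule in a few lines of NumPy (hyperparameters set to the usual defaults; illustration only, not course code):

    # Illustrative only: one Adam step; m and v are running moment estimates.
    import numpy as np

    def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        m = beta1 * m + (1 - beta1) * grad          # first moment (momentum)
        v = beta2 * v + (1 - beta2) * grad ** 2     # second moment (scaling)
        m_hat = m / (1 - beta1 ** t)                # bias correction
        v_hat = v / (1 - beta2 ** t)
        return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

    # Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
    w = np.array([3.0, -2.0]); m = np.zeros_like(w); v = np.zeros_like(w)
    for t in range(1, 2001):
        w, m, v = adam_step(w, 2 * w, m, v, t, lr=0.05)
    print(w)                                        # close to the minimizer [0, 0]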
2020-11-13 No class today
2020-11-16 Practice on Tensorflow (v1 and v2) and Keras.
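  A minimal standalone Keras (TF 2.x) sketch in the spirit of the session; the dataset and architecture are illustrative choices, not the lab's actual code:

    # Illustrative only: a small fully connected classifier on MNIST in tf.keras.
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, validation_split=0.1)
    print(model.evaluate(x_test, y_test))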
2020-11-20 Convolutional networks. Variants of the convolutional operator. Pooling. Modules (subnetworks). Highway networks. Residual networks. Densely connected networks.
  • GBC16 9, A18 8.2, 8.4
2020-11-23 Sequence learning. Overview of problems and methods. Hidden Markov models. Baum-Welch procedure. Viterbi decoding. Brief introduction to conditional random fields.
  • B12 Ch. 23
2020-11-24 The expectation-maximization algorithm. Mixture models. Introduction to recurrent neural networks.
  • HTF09 8.5; B12 11.1, 11.2, 12; GBC16 19.2, 10.1, 10.2
2020-11-27 Recurrent networks. Vanishing gradients. Gated recurrent units. Attention mechanisms. Recurrent encoder-decoder with attention. Hierarchical attention.
  • GBC16 10
2020-12-01 Multi-headed attention mechanism. Transformer networks. Introduction to generative models and autoencoders.
2020-12-04 Variational autoencoders. Brief introduction to generative adversarial networks. Hyperparameter Optimization. Grid and random search. Sequential model-based and Bayesian approaches.

Note

Full text of linked papers is normally accessible when connecting from a UNIFI IP address. Use the proxy proxy-auth.unifi.it:8888 (with your credentials) if you are connecting from outside the campus network.