Paolo Frasconi, DINFO, via di S. Marta 3, 50139 Firenze
email: .
Wednesday, 14:45-16:45.
The course aims to provide an overview of classic and some current deep learning methodologies. It will tentatively cover the following aspects:
You will be able to understand and apply state-of-the-art algorithms and architectures, to understand the relevant methodological details, and to operate according to current practices. Deep learning is a fast-moving field. To be successful in your future career, you will need to develop sufficient skills to competently read and understand a large fraction of the current and future literature (yes, a form of meta-learning). Thus, after successfully completing this course, you should be able to understand, reimplement, and evaluate on your own many novel algorithms, with limited help or guidance from a supervisor.
B031297 - Foundations of Statistical Modeling/Foundations of Statistical Learning is a formal prerequisite. Proficiency in scientific computing with a modern programming language (e.g., NumPy with Python) and familiarity with linear algebra, multivariate calculus, and elementary optimization will be useful.
We will mainly work with papers (see below). Nonetheless, there are some useful books that cover many of the topics discussed in the course:
There is a single oral final exam with an associated project. You can choose the topic of your project, but you should discuss it with me during office hours and I will give you the details of what should be done.
Typically, you will be assigned one or more papers to read and will be asked to work at home to reproduce some (simplified) experimental results or to apply the same method to different data or in a slightly different setting. You are responsible for studying the relevant methodological and theoretical prerequisites of these papers (in some cases, studying the references covered in class may be sufficient, but in other cases, especially when dealing with the details of the experimental procedures, reading other ancestors in the citation graph may be necessary).
There is no need to submit a report for your work, but you are asked to share with me the code (not the data! --- for that a link is sufficient) you have developed, together with some short instructions for reproducing your results. Small zip files can be shared by email (please send a https://0x0.st/ link if the zip is over a MByte), but if you prefer to share a git repository, please create a private one on https://codeberg.org/ and share it with me by inviting the user dl.unifi as a member.
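For example, a zip archive can be uploaded to 0x0.st directly from Python. This is only a minimal sketch, assuming the service's usual multipart-form upload interface (field name `file`); the archive name `project.zip` is a hypothetical placeholder, and you should check the service's front page for current instructions and size limits.

```python
# Sketch: upload a small zip archive to 0x0.st with the `requests` library.
# Assumes the standard multipart upload field "file"; "project.zip" is a
# hypothetical file name.
import requests

with open("project.zip", "rb") as f:
    response = requests.post("https://0x0.st", files={"file": f})

response.raise_for_status()
print(response.text.strip())  # the returned link, to be pasted into your email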
You will be required to give a short presentation during the exam. Please ensure that during your presentation you introduce and motivate the problem being addressed in the context of the relevant literature, explain the technical derivation of the methods, and describe in detail the experimental work and the results. You are allowed (but not required) to use multimedia tools to prepare your presentation. You should be prepared to answer general questions about the background literature supporting your paper(s) (for example if the method uses an optimizer, which happens with overwhelming probability, you are supposed to know how it works) and about the details of your experimental work.
You can work in groups of two to carry out the experimental work (three is an exceptional number that you must motivate clearly). If you do so, please ensure that personal contributions to the overall work are clearly identifiable. In any case, during the exam you will have to answer questions individually.
Relevant papers and/or sections of the textbook(s) are listed on the right side. [Sections of] papers in the "required" list have been covered in class and should be studied while preparing for the exam. Papers listed as "optional" may be useful to get a better picture of the class topic but you do not need to study them, unless they are directly related to the topic of your project.
Date | Topics | Readings/Handouts |
---|---|---|
2022-09-21 | Administrivia. Overview of the course. | |
2022-09-22 | Limitations of feature engineering. Representation learning. Biologically inspired features. Sparse coding and self-taught learning. Deep belief networks. | |
2022-09-28 | Multilayered perceptrons. Computational graphs. Activation functions. Stacked RBMs and denoising autoencoders. | |
2022-09-29 | Expressive power of MLPs. Boolean functions. Arbitrary functions. Benefits of depth. | |
2022-09-29 | Loss functions, empirical error, maximum likelihood. | |
2022-10-05 | Canonical form of exponential family distributions. Canonical links and response functions, loss functions, gradients. Examples: Gaussian, Bernoulli, Poisson, Categorical. | |
2022-10-06 | Categorical cross-entropy and the log-sum-exp trick. Automatic differentiation in forward and reverse mode. | |
2022-10-12 | Optimization for deep learning. Stochastic gradient descent and the tradeoffs of large-scale learning. | |
2022-10-13 | Weight initialization. Local minima and saddle points. Momentum and Nesterov accelerated gradient. Adagrad. RMSProp. | |
2022-10-19 | Adam and AMSGrad. Explicit regularization by penalties. Effects of ridge and L1 regularizers. Early stopping compared to L2. | |
2022-10-20 | Noise injection. Dropout. Adaptive regularization effect. More activation functions (SiLU, Swish, GELU). Data augmentation. | |
2022-10-26 | Normalizers. Data standardization. Batch normalization and re-normalization. Weight normalization. Self-normalizing networks. | |
2022-10-27 | Convolutions and convolutional networks for N-d signals. Basic concepts and some variants. | |
2022-11-02 | Computing convolutions. Pooling and strides. Bottlenecks (1x1 convolutions). Basic blocks in VGG and Inception. Transposed convolutions. Fully convolutional networks, U-Net. | |
2022-11-03 | Normalization for CNNs (batch, layer, instance, group). Gates. Mixtures of experts. Skip connections: Highway and residual networks. | |
2022-11-03 | Building and training models in TensorFlow and Keras. | |
2022-11-09 | Sequence learning tasks with examples. Recurrent neural networks as dynamical systems. Main architecture and bidirectional RNNs. | |
2022-11-10 | Vanishing gradients in RNNs. Architectural variants and stacking RNN layers. RNNs with gates (gated recurrent units, long short-term memories). | |
2022-11-16 | Language models: from probabilistic models to neural networks. Optimal decoding with beam search. | |
2022-11-17 | Recurrent language models. Encoder-decoder architecture for sequence-to-sequence learning. Neural Turing machines. Attention mechanisms for neural machine translation. | |
2022-11-23 | Attention and hierarchical attention for sequence classification. End-to-end memory networks and their application to question answering. | |
2022-11-23 | Transformers. Graph neural networks, graph convolutional networks, graph attention networks. | |
2022-11-30 | Hyperparameter optimization. Definitions and elementary algorithms. | |
2022-12-01 | Building and training models in PyTorch (CNN, LSTM, attention). | |
2022-12-01 | Bayesian (model-based) optimization for tuning hyperparameters. | |
2022-12-07 | Multi-fidelity approaches to hyperparameter optimization. Successive halving. Hyperband. Gradient-based approaches. | |
2022-12-14 | The inadequacy of classic learning theory to understand deep learning. Loss surfaces in overparameterized systems. Double descent. | |
2022-12-15 | Implicit bias of gradient descent. Neural collapse. Some settings beyond single task: multi-task, transfer, meta, self-supervised, contrastive learning. Some multi-task architectures. Transfer via fine-tuning: ImageNet, BERT, T5. | |
Full text of linked papers is normally accessible when connecting from a UNIFI IP address. Use proxy-auth.unifi.it:8888 (with your credentials) if you are connecting from outside the campus network.
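For example, assuming proxy-auth.unifi.it:8888 behaves as a standard authenticated HTTP proxy, a paper could be fetched from Python roughly as sketched below; the credential placeholders and the paper URL are hypothetical, and configuring the proxy in your browser or operating system settings works just as well.

```python
# Sketch: fetch a linked paper through the UNIFI proxy with the `requests` library.
# Assumes proxy-auth.unifi.it:8888 is a standard authenticated HTTP proxy;
# USERNAME, PASSWORD, and the paper URL are hypothetical placeholders.
import requests

proxies = {
    "http": "http://USERNAME:PASSWORD@proxy-auth.unifi.it:8888",
    "https": "http://USERNAME:PASSWORD@proxy-auth.unifi.it:8888",
}

url = "https://example.org/some-paper.pdf"  # hypothetical paper URL
response = requests.get(url, proxies=proxies)
response.raise_for_status()

with open("paper.pdf", "wb") as f:
    f.write(response.content)
```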