Paolo Frasconi, DINFO, via di S. Marta 3, 50139 Firenze
email: .
Friday 11:15-13:15, Monday 10:45-12:45 (S. Marta) —
Check here for any variations
The course will introduce problems and basic algorithmic techniques in the following areas:
You will be able to design and apply simple search agents and solvers for constraint satisfaction problems, understand and apply simple probabilistic models, and solve basic data-driven classification problems. The course will serve as a foundation for further study in AI, as well as in engineering areas where AI is becoming predominant.
B003368 (Algorithms and Data Structures)
There is a final project and a final oral exam.
The final project will be assigned after a short discussion during office hours. Doing this by email would often take several iterations, so please do not ask for a project by email. You have some freedom in choosing the area and the modality of your project, but not in choosing the exact problem. The project typically consists of the application of AI techniques to simple real problems, or it may involve the implementation and verification of algorithmic techniques described in the course. You should be ready to answer practical and theoretical questions about your project during the oral exam.
The oral exam covers the rest of the course. During the exam, you should demonstrate both theoretical and practical mastery of the methods and algorithmic techniques described in the course.
Relevant sections of the textbook(s) are listed on the right side. You are expected to study these sections thoroughly and be able to discuss their contents during the exam. The listed additional materials (usually papers) need to be studied only when marked as "required" (unless they are relevant to your project work, in which case you need to study them thoroughly).
| Date | Topics | Readings/Handouts |
|---|---|---|
| 2025-09-17 | Administrivia. Main topics in current AI and what will be covered in this course. | |
| 2025-09-19 | Agents and environments. Search problems. States, actions, costs. Search graph. Search tree. General best-first algorithm. | |
| 2025-09-24 | Blind search algorithms: DFS, BFS, Uniform Cost. Analysis. Iterative deepening. Bidirectional search. | |
| 2025-09-26 | Heuristics. Algorithm A* and conditions for optimality. Dominance and empirical evaluation of heuristics. Pattern databases. | |
| 2025-10-01 | Searching and optimizing goal states. Metaheuristics for local search: hill climbing, local beam search, simulated annealing. Constraint satisfaction problems. | |
| 2025-10-03 | Beam search. Examples of CSPs. Inference for CSPs (constraint propagation): arc consistency, AC-3 and its analysis. | |
| 2025-10-08 | Backtracking search. Heuristics for variable and value ordering. Inference during backtracking: maintaining arc consistency and forward checking. Cutset conditioning. Dual problems and cluster trees. | |
| 2025-10-10 | No class today. | |
| 2025-10-15 | Junction trees and triangulated graphs. Cliques in triangulated graphs. Triangulation via variable elimination. Introduction to logic. Knowledge bases. Formulae. Syntax, semantics, interpretations. | |
| 2025-10-17 | Constraint modeling with MiniZinc and CPMpy. | |
| 2025-10-17 | Ontological and epistemological commitments of a logic. Semantics. Interpretation functions. Propositional logic and its semantics. Entailment and its decidability. | |
| 2025-10-22 | Propositional formulae. Valid and satisfiable formulae. Logical equivalence. Deduction theorem. Decidability of propositional logic. Reduction of entailment to SAT. Introduction to SAT solving. Conjunctive normal form. | |
| 2025-10-24 | SAT and the DPLL algorithm. Randomized algorithms. WalkSAT. Randomized k-SAT and phase transition. Introduction to theorem proving. | |
| 2025-10-29 | Theorem proving. Inference rules and theorem proving as search. Soundness and completeness. Resolution. Ground resolution theorem. | |
| 2025-10-31 | Definite clauses. Backward chaining and main ideas behind logic programming. Limitations of logic under uncertainty. Probability distributions on discrete spaces. Introduction to probabilistic inference. Computing conditional probabilities by marginalization. Hugin. | |
| 2025-11-05 | Beliefs. Evidence and probabilistic reasoning. Independence and conditional independence. Using graphs to represent conditional independence. Markov networks and undirected separation. | |
| 2025-11-07 | Examples of Markov networks. Markov chains. Language models. Hidden Markov models. I-maps, D-maps, perfect maps. Hammersley-Clifford theorem. Potential functions, normalized exponentials and clique features. Directed probabilistic graphical models. Factorization of joint distributions according to a directed graph. D-separation and conditional independence in directed networks. | |
| 2025-11-12 | Review of D-separation. Conditional probability tables. Expressiveness of Bayesian and Markov networks. Algorithmic problems for graphical models. Moralization. Junction trees for probabilistic inference. Initialization, message passing, inserting evidence. | |
| 2025-11-14 | Application example: POS-tagging with HMMs. Sketch of the Viterbi algorithm with beam search. Parameter learning in graphical models. Problem setup and assumptions. Maximum likelihood estimation. | |
| 2025-11-19 | Frequentist (maximum likelihood) and Bayesian parameter learning. Beta distribution. Bayesian conjugacy. Laplace smoothing. Dirichlet priors. Sketch of the structure learning algorithm for Bayesian networks. | |
| 2025-11-21 | Supervised learning. Examples. Empirical risk minimization. Generalization. Train and test error. Naive Bayes and its discrimination surface. Hyperplanes and the perceptron algorithm. | |
| 2025-11-26 | The separation surface of Naive Bayes (more details). Linearly and nonlinearly separable data. Block-Novikoff theorem (with proof). Dual form of the perceptron algorithm and its interpretation. Basic ideas behind kernel methods. | |
| 2025-11-28 | No class today. | |
| 2025-12-03 | No class today. | |
| 2025-12-05 | Decision trees for classification. Greedy top-down algorithm. Impurity measures. Hyperparameters. The decision tree hypothesis space. Bias-variance tradeoff. Bagging. Random forests. | |
| 2025-12-10 | Adaboost and its analysis. | |
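As a small illustration of the search unit above, here is a minimal sketch of A* best-first search on an explicit weighted graph. The graph, the heuristic values, and the function names are invented for this example; they are not taken from the course materials, and the sketch omits refinements discussed in class (e.g. pattern databases).

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Minimal A*: `neighbors(n)` yields (successor, step_cost) pairs and
    `h` is an admissible heuristic. Returns (cost, path), or None if the
    goal is unreachable."""
    # Frontier entries are (f = g + h, g, node, path); heapq pops lowest f.
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}  # cheapest known cost to each expanded node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if g > best_g.get(node, float("inf")):
            continue  # stale entry: a cheaper path to `node` was found later
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), g2, succ, path + [succ]))
    return None

# Toy example: shortest route from A to D on a small weighted graph.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
         "C": [("D", 1)], "D": []}
heuristic = {"A": 3, "B": 2, "C": 1, "D": 0}  # admissible estimates to D
cost, path = a_star("A", "D", lambda n: graph[n], lambda n: heuristic[n])
# cost == 4, path == ["A", "B", "C", "D"]
```

With an admissible (never overestimating) heuristic, the first time the goal is popped from the frontier its cost is optimal, which is why the function can return immediately at that point.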
Full text of linked papers is normally accessible when connecting from a UNIFI IP address.
Use proxy-auth.unifi.it:8888 (with your credentials) if you are connecting from outside the campus network.