In this module, we will introduce the course by delving into the philosophical underpinnings of artificial intelligence, drawing on the work of important thinkers from Descartes to Alan Turing. We’ll also look at how science fiction often foretells the future of artificial intelligence, including examples of AI from hit 1970s and 1980s films that, decades later, have become reality. This sets up the key considerations we’ll weigh when designing our own AI systems and deciding how they should behave: should they act like humans, think like humans, or act and think rationally?
Intro: Module 1 [Video]
Lesson 1: Artificial Intelligence in Philosophy [Video]
Lesson 2: Brain in a Vat Thought Experiment (YouTube) [Video]
Lesson 3: Artificial Intelligence in Science Fiction [Video]
Lesson 4: Intelligent Agents [Video]
Lesson 5: Task Environments [Video]
Slides: [Slides]
Readings:
Russell and Norvig, AIMA 4th Edition, Chapter 1 “Introduction”, Section 1.1 “What is AI”
Russell and Norvig, AIMA 4th Edition, Chapter 27 “Philosophical Foundations”, Sections 27.1 “The Limits of AI” and 27.2 “Can Machines Really Think?”
Russell and Norvig, AIMA 4th Edition, Chapter 2 “Intelligent Agents” (whole chapter)
Optional: Herbert A. Simon, A Behavioral Model of Rational Choice
Python is an amazing language with a concise syntax that allows you to rapidly prototype algorithms, and we are going to use it for all the assignments in this course. This module will cover all of the fundamentals you need to get started with Python. And as we continue through the rest of the course, you’ll quickly become an expert!
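To give a flavor of the syntax this module covers (objects, functions, for loops, and imports), here is a minimal sketch; the function and the word list are made up for illustration:

```python
from collections import Counter  # an import brings in a standard-library module

def most_common_word(words):
    """Return the most frequent word in a list of strings."""
    counts = Counter(words)                 # a dict-like object mapping word -> count
    best, best_count = None, 0
    for word, count in counts.items():      # a for loop over (key, value) pairs
        if count > best_count:
            best, best_count = word, count
    return best

print(most_common_word(["to", "be", "or", "not", "to", "be"]))  # prints: to
```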
Intro: Module 2 [Video]
Lesson 1: Introduction to Python [Video]
Lesson 2: Objects and Types [Video]
Lesson 3: Functions [Video]
Lesson 4: For Loops [Video]
Lesson 5: Imports [Video]
Lesson 6: A Worked Example [Video]
Slides: [Slides]
Homework:
In artificial intelligence, a surprising number of tasks that we want to solve can be cast as search problems. In this module, we will introduce the formal definition of search problems, and examine some classic algorithms for solving search problems called shortest path algorithms. These are sometimes referred to as “uninformed” search algorithms or “blind” search algorithms, because they are run without any additional knowledge of where our goal lies. We’ll also look at some variants of these algorithms that have computational complexity guarantees.
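As a preview, here is a minimal sketch of breadth-first search in Python; the graph and node names are illustrative, not from a course assignment:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: return a path with the fewest edges, or None."""
    frontier = deque([[start]])              # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:      # "blind": no hint of where the goal lies
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A", "D"))  # ['A', 'B', 'D']
```

Swapping the FIFO queue for a LIFO stack turns this into depth-first search, a contrast we’ll return to in Lesson 5.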
Intro: Module 3 [Video]
Lesson 1: Search Problems [Video]
Lesson 2: Search Problem Formulation [Video]
Lesson 3: Basic Search Algorithms [Video]
Lesson 4: Uninformed Search Strategies [Video]
Lesson 5: Review of BFS and DFS [Video]
Lesson 6: Depth Limited Search and Iterative Deepening Search [Video]
Slides: [Slides]
Readings:
Homework:
We can often find a solution to a search problem more quickly if we have some knowledge of how close we are to a goal state. In this module we’ll look at how to incorporate such knowledge into search algorithms, which can focus our search effort and help us avoid exploring actions that move us further away from the goal. We’ll examine the most famous informed search algorithm, A* search, which, given an admissible heuristic, is guaranteed to find an optimal solution.
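To make the idea concrete, here is a minimal A* sketch in Python; the graph, edge costs, and heuristic values are made up for illustration:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: order the frontier by f(n) = g(n) + h(n). With an
    admissible h, the first goal popped from the queue is optimal."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = g2
                heapq.heappush(
                    frontier,
                    (g2 + h(neighbor), g2, neighbor, path + [neighbor]))
    return None, float("inf")

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
h = {"A": 2, "B": 2, "C": 1, "D": 0}.get     # a toy admissible heuristic
print(a_star(graph, h, "A", "D"))            # (['A', 'B', 'C', 'D'], 3)
```

Setting h to zero everywhere recovers uniform-cost search (Lesson 1), which makes the role of the heuristic easy to see.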
Intro: Module 4 [Video]
Lesson 1: Uniform-cost Search [Video]
Lesson 2: Heuristic Functions [Video]
Lesson 3: Greedy Best-first Search [Video]
Lesson 4: A* Search - part 1 [Video]
Lesson 5: A* Search - part 2 [Video]
Slides: [Slides]
Readings:
Homework:
Games present an interesting challenge for artificial intelligence because they involve reasoning about what actions your opponent is likely to take. In this module we’ll talk about how to represent games as adversarial search problems using a search tree, under the assumption that your opponent is a rational actor playing the best available strategy. We will also investigate a number of helpful algorithms and concepts that apply in the context of games, such as minimax and expectimax.
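Here is a minimal minimax sketch in Python to preview Lessons 2 and 3; the two-ply game tree and its leaf utilities are made up for illustration:

```python
def minimax(state, is_max, successors, utility):
    """Minimax value of a state: MAX picks the child with the largest
    value; MIN (the rational opponent) picks the smallest."""
    children = successors(state)
    if not children:                         # terminal state: score it
        return utility(state)
    values = [minimax(c, not is_max, successors, utility) for c in children]
    return max(values) if is_max else min(values)

# A toy two-ply tree as a dict; leaves map to utilities for MAX.
tree = {"root": ["L", "R"], "L": ["L1", "L2"], "R": ["R1", "R2"]}
leaf_utility = {"L1": 3, "L2": 12, "R1": 2, "R2": 8}
print(minimax("root", True,
              lambda s: tree.get(s, []),
              lambda s: leaf_utility[s]))    # 3: MAX prefers L (worst case 3) to R (worst case 2)
```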
Intro: Module 5 [Video]
Lesson 1: Game Formulation [Video]
Lesson 2: Game Tree [Video]
Lesson 3: Minimax Rules [Video]
Lesson 4: Alpha-Beta Pruning [Video]
Lesson 5: Alpha-Beta Pruning Worked Example (Prof. Pieter Abbeel) [Video]
Lesson 6: Expectimax [Video]
Slides: [Slides]
Readings:
Russell and Norvig, AIMA Chapter 5 “Adversarial Search” (5.1-5.3, 5.5)
Russell and Norvig, AIMA Chapter 16 “Making Simple Decisions” (16.1-16.3)
Optional: Claude Shannon, Programming a Computer for Playing Chess (1950)
Optional: Frederic Friedel, Reconstructing Turing’s “Paper Machine”
Homework:
In this module, we’ll look at constraint satisfaction problems (CSPs), a kind of search problem that allows us to take advantage of the problem structure. In CSPs, we have a set of variables, each of which we need to assign a value to, subject to some constraints. Through examples like map coloring and sudoku puzzles, we’ll discuss how we represent CSPs as a graph, how we propagate constraints on a partially solved graph, and how we check for consistency in nodes, arcs and paths. This will also introduce us to new concepts including inference and backtracking search.
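As a preview of backtracking search, here is a minimal sketch in Python on a toy map-coloring instance; the regions and colors are made up for illustration:

```python
def backtracking_search(variables, domains, neighbors, assignment=None):
    """Backtracking search for a CSP: assign one variable at a time and
    undo (backtrack) when no value satisfies the constraints."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # The map-coloring constraint: neighbors must get different colors.
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtracking_search(variables, domains, neighbors, assignment)
            if result is not None:
                return result
            del assignment[var]              # backtrack
    return None

# Three mutually adjacent regions need three distinct colors.
variables = ["A", "B", "C"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print(backtracking_search(variables, domains, neighbors))
# {'A': 'red', 'B': 'green', 'C': 'blue'}
```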
Intro: Module 6 [Video]
Lesson 1: Intro to CSPs [Video]
Lesson 2: Deeper into CSPs [Video]
Lesson 3: CSPs as a Search Problem [Video]
Lesson 4: Heuristics to Improve Backtracking Efficiency [Video]
Lesson 5: Arc Consistency and AC3 [Video]
Slides: [Slides]
Readings:
Homework:
In this module, we will introduce a new kind of agent: Knowledge-based Agents, which create and update a representation of the world in a knowledge base and use inference rules and logical reasoning to draw conclusions about how the world works. This requires a knowledge base: a set of sentences, written in a knowledge representation language, that make assertions about the world. In our course, we will use Propositional Logic as our knowledge representation language.
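To preview what logical inference looks like in code, here is a minimal truth-table entailment check in Python (one simple way to decide whether a knowledge base entails a query); the sentences are made up for illustration:

```python
from itertools import product

def entails(kb, query, symbols):
    """Truth-table entailment: KB |= query iff the query is true in
    every model (truth assignment) that makes the KB true."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False                     # a model satisfies KB but not the query
    return True

# Toy KB: (P or Q) and (not P), with sentences encoded as predicates.
kb = lambda m: (m["P"] or m["Q"]) and not m["P"]
query = lambda m: m["Q"]
print(entails(kb, query, ["P", "Q"]))        # True: the KB entails Q
```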
Intro: Module 7 [Video]
Lesson 1: Knowledge-based Agents 1 [Video]
Lesson 2: Hunt the Wumpus [Video]
Lesson 3: Knowledge-based Agents 2 [Video]
Lesson 4: Propositional Logic [Video]
Lesson 5: Knowledge Bases [Video]
Lesson 6: Theorem Proving 1 [Video]
Lesson 7: Theorem Proving 2 [Video]
Slides: [Slides]
Readings:
So far, we have examined deterministic problem settings, but in this module we will turn to stochastic, or probabilistic, task environments. We’ll start by introducing a new kind of formalization called Markov Decision Processes (MDPs), which are useful for fully observable task environments where we have to make a sequence of decisions and where the outcomes of the actions we take are non-deterministic. Along the way, we’ll look at new algorithms, such as Policy Iteration and Value Iteration, and the important concepts of discounted and additive rewards.
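As a preview of Lesson 5, here is a minimal value iteration sketch in Python; the two-state MDP is made up for illustration:

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, eps=1e-6):
    """Value iteration: repeat the Bellman update
    V(s) <- max_a sum_s' P(s'|s,a) * [R(s,a,s') + gamma * V(s')]."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if not actions(s):               # terminal state: value stays 0
                continue
            best = max(
                sum(p * (reward(s, a, s2) + gamma * V[s2])
                    for s2, p in transition(s, a))
                for a in actions(s))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:                      # values have (nearly) converged
            return V

# Toy chain: from s0, "go" reaches the goal s1 with probability 0.8
# (reward 1) and stays in s0 with probability 0.2 (reward 0).
V = value_iteration(
    states=["s0", "s1"],
    actions=lambda s: ["go"] if s == "s0" else [],
    transition=lambda s, a: [("s1", 0.8), ("s0", 0.2)],
    reward=lambda s, a, s2: 1.0 if s2 == "s1" else 0.0)
print(V)  # V(s0) converges to 0.8 / (1 - 0.9 * 0.2), about 0.98
```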
Intro: Module 8 [Video]
Lesson 1: Introduction to MDPs [Video]
Lesson 2: MDP Formulation (part 1) [Video]
Lesson 3: MDP Formulation (part 2) [Video]
Lesson 4: MDP Examples [Video]
Lesson 5: Value Iteration Algorithm [Video]
Lesson 6: Policy Evaluation [Video]
Lesson 7: Policy Iteration Algorithm [Video]
Lesson 8: Maximum Expected Utility [Video]
Slides: [Slides]
Readings:
Russell and Norvig, AIMA Chapter 16 “Making Simple Decisions” (16.1-16.3)
Russell and Norvig, AIMA Chapter 17 “Making Complex Decisions” (17.1-17.2)
Homework:
In this module, we will explore situations where the transition and reward functions are not given to us, but must be learned through interaction with the world. In reinforcement learning, every time an agent takes an action in the world, it receives feedback in the form of a reward, and the agent wants to maximize its expected rewards. We’ll look at several learning techniques for reinforcement learning, including model-based learning and temporal difference learning. Finally, we’ll see how to quantify how good different exploration strategies are using regret.
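Here is a minimal Q-learning sketch in Python, previewing Lessons 4-6; the states, actions, and numbers are made up for illustration:

```python
import random

def q_update(Q, s, a, reward, s2, actions, alpha=0.1, gamma=0.9):
    """One temporal-difference (Q-learning) update:
    Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]."""
    best_next = max((Q.get((s2, a2), 0.0) for a2 in actions), default=0.0)
    target = reward + gamma * best_next      # one-step TD target
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))

def epsilon_greedy(Q, s, actions, epsilon=0.1):
    """Exploration vs. exploitation: act randomly with probability epsilon,
    otherwise pick the action with the highest current Q-value."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((s, a), 0.0))

Q = {}
q_update(Q, s="s0", a="go", reward=1.0, s2="s1", actions=["go", "stay"])
print(Q)  # {('s0', 'go'): 0.1}
```

Note that no transition or reward function appears anywhere: the agent learns purely from the (s, a, r, s') experiences it collects.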
Intro: Module 9 [Video]
Lesson 1: Introduction to Reinforcement Learning 1 [Video]
Lesson 2: Introduction to Reinforcement Learning 2 [Video]
Lesson 3: Passive Reinforcement Learning [Video]
Lesson 4: Temporal Difference Learning [Video]
Lesson 5: Active Reinforcement Learning (Q-Learning) [Video]
Lesson 6: Exploration vs. Exploitation [Video]
Lesson 7: Approximate Q-Learning [Video]
Slides: [Slides]
Readings:
Russell and Norvig, AIMA Section 17.3 “Bandit Problems”
Russell and Norvig, AIMA Chapter 22 “Reinforcement Learning” (Sections 22.1-22.5)
Homework:
In this module, we’ll review the basics of probability theory, an extremely important foundation for artificial intelligence, including the axioms of probability, what an event is, what a probabilistic model is, and much more. To show some practical applications of these concepts, we’ll take a look at probabilistic language models, which power things like autocomplete on your smartphone.
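To preview probabilistic language models, here is a minimal bigram model sketch in Python; the two-sentence corpus is made up for illustration:

```python
from collections import Counter

def bigram_probs(corpus):
    """Maximum-likelihood bigram model: P(w2 | w1) = count(w1 w2) / count(w1)."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]   # sentence boundary markers
        unigrams.update(tokens[:-1])
        bigrams.update(zip(tokens, tokens[1:]))
    return {(w1, w2): c / unigrams[w1] for (w1, w2), c in bigrams.items()}

P = bigram_probs(["i am sam", "sam i am"])
print(P[("i", "am")])   # 1.0: "i" is always followed by "am" in this corpus
print(P[("<s>", "i")])  # 0.5: half the sentences start with "i"
```

An autocomplete feature can use exactly these conditional probabilities to rank candidate next words.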
Intro: Module 10 [Video]
Lesson 1: Review of Probabilities [Video]
Lesson 2: Distributions [Video]
Lesson 3: Probabilistic Inference and Rules [Video]
Lesson 4: Probabilistic Language Models [Video]
Slides: [Slides]
Readings:
Russell and Norvig, AIMA Chapter 12 “Quantifying Uncertainty” (Sections 12.1-12.7)
Jurafsky and Martin, Speech and Language Processing Chapter 3 “N-Gram Language Models”
Homework:
In this module we will build on last week’s material by introducing a kind of probabilistic model called a Bayes Net, in which the nodes represent variables in our probability model and the arcs encode our assumptions about how the events are related to each other. Bayes Nets are compact representations that allow us to encode independence assumptions between variables. We will revisit independence and look at an algorithm for enumerating the independence assumptions in a Bayes Net, called the d-separation algorithm.
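As a preview, here is a minimal sketch in Python of computing a joint probability from a Bayes Net’s factorization (the product of each node’s probability given its parents); the two-node network and its numbers are made up for illustration:

```python
def joint_probability(assignment, parents, cpt):
    """P(x1, ..., xn) = product over nodes of P(x_i | parents(x_i)),
    for a full assignment of True/False values to every variable."""
    p = 1.0
    for var, value in assignment.items():
        parent_values = tuple(assignment[u] for u in parents[var])
        p_true = cpt[var][parent_values]     # P(var = True | parent values)
        p *= p_true if value else 1.0 - p_true
    return p

# Toy network Rain -> WetGrass with conditional probability tables.
parents = {"Rain": [], "WetGrass": ["Rain"]}
cpt = {"Rain": {(): 0.2},
       "WetGrass": {(True,): 0.9, (False,): 0.1}}
print(joint_probability({"Rain": True, "WetGrass": True}, parents, cpt))
# 0.2 * 0.9 = 0.18
```

The savings in table size that this factorization buys us is the subject of Lesson 11.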
Intro: Module 11 [Video]
Lesson 1: Probabilistic Reasoning and Bayes Nets [Video]
Lesson 2: Introduction to Independence [Video]
Lesson 3: Conditional Independence [Video]
Lesson 4: The Chain Rule [Video]
Lesson 5: Bayes Nets Examples [Video]
Lesson 6: Graphical Model Notation and Examples [Video]
Lesson 7: Alarm Network Example [Video]
Lesson 8: Bayes Nets Probabilities [Video]
Lesson 9: Examples of Bayes Nets Probabilities [Video]
Lesson 10: Causality [Video]
Lesson 11: Size of Bayes Nets Probability Tables [Video]
Lesson 12: Conditional Independence in Bayes Nets [Video]
Lesson 13: Causal Chains [Video]
Lesson 14: Common Cause [Video]
Lesson 15: Common Effect [Video]
Lesson 16: Reachability and Paths [Video]
Lesson 17: The D-Separation Algorithm [Video]
Lesson 18: Inference [Video]
Slides: [Slides]
Readings:
Machine learning is an important aspect of artificial intelligence. Broadly speaking, machine learning addresses the question: how can an agent use data to learn to make decisions? In this module we’ll look at two machine learning algorithms, Naive Bayes and the perceptron, that use a set of labeled training data to learn how to perform classification tasks.
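Here is a minimal perceptron sketch in Python, previewing Lesson 4; the training data are made up for illustration:

```python
def train_perceptron(data, n_features, epochs=10):
    """Perceptron learning rule: on each misclassified example, add
    (for label +1) or subtract (for label -1) its features from the weights."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for x, label in data:
            prediction = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
            if prediction != label:
                w = [wi + label * xi for wi, xi in zip(w, x)]
    return w

# Labeled training data: [bias, feature] vectors with labels +1 / -1.
data = [([1, 2], 1), ([1, 3], 1), ([1, -2], -1), ([1, -3], -1)]
w = train_perceptron(data, n_features=2)
x_new = [1, 4]
print(1 if sum(wi * xi for wi, xi in zip(w, x_new)) >= 0 else -1)  # +1
```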
Intro: Module 12 [Video]
Lesson 1: Classification Tasks [Video]
Lesson 2: Naïve Bayes [Video]
Lesson 3: Basic Concepts in ML [Video]
Lesson 4: Perceptron [Video]
Lesson 5: Improvements to Perceptrons, and SVM [Video]
Slides: [Slides]
Readings:
Russell and Norvig, AIMA Chapter 19 “Learning from Examples” (Sections 19.1-19.6)
Jurafsky and Martin, Speech and Language Processing Chapter 4 “Naive Bayes and Sentiment Classification”
Optional: New York Times (July 13, 1958), Electronic ‘Brain’ Teaches Itself
Homework:
In this module we will investigate neural networks, which consist of a set of neural units grouped together in a network. We will look at how neural networks are trained, how their errors are computed and attributed back to individual units, and several neural architectures. We will also look at how neural networks have been used as probabilistic language models.
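To preview Lessons 1-4, here is a minimal sketch in Python of stochastic gradient descent on the cross-entropy loss for logistic regression (the single-unit building block of the networks in Lessons 5-6); the data are made up for illustration:

```python
import math, random

def sgd_logistic_regression(data, n_features, lr=0.1, epochs=100):
    """SGD on the cross-entropy loss for logistic regression:
    per example, w <- w - lr * (sigmoid(w . x) - y) * x."""
    w = [0.0] * n_features
    for _ in range(epochs):
        random.shuffle(data)                 # "stochastic": one example at a time
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid: predicted P(y = 1 | x)
            w = [wi - lr * (p - y) * xi for wi, xi in zip(w, x)]
    return w

# Toy data: [bias, feature] inputs with 0/1 labels.
data = [([1, 2], 1), ([1, 3], 1), ([1, -2], 0), ([1, -3], 0)]
w = sgd_logistic_regression(data, n_features=2)
z = w[0] + w[1] * 4
print(round(1.0 / (1.0 + math.exp(-z)), 3))  # near 1: predicts the positive class
```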
Intro: Module 13 [Video]
Lesson 1: Logistic Regression [Video]
Lesson 2: Loss Functions [Video]
Lesson 3: Stochastic Gradient Descent 1 [Video]
Lesson 4: Stochastic Gradient Descent 2 [Video]
Lesson 5: Neural Networks 1 [Video]
Lesson 6: Neural Networks 2 [Video]
Slides: [Slides]
Readings:
Jurafsky and Martin, Speech and Language Processing Chapter 5 “Logistic Regression” (Sections 5.1-5.6)
Jurafsky and Martin, Chapter 7 “Neural Networks and Neural Language Models”
Homework:
In our last content module, we will look into an application of AI called Natural Language Processing (NLP). The goal of NLP is to enable computers to communicate in a human-like fashion. This was one of the first ideas for AI, tracing its roots all the way back to Alan Turing and the imitation game. In this module, we’ll focus on how to create representations of words that capture their meanings, using the concept of vector space models.
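Here is a minimal sketch in Python of comparing words by the cosine similarity of their vectors, previewing Lessons 3-5; the three-dimensional embeddings are made up for illustration (real embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors (1.0 = same direction).
    In a vector space model, similar words get high cosine similarity."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

embeddings = {"cat": [0.9, 0.8, 0.1],
              "dog": [0.8, 0.9, 0.2],
              "car": [0.1, 0.2, 0.9]}
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # high, about 0.99
print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # low, about 0.30
```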
Intro: Module 14 [Video]
Lesson 1: Neural Language Models [Video]
Lesson 2: Lexical Semantic Models [Video]
Lesson 3: Vector Semantic Models 1 [Video]
Lesson 4: Vector Semantic Models 2 [Video]
Lesson 5: Word Embeddings [Video]
Lesson 6: Word Embeddings Demo [Video]
Slides: [Slides]
Readings:
Jurafsky and Martin, Chapter 9 “Sequence Processing with Recurrent Networks” (Sections 9.1-9.3)
Jurafsky and Martin, Chapter 6 “Vector Semantics and Embeddings” (Sections 6.1-6.4, 6.8, 6.10-6.12)
Optional: Laura Weidinger et al., Ethical and social risks of harm from Language Models
Optional: David Gray Widder, Dawn Nafus, Laura Dabbish, James Herbsleb, Limits and Possibilities for “Ethical AI” in Open Source: A Study of Deepfakes