CS264: Beyond Worst-Case Analysis
Instructor: Tim Roughgarden (office hours: Tuesdays 2-3 PM, Gates 474).
Teaching assistant: Joshua Wang (office hours: Wednesdays 3-5 PM, Gates 460;
email: joshua.wang [at] cs [dot] stanford [dot] edu).
Lectures: 10:30-11:50 AM on Tuesdays and Thursdays in Hewlett 102.
Course description: In the worst-case analysis of algorithms,
the overall performance of an algorithm is summarized by its worst
performance on any input. This approach has countless success
stories, but there are also important computational problems --- like
linear programming, clustering, and online caching --- where the
worst-case analysis framework does not provide any helpful advice on
how to solve the problem. This course covers a number of modeling
methods for going beyond worst-case analysis and articulating which
inputs are the most relevant.
List of topics: instance optimality;
perturbation and approximate stability;
parameterized analysis and condition numbers;
models of data (pseudorandomness,
locality, diffuse adversaries, etc.);
robust analogs of
average-case analysis; resource augmentation; planted and
semi-random graph models; sparse recovery (like compressive sensing).
Motivating problems drawn from online
algorithms, machine learning (topic models, clustering),
computational geometry, graph
partitioning, scheduling, linear programming,
local search heuristics, social networks,
and empirical algorithmics.
Prerequisites: undergraduate algorithms (CS161, or equivalent).
Prior exposure to linear programming is recommended but not required
(review materials will be posted as needed).
Coursework: weekly exercise sets. Students have the option to substitute a
paper-reading project for three of the exercise sets.
No late assignments are accepted, but we will drop the lowest
of your 10 scores.
Previous offering (in 2014): here (includes lecture videos and lecture notes).
Note: the overlap between this and the previous offering will be roughly 50%.
For a two-hour overview of the entire course, see
- Beyond Worst-Case Analysis,
Parts 1 and 2,
Algorithms and Uncertainty Boot Camp, Simons Institute (2016).
Homework logistics: We are using Gradescope for homework submissions (create an account if you don't already have one).
Use the course code
MBY5R9 to register for CS264. One submission per team (but remember to add the names of all team members).
Additional background material:
- A LaTeX template that you can use to
type up solutions. Here
and here are good
introductions to LaTeX.
- Homework #1 (Out Thu 1/12, due midnight Thu 1/19.)
- Homework #2 (Out Thu 1/19, due midnight Thu 1/26.)
- Homework #3 (Out Thu 1/26, due midnight Thu 2/2.)
- Homework #4 (Out Thu 2/2, due midnight Thu 2/9.)
- Homework #5 (Out Thu 2/9, due midnight Thu 2/16.)
- Homework #6 (Out Thu 2/16, due midnight Thu 2/23.)
- Homework #7 (Out Thu 2/23, due midnight Thu 3/2.)
- Homework #8 (Out Thu 3/2, due midnight Thu 3/9.)
- Homework #9 (Out Thu 3/9, due midnight Thu 3/16.)
- Homework #10 (Out Thu 3/16, due midnight Thu 3/23.)
Related workshops: Since this course debuted in 2009, there
have been several workshops on the topic:
Detailed schedule and references (under construction):
- Lecture 1 (Tue Jan 10): Introduction.
Three motivating examples (caching, linear programming, clustering).
Course philosophy. Instance optimality. Further reading:
- Lecture 2 (Thu Jan 12):
Instance optimality in computational geometry.
- Lecture 3 (Tue Jan 17):
Online paging. Resource augmentation.
- Lecture 4 (Thu Jan 19): Parameterized analysis of online paging.
Parameterizing page sequences by their locality.
Optimal page fault bounds for LRU.
Rigorously separating LRU and FIFO.
Related video (from the 2011 workshop on BWCA):
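To make the LRU/FIFO separation concrete, here is a toy simulation (not from the course materials) that counts page faults for both eviction policies on a request sequence with locality; the sequence is made up for illustration:

```python
from collections import OrderedDict, deque

def lru_faults(requests, k):
    """Count page faults for LRU with a cache of k pages."""
    cache = OrderedDict()  # keys ordered from least- to most-recently used
    faults = 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)           # p is now most recently used
        else:
            faults += 1
            if len(cache) == k:
                cache.popitem(last=False)  # evict the least recently used page
            cache[p] = True
    return faults

def fifo_faults(requests, k):
    """Count page faults for FIFO with a cache of k pages."""
    cache, queue, faults = set(), deque(), 0
    for p in requests:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                cache.discard(queue.popleft())  # evict the oldest-arrived page
            cache.add(p)
            queue.append(p)
    return faults

# A sequence with locality: pages 1 and 2 are hot. LRU keeps them resident;
# FIFO eventually evicts them because it ignores recency of use.
seq = [1, 2, 3, 1, 2, 4, 1, 2, 5, 1, 2]
print(lru_faults(seq, 3), fifo_faults(seq, 3))  # 5 7
```

On this sequence LRU faults 5 times versus 7 for FIFO, a small-scale illustration of the rigorous separation covered in lecture.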
- Lecture 5 (Tue Jan 24):
NP-hard problems and what to do about them.
The knapsack problem and parameterized approximation bounds.
Parameterized algorithms and a polynomial-size kernel for the vertex
cover problem. Color-coding and the longest path problem.
- Parameterized algorithms is a huge field: here is a recent textbook on the topic.
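As a warm-up for the knapsack discussion, the classic pseudopolynomial dynamic program (time O(n * W), polynomial only when the capacity W is small) can be sketched as follows; the item values and weights are made up for illustration:

```python
def knapsack_max_value(items, capacity):
    """Pseudopolynomial DP: best[w] = max value achievable with weight <= w.

    items is a list of (value, weight) pairs; iterating weights downward
    ensures each item is used at most once (0/1 knapsack).
    """
    best = [0] * (capacity + 1)
    for value, weight in items:
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

items = [(60, 10), (100, 20), (120, 30)]  # (value, weight)
print(knapsack_max_value(items, 50))      # 220: take the weight-20 and weight-30 items
```

Scaling-and-rounding the values in this DP is the standard route to the knapsack FPTAS, one form of the parameterized approximation bounds mentioned above.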
- Lecture 6 (Thu Jan 26):
Perturbation stability: clustering is hard only when it doesn't matter.
- Lecture 7 (Tue Jan 31):
When are linear programming relaxations exact?
Case study: perturbation-stable instances of the minimum multiway cut problem.
- Lecture 8 (Thu Feb 2):
Exact recovery in perturbation-stable instances of the maximum cut problem. Metric embeddings and Bourgain's Theorem. Improvements via semidefinite programming. References:
- Lecture 9 (Tue Feb 7):
Random graphs. Planted models. A spectral algorithm for the planted bisection problem.
- Lecture 10 (Thu Feb 9):
A little matrix perturbation theory.
A spectral algorithm for the planted clique problem.
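A minimal sketch of the spectral approach (one of several standard variants, not necessarily the exact algorithm from lecture): center the adjacency matrix of a G(n, 1/2) graph with a planted clique, and read off the vertices with the largest entries of the top eigenvector. The parameters n and k below are arbitrary illustrative choices:

```python
import numpy as np

def planted_clique_graph(n, k, rng):
    """G(n, 1/2) with a clique planted on vertices {0, ..., k-1}."""
    A = rng.random((n, n)) < 0.5
    A = np.triu(A, 1)
    A = (A | A.T).astype(float)   # symmetric 0/1 adjacency matrix
    A[:k, :k] = 1.0               # plant the clique
    np.fill_diagonal(A, 0.0)
    return A

def spectral_clique_candidate(A, k):
    """Candidate clique: the k vertices with the largest-magnitude entries
    of the top eigenvector of the centered matrix A - (1/2)(J - I)."""
    n = A.shape[0]
    M = A - 0.5 * (np.ones((n, n)) - np.eye(n))
    eigvals, eigvecs = np.linalg.eigh(M)
    v = eigvecs[:, np.argmax(np.abs(eigvals))]
    return set(int(i) for i in np.argsort(-np.abs(v))[:k])

rng = np.random.default_rng(0)
n, k = 300, 60   # k well above the sqrt(n)-scale threshold, so the signal is strong
A = planted_clique_graph(n, k, rng)
found = spectral_clique_candidate(A, k)
```

In practice one would follow this spectral step with a cleanup phase (e.g., keep vertices with many neighbors in the candidate set), as the perturbation-theory analysis in lecture suggests.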
- Lecture 11 (Tue Feb 14):
Semi-random models and semidefinite programming 1.
Case study: minimum bisection.
- Lecture 12 (Thu Feb 16):
Semi-random models and semidefinite programming 2.
Second case study: maximum clique.
- Lecture 13 (Tue Feb 21):
(Guest lecture by Mary Wootters.)
A taste of compressive sensing.
Finding sparse solutions to underdetermined linear systems.
When does ℓ1-minimization work?
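The ℓ1-minimization (basis pursuit) step can be written as a linear program and solved with an off-the-shelf LP solver; this is a generic sketch, not code from the lecture, and the problem sizes below are arbitrary:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1 subject to Ax = b, as an LP over x = xp - xn with xp, xn >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)            # objective: sum(xp) + sum(xn) = ||x||_1
    A_eq = np.hstack([A, -A])     # A xp - A xn = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    xp, xn = res.x[:n], res.x[n:]
    return xp - xn

rng = np.random.default_rng(1)
m, n, s = 40, 100, 5                       # measurements, dimension, sparsity
A = rng.standard_normal((m, n))            # random Gaussian measurement matrix
x_true = np.zeros(n)
support = rng.choice(n, s, replace=False)
x_true[support] = rng.standard_normal(s)
b = A @ x_true                             # underdetermined linear system
x_hat = basis_pursuit(A, b)
```

By LP optimality, x_hat satisfies the constraints exactly and has ℓ1-norm no larger than that of x_true; when the measurement matrix satisfies the conditions from lecture, x_hat in fact recovers x_true exactly.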
- Lecture 14 (Thu Feb 23):
(Guest lecture by Moses Charikar.)
LP and SDP relaxations for k-median and k-means. Exact recovery in the stochastic ball model.
- Lecture 15 (Tue Feb 28):
Exact recovery: topic models and nonnegative matrix factorization.
- Lecture 16 (Thu Mar 2):
Random ordering problems. Secretary problems. Case study: online facility location.
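For the classical secretary problem, the success probability of the observe-then-commit rule has an exact closed form, so the optimal threshold can simply be computed; a small sketch (n = 100 is an arbitrary illustrative choice):

```python
from fractions import Fraction

def success_probability(n, r):
    """Exact probability that the rule "reject the first r-1 candidates, then
    accept the first candidate better than all seen so far" picks the best
    of n candidates arriving in uniformly random order."""
    if r == 1:
        return Fraction(1, n)
    return Fraction(r - 1, n) * sum(Fraction(1, i - 1) for i in range(r, n + 1))

n = 100
best_r, best_p = max(((r, success_probability(n, r)) for r in range(1, n + 1)),
                     key=lambda t: t[1])
print(best_r, float(best_p))  # threshold near n/e, success probability near 1/e
```

As n grows, the optimal threshold tends to n/e and the success probability to 1/e ≈ 0.368, the guarantee discussed in lecture.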
- Lecture 17 (Tue Mar 7):
Smoothed analysis of local search heuristics. Case study: the 2OPT
local search heuristic for TSP in the plane.
Reference for lecture:
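The 2-OPT heuristic itself is easy to state in code: repeatedly replace a pair of tour edges with the crossing-free pair whenever that shortens the tour. A toy sketch on random points in the unit square (illustrative instance, not from the course materials):

```python
import math, random

def tour_length(points, tour):
    """Total length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(points, tour):
    """Apply improving 2-OPT moves (reverse a segment) until none remain,
    i.e., until the tour is 2-OPT locally optimal."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # these indices name the same pair of edges
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # Swap edges (a,b),(c,d) for (a,c),(b,d) if strictly shorter.
                if (math.dist(points[a], points[c]) + math.dist(points[b], points[d])
                        < math.dist(points[a], points[b]) + math.dist(points[c], points[d]) - 1e-12):
                    tour[i + 1:j + 1] = tour[i + 1:j + 1][::-1]
                    improved = True
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(30)]
start = list(range(30))
opt_tour = two_opt(pts, start[:])
print(tour_length(pts, start) >= tour_length(pts, opt_tour))  # True
```

Worst-case instances force this loop to run for exponentially many iterations; the smoothed analysis in lecture explains why, on slightly perturbed inputs, it converges quickly.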
- Lecture 18 (Thu Mar 9):
For binary optimization problems,
polynomial smoothed complexity <=> (Las Vegas randomized)
pseudopolynomial worst-case complexity.
Reference for lecture:
- Lecture 19 (Tue Mar 14):
- Lecture 20 (Thu Mar 16):
Application-specific algorithm selection.