Software Model Checking

-- Course Description --



  1. Automated software analysis: what and why?
  2. Overview of reliability methods: program testing, static analysis, deductive techniques, model checking
  3. Examples of model checkers: what they can, and cannot, do
  4. Model checking: first principles
  5. Temporal logic as a specification mechanism
  6. Temporal logic model checking
  7. Abstraction: state explosion, canonical abstractions, (counterexample guided) refinement
  8. Generalizations: existential/universal abstractions, (bi-)simulation, sound abstractions; abstract interpretation
  9. Symbolic reasoning: BDDs, SAT/SMT-based bounded model checking; loop invariants, interpolation, completeness thresholds
  10. Concurrency: communication via shared variables, locks, broadcasts; symmetric programs; parameterized model checking
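To give a flavor of item 4 above, here is a toy sketch (my own illustration, not course material) of the core idea behind explicit-state model checking: exhaustively explore the reachable states of a finite transition system, check a safety property at each state, and report a counterexample trace when the property fails. The model below (a counter modulo 8 that must avoid the value 5) is an invented example.

```python
# Minimal explicit-state model checking of a safety property:
# breadth-first exploration of all reachable states, with
# counterexample reconstruction via parent pointers.
from collections import deque

def check_safety(init, successors, is_safe):
    """Return (True, None) if every reachable state satisfies
    is_safe, else (False, trace) where trace is a shortest path
    from an initial state to a violating state."""
    parent = {s: None for s in init}   # visited set + path info
    queue = deque(init)
    while queue:
        s = queue.popleft()
        if not is_safe(s):
            # Walk parent pointers back to an initial state.
            trace = []
            while s is not None:
                trace.append(s)
                s = parent[s]
            return False, list(reversed(trace))
        for t in successors(s):
            if t not in parent:        # explore each state once
                parent[t] = s
                queue.append(t)
    return True, None

# Toy model: a counter mod 8 that is supposed to never reach 5.
ok, trace = check_safety(
    init=[0],
    successors=lambda s: [(s + 1) % 8],
    is_safe=lambda s: s != 5,
)
# The property fails, and the trace 0 -> 1 -> ... -> 5 is the
# counterexample the checker reports.
```

Real model checkers differ mainly in how they fight the state explosion this naive enumeration runs into (items 7-9 above: abstraction, BDDs, SAT/SMT), but the underlying question — is a bad state reachable? — is the same.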

Relevant books (but read my recommendations)

1. The following book is a bit old by now (a second edition is in the works) and thus does not cover more modern topics such as SAT-based model checking and predicate abstraction. It does, however, present much of the "classical" material from the area.

author    = {Edmund M. Clarke and Orna Grumberg and Doron Peled},
title     = {Model checking},
publisher = {MIT Press},
year      = {2001},
isbn      = {978-0-262-03270-4},
pages     = {I-XIV, 1-314}

2. This book is more up-to-date (note: 2nd edition). It begins with a thorough logic survey and does cover predicate logic, SAT, and temporal logics. It does not cover certain practical aspects of program verification, such as remedies to the state explosion problem: abstraction techniques, symmetry, etc.

author    = {Michael Huth and Mark Dermot Ryan},
title     = {Logic in Computer Science: Modelling and Reasoning about Systems (2nd ed.)},
publisher = {Cambridge University Press},
year      = {2004},
pages     = {I-XIV, 1-427}

3. The following book is very comprehensive and includes material this class will not cover, such as model checking with timing constraints, and probabilistic model checking.

author    = {Christel Baier and Joost-Pieter Katoen},
title     = {Principles of model checking},
publisher = {MIT Press},
year      = {2008},
isbn      = {978-0-262-02649-9},
pages     = {I-XVII, 1-975}

How this class works

This class will consist of lectures, homework assignments, and a course project. Results of projects will be presented in class by the students who authored them.

Projects must be approved by me. This applies both to the topic of the project and to the size of the team doing it: only teams of 1 or 2 are allowed. If you pair up, you must clearly explain to me how the work will be split, and both partners will be required to present.

Note that a project that simply explains/illustrates a research paper will rarely be approved; you are expected to make your own contributions (i.e., solve a particular problem that we will identify).

There will be no exams.

Standard rules of academic integrity, and of promptness of homework and project submissions will apply. Cooperation is generally encouraged, but you must always give proper credit to your collaborators. In some cases, your individual contribution will be measured, as detailed above.

Grading criteria

I plan to conduct the class in a highly interactive style, so the only way to lose credit for "class participation" is by not showing up.