Karl Lieberherr on Non-Chronological Backtracking Based on Clause Learning and its Connection to Resolution

Comparison between Superresolution (Lieberherr 1975/1977) and related work, older and newer: Stallman/Sussman (1976), Marques-Silva/Sakallah (1995), Beame/Kautz/Sabharwal (2004), and Kautz/Selman (2003)

The goal here is not to accuse productive and well-respected authors but to point out how the newer and older works overlap. Indeed, I am very grateful to all those authors for their work that touches on clause learning with restarts.

The first conclusion is that the clause learning work by Lieberherr (TR from 1975) appeared at about the same time as the work by Stallman/Sussman (TRs from 1975, 1976), which so far has been considered the only early work on clause learning.

A second conclusion is that a 2004 theorem by Beame/Kautz/Sabharwal about the polynomial equivalence of clause learning and general resolution restates a theorem by Lieberherr in the 1970s.

A third conclusion is that a challenge posed by Kautz/Selman in 2003 was solved by 1977.

A fourth conclusion is that a conflict analysis technique behind clause learning, similar to the one described by Marques-Silva/Sakallah (1995), was already known by 1977.

A fifth conclusion, to be taken with a grain of salt: there are still some gems that have not been rediscovered.

Papers by Lieberherr

[Clause Learning 1975] ETH Zurich technical report, 1975 (introduces clause learning; algorithm Rj).
[Superresolution] Superresolution, 1977, published with the American Mathematical Society (a complete proof system for SAT based on clause learning).

Comparison with Marques-Silva's PhD thesis: https://wiki.ccs.neu.edu/download/attachments/2158/jpms-phd95.pdf

Joao Marques-Silva's PhD thesis says:

What is novel in GRASP is its approach to exploiting the structure of conflicts from unit propagation.

Superresolution is based on conflicts that arise from unit propagation; see [Superresolution] for the paper and its context. When unit propagation runs into a conflict (a falsified clause) after setting the decision literals d1 d2 ... dn-1 dn, and it also runs into a conflict after setting d1 d2 ... dn-1 !dn, then the superresolvent is the elementwise negation of d1 d2 ... dn-1.
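To make this step concrete, here is a minimal Python sketch of the two-branch rule just described. It is only an illustration under simplifying assumptions (clauses as frozensets of integer literals, helper names of my own choosing), not the algorithm of [Clause Learning 1975] or [Superresolution].

def unit_propagate(clauses, assignment):
    """Extend a set of literals by unit propagation.
    Returns (conflict, assignment); conflict is True iff some clause is falsified."""
    assignment = set(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                          # clause already satisfied
            open_lits = [lit for lit in clause if -lit not in assignment]
            if not open_lits:
                return True, assignment           # every literal is false: conflict
            if len(open_lits) == 1:
                assignment.add(open_lits[0])      # unit clause: force its open literal
                changed = True
    return False, assignment

def superresolvent(clauses, decisions):
    """If unit propagation conflicts both after d1 .. dn-1, dn and after
    d1 .. dn-1, !dn, return the elementwise negation of d1 .. dn-1, else None."""
    *prefix, last = decisions
    conflict_pos, _ = unit_propagate(clauses, set(prefix) | {last})
    conflict_neg, _ = unit_propagate(clauses, set(prefix) | {-last})
    if conflict_pos and conflict_neg:
        return frozenset(-d for d in prefix)
    return None

# Toy formula: after the decisions 1, 2 and after 1, !2 unit propagation
# conflicts, so the learned superresolvent is the unit clause {-1}.
clauses = [frozenset({-1, -2, 3}), frozenset({-1, -2, -3}),
           frozenset({-1, 2, 4}), frozenset({-1, 2, -4})]
print(superresolvent(clauses, [1, 2]))            # frozenset({-1})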

Normal Superresolution goes a step further by exploiting more deeply the structure of conflicts from unit propagation. Normal Superresolution will sometimes produce clauses that are shorter than the ones produced by standard superresolution described above. Those shorter superresolvents are called normal superresolvents. See pages 7 and 8 of [Superresolution].

What is "novel" in GRASP is its approach to learning clauses that are implied by the superresolvents using unit propagation. The superresolvents are the "minimal" elements of the clauses learned by GRASP. However note, that [Clause Learning 1975] already talks about learning other clauses.

A footnote on page 68 says: Conditions similar to implicates of x are referred to as nogoods in truth maintenance systems [60, 116, 161] and in some algorithms for constraint satisfaction problems [143]. Nevertheless, the basic mechanism for creation of conflicting clauses differs, since conflicting clauses are not necessarily expressed in terms of decision variables, whereas nogoods are.

This puts superresolvents into the same category as nogoods because superresolvents are also expressed in terms of decision variables.

Claim: [Superresolution] does not address clause learning as GRASP does. It is only a proof system.

This is not precise. Superresolution is a complete proof system for SAT based on clause learning.


Comparison with the work by Paul Beame, Henry Kautz, and Ashish Sabharwal

This work was done at the University of Washington as part of Ashish Sabharwal's PhD dissertation (2005), Algorithmic Applications of Propositional Proof Complexity.

Claim: There is a contradiction between theorem 4.5 in [Superresolution]: "Superresolution and Resolution are polynomially equivalent" and Corollary 1 on page 335 by Beame et al.:

@article{beame04understanding,
  author  = "P. Beame and H. Kautz and A. Sabharwal",
  title   = "Towards Understanding and Harnessing the Potential of Clause Learning",
  journal = "Journal of Artificial Intelligence Research",
  year    = 2004,
  volume  = 22,
  pages   = "319--351",
  url     = "citeseer.ist.psu.edu/beame03understanding.html"
}

Corollary 1 says: CL (Clause Learning) can provide exponentially shorter proofs than tree-like, regular and Davis-Putnam resolution.

Answer: There is no contradiction at all. Indeed, when we read on in the paper by Beame et al., we get to Section 5: Clause Learning and General Resolution. In this section they prove that CL and general resolution can efficiently simulate each other. Theorem 2 on page 337:

Theorem 2. CL-- with any non-redundant scheme and unlimited restarts
is polynomially equivalent to RES.

Theorem 2 restates Theorem 4.5 in [Superresolution]. The reason is that superresolution is, in a way, the most general non-redundant clause learning scheme, and restarts are built into superresolution: a restart happens after each proof step.
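To illustrate operationally what "a restart happens after each proof step" means, here is a hedged toy sketch that reuses unit_propagate from the sketch above: whenever both branches of the current decision variable conflict, the loop records the superresolvent (the elementwise negation of the earlier decisions) and restarts from an empty assignment on the enlarged clause set. The lowest-index branching rule and the function name are my own simplifications, not the procedure of [Superresolution].

def solve_by_learning_and_restarting(clauses, num_vars):
    """Returns a satisfying set of literals, or None if the formula is unsatisfiable."""
    clauses = set(clauses)
    while True:                                       # one iteration per restart
        conflict, assignment = unit_propagate(clauses, set())
        if conflict:
            return None                               # conflict without any decision: UNSAT
        decisions = []
        while True:
            free = [v for v in range(1, num_vars + 1)
                    if v not in assignment and -v not in assignment]
            if not free:
                return assignment                     # total assignment, no conflict: model
            v = free[0]                               # naive branching rule, for illustration
            conflict, extended = unit_propagate(clauses, assignment | {v})
            if not conflict:
                decisions.append(v)
                assignment = extended
                continue
            conflict, extended = unit_propagate(clauses, assignment | {-v})
            if not conflict:
                decisions.append(-v)
                assignment = extended
                continue
            # both branches of v conflict: record the superresolvent, then restart
            clauses.add(frozenset(-d for d in decisions))
            break

On the toy clauses above (with num_vars = 4), this driver learns the clause {-1} during its first run and returns the model {-1, 2, 3, 4} on the second run. Each learned clause is new: had it already been present, unit propagation would have conflicted before the last decision was kept, and since there are only finitely many clauses over num_vars variables, the loop terminates.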

In the same 2004 paper, Beame et al. write: "Despite its importance, there has been little work on formal properties of clause learning".

Twenty-seven years earlier, the work on superresolution studied important properties of clause learning [Superresolution].


Comparison with the work by Richard Stallman and Gerald Sussman

My paper on Superresolution appeared in 1977 with the AMS, and a technical report on the same topic was published in 1975 [Clause Learning 1975].

A paper by Kautz and Selman (cp03kautz-final.pdf):

Ten Challenges Redux: Recent Progress in
Propositional Reasoning and Search
Henry Kautz (1) and Bart Selman (2)
1 Department of Computer Science & Engineering
University of Washington
Seattle, WA 98195 USA
kautz@cs.washington.edu
2 Department of Computer Science
Cornell University
Ithaca, NY 14853 USA
selman@cs.cornell.edu

@inproceedings{kautz-selman:ten-challenges,
  author    = "H. Kautz and B. Selman",
  title     = "{Ten Challenges Redux: Recent Progress in Propositional Reasoning and Search}",
  booktitle = "Proc. Principles and Practice of Constraint Programming",
  year      = 2003,
  pages     = "1--18",
  publisher = "Springer Verlag, LNCS 2833"
}
attributes the idea of clause learning to an AI paper:
R. Stallman and G. Sussman. Forward reasoning and dependency-directed
backtracking in a system for computer-aided circuit analysis. Artificial
Intelligence, 9(2), 1977.
[Stallman/Sussman 1976, AI Memo 380] is the technical report that corresponds to the journal paper.

Question: Was it Lieberherr (1975/1977) or Stallman/Sussman (1977) who first proposed, analyzed, and implemented the idea of clause learning?

Stallman and Sussman clearly state the idea: "ARS scans backward along the chains of deduction from the scene of the contradiction to find those choices which contributed to the contradiction, and records them in a NOGOOD assertion to make sure that the same combination is never tried again. ... The NOGOOD assertion is a further innovation that saves even more computation by reducing the size of the search space."

Lieberherr says in [Clause Learning 1975]: Algorithm A tries to find an interpretation I for s such that, if it fails to find a model, it learns a resolvent c. [Superresolution] published in 1977 defines a complete proof system for SAT based on clause learning.

The Stallman and Sussman work is in the domain of computer-aided circuit analysis and was published as a technical report in 1976; Lieberherr's technical report appeared a year earlier.

Other papers, for example the book chapter by Lynce and Marques-Silva,

The Effect of Nogood Recording in DPLL-CBJ
SAT Algorithms
Ines Lynce and Joao Marques-Silva

attribute clause learning to:

[3] R. Bayardo Jr. and R. Schrag. 
Using CSP look-back techniques to solve real-world
SAT instances. In Proceedings of the National Conference on Artificial Intelligence,
pages 203-208, July 1997.

and

[16] J. P. Marques-Silva and K. A. Sakallah. GRASP: A new search algorithm for
satisfiability. In Proceedings of the ACM/IEEE International Conference on
Computer-Aided Design, pages 220-227, November 1996.

We read in Lynce/Marques-Silva: In addition, state-of-the-art SAT solvers [3,11,16,17,26] effectively use learning techniques. In these solvers, whenever a conflict (dead-end) is reached, a new clause (nogood) is recorded to prevent the occurrence of the same conflict again during the subsequent search. Moreover, and from the first SAT solvers that incorporated non-chronological backtracking (NCB) [3,16], learning has always been a key component of the search algorithm, where recorded nogoods are used to determine the search point to backtrack to.

In http://eprints.ecs.soton.ac.uk/12056/01/jpms-jar05.pdf

Heuristic-Based Backtracking Relaxation for Propositional Satisfiability
Ateet Bhalla, Ines Lynce, Jose T. de Sousa, and Joao Marques-Silva

we read: After a conflict is identified, we may apply a conflict analysis procedure [2, 18, 19] to identify a subset of the decision assignments that represent a sufficient condition for producing the same conflict.

This is an explanation of semi-super resolution. See: http://www.ccs.neu.edu/home/lieber/courses/csg260/f06/project/project-desc-CSU670.txt
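One simple way to compute such a subset is sketched below in Python, reusing unit_propagate from the earlier sketch: greedily drop decisions that are not needed to reproduce the conflict under unit propagation; the negation of the surviving decisions is the clause (nogood) that gets recorded. This greedy minimization only illustrates the idea; it is neither GRASP's implication-graph analysis nor the exact semi-superresolution rule, and the function name is my own.

def sufficient_decisions(clauses, decisions):
    """Given decision literals whose unit propagation conflicts, return a subset
    that still conflicts; the negation of this subset is the nogood to record."""
    assert unit_propagate(clauses, set(decisions))[0], "decisions must yield a conflict"
    kept = list(decisions)
    for d in list(decisions):
        trial = [x for x in kept if x != d]
        if unit_propagate(clauses, set(trial))[0]:
            kept = trial                              # d was not needed for the conflict
    return kept

# With the toy clauses from the first sketch, decision 5 is irrelevant:
# sufficient_decisions(clauses, [5, 1, 2]) returns [1, 2], and the recorded
# nogood is the clause {-1, -2}.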

Although most papers attribute non-chronological backtracking to Bayardo/Schrag and Marques-Silva/Sakallah, superresolution predates those papers by about two decades. Superresolution is a complete proof procedure that learns strong nogoods.


Comparison with the work by Kautz and Selman
In 2003, Kautz and Selman (see the above reference) posed several challenges to the SAT community, including:

CHALLENGE 3A: Demonstrate that a propositional proof system more powerful than tree-like resolution can be made practical for satisfiability testing.

We introduced a propositional proof system that is more powerful than tree-like resolution and that is practical. It was described in 1975, as algorithm Rj, in an ETH Zurich technical report [Clause Learning 1975], and in 1977, under the name superresolution, in an abstract published with the American Mathematical Society [Superresolution]. The 1977 paper showed that superresolution and resolution are polynomially equivalent.

Karl Lieberherr, December 2006; updated February 2007