Presented by Sam Tobin-Hochstadt.

Transcribed by Carl Eastlund.

type(E, var(X), T) ← member([X,T], E)

type(E, apply(M,N), T) ← type(E, M, arrow(S,T)), type(E, N, S)

type(E, lambda(X,M), arrow(S,T)) ← type([[X,S] | E], M, T)

The above is a complete implementation of a type checker - and, in fact, a type inferencer - in logic programming. It is entirely self-contained except for member, which must be defined as simple list-membership. In the example, type and member are relations: type(E,X,T) holds if expression X has type T in environment E and member(V,L) holds if list L contains value V. The other lower-case identifiers are functions: var, apply, arrow, and lambda. They represent syntactic tags on logical records. All remaining (upper-case) identifiers are variables.

Some examples of using this code:

← type([], lambda(x, var(x)), T)
T = arrow(T1, T1)

Note: this is not deducing polymorphism, but is instead introducing a fresh type T1.

← type([], X, arrow(T,T))
X = lambda(x, var(x))

Here it has inferred a term of type arrow(T,T); the system can in fact be used to enumerate further terms of this type.
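The three clauses can be transliterated into a runnable unification-based inferencer. The following Python sketch is not part of the original notes; the names (TVar, type_of) and the tuple encoding of terms are my own, with arrow(S,T) encoded as ('arrow', S, T):

```python
from itertools import count

class TVar:
    """A type variable, introduced fresh by the inferencer."""
    _ids = count(1)
    def __init__(self):
        self.id = next(TVar._ids)
    def __repr__(self):
        return f"T{self.id}"

def walk(t, s):
    # Follow bindings in the substitution s.
    while isinstance(t, TVar) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    # Extend s so that a and b become identical, or return None.
    a, b = walk(a, s), walk(b, s)
    if a is b or a == b:
        return s
    if isinstance(a, TVar):
        return {**s, a: b}
    if isinstance(b, TVar):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def type_of(env, m, t, s):
    # One branch per clause: var, apply, lambda.
    tag = m[0]
    if tag == 'var':                       # member([X,T], E)
        for name, ty in env:
            if name == m[1]:
                return unify(ty, t, s)
        return None
    if tag == 'apply':                     # type(E,M,arrow(S,T)), type(E,N,S)
        a = TVar()
        s = type_of(env, m[1], ('arrow', a, t), s)
        return None if s is None else type_of(env, m[2], a, s)
    if tag == 'lambda':                    # type([[X,S] | E], M, T)
        a, b = TVar(), TVar()
        s = unify(('arrow', a, b), t, s)
        return None if s is None else type_of([(m[1], a)] + env, m[2], b, s)

def reify(t, s):
    # Apply the substitution throughout a type.
    t = walk(t, s)
    return tuple(reify(x, s) for x in t) if isinstance(t, tuple) else t

# ← type([], lambda(x, var(x)), T)
T = TVar()
s = type_of([], ('lambda', 'x', ('var', 'x')), T, {})
print(reify(T, s))  # an arrow type with the same variable on both sides
```

As in the example above, the inferred type is an arrow from a fresh type variable to itself.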

*
Dave: So you've just implemented unification in a unification-based language.
[Scribe: More on this later.]
Q: What if [X,S
*

Propositions are written in a clausal form:

B_{1}, ..., B_{n} ← A_{1}, ..., A_{m}

Each A_{i}, B_{i} is an atomic form:

P(t_{1}, ..., t_{k}), where P is a predicate symbol.

Each t_{i} is a term. Terms are:

variables, constants, and function applications f(t_{1}, ..., t_{k}) whose arguments are themselves terms.

Any statement in predicate logic has a clausal form.

*
Q: Does this syntax include negation?
A: No [not directly].
*

Each clause is interpreted as the disjunction of the B_{i}s implied by the
conjunction of the A_{i}s, which holds for all possible variable
substitutions.

B_{1}, ..., B_{n} ← A_{1}, ..., A_{m}
≅ ∀x_{1}...x_{k} [A_{1} ∧ ... ∧ A_{m} → B_{1} ∨ ... ∨ B_{n}]

[∀x_{1}...x_{k} (¬A_{1} ∨ ... ∨ ¬A_{m} ∨ B_{1} ∨ ... ∨ B_{n})]

These clauses have some special cases:

Case | Clause | Name

n=0: | ← A_{1}, ..., A_{m} | Denial: there are no substitutions that make A_{1}, ..., A_{m} true.

m=0: | B_{1}, ..., B_{n} ← | Assertion

n=m=0: | ← | Null clause (also drawn as a box)

Given some clauses, we add a "goal" to perform computation. A goal takes the form of a denial:

← A_{1}, ..., A_{m}

The computation finds some substitution into the variables of the goal that refutes
it. The goal **denies** A_{1}, ..., A_{m}, so the substitution
**proves** it.

Example:

(Intuition: this is the definition of cons/nil-based list-append.
Append(A,B,C) holds if appending B on the end of A yields C.)

(A1) | Append(nil, Z, Z) ←

(A2) | Append(cons(X,Y), Z, cons(X,U)) ← Append(Y, Z, U)

We add a goal to these rules:

← Append(cons(a,nil), cons(b,nil), X)

What falsifies this denial? (What substitution proves Append(cons(a,nil),
cons(b,nil), X)?)

X = cons(a, cons(b,nil))

This is how we compute with predicate logic.
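The substitution can be double-checked with a direct functional encoding of cons/nil lists as nested tuples. This Python sketch is my own illustration, not part of the notes:

```python
def append(a, b):
    # cons/nil lists as nested tuples: cons(X, Y) = ('cons', X, Y)
    if a == 'nil':
        return b
    _, head, tail = a
    return ('cons', head, append(tail, b))

x = append(('cons', 'a', 'nil'), ('cons', 'b', 'nil'))
print(x)  # → ('cons', 'a', ('cons', 'b', 'nil'))
```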

*
Matthias: So the type-checker in the first example has four uses.
Sam: Yes. These are: type checking (E, X, and T all given), type inference (T left as a variable), term inference (X left as a variable), and enumerating valid term/type pairs (both X and T left as variables).
*

Matthias: With regard to Dave's earlier claim that "you've just implemented unification in a unification-based language," you have all used unification based languages before. What are they?

A: ML and Haskell.

Matthias: More specifically?

A: Their type systems.

Matthias: So write a type inferencer in them!

*
Q: Why use only one clause on the left, if more than one are expressible?
A: There is no "nice" procedural interpretation of more than one clause on the left.
*

We categorize four kinds of Horn Clauses:

Case | Clause | Name

n=1, m>0: | B ← A_{1}, ..., A_{m} | Procedure (where B is the "name", and A_{1}, ..., A_{m} is the "body")

n=1, m=0: | B ← | Assertion (B is always true)

n=0, m>0: | ← A_{1}, ..., A_{m} | Goal (to be refuted; A_{1}, ..., A_{m} to be proved)

n=m=0: | ← | Halt (a satisfied goal)

Given a goal:

← A_{1}, ..., A_{i}, ..., A_{m}

And a procedure: [for an inclusive definition of procedure, where n≥0]

B ← B_{1}, ..., B_{n}

If there exists a substitution θ for the variables of A_{i} and B which
renders them identical (this substitution may replace variables with other
variables), then we introduce a new goal:

← (A_{1}, ..., A_{i-1}, B_{1}, ..., B_{n}, A_{i+1}, ..., A_{m})θ

When a procedure name matches part of the goal under some substitution, we apply the
substitution to both goal and procedure, then replace the matching goal section with
the procedure body. This process is called **procedure invocation**.

A **computation** is a sequence of procedure invocations. A computation is
**successful** if it ends in the null clause.

*
Q: So why is there no negation?
A: There are top-level denials.
Matthias: This model assumes a closed world. When you can add clauses after the
fact, there are problems with denial and negation. No one knows how to do
negation "right".
Q: Can this diverge?
A: Yes. Consider: Append(A,B,C) ← Append(A,B,C)
Q: Isn't that just a tautology?
A: Yes; all of these propositions are meant as tautologies. See the Append
example; every clause was a tautology by definition of appending lists.
Q: Is this process "tail-recursive"? Do propositions blow up in size if they
diverge?
A: Depending on the procedure invoked, they may or may not grow in a
divergence.
Q: So can we express all propositions?
Matthias: Negation is problematic to express. In the semantics given here, it can
be expressed. In real logic programming implementations, there are problems.
Matthias: Real Prolog performs depth-first traversal of possibilities. There are
other traversals; you need a breadth-first traversal to solve all decidable goals
in bounded time.
*

(F1) | Fact(0, s(0)) ←

(F2) | Fact(s(X), U) ← Fact(X, V), Times(s(X), V, U)

(T) | Times(X, s(0), X) ←

Given the above definitions - the full definition of factorial, and one part of Times - we can compute the factorial of 2. The series of procedure invocations is given below, including the procedure label and substitution used.

Goal | Justification

← Fact(s(s(0)), A) | Initial Goal

← Fact(s(0), V1), Times(s(s(0)), V1, A) | (F2) {X := s(0), U := A, V := V1}

← Fact(0, V2), Times(s(0), V2, V1), Times(s(s(0)), V1, A) | (F2) {X := 0, U := V1, V := V2}

← Times(s(0), s(0), V1), Times(s(s(0)), V1, A) | (F1) {V2 := s(0)}

← Times(s(s(0)), s(0), A) | (T) {X := s(0), V1 := s(0)}

← | (T) {X := s(s(0)), A := s(s(0))}

The end result, as we would expect, is A = s(s(0)).
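The same computation can be checked with a functional encoding of Peano numerals. This Python sketch is my own; since the notes give only the Times(X, s(0), X) case, a full definition of Times (and the Plus it needs) is assumed:

```python
def s(n):
    # successor: s(n) as a nested tuple
    return ('s', n)

def plus(a, b):
    # Peano addition: a + 0 = a, a + s(n) = s(a + n)
    if b == 0:
        return a
    return s(plus(a, b[1]))

def times(a, b):
    # Peano multiplication (assumed here; the notes give only X*1 = X):
    # a * 0 = 0, a * s(n) = a*n + a
    if b == 0:
        return 0
    return plus(times(a, b[1]), a)

def fact(n):
    # mirrors (F1) and (F2): 0! = s(0), s(X)! = s(X) * X!
    if n == 0:
        return s(0)
    return times(n, fact(n[1]))

print(fact(s(s(0))))  # → ('s', ('s', 0)), i.e. 2! = 2
```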

*
Q: Does this handle Fact(8, X)?
A: Yes, although it might need a full definition of Times.
Q: Is this a proof tree? Are these clauses inference rules?
A: No, these are implications and computations.
*

There are two kinds of nondeterminism present in our procedural interpretation. The first kind we have already worked around, in the example above. There is nondeterminism in the choice of procedures to invoke. What is the other kind?

Consider the following example:

What substitutions for X refute this goal?

X = a

or

X = b

So there is also nondeterminism in the results.

In this example, a logic programming language could continue to give you possible lists containing a indefinitely.

So the two kinds of nondeterminism are:

1. Nondeterminism in the choice of procedure to invoke.
2. Nondeterminism in the result: different computations can refute the goal with different substitutions.

*
Q: Will any program processing lists try all possible lists?
A: Only if the list is an unconstrained variable.
Q: How does this nondeterminism reconcile with "unification is deterministic"?
A: Each separate result is the result of a new unification step.
*
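Result nondeterminism can be seen concretely by running Append "backwards": the goal ← Append(X, Y, cons(a, cons(b, nil))) has three refuting substitutions. The following functional Python sketch of my own enumerates them:

```python
def splits(l):
    # All pairs (x, y) with Append(x, y, l), using the cons/nil tuple
    # encoding; each yielded pair is one refuting substitution.
    yield 'nil', l                      # via (A1)
    if l != 'nil':
        _, head, tail = l
        for x, y in splits(tail):       # via (A2)
            yield ('cons', head, x), y

L = ('cons', 'a', ('cons', 'b', 'nil'))
for x, y in splits(L):
    print(x, y)                         # three ways to split the list
```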

Kowalski and Van Emden formulate several semantic interpretations of predicate logic. Each one defines a meaning function - called a "denotation" - for a proposition written in the logic.

One example is their **operational semantics** - not the same kind of
operational semantics as we have seen before. In this semantics, the denotation of
a predicate P, D(P), is defined as follows:

D(P) = { t | **A** |- P(t) }

Here, **A** represents the set of definitions (procedure clauses given
before a goal). We define |- by: X |- Y iff ∃ a refutation of X ∧ ¬
Y. This is a proof theoretic operational semantics.

Another example is the **model theoretic semantics**. The model theoretic
denotation function is:

D(P) = { t | **A** |= P(t) }

What has changed? We have replaced |- with |=, so what is the difference? We need some intermediate definitions.

We define the Herbrand base of **A**, Ĥ(**A**), as the set
of all atomic forms whose arguments are terms built from the constants, variables,
and function symbols in **A**.

*
Q: Does the Herbrand base contain negation?
A: No, it only contains atomic forms as defined before.
Q: Is the Herbrand base finite?
A: No. For instance, the Herbrand base of the factorial model would contain
Fact(X,Y) for all integers X and Y.
*

We define a Herbrand interpretation I as any subset of Ĥ(**A**). A
Herbrand model of **A** is any Herbrand interpretation I in which
**A** is true (all the forms in **A** are in I). We
define M(**A**) as the set of all models of **A**.

Finally, **A** |= X iff X is true in
∩M(**A**).

In addition to defining an operational semantics, a model theoretic semantics, a fixed point semantics, and so on, Kowalski and Van Emden proved the equivalence of all of these semantics.
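The fixed point semantics can be illustrated directly: iterate the immediate-consequence operator (add a clause's head whenever its body already holds) to a fixed point over ground instances of the Append program. This Python sketch is my own; the restriction to small lists over {a, b} is an assumption made to keep the instance set finite:

```python
from itertools import product

def cons(x, y):
    return ('cons', x, y)

def lists(max_len, atoms=('a', 'b')):
    # All cons/nil lists over `atoms` up to max_len elements.
    out = ['nil']
    for n in range(1, max_len + 1):
        for elems in product(atoms, repeat=n):
            l = 'nil'
            for e in reversed(elems):
                l = cons(e, l)
            out.append(l)
    return out

def ground_instances(max_len=2):
    # Ground instances of (A1) and (A2) over small lists.
    for z in lists(max_len * 2):
        yield (('Append', 'nil', z, z), [])                    # (A1)
    for x in ('a', 'b'):
        for y in lists(max_len):
            for z in lists(max_len):
                for u in lists(max_len * 2):
                    yield (('Append', cons(x, y), z, cons(x, u)),
                           [('Append', y, z, u)])              # (A2)

def least_model(clauses):
    # Iterate the immediate-consequence operator to a fixed point.
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

m = least_model(list(ground_instances()))
target = ('Append', cons('a', 'nil'), cons('b', 'nil'),
          cons('a', cons('b', 'nil')))
print(target in m)  # → True
```

The resulting set is (the relevant finite portion of) the least Herbrand model, matching the operational result computed earlier.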

An algorithm can be decomposed into two parts: its logic (what relation it computes) and its control (how the computation is carried out).

For instance, consider the problem of sorting a list. Quicksort is a logic, but its implementation in different languages, or using different tools, would give different control. Control can affect performance, but except for the possibility of nontermination, will not change the semantics of a logic.

The breakdown into logic vs. control is not unique. For instance, we may consider "sorting a list" to be a logic, or "quicksort" to be a logic, and thus grant more importance to either logic or control.

Logic programming gives us a logic to model; the control of the language is the method used for searching the possibilities generated from the procedures. Examples are top-down search, bottom-up search, and depth-first search versus breadth-first search.