# Logic Programming

Transcribed by Carl Eastlund.

### Type Checking Example

    type(E, var(X), T) ← member([X,T], E)
    type(E, apply(M,N), T) ← type(E, M, arrow(S,T)), type(E, N, S)
    type(E, lambda(X,M), arrow(S,T)) ← type([[X,S] | E], M, T)

The above is a complete implementation of a type checker - and, in fact, a type inferencer - in logic programming. It is entirely self-contained except for member, which must be defined as simple list-membership. In the example, type and member are relations: type(E,X,T) holds if expression X has type T in environment E and member(V,L) holds if list L contains value V. The other lower-case identifiers are functions: var, apply, arrow, and lambda. They represent syntactic tags on logical records. All remaining (upper-case) identifiers are variables.

Some examples of using this code:

    ← type([], lambda(x, var(x)), T)
    T = arrow(T1,T1)

Note: this is not deducing polymorphism, but is instead introducing a fresh type T1.

    ← type([], X, arrow(T,T))
    X = lambda(x, var(x))

Here it has inferred a term of type arrow(T,T); the system can in fact be used to enumerate more of the possible terms of this type.
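The three clauses can also be read functionally, left to right. A minimal Python sketch of that reading, under two assumptions of mine: terms are encoded as tuples headed by their tag, and lambda terms carry an explicit parameter type (the relational program instead infers S by unification):

```python
def type_of(env, expr):
    """Functional (checking-direction) reading of the three type clauses.
    Terms: ("var", x), ("apply", m, n), ("lam", x, s, m), where s is an
    explicit parameter type -- an assumption; the logic program infers it."""
    tag = expr[0]
    if tag == "var":                      # type(E, var(X), T) ← member([X,T], E)
        x = expr[1]
        for (name, t) in env:             # member is plain list membership
            if name == x:
                return t
        raise TypeError(f"unbound variable {x}")
    if tag == "apply":                    # type(E, apply(M,N), T) ← type(E, M, arrow(S,T)), type(E, N, S)
        m, n = expr[1], expr[2]
        ft = type_of(env, m)
        if ft[0] != "arrow" or ft[1] != type_of(env, n):
            raise TypeError("application mismatch")
        return ft[2]
    if tag == "lam":                      # type(E, lambda(X,M), arrow(S,T)) ← type([[X,S] | E], M, T)
        x, s, m = expr[1], expr[2], expr[3]
        return ("arrow", s, type_of([(x, s)] + env, m))
    raise ValueError(f"unknown term {expr!r}")
```

For example, the identity term checks at arrow type, and (λ x.x)(λ x.x) checks once the two identities carry their different types, echoing the question below.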

Dave: So you've just implemented unification in a unification-based language. [Scribe: More on this later.]

Q: What if [X,S1] and [X,S2] are both in environment E?
A: The function member is just list membership, so a duplicate name is not masked: either binding may be found.
Matthias: Suspend your disbelief for a moment.

Q: Will this type (λ x.x)(λ x.x)?
A: Yes; the two functions would have the appropriate different types.

Q: Can the Y-combinator be typed?
A: No; the Y-combinator cannot be typed by these rules (with or without logic programming).

### Kowalski '74: Predicate Logic as a Programming Language

Propositions are written in a clausal form:

B1, ..., Bn ← A1, ..., Am

Each Ai,Bi is an atomic form:

P(t1, ..., tn)

Each ti is a term. Terms are:

• variables,
• constants,
• or function symbols applied to other terms.

Any statement in predicate logic has a clausal form.

Q: Does this syntax include negation?
A: No [not directly].

### Semantics

Each clause is interpreted as the disjunction of the Bis implied by the conjunction of the Ais, which holds for all possible variable substitutions.

(B1, ..., Bn ← A1, ..., Am) with variables x1, ..., xk
≅ ∀x1, ..., xk (B1 ∨ ... ∨ Bn ← A1 ∧ ... ∧ Am)
[∀ means "for all substitutions"]

These clauses have some special cases:

| Case | Clause | Name |
|------|--------|------|
| n=0 | ← A1, ..., Am | Denial: there are no substitutions that make A1, ..., Am true |
| m=0 | B1, ..., Bn ← | Assertion |
| n=m=0 | ← | Null clause (also drawn as a box) |

### Computation

Given some clauses, we add a "goal" to perform computation. A goal takes the form of a denial:

← A1, ..., Am

The computation finds some substitution into the variables of the goal that refutes it. The goal denies A1, ..., Am, so the substitution proves it.

Example:
(Intuition: this is the definition of cons/nil-based list-append. Append(A,B,C) holds if appending B on the end of A yields C.)

    (A1) Append(nil, Z, Z) ←
    (A2) Append(cons(X,Y), Z, cons(X,U)) ← Append(Y, Z, U)

We add a goal to these rules:

← Append(cons(a,nil), cons(b,nil), X)

What falsifies this denial? (What substitution proves Append(cons(a,nil), cons(b,nil), X)?)
X = cons(a, cons(b,nil))
This is how we compute with predicate logic.
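Read left to right, the two Append clauses compute exactly this substitution. A functional sketch in Python, assuming cons lists are encoded as nested tuples (my encoding, not from the lecture):

```python
def append(a, b):
    """Functional reading of the Append clauses over cons/nil lists."""
    if a == "nil":                        # (A1) Append(nil, Z, Z) ←
        return b
    _, x, y = a                           # a = ("cons", x, y)
    return ("cons", x, append(y, b))      # (A2) Append(cons(X,Y), Z, cons(X,U)) ← Append(Y, Z, U)
```

Running it on the goal's first two arguments yields the binding for X: cons(a, cons(b, nil)).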

Matthias: So the type-checker in the first example has four uses.
Sam: Yes. These are:

• Type checking, when a complete proposition is verified,
• Type inference, when the type is a variable,
• Environment inference, when the environment is a variable,
• and Term inference, when the term is a variable. This is equivalent to theorem proving, though real implementations overflow the stack.

Matthias: With regard to Dave's earlier claim that "you've just implemented unification in a unification-based language," you have all used unification-based languages before. What are they?
Matthias: More specifically?
A: Their type systems.
Matthias: So write a type inferencer in them!

### Horn Clauses

In our computation examples, none of our clauses had more than one atomic form (Bi) to the left of the ←. We will restrict ourselves to this case; these are called Horn Clauses.

Q: Why use only one clause on the left, if more than one are expressible?
A: There is no "nice" procedural interpretation of more than one clause on the left.

We categorize four kinds of Horn Clauses:

| Case | Clause | Name |
|------|--------|------|
| n=1, m>0 | B ← A1, ..., Am | Procedure (B is the "name"; A1, ..., Am is the "body") |
| n=1, m=0 | B ← | Assertion (B is always true) |
| n=0, m>0 | ← A1, ..., Am | Goal (to be refuted; A1, ..., Am to be proved) |
| n=m=0 | ← | Halt (a satisfied goal) |

### Procedural Interpretation

Given a goal:

← A1, ..., Ai-1, Ai, Ai+1, ..., Am

And a procedure: [for an inclusive definition of procedure, where n≥0]

B ← B1, ..., Bn

If there exists a substitution θ for the variables of Ai and B which renders them identical (this substitution may replace variables with other variables), then we introduce a new goal:

← (A1, ..., Ai-1, B1, ..., Bn, Ai+1, ..., Am) θ

When a procedure name matches part of the goal under some substitution, we apply the substitution to both goal and procedure, then replace the matching goal section with the procedure body. This process is called procedure invocation.
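Finding θ is first-order unification. A minimal sketch, under my own encoding assumptions: variables are capitalized strings, compound terms are tuples headed by their symbol, and there is no occurs check (as in most Prolog implementations):

```python
def walk(t, s):
    # Resolve a variable through the substitution s.
    while isinstance(t, str) and t[0].isupper() and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    """Return an extended substitution making a and b identical, or None.
    Capitalized strings are variables; function/predicate symbols at the
    head of tuples are only ever compared positionally."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str) and a[0].isupper():
        return {**s, a: b}
    if isinstance(b, str) and b[0].isupper():
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None
```

For example, unifying the Append goal with the (renamed) head of (A2) binds X2 := a, Y2 := nil, Z2 := cons(b,nil), and X := cons(X2,U2); the new goal then contains the instantiated body, and resolution continues.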

A computation is a sequence of procedure invocations. A computation is successful if it ends in the null clause.

Q: So why is there no negation?
A: There are top-level denials.
Matthias: This model assumes a closed world. When you can add clauses after the fact, there are problems with denial and negation. No one knows how to do negation "right".

Q: Can this diverge?
A: Yes. Consider: Append(A,B,C) ← Append(A,B,C)
Q: Isn't that just a tautology?
A: Yes; all of these propositions are meant as tautologies. See the Append example; every clause was a tautology by definition of appending lists.
Q: Is this process "tail-recursive"? Do propositions blow up in size if they diverge?
A: Depending on the procedure invoked, they may or may not grow in a divergence.

Q: So can we express all propositions?
Matthias: Negation is problematic to express. In the semantics given here, it can be expressed. In real logic programming implementations, there are problems.

Matthias: Real Prolog performs depth-first traversal of possibilities. There are other traversals; you need a breadth-first traversal to solve all decidable goals in bounded time.

### Example of Procedure Invocation

    (F1) Fact(0, s(0)) ←
    (F2) Fact(s(X), U) ← Fact(X,V), Times(s(X),V,U)
    (T)  Times(X, s(0), X) ←

Given the above definitions - the full definition of factorial, and one part of Times - we can compute the factorial of 2. The series of procedure invocations are given below, including the procedure label and substitution used.

| Goal | Justification |
|------|---------------|
| ← Fact(s(s(0)), A) | Initial Goal |
| ← Fact(s(0), V1), Times(s(s(0)), V1, A) | (F2) {X := s(0), U := A, V := V1} |
| ← Fact(0, V2), Times(s(0), V2, V1), Times(s(s(0)), V1, A) | (F2) {X := 0, U := V1, V := V2} |
| ← Times(s(0), s(0), V1), Times(s(s(0)), V1, A) | (F1) {V2 := s(0)} |
| ← Times(s(s(0)), s(0), A) | (T) {X := s(0), V1 := s(0)} |
| ← | (T) {X := s(s(0)), A := s(s(0))} |

The end result, as we would expect, is A = s(s(0)).

Q: Does this handle Fact(8, X)?
A: Yes, although it might need a full definition of Times.
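What a full definition of Times looks like, read functionally over Peano numerals: a Python sketch in which the clauses for Plus and the zero case of Times are my assumptions, not given in the lecture.

```python
def plus(x, y):
    # Plus(0, Y, Y) ← ;  Plus(s(X), Y, s(Z)) ← Plus(X, Y, Z)
    return y if x == "0" else ("s", plus(x[1], y))

def times(x, y):
    # Times(0, Y, 0) ← ;  Times(s(X), Y, U) ← Times(X, Y, V), Plus(Y, V, U)
    return "0" if x == "0" else plus(y, times(x[1], y))

def fact(x):
    # (F1) Fact(0, s(0)) ← ;  (F2) Fact(s(X), U) ← Fact(X,V), Times(s(X),V,U)
    return ("s", "0") if x == "0" else times(x, fact(x[1]))
```

With this, fact handles any numeral, where the single clause (T) above only covers multiplication by s(0).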

Q: Is this a proof tree? Are these clauses inference rules?
A: No, these are implications and computations.

### Nondeterminism

There are two kinds of nondeterminism present in our procedural interpretation. The first kind we have already worked around, in the example above. There is nondeterminism in the choice of procedures to invoke. What is the other kind?

Consider the following example:

← member(X, cons(a, cons(b,nil)))

What substitutions for X refute this goal?
X = a
or
X = b
So there is also nondeterminism in the results.

← member(a, X)

In this example, a logic programming language could continue to give you possible lists containing a indefinitely.

So the two kinds of nondeterminism are:

• Procedure Selection
• Result Selection

Q: Will any program processing lists try all possible lists?
A: Only if the list is an unconstrained variable.

Q: How does this nondeterminism reconcile with "unification is deterministic"?
A: Each separate result is the result of a new unification step.

### Van Emden and Kowalski '76: The Semantics of Predicate Logic as a Programming Language

Kowalski and Van Emden formulate several semantic interpretations of predicate logic. Each one defines a meaning function - called a "denotation" - for a proposition written in the logic.

One example they wrote is their operational semantics - not the same kind of operational semantics as we have seen before. In this semantics, the denotation of a proposition, D(P), is defined as follows:

D(P) = {(t1, ..., tn) | A |- P(t1, ..., tn)}

Here, A represents the set of definitions (procedure clauses given before a goal). We define |- by: X |- Y iff ∃ a refutation of X ∧ ¬ Y. This is a proof theoretic operational semantics.

Another example is the model theoretic semantics. The model theoretic denotation function is:

D(P) = {(t1, ..., tn) | A |= P(t1, ..., tn)}

What has changed? We have replaced |- with |=, so what is the difference? We need some intermediate definitions.

We define the Herbrand base of A, Ĥ(A), as the set of all atomic forms built from the predicate symbols, constants, variables, and function symbols in A.

Q: Does the Herbrand base contain negation?
A: No, it only contains atomic forms as defined before.

Q: Is the Herbrand base finite?
A: No. For instance, the Herbrand base of the factorial program would contain Fact(X,Y) for all numerals X and Y.

We define a Herbrand interpretation I as any subset of H(A). A Herbrand model of A is any Herbrand interpretation I in which A is true (all the forms in A are in I). We define M(A) as the set of all models of A.

Finally, A |= X iff X is true in every model in M(A).

In addition to defining an operational semantics, a model theoretic semantics, a fixed point semantics, and so on, Kowalski and Van Emden proved the equivalence of all of these semantics.
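The fixed point semantics mentioned here can be sketched as follows (the standard construction via the immediate consequence operator, not quoted from the paper): starting from the empty interpretation, repeatedly add every clause head whose body is already known, and take the least fixed point.

```latex
T_A(I) \;=\; \{\, B \mid (B \leftarrow A_1, \ldots, A_m)
  \text{ is a ground instance of a clause in } A
  \text{ and } \{A_1, \ldots, A_m\} \subseteq I \,\}

D(P) \;=\; \{\, (t_1, \ldots, t_n) \mid P(t_1, \ldots, t_n) \in \mathrm{lfp}(T_A) \,\}
```

The equivalence results then say this set coincides with both the proof theoretic and the model theoretic denotations.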

### Kowalski '79: Algorithm = Logic + Control

An algorithm can be decomposed into two parts.

• Logic: what to compute
• Control: how we compute it

For instance, consider the problem of sorting a list. Quicksort is a logic, but its implementation in different languages, or using different tools, would give different control. Control can affect performance, but except for the possibility of nontermination, will not change the semantics of a logic.

The breakdown into logic vs. control is not unique. For instance, we may consider "sorting a list" to be a logic, or "quicksort" to be a logic, and thus grant more importance to either logic or control.

Logic programming gives us a logic to model; the control of the language is the method used for searching the possibilities generated from the procedures. Examples are top-down search, bottom-up search, and depth-first search versus breadth-first search.