Presented by Greg Pettyjohn

Transcribed by Richard Cobbe

Recall from the previous talk that our model of processes includes action prefix (`a.P`), choice (`a.P + b.Q`), and parallel composition (`P|Q`). We also have a synchronization mechanism that uses the connection between names and conames: `a.P|a'.Q` can transition to `P|Q`. (As before, I will use `a'` to denote the coname of `a` rather than an overbar.) The combination of choices with hidden events gives us non-determinism, and this requires a sophisticated notion of process equivalence, bisimulation.

This formalism describes how a single process may evolve over time. Today, we concentrate on a formalism that describes how multiple processes may interact at a specific point in time. In this formalism, we can define a clock process as follows:

```
A1 ::= b'.A2
A2 ::= c'.A1
B1 ::= b.B2
B2 ::= tick'.B3
B3 ::= c.B4
B4 ::= tock'.B1
clock = new(b,c)(A1|B1)
```

Much of this is straightforward; each individual process is defined in terms of sequential composition. (Because these definitions are mutually recursive, the formalism must provide some sort of fixed-point machinery, but we omit that here for simplicity's sake.)

The only new construction here is `new(b,c)(A1|B1)`. This restricts the visibility of the symbols in the first `argument' (here, `b` and `c`) to the process in the second `argument' (here, `A1|B1`). Since, in our example, the symbol `b` is not visible outside the composition `A1|B1`, any occurrence of `b` (and its coname `b'`) within `A1` and `B1` *must* indicate an interaction between the two processes. Any other use would require the symbol to be present outside the composition, which is expressly forbidden by the `new`. Symbols not specified in `new`'s list, like `tick'`, are visible outside the composition.
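To make this concrete, here is a small Python sketch (my own, not from the talk) that steps the clock by hand. Synchronizations on the restricted names `b` and `c` are silent tau steps, so only `tick'` and `tock'` ever reach an observer.

```python
# Hypothetical simulation of clock = new(b,c)(A1|B1). Each system state is a
# pair (A-state, B-state); the transition table below enumerates the only
# step available from each configuration.
STEPS = {
    ("A1", "B1"): ("tau",   ("A2", "B2")),  # A1 and B1 synchronize on b/b'
    ("A2", "B2"): ("tick'", ("A2", "B3")),  # B2 emits tick' to the outside
    ("A2", "B3"): ("tau",   ("A1", "B4")),  # A2 and B3 synchronize on c/c'
    ("A1", "B4"): ("tock'", ("A1", "B1")),  # B4 emits tock' to the outside
}

def observable_trace(state, n_steps):
    """Run the clock, recording only actions visible outside new(b,c)."""
    trace = []
    for _ in range(n_steps):
        label, state = STEPS[state]
        if label != "tau":
            trace.append(label)
    return trace

print(observable_trace(("A1", "B1"), 8))  # -> ["tick'", "tock'", "tick'", "tock'"]
```

Note how the hidden `b`/`c` synchronizations disappear from the trace, which is exactly the observer's-eye view developed next.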

Now, consider the `clock` process described above from the perspective of an external observer. Since `b` and `c` (and their conames) are local to this process, we could simply model `clock` as follows:

```
clock ::= tick'.tock'.clock
```

And, in some sense, we could replace the actual definition of `clock` with any implementation that exhibited this externally observable behavior.

Dave asked if all possible implementations of this interface are actually observationally equivalent. Greg replied that these two implementations could be shown observationally equivalent, but that this did not necessarily hold in general.

In his 1999 book *Communicating and Mobile Systems*, Milner
defines his calculus by specifying 5 things:

- The syntax of process descriptions.
- A notion of a process context.
- Process congruence relations.
- Structural congruence relations.
- Reaction and transition rules.

We have seen the process description syntax above. Define the process contexts as follows:

```
C ::= [] | a.C + M | new a C | C|P | P|C
```

`M` was not defined in the talk; presumably it's an arbitrary term in the language. Using these contexts, we can define a notion of a process congruence relation. An equivalence relation R on processes is a congruence relation if and only if, for all processes `P` and `Q` and contexts `C`, `P` R `Q` implies `C[P]` R `C[Q]`. As mentioned above, this is very similar to the definition of observational equivalence used with the lambda calculus. (Note, by the way, that Milner actually provides a much lower-level definition; the above statement is a theorem, which Milner proves follows from his definitions.)

Milner also defines a structural congruence. This is essentially a high-powered alpha-substitution model. Using the structural congruence relation, we can rewrite an arbitrary term into a standard form. Given this, we can get away with fewer reduction rules, since we no longer have to consider all possibilities. The transformations allowed by structural congruence include changing bound names, reordering terms in a sum, and other similar equivalences.
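As a sketch of the standard-form idea, here is a toy normalizer (my own illustration, reusing the nested-tuple term encoding; alpha-renaming of bound names is omitted) that canonicalizes the reorderings structural congruence allows:

```python
# Hypothetical sketch: sums and parallel compositions are commutative and
# associative up to structural congruence, so sorting their operands gives
# every term a standard form. (Changing bound names is omitted here.)
def normalize(term):
    if not isinstance(term, tuple):
        return term
    op, *args = term
    args = [normalize(a) for a in args]
    if op in ("sum", "par"):           # + and | may be freely reordered
        args = sorted(args, key=repr)  # pick one canonical operand order
    return (op, *args)

# a.P + b.Q versus b.Q + a.P: same standard form, hence congruent
t1 = ("sum", ("prefix", "a", "P"), ("prefix", "b", "Q"))
t2 = ("sum", ("prefix", "b", "Q"), ("prefix", "a", "P"))
print(normalize(t1) == normalize(t2))  # -> True
```

This is what buys the economy in reduction rules: a rule need only match the standard form, not every reordering of it.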

Finally, Milner defines the dynamic behavior of his system through a combination of reaction and transition rules. These are inference rules that define a relation which contains all possible transitions on the system. The relation defined by the reaction rules (which we will call the 'reaction relation') defines only how two or more processes interact with one another; they do not explain how actions can be exposed to the environment.

The relation defined by the transition rules (the 'transition relation') defines the full labeled transition system for the set of processes (similar to a Kripke structure for the system). This relation is an extension of the reaction relation; that is, it contains all transitions in the reaction relation plus some additional ones. Milner proves that the reaction relation corresponds exactly to the tau-transitions in the transition relation.

(Note: the terms 'reaction relation' and 'transition relation' in the previous paragraphs are the scribe's invention, for the sake of brevity, and may or may not occur in the literature.)
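A minimal Python sketch (my own, covering only prefix and parallel composition) shows where the two relations meet: a tau transition arises exactly when the two sides of a `par` offer complementary labels, which is the synchronization the reaction rules describe.

```python
# Hypothetical transition-rule engine. Terms: ("nil",), ("prefix", label, P),
# ("par", P, Q). Conames carry a trailing apostrophe, as in the notes.
NIL = ("nil",)

def complement(label):
    return label[:-1] if label.endswith("'") else label + "'"

def transitions(p):
    """Yield (label, successor) pairs for process term p."""
    if p[0] == "prefix":                      # a.P --a--> P
        _, label, cont = p
        yield (label, cont)
    elif p[0] == "par":                       # P|Q
        _, left, right = p
        for lab, l2 in transitions(left):     # left side moves alone
            yield (lab, ("par", l2, right))
        for lab, r2 in transitions(right):    # right side moves alone
            yield (lab, ("par", left, r2))
        for lab, l2 in transitions(left):     # the two sides synchronize
            for lab2, r2 in transitions(right):
                if lab2 == complement(lab):
                    yield ("tau", ("par", l2, r2))

system = ("par", ("prefix", "a", NIL), ("prefix", "a'", NIL))
for label, succ in transitions(system):
    print(label, succ)
```

Here `a.0 | a'.0` has three transitions: `a`, `a'` (actions exposed to the environment, present only in the transition relation), and `tau` (the synchronization, which is the one step the reaction rules would produce).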

In our last talk, we closed with a definition of bisimulation, which allows us to equate processes in the face of nondeterminism. So, we conclude this section by showing how bisimulation fits into this new formalism. Milner proves that the bisimulation relation is, in fact, a process congruence as defined above.
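For a finite labeled transition system, bisimilarity can be computed as a greatest fixed point: start from all pairs of states and repeatedly discard any pair where one side has a transition the other cannot match. A sketch (mine, not from the talk), using the two-state clock against a renamed copy of itself:

```python
# Hypothetical checker: lts maps each state to a list of (label, successor)
# pairs; the result is the largest relation closed under the matching game.
def bisimilar(lts):
    """All pairs (p, q) of states of lts that are strongly bisimilar."""
    rel = {(p, q) for p in lts for q in lts}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            matches = (
                all(any(b == a and (p2, q2) in rel for (b, q2) in lts[q])
                    for (a, p2) in lts[p])        # q matches every p-move
                and all(any(b == a and (p2, q2) in rel for (b, p2) in lts[p])
                        for (a, q2) in lts[q]))   # p matches every q-move
            if not matches:
                rel.discard((p, q))
                changed = True
    return rel

# Two syntactically different clocks with the same tick/tock behavior
lts = {"P1": [("tick", "P2")], "P2": [("tock", "P1")],
       "Q1": [("tick", "Q2")], "Q2": [("tock", "Q1")]}
print(("P1", "Q1") in bisimilar(lts))  # -> True
```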

In the graphical representation of processes (not drawn here), two processes are connected by a line if the processes share an event. That is to say, one process has a name `a` and the other process has its coname `a'`. Milner establishes the convention that conames indicate output, while bare names indicate input. This suggests the idea of communications channels, and Milner defines just such an extension.

In this formalism, a line between two processes corresponds to a communications channel between those processes. In addition to synchronizing on the channel's name, the processes may also send data across the channel. Further, the data sent across channels will themselves be channels. This provides a significant extension over raw CCS. In CCS, the only communications "channels" available to the processes are those whose existence we can infer statically, from the algebraic descriptions of the processes' behavior. With this new ability to send channels over channels, however, we can create channels dynamically, as demonstrated in the following example.

Consider the following process definitions:

```
P ::= b'<a>.P' + a'<5>.P'
Q ::= b(y).Q'
R ::= a(x).R'
S ::= new(a,b)(P | Q | R)
```

(In Milner's notation, we surround a value written to a channel with angle brackets, and we surround the identifier bound to the value read from the channel with parentheses.) In this system's initial configuration, there is a communications channel between P and Q (due to the shared name `b`), and a communications channel between P and R (through the name `a`).
In the first of two possible transitions, `P` writes `5` to channel `a`, and `R` reads it. `P` transitions to `P'`, and `R` transitions to `R'[5/x]` (where the square brackets indicate standard substitution). Further, no communications channels are left, since there are no further shared names.

In the second transition, `P` writes `a` to channel `b`, and `Q` reads it. Again, `P` transitions to `P'`, and `Q` transitions to `Q'[a/y]`. However, since we passed the name `a` to `Q`, there is now a (potential) communications channel between `Q'` and `R`.
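The second transition can be sketched in Python with queues standing in for channels. This is my illustration, not Milner's, and it assumes `Q'` goes on to use the channel it received, which the definitions above leave unspecified:

```python
# Hypothetical sketch of channel mobility: Queue objects play the role of
# channels, and sending a channel over a channel is just enqueueing the
# Queue object itself.
from queue import Queue

a, b = Queue(), Queue()  # the two statically known channels

b.put(a)                 # P : b'<a> -- send the channel a itself over b
y = b.get()              # Q : b(y)  -- Q' now holds the channel a as y
assert y is a            # the received datum *is* a channel

y.put(5)                 # Q' can now write to R over the acquired channel
x = a.get()              # R : a(x)  -- a Q'-to-R channel now exists
print(x)                 # -> 5
```

The last two lines are the dynamically created channel in action: `Q'` and `R` communicate over `a` even though `Q`'s definition never mentioned that name.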

As a consequence of the ability to communicate across channels, we now have an abstraction mechanism. We can define a function-like thing in this system, and these `things' are first-class. Further, we now have an idea of values flowing through our system, so the processes in this system now start to look like physical locations.