Is AP a sound methodology compared to OO?

We assume that the reader is familiar with the strategy graph paper.

AP has an Artificial Intelligence flavor: it automatically transforms programs when the context (the class graph) changes. Some people see this power as a danger. We point out that when a proper software engineering process is followed there is no danger. We show that, when AP and OO are compared, AP tools give static error messages similar to those of OO tools, and that the effort during evolution is usually smaller with AP than with OO. An OO solution is also an AP solution (although maybe a bad one), so AP will never require more effort than OO.

We consider the case of a class graph G1 for which a traversal T(G1,S1) has been defined. S1 is some traversal specification for G1, written in English, for example. Now we evolve G1 to G2 and we need to evolve T(G1,S1) to T(G2,S2) according to some traversal change specification TS. T(G,S) is a set of methods that define a detailed traversal.
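
To make the setup concrete, here is a minimal sketch of a class graph G1, written as plain Java classes, together with an informal traversal specification S1. (The Company/Division/Employee classes are hypothetical and not taken from the viewgraphs; any small containment hierarchy would do.)

    // Class graph G1: Company -> Division -> Employee -> Salary.
    import java.util.List;

    class Company  { List<Division> divisions; }
    class Division { List<Employee> employees; }
    class Employee { Salary salary; }
    class Salary   { double amount; }

    // S1, stated informally: "from Company, reach every Salary."
    // T(G1,S1) is the set of traversal methods that realize S1 on G1.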

First we consider the evolution in OO. We need to manually produce T(G1,S1) and manually modify it to T(G2,S2). If we only modify G1 to G2 and keep T(G1,S1), the Java compiler will report Errors(G2,T(G1,S1)), which indicate where the old traversal breaks in the new class graph. The Java compiler will also tell us Compatible(G2,T(G2,S2)), which says whether the new traversal is compatible with the new class graph. Finally, we can think of a tool that prints the differences Diff(T(G1,S1),T(G2,S2)) between the two traversals.
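
As an illustration, using the hypothetical classes from the sketch above, a hand-written detailed traversal T(G1,S1) might look as follows. This is only a sketch of the OO situation, not an example from the viewgraphs.

    // Hand-written detailed traversal T(G1,S1): visit every Salary
    // reachable from a Company and sum the amounts.
    // Each method encodes one edge of the class graph.
    class SalarySummer {
      double total = 0;

      void traverse(Company c)  { for (Division d : c.divisions) traverse(d); }
      void traverse(Division d) { for (Employee e : d.employees) traverse(e); }
      void traverse(Employee e) { traverse(e.salary); }
      void traverse(Salary s)   { total += s.amount; }
    }

    // Evolution to G2: suppose a Department class is inserted between
    // Division and Employee, so Division no longer has an "employees"
    // field.  traverse(Division) stops compiling, and that compile error
    // is what Errors(G2,T(G1,S1)) denotes: it points at the place where
    // the old traversal breaks in the new class graph.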

Now we consider the evolution in AP. We need to manually produce S1 and, based on TS, the traversal change specification, produce S2. S1 and S2 are formalizations of the informal traversal specifications. To check the strategies for compatibility with the class graphs, we use Compatible(G,S): G is compatible with S if for every edge (x,y) in S there is a path in G from x to y. To find the places where the old traversal breaks in the new class graph, we compute Errors(G2,Graph(TG(G1,S1))). TG(G,S) is the traversal graph of G and S, and Graph(TG(G,S)) is the smallest subgraph of G that contains all the paths specified by TG(G,S). Errors(G2,G1) checks, for every edge (A,B) of G1, whether G2 has such an edge; if not, an error is reported. Finally, a traversal graph comparison tool prints the differences between the traversal graphs: Diff(TG(G1,S1), TG(G2,S2)).
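
Both checks are simple graph computations. The sketch below is illustrative only (it is not the AP Library code): it represents a class graph or strategy graph as a map from node names to successor names, implements Compatible(G,S) as an edge-by-edge reachability test, and Errors(G2,G1) as an edge-containment test.

    import java.util.*;

    class GraphChecks {
      // A graph is a map from each node to the set of its direct successors.

      // Is there a path in g from "from" to "to"?
      static boolean reachable(Map<String, Set<String>> g, String from, String to) {
        Deque<String> work = new ArrayDeque<>(List.of(from));
        Set<String> seen = new HashSet<>();
        while (!work.isEmpty()) {
          String n = work.pop();
          if (n.equals(to)) return true;
          if (seen.add(n)) work.addAll(g.getOrDefault(n, Set.of()));
        }
        return false;
      }

      // Compatible(G,S): every edge (x,y) of the strategy S must be backed
      // by a path from x to y in the class graph G.
      static boolean compatible(Map<String, Set<String>> g, Map<String, Set<String>> s) {
        for (Map.Entry<String, Set<String>> e : s.entrySet())
          for (String y : e.getValue())
            if (!reachable(g, e.getKey(), y)) return false;
        return true;
      }

      // Errors(G2,G1): report every edge (A,B) of G1 (for example, of
      // Graph(TG(G1,S1))) that is missing from G2 -- these are the places
      // where the old traversal breaks in the new class graph.
      static List<String> errors(Map<String, Set<String>> g2, Map<String, Set<String>> g1) {
        List<String> missing = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : g1.entrySet())
          for (String b : e.getValue())
            if (!g2.getOrDefault(e.getKey(), Set.of()).contains(b))
              missing.add(e.getKey() + " -> " + b);
        return missing;
      }
    }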

See viewgraphs 14 and 15 in the AP Sound Methodology Viewgraphs for a comparison of the AP and OO approaches. The example in viewgraph 15 provides a concrete context. We notice that for checking the output of Errors(), for difference checking, and for testing, the AP and OO approaches require comparable human effort. For writing the traversals and for compatibility checking, however, AP usually requires less human effort. The reason is that with AP we only need to get the traversal specification correct, not the detailed traversal. Once the traversal specification is correct, the sometimes much longer detailed traversal is generated correctly. With OO there is the potential for mistakes at every step of the detailed traversal, and some traversals are not easy to get correct when they are written manually.

Regarding static checking, AP produces information similar to that produced by OO.

In both the AP and OO approach there is a danger of wrong traversals that are only detected through testing. This testing effort is the same in both cases.

It is important not to abuse AP. An abuse of AP would be not to retest after a class graph change. Reusing a piece of software (e.g., a strategy) in a new context (a new class graph) requires retesting. This is well known in the testing literature. The antidecomposition adequacy rule by Weyuker states the need for retesting as follows: there exists a program P and a component Q of P such that T is adequate for P, T' is the set of vectors of values that variables can assume on entrance to Q for some t in T, and T' is not adequate for Q.

This rule applies to both program-based (white-box) and specification-based (black-box) testing. The reasons for the rule are as follows:

program-based testing: some part of Q might not be reachable in the context of P but might be reachable in other contexts.

specification-based testing: some part of the specification of Q might not be utilized in the context of P but might be utilized in other contexts.

The application to AP is as follows: consider an adaptive program Q used in a program P (P is determined by a class graph and Q). If P is adequately tested, Q might not be adequately tested. For every change to the class graph, the adaptive program needs to be retested, and new test cases might be needed for each new class graph. This situation is similar to object-oriented programming: an inherited method needs to be retested in the context of the inheriting subclass.
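
As a small illustration (hypothetical classes again, no test framework assumed), retesting after the evolution to G2 means re-running the traversal-dependent checks on object structures built from G2, and typically adding test objects that exercise the new parts of the class graph:

    import java.util.List;

    class RetestAfterEvolution {
      // Evolved class graph G2 with a Department inserted between
      // Division and Employee.
      static class Salary     { double amount; Salary(double a) { amount = a; } }
      static class Employee   { Salary salary; Employee(double a) { salary = new Salary(a); } }
      static class Department { List<Employee> employees; Department(List<Employee> e) { employees = e; } }
      static class Division   { List<Department> departments; Division(List<Department> d) { departments = d; } }
      static class Company    { List<Division> divisions; Company(List<Division> d) { divisions = d; } }

      // Stand-in for the traversal regenerated from the strategy on G2.
      static double sumSalaries(Company c) {
        double total = 0;
        for (Division dv : c.divisions)
          for (Department dp : dv.departments)
            for (Employee e : dp.employees)
              total += e.salary.amount;
        return total;
      }

      public static void main(String[] args) {
        // A test object that exercises the new Division -> Department edge
        // of G2; a suite written for G1 would never build a Department.
        Company c = new Company(List.of(new Division(List.of(
            new Department(List.of(new Employee(1000), new Employee(2000)))))));
        System.out.println(sumSalaries(c) == 3000.0 ? "retest passed" : "retest FAILED");
      }
    }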

Another abuse of AP is to ignore the output from Errors() and Diff() because the regenerated program compiles. Uncritical acceptance of the analogical transformation done by the AP compiler is an abuse of AP.

To promote proper use of AP, the AP Library provides both the Errors(G2,Graph(TG(G1,S1))) computation (new feature as of Jan. 14, 1999) and the traversal graph comparator Diff(TG(G1,S1), TG(G2,S2)) (currently being implemented).

In Demeter/Java we have so far printed the traversal graphs Graph(TG(G1,S1)) and Graph(TG(G2,S2)), and users could then perform the Errors() and Diff() checks manually. Direct tool support is an improvement.
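
As a rough sketch of what such a comparator does (illustrative only, not the Demeter/Java implementation), the two traversal graphs can be compared as edge sets, printing the edges that occur in only one of them:

    import java.util.*;

    class TraversalGraphDiff {
      // Each traversal graph is given here as a set of "A -> B" edge
      // strings, e.g. the edges of Graph(TG(G1,S1)) and Graph(TG(G2,S2)).
      static void diff(Set<String> oldEdges, Set<String> newEdges) {
        Set<String> removed = new TreeSet<>(oldEdges);
        removed.removeAll(newEdges);
        Set<String> added = new TreeSet<>(newEdges);
        added.removeAll(oldEdges);
        removed.forEach(e -> System.out.println("only in old traversal: " + e));
        added.forEach(e -> System.out.println("only in new traversal: " + e));
      }

      public static void main(String[] args) {
        diff(Set.of("Company -> Division", "Division -> Employee", "Employee -> Salary"),
             Set.of("Company -> Division", "Division -> Department",
                    "Department -> Employee", "Employee -> Salary"));
      }
    }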

Like any powerful tool, traversal strategies have an amplification surprise factor: a small change in a strategy may result in a big change in the traversal. If the traversal is written manually, this cannot happen. As the analysis above shows, however, the static information available from the tools, together with the required testing, helps to detect any wrong traversals.

The idea of participant graphs, as described in the OOPSLA paper http://www.ccs.neu.edu/research/demeter/biblio/components.html, helps to solve the class graph evolution problem. When the class graph to which a participant graph is mapped changes, it is necessary to check whether the mapping needs to be updated. What helps is that checking a mapping expressed in terms of traversal strategies is easier than checking general code. Class graph evolution needs two steps in this context: first check whether the mapping is correct, and then check whether the code works correctly with that mapping. It should be possible to put enough constraints into the adaptive program so that when the path mapping is correct the program will work correctly. (The idea in this paragraph was pointed out by Daniel Jackson.)

In summary, AP makes it easier to adapt a program to a new class graph and reduces the likelihood of human error by automating the generation of a detailed traversal from a specification of the traversal. However, AP does not free the programmer from a careful validation of the produced code, just as if the code had been produced manually. AP helps you to quickly produce code that is a reasonable generalization of your current code, but it is your responsibility to check the correctness of the generalization.

Professor Karl J. Lieberherr
College of Computer Science, Northeastern University
Cullinane Hall, Boston, MA 02115
Internet: lieber@ccs.neu.edu
Fax: (617) 373 5121