This is a selection from messages that Dr Bill Richter has posted to comp.lang.scheme. [Editor's note: To make this accessible to a wider audience, I have inserted many editor's notes.] ================================================================ 6 June 2004: Schmidt defines DS to be the study of semantic functions curly-E : Expression-of-our-Language ----> Some set where the 2 sets and the function are mathematically defined. You know what this means! And you know that curly-E can't be a computable function by the Halting problem (i.e. Goedel Incompleteness). [Editor's note: In almost all denotational semantics, curly-E is computable.] So any kind of semantics becomes DS once you fully mathematize it. In that sense, the OpS (as studied I think in EoPL) is also DS. [Editor's note: Operational semantics (OpS) is every bit as mathematical as denotational semantics (DS), but OpS and DS are quite different things.] So OpS must mean the subset of DS where you concentrate on interpreters, and not their full mathematization.... [Editor's note: Not quite. An operational semantics is a fully mathematical characterization of programs' evaluation/execution--- hence the close connection between OpS and interpreters, which are formal machines that evaluate/execute programs. A denotational semantics consists of definitions of semantic functions that assign some kind of mathematical meaning to each phrase of a programming language. Since the denotation (meaning) of a phrase that contains variables usually depends on its context, most denotations are functions that abstract over the relevant contextual information, e.g. the values of the phrase's free variables.] ================================================================ 16 June 2004: Maybe I should clarify about SICP. They stress: to evaluate a combination, first evaluate the arguments, and the 1st must return a procedure value (one of Shriram's triple), and then make a new environment... 
Sounds like compositionality to me. [Editor's note: In semantics, the word "compositional" means defined by structural induction over syntax. The SICP interpreter is not compositional.] ================================================================ 18 June 2004: > E[{x x}](e1) = E{{x x}](e1) > > You call that a definition? You caught me again! Thanks.... ================================================================ 20 June 2004: I see now I've been wrong about the big-step OpS in PLAI! Many apologies to Shriram, and thanks to everyone for helping me out.... So there's sense to saying I've got to use CPOs or Scott models. It's, "You've gotta do something! You can't just say you're done!" We've gotten into other topics, which I'd be happy to discuss further, but the original impetus is now dead: I was wrong about PLAI. ================================================================ 21 June 2004: > That's like saying we can't compute pi using a computer, because of > the Halting Problem. Well, they're both uncomputable, and that's all I meant.... [Editor's note: pi is a computable real number.] ================================================================ 22 June 2004: > ``If a property is possessed by 0 and also by the successor of > every natural number which possesses it, then it is possessed by > all natural numbers.'' > > and this, of course, is the foundation for mathematical induction. Joe, that's an important mistake. The Peano induction axiom is only for statements expressed in the language of Peano arithmetic. And the Peano language is so limited that you can't get a lot of induction that way. So the Peano induction axiom is NOT the foundation for mathematical induction. The foundation is the much more powerful ZFC axioms, and it yields the statement I think was in my Algebra II text in High School: Given mathematical statements S(n), n = 0, 1, 2,... if S(0) is true, and S(n) => S(n+1), then S(n) is true for all n. 
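[Editor's note: The induction schema just stated can be illustrated concretely. Here is a small Python sketch (Python rather than Scheme, purely for brevity; all names are invented for illustration) that checks a classic instance, S(n): the sum of the first n odd numbers equals n^2. A finite check like this is evidence, not a proof.]

```python
# S(n): the sum of the first n odd numbers equals n^2.
def sum_of_odds(n):
    return sum(2 * k - 1 for k in range(1, n + 1))

def S(n):
    return sum_of_odds(n) == n * n

assert S(0)           # base case S(0)
for n in range(100):  # finite spot-check of the step S(n) => S(n+1)
    if S(n):
        assert S(n + 1)
```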
You probably know all this, because CS folks generally know about nonstandard models of (Peano) arithmetic. That is, not every element of the model is a successor of 0.... [Editor's note: The second-order Peano axioms, whose induction axiom Joe Marshall stated above (within quotation marks), are categorical: they have no nonstandard models. When the Peano axioms are stated in first-order logic, the induction axiom becomes the axiom schema that Bill Richter learned in high school, which can only be used with properties that can be expressed in first-order logic. This is what gives rise to the nonstandard models of arithmetic. Boolos and Jeffrey explain this in their textbook, which Bill began to cite in subsequent posts.] ================================================================ 22 June 2004: > ...Suppose further that r0, k0, > and s0 are some particular environment, continuation, and store. Then > the partial function from expressions to integers that is defined by > > GOOD [[ e ]] = E [[ e ]] r0 k0 s0 > > is computable, Will, either I disagree strongly, or I don't understand you.... Sounds interesting. But I say the Halting problem says GOOD is not a computable function.... But there's NO way I'll believe that Schmidt defines compositional to mean proven by structural induction. That's not the way mathematical definitions are phrased.... ================================================================ 23 June 2004, quoting Clinger: > It's really the function's definition that is compositional. I'm sure this is false.... ================================================================ 25 June 2004: > If you extract the definition of primitive recursion from this > context, you will see that it has everything to do with the form of > a function definition. I shouldn't have carped at all. Sure, you're right: finitely presented groups and primitive recursive functions are defined in terms of their definitions. 
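[Editor's note: The point that primitive recursive functions are characterized by the *form* of their definitions can also be made concrete. The Python sketch below (names invented for illustration) implements the primitive recursion schema: given g and h, it produces the f satisfying f(0, x) = g(x) and f(n+1, x) = h(n, f(n, x), x).]

```python
# Primitive recursion as a *form* of definition: the schema itself
# is the object of interest, not any particular function it yields.
def primrec(g, h):
    def f(n, x):
        acc = g(x)            # f(0, x) = g(x)
        for k in range(n):    # f(k+1, x) = h(k, f(k, x), x)
            acc = h(k, acc, x)
        return acc
    return f

# Addition and multiplication arise as instances of the schema.
add = primrec(lambda x: x, lambda k, acc, x: acc + 1)
mul = primrec(lambda x: 0, lambda k, acc, x: acc + x)
```

Two different functions, one definitional form; that is the sense in which such functions are "defined in terms of their definitions".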
================================================================ 26 June 2004: Will, I found strong evidence for your position in a 1992 book by Nielson & Nielson, "Semantics with Applications". Anyone like this book? Very early in the book they define compositionality exactly as you did, and on p 85 they say, "S_{ns} and S_{sos} are *not* denotational definitions because they are *not* defined compositionally [i.e. by structural induction]." I still contend my interp of C-F is correct, that the structural induction definition is equivalent to equations for the function E to satisfy, involving extra functions Phi_P, however E is constructed. But I see I've got real resistance to overcome.... ================================================================ 28 June 2004: No, if we really define E, then this definition of Phi is fine. Or it could be: that doesn't look right to me. And of course the reason to believe in compositionality is that it sounds like Scheme: to evaluate a combination, first evaluate the arguments, etc.. [Editor's note: See the editor's note for 16 June 2004.] ================================================================ 1 July 2004: Will, we're having a communication problem. I think I'm writing too much, and you're not reading it, but just skipping to the end. > Re-defining the word "compositional" doesn't make your semantics > compositional. Sure, but I proved, using Schmidt's Theorem 3.13, that my definition of compositional was equivalent to yours. > In particular, your bog-standard small-step operational semantics > doesn't support proofs by structural induction, as a compositional > semantics would. Well, that's perhaps a weakness of my semantic function.... [Editor's note: As can be seen from the excerpts for 16 June and 28 June, Bill did not even understand what compositional means. As can be seen from the excerpts for 22 June, 23 June, and 26 June, Bill was rejecting the standard definition of compositional.] 
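[Editor's note: For readers who want to see what "defined by structural induction over syntax" looks like in code, here is a minimal Python sketch of a compositional semantic function for a calculator language of numerals, sums, and products. The tuple encoding of syntax is invented for this sketch; the essential point is that the meaning of each phrase is computed only from the meanings of its immediate sub-phrases.]

```python
# A semantic function defined compositionally, i.e. by structural
# induction over syntax: the meaning of a compound phrase depends
# only on the meanings of its sub-phrases.
def E(expr):
    if isinstance(expr, int):      # numeral: its meaning is itself
        return expr
    op, lhs, rhs = expr            # compound phrase ('+' or '*')
    m, n = E(lhs), E(rhs)          # meanings of the sub-phrases
    if op == '+':
        return m + n
    if op == '*':
        return m * n
    raise ValueError(op)

assert E(('+', 2, ('*', 3, 4))) == 14
```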
================================================================ 2 July 2004, quoting Clinger: You claim to have proved that you are getting close to the point at which you could claim that some as-yet-unspecified correspondence holds between your semantics and a compositional semantics. When you actually specify that correspondence (e.g. the f_ij functions that you claim you might be able to find), I will comment upon it. I did that 2 long posts ago! And above. The post to which you gave a long, thoughtful reply, though you seem to have only started reading toward the end.... ================================================================ 3 July 2004: > I did that 2 long posts ago! And above. No you didn't. Right, I goofed.... On 29 June you gave a compositional denotational semantics for a four-function calculator language of arithmetic expressions over numerals. There was no recursion in that language, and your semantics was exactly the same as everyone else's. Right. Sorry again. A few minutes later you posted another message that concluded: > But I'm close to saying that any small-step OpS (if I understand > the phrase) is easily recast as DS, if we can only prove > "conservation laws", i.e find f_ij and prove the compositionality > equations. You have never actually stated any f_ij that give an alternative compositional definition of a semantic function originally defined by your style of standard reduction. Never. Not once. That's technically true, as I haven't completely defined an E yet.... ================================================================ 4 July 2004: If you don't stop saying you're "close" to saying something, I'm going to scream. Will, I'm 100% done with a proof that I can produce compositionally defined functions by: 1) first defining some maps EE_i: B_i ---> D_i 2) then defining the f_ij in terms of the EE_i 3) then using the uniqueness of Schmidt's 3.13 to show that the EE_i are defined compositionally by the f_ij. 
[Editor's note: To prove uniqueness via Schmidt's Theorem 3.13, one must first prove that the EE_i satisfy the equations of that theorem. For Richter's perversely complex f_ij, this was not trivial.] ================================================================ 4 July 2004, addressing Daniel Wang and referring to his argument with Matthias Blume and others concerning the law of excluded middle: Do you want to use the DS approach in Schmidt's book, of CPO minimal fixed points and Scott models of LC? If so, you must work in ZFC! MB takes it for granted that you're working with ZFC-caliber axioms, so he can't understand what you're havering about. I think. [Editor's note: None of the mathematics in Schmidt's book depends on the axiom of choice. None of the mathematics in Schmidt's book depends on arcane details of an axiomatic foundation for mathematics.] ================================================================ 6 July 2004: > Will, I'm 100% done with a proof that I can produce > compositionally defined functions by... No, Bill, you've been leaving out the important part: Showing that the unique functions determined by your f_ij are equal to your EE_i. As I keep saying, this follows from the uniqueness of Schmidt's 3.13. But thanks for a substantive objection. Let's go through this: I'm asserting that my EE_i's & f_ij's satisfy these equations: EE_i( Option_ij ) = f_ij ( EE_ij1(S_ij1),..., EE_ijk(S_ijk) ) I mean this is the assumption. In specific cases, we'll have to show that the EE_i actually satisfy these equations. The mere fact that f_ij are defined in terms of the EE_i does not prove these equations.... Since your equality wasn't a definition, you must have been claiming it as a theorem. That is an empty claim: you haven't proved any such theorem. Right, but it's not all my fault.... You've been using the same names for E and E', thereby assuming at the outset that they are equal. I thought I was quite clear above, distinguishing between EE_i and BB_i. 
I've always been that clear in Schmidt 3.13 discussions. The mathematical term for that is "cheating". You aren't allowed to use notation that assumes the theorem you haven't yet proved. I don't think I did that.... In fact, you have not yet defined a complete set of f_ij for any language that can express iteration or recursion. Absolutely! Then, and only then, can you prove that the unique functions whose existence is asserted by Theorem 3.13 coincide with the E that you defined by reduction. Right.... ================================================================ 6 July 2004, quoting Marshall quoting Richter: > I don't want to try to define them. Then you have not proven that Schmidt's Theorem 3.13 applies and there is no reason to believe that your equations are compositional. No, you're only reading the proof of the existence part of 3.13. There's no structural induction in the statement of Schmidt's 3.13. ================================================================ 8 July 2004: Apart from the terminal symbols handled above, the compositionality equations are only for M in Exp: E: Expr ---> Value_bottom E[x] = x E[n] = n E[(lambda (x) M)] = (lambda (x) M) E[(M N)] = Phi( E[M], E[N]) E[ (alpha M N) ] = alpha(m, n), if E[M] = m, E[N] = n; bottom, otherwise Any solution of these equations in E and Phi will show that E is Bill-compositional, which we have agreed to assume is equivalent to Will-compositionality. I'll first define Phi in terms of E, and then define E, and then show these equations are actually satisfied. So Phi: Value x Value ---> Value_bottom will be defined (after defining E) by Phi(U, V) = E[ P[y <- V] ], if U = (lambda (y) P); bottom, otherwise Phi is perfectly well-defined now in terms of E. The compositionality equation for (M N) now becomes E[(M N)] = E[ P[y <- E[N]] ], if E[M] = (lambda (y) P); bottom, otherwise [....] Now we'll define a map E: Expr ---> Value_bottom by the totalization of F-F's |-->>_v, not quite eval_v^s. 
F-F explain on p 53 that |-->>_v is the transitive closure of the standard reduction arrow, which reduces the leftmost beta_v redex. We define E[M] = V, if M |-->>_v V bottom, otherwise. [Editor's note: Richter is using the same letter E for two entirely different purposes. E is the existentially quantified unknown in his compositionality equations, and is also the totalization of the multi-step transition relation |-->>_v. Richter has also expunged the existential quantification of E from his exposition, so it is hard for a reader to tell which occurrences of E are which. This amounts to assuming the theorem he is trying to prove.] ================================================================ 8 July 2004: > And I won't give any options for either P or y. I'll give a list of > terminal symbols. > > P ::= exp_1 | exp_2 | ... > > y ::= v_1 | v_2 | v_3 | ... Notice, folks, that Bill is changing the grammar of his language, yes! and that his modified grammar has an infinite number of productions. I think not. The list P ::= exp_1 | ... consists entirely of terminal symbols, just like for y and x and n.... Here's something else you have done: You have given distinct denotations to (lambda (x) (+ 1 2)) and (lambda (x) (+ 2 1)) I certainly meant to. They're certainly distinct in LC_v.... ================================================================ 8 July 2004: P ::= E But that's not what I meant. I meant to enumerate the countably infinite set E as a list of terminal symbols. Good thing we don't have the real line yet, or it wouldn't even be countable.... ================================================================ 11 July 2004: But what about real numbers? I think it would be interesting and useful to specify real numbers with finitely many productions. But you could only hope to capture the real numbers you'd "input" in a Scheme program. That is, it'll be countable, won't it? So I'd rather ignore that issue, and instead say r in Real r = ... | pi | ... 
| e | ... | all other real numbers So I'd have an uncountable number of productions. But I wouldn't have to sort out the countable subset that we type into Scheme programs.... [Editor's note: By definition, a context-free grammar has only a finite number of productions.] ================================================================ 16 July 2004: He then defines fi as the composition of f with an identity function. That's false, as I just showed, and I corrected you last post, and you responded "Hence this distinction is unimportant." This is a serious problem. I think you're not reading what I say carefully. Perhaps you think I'm making errors, and then you correct them for me, and then send me bug reports for your corrected code. They're not bugs! [Editor's note: See both excerpts for 28 July, and the first two excerpts for 30 July.] ================================================================ 16 July 2004: Joe performed an exercise that Will recommends, to code up the DS. And that's useful, to see that the structural induction really works. In my framework, that's not a useful exercise. I don't use any structural induction to construct my EE_i semantic functions. [Editor's note: Since "compositional" means defined by structural induction, Bill's last sentence above says Bill's definition of his semantic functions is not compositional.] ================================================================ 18 July 2004: I have also demonstrated, by means of a specific small-step operational semantics os[[.]] that is *clearly* not defined by structural induction, that Bill-compositionality does not imply compositionality. You did nothing of the sort, Will. Your Bill-compositional semantic functions are also compositional. Eventually maybe we'll get through the proof of this. I proved in general (a 1-line proof) that Bill-compositionality is equivalent to the c.l.s. 
consensus definition of compositionality, which is exactly what you say: We must continue to insist on the accepted meaning of compositional: defined by structural induction over the syntax of programs. [Editor's note: See excerpts above from Bill's posts for 16 June, 23 June, 26 June, 28 June, 1 July, 4 July, 5 July, 6 July, 7 July, 8 July, 16 July.] ================================================================ 18 July 2004: In one sense, you're correct, Will. If we wish to define the BB_i (BB for boldface) by structural induction via the f_ij, then the BB_i cannot occur free in the definition of the f_ij. It doesn't even make sense. The BB_i aren't defined when the structural induction begins. But you're wrong in another sense. The statement of Schmidt's Theorem 3.13 actually says nothing of the sort. From Joe's quote: If, for each B_i in L's definition and each Option_ij of B_i's rule, there exists an equation of the form: BB_i(Option_ij) = f_ij (BB_ij1 (S_ij1), BB_ij2 (S_ij2), ... BB_ijk(S_ijk)) where f_ij is a function of functionality D_ij1 x D_ij2 x ... D_ijk -> D_i, then the set of equations uniquely defines a family of functions BB_i: B_i -> D_i for 1<=i<=n. As I posted earlier (you didn't respond), this means exactly: Given functions f_ij: D_ij1 x D_ij2 x ... D_ijk -> D_i, for i = 1,..,n, j = 1,...,m, there is a unique family of functions BB_i: B_i -> D_i for 1<=i<=n satisfying the equations BB_i(Option_ij) = f_ij (BB_ij1 (S_ij1), BB_ij2 (S_ij2), ... BB_ijk(S_ijk)) So there's no structural induction in the statement of 3.13, or insistence that the f_ij not us[e] the BB_i.... [Editor's note: Apparently, Bill doesn't recognize that "Given functions f_ij...there exists a unique family of functions BB_i... such that..." is the technical way of saying the f_ij define the BB_i by structural induction.] 
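[Editor's note: The content of a Theorem 3.13-style result can be made concrete in code. In the Python sketch below (the encoding of syntax as tuples is invented), the f_ij are given *first* and do not mention BB; BB is then constructed by structural induction, and different choices of the f_ij determine different, unique families BB.]

```python
# Toy instance of Schmidt-style structural induction: given f_num
# and f_add, which do NOT mention BB, the equations
#   BB(numeral n)   = f_num(n)
#   BB(('+', a, b)) = f_add(BB(a), BB(b))
# determine a unique BB over the grammar of numerals and sums.
def make_BB(f_num, f_add):
    def BB(expr):
        if isinstance(expr, int):
            return f_num(expr)
        _, a, b = expr
        return f_add(BB(a), BB(b))
    return BB

value = make_BB(lambda n: n, lambda m, n: m + n)      # evaluation
size  = make_BB(lambda n: 1, lambda m, n: m + n + 1)  # node count

assert value(('+', 1, ('+', 2, 3))) == 6
assert size(('+', 1, ('+', 2, 3))) == 5
```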
================================================================ 19 July 2004: What really happened is that you never understood that your definition of f_14 = Phi was incorrect in the first place. No, it was correct, for Bill-compositionality. It would be nonsense for Will-compositionality. > Anyway, in both of my ISWIM posts, I defined f_13 in terms of E, > which isn't an error, since I'm not using structural induction. I quoted this to underline the fact that you do not recognize the structural induction in Schmidt's Theorem 3.13, You're just mistaken, but we'll do this on the other thread. I'll only repeat here: there's no mention of structural induction in the *statement* of Schmidt's Theorem 3.13. ================================================================ 20 July 2004, quoting Clinger: No, Bill. To say "the interpretation of a phrase is defined by a function of the interpretation of the sub-phrases" is exactly the same as saying "the interpretation of a phrase is defined by structural induction on its syntax", No, Will, as I explained to Shriram, it needn't mean that.... which is exactly the same as saying "the definition of the semantic function that assigns meaning to phrases is by structural induction". No, because that's meaningless. As I've said often, it makes no sense to ask: "What's *the* definition of this function?". You can only ask, "What's *a* definition of this function?" One Function may have a variety of definitions. Your problem, Bill, is that you do not understand the concept of structural induction. That's false. In particular, you do not understand that Schmidt's Theorem 3.13 asserts that definitions of semantic functions by structural induction define those functions uniquely. I understand quite well that Thm 3.13 implies just this. But that's not the *statement* of Thm 3.13. You didn't respond to my post explaining what Thm 3.13 actually says. 
[Editor's note: In the first excerpt for 18 July, Bill claimed to have proved that Bill-compositionality is equivalent to the "c.l.s. consensus definition of compositionality". Above, two days later, he dismisses the consensus definition as meaningless.] ================================================================ 22 July 2004: The fallacy lies in allowing the definitions of the f_ij to refer to the BB_i. I know what you mean by that, and I agree, that's nonsense. Your BB_i are produced by structural induction by the f_ij. But I said nothing of the sort in the post where you wrote the above, "That is Richter's Fallacy 3.13". I'll quote again: > As I posted earlier (you didn't respond), this means exactly: > > Given functions f_ij: D_ij1 x D_ij2 x ... D_ijk -> D_i, > > for i = 1,..,n, j = 1,...,m, there is a unique family of > functions BB_i: B_i -> D_i for 1<=i<=n satisfying the > equations > > BB_i(Option_ij) > = f_ij (BB_ij1 (S_ij1), BB_ij2 (S_ij2), ... BB_ijk(S_ijk)) > > So there's no structural induction in the statement of 3.13, or > insistence that the f_ij not us the BB_i. Possibly you're being > confused by his strange locution, That is Richter's Fallacy 3.13. Please, Will. Look at this, and just this, and tell me what's wrong with it. Don't add in mistakes that you believe I made elsewhere, or other nonsense that you're sure I believe. Just look at this. [Editor's note: As I wrote in the passage that Bill quoted at the beginning of this excerpt, the fallacy is Bill's claim that the definitions of the f_ij can refer to the BB_i whose existence is asserted by the theorem. See also the first excerpt from 31 July, which gives a simpler example of the same fallacy.] ================================================================ 23 July 2004: So Bill said that transfinite induction (the axiom of choice, the "C" in ZFC) is not a separate axiom, correct. and is not independent of the others, no, that's not a separate statement from the previous one. 
but is a consequence of the basic ZFC axioms. Yes. That's hardly the sort of thing that is likely to be said by those who understand what they are saying. It's true nonetheless. I saw that Enderton derived transfinite induction from the ZFC axioms. I didn't follow either the proof or even the statement.... [Editor's note: The axiom of choice is a separate axiom, and is independent of the axioms of Zermelo-Fraenkel (ZF) set theory.] ================================================================ 23 July 2004: What exactly is the "advantage" of your "semantics", Bill? Matthias, I'd say there are 4 advantages: 1) Simplicity! I'm avoiding Scott models & CPOs. 2) I'm giving total functions Exp ---> Meaning. Pure Math folks like me can't live without that. 3) It's pretty faithful to Scheme: (lambda (x) M) is a doubly-special form: neither x nor M is evaluated. Many Scheme treatments say this, e.g. SICP & the text to R5RS. Although in R5RS DS, M is not special. 4) Therefore my compositionality is meaningful, and Scheme-like. As you've posted, it's easy to attain compositionality in meaningless situations. But compositionality in my semantics mainly amounts to the fundamental Scheme principle: to evaluate a combination, first evaluate the arguments, and the first value must be a function, which is then applied to the other values. Perhaps you don't find these advantages overwhelming.... ================================================================ 25 July 2004, quoting Clinger: From EE_1 = E [composed with modding out by alpha] Bill concludes EE_1 o I = EE_1 = E where I is the identity function on terms. The identity function I is compositional: it really can be defined by structural induction on the syntax of terms. I did nothing of the sort. Therefore (according to Bill) E is compositional. I've several times posted that I did not claim E to be compositional, nor do I even know what it means. Perhaps you meant EE_1. 
The EE_i family is compositional because of the interpolation result which you also proved. I didn't give the proof you offer here :) [Editor's note: See the first excerpt for 28 July below.] ================================================================ 27 July 2004, quoting Blume: Well, if you mean that one can define a function g in a compositional manner in terms of another function f which "happens" to have the same graph as g, then, yes, you can do that without using Scott models. Is that your point? If so, why are we wasting any time with this? I don't understand what this means. We're just having a Math discussion about compositionality. You & others 2 years ago claimed that I could not define compositional semantic functions in my OpS-ified way.... The biggest difference I note between your semantics and mine is that you defined your semantic functions by structural induction, and I did not, until my followup to you, and I don't like to work that way. [Editor's note: See the excerpt for 16 July and the last excerpt for 31 July.] ================================================================ 28 July 2004, addressing and quoting Blume: What I did was take F-F's arrow |--->>_v, turn it (trivially) into a total function E, prefix by the map which mods out by alpha, and that's my big sem function E_1: Exp ---> Value_bottom But for Bill-compositionality, I have to show e.g. that E_1[ (M N) ] = Phi( E_1[ M ], E_1[ N ] ) for some Phi function, which I defined in terms of E.... ....But I'm out of the loop, because clearly your ML code is not my semantics: you defined your sem functions by structural induction, and I did not. You don't even have to do anything nearly as complicated as you are doing: Just make the meaning of each expression itself and provide a compositional definition of the identity function. That's what Will did in his 1st ISWIM post, but his 2nd ISWIM post is much closer to what I did. 
(In effect, that's what you are doing anyway -- just in slight disguise. No, that's completely false. Will claimed that initially, but when I complained, he wrote a 2nd ISWIM post, much closer to mine, and I would say that Will's 2nd ISWIM post is quite respectable semantics. That's what Will -- and I -- call a "joke".) OK, I'd call it a joke too, and Will posted such a joke, but I never did anything of the sort. We'll quarrel if you uncritically accept Will's word for what I did.... [Editor's note: In my two posts that Bill mentions above, I gave two definitions of an operational semantics and proved that they are equivalent. Bill agrees that my first semantics is a joke, and that my second semantics is similar to Bill's. My second semantics is equivalent to the first, so it is an obfuscated joke.] ================================================================ 28 July 2004: I think you're claiming that for my semantic functions E_i, E_i[ X ] = E_i[ Y ] implies X = Y mod alpha, for i > 1. That's false, but Will did (falsely) attribute something like this to me, and Will wrote up a compositional semantics this way, although he claimed it was not compositional, but only Bill-compositional. ================================================================ 30 July 2004: You caught me, Will! my EE_i are the identity functions for i > 1.... ================================================================ 30 July 2004, addressing Marshall: You have made literally hundreds of posts about the definition of EE_1, Schmidt's theorem 3.13, and compositionality and you were talking of the identity function. That's absolutely false, and you have given no evidence that this is true. You have to read my posts to see how I defined things. ================================================================ 30 July 2004: I'm working in a model of ZFC (which is assumed to exist). That's the way pure Math is done. That includes DS and Advanced Calculus. I don't need to explain myself. 
If you want to do things differently, you've got to go to lots of trouble to explain what you want. [Editor's note: The arcane details of ZFC have nothing to do with any of this. Bill's fallacy is a matter of logic, not set theory.] As you know, there's a big difference between a syntactic proof in ZFC and a semantic proof in our model of ZFC. A syntactic proof is much like you say, we have to worry about alpha-conversion etc. [Editor's note: Some proofs are formal, most are informal, but all proofs are syntactic. By Goedel's Completeness Theorem, provability in first-order logic is equivalent to validity (truth in all models). Bill's "semantic proof in our model of ZFC" is hot air. A meta-level proof for one model could establish satisfiability, but does not establish validity. See the excerpts for 3 August and 19 August.] But in a model of ZFC, the statement of my (correct) version of 3.13 > Given functions f_ij there exists a unique family of functions > EE_i: B_i ---> D_i > s.t. > EE_i( Option_ij ) = f_ij ( EE_ij1(S_ij1),..., EE_ijk(S_ij{k_ij}) ) means given elements f_ij in our ZFC-model, there exist elements EE_i in our ZFC-model s.t. these equations hold, and so I was correct: > It's perfectly fine to have a solution EE_i for these equations > where f_ij are defined in terms of these EE_i. There's absolutely no way that you can insist that the f_ij are not defined in terms of the EE_i. That's just "plumb crazy", to use your words, the one time I actually deserved your abuse.... > Given functions f_ij there exists a unique family of functions > EE_i: B_i ---> D_i > s.t. > EE_i( Option_ij ) = f_ij ( EE_ij1(S_ij1),..., EE_ijk(S_ij{k_ij}) ) The first part of this is a correct (though slightly incomplete) statement of Schmidt's Theorem 3.13. No, it's the complete statement of Thm Richter-3.13.... Its quantifier structure is (\forall f_ij . ( \exists EE_i . ___ )) In a sense, of course that's true... 
The EE_i are existentially quantified variables, which mean nothing outside their scope. But that's ridiculous in a model of ZFC. For syntax, I think you've got a good point. My EE_i are not the dummy variables here, but the actual elements of our ZFC-model that make this formula true after substituting. Is that called "instantiating"?.... [Editor's note: No. Instantiation does not replace quantified variables by the elements of a model. Universal instantiation substitutes a term for the variable. Existential instantiation substitutes a fresh name (that does not occur within the axioms and has not been mentioned within the proof prior to that point). These proof rules are explained in the textbook by Boolos and Jeffrey, which Bill had cited on 14 separate occasions in these threads.] Bill must not understand that quantifiers have limited scope. I'm sure my Scheme lexical scope isn't all that it could be. To apply the theorem, one uses universal-elimination, substituting terms t_ij (the definitions of the f_ij) for the f_ij, obtaining Ah, see you must be doing a syntactic ZFC proof! In our ZFC-model, we don't have "terms". But "term" is definitely a LC_v syntax word. [Editor's note: Although Bill had been citing Boolos and Jeffrey, he does not recognize their terminology.] ( \exists EE_i . ___ ) [ t_ij / f_ij ] Yes, but I prefer F-F's notation [f_ij <- t_ij ] Note that the substitution performs alpha-conversion as necessary to avoid capturing any of the free variables that may occur within the terms t_ij. Yes! I've posted this; I wasn't entirely sure I was right. Thanks. This is exactly the same kind of alpha-conversion that Bill has mentioned in 37 of his last 115 posts. I knew you understood it. I'm still not sure you're up on F-F's LC_v. Because of that alpha-conversion, it is *impossible* for the t_ij to refer to the EE_i in any correct use of the theorem. That's nonsense for the ZFC-model theorem I'm thinking of, but syntactically, this makes perfect sense. 
The EE_i for you (not me!) are just bound variables, and we'd perform a variable capture if the EE_i occurred freely in t_ij! Good thing we do Calculus in a ZFC-model, and not as syntactic proofs in the language of set theory, supplied with the ZFC axioms.

[Editor's note: There is no "ZFC-model theorem" that would justify Bill's mode of argument. As for his idea that the EE_i are bound variables for me but not him: We're talking about logic and mathematics here. The EE_i are bound variables. They have limited scope. Those are facts. Whether those facts are convenient for Bill is irrelevant.]

================================================================
31 July 2004, quoting Matthias Blume:

HOWEVER, the moment you "define" f in terms of e, e is no longer unique!!!

I'd sure say that's false, but maybe we're misunderstanding each other. Let's state your theorem:

Thm Mono-IVT: Given a continuous monotonic map f: [0, 1] ---> R, with f(0) = -1, and f(1) = 1, there exists a unique element e in (0, 1), such that f(e) = 0.

[Editor's note: Here we should interpret the word "monotonic" to mean "strictly increasing".]

Now take e in (0, 1), and as you say, choose e any way you like. Now define my function f in terms of e as above. Then surely Thm Mono-IVT is true for my f: there exists a unique e in (0, 1) s.t. f(e) = 0. Surely you agree?

[Editor's note: All conclusions follow from a fallacious argument. The fact that some of those conclusions are true has no bearing on whether the argument is fallacious. Bill's definition of f in terms of e was carefully chosen to result in true conclusions, despite his fallacious reasoning. Almost any other definition of f in terms of e would lead to false conclusions, thereby exposing the fallacy. For example: Let e = 2/3. Define f in terms of e like this:

    f(x) = 3 * e * x - 1

Then, by Thm Mono-IVT, there exists a unique e in (0, 1) such that f(e) = 0. It is easy to check that f(1/2) = 0. Therefore 1/2 = 2/3.
That, at any rate, is what we would conclude from Bill's fallacious argument. See the first excerpt above for 6 July, the editor's note for the first excerpt from 8 July, the excerpt for 22 July, and most of all the excerpt for 19 August.]

================================================================
31 July 2004, quoting Clinger:

> Bill Richter claims that the definitions of the f_ij are
> allowed to refer to the function symbols BB_i, which creates
> mutual recursion between the f_ij and BB_i.
>
> That's false, Will. I wrote no such thing. You're identifying my
> well-defined function EE_i with Schmidt's function symbols BB_i.

On 18 July he wrote:

> As I posted earlier (you didn't respond), this means exactly:
>
> Given functions f_ij: D_ij1 x D_ij2 x ... D_ijk -> D_i,
>
> for i = 1,..,n, j = 1,...,m, there is a unique family of functions
> BB_i: B_i -> D_i for 1<=i<=n satisfying the equations
>
> BB_i(Option_ij)
> = f_ij (BB_ij1 (S_ij1), BB_ij2 (S_ij2), ... BB_ijk(S_ijk))
>
> So there's no structural induction in the statement of 3.13, or
> insistence that the f_ij not use the BB_i.

Will, you've found something that's mildly confusing....

================================================================
31 July 2004, quoting Blume:

You could have "defined" your "DS" by simply making it equal to the OpS.

That's actually what I did!

a) defined EE_1 directly to be E composed with modding out by alpha,

b) showed my EE_i functions were Bill-compositional, i.e. satisfied Schmidt's equations with the f_ij

c) proved that Bill-compositionality = c.l.s.-compositionality, which therefore implies I could have defined my EE_i by structural induction, just as you just said

d) finally, responding to your ML code, I actually did define my EE_i by structural induction, not using the Schmidt uniqueness, but constructing EE_1 by hand.

That would have been far more direct, spared us the aggravation,

That's exactly what I did, and I said it over & over & over.
If you guys didn't read my posts, you can't blame me when you misunderstand.

and exposed the joke that it is.

I don't think it's a joke....

[Editor's note: See the excerpts for 16 and 28 July. On 16 July, and once again on 28 July, Bill said he "did not" define his semantic functions by structural induction. On 31 July, above, he said he did define them by structural induction.]

================================================================
3 August 2004, quoting first Keith (kwright@gis.net) and then himself:

there is NO difference between a syntactic proof in ZFC (or any other set of Axioms) and a semantic proof that is valid in all models of ZFC.

But we have 2 different types of proofs, syntactic & semantic. [...] Now pure Math is all about proving semantic truth in an arbitrary model of ZFC. By Goedel C, all of our semantic truths then must have syntactic proofs.

Keith, this was silly of me. It's claimed in set theory books that all the (true) theorems of pure Math could be re-stated & re-proved as syntactic ZFC statements & proofs, but

1) I don't know how to prove this myself. I've learned in the thread that I'm not clear on how ZFC actually justifies pure Math practices.

2) In practice it's meaningless, because pure Math results are *not* stated as syntactic ZFC statements. As the set theory books say, if you tried to turn Calculus results (like the Intermediate Value Thm) into syntactic ZFC statements, they'd run on for pages & pages!

What's done in pure Math (Calculus, DS, and point-set Topology etc.) is to work in an arbitrary model of ZFC, but make statements about the "actual" elements of the ZFC-model, and the proofs, involving these "actual" elements, seem to be related to ZFC axioms. This probably has no relevance to the thread....

[Editor's note: See the excerpt for 19 August.]
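[Editor's note: Bill's claim that formal proofs are too unwieldy to write is dated. Modern proof assistants make fully formal, machine-checked "syntactic" proofs routine. As an editorial illustration (not from the thread), the induction principle quoted in the 22 June excerpt can be stated and formally proved in a few lines of Lean:]

```lean
-- Editor's sketch: the induction principle from the 22 June excerpt,
-- stated and proved formally.  `induction_example` is an invented name.
theorem induction_example (S : Nat → Prop)
    (base : S 0) (step : ∀ n, S n → S (n + 1)) :
    ∀ n, S n := by
  intro n
  induction n with
  | zero => exact base
  | succ k ih => exact step k ih
```

[The proof is entirely syntactic, in the editor's sense above: a formal deduction checked symbol by symbol, yet no longer than the informal version.]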
================================================================
4 August 2004:

....R5RS DS makes heavier use of pure Math, by using Scott models, which are uncountable sets.... And as a recent textbook asserts, the language for talking about interpreters abstractly is math. Modern pure Math in fact: not constructivism/intuitionism (too weak), nor syntactic ZFC proofs in the 1st order language of set theory (right idea, but too unwieldy).

[Editor's note: The R5RS semantics has countable models, which include actual implementations. The R5RS semantics does not rely on any non-constructive mathematics.]

================================================================
7 August 2004:

I don't believe we can compute the partial function

    curly-E[e]: U x S x K -> A

in a finite amount of time. That would mean computing the actual domain of curly-E[e], and I bet that'd contradict the Halting problem.

[Editor's note: curly-E[e] is the denotational semantics of the expression e. It is defined by a compositional, i.e. syntax-directed, translation of the expression into the meta-language of the semantics. Compilers routinely perform this translation or its equivalent in a finite amount of time.]

================================================================
11 August 2004:

It's interesting what MB posted, that the subset Halt of LC_vExp is recursively enumerable: the image of a computable function from N to LC_vExp. That surprises me a lot, but I won't research it now.

As to David Rush's question, I thought that to a large extent, the field of DS was an attempt to use noncomputable Math as a way to understand computable programming. I could be wrong. But Schmidt's DS book barely mentions computability, if he mentions it at all. Stoy has a lot to say, but he does not seem to state your result, that curly-E was computable (which I still don't believe).
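[Editor's note: The computability of curly-E can be demonstrated concretely. The following Python sketch is the editor's addition, not from the thread; the tuple encoding of lambda terms and all names are invented. It shows a compositional, syntax-directed curly-E that always terminates, even when the denotation it produces diverges:]

```python
# Editor's sketch: curly-E as a compositional (syntax-directed) translation.
# The translation itself is one recursion over a finite syntax tree, so it
# always terminates -- even when the *denotation* it produces is a partial
# function that diverges when applied.

def E(e):
    """Map an expression to its denotation, a function of environments."""
    tag = e[0]
    if tag == 'var':                    # ('var', name)
        _, x = e
        return lambda env: env[x]
    if tag == 'lam':                    # ('lam', name, body)
        _, x, body = e
        db = E(body)
        return lambda env: (lambda v: db({**env, x: v}))
    if tag == 'app':                    # ('app', rator, rand)
        _, f, a = e
        df, da = E(f), E(a)
        return lambda env: df(env)(da(env))
    raise ValueError(tag)

identity = ('lam', 'x', ('var', 'x'))
omega = ('app', ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))),
                ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))))

d1 = E(identity)           # translation finishes instantly
d2 = E(omega)              # ...even for this diverging term
print(d1({})(42))          # -> 42
# d2({}) would recurse forever (RecursionError in Python); computing the
# denotation's *domain* is the part the Halting Problem forbids.
```

[Computing E(omega) takes a few steps; it is only applying the resulting denotation that diverges. That is the distinction Bill misses.]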
================================================================
13 August 2004, quoting Krishnamurthi:

You insist on *refusing* to look at the structure of the definition, even though this is a perfectly reasonable entity to examine.

It's meaningless to look at THE definition of a function, because it doesn't exist. That's exactly the point I made in my text you cited.

[Editor's note: Compare the previous paragraph with the last three words of Bill's very next sentence.]

Of course definitions are worth looking at, and we can, as I said in my last post, define compositionality to be a property of the function definition. We can say that some sem functions E_i have a compositional definition if the E_i are defined by structural induction from some functions f_ij. That is, we'll essentially call the f_ij functions a compositional definition of the E_i. That would be fine, but it's chaos and confusion to confuse this notion with a definition of compositionality as a property of the functions E_i themselves. We can't retrieve the f_ij from the E_i.

[Editor's note: Bill is the one who confused our notion of compositionality, which is a property of definitions, with his notion of compositionality as a property of the functions themselves. He says our notion is meaningless, and he also claims to have a proof that our allegedly meaningless notion is equivalent to his. See the excerpts for 23 June, 26 June, 1 July, 16 July (second excerpt), 18 July (first excerpt), 20 July, and 27 July.]

================================================================
15 August 2004:

I want to take DEFINED BY out. But either way, it's the sem functions which satisfy the definition of compositionality. I assumed that Will agreed with me, but it seems that he did not, and he hasn't clarified. The job falls to you, Lauri. Please defend the honor of c.l.s., and make mathematical sense, where your elders have inexplicably faltered.
[Editor's note: In the excerpt for 13 August, Bill admitted that "we can, as I said in my last post, define compositionality to be a property of the function definition." Two days later, Bill says that doesn't make mathematical sense.]

================================================================
15 August 2004:

After about 2 months of abuse, Will, during which neither you, Shriram nor MB ever defined compositionality.

[Editor's note: In the excerpt for 23 June, Bill rejected our definition (without giving any reason for his rejection). In the excerpt for 26 June, Bill admitted that our definition of compositionality coincides with the one given by Nielson and Nielson in their textbook.]

================================================================
16 August 2004, quoting Clinger:

he denies that partial recursive functions are computable functions that can be implemented by engineers,

Joe & MB have now posted a way in which I'm wrong: The computable partial function is a set which is the image of a total computable function. If that's so, then maybe an engineer could build it. What everyone agrees an engineer can't build is the subset Halt of LC_vExp on which the partial function is actually defined....

[Editor's note: Halt is recursively enumerable. For other examples of computable partial functions and r.e. sets that Bill doesn't believe, see the excerpts for 22 June, 7 August, and 11 August.]

================================================================
19 August 2004, quoting Alanko:

Sorry, but this is just nonsense. A proof is syntactic: you can write it down or read it.

Sure, and any pure Math proof I've ever written or read was just a bunch of symbols on a page. The question is what the variables mean:

This is true regardless of how formal or informal the language of your proofs is.

In syntactic ZFC proofs, variables are part of the 1st order language of set theory, and a proof is a formal deduction, using rules of inference, e.g. modus ponens.
That's the formal proofs. In the informal proofs that pure mathematicians give, which Halmos tries to fit under the umbrella of Naive Set Theory, variables are just names for actual objects in our ZFC model....

I show ignorance here. I know about syntax & semantics in Math Logic: semantics concerns models, such as models of ZFC. So I call the formal proofs in the syntax part of Logic "syntactic ZFC proofs". Even though there's no Math Logic term "semantic proofs".

[Editor's note: Bill knows so little about logic and set theory that he associates formal proofs with ZFC. See also the last excerpt for 30 July 2004.]

================================================================
20 August 2004:

What's much more important, though, is your failure to state any coherent position about your own work. For instance, your refusal so far to explain whether compositionality is a property of

*) function-definitions (as in Nielson^2, or the book Joe found, which talk about denotational definitions of sem functions), or

*) functions, as C-F say.

[Editor's note: See the excerpts above for 23 June, 26 June, and the second excerpt for 15 August.]

================================================================
26 August 2004:

I say that's just plain silly. We're working in the pure Math framework, because that's where DS and Point Set Topology are done. The DS Math is hard enough without working in a constructivist framework, where everything becomes impossible.

[Editor's note: All of the theorems needed for programming language semantics can be proved within a constructivist framework.]

I'm getting the impression that you think constructivist math is insufficient here,

I sure do! All the pure Math folks do. Hey someone told me a story about a constructivist Michael Bishop I think. Bishop said, I know constructivism makes it hard to prove theorems, but so what? If I really wanted to prove theorems, I'd assume 1 = 0!.
[Editor's note: Errett Bishop wrote a book on constructive analysis, in which he showed that the major theorems of real analysis have constructive analogues. Hence constructive mathematics is enough for real analysis.]

================================================================
27 August 2004:

Oh, what's the difference? I feel I use all the axioms except the axiom of regularity: \forall x (x \not\in x)....

[Editor's note: Although that statement is implied by the axiom of regularity, it is not the axiom of regularity. The axiom of regularity is much stronger than that.]

Well, I'm no ZFC model theorist! I know next to zip about any independence results. I know that Cohen constructed a model of ZFC where CH was false by "forcing" in extra sets to Goedel's model where CH was true.... But I have absolutely no idea what forcing means....

================================================================
28 August 2004:

I don't know zip about Constructivism. It's not studied in the pure Math biz, because it's too weak. Now Constructivism might actually be a lot more powerful than I know, but how would I know?

[Editor's note: The real question here is why someone who "don't know zip about Constructivism" would demonstrate his ignorance in a public newsgroup.]

[....]

*) No one has posted here how to rewrite Stoy, Schmidt, and Barendregt in a constructivist framework. Some constructivist version of these books might be write-able, and maybe these constructivist meta-books might do everything in DS that we really need, but, uh, hmm, err, don't we have enough trouble just reading the books we actually have?

[Editor's note: I have read the books by Stoy and Schmidt, and I don't recall any non-constructive mathematics at all. Most of the math in those books is fairly simple, and obviously constructive. In other words, Stoy and Schmidt have already written the constructive versions of their books.]

....Well, we can consider a set M (i.e. M \in V) for which the ZFC axioms hold!
I would've before called this set M a sub ZFC-model of V, but V isn't a set. So I think now that the usage is that M is a model of ZFC, but not V. I think there's an interesting subject of such models M of ZFC, but I know nothing about it. I just mean to say: we work in this huge collection of sets V, and the ZFC axioms hold in V.

[Editor's note: Bill is right when he says he knows nothing about models of ZFC. His paragraph above is rather incoherent, but he appears to think there is some distinguished universe V in which the ZFC axioms hold. If ZFC is consistent, however, then there are many different universes in which the ZFC axioms hold. One of those universes is the Herbrand universe, which is countable and constructive, not huge at all.]

================================================================
1 September 2004, responding to Kevin Millikin:

....I often posted C-F's definition of compositionality, which clearly is a property of the sem functions themselves. Nobody objected to this, but Will wanted to stick in 2 extra words, and I said fine, since I had a proof that you could then take the 2 words out. Much later, Will said he wanted to say that compositionality is a property of sem function definitions, as Nielson^2, Slonneger-Kurtz, and Tennent do....

[Editor's note: By "much later", Bill must mean the very same day, in the very same message. Bill first quoted the Cartwright/Felleisen definition of compositionality on 15 June 2004. On 23 June, in only my third post in these threads (my first was on 21 June), I observed that Bill was reading too much into the inadvertent omission of two words from that definition. 23 lines later, in the very same message, I said "It's really the function's definition that is compositional." See the excerpt for 23 June above.]

In other words, the proposition:

    F is defined by structural induction
    => the set of pairs F has a definition by structural induction

is obviously true, but not the converse.
This is just nonsense, Kevin. Trivial proof: the definition of your function E is not by structural induction, but the set of pairs it defines has a definition by structural induction. That's nonsense.

[Editor's note: By "nonsense", Bill must mean Kevin is absolutely right.]

================================================================
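[Editor's note: Kevin's point, that "defined by structural induction" is a property of a definition rather than of the function (set of pairs) it denotes, can be made concrete. The following Python sketch is the editor's addition, not from the thread; both function names and the tuple encoding of trees are invented:]

```python
# Editor's sketch: two different definitions of the same function.
# Only the first is by structural induction; yet both determine the
# same set of pairs, so the set of pairs *has* a structural-induction
# definition even when a given definition of it is not one.

def size_structural(e):
    """Defined by structural induction: one clause per syntactic construct."""
    if isinstance(e, int):             # leaf
        return 1
    left, right = e                    # pair node
    return 1 + size_structural(left) + size_structural(right)

def size_worklist(e):
    """The same function, defined by an iterative worklist loop --
    not by structural induction."""
    count, todo = 0, [e]
    while todo:
        node = todo.pop()
        count += 1
        if not isinstance(node, int):
            todo.extend(node)
    return count

t = ((1, 2), (3, (4, 5)))
print(size_structural(t), size_worklist(t))   # -> 9 9
```

[The two definitions agree on every input, so as sets of pairs they are one function. One direction of Kevin's proposition is immediate; size_worklist witnesses the failure of the converse.]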