# CS6120: Natural Language Processing

## Homework 3

Assigned: Friday, 5 April 2013
Due: 11:59pm, Tuesday, 23 April 2013

### Instructions

1. If you collaborated with others, you must write down with whom you worked on the assignment. If this changes from problem to problem, then you should write down this information separately with each problem.
2. Email the instructor the requested written answers, code, and instructions on how to (compile and) run the code.

### Formal Semantics

For background on formal semantics, see the lecture slides, Jurafsky & Martin chapters 18-20, or chapter 10 of the NLTK book.

1. [12 points] Translate the following sentences into quantified formulas of first-order logic:

   1. Angus likes someone and someone likes Julia.
   2. Nobody smiles at Pat.
   3. Nobody coughed or sneezed.

2. [12 points] Translate the following verb phrases using $\lambda$-abstracts and quantified formulas of first-order logic:

   1. feed Cyril and give a cappuccino to Angus
   2. be loved by everyone
   3. be loved by everyone and detested by no-one

3. [6 points] Let $g = chase$ and $h = \lambda x.\forall y.(dog(y) \Rightarrow chase(x, y))$. If $h = f(g)$, write down a $\lambda$-abstract for $f$.

4. [7 points] Let $g = give$ and $h = \lambda z.\lambda x.\exists y.(present(y) \wedge give(x, y, z))$. If $h = f(g)$, write down a $\lambda$-abstract for $f$.
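Questions 3 and 4 both ask you to recover a higher-order function $f$. To make function application concrete, here is a toy sketch in Python; the domain, the predicates, and this particular $f$ are invented for illustration and are not the answer to either question.

```python
# Toy first-order model, purely illustrative (not the homework answer):
# a small domain, a unary predicate cat, and a binary relation see.
domain = {"angus", "tom", "felix"}
cat = {"tom", "felix"}
see = {("angus", "tom"), ("angus", "felix"), ("tom", "felix")}

# f = \P. \x. forall y. (cat(y) -> P(x, y)), written as a Python
# higher-order function: it takes a binary predicate and returns a
# unary one, just as the f in questions 3 and 4 maps g to h.
def f(P):
    return lambda x: all(y not in cat or P(x, y) for y in domain)

g = lambda x, y: (x, y) in see   # g plays the role of the relation
h = f(g)                         # h(x) holds iff x sees every cat

print(h("angus"))  # True: angus sees both cats
print(h("tom"))    # False: tom does not see himself
```

Applying $f$ to $g$ here is exactly $\beta$-reduction: the body of the outer $\lambda$-abstract with $g$ substituted for $P$.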

### Machine Translation

Dekai Wu's inversion transduction grammar (ITG) is a special case of synchronous context-free grammar. A bracketing transduction grammar (BTG), which has only one nonterminal, is in turn a special case of ITG. See the slides for more on ITG and bilingual parsing. If you're interested, you could also look at Dekai Wu's original paper in Computational Linguistics from 1997.

For this section, we will be aligning English and German sentences with the following BTG: \begin{eqnarray*} A & \xrightarrow{e^{-1}} & [ A\ A ] \\ A & \xrightarrow{e^{-2}} & \langle A\ A \rangle \\ A & \xrightarrow{e^{-20}} & <English\ word> / \epsilon \\ A & \xrightarrow{e^{-21}} & \epsilon / <German\ word> \\ \ldots \end{eqnarray*} The rest of the grammar consists of word-to-word translation probabilities. For the present purposes, note that we assign all English-to-$\epsilon$ (empty word) alignments the uniform, small probability $e^{-20}$, and all $\epsilon$-to-German alignments the probability $e^{-21}$. Recall that the rule $A \rightarrow \langle A\ A \rangle$ reverses the order of its children in German relative to English.

1. [5 points] Suppose that the grammar contains the following translation rules: \begin{eqnarray*} A & \xrightarrow{e^{-7.82}} & ! / ! \\ A & \xrightarrow{e^{-15.4}} & \mathrm{wonderful} / \mathrm{wunderbar} \end{eqnarray*} Given the following English-German sentence pair:

  wonderful !
  wunderbar !

write down an expression for the probability of the following word alignment: $$\begin{matrix} \mathrm{wonderful} & \mathrm{!} \\ | & | \\ \mathrm{wunderbar} & \mathrm{!} \end{matrix}$$

2. [10 points] Given the grammar and sentence pair above, write down an expression for the probability assigned to all possible word alignments of that sentence pair. Are there some derivations that result in identical word alignments? Which alignments?

The next few questions use the following data:

- test.en: 100 short English sentences, of no more than 10 words each, from the European Parliament corpus, with all words lowercased;
- test.de: German translations of those 100 sentences; and
- itg.dict: a probabilistic translation dictionary with 1874 entries. This dictionary was estimated by aligning parallel text from the Europarl corpus (not including the sentences above). Entries containing words not in the 100 sentence pairs above were removed, and not all word pairs are included. Each line in this file contains an English word, a German word, and a log probability, separated by tab characters.
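Reading the dictionary is straightforward given the format above. A minimal sketch (the sample entries are just the two rules from question 1, used here to illustrate the tab-separated layout):

```python
import io

# Sketch of reading the itg.dict format described above: each line is
# "english<TAB>german<TAB>log-probability".
def load_dict(lines):
    table = {}
    for line in lines:
        en, de, logp = line.rstrip("\n").split("\t")
        table[(en, de)] = float(logp)
    return table

# Illustrative sample; for the assignment, pass open("itg.dict") instead.
sample = io.StringIO("wonderful\twunderbar\t-15.4\n!\t!\t-7.82\n")
table = load_dict(sample)
print(table[("wonderful", "wunderbar")])  # -15.4
```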

3. [20 points] Write a parsing program that outputs the best word alignment for each sentence pair, given the grammar and dictionary. As with monolingual CKY, all rules are either binary or unary terminal productions. In monolingual CKY, you keep track of the beginning and end points of your constituents; here, you need to keep track of constituent spans in two languages simultaneously. Please email your source code and any instructions for compiling and running it.
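To illustrate the two-language span bookkeeping, here is a sketch of the dynamic program run on the toy pair from questions 1 and 2. It is not a full solution: it uses a memoized recursive formulation rather than a bottom-up chart, reads no dictionary file, and omits the backpointers you will need to actually output alignments.

```python
import math
from functools import lru_cache

# The example pair and the two translation rules from questions 1-2.
EN = ["wonderful", "!"]
DE = ["wunderbar", "!"]
T = {("wonderful", "wunderbar"): -15.4, ("!", "!"): -7.82}
IN_ORDER, SWAP, EN_EPS, EPS_DE = -1.0, -2.0, -20.0, -21.0  # log rule weights

@lru_cache(maxsize=None)
def best(i, j, k, l):
    """Best log probability of deriving EN[i:j] paired with DE[k:l]."""
    cands = []
    # Terminal rules: word/word, word/epsilon, epsilon/word.
    if (j - i, l - k) == (1, 1) and (EN[i], DE[k]) in T:
        cands.append(T[(EN[i], DE[k])])
    if (j - i, l - k) == (1, 0):
        cands.append(EN_EPS)
    if (j - i, l - k) == (0, 1):
        cands.append(EPS_DE)
    # Binary rules: split both spans; each child must cover at least one word,
    # which also guarantees termination (child spans are strictly smaller).
    for s in range(i, j + 1):
        for t in range(k, l + 1):
            if (s - i) + (t - k) > 0 and (j - s) + (l - t) > 0:   # [A A]
                cands.append(IN_ORDER + best(i, s, k, t) + best(s, j, t, l))
            if (s - i) + (l - t) > 0 and (j - s) + (t - k) > 0:   # <A A>
                cands.append(SWAP + best(i, s, t, l) + best(s, j, k, t))
    return max(cands, default=-math.inf)

# Best score: in-order rule plus the two word-pair rules,
# i.e. -1 + -15.4 + -7.82 = -24.22 (up to float rounding).
print(best(0, len(EN), 0, len(DE)))
```

For the assignment itself you will want the bottom-up chart version with backpointers, since the number of four-dimensional spans grows quickly with sentence length.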

4. [5 points] In the alignments produced by your program, what percentage of the German words were aligned to $\epsilon$? What percentage of the English words were aligned to $\epsilon$?

5. [5 points] What percentage of the time was the “swap” rule $A \rightarrow \langle A\ A \rangle$ used, compared to the “in-order” rule?

6. [5 points] Disable the swap rule in your program. Now alignments will be "monotonic", to use the MT jargon. What percentage of German and English words are aligned to $\epsilon$?