(Due at the beginning of class 12/5/11)

This problem set will be graded out of 50 points. It
will count for 20% of your final grade.

1) Linear Programming – different forms

a) Prove that any linear program can be converted to the standard form: min c^{T}x s.t. Ax = b and x >= 0. [2]
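As a numerical illustration of part (a) (a sketch only, assuming SciPy is available and using a tiny made-up instance), the usual conversion splits free variables as x = x^{+} - x^{-} and adds slack variables to turn inequalities into equalities; both forms should report the same optimal value:

```python
# Sketch: checking the standard-form conversion on a tiny instance.
# Assumes SciPy; the instance (c, A, b) is made up for illustration.
import numpy as np
from scipy.optimize import linprog

# Original LP: min c^T x  s.t.  A x <= b,  x free.
c = np.array([1.0, -2.0])
A = np.array([[1.0, 1.0],
              [-1.0, 2.0]])
b = np.array([4.0, 2.0])
orig = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2, method="highs")

# Standard form: split x = xp - xm (xp, xm >= 0) and add slacks s >= 0,
# giving  A xp - A xm + I s = b  with objective c^T xp - c^T xm.
m, n = A.shape
A_eq = np.hstack([A, -A, np.eye(m)])
c_std = np.concatenate([c, -c, np.zeros(m)])
# linprog's default bounds are x >= 0, which is exactly standard form.
std = linprog(c_std, A_eq=A_eq, b_eq=b, method="highs")

print(orig.fun, std.fun)  # the two optimal values should agree (here -2)
```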

b) Prove that any linear program can be converted to the canonical form: min c^{T}x s.t. Ax >= b and x >= 0. [2]

c) State the rules for dualizing an LP in the general form. Show that the dual of the dual is the primal. [2]

d) Give an example for each of the following:

· An LP that is infeasible [1]

· An LP that is unbounded [1]

e) Why is the ellipsoid algorithm for solving LPs considered a polynomial-time algorithm but not a strongly polynomial-time algorithm? [2]

2) LP – some tricks

a) In class we wrote an LP for max flow. Consider the following generalization of max flow, called multi-commodity flow: there are k commodities, with source s_{i}, sink t_{i}, and demand d_{i} for commodity i. As before, the directed graph has capacitated arcs. Write a linear program to compute the maximum fraction f such that fraction f of each commodity’s demand can be satisfied concurrently. [3]
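To make the setup concrete (a sketch, not the requested general LP), here is the concurrent-flow LP on a deliberately tiny assumed instance: one arc s -> t of capacity 1 and two commodities s -> t with demand 1 each, so the best concurrent fraction is 1/2. SciPy availability and the variable names (x_i for commodity flows, f for the fraction) are assumptions:

```python
# Sketch: the concurrent multi-commodity flow LP on a tiny instance.
# One arc s->t, capacity 1; two commodities s->t with demand 1 each.
import numpy as np
from scipy.optimize import linprog

cap, d1, d2 = 1.0, 1.0, 1.0
# Variable order: [x1, x2, f]; linprog minimizes, so maximize f via -f.
c = np.array([0.0, 0.0, -1.0])
A_eq = np.array([[1.0, 0.0, -d1],   # x1 = f * d1  (demand satisfaction)
                 [0.0, 1.0, -d2]])  # x2 = f * d2
b_eq = np.zeros(2)
A_ub = np.array([[1.0, 1.0, 0.0]])  # x1 + x2 <= cap  (arc capacity)
b_ub = np.array([cap])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None), (0, None), (0, 1)], method="highs")
print(res.x[2])  # best concurrent fraction f; here 0.5
```

In the general LP you would additionally need per-commodity flow-conservation constraints at intermediate nodes; this two-node instance has none.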

b) Consider the nonlinear program (NLP):

min c^{T}x + d^{T}y (d is nonnegative)

s.t. Ax + By <= b (all entries of B are nonnegative)

and y = |x| (i.e. the i-th entry of y is the absolute value of the i-th entry of x)

i) First scheme. Prove that the above NLP can be converted to an equivalent LP by adding the constraints y >= x and y >= -x. Argue that the NLP and the LP are simultaneously infeasible or feasible with the same objective value. [2]

ii) Second scheme. Prove that the above NLP can be converted to an LP by creating two new nonnegative vector variables x^{+} and x^{-} and setting x = x^{+} - x^{-} and y = x^{+} + x^{-}. Argue that the NLP and the LP are simultaneously infeasible or feasible with the same objective value. [4]
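Both schemes can be sanity-checked numerically on a one-variable instance (a sketch, assuming SciPy; the instance min -3x + |x| s.t. x + |x| <= 2, i.e. c = [-3], d = [1], A = B = [[1]], b = [2], is made up, with optimum -2 at x = 1):

```python
# Sketch: the two LP reformulations of y = |x| on a tiny instance.
from scipy.optimize import linprog

# Scheme 1: variables (x, y), adding y >= x and y >= -x.
res1 = linprog([-3.0, 1.0],            # min -3x + y
               A_ub=[[1.0, 1.0],       # x + y <= 2
                     [1.0, -1.0],      # x - y <= 0   (y >= x)
                     [-1.0, -1.0]],    # -x - y <= 0  (y >= -x)
               b_ub=[2.0, 0.0, 0.0],
               bounds=[(None, None), (None, None)], method="highs")

# Scheme 2: variables (xp, xm) >= 0 with x = xp - xm, y = xp + xm; the
# objective -3(xp - xm) + (xp + xm) = -2*xp + 4*xm, constraint 2*xp <= 2.
res2 = linprog([-2.0, 4.0],
               A_ub=[[2.0, 0.0]], b_ub=[2.0],
               bounds=[(0, None), (0, None)], method="highs")

print(res1.fun, res2.fun)  # both should equal -2
```

Note that scheme 2 only represents y = |x| when x^{+} and x^{-} are not both positive; arguing that this holds at optimality (because d >= 0 and B >= 0) is the substance of part (ii).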

iii) Prove that if B has negative entries then the problem may have a local minimum that is not a global minimum, and hence there can be no equivalent LP formulation. [1]

3) LP – random stuff from class

a) Let f(x) denote the square of the distance of the point x from a fixed point B in R^{n}. Prove that f is a convex function. [3]
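One standard route (a sketch of the second-order argument, not the only possible proof) is to expand f and inspect its Hessian:

```latex
% With B fixed, f is a quadratic with constant Hessian:
f(x) = \|x - B\|^{2} = x^{T}x - 2B^{T}x + B^{T}B,
\qquad \nabla f(x) = 2(x - B),
\qquad \nabla^{2} f(x) = 2I \succeq 0,
```

and a twice-differentiable function whose Hessian is positive semidefinite everywhere is convex.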

b) Prove that either Ax >= b has a non-negative solution or there is a non-negative y such that y^{T}A <= 0 and y^{T}b > 0. [3] (Hint: use Farkas’ Lemma.)

c) Let M be an n x n matrix with nonnegative entries such that the sum of each column is 1. Prove that there exists a probability vector x (i.e. a vector with nonnegative entries whose sum of entries is 1) such that Mx = x. [4] (Hint: use duality and the fact that if the maximizing dual is feasible and bounded then the primal must be feasible and bounded as well.)
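The claim can be checked numerically as an LP feasibility problem (a sketch, assuming SciPy; the random column-stochastic matrix is generated for illustration): find x >= 0 with (M - I)x = 0 and Σ x_{i} = 1.

```python
# Sketch: the fixed-point claim as an LP feasibility problem.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 4
M = rng.random((n, n))
M /= M.sum(axis=0)           # normalize so each column sums to 1

# Feasibility LP: zero objective, (M - I)x = 0, sum(x) = 1, x >= 0.
A_eq = np.vstack([M - np.eye(n), np.ones((1, n))])
b_eq = np.concatenate([np.zeros(n), [1.0]])
res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n, method="highs")

x = res.x
print(res.status, np.abs(M @ x - x).max())  # status 0: a fixed point exists
```

The exercise, of course, asks for a proof that this LP is feasible for every such M, not a spot check.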

4) LP - applications

a) Prove that if, at optimality, the value of the dual (primal) variable is non-zero then the corresponding primal (dual) constraint is tight. [2]
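Complementary slackness can be observed numerically (a sketch on a made-up instance, assuming SciPy >= 1.7, whose HiGHS interface exposes dual values as `res.ineqlin.marginals`):

```python
# Sketch: complementary slackness on a small LP.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0])               # min -x1 - 2*x2
A = np.array([[1.0, 1.0],                # x1 + x2 <= 4
              [1.0, 0.0],                # x1 <= 3
              [0.0, 1.0]])               # x2 <= 2
b = np.array([4.0, 3.0, 2.0])
res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")

# For each primal inequality, at least one of {slack, dual value} is zero:
# a nonzero dual variable forces the corresponding constraint to be tight.
for slack, dual in zip(res.slack, res.ineqlin.marginals):
    assert slack < 1e-9 or abs(dual) < 1e-9
print(res.x, res.slack, res.ineqlin.marginals)
```

Here the optimum is x = (2, 2): the second constraint has slack 1, so its dual variable must vanish.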

b) von Neumann minmax theorem. Let x and y be probability vectors (i.e. nonnegative vectors whose sum of entries is 1). We will show that

max_{x}min_{y} y^{T}Ax = min_{y}max_{x} y^{T}Ax.

These are bilinear optimization problems with linear constraints, and we will show how they can be converted to LPs.

1. Give a game-theoretic interpretation of max_{x}min_{y} y^{T}Ax and min_{y}max_{x} y^{T}Ax. Why is this called a zero-sum game? [2]

2. Consider max_{x} min_{y} y^{T}Ax. Fix x and consider the LP: {min y^{T}(Ax) | Σ y_{i} = 1, y >= 0}. Now write its dual and add the constraints Σ x_{i} = 1, x >= 0. Let this be LP-maxmin. Give a game-theoretic interpretation of LP-maxmin. [2]

3. Write LP-minmax analogously and prove that LP-maxmin and LP-minmax are duals of each other. Thus, conclude that max_{x}min_{y} y^{T}Ax = min_{y}max_{x} y^{T}Ax. Show also that in a zero-sum game there is a unique value for all Nash equilibria. [3]

4. Prove that max_{x} min_{y} y^{T}Ax can achieve its optimal value with a y that has exactly one of its entries set to 1 and the rest set to 0. [1]
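The whole construction can be exercised on rock-paper-scissors (a sketch, assuming SciPy; by part 4 the inner min over y is attained at a coordinate vector, so max_{x} min_{y} y^{T}Ax = max {v : (Ax)_{i} >= v for all i, x a probability vector}):

```python
# Sketch: the value of rock-paper-scissors via LP-maxmin.
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.0, -1.0, 1.0],    # skew-symmetric payoff matrix
              [1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])
n = A.shape[1]
# Variable order: [x_1, ..., x_n, v]; linprog minimizes, so objective -v.
c = np.concatenate([np.zeros(n), [-1.0]])
A_ub = np.hstack([-A, np.ones((n, 1))])   # v - (A x)_i <= 0 for every i
b_ub = np.zeros(n)
A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)  # sum(x) = 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * n + [(None, None)], method="highs")
print(res.x[:n], -res.fun)  # uniform strategy, game value 0
```

For this game the uniform mixed strategy is the unique optimum and the value is 0, as expected for a symmetric zero-sum game.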

5) Consider an undirected graph G = (V, E). Define C(G) = max_{S} E(S)/|S| as its cohesiveness coefficient; see PS4 Q5 (here S is a subset of the vertices V, and E(S) is the number of edges both of whose endpoints are in S). In PS4 you computed the cohesiveness coefficient using flow. We will now see how to compute it using LP.

a) Write an LP to express the cohesiveness coefficient of G. You can write many different LPs, but I am looking for a simple one. You only need to use edge variables P_{uv} for (u,v) in E and node variables Q_{u} for u in V. Q can be a probability vector, i.e. nonnegative entries that sum to 1. Your objective function is simply the maximization of the sum of the edge variables. You need to figure out what constraints you need. [3]

b) Let OPT be the optimal value of your LP from 5a. Prove that OPT >= C(G). (Hint: given a maximum cluster S, show how to determine a feasible set of values for the node and edge variables that achieves C(G).) [3]

c) Prove that C(G) >= OPT. Hence conclude that your LP computes C(G). (Hint: There are many ways to prove this; one is outlined here. The hard part is defining the set S from the values of the edge and node variables at the optimum, i.e. rounding to an integral solution. For r in [0,1], let S(r) be the set of nodes u such that Q_{u} > r. Show that there exists an r such that S(r) achieves the optimum, and that you need only consider a polynomial number of r’s.) [4]
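Once you have settled on your constraints for 5a, a numerical spot check is easy to run (a sketch, assuming SciPy; the constraint set below, P_{uv} <= Q_{u} and P_{uv} <= Q_{v}, is one plausible candidate — justifying the right constraints is exactly what part (a) asks, so treat this as an assumption, not the answer):

```python
# Sketch: spot-checking a cohesiveness LP on a triangle plus a pendant edge,
# for which C(G) = max_S E(S)/|S| = 1 (take S to be the triangle).
import numpy as np
from scipy.optimize import linprog

V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (0, 2), (2, 3)]
nE, nV = len(E), len(V)

# Variable order: [P_e for e in E] + [Q_u for u in V]; maximize sum(P).
c = np.concatenate([-np.ones(nE), np.zeros(nV)])
rows = []
for i, (u, v) in enumerate(E):
    for end in (u, v):                  # candidate constraint: P_uv <= Q_end
        row = np.zeros(nE + nV)
        row[i], row[nE + end] = 1.0, -1.0
        rows.append(row)
A_eq = np.concatenate([np.zeros(nE), np.ones(nV)]).reshape(1, -1)  # sum Q = 1
res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(2 * nE),
              A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (nE + nV),
              method="highs")
print(-res.fun)  # should equal C(G) for this graph if the constraints are right
```

A disagreement between `-res.fun` and the C(G) you compute by hand on small graphs is a quick signal that the constraint set is wrong.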