Friday, Feb 10, 2:00PM, Chinmoy Dutta (Northeastern University):
Strong Partitions and Universal Steiner Trees for Graphs
[This talk takes place at Boston University Computer Science Department: MCS148, 111 Cummington Street.]
Thursday, Dec 15, 11:00 AM, Chinmoy Dutta (Northeastern University):
More on a problem of Zarankiewicz
[This talk takes place in West Village H, room 166.]
Wednesday, Nov 16, 11:30 AM, Emanuele Viola (Northeastern University):
The communication complexity of addition
[This talk takes place in West Village H, room 166.]
Friday, Nov 12, 3:00 PM, Scott Aaronson (MIT):
The Computational Complexity of Linear Optics
[This talk takes place at Boston University Computer Science Department: MCS135, 111 Cummington Street.]
Friday, October 2, 2:00 PM, Brent Heeringa (Williams College):
Data Structures for Dynamic Partial-Orders
[This talk takes place at Boston University Computer Science Department: MCS135, 111 Cummington Street.]
Thursday, September 17, 4:30 PM, Alexander Gamburd (UC Santa Cruz):
Expanders: from arithmetic to combinatorics and back
[This talk takes place in the Northeastern University Mathematics Department: 509 Lake Hall.
Tea at 4:00 p.m. in 544 Nightingale Hall]
May 8, 1 PM, Emanuele Viola (Northeastern University):
Bits vs. Trits
Apr 10, 4 PM, Lance Fortnow (Northwestern University):
Some Recent Results on Structural Complexity
[This talk takes place at Boston University Computer Science Department: MCS135, 111 Cummington Street.]
Oct 24, 4 PM, Emanuele Viola (Northeastern University):
Polynomials over {0,1}
[This talk takes place at Boston University Computer Science Department: MCS135, 111 Cummington Street.]
Recently, there has been renewed interest in basing cryptographic
primitives on weak secrets, where the only information about the
secret is some non-trivial amount of (min-)entropy. From a formal
point of view, such results require upper bounding the expectation of
some function f(X), where X is the weak source in question. We show an
elementary inequality which essentially upper bounds such "weak
expectation" by two terms, the first of which is *independent* of f,
while the second only depends on the variance of f under the *uniform*
distribution. Quite remarkably, as relatively simple corollaries of
this elementary inequality, we obtain some "unexpected" results, in
several cases noticeably simplifying/improving prior techniques for
the same problem. Examples include non-malleable extractors,
leakage-resilient symmetric encryption, seed-dependent condensers,
improved entropy loss for the leftover hash lemma, and an alternative to
the dense model theorem.
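A bound of this flavor follows in a few lines from Cauchy-Schwarz; the sketch below is only meant to convey why min-entropy plus a variance bound suffice, and it need not match the talk's exact statement or constants. Suppose X ranges over {0,1}^n with min-entropy k (so \Pr[X=x] \le 2^{-k} for every x) and U is uniform on {0,1}^n. Then

E[f(X)] - E[f(U)] = \sum_x (\Pr[X=x] - 2^{-n})(f(x) - E[f(U)])
                  \le \sqrt{\sum_x (\Pr[X=x] - 2^{-n})^2} \cdot \sqrt{\sum_x (f(x) - E[f(U)])^2}
                  \le \sqrt{2^{n-k} \cdot \mathrm{Var}_U[f]},

so E[f(X)] \le E[f(U)] + \sqrt{2^{n-k} \, \mathrm{Var}_U[f]}: the weak source enters only through the entropy deficit 2^{n-k}, and f enters the second term only through its variance under the uniform distribution.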
Traditionally, cryptography models an adversary as having only input/output access to a given algorithm.
A recent line of work known as leakage-resilient cryptography additionally gives the adversary the output
of a computationally limited leakage function applied to the algorithm's internal state (e.g. to the wires of a
circuit computing the function). A general goal in this area is to compile any circuit into a new "shielded"
circuit that remains secure under these attacks. In this work we give a new such compiler, producing
shielded circuits that withstand leakage from virtually any class of functions against which average-case
lower bounds are known, recovering and extending previous results. We also conjecture that it withstands
NC^1-leakage if NC^1 is not equal to L.
We build on previous constructions by Ishai et al. [Crypto ’03] and Faust et al. [Eurocrypt ’10]. We use and
extend the relationship between group theory and computation first established by Barrington [STOC '86].
In particular we exploit properties of the alternating group beyond what is sufficient for Barrington's theorem.
This is joint work with Emanuele Viola.
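As background for the group-theoretic machinery mentioned above, the sketch below demonstrates the classical gadget behind Barrington's theorem: two 5-cycles whose commutator is again a 5-cycle yield an AND gate, because the commutator collapses to the identity unless both input bits are 1. This is a generic illustration of the classical construction, not the compiler from the talk; the brute-force search is just a lazy way to find a suitable pair of cycles.

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations of {0,...,4}: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(5))

def inverse(p):
    inv = [0] * 5
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def is_5cycle(p):
    """A permutation of 5 points is a 5-cycle iff the orbit of 0 has size 5."""
    seen, i = set(), 0
    while i not in seen:
        seen.add(i)
        i = p[i]
    return len(seen) == 5

IDENTITY = (0, 1, 2, 3, 4)

# Find 5-cycles A, B whose commutator A B A^{-1} B^{-1} is again a 5-cycle.
five_cycles = [p for p in permutations(range(5)) if is_5cycle(p)]
A, B = next((a, b) for a in five_cycles for b in five_cycles
            if is_5cycle(compose(compose(a, b), compose(inverse(a), inverse(b)))))

def and_gadget(x, y):
    """Barrington's AND gadget: the word A^x B^y A^{-x} B^{-y} is the identity
    unless x = y = 1, in which case it is the (nontrivial) commutator."""
    g = A if x else IDENTITY
    h = B if y else IDENTITY
    return compose(compose(g, h), compose(inverse(g), inverse(h)))

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "nontrivial" if and_gadget(x, y) != IDENTITY else "identity")
```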
There currently exists a troubling gap between the constructions of
pseudorandom functions (PRF) and those of their popular,
bounded-input-length counterparts, including block ciphers and hash
functions. This gap is both quantitative, because these counterparts
are more efficient than PRF in various ways, and methodological,
because these counterparts are often constructed via the
substitution-permutation network paradigm (SPN) which has not been
used to construct PRF. We take a step towards closing this gap by
giving new candidate PRF that are inspired by the SPN paradigm. This
paradigm involves a "substitution function" (S-box). Our main
candidates are
1. An SPN whose S-box is a random function on b bits, given as part of
the seed. We prove unconditionally that this candidate resists attacks
that run in time at most 2^(Omega(b)). Setting b = omega(log n) we
obtain an inefficient PRF, which seems to be the first such
construction using the SPN paradigm.
2. An SPN whose S-box is field inversion, a common choice in practical
constructions. This candidate is computable with Boolean circuits of
size n * polylog(n), and we prove that it has exponential security
against linear and differential cryptanalysis.
3. A candidate using an extreme setting of the SPN parameters (one
round, one S-box), where the S-box is again field inversion. This
candidate is also computable by circuits of size n * polylog(n), and
we prove that it has exponential security against parity tests that
look at 2^(0.9n) outputs.
Assuming the security of our candidates, our work narrows the gap
between the "Natural Proofs barrier" of Razborov and Rudich and
existing lower bounds, in three models: unbounded-depth circuits, TC0
circuits, and Turing machines.
This is joint work with Emanuele Viola.
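For readers unfamiliar with the paradigm, the toy sketch below shows the generic shape of an SPN round: an S-box applied blockwise (here a random permutation given as part of the seed, loosely in the spirit of candidate 1), followed by a fixed bit permutation that mixes across blocks. It only illustrates the structure; the parameters, mixing layer, and round-key handling of the actual candidates differ.

```python
import random

B = 8            # S-box width in bits (toy choice)
NBLOCKS = 4      # number of B-bit blocks in the state
ROUNDS = 3       # toy round count
NBITS = B * NBLOCKS

rng = random.Random(0)

# A random S-box (a permutation of {0,...,2^B - 1}) given as part of the seed.
SBOX = list(range(2 ** B))
rng.shuffle(SBOX)

# A fixed permutation of bit positions, mixing bits across blocks.
PERM = list(range(NBITS))
rng.shuffle(PERM)

def substitute(blocks):
    """'S' layer: apply the S-box to every block independently."""
    return [SBOX[b] for b in blocks]

def permute(blocks):
    """'P' layer: permute the bit positions of the whole state."""
    bits = [((b >> i) & 1) for b in blocks for i in range(B)]
    shuffled = [bits[PERM[i]] for i in range(NBITS)]
    return [sum(shuffled[j * B + i] << i for i in range(B)) for j in range(NBLOCKS)]

def spn(blocks):
    """A bare-bones SPN: alternate substitution and permutation layers.
    (Real constructions also mix in round keys, omitted here for brevity.)"""
    for _ in range(ROUNDS):
        blocks = permute(substitute(blocks))
    return blocks

print(spn([1, 2, 3, 4]))
```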
Spring 2012 Talk Abstracts
Mar 19, Thomas Vidick (MIT):
Noncommutative Grothendieck inequalities and application to quantum two-player games
Grothendieck's celebrated inequality is a fundamental result in Banach space theory that has recently found surprising applications in different areas, ranging from theoretical computer science (e.g. work of Alon and Naor showing how it could be used to derive a constant-factor approximation algorithm for the cut-norm), to quantum information theory (e.g. work of Tsirelson using it to derive upper bounds on the maximal violation of certain Bell inequalities by quantum mechanics).
In this talk I will briefly survey these results, before discussing a generalization of Grothendieck's inequality to non-commutative variables due to Pisier and Haagerup. I will motivate this inequality by showing that, just like its commutative counterpart, it leads to an efficient constant-factor approximation algorithm for a certain hard optimization problem. I will also describe a new application of the non-commutative Grothendieck inequality to the computational complexity of quantum two-player games.
Based on joint work with Oded Regev.
The standard framework in cryptography generally assumes that an adversary A attacks a cryptographic device D by having access *only* to D's input/output behavior. For concreteness, one can think of D as performing encryptions and decryptions under some unknown secret key K; then A is allowed to obtain encryptions and decryptions under K of messages of its choice, but may not ask for things like the Hamming weight of K, or the parity of K, etc. However, there have been successful real-world attacks that deviate from this framework, such as the much-publicized timing attack on RSA (Kocher, 1996). In the last 10 years, researchers have begun formally addressing the problem of constructing devices D that are secure even against adversaries that may obtain some additional "leaked" information about D's internal computation. In this talk I will go over the way(s) in which this problem is formalized, and present two constructions (one classic, one very new) that achieve varying degrees of leakage-resistance. The talk should be accessible even with little to no background in cryptography.
Consider an agency holding a large database of sensitive personal information -- medical records, census survey answers, web search records, or genetic data, for example. The agency would like to discover and publicly release global characteristics of the data (say, to inform policy and business decisions) while protecting the privacy of individuals' records. This problem is known variously as "statistical disclosure control", "privacy-preserving data mining" or simply "database privacy".
In this talk, I will describe "differential privacy", a notion which emerged from a recent line of work in theoretical computer science that seeks to formulate and satisfy rigorous definitions of privacy for such statistical databases. Satisfactory definitions had previously proved elusive largely because of the difficulty of reasoning about "side information" -- knowledge available to an attacker through other channels. Differential privacy provides a meaningful notion of privacy in the presence of arbitrary side information. After explaining some attacks that motivate our approach, I will sketch some of the basic techniques for achieving differential privacy as well as recent results on differentially private statistical analysis and learning.
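For reference, here is the standard formal definition (stated from general knowledge, with the usual conventions): a randomized mechanism M is \epsilon-differentially private if for every pair of databases D, D' differing in a single individual's record and every set S of possible outputs,

\Pr[M(D) \in S] \le e^{\epsilon} \cdot \Pr[M(D') \in S].

Because the guarantee compares runs on neighboring databases rather than referring to what the attacker already knows, it holds regardless of the attacker's side information.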
Given a graph G and a root node r, the Universal Steiner Tree (UST)
problem seeks a single spanning tree $T$ of minimum stretch, where the stretch of
T is defined to be the maximum ratio, over all subsets of terminals X, of the
cost of the sub-tree $T_X$ of T that connects X to r to the cost of an
optimal Steiner tree connecting X to r. USTs are important for data aggregation
problems where computing the Steiner tree from scratch for every input instance of
terminals is costly, as for example in low energy sensor network applications.
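In symbols (just restating the definition above), writing OPT(X) for the cost of an optimal Steiner tree connecting the terminal set X to the root r, the quantity being minimized is

\mathrm{stretch}(T) = \max_{X} \frac{\mathrm{cost}(T_X)}{\mathrm{OPT}(X)},

where the maximum ranges over all subsets X of terminals; a UST must therefore be competitive for every terminal set simultaneously.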
We provide polynomial time UST constructions with $2^{O(\sqrt{\log n})}$-stretch
for general graphs, and polylogarithmic-stretch for minor-free graphs. Our work is based
on a certain kind of strong hierarchical graph partition.
I will outline our construction and show the connections of this problem with graph partitions.
This is based on joint work with Costas Busch, Jaikumar Radhakrishnan, Rajmohan Rajaraman and
Srivathsan Srinivasagopalan.
With the advent of cloud computing, our view of cryptographic protocols has changed dramatically. In this talk, I will give an overview of some of the newer challenges that we face in cloud cryptography and outline some of the techniques used to solve these problems. In particular, a few questions that I will address are:
1) How can we store sensitive data in the cloud, in an encrypted manner, and yet allow controlled access to certain portions of this data?
2) How can we ensure reliability of data across cloud servers that may be connected by only a low-degree communication network, even when some of the servers may become corrupted?
3) How can users authenticate themselves to the cloud in a user-friendly way?
This talk will assume no prior knowledge of cryptography and is based on works that appear at TCC 2012, ICALP 2010 and STOC 2010.
Cryptography requires secret keys, and we usually assume that these can be generated uniformly at random and kept perfectly secret from an attacker. Nevertheless, there are many scenarios where we need to rely on "imperfect" secret keys for cryptography. This includes passwords and biometrics, which are not uniformly random, but come from some unspecified distribution about which we know very little. It also includes scenarios where partial information about the secret key can leak to an attacker, for example, through various "side-channel attacks". Such attacks are being used to break real-world implementations of current cryptosystems, and are the major source of difficulty when implementing cryptography in practice. My talk will explore new methods for constructing cryptosystems that remain provably secure without requiring the perfect secrecy of their secret keys.
Consider the following model. An infectious disease (or some other item) is to be transmitted on a graph G(V, E), starting from a single vertex. Once a vertex is infected, it stays infected for some time t. An uninfected vertex can be infected by each of its infected neighbors with rate B. Once a vertex is no longer infected, it can be infected again. It is known that the epidemic will die out in finite time, and that the value of the ratio B/t determines many of the properties of the epidemic, such as whether it will die out quickly or in exponential time, as well as the fraction of vertices infected at any given time. However, to date there have been no rigorous results for the questions of 1) How long will it take for the epidemic to infect a constant fraction, ε|V|, of the vertices? 2) Assuming the epidemic persists in a meta-stable state, will every node be infected at some point, and how long will this take? We will introduce a related process, in which an infected vertex infects anywhere from 1 to k vertices by sampling from its neighbors without replacement, and show cover time results for this process. We will then attempt to extend the results for the related process to the original SIS epidemic model.
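To fix ideas, here is a toy discrete-time approximation of the SIS dynamics just described (continuous-time infection rates replaced by a per-step infection probability beta and a fixed infectious period); it is purely illustrative and is not the process analyzed in the talk.

```python
import random

def sis_epidemic(adj, beta, t_infect, start, max_steps=10_000, rng=random.Random(0)):
    """Toy discrete-time SIS dynamics: an infected vertex stays infected for
    t_infect steps, and in each step infects each susceptible neighbor
    independently with probability beta.  Returns the set of vertices ever
    infected and the number of steps until the epidemic died out."""
    timer = {start: t_infect}          # vertex -> remaining infectious steps
    ever_infected = {start}
    for step in range(1, max_steps + 1):
        newly = set()
        for v in timer:
            for u in adj[v]:
                if u not in timer and rng.random() < beta:
                    newly.add(u)
        # decrement timers; recovered vertices may be reinfected later
        timer = {v: r - 1 for v, r in timer.items() if r - 1 > 0}
        for u in newly:
            timer[u] = t_infect
            ever_infected.add(u)
        if not timer:
            return ever_infected, step
    return ever_infected, max_steps

# Example: a cycle on 100 vertices
n = 100
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
covered, steps = sis_epidemic(adj, beta=0.3, t_infect=3, start=0)
print(len(covered), "vertices ever infected in", steps, "steps")
```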
In 1982, Yao introduced the field of "secure computation", in which n
parties, holding private inputs x_1,...,x_n, engage in a protocol to
compute some function f(x_1,...,x_n), while revealing nothing more than the
output. In the decade that followed, the topic of secure computation was
thoroughly explored, and almost every theoretical question was answered.
In the past decade, the focus has shifted towards improving efficiency, and
building implementations. Today, secure computation is poised to become an
extremely powerful tool with wide-ranging applications. Both bodies of
research were essential in bringing this about.
In this talk, we review several recent results. We first provide insight
into one of the few remaining theoretical questions in secure computation.
We then demonstrate improved efficiency for secure computation in several
particular settings of interest. On the theoretical side, we discuss the
problem of "fairness" in secure computation, which is a security property
guaranteeing simultaneous output delivery to both parties. Until recently,
this was widely believed to be impossible to achieve; we demonstrate that
some interesting functions can in fact be computed with complete fairness.
In the second half of the talk, we will focus on several settings that
arise in more modern, real-world environments, and show how we can tailor
the theoretical results to greatly improve efficiency. In presenting this
selection of research, we hope to demonstrate the importance of both
foundational and applied cryptographic theory.
We show tight necessary and sufficient conditions on the sizes of
small bipartite graphs whose union is a larger bipartite graph that
has no large bipartite independent set. Our main result is a common
generalization of two classical results in graph theory: the theorem
of Kővári, Sós, and Turán on the minimum number of edges in
a bipartite graph that has no large independent set, and the theorem
of Hansel (also Katona and Szemerédi and Krichevskii) on the sum
of the sizes of bipartite graphs that can be used to construct a graph
(not necessarily bipartite) that has no large independent set.
Joint work with Jaikumar Radhakrishnan (Tata Institute of Fundamental
Research, Mumbai)
A pseudorandom generator is an algorithm that outputs a large amount of "random-looking" data, while taking as input only a small amount of truly random data.
In this talk I will explain how this idea is formalized, and highlight some applications of PRGs.
Specifically, I will mention one way in which PRGs are used in cryptography, and discuss the implications that PRGs have for resolving the P vs. NP question.
Along the way I'll touch on some of my past and ongoing research in this area (joint with Emanuele Viola).
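For concreteness, one standard formalization (stated from general knowledge): a function G : {0,1}^s -> {0,1}^n with n > s is an (\epsilon, t)-pseudorandom generator if every distinguisher D running in time t satisfies

| \Pr_{x \sim U_s}[D(G(x)) = 1] - \Pr_{y \sim U_n}[D(y) = 1] | \le \epsilon.

Different applications instantiate t and \epsilon differently; for example, cryptographic uses ask for security against all polynomial-time distinguishers.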
We consider information spreading (also known as gossip) in dynamic networks. In the k-gossip problem, there are k tokens that are initially present in some nodes of a network and the goal is to disseminate the k tokens to all nodes in as few rounds of distributed computation as possible. The problem is especially challenging in dynamic networks where the network topology can change from round to round and can be controlled by an adaptive adversary.
The focus of this talk is on the power of token-forwarding algorithms, which do not manipulate tokens in any way other than storing, copying, and forwarding them. We first consider a worst-case adversarial model introduced by Kuhn, Lynch, and Oshman in which the communication links for each round are chosen by an adversary, and nodes do not know who their neighbors for the current round are before they broadcast their messages. Our main result is an Omega(nk/log(n)) lower bound on the number of rounds needed for any token-forwarding algorithm to solve k-gossip. This resolves an open problem posed by Kuhn-Lynch-Oshman and, taken together with a recent work of Haeupler and Karger, establishes a nearly linear-factor worst-case gap between token-forwarding algorithms and those based on network coding.
We next show that the above "quadratic" barrier can be overcome in an offline version of the problem, where the adversary has to commit to all the topology changes in advance at the beginning of the computation. We show that token-forwarding algorithms can achieve O(min{nk, n*sqrt{k log(n)}}) time in the offline model. Time-permitting, we will also present a bicriteria approximation algorithm for the offline problem.
Joint work with Chinmoy Dutta (Northeastern), Gopal Pandurangan (Nanyang Technological University, Singapore), and Zhifeng Sun (Northeastern)
Each of 3 players receives an integer of absolute value at most 2^n. The players
want to know if the sum of their numbers is bigger than 0. We prove that the minimal
amount of randomized communication needed is Theta(log n). Both the upper and the lower bounds were
open for more than 15 years.
The inverse optimization problem for potential energy of spherical
codes asks: given a spherical code C, does there exist a radial pair
potential function f for which C minimizes f-potential energy? I will
describe necessary and sufficient conditions, a heuristic algorithm to
find the potential function in a fixed space of functions, and also
give examples of some familiar configurations for which we can find
natural potentials which are provably minimized at these codes. This
is joint work with Henry Cohn.
Yudin applied linear programming bounds to the question of energy
minimization for spherical codes. I will describe an application which
proves the existence of some universally optimal spherical codes
(which minimize a large class of potential energy functions). The
proof involves Hermite interpolation, as well as spherical design
properties of these codes. I'll also illustrate a generalization of
Yudin's theorem to Euclidean space and some natural conjectures. This
is joint work with Henry Cohn.
How many points can be placed on a sphere in n dimensions, which are
all separated by at least some given angular distance? This classical
problem in discrete geometry is quite hard in general, although it has
some remarkable answers for some special values of the dimension and
angular distance. I'll talk about one of the best techniques for upper
bounds, namely linear programming bounds developed by Delsarte,
Levenshtein and others. These exploit techniques from representation
theory/harmonic analysis. In particular, for some of these
remarkable configurations, the upper bounds are sharp, which enables
one to prove uniqueness, not just optimality. I'll also indicate how
linear programming bounds are used in coding theory and in the sphere
packing problem, for instance, in the proof that the Leech lattice is
the densest lattice packing in 24 dimensions. This talk will be fairly
accessible, assuming minimal background.
I'll synthesize work showing that randomly sampling the edges of an undirected
graph preserves cut values. I'll give three distinct proofs, based on random
contraction, tree packing, and network reliability, that the values of cuts
are tightly concentrated around their expectations when those expectations are
sufficiently large. I'll give a combinatorial construction for sparsifying any
n-vertex undirected graph down to O(n log n) edges with only small
perturbations in cut value and show how it can be used in fast algorithms for
approximate and exact maximum flow computations. While one can achieve
slightly better sparsification using algebraic techniques, I believe these
combinatorial methods offer useful insights and may yield more in the future.
The talk is self-contained, requiring only elementary knowledge of
combinatorics and probability.
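As a concrete companion to the random-contraction proof mentioned above, here is a minimal implementation of Karger's contraction algorithm (an illustration of the classical primitive, not of the sparsification construction itself). Each run returns some cut, and a single run finds a fixed minimum cut with probability at least 2/(n(n-1)), so repetition drives the failure probability down.

```python
import random

def contract_min_cut(edges, n, rng):
    """One run of Karger's random-contraction algorithm on a multigraph given
    as a list of edges over vertices 0..n-1.  Returns the size of the cut found."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    remaining = n
    while remaining > 2:
        u, v = edges[rng.randrange(len(edges))]
        ru, rv = find(u), find(v)
        if ru != rv:                       # self-loops are simply skipped
            parent[ru] = rv
            remaining -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

def min_cut(edges, n, trials=200, rng=random.Random(0)):
    """Repeat contraction; each run succeeds with probability >= 2/(n(n-1))."""
    return min(contract_min_cut(edges, n, rng) for _ in range(trials))

# Example: two triangles joined by a single bridge edge -> minimum cut 1
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(min_cut(edges, 6))
```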
Recommender systems help users identify objects they may find
interesting, where objects may be books to read, films to watch, web
pages to browse, and even other users to contact. Formally, the input
to the system is the known preferences of the users, as deduced
somehow from their past choices. While many interesting ideas have
been developed to analyze characteristics of users (or objects) based
on past choices, this approach suffers from a foundational theoretical
flaw: feedback is ignored. Put simply, the setting is such that
choices determine recommendations, but recommendations are supposed to
influence choices. In a recent line of work this gap was bridged by a
simple algorithmic model that assumes that the system may ask the
user's opinion on any object, and not only make recommendations about
supposedly `nice' objects. Typically, algorithms in this model ask
users for their opinions on controversial objects, and in return, the
output consists of almost complete reconstruction of user
preferences. In this talk we discuss this model and survey some basic
and recent results. Surprisingly, it turns out that there are
algorithms that can reconstruct user preferences (with high
probability), using only a polylogarithmic factor more questions than
the minimum possible.
We obtain the first deterministic extractors for sources generated
(or sampled) by small circuits of bounded depth. Our main results are:
(1) We extract k poly(k/(nd)) bits with exponentially small error
from n-bit sources of min-entropy k that are generated by functions
that are d-local, i.e., each output bit depends on at most d input bits.
In particular, we extract from NC^0 sources, corresponding to d = O(1).
(2) We extract k poly(k/n^(1.001)) bits with super-polynomially
small error from n-bit sources of min-entropy k that are generated
by poly(n)-size AC^0 circuits.
As our starting point, we revisit the connection by Trevisan and Vadhan
(FOCS 2000) between circuit lower bounds and extractors for sources
generated by circuits. We note that such extractors (with very weak parameters) are equivalent to lower bounds for generating distributions (FOCS 2010;
with Lovett, CCC 2011). Building on those bounds, we prove that the
sources in (1) and (2) are (close to) a convex combination of high-entropy
"bit-block" sources. Introduced here, such sources are a special case of
affine ones. As extractors for (1) and (2) one can use the extractor for
low-weight affine sources by Rao (CCC 2009).
Along the way, we exhibit an explicit n-bit Boolean function b
such that poly(n)-size AC^0 circuits cannot generate the distribution
(X,b(X)), solving a problem about the complexity of distributions.
Independently, De and Watson (ECCC TR11-037) obtain a result similar to (1)
in the special case d = o(log n).
We provide a generalization of von Neumann's minmax theorem to
zero-sum multi-player games. The games we consider are polymatrix - that
is, graphical games in which every edge is a two-player game between
its endpoints - in which every outcome has zero total sum of players'
payoffs. Given that three-player zero-sum games are already
PPAD-complete, the class of games we consider here, i.e. those with
pairwise separable utility functions, defines essentially the broadest
expanse of multi-player zero-sum games to which we can hope to push
tractability results. Our generalization of the minmax theorem implies
convexity of equilibria, polynomial-time tractability, and convergence
of no-regret learning algorithms to Nash equilibria.
We also explore generalizations of our result to classes of
non-zero-sum multi-player games. A natural candidate is polymatrix
games with strictly competitive games on their edges. In the two
player setting, such games are minmax solvable. Surprisingly we show
that a polymatrix game comprising strictly competitive games on its
edges is PPAD-complete to solve, proving a striking difference in the
complexity of networks of zero-sum and strictly competitive games.
Finally, we look at the role of coordination in networked
interactions, studying the complexity of polymatrix games with a
mixture of coordination and zero-sum games on their edges. We show
that finding a pure Nash equilibrium in coordination-only polymatrix
games is PLS-complete; hence, computing a mixed Nash equilibrium is in
PLS $\cap$ PPAD, but it remains open whether the problem is in P. If on the
other hand coordination and zero-sum games are combined, we show that
the problem becomes PPAD-complete, establishing that coordination and
zero-sum games achieve the full generality of PPAD.
Joint work with Constantinos Daskalakis.
Feb 24, Joseph Mitchell (Stony Brook):
Approximation Algorithms for Traveling Salesman Problem with
Neighborhoods and Related Geometric Network Optimization
Problems
The Euclidean travelling salesman problem with
neighborhoods (TSPN) seeks a shortest path or cycle that visits a
given collection of $n$ regions (neighborhoods), $R_1, R_2, ...,
R_n$. The TSPN is a generalization of the classic TSP (when $R_i$ are
points) that arises in several other related geometric optimization
problems. We present methods that yield provable approximation
guarantees for the TSPN in the plane. We also discuss some problems
related to the TSPN, including the watchman route problem (compute a
shortest path/cycle that allows a robot to see all of a multiply
connected domain) and the relay placement problem for building a
connected communication network. Key to some of the results is the
$m$-guillotine technique, which gives a tool for obtaining
approximation algorithms in many network optimization
problems.
Polynomial Identity Testing (PIT for short) is
the problem of efficiently and deterministically deciding whether a
given (explicitly or via black-box access) arithmetic circuit computes
the zero polynomial. This is one of the most fundamental problems in
computational complexity and it is strongly related to the problem of
proving lower bounds for arithmetic circuits.
In this talk I will survey several results on the PIT
problem. Specifically I will discuss the relation between
derandomization of PIT and circuit lower bounds, explain the
importance of derandomizing PIT in the bounded depth model and give an
overview of the ideas behind several recent deterministic PIT
algorithms, designed for restricted circuit classes. At the end of the
talk, time permitting, I will present what I find to be the most
accessible important open questions in the field.
Partially based on joint works with Zeev Dvir, Zohar Karnin, Partha
Mukhopadhyay, Ran Raz, Ilya Volkovich and Amir Yehudayoff.
We
examine the complexity of increasing the stretch of pseudorandom
generators (PRGs). We will introduce the concept of a PRG, revisit
the classic construction of "high-stretch" PRGs from "low-stretch"
PRGs (which is highly sequential in nature), and discuss the
(im)possibility of creating a more parallelizable
construction.
The talk is based on a forthcoming paper by myself and Viola in the
Theory of Cryptography Conference
2011. (http://www.ccs.neu.edu/home/enmiles/stre.pdf)
We discuss a very simple and elegant combinatorial
proof of the Chernoff-Hoeffding bound due to Impagliazzo and
Kabanets. Unlike previous proofs of such bounds, which use the
moment-generating function, this proof uses a simple counting
argument. Moreover the proof is constructive in the sense that if the
sum of given random variables is not concentrated around the
expectation, then with high probability one can efficiently find a
subset of the random variables that are statistically dependent. We
also discuss how this proof technique yields threshold direct product
theorems from direct product theorems in computational complexity
theory.
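For orientation, here is one standard form of the bound in question (stated from general knowledge): if X_1, ..., X_n are independent random variables with each X_i \in [0,1] and S = X_1 + ... + X_n, then for every \epsilon > 0

\Pr[ S \ge E[S] + \epsilon n ] \le e^{-2 \epsilon^2 n},

and symmetrically for deviations below the mean. The Impagliazzo-Kabanets argument recovers bounds of this type (possibly with different constants) by a counting argument over random subsets of the variables instead of bounding the moment generating function E[e^{\lambda S}].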
Last week, on November 8, 2010, Ryan Williams announced the solution of an embarrassing,
20-year-old open problem in Complexity Theory: NEXP is not in ACC0.
http://www.cs.cmu.edu/~ryanw/acc-lbs.pdf
We discuss background and his proof, which is a masterpiece of complexity-theoretic reasoning.
We propose an optical experiment that might be accessible with current
technology, and argue that, if the experiment succeeded, it could
provide strong evidence that superpolynomial speedups are achievable
via quantum computation. The experiment involves generating
single-photon states, sending the photons through a linear-optical
network, and then measuring the number of photons in each location.
The resources we consider are not known or believed to be universal
for quantum computation; nevertheless, we give evidence that they
would allow the solution of classically-intractable sampling and
search problems.
Recent work has shown that finding a mixed Nash equilibrium in normal form games is PPAD-complete,
while the pure Nash equilibrium problem in selfish routing games is PLS-complete. On the other hand,
there are important classes of games, e.g. simple stochastic games, and fixed point problems, e.g.
fixed points of contraction maps, that appear easier. For these problems the PPAD and PLS machinery
seems inadequate to provide an accurate complexity theoretic characterization; at the same time,
polynomial-time algorithms have eluded researchers for decades. We provide complexity theoretic
machinery that could lead to completeness results at the interface of PPAD and PLS, in the form of
geometrically embedded local search. We also survey background results on the complexity of Nash equilibria
and Brouwer fixed points.
We consider noisy distributed computation in the setting of
wireless sensor networks, where processors are placed randomly on a
unit square. They communicate with each other by broadcasting messages
in order to compute some function whose input is distributed among
them. Constraints on power usage limit the range and the number of
transmissions. Furthermore, messages often get corrupted during
transmission. Recently, protocols have been proposed to compute
efficiently in this model. We show the first lower bounds in this
model: in order to compute the PARITY or MAJORITY of N bits reliably,
at least N log log N transmissions are necessary. This shows that the
currently known protocols for these functions are optimal. The
techniques developed in this work can also be used to prove lower
bounds for noisy decision trees.
Inspired by the recent "2-NASH is PPAD-complete" breakthrough we show that a
number of other problems are hard as well. Here is a list of the problems
along with their area of motivation/significance:
1. Fractional Stable Paths Problem - Internet Routing
2. Core of Balanced Games - Economics and Game Theory
3. Scarf's Lemma - Combinatorics
4. Hypergraph Matching - Social Choice and Preference Systems
5. Fractional Bounded Budget Connection Games - Social Networks
6. Strong Fractional Kernel - Graph Theory
We will define these problems, state our results, give a flavor of
our reductions and conclude with some open problems.
Joint work with S. Kintali, L. Poplawski, R. Rajaraman and S. Teng.
A long-standing lacuna in the field of computational learning theory is the learnability of succinctly
representable monotone Boolean functions, i.e., functions whose output cannot decrease when an input bit is flipped from 0 to 1.
We show that Boolean functions computed by polynomial-size monotone circuits are hard to learn assuming
the existence of one-way functions. Having shown the hardness of learning general polynomial-size monotone
circuits, we show that the class of Boolean functions computed by polynomial-size depth-3 monotone circuits
is hard to learn using statistical queries. As a counterpoint, we give a statistical query learning algorithm
that can learn random polynomial-size depth-2 monotone circuits (i.e., monotone DNF formulas).
Privacy is a fundamental problem in modern data analysis. We describe "differential privacy", a mathematically
rigorous and comprehensive notion of privacy tailored to data analysis. Differential privacy requires, roughly,
that any single individual's data have little effect on the outcome of the analysis. Given this definition, it is
natural to ask: what computational tasks can be performed while maintaining privacy? In this talk, we focus on the
tasks of machine learning and releasing contingency tables.
Learning problems form an important category of computational tasks that generalizes many of the computations
applied to large real-life datasets. We examine what concept classes can be learned by an algorithm that preserves
differential privacy. Our main result shows that it is possible to privately agnostically learn any concept class
using a sample size approximately logarithmic in the cardinality of the hypothesis class. This is a private analogue
of the classical Occam's razor result.
Contingency tables are the method of choice of government agencies for releasing statistical summaries of categorical
data. We provide tight bounds on how much distortion (noise) is necessary in these tables to provide privacy guarantees
when the data being summarized is sensitive. Our investigation also leads to new results on the spectra of random
matrices with correlated rows.
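For intuition about what "adding noise" means here, the sketch below shows the textbook Laplace mechanism for releasing noisy counts. It is a generic illustration of how differential privacy is usually achieved for count data, not the construction whose distortion the talk's lower bounds address, and the sensitivity bookkeeping for full contingency tables is more involved.

```python
import math
import random

def laplace(scale, rng):
    """Draw Laplace(0, scale) noise by inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_counts(counts, epsilon, sensitivity=1.0, rng=random.Random(0)):
    """Release counts with Laplace noise of scale sensitivity/epsilon.
    'sensitivity' should bound how much one individual's data can change the
    released counts (1 for a histogram where each person lands in one cell,
    under the add/remove-one-record convention)."""
    scale = sensitivity / epsilon
    return [c + laplace(scale, rng) for c in counts]

print(private_counts([120, 45, 3], epsilon=0.5))
```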
Many important problems in discrete mathematics and theoretical computer science, such as attacking the RSA
cryptosystem or decoding error-correcting codes, can be reduced to finding small solutions of polynomial equations.
In this talk, I will survey a general approach that has been independently developed in several different areas, and
I will discuss various applications of these techniques, including recent joint work with Nadia Heninger on a
number-theoretic analogue of the problem of decoding algebraic-geometric error-correcting codes. No special background
in cryptography or coding theory will be assumed.
In Property Testing the goal is to quickly distinguish objects that satisfy a property from objects that need to be
changed in a large fraction of places in order to satisfy the property. In this context, Locally Testable Codes (LTCs)
are error-correcting codes for which membership can be decided by reading only a small section of the input. Recent
studies of LTCs focus on identifying what attributes of codes allow them to be testable by local inspection. In particular,
the ``symmetries'' of a code seem to play an important role in providing sufficient conditions for testing. Kaufman and Sudan
supported this view by showing that the duals of linear codes that have the ``single local orbit property'' under the
affine symmetry group are locally testable. A code has the single local orbit property if it is specified by a single local
constraint and its translations under the symmetry group of the code.
In this talk I will present a result motivated by questions arising in previous studies of LTCs regarding error-correcting
codes that have the single local orbit property. We show that the dual of every ``sparse'' binary code whose coordinates are
indexed by elements of F_{2^n} for prime n, and whose symmetry group includes the group of non-singular affine transformations
of F_{2^n}, has the single local orbit property. (A code is said to be sparse if it contains polynomially many codewords in
its block length.) In particular, this class includes the dual-BCH codes for whose duals (i.e., for BCH codes) simple bases
were not known. Our result gives the first short (O(n)-bit, as opposed to the natural exp(n)-bit) description of a low-weight
basis for BCH codes.
Our techniques involve the use of recent results from additive number theory to prove that the codes we consider, and related
codes emerging from our proofs, have high distance. We then combine these with the MacWilliams identities and some careful
analysis of the invariance properties to derive our results.
Joint work with Tali Kaufman and Madhu Sudan.
Apr 8, Assaf Naor (NYU):
L_1 embeddings of the Heisenberg group and fast estimation of graph isoperimetry
We will show that any L_1-valued mapping of an epsilon net in the unit ball of the Heisenberg group incurs bi-Lipschitz distortion
(log(1/epsilon))^c, where c is a universal constant. We will also explain how this result implies an exponential improvement to
the best known integrality gap for the Goemans-Linial semidefinite relaxation of the Sparsest Cut problem.
Recently, some mainstream e-commerce web sites have begun using "pay-per-bid" auctions to sell items, from video games
to bars of gold. In these auctions, bidders incur a cost for placing each bid in addition to (or sometimes in lieu of)
the winner's final purchase cost. Thus even when a winner's purchase cost is a small fraction of the item's intrinsic
value, the auctioneer can still profit handsomely from the bid fees. Our work provides novel analyses for these auctions,
based on both modeling and datasets derived from auctions at Swoopo.com, the leading pay-per-bid auction site. While
previous modeling work predicts profit-free equilibria, we analyze the impact of information asymmetry broadly, as well
as Swoopo features such as bidpacks and the Swoop It Now option specifically. We find that even small asymmetries across
players (cheaper bids, better estimates of other players' intent, different valuations of items, committed players willing
to play "chicken") can increase the auction duration significantly and thus skew the auctioneer's profit disproportionately.
We discuss our findings in the context of a dataset of thousands of live auctions we observed on Swoopo, which enables us
also to examine behavioral factors, such as the power of aggressive bidding. Ultimately, our findings show that even with
fully rational players, if players overlook or are unaware of any of these factors, the result is outsized profits for
pay-per-bid auctioneers.
We are given a large mathematical object, such as an output of a computer program purported to compute a 'nice' function
of its inputs. We need to decide whether this object indeed has the stipulated properties. However, it is too large to
read in its entirety. We are only allowed to read several small (local) fragments. Can we decide, with any degree of certainty,
what it looks like globally?
This is a very general question, and the answer depends on many things, such as the structure of the function, the access
we are allowed, the degree of certainty we are willing to live with, etc.
We will describe one research strand, originally motivated by program-checking and program verification, which, along the way,
became entangled with several other major research threads. One such thread in complexity theory develops methods to verify
whether a given proof of a mathematical claim is correct, while looking at only a few small fragments of the proof. Another,
important both to complexity and to additive number theory, attempts to decide which multivariate functions over a finite group
locally look like low-degree polynomials.
Expanders are highly-connected sparse graphs with wide-ranging applications in computer science and mathematics.
After explaining what expanders are and describing their basic properties, I will focus on recent robust constructions
which crucially use tools from arithmetic combinatorics, and discuss some of the applications.
Statistical machine learning has recently presented a very interesting collection of challenging optimization problems.
Many of these problems belong to well studied classes of convex optimization problems, such as linear and quadratic
optimization problems or semidefinite programming problems. It is well known that these classes of problems can be solved
(up to \epsilon accuracy) by interior point methods in O(log(1/\epsilon)) iterations (hence in polynomial time). However,
unlike traditional applications of these convex problems, the ML applications are very large scale with completely dense
data matrices. This property makes the use of classical second order optimization methods, such as interior point methods,
prohibitive. Recently, fast first order methods have been suggested by Nesterov and others, which obtain an \epsilon-optimal
solution in O(1/\sqrt{\epsilon}) iterations (vs. the O(1/\epsilon) complexity of gradient descent). We will discuss extending
theoretical guarantees of fast existing first order methods to alternating minimization methods and other first order methods,
for which complexity results did not exist before. Alternating minimization methods are particularly suitable for many ML
applications, which have the form \min_x f(x)+g(x), where f(x) is a convex smooth function and g(x) is convex, possibly
nonsmooth, but has a simple form. We will discuss examples of such problems, related methods, complexity results, and
computational experiments.
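To make the problem template concrete, here is a minimal proximal-gradient (ISTA-style) sketch for \min_x f(x)+g(x) with f smooth and g the \ell_1 norm, whose proximal operator is just soft-thresholding. This is the basic O(1/\epsilon) scheme; the accelerated variants alluded to above (Nesterov/FISTA) add a momentum step to reach O(1/\sqrt{\epsilon}), and the alternating-minimization analysis from the talk is not reproduced here.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (coordinate-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, b, lam, steps=500):
    """Minimize f(x) + g(x) with f(x) = 0.5*||Ax - b||^2 (smooth) and
    g(x) = lam*||x||_1 (nonsmooth, simple prox), by iterating
    x <- prox_{t*g}(x - t*grad f(x)) with step size t = 1/L, L = ||A||_2^2."""
    L = np.linalg.norm(A, 2) ** 2
    t = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - t * grad, t * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100); x_true[:5] = rng.standard_normal(5)
b = A @ x_true
print(np.round(proximal_gradient(A, b, lam=0.1)[:8], 3))
```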
Sampling from and approximately counting the size of a large set of combinatorial structures has contributed to a renaissance
in research in finite Markov chains in the last two decades. Applications are wide-ranging: from sophisticated card shuffles
and deciphering simple substitution ciphers (used by prison inmates in a California state prison), to estimating the volume of a
high-dimensional convex body and understanding the speed of Gibbs sampling heuristics in statistical physics.
After a brief mathematical introduction to the Markov Chain Monte Carlo (MCMC) methodology, the speaker will focus on some
recent rigorous results on J.M. Pollard's classical Rho and Kangaroo algorithms (from 1979) for the discrete logarithm problem
in finite cyclic groups; a birthday theorem for finite Markov chains is a somewhat surprising by-product.
The lecture will be self-contained to a large extent and will include some open problems.
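As background on the algorithms named above, here is a toy Pollard-rho discrete-log sketch (a plain implementation of the classical method from general knowledge, using a dictionary-based collision check instead of Floyd's cycle-finding, with illustrative parameters); it says nothing about the talk's new analysis.

```python
import random

def pollard_rho_dlog(g, h, p, q, max_tries=20, rng=random.Random(1)):
    """Toy Pollard rho for discrete log: find t with g^t = h (mod p), where g
    generates a subgroup of prime order q of Z_p^*.  The walk tracks
    x = g^a * h^b and a birthday collision yields a linear relation for t."""
    for _ in range(max_tries):
        a, b = rng.randrange(q), rng.randrange(q)
        x = pow(g, a, p) * pow(h, b, p) % p
        seen = {x: (a, b)}
        while True:
            # pseudo-random walk: the update rule depends only on x mod 3
            r = x % 3
            if r == 0:
                x, a, b = x * x % p, 2 * a % q, 2 * b % q
            elif r == 1:
                x, a = x * g % p, (a + 1) % q
            else:
                x, b = x * h % p, (b + 1) % q
            if x in seen:
                a2, b2 = seen[x]
                if (b - b2) % q != 0:
                    # g^a h^b = g^{a2} h^{b2}  =>  t = (a2 - a) * (b - b2)^{-1} mod q
                    return (a2 - a) * pow(b - b2, -1, q) % q
                break            # degenerate collision; restart the walk
            seen[x] = (a, b)
    return None

# Illustrative toy parameters: p = 2q + 1 with p, q prime; g generates the
# order-q subgroup of Z_p^*.
p, q = 227, 113
g = 4
h = pow(g, 67, p)
print(pollard_rho_dlog(g, h, p, q))   # expected: 67
```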
Performing random walks in networks is a fundamental primitive that has found applications in many areas of computer science,
including distributed computing. In this talk, we focus on the problem of performing random walks efficiently in a distributed
network. Given bandwidth constraints, the goal is to minimize the number of rounds required to obtain a random walk sample.
All previous algorithms that compute a random walk sample of length L as a subroutine always do so naively, i.e., in O(L) rounds.
We present a fast distributed algorithm for performing random walks. We show that a random walk sample of length L can be computed
in O(L^{1/2}D^{1/2}) rounds on an undirected unweighted network, where D is the diameter of the network. For small diameter graphs,
this is a significant improvement over the naive O(L) bound. We show that our algorithm is near-optimal by giving an almost matching
lower bound. We also show that our algorithm can be applied to speed up the more general Metropolis-Hastings sampling.
We extend our algorithms to show how k independent walks can be performed in O((kLD)^{1/2} + k) rounds. Our techniques can be useful
in speeding up distributed algorithms for a variety of applications that use random walks as a subroutine. We mention one such
application involving the decentralized computation of spectral gap and mixing time, subroutines that can be useful in the design
of self-regulating and self-aware networks.
Joint work with Atish Das Sarma, Danupon Nanongkai, and Prasad Tetali (Georgia Tech.).
Random matrix methods have been recently used to prove the existence of quantum channels whose capacity is superadditive over
multiple uses. This phenomenon relies on the use of entangled signal states and has no analog for classical channels. In addition
to reviewing the background for these ideas, this talk will explain some of the random matrix methods used in the proofs of
existence of such channels, including the recent work of Hastings. Some open questions will also be discussed.
We define and study the problem of maintaining a dynamic set of
elements drawn from a partially ordered universe. For tree-like partial
orders we give a linear-sized data structure that supports the operations:
insert; delete; test membership; and predecessor. The performance of our
data structure is within an O(log w)-factor of optimal. Here w <= n is the
width of the partial-order---a natural obstacle in searching a partial
order. Although recent results have shown that optimal search trees can be
built in linear time for tree-like partial orders, they do not appear to
extend easily to the dynamic setting. Thus, our work gives, to the best of
our knowledge, the first fully dynamic data structure for this problem.
Our results address a statement of Daskalakis et al. from the full
version of their SODA'09 paper where they seek data structures analogous to
balanced binary search trees and heaps for partial orders.
Joint work with M. Catalin Iordan (Stanford) and Louis Theran (Temple).
Expanders are highly-connected sparse graphs widely used in computer science. The optimal expanders -- Ramanujan graphs --
were constructed using deep results from the theory of automorphic forms. In recent joint work with Bourgain and Sarnak
tools from additive combinatorics were used to prove that a wide class of "congruence graphs" are expanders; this expansion
property plays a crucial role in establishing novel sieving results pertaining to primes in orbits of linear groups.
This talk explores the relationship between the social cost of correlated equilibrium and Nash equilibrium.
In contrast to previous work which models correlated equilibrium as a benevolent mediator, we will define
and bound the Price of Mediation: the ratio of the cost of the worst correlated equilibrium to the cost
of the worst Nash. In practice, the heuristics used for mediation are frequently non-optimal, and mediators
may be inept or self-interested.
The results in this talk focus on symmetric congestion games (also known as load balancing games).
For convex cost functions, the Price of Mediation can grow exponentially in the number of players.
We provide better bounds for linear, concave, and polynomially-bounded cost functions.
It's the end of the semester, and you need to store on a computer your N students' grades from the 3-element universe
{pass, fail, borderline}. Via so-called arithmetic coding you can store them using the minimum number of bits,
N(log_2 3) + O(1), but then to retrieve a grade you need to read all the bits. Or you can use two distinct bits per grade,
but then you waste a linear amount of space.
A breakthrough by Patrascu (FOCS 2008), later refined with Thorup, shows how to store the grades using the minimum number
of bits so that each grade can be retrieved by reading just O(log N) bits.
We prove a matching lower bound: To retrieve the grades by reading b bits, space N(log_2 3) + N/2^b is necessary.
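A hedged back-of-the-envelope illustration of the tradeoff (not Patrascu's data structure): packing 5 trits per byte already gets within about 1% of the information-theoretic N(log_2 3) bits while letting any single grade be read from one byte; the results above pin down exactly how far the remaining waste can be pushed down as a function of the number of bits read.

```python
# Store N grades from a 3-element universe.  Packing B = 5 trits per byte uses
# 8/5 = 1.6 bits per grade (vs. the optimum log2(3) ~ 1.585), and any single
# grade can be retrieved by reading one byte.
B = 5                       # trits per block; 3^5 = 243 <= 256 fits in one byte

def encode(grades):
    """grades: list of values in {0,1,2}; returns a bytearray, one byte per block."""
    out = bytearray()
    for i in range(0, len(grades), B):
        block, val = grades[i:i + B], 0
        for g in reversed(block):
            val = val * 3 + g       # base-3 packing of the block
        out.append(val)
    return out

def retrieve(data, i):
    """Read a single grade by probing only the byte holding its block."""
    val = data[i // B]
    return (val // 3 ** (i % B)) % 3

grades = [0, 2, 1, 1, 2, 0, 0, 1, 2, 2, 1]
packed = encode(grades)
assert all(retrieve(packed, i) == g for i, g in enumerate(grades))
print(len(packed), "bytes for", len(grades), "grades")
```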
We will talk about some recent work on complexity classes.
1. For any constant c, nondeterministic exponential time (NEXP) cannot be computed in polynomial time with n^c queries
to SAT and n^c bits of advice. No assumptions are needed for this theorem.
2. A relativized world where NEXP is contained in NP for infinitely many input lengths.
3. If the polynomial-time hierarchy is infinite, one cannot compress the problem of determining whether a clique of
size k exists in a graph of n vertices to solving clique problems of size polynomial in k.
4. If the polynomial-time hierarchy is infinite, there are no subexponential-size NP-complete sets (Buhrman-Hitchcock).
1&2 are from an upcoming ICALP paper by Buhrman, Fortnow and Santhanam.
3 is from a STOC '08 paper by Fortnow and Santhanam answering open questions of Bodlaender-Downey-Fellows-Hermelin and Harnik-Naor.
4 is from a CCC '08 paper by Buhrman and Hitchcock building on 3.
Mar 25, Moses Charikar:
New Insights into Semidefinite Programming for Combinatorial Optimization
Beginning with the seminal work of Goemans and Williamson on Max-Cut, semidefinite programming (SDP) has firmly established
itself as an important ingredient in the toolkit for designing approximation algorithms for NP-hard problems. Algorithms designed
using this approach produce configurations of vectors in high dimensions which are then converted into actual solutions.
In recent years, we have made several strides in understanding the power as well as the limitations of such SDP approaches.
New insights into the geometry of these vector configurations have led to algorithmic breakthroughs for several basic optimization
problems. At the same time, a sequence of recent results seems to suggest the tantalizing possibility that, for several optimization
problems including Max-Cut, SDP approaches may indeed be the best possible. In this talk, I will present a glimpse of some of this
recent excitement around SDP-based methods and explain some of the new insights we have developed about the strengths and weaknesses
of this sophisticated tool.
Communication Complexity has been used to prove lower bounds in a wide variety of domains,
from the theoretical (circuit lower bounds) to the practical (streaming algorithms, computer security).
In this talk, we present new results for two communication problems. We begin
with a new lower bound for the communication complexity of multiparty pointer
jumping. In the second half of the talk, we give several new results for
distributed functional monitoring problems.
Part of this talk is joint work with Amit Chakrabarti and Chrisil
Arackaparambil.
Obtaining strong lower bounds for the `Number on the Forehead' model of
multiparty communication has many applications. For instance, it yields
lower bounds on the size of constant-depth circuits and the size of proofs
in important proof-systems.
Recently, Shi and Zhu (2008), and Sherstov (2008) independently introduced
two closely related techniques for proving lower bounds on 2-party
communication complexity of certain functions. We extend both techniques to
the multiparty setting. Each of them yields a lower bound of n^{\Omega(1)}
for the k-party communication complexity of Disjointness as long as k is a
constant. No superlogarithmic lower bounds were known previously on the
three-party communication complexity of Disjointness.
I will discuss a recent AC0 lower bound: the k-clique problem on graphs of size
n cannot be solved by constant-depth circuits of size O(n^(k/4)). This result has
an interesting corollary in logic (finite model theory): the first-order
"variable hierarchy" is strict in terms of expressive power on ordered graphs
(it was previously open whether 3 variables suffice to define every
first-order property of ordered graphs).
We will give alternative proofs of two known results on influences of
boolean functions, deriving them from a variant of the
edge-isoperimetric inequality on the boolean cube. The first result is
a theorem of Kahn, Kalai, and Linial stating that every boolean
function has an influential variable. The second result is a theorem
of Friedgut which states that a function with a small sum of
influences essentially depends on only a few of its variables.
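For reference, the standard definitions and statements involved (from general knowledge): the influence of variable i on f : {0,1}^n -> {0,1} is

\mathrm{Inf}_i(f) = \Pr_x[ f(x) \ne f(x \oplus e_i) ].

The Kahn-Kalai-Linial theorem says that every f with constant variance has a variable of influence \Omega(\log n / n), and Friedgut's theorem says that if \sum_i \mathrm{Inf}_i(f) \le K, then f is \epsilon-close to a function depending on at most 2^{O(K/\epsilon)} of its variables.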
Polynomials are fundamental objects in computer science that arise in a variety of contexts,
such as error-correcting codes and circuit lower bounds. Despite intense research, many basic
computational aspects of polynomials remain poorly understood. For example, although it is known
that most functions cannot be approximated by low-degree multivariate polynomials, an explicit
construction of such a function is still unknown.
In this talk we discuss some of the progress we have made towards understanding computational
aspects of polynomials. In particular, we present our recent result that the sum of d pseudorandom
generators for degree-1 polynomials is a pseudorandom generator for polynomials of degree d.