Many problems in computer science naturally require finding an optimal structure within discrete objects such as strings, graphs, and circuits. Discrete optimization is the study of such computational problems, and it plays a key role in various applied areas like logistics, bioinformatics, chip design, and machine learning.
One of the most successful approaches for solving such problems is via continuous methods. Here, one first considers a tractable “continuous relaxation” for the discrete problem, and then uses some “rounding technique”, often based on geometry or probability, to extract a good discrete solution from the continuous information.
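As an illustration of this relax-and-round paradigm (not taken from the text above), consider the classic LP relaxation of Vertex Cover with 1/2-threshold rounding. The sketch below hardcodes an optimal fractional solution for a triangle graph, where the LP assigns value 1/2 to every vertex, since solving the LP itself is beyond the scope of this sketch.

```python
# Relax-and-round sketch for Vertex Cover (illustrative example).
# Continuous relaxation: minimize sum of x_v subject to x_u + x_v >= 1
# for every edge (u, v), with 0 <= x_v <= 1.

edges = [(0, 1), (1, 2), (0, 2)]   # triangle graph
x = {0: 0.5, 1: 0.5, 2: 0.5}       # optimal fractional LP solution (hardcoded)

# Rounding: take every vertex with fractional value >= 1/2.
# For each edge (u, v), the constraint x[u] + x[v] >= 1 forces
# max(x[u], x[v]) >= 1/2, so the rounded set covers every edge,
# and its cost is at most twice the LP value: a 2-approximation.
cover = {v for v, val in x.items() if val >= 0.5}

assert all(u in cover or v in cover for (u, v) in edges)
print(sorted(cover))  # -> [0, 1, 2]
```

The geometric view (here, thresholding the fractional point) is the simplest instance of the rounding techniques the proposal seeks to unify.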
Despite much progress, our understanding of these methods is still quite limited: (i) most rounding approaches are problem-specific, and we lack a unified theory and methodology, (ii) the strength and applicability of powerful new relaxation approaches are barely understood, and (iii) for several fundamental optimization problems, large gaps exist between the known upper and lower bounds on how well they can be solved.
Here, we propose several promising new ideas and directions for future progress, based on our recent work and other impressive developments at the interface between discrete and continuous mathematics.
In particular, we will develop new unified rounding techniques that are applicable to a wide class of problems, by building on the connections between rounding and discrepancy theory. The area of algorithmic discrepancy, initiated by us, has already led to several breakthrough results.
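To make the discrepancy connection concrete: combinatorial discrepancy asks for a +1/-1 coloring of a ground set so that every set in a given set system is as balanced as possible. The toy instance and brute-force search below are an assumed illustration, not part of the proposal itself.

```python
from itertools import product

# Brute-force discrepancy of a tiny set system (illustrative example).
# disc(coloring) is the worst imbalance over all sets; we search all
# 2^n sign vectors, which is feasible only at this toy scale.

sets = [{0, 1, 2}, {1, 2, 3}, {0, 3}]
n = 4

def disc(coloring):
    return max(abs(sum(coloring[i] for i in s)) for s in sets)

best = min(product([-1, 1], repeat=n), key=disc)
print(disc(best))  # -> 1 (odd-size sets can never be perfectly balanced)
```

Rounding a fractional solution can be viewed as repeatedly finding low-discrepancy colorings, which is why algorithmic discrepancy is a natural engine for unified rounding techniques.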
We also explore several powerful recent relaxation approaches based on hierarchies, stable polynomials, and matrix norms. We will develop techniques to use them in novel ways to resolve several long-standing questions, as well as problems arising in emerging applications in theoretical computer science and discrete optimization where current approaches fail.