2016 CMS Winter Meeting

Niagara Falls, December 2 - 5, 2016

Combinatorial, Geometric, and Computational Aspects of Optimization
Org: Antoine Deza (McMaster University), Ricardo Fukasawa (University of Waterloo) and Laura Sanità (University of Waterloo)

SANTANU DEY, Georgia Institute of Technology
Some cut generating functions for second-order conic sets

We study cut generating functions for conic sets. Our first main result shows that if the conic set is bounded, then cut generating functions for integer linear programs can easily be adapted to give the integer hull of the conic integer program. We then introduce a new class of cut generating functions which are non-decreasing with respect to the second-order cone. We show that, under some minor technical conditions, these functions, together with integer linear programming-based functions, are sufficient to yield the integer hull of intersections of conic sections in $\mathbb{R}^2$.

This is joint work with Asteroide Santana.

ANTOINE DEZA, McMaster University
Primitive Lattice Polytopes

We introduce and study a family of polytopes which can be seen as a generalization of the permutahedron of type $B_d$. We highlight connections with the largest possible diameter of the convex hull of a set of points in dimension $d$ whose coordinates are integers between $0$ and $k$, and with a parameter controlling the computational complexity of multicriteria matroid optimization. Based on joint work with George Manoussakis and Shmuel Onn.

YURI FAENZA, Columbia University
An invitation to 2-level polytopes

2-level polytopes are a generalization of stable set polytopes of perfect graphs. They appear naturally in several areas of mathematics, including polyhedral combinatorics, combinatorial optimization, and statistics. These polytopes have received some attention in recent years because of their connections with the theory of linear and semidefinite extended formulations and with the log-rank conjecture in communication complexity. Still, even basic questions about their structure remain unanswered. In this talk, I will present some known results, open problems, and promising research directions. In particular, I will focus on an algorithm for enumerating, up to affine transformations, all 2-level polytopes.

RICARDO FUKASAWA, University of Waterloo
Implementing the (not so) trivial lifting problem

When generating cuts for MIPs from multiple rows of the simplex tableau, one approach has been to relax the integrality of the non-basic variables, compute an intersection cut, and then strengthen the cut coefficients corresponding to integral non-basic variables using the so-called trivial lifting procedure. Although of polynomial-time complexity in theory, this lifting procedure can be computationally costly in practice. For two-row relaxations, we present an algorithm that computes trivial lifting coefficients in constant time, for arbitrary lattice-free sets. The algorithm borrows ideas from solving integer programs in fixed dimension, and our results show that a bound on the number of iterations can be derived from the lattice width and covering radius of the lattice-free set. Computational experiments confirm that the algorithm works well in practice.

KONSTANTINOS GEORGIOU, Ryerson University
Geometric and Computational Aspects for Attacking Combinatorial Optimization Problems Using Semi-Definite Programs

In this talk we will introduce the paradigm of Semi-Definite Programming (SDP) relaxations as a tool for attacking (intractable) Combinatorial Optimization problems. According to this cornerstone algorithmic technique, the optimization problem at hand is first formulated as an Integer/Quadratic Program. When the integrality condition is dropped, the resulting optimization problem is an SDP, i.e. a tractable (under mild assumptions) optimization problem.

SDP relaxations have given rise to numerous state-of-the-art approximation algorithms for Combinatorial Optimization Problems. Still, the discrepancy between the optimal exact solutions and the solutions to the relaxation is a natural barrier for the performance of any SDP-based algorithm. One way of coping with this obstacle is geometric. Solutions to SDPs give rise to so-called negative-type metrics, i.e. subsets of ${\mathbb R}^n$ equipped with the $\ell_2^2$ (squared Euclidean) distance. The relaxation of the original formulation to an SDP can actually be interpreted as the relaxation of an $\ell_1$ metric space to a negative-type metric. As such, a natural way of strengthening SDPs is to add $\ell_1$ constraints, including $k$-gonal inequalities or, more generally, hypermetric inequalities.

The main subject of the talk will be to further elaborate on computational aspects regarding the addition of $\ell_1$ constraints in SDPs, and the induced approximability trade-offs for Combinatorial Optimization Problems.

JON LEE, University of Michigan
Volumetric tuning of sBB for global optimization of factorable formulations

Using volume calculations, I will demonstrate how we can obtain mathematical guidance for convexification strategies and for branching-point selection in the context of the spatial branch-and-bound algorithm for factorable global-optimization formulations. We carry this out in considerable detail for the basic function of multiplying three terms. This is joint work with Emily Speakman.

CARLA MICHINI, University of Wisconsin Madison
Totally Unimodular Congestion Games

We investigate a new class of congestion games, called \emph{Totally Unimodular (TU) Congestion Games}, where the players' strategies are binary vectors inside polyhedra defined by totally unimodular constraint matrices. Network congestion games belong to this class.

In the symmetric case, when all players have the same strategy set, we design an algorithm that finds an optimal aggregated strategy and then decomposes it into the individual players' strategies. This approach yields strongly polynomial-time algorithms to (i) find a pure Nash equilibrium, and (ii) compute a socially optimal state, if the delay functions are weakly convex. We also show how this technique can be extended to matroid congestion games.

We then introduce some combinatorial TU congestion games, where the players' strategies are matchings, vertex covers, edge covers, and stable sets of a given bipartite graph. In the asymmetric case, we show that for these games (i) it is PLS-complete to find a pure Nash equilibrium even in case of linear delay functions, and (ii) it is NP-hard to compute a socially optimal state, even in case of weakly convex delay functions.

SEBASTIAN POKUTTA, Georgia Institute of Technology

Both discrete optimization techniques and machine learning approaches have made significant progress over the last decades and can be considered important tools that are regularly used in applications. However, very little has been done at the intersection of the two disciplines, despite their unprecedented real-world significance: as soon as the underlying set of decisions that we want to optimize over is of a combinatorial or discrete nature, standard learning approaches fail due to unacceptable running times or guarantees that are irrelevant in real-world settings. At the same time, many strategic, tactical, and operational challenges that we face (e.g., in dynamic routing, ad allocation, pick path optimization, dynamic pricing, demand prediction, etc.) require a tight integration of data, learning, and decision making. In this talk I will present a general method to significantly speed up convex optimization and learning by modifying conditional gradient algorithms. This new approach is particularly effective in the context of combinatorial problems, leading to speed-ups of several orders of magnitude.

(joint work with Gábor Braun and Daniel Zink)

LIONEL POURNIN, Université Paris 13
Improved bounds on the diameter of lattice polytopes

The focus of this talk is on the largest possible diameter $\delta(d,k)$ of a lattice polytope contained in the hypercube $[0,k]^d$. A recent result by Del Pia and Michini bounds $\delta(d,k)$ from above by $kd-\lceil{d/2}\rceil$ when $k\geq2$. Pursuing their approach, Deza and Pournin have improved this bound to $kd-\lceil{2d/3}\rceil$ when $k\geq3$. The key arguments of the proof will be presented, along with a number of reasons why further improving this bound using the same ideas may be challenging.
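
[Editorial illustration, not part of the abstract.] The two upper bounds quoted above are easy to compare numerically; the sketch below simply encodes the formulas $kd-\lceil{d/2}\rceil$ and $kd-\lceil{2d/3}\rceil$ (the function names are ours).

```python
from math import ceil

def del_pia_michini(d, k):
    """Upper bound kd - ceil(d/2) on delta(d, k), valid for k >= 2."""
    return k * d - ceil(d / 2)

def deza_pournin(d, k):
    """Improved upper bound kd - ceil(2d/3), valid for k >= 3."""
    return k * d - ceil(2 * d / 3)

# The gap between the two bounds grows linearly with the dimension d.
for d in (6, 12, 30):
    print(d, del_pia_michini(d, 3), deza_pournin(d, 3))
```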

LAURA SANITÀ, University of Waterloo
On the circuit diameter of some polytopes in combinatorial optimization

The diameter of a polyhedron P is the maximum length of a shortest path between a pair of vertices of P, where one is allowed to walk on the edges (1-dimensional faces) of P. The circuit diameter of P is instead defined as the maximum length of a shortest circuit-path between a pair of vertices of P, where one is allowed to walk using the directions of potential edges of P, i.e. edges that can arise by translating the facets of P. In this talk, we will give new bounds on the diameter and on the circuit diameter of some polytopes that describe the set of feasible solutions of classical combinatorial optimization problems.

TAMON STEPHEN, Simon Fraser University
On the Circuit Diameter Conjecture

A key concept in optimization is the combinatorial diameter of a polyhedron. From the point of view of optimization, we would like to relate it to the number of facets $f$ and the dimension $d$ of the polyhedron. In the seminal paper of Klee and Walkup, the Hirsch conjecture, which posits the bound $f-d$, was shown to be equivalent to several seemingly simpler statements, and was disproved for unbounded polyhedra through the construction of a particular 4-dimensional polyhedron with 8 facets. The Hirsch bound for polytopes was only recently narrowly exceeded by Santos.

We consider analogous questions for a variant of the combinatorial diameter called the circuit diameter. In this variant, paths are built from the circuit directions of the polyhedron, and can travel through the interior. We are able to recover the equivalence results that hold in the combinatorial case. Further, we show that validity of the circuit analogue of the non-revisiting conjecture for polytopes would imply a linear bound on the circuit diameter of all unbounded polyhedra. Finally, we prove a circuit version of the 4-step conjecture. These results offer some hope that the circuit version of the Hirsch conjecture may hold, even for unbounded polyhedra.

Our methods require adapting the notion of simplicity to work with circuits. We show that it suffices to consider such circuit-simple polyhedra when studying circuit analogues of the Hirsch conjecture, and we use this to prove the equivalences of the different variants.

This is joint work with Steffen Borgwardt and Timothy Yusun.

NORIYOSHI SUKEGAWA, Chuo University
Improving bounds on the diameter of a polyhedron in high dimensions

In 1992, Kalai and Kleitman proved that the diameter of a $d$-dimensional polyhedron with $n$ facets is at most $n^{\log_2(d)+2}$. Recently, Todd improved the Kalai-Kleitman bound to $(n-d)^{\log_2(d)}$, which is tight for $d \le 2$, i.e., in low dimensions. We propose a method for tightening Todd's analysis in high dimensions, and prove further improved upper bounds such as $(n-d)^{\log_2(d)-1}$ for $d \ge 7$, $(n-d)^{\log_2(d)-2}$ for $d \ge 37$, and $(n-d)^{\log_2(d)-3+O(1/d)}$ for $d \ge 1$.
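
[Editorial illustration, not part of the abstract.] For a rough sense of scale, the three bounds mentioned above can be tabulated on small instances. The sketch encodes exactly the formulas from the abstract; the function names are ours.

```python
import math

def kalai_kleitman(n, d):
    """Kalai-Kleitman (1992) bound: n ** (log2(d) + 2)."""
    return n ** (math.log2(d) + 2)

def todd(n, d):
    """Todd's improvement: (n - d) ** log2(d)."""
    return (n - d) ** math.log2(d)

def sukegawa(n, d):
    """Further improvement (n - d) ** (log2(d) - 1), valid for d >= 7."""
    return (n - d) ** (math.log2(d) - 1)

# Even for modest n and d, the successive bounds differ dramatically.
for n, d in [(20, 8), (40, 16)]:
    print(n, d, kalai_kleitman(n, d), todd(n, d), sukegawa(n, d))
```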

CHAITANYA SWAMY, University of Waterloo
Approximating Min-cost Chain-constrained Spanning Trees: A Reduction From Weighted To Unweighted Problems

We consider the problem of finding a min-cost spanning tree satisfying degree bounds for a nested family (i.e., a chain) of node-sets. We give the first approximation algorithm for this problem that approximates both the cost and the degree bounds by a constant factor. This is also the first result that obtains a constant-factor approximation for both the cost and the degree bounds for any spanning-tree problem with general degree bounds on node sets, where an edge participates in a non-constant number of degree constraints. Our algorithm is based on a novel use of Lagrangian duality to simplify the cost structure of the underlying problem, obtaining a decomposition into uniform-cost subproblems, and then applying a known algorithm for the unweighted problem. We show that this Lagrangian-relaxation based idea is in fact applicable more generally and, for any cost-minimization problem with packing side-constraints, yields a reduction from the weighted to the unweighted problem. We believe that this reduction is of independent interest. As another application of our technique, we consider the $k$-budgeted matroid basis problem, where we leverage a recent rounding algorithm for the problem to obtain an improved $n^{O(k^{1.5}/\epsilon)}$-time algorithm that returns a solution satisfying (any) one of the budget constraints exactly while incurring a $(1+\epsilon)$-violation of the other budget constraints.

TAMÁS TERLAKY, Lehigh University, Bethlehem, PA
A Polynomial Column-wise Rescaling von Neumann Algorithm

Recently, Chubanov proposed a method that solves homogeneous linear equality systems with positive variables in polynomial time. Chubanov's method can be considered a column-wise rescaling procedure. We adapt Chubanov's method to the von Neumann problem, and thus we design a polynomial-time column-wise rescaling von Neumann algorithm. This algorithm is the first variant of the von Neumann algorithm with polynomial complexity. Joint work with Dan Li and Kees Roos.
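
[Editorial illustration, not part of the abstract.] For context, here is a minimal sketch of the classical von Neumann algorithm that the rescaled variant accelerates: given unit-norm columns $a_1, \dots, a_n$, it seeks a convex combination $x$ with $Ax \approx 0$, or a certificate that $0$ lies outside the convex hull of the columns. This is the standard textbook algorithm, not the rescaling procedure from the talk, and all names below are ours.

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def von_neumann(cols, max_iter=10000, eps=1e-6):
    """Classical von Neumann algorithm on unit-norm columns.

    Returns a convex combination x with ||A x|| <= eps, or None when a
    separating-hyperplane certificate shows 0 is outside conv(cols).
    """
    n, m = len(cols), len(cols[0])
    x = [1.0 / n] * n                                    # uniform starting point
    b = [sum(c[i] for c in cols) / n for i in range(m)]  # b = A x
    for _ in range(max_iter):
        if math.sqrt(dot(b, b)) <= eps:
            return x
        # pick the column making the widest angle with the current point b
        s = min(range(n), key=lambda j: dot(cols[j], b))
        if dot(cols[s], b) > 0:
            return None          # all columns lie on one side of a hyperplane
        a = cols[s]
        diff = [ai - bi for ai, bi in zip(a, b)]
        t = -dot(b, diff) / dot(diff, diff)  # exact line search on ||(1-t)b + ta||
        x = [(1 - t) * xi for xi in x]
        x[s] += t
        b = [(1 - t) * bi + t * ai for bi, ai in zip(b, a)]
    return x
```

The per-iteration cost is just inner products of $b$ with the columns; the number of iterations of the classical method is not polynomially bounded in general, which is the gap the column-wise rescaling addresses.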

LEVENT TUNCEL, University of Waterloo
Geometric and analytic properties of MaxCut SDP

Given a simple undirected graph with weights on the edges, finding a cut of maximum weight in the graph is a fundamental and hard combinatorial optimization problem (called MaxCut). Since the early 1990s, one of the most intriguing approaches for solving the MaxCut problem (both in practice and in terms of computational complexity) has been the utilization of convex relaxations based on Semidefinite Programming (SDP). We study some geometric and analytic properties of the most commonly used SDP relaxation of the MaxCut problem (the MaxCut SDP).

This talk is based on joint work with Marcel de Carli Silva.