Dense subset sum may be the hardest

P. Austrin, P. Kaski, M. Koivisto, J. Nederlof

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

5 Citations (Scopus)

Abstract

The SUBSET SUM problem asks whether a given set of n positive integers contains a subset of elements that sum up to a given target t. It is an outstanding open question whether the O^*(2^{n/2})-time algorithm for SUBSET SUM by Horowitz and Sahni [J. ACM 1974] can be beaten in the worst-case setting by a "truly faster", O^*(2^{(0.5-δ)n})-time algorithm, for some constant δ > 0. Continuing earlier work [STACS 2015], we study SUBSET SUM parameterized by the maximum bin size β, defined as the largest number of subsets of the n input integers that yield the same sum. For every ε > 0 we give a truly faster algorithm for instances with β ≤ 2^{(0.5-ε)n}, as well as for instances with β ≥ 2^{0.661n}. Consequently, we also obtain a characterization in terms of the popular density parameter n/log_2(t): if all instances of density at least 1.003 admit a truly faster algorithm, then so does every instance. This goes against the current intuition that instances of density 1 are the hardest, and is therefore a step toward answering the open question in the affirmative. Our results stem from a novel combinatorial analysis of the mixing of earlier algorithms for SUBSET SUM and from a study of an extremal question in additive combinatorics connected to the problem of Uniquely Decodable Code Pairs in information theory.
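
The O^*(2^{n/2}) bound above refers to the classic meet-in-the-middle technique. As an illustration only (this sketch is ours, not the paper's; the helpers subset_sums, subset_sum_mitm, max_bin_size, and density are hypothetical names), here is a minimal Python version of the Horowitz-Sahni algorithm, together with brute-force computations of the bin-size parameter β and the density parameter n/log_2(t) on a toy instance.

# A minimal sketch of the Horowitz-Sahni meet-in-the-middle algorithm for
# SUBSET SUM, illustrating the O^*(2^{n/2}) bound discussed above; the
# helper names below are ours (hypothetical), not from the paper.

from bisect import bisect_left
from collections import Counter
from math import log2

def subset_sums(items):
    """All 2^k subset sums of the list `items` (brute force)."""
    sums = [0]
    for x in items:
        sums += [s + x for s in sums]
    return sums

def subset_sum_mitm(a, t):
    """Decide SUBSET SUM in O^*(2^{n/2}) time, Horowitz-Sahni style:
    split the input in half, enumerate each half's subset sums, and
    binary-search for a complementary pair summing to t."""
    half = len(a) // 2
    left = subset_sums(a[:half])
    right = sorted(subset_sums(a[half:]))
    for s in left:
        i = bisect_left(right, t - s)
        if i < len(right) and right[i] == t - s:
            return True
    return False

def max_bin_size(a):
    """Maximum bin size beta: the largest number of subsets of `a`
    that yield the same sum (brute force, so only for tiny n)."""
    return max(Counter(subset_sums(a)).values())

def density(a, t):
    """The density parameter n / log_2(t)."""
    return len(a) / log2(t)

if __name__ == "__main__":
    a = [3, 34, 4, 12, 5, 2]
    t = 9
    print(subset_sum_mitm(a, t))  # True: 4 + 5 = 9
    print(max_bin_size(a))        # beta of this toy instance
    print(density(a, t))          # 6 / log2(9), roughly 1.89

Enumerating each half costs O^*(2^{n/2}) time, and sorting one half lets each left sum be matched by binary search, which is what keeps the total below the naive O^*(2^n).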
Original language: English
Title of host publication: 33rd Symposium on Theoretical Aspects of Computer Science, STACS 2016, February 17-20, 2016, Orléans, France
Editors: N. Ollinger, H. Vollmer
Place of publication: s.l.
Publisher: Schloss Dagstuhl - Leibniz-Zentrum für Informatik
Pages: 13:1-13:14
Publication status: Published - 2016
