This is my personal archive of the Algorithms and Complexity Theory seminars at the Department of Computer Science, Aarhus University, since 1996. Previously these were known as ALCOM seminars. The list is by no means complete. It mainly covers the seminar announcements ending up in my mailbox. - Gerth Stølting Brodal
November 27, 2023 | Arthur de Cunha, Aarhus University Convolutional neural networks contain structured strong lottery tickets |
November 13, 2023 | Aniket Basu Roy, Aarhus University Packing Fréchet Balls |
November 1, 2023 | Sourav Chakraborty, Indian Statistical Institute, Kolkata, India Distinct Elements in Streams and the Klee's Measure Problem |
October 25, 2023 | Yakov Nekrich, Michigan Technological University Planar Nearest Neighbor Queries in Optimal Time: Semi-Online and Semi-Dynamic |
October 23, 2023 | Mads Bech Toftrup, Aarhus University On Generalization Bounds for Projective Clustering |
October 23, 2023 | Amik Raj Behera, Aarhus University Efficient PIT for Sparse Polynomials |
October 2, 2023 | Chris Schwiegelshohn, Aarhus University Towards Optimal Generalization Bounds for k-Means |
September 25, 2023 | Chris Schwiegelshohn, Aarhus University Optimal Coresets for Euclidean k-Means |
June 20, 2023 | Jens Kristian Refsgaard Schou, Aarhus University Space Efficient Functional Off-line Partial Persistent Trees with Applications to Planar Point Location |
May 16, 2023 | Mikael Møller Høgsgaard, Aarhus University AdaBoost is not an Optimal Weak to Strong Learner |
May 9, 2023 | Ioana Bercea, IT University of Copenhagen Locality in data structures |
April 25, 2023 | Nikolaj I. Schwartzbach, Aarhus University Hardness Self-Amplification of the Planted k-XOR Problem |
April 18, 2023 | Aniket Basu Roy, Aarhus University Covering Orthogonal Polygons with Rectangles |
March 21, 2023 | Sudarshan Shyam, Aarhus University Finding Fair Allocations under Budget Constraints |
March 14, 2023 | Ishaq Aden-Ali, University of California, Berkeley The One-Inclusion Graph Algorithm is not Always Optimal |
March 7, 2023 | Tomer Ezra, Sapienza University of Rome Contract design in combinatorial settings |
March 2, 2023 | Matteo Russo, Sapienza University of Rome Prophet Inequalities via the Expected Competitive Ratio |
February 28, 2023 | Mikael Møller Høgsgaard, Aarhus University Barriers for Faster Dimensionality Reduction |
February 14, 2023 | Rolf Svenning, Aarhus University Fully Persistent Search Trees in External Memory |
February 7, 2023 | Kasper Green Larsen, Aarhus University Fast Discrepancy Minimization with Hereditary Guarantees |
January 31, 2023 | Chris Schwiegelshohn, Aarhus University Breaching the 2 LMP Approximation Barrier for Facility Location with Applications to k-Median |
May 22, 2023 | David Saulpic, LIP6, Sorbonne Université, Paris, France Clustering and Differential Privacy |
December 6, 2022 | Zhile Jiang, Aarhus University Computing better approximate pure Nash equilibria in cut games via semidefinite programming |
November 29, 2022 | Maria Kyropoulou, University of Essex Not all Strangers are the Same: The Impact of Tolerance in Schelling Games |
November 22, 2022 | Amik Raj Behera, Aarhus University Shortest Cycle using Algebraic Techniques |
The Strong Lottery Ticket Hypothesis (SLTH) states that randomly initialised neural networks contain subnetworks that can perform well without any training. Although unstructured pruning has been extensively studied in this context, its structured counterpart, which can deliver significant computational and memory efficiency gains, has been largely unexplored. One of the main reasons for this gap is the limitations of the underlying mathematical tools used in formal analyses of the SLTH. In this paper, we overcome these limitations: we leverage recent advances in the multidimensional generalisation of the Random Subset-Sum Problem and obtain a variant that admits the stochastic dependencies that arise when addressing structured pruning in the SLTH. We apply this result to prove, for a wide class of random Convolutional Neural Networks, the existence of structured subnetworks that can approximate any sufficiently smaller network. This is the first work to address the SLTH for structured pruning, opening up new avenues for further research on the hypothesis and contributing to the understanding of the role of overparameterization in deep learning.
Given a collection of polygonal chains we define a ball for every chain under the Fréchet and Hausdorff metric and study the intersection graphs of these balls. We show that computing the maximum independent set for the discrete and continuous variants is a different ball game altogether. In particular, we show that the discrete variant admits a PTAS. However, the problem becomes hard to approximate beyond a constant even when the polygonal chains are as long as 7 and in the plane. For the discrete case, we use the fact that both the Fréchet and Hausdorff distance metrics have a constant doubling dimension for constant ambient dimension and constant length of the polygonal chain. Thus, one can use the known sublinear separators to run a polynomial time local search algorithm. On the other hand, for the continuous variant, we reduce the problem of finding the maximum independent set of boxes in d dimensions to a unit ball graph for curves of length O(d). For d=2, the former problem, known as the Maximum Independent Set of Rectangles, enjoys a constant-factor approximation algorithm [Mitchell 2021, Gálvez et al. 2022]. It is already APX-hard for d>2 [Chlebík and Chlebíková 2007], thus implying that finding a maximum independent set of unit balls under the continuous (weak) Fréchet or Hausdorff metric is hard to approximate even for small polygonal chains in the plane.
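To make the underlying metric concrete, here is a minimal sketch of the standard O(|P||Q|) dynamic program for the discrete Fréchet distance (Eiter-Mannila style), assuming chains given as lists of (x, y) tuples; it is only background for the abstract above, not part of the paper.

```python
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polygonal chains P and Q."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    @lru_cache(maxsize=None)
    def c(i, j):  # best coupling cost using the first i+1 and j+1 vertices
        if i == 0 and j == 0:
            return dist(P[0], Q[0])
        if i == 0:
            return max(c(0, j - 1), dist(P[0], Q[j]))
        if j == 0:
            return max(c(i - 1, 0), dist(P[i], Q[0]))
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), dist(P[i], Q[j]))

    return c(len(P) - 1, len(Q) - 1)

# Two parallel chains at vertical distance 1.
print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]))  # 1.0
```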
We will present a very simple streaming algorithm for F0 estimation that also caught the eye of Donald E. Knuth. In a recent article, Donald E. Knuth started with the following two paragraphs: "Sourav Chakraborty, N. V. Vinodchandran, and Kuldeep S. Meel have recently proposed an interesting algorithm for the following problem: A stream of elements (a1, a2,...,am) is input, one at a time, and we want to know how many of them are distinct. In other words, if A = {a1, a2,...,am} is the set of elements in the stream, with multiplicities ignored, we want to know |A|, the size of that set. But we don’t have much memory; in fact, |A| is probably a lot larger than the number of elements that we can hold in memory at any one time. What is a good strategy for computing an unbiased estimate of |A|? Their algorithm is not only interesting, it is extremely simple. Furthermore, it’s wonderfully suited to teaching students who are learning the basics of computer science. (Indeed, ever since I saw it, a few days ago, I’ve been unable to resist trying to explain the ideas to just about everybody I meet.) Therefore I’m pretty sure that something like this will eventually become a standard textbook topic. This note is an initial approximation to what I might write about it if I were preparing a textbook about data streams." This simple algorithm comes out of the first ever "efficient" streaming algorithm (from PODS 21) for Klee's Measure Problem, which was a big open problem in the world of streaming for many years. This work is based on joint works with N. V. Vinodchandran and Kuldeep S. Meel across multiple articles, notably the following: * Estimating the Size of Union of Sets in Streaming Models. PODS 2021 * Estimation of the Size of Union of Delphic Sets: Achieving Independence from Stream Size. PODS 2022 * Distinct Elements in Streams: An Algorithm for the (Text) Book. ESA 2022
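For the curious, here is a hedged Python sketch in the spirit of the distinct-elements estimator described above; the function name, the buffer size s, and the exact thinning loop are my own simplifications, so consult the papers for the precise algorithm and its guarantees.

```python
import random

def distinct_estimate(stream, s=1000, seed=None):
    """Estimate the number of distinct elements in a stream using O(s) memory."""
    rng = random.Random(seed)
    p = 1.0          # current sampling probability
    buf = set()      # sampled elements, at most s of them
    for a in stream:
        buf.discard(a)               # forget any stale copy of a
        if rng.random() < p:         # keep a with the current probability
            buf.add(a)
        while len(buf) >= s:         # buffer full: thin it and halve p
            buf = {x for x in buf if rng.random() < 0.5}
            p /= 2
    return len(buf) / p              # estimate of |A|

# A stream of one million values with about 50,000 distinct ones.
stream = (random.randrange(50_000) for _ in range(1_000_000))
print(distinct_estimate(stream, s=2000))
```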
In the nearest neighbor problem we store a set of points S in a data structure so that for any query point q the point p ∈ S that is closest to q can be found efficiently. In this talk we study dynamic data structures for the Euclidean nearest neighbor problem in two dimensions. We show that two-dimensional nearest neighbor queries can be answered in optimal O(log n) time in some restricted dynamic scenarios. Joint work with John Iacono (Université libre de Bruxelles).
Given a set of points, clustering consists of finding a partition of a point set into k clusters such that the center to which a point is assigned is as close as possible. Most commonly, centers are points themselves, which leads to the famous k-median and k-means objectives. One may also choose centers to be j-dimensional subspaces, which gives rise to subspace clustering. In this paper, we consider learning bounds for these problems. That is, given a set of n samples P drawn independently from some unknown, but fixed distribution D, how quickly does a solution computed on P converge to the optimal clustering of D? We give several near-optimal results. In particular, * For center-based objectives, we show a convergence rate of Õ(√(k/n)). This matches the known optimal bounds of [Fefferman, Mitter, and Narayanan, Journal of the American Mathematical Society 2016] and [Bartlett, Linder, and Lugosi, IEEE Trans. Inf. Theory 1998] for k-means and extends them to other important objectives such as k-median. * For subspace clustering with j-dimensional subspaces, we show a convergence rate of Õ(√(kj^2/n)). These are the first provable bounds for most of these problems. For the specific case of projective clustering, which generalizes k-means, we show that a convergence rate of Ω(√(kj/n)) is necessary, thereby proving that the bounds from [Fefferman, Mitter, and Narayanan, Journal of the American Mathematical Society 2016] are essentially optimal.
Polynomial Identity Testing (PIT) asks the following problem: given oracle access to a multivariate polynomial, is the polynomial equal to the zero polynomial? This innocent-looking problem turns out to be really difficult and is one of the central problems in the algebraic version of P vs NP. Even after roughly three decades of combined effort, we know very little about it. In this talk, I will explain an efficient algorithm for this problem when the polynomial is sparse, i.e., the polynomial has very few monomials. This is a result by Klivans and Spielman from STOC 2001. Link to paper: https://dl.acm.org/doi/10.1145/380752.380801
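As background for what "oracle access" buys you, here is a small sketch of the basic randomized identity test (Schwartz-Zippel); note this is not the Klivans-Spielman algorithm from the talk, and the function and parameter names are my own.

```python
import random

def probably_zero(poly, n_vars, degree_bound, trials=20, range_size=None):
    """Randomized black-box identity test for a polynomial given as an evaluator.

    If the polynomial is nonzero and has total degree <= degree_bound, a
    uniformly random point from [0, range_size)^n_vars evaluates to nonzero
    with probability >= 1 - degree_bound / range_size per trial.
    """
    S = range_size or 100 * degree_bound
    rng = random.Random(0)
    for _ in range(trials):
        point = [rng.randrange(S) for _ in range(n_vars)]
        if poly(point) != 0:
            return False     # definitely not the zero polynomial
    return True              # zero polynomial, with high probability

# Example: p(x, y) = (x + y)^2 - x^2 - 2xy - y^2 is identically zero.
print(probably_zero(lambda v: (v[0] + v[1]) ** 2 - v[0] ** 2 - 2 * v[0] * v[1] - v[1] ** 2, 2, 2))
```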
This talk will present some ongoing work on improving state-of-the-art generalization bounds for k-means. To recall: learning asks for the sample size necessary for a solution computed on the sample to extend well to the distribution from which the data is drawn. Kasper in particular gave many results for binary classification. For k-means, this problem is still open. In this talk, Chris will give some preliminary results and highlight further directions. Also, Chris is trying to make a point. If the point is lost on you, you are probably the target audience.
Coresets are a type of summary that allows us to query the cost with respect to any candidate query. If the queries happen to be any set of k centers and the costs are the squared Euclidean distances between points and their closest centers, we are studying coresets for Euclidean k-means, arguably the most important and widely researched coreset question. Following a series of works, we now know that a coreset of size O(k/ε^2 · min(√k, 1/ε^2)) exists and this has recently been shown to be optimal. In this talk we will outline how the construction works and what the main insights are towards proving this result.
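As a reminder of what the coreset guarantee means operationally, here is a small helper (my own illustration, not a construction from the talk) that evaluates the weighted k-means cost; a coreset (points, weights) is one for which this cost is within a (1 ± ε) factor of the full data set's cost for every candidate center set.

```python
import numpy as np

def kmeans_cost(points, centers, weights=None):
    """Weighted k-means cost: sum_i w_i * min_c ||p_i - c||^2."""
    sq = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (n, k) squared distances
    closest = sq.min(axis=1)                                        # distance to nearest center
    w = np.ones(len(points)) if weights is None else weights
    return float(closest @ w)

# Compare the cost of the full data and of a candidate weighted summary.
rng = np.random.default_rng(0)
data = rng.standard_normal((10_000, 5))
centers = rng.standard_normal((10, 5))
print(kmeans_cost(data, centers))
```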
In 1989 Driscoll, Sarnak, Sleator, and Tarjan presented general space-efficient techniques/transformations for making ephemeral data structures persistent. The main contribution of this paper is to adapt this transformation to the functional model. We present a general transformation of an ephemeral, linked data structure into an off-line, partially persistent, purely functional data structure with additive O(n log n) construction time and O(n) space overhead, with n denoting the number of ephemeral updates. An application of our transformation allows the elegant slab-based algorithm for planar point location by Sarnak and Tarjan 1986 to be implemented space efficiently in the functional model using linear space.
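The simplest way to see persistence in the functional model is path copying, where each update copies only the search path and every old version stays readable; this toy binary search tree (my own illustration, not the paper's space-efficient off-line transformation) shows the idea.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Node:
    key: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def insert(root: Optional[Node], key: int) -> Node:
    """Purely functional BST insert: copies the search path, shares the rest."""
    if root is None:
        return Node(key)
    if key < root.key:
        return Node(root.key, insert(root.left, key), root.right)
    if key > root.key:
        return Node(root.key, root.left, insert(root.right, key))
    return root  # key already present: reuse the old version unchanged

# Every insert returns a new version; v0, v1, v2 all remain valid.
v0 = None
v1 = insert(v0, 5)
v2 = insert(v1, 3)
print(v1.left, v2.left.key)  # None 3
```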
AdaBoost is a classic boosting algorithm for combining multiple inaccurate classifiers produced by a weak learner, to produce a strong learner with arbitrarily high accuracy when given enough training data. Determining the optimal number of samples necessary to obtain a given accuracy of the strong learner is a basic learning theoretic question. Larsen and Ritzert (NeurIPS'22) recently presented the first provably optimal weak-to-strong learner. However, their algorithm is somewhat complicated and it remains an intriguing question whether the prototypical boosting algorithm AdaBoost also makes optimal use of training samples. In this work, we answer this question in the negative. Concretely, we show that the sample complexity of AdaBoost, and of other classic variations thereof, is suboptimal by at least one logarithmic factor in the desired accuracy of the strong learner.
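For reference, here is a textbook sketch of AdaBoost with decision stumps (names and constants are mine); it is exactly the kind of classic weak-to-strong learner whose sample complexity the talk shows to be suboptimal.

```python
import numpy as np

def stump_learner(X, y, w):
    """Best one-feature threshold classifier under sample weights w."""
    best, best_err = None, np.inf
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for s in (1, -1):
                pred = s * np.where(X[:, f] <= t, 1, -1)
                err = w[pred != y].sum()
                if err < best_err:
                    best, best_err = (f, t, s), err
    f, t, s = best
    return lambda Xq: s * np.where(Xq[:, f] <= t, 1, -1)

def adaboost(X, y, weak_learner, T=50):
    """Classic AdaBoost (Freund-Schapire); labels y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                    # sample weights
    hs, alphas = [], []
    for _ in range(T):
        h = weak_learner(X, y, w)
        pred = h(X)
        eps = np.clip(w[pred != y].sum(), 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - eps) / eps)  # vote weight of this round
        w *= np.exp(-alpha * y * pred)         # re-weight: mistakes go up
        w /= w.sum()
        hs.append(h); alphas.append(alpha)
    return lambda Xq: np.sign(sum(a * h(Xq) for a, h in zip(alphas, hs)))

# Toy example: 1-D data, labels +1 above 0, with two flipped labels.
X = np.linspace(-1, 1, 40).reshape(-1, 1)
y = np.where(X[:, 0] > 0, 1, -1); y[3] *= -1; y[30] *= -1
H = adaboost(X, y, stump_learner, T=20)
print((H(X) == y).mean())   # training accuracy of the boosted classifier
```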
We will talk about ways in which locality is employed in the design of data structures. We will focus on dictionaries, which maintain sets under insertions and deletions and support membership queries of the form "is an element x in the set?". We will discuss how state-of-the-art techniques for dictionaries exploit locality in the word RAM model to obtain space and time gains and also how locality is exploited in the choice of hash functions employed for the design.
In the average-case k-SUM problem, given r integers chosen uniformly at random from {0,...,M-1}, the objective is to find a set of k numbers that sum to 0 modulo M (this set is called a "solution"). In the related k-XOR problem, given r uniformly random Boolean vectors of length log M, the objective is to find a set of k of them whose bitwise XOR is the all-zero vector. Both of these problems have widespread applications in the study of fine-grained complexity and cryptanalysis. The feasibility and complexity of these problems depend on the relative values of k, r, and M. The dense regime of M ≤ r^k, where solutions exist with high probability, is quite well understood, and we have several non-trivial algorithms and hardness conjectures here. Much less is known about the sparse regime of M ≫ r^k, where solutions are unlikely to exist. The best answers we have for many fundamental questions here are limited to whatever carries over from the dense or worst-case settings. We study the planted k-SUM and k-XOR problems in the sparse regime. In these problems, a random solution is planted in a randomly generated instance and has to be recovered. As M increases past r^k, these planted solutions tend to be the only solutions with increasing probability, potentially becoming easier to find. We show several results about the complexity and applications of these problems. In this talk, however, we focus on showing a hardness self-amplification procedure for k-XOR. We show that if there is an algorithm that runs in time T and solves planted k-XOR recovery with probability Ω(1 / polylog(r)), then there is an algorithm that runs in time Õ(T) and solves planted k-XOR with probability 1 - o(1). We show this by constructing a rapidly mixing random walk over k-XOR instances that preserves the planted solution. Based on joint work with Sagnik Saha and Prashant Vasudevan.
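A tiny sketch of how a planted k-XOR instance can be generated (my own illustration; the talk is about the hardness of recovering the planted set, not about generating instances). Vectors of length log M are represented as Python integers.

```python
import random
from functools import reduce

def planted_kxor_instance(r, log_m, k, seed=0):
    """Return r random log_m-bit vectors plus a hidden set of k indices whose XOR is zero."""
    rng = random.Random(seed)
    vecs = [rng.getrandbits(log_m) for _ in range(r)]
    planted = rng.sample(range(r), k)
    acc = 0
    for i in planted[:-1]:
        acc ^= vecs[i]
    vecs[planted[-1]] = acc          # force the planted k-subset to XOR to zero
    return vecs, sorted(planted)

vecs, solution = planted_kxor_instance(r=1000, log_m=40, k=4)
assert reduce(lambda acc, i: acc ^ vecs[i], solution, 0) == 0
```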
We survey the problem of Covering Orthogonal Polygons with Rectangles. For the general problem, the best-known approximation factor achieved in polynomial time is O(√log n) [Kumar and Ramesh '99], whereas, when the polygons do not have holes, there is a 2-approximation algorithm [Franzblau '89]. Furthermore, we discuss a conjecture by Paul Erdős on a related problem [Chaiken et al. '81]. The problem is also studied when we are only interested in covering the boundary of the polygons. For general polygons, the best-known approximation factor is 4, and the problem is also known to be APX-hard [Berman and DasGupta '97]; for hole-free polygons, it is only known to be NP-hard [Culberson and Reckhow '94]. We prove that a simple Local Search algorithm yields a PTAS for the Boundary Cover problem for simple polygons. We do this by proving the existence of planar support graphs for the hypergraphs defined on the area-maximal rectangles contained in the polygon, where every critical point on the polygon's boundary induces a hyperedge.
We study the fair allocation of indivisible goods among agents with identical, additive valuations but individual budget constraints. Here, the indivisible goods—each with a specific size and value—need to be allocated such that the bundle assigned to each agent is of total size at most the agent’s budget. Since envy-free allocations do not necessarily exist in the indivisible goods context, compelling relaxations—in particular, the notion of envy-freeness up to k goods (EFk)—have received significant attention in recent years. In an EFk allocation, each agent prefers its own bundle over that of any other agent, up to the removal of k goods, and the agents have similarly bounded envy against the charity (which corresponds to the set of all unallocated goods). Recently, Wu et al. (2021) showed that an allocation that satisfies the budget constraints and maximizes the Nash social welfare is 1/4-approximately EF1. However, the computation (or even existence) of exact EFk allocations remained an intriguing open problem. We make notable progress towards this by proposing a simple, greedy, polynomial-time algorithm that computes EF2 allocations under budget constraints. Our algorithmic result implies the universal existence of EF2 allocations in this fair division context. The analysis of the algorithm exploits intricate structural properties of envy-freeness. Interestingly, the same algorithm also provides EF1 guarantees for important special cases. Specifically, we settle the existence of EF1 allocations for instances in which: (i) the value of each good is proportional to its size, (ii) all goods have the same size, or (iii) all the goods have the same value. Our EF2 result extends to the setting wherein the goods’ sizes are agent-specific.
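To make the EFk notion concrete for the identical additive valuations studied here, a small checker (purely illustrative; not the paper's greedy algorithm):

```python
def is_ef_k(bundles, value, k, charity=()):
    """Check EF-k for agents with identical additive valuations.

    bundles: one list of goods per agent; value: dict good -> value;
    charity: the unallocated goods.  Agent i is EF-k towards a bundle B
    if removing the k most valuable goods from B makes i weakly prefer
    its own bundle.
    """
    def val(bundle):
        return sum(value[g] for g in bundle)

    def val_minus_top_k(bundle):
        vals = sorted((value[g] for g in bundle), reverse=True)
        return sum(vals[k:])

    others = list(bundles) + [list(charity)]
    return all(val(a) >= val_minus_top_k(b) for a in bundles for b in others)

# Two agents; each is EF-1 towards the other.
value = {"a": 5, "b": 4, "c": 3, "d": 1}
print(is_ef_k([["c", "d"], ["a", "b"]], value, k=1))   # True: 4 >= 9 - 5 and 9 >= 4 - 3
```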
The one-inclusion graph algorithm of Haussler, Littlestone, and Warmuth achieves an optimal in-expectation risk bound in the standard PAC classification setup. In one of the first COLT open problems, Warmuth conjectured that this prediction strategy always implies an optimal high probability bound on the risk, and hence is also an optimal PAC algorithm. We refute this conjecture in the strongest sense: for any practically interesting Vapnik-Chervonenkis class, we provide an in-expectation optimal one-inclusion graph algorithm whose high probability risk bound cannot go beyond that implied by Markov's inequality. Our construction of these poorly performing one-inclusion graph algorithms uses Varshamov-Tenengolts error correcting codes. Our negative result has several implications. First, it shows that the same poor high-probability performance is inherited by several recent prediction strategies based on generalizations of the one-inclusion graph algorithm. Second, our analysis shows yet another statistical problem that enjoys an estimator that is provably optimal in expectation via a leave-one-out argument, but fails in the high-probability regime. This discrepancy occurs despite the boundedness of the binary loss for which arguments based on concentration inequalities often provide sharp high probability risk bounds. Based on joint work with Yeshwanth Cherapanamjeri, Abhishek Shetty, and Nikita Zhivotovskiy (arXiv:2212.09270).
We study two combinatorial settings of the contract design problem, in which a principal wants to delegate the execution of a costly task. In the first setting, the principal delegates the task to an agent that can take any subset of a given set of unobservable actions, each of which has an associated cost. The principal receives a reward which is a combinatorial function of the actions taken by the agent. In the second setting, we study the single-principal multi-agent contract problem, in which the principal motivates a team of agents to exert effort toward a given task. We design (approximately) optimal algorithms for both settings along with impossibility results for various classes of combinatorial functions. In particular, for the single-agent setting, we show that if the reward function is gross substitutes, then an optimal contract can be computed with polynomially many value queries, whereas if it is submodular, computing an optimal contract is NP-hard. For the multi-agent setting, we show how, using demand and value queries, it is possible to obtain a constant approximation, whereas for subadditive reward functions it is impossible to achieve an approximation of o(√n). Our analysis uncovers key properties of gross substitutes and XOS functions, and reveals many interesting connections between combinatorial contracts and combinatorial auctions. This talk is based on joint work with Paul Duetting, Michal Feldman, and Thomas Kesselheim. Bio: Tomer Ezra is a postdoc at Sapienza University of Rome, hosted by Prof. Stefano Leonardi. He completed his Ph.D. at Tel Aviv University under the supervision of Prof. Michal Feldman. His research interests lie at the border of Computer Science and Economics, focusing on the analysis and design of simple mechanisms and algorithms in limited information settings.
We consider prophet inequalities under general downward-closed constraints. In a prophet inequality problem, a decision-maker sees a series of online elements with values, and needs to decide immediately and irrevocably whether or not to select each element upon its arrival, subject to an underlying feasibility constraint. Traditionally, the decision-maker's expected performance has been compared to the expected performance of the prophet, i.e., the expected offline optimum. We refer to this measure as the Ratio of Expectations (or, in short, RoE). However, a major limitation of the RoE measure is that it only gives a guarantee against what the optimum would be on average, while, in theory, algorithms still might perform poorly compared to the realized ex-post optimal value. Hence, we study alternative performance measures. In particular, we suggest the Expected Ratio (or, in short, EoR), which is the expectation of the ratio between the value of the algorithm and the value of the prophet. This measure yields desirable guarantees, e.g., a constant EoR implies achieving a constant fraction of the ex-post offline optimum with constant probability. Moreover, in the single-choice setting, we show that the EoR is equivalent (in the worst case) to the probability of selecting the maximum, a well-studied measure in the literature. This is no longer the case for combinatorial constraints (beyond single-choice), which is the main focus of this paper. Our main goal is, thus, to understand the relation between RoE and EoR in combinatorial settings. Specifically, we establish two reductions: for every feasibility constraint, the RoE and the EoR are at most a constant factor apart. Additionally, we show that the EoR is a stronger benchmark than the RoE in that for every instance (feasibility constraint and product distribution) the RoE is at least a constant of the EoR, but not vice versa. Both these reductions imply a wealth of EoR results in multiple settings where RoE results are known.
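A quick way to get a feel for the two benchmarks is to simulate the single-choice setting with the classic fixed threshold T = E[max]/2, which is 1/2-competitive in the RoE sense; this little Monte-Carlo script (my own illustration, not the paper's reductions) prints empirical RoE and EoR values.

```python
import random, statistics

def simulate_single_choice(samplers, trials=20_000, seed=1):
    """Compare RoE and EoR for a single-choice prophet problem.

    samplers: one zero-argument value sampler per online element.
    """
    random.seed(seed)
    # Estimate E[max] empirically and set the classic threshold T = E[max]/2.
    T = statistics.mean(max(s() for s in samplers) for _ in range(2_000)) / 2
    alg_total, opt_total, ratios = 0.0, 0.0, []
    for _ in range(trials):
        vals = [s() for s in samplers]
        chosen = next((v for v in vals if v >= T), 0.0)   # accept first value above T
        alg_total += chosen
        opt_total += max(vals)
        ratios.append(chosen / max(vals))                 # per-instance ratio against the prophet
    print("RoE ~", alg_total / opt_total, "  EoR ~", statistics.mean(ratios))

# Ten i.i.d. Uniform[0, 1] values arriving online.
simulate_single_choice([random.random] * 10)
```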
The Johnson-Lindenstrauss transform allows one to embed a dataset of n points in R^d into R^m, while preserving the pairwise distance between any pair of points up to a factor (1 ± ε), provided that m = Ω(ε^-2 lg n). The transform has found an overwhelming number of algorithmic applications, allowing one to speed up algorithms and reduce memory consumption at the price of a small loss in accuracy. A central line of research on such transforms focuses on developing fast embedding algorithms, with the classic example being the Fast JL transform by Ailon and Chazelle. All known such algorithms have an embedding time of Ω(d lg d), but no lower bounds rule out a clean O(d) embedding time. In this work, we establish the first non-trivial lower bounds (of magnitude Ω(m lg m)) for a large class of embedding algorithms, including in particular most known upper bounds.
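For orientation, a dense Gaussian JL embedding takes Θ(md) time per point, which is exactly the baseline that fast transforms try to beat; below is a minimal sketch (the constant 8 and the function name are my own choices, not from the paper).

```python
import numpy as np

def jl_embed(X, eps, seed=0):
    """Dense Gaussian Johnson-Lindenstrauss embedding of n points in R^d into R^m."""
    n, d = X.shape
    m = int(np.ceil(8 * np.log(n) / eps ** 2))     # m = O(eps^-2 lg n); constant is illustrative
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, d)) / np.sqrt(m)   # entries N(0, 1/m)
    return X @ A.T                                 # pairwise distances preserved up to (1 +/- eps) whp

# 1000 points in R^500 squeezed into roughly 8 * ln(1000) / 0.25 dimensions.
X = np.random.default_rng(1).standard_normal((1000, 500))
Y = jl_embed(X, eps=0.5)
print(Y.shape)
```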
We present the first fully persistent external memory search tree achieving amortized IO bounds matching those of the classic (ephemeral) B-tree by Bayer and McCreight. The insertion and deletion of a value in any version requires amortized O(log_B N_v) IOs and a range reporting query in any version requires worst-case O(log_B N_v + K/B) IOs, where K is the number of values reported, N_v is the number of values in the version v of the tree queried or updated, and B is the external memory block size. The data structure requires space linear in the total number of updates. Compared to the previous best bounds for fully persistent B-trees [Brodal, Sioutas, Tsakalidis, and Tsichlas, SODA 2012], this paper eliminates from the update bound an additive term of O(log_2 B) IOs. This result matches the previous best bounds for the restricted case of partially persistent B-trees [Arge, Danner and Teh, JEA 2003]. Central to our approach is to consider the problem as a dynamic set of two-dimensional rectangles that can be merged and split.
Efficiently computing low discrepancy colorings of various set systems has been studied extensively since the breakthrough work by Bansal (FOCS 2010), who gave the first polynomial time algorithms for several important settings, including for general set systems, sparse set systems and for set systems with bounded hereditary discrepancy. The hereditary discrepancy of a set system is the maximum discrepancy over all set systems obtainable by deleting a subset of the ground elements. While being polynomial time, Bansal's algorithms were not practical, with e.g. his algorithm for the hereditary setup running in time Ω(mn^4.5) for set systems with m sets over a ground set of n elements. More efficient algorithms have since then been developed for general and sparse set systems; however, for the hereditary case, Bansal's algorithm remains state-of-the-art. In this work, we give a significantly faster algorithm with hereditary guarantees, running in O(mn^2 lg(2+m/n) + n^3) time. Our algorithm is based on new structural insights into set systems with bounded hereditary discrepancy. We also implement our algorithm and show experimentally that it computes colorings that are significantly better than random and finishes in a reasonable amount of time, even on set systems with thousands of sets over a ground set of thousands of elements.
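To fix the objective being discussed, here are two tiny helpers (my own illustration) for computing the discrepancy of a ±1 coloring and for the random-coloring baseline that the experiments compare against.

```python
import random

def discrepancy(sets, coloring):
    """Discrepancy of a +-1 coloring: max over sets of |sum of the colors of its elements|."""
    return max(abs(sum(coloring[e] for e in S)) for S in sets)

def random_coloring(n, seed=0):
    """Uniformly random +-1 coloring of a ground set of n elements (the baseline)."""
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(n)]

# Ground set {0, ..., 9}, three sets.
sets = [[0, 1, 2, 3], [2, 3, 4, 5, 6], [0, 5, 9]]
print(discrepancy(sets, random_coloring(10)))
```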
In this talk, we will present some ongoing work on combining fully dynamic algorithms for clustering with privacy considerations.
Cut games are among the most fundamental strategic games in algorithmic game theory. It is well-known that computing an exact pure Nash equilibrium in these games is PLS-hard, so research has focused on computing approximate equilibria. We present a polynomial-time algorithm that computes 2.7371-approximate pure Nash equilibria in cut games. This is the first improvement to the previously best-known bound of 3, due to the work of Bhalgat, Chakraborty, and Khanna from EC 2010. Our algorithm is based on a general recipe proposed by Caragiannis, Fanelli, Gravin, and Skopalik from FOCS 2011 and applied to several potential games since then. The first novelty of our work is the introduction of a phase that can identify subsets of players who can simultaneously improve their utilities considerably. This is done via semidefinite programming and randomized rounding. In particular, a negative objective value to the semidefinite program guarantees that no such considerable improvement is possible for a given set of players. Otherwise, randomized rounding of the SDP solution is used to identify a set of players who can simultaneously improve their strategies considerably and allows the algorithm to make progress. The way rounding is performed is another important novelty of our work. Here, we exploit an idea that dates back to a paper by Feige and Goemans from 1995, but we take it to an extreme that has not been analyzed before. Based on joint work with Ioannis Caragiannis.
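As a point of comparison, the classic local-search baseline for cut games is plain best-response dynamics, sketched below (my own illustration; it may need exponentially many steps in the worst case and is not the SDP-based polynomial-time algorithm from the talk).

```python
import random

def best_response_dynamics(n, edges, max_rounds=10_000, seed=0):
    """Plain best-response dynamics in a weighted cut game.

    edges: dict {(u, v): weight}; each of the n players picks a side and
    deviates whenever switching sides strictly raises its cut value.
    """
    rng = random.Random(seed)
    side = [rng.randrange(2) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for (u, v), w in edges.items():
        adj[u].append((v, w))
        adj[v].append((u, w))

    def utility(i):
        return sum(w for j, w in adj[i] if side[j] != side[i])

    for _ in range(max_rounds):
        improved = False
        for i in range(n):
            before = utility(i)
            side[i] ^= 1                    # try switching sides
            if utility(i) <= before:
                side[i] ^= 1                # revert: no strict improvement
            else:
                improved = True
        if not improved:
            break                           # pure Nash equilibrium reached
    return side

# Triangle with unit weights: any equilibrium cuts two of the three edges.
print(best_response_dynamics(3, {(0, 1): 1, (1, 2): 1, (0, 2): 1}))
```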
Schelling's famous model of segregation assumes agents of different types, who would like to be located in neighborhoods having at least a certain fraction of agents of the same type. We consider natural generalizations that allow for the possibility of agents being tolerant towards other agents, even if they are not of the same type. In particular, we consider an ordering of the types, and make the realistic assumption that the agents are in principle more tolerant towards agents of types that are closer to their own according to the ordering. Based on this, we study the strategic games induced when the agents aim to maximize their utility, for a variety of tolerance levels. We provide a collection of results about the existence of equilibria, and their quality in terms of social welfare. Joint work with Panagiotis Kanellopoulos and Alexandros A. Voudouris.
In this talk, we will look at an algorithm to find a shortest cycle in a weighted directed graph. Let G be a directed graph with weights on the edges, where the weight of every edge is an integer in [-W,...,W]. The weight of a cycle is defined to be the sum of the weights of the edges in the cycle. Cygan, Gabow, and Sankowski gave an algorithm to find a shortest cycle in time O(W n^ω log W), where n^ω is the time to multiply two n × n matrices. The algorithm uses algebraic tools to compute the determinant of a polynomial matrix and find an edge in a shortest cycle efficiently.
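For comparison, a straightforward non-algebraic baseline finds a minimum-weight directed cycle with a Floyd-Warshall variant in O(n^3) time (my own illustration; it assumes there is no negative-weight cycle and is much slower than the O(W n^ω log W) algorithm discussed in the talk).

```python
import math

def min_weight_cycle(n, edges):
    """Minimum-weight directed cycle via Floyd-Warshall; edges is a list of (u, v, w)."""
    INF = math.inf
    d = [[INF] * n for _ in range(n)]        # d[i][i] starts at INF, not 0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)            # keep the cheapest parallel edge
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return min(d[i][i] for i in range(n))    # math.inf if the graph is acyclic

# 0 -> 1 -> 2 -> 0 has weight 4; the 2-cycle 0 <-> 2 has weight 3.
print(min_weight_cycle(3, [(0, 1, 1), (1, 2, 1), (2, 0, 2), (0, 2, 1)]))  # 3
```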