
Theory reading group

The Theory reading group meetings run for roughly three hours, during which we first get an overview of some interesting research results and then study them in some depth. Meetings in the Theory reading group are announced in the KTH CSC calendar and the Stockholm Mathematics Center calendar, as well as on the TCS seminar mailing list. If you want to receive the announcements but are not (yet) on the TCS mailing list, just contact Jakob Nordström.

We usually start at noon with a light lunch, and at 12:10 pm there is a presentation which ends at 1 pm sharp. This should be an accessible talk, not requiring any particular prerequisites. Then there is a short break. Up to this point the whole exercise is usually pretty much indistinguishable from a regular (lunch) seminar.

After the break, we return for about two hours of more technical discussions. Now we open up the hood, look at the formal definitions, and see the proofs of some of the key ingredients in the results, including all (or at least some of) the gory technical details glossed over in a polished seminar presentation. If the first part of the seminar was given with slides, we now switch to the board, and there is usually a lot of interaction (making this at times more of a discussion than a presentation).

However, for those who feel that the first one-hour regular seminar was enough for today, it is perfectly fine to just discreetly drop out during the break. No excuses needed; no questions asked.

In addition to the Theory reading group we also have a series of less formal (and perhaps slightly less listener-friendly) Complexity meetings. These meetings are not announced publicly in any calendars or via the TCS seminars mailing list, but there is a special Complexity meetings mailing list that you can join if you are interested. See the Complexity meetings webpage for more information.

Upcoming seminars

  • Monday Nov 6 at 12:00 in 4523, Lindstedtsvägen 5
    Symmetry exploitation for combinatorial problems
    (Bart Bogaerts, Katholieke Universiteit Leuven)

    The presence of symmetry in combinatorial problems often poses problems to solvers. Consider, for instance, the pigeonhole problem: do n pigeons fit in n-1 holes? Already for 10 pigeons, this problem cannot be solved efficiently by most Boolean satisfiability (SAT) solvers. In this talk, we revisit three recent developments related to symmetry detection and exploitation, namely BreakID, Symmetric Explanation Learning (SEL) and local domain symmetry.
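
    To make the pigeonhole example concrete, here is a minimal sketch of the standard CNF encoding (our own illustration in Python, with DIMACS-style signed literals; not code from the talk). Its pigeon- and hole-permutation symmetries are exactly what the techniques below target:

        def pigeonhole_cnf(n):
            # Variable var(p, h) encodes "pigeon p sits in hole h".
            def var(p, h):  # p in 0..n-1, h in 0..n-2
                return p * (n - 1) + h + 1
            clauses = []
            # Every pigeon sits in some hole.
            for p in range(n):
                clauses.append([var(p, h) for h in range(n - 1)])
            # No two pigeons share a hole.
            for h in range(n - 1):
                for p1 in range(n):
                    for p2 in range(p1 + 1, n):
                        clauses.append([-var(p1, h), -var(p2, h)])
            return clauses

    Already pigeonhole_cnf(10) is out of reach for most CDCL solvers unless the symmetries are detected and exploited.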

    BreakID is a static symmetry breaking preprocessor for SAT and ASP (a QBF version is currently being developed). The main contribution of this work is that it intelligently chooses a set of generators of the symmetry group to break, in order to optimize the completeness of the symmetry breaking. Symmetric Explanation Learning (SEL), a dynamic symmetry exploitation algorithm for SAT, is based on the idea that the clause learning mechanism can be strengthened: whenever a clause is learnt, its symmetric variants can be learnt as well. Local domain symmetry is a type of symmetry of first-order theories. It generalizes interchangeable domain elements, is efficiently detectable, and can often be broken completely with a small set of symmetry breaking formulas.

  • Monday Nov 13 at 12:00 in 4523, Lindstedtsvägen 5
    TBA
    (Michele Scquizzato, TCS Group, KTH)

  • Monday Nov 27 at 12:00 in 4523, Lindstedtsvägen 5
    TBA
    (Johannes Klaus Fichte, TU Wien)

  • Monday Dec 11 at 12:00 in 4523, Lindstedtsvägen 5
    TBA
    (Thomas Watson, University of Memphis)

Theory reading group meetings autumn 2017

  • Monday Oct 9 at 12:00 in 4523, Lindstedtsvägen 5
    Ill-formedness for automatic program repair
    (Martin Monperrus, TCS Group, KTH)

    In this talk, I will first give an introductory overview of software engineering research. Then I will present the Nopol program repair system [1]. Nopol performs dynamic analysis to reason about the intended behavior and then uses a code synthesis technique that is based on SMT solving. Nopol is an open-science research prototype.

    [1] Nopol: Automatic Repair of Conditional Statement Bugs in Java Programs

  • Monday Sep 25 at 12:00 in 4523, Lindstedtsvägen 5
    Simulation theorems via pseudo-random properties
    (Sagnik Mukhopadhyay, TCS Group, KTH)

    We generalize the deterministic simulation theorem of Raz and McKenzie [RM99] to any gadget which satisfies a certain hitting property. We prove that inner-product and gap-Hamming satisfy this property, and as a corollary we obtain a deterministic simulation theorem for these gadgets, where the gadget input-size is logarithmic in the input-size of the outer function. This yields the first deterministic simulation theorem with a logarithmic gadget size, answering an open question posed by Göös, Pitassi and Watson [GPW15].
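
    Schematically, a deterministic simulation theorem of this kind states (our paraphrase, not the paper's exact formulation) that for a suitable gadget g on Θ(log n) bits,

        \[ D^{cc}(f \circ g^n) \;=\; \Theta\big( D^{dt}(f) \cdot D^{cc}(g) \big), \]

    where D^{cc} denotes deterministic communication complexity and D^{dt} deterministic decision tree (query) complexity, so that communication lower bounds for the composed function follow from query lower bounds for the outer function f.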

    Our result also implies the previous results for the Indexing gadget, with better parameters than were previously known. Moreover, a logarithmic-sized gadget implies a quadratic separation between deterministic communication complexity and the logarithm of the partition number, no matter how large the partition number is with respect to the input size, something which is not achievable by the previous results of [GPW15, AKK16].

    This talk will mostly consist of showing the sufficiency of the pseudo-random property and, if time permits, an application of Harper's theorem to prove the simulation theorem for gap-Hamming.

    [AKK16] Andris Ambainis, Martins Kokainis, and Robin Kothari. Nearly optimal separations between communication (or query) complexity and partitions. In Proceedings of the 31st CCC, 2016.
    [GPW15] Mika Göös, Toniann Pitassi, and Thomas Watson. Deterministic communication vs. partition number. In Proceedings of the 56th FOCS, 2015.
    [RM99] Ran Raz and Pierre McKenzie. Separation of the monotone NC hierarchy. Combinatorica, 19(3):403–435, 1999.

Theory reading group meetings spring 2017

  • Monday May 29 at 12:00 in 4523, Lindstedtsvägen 5
    Unified and optimal lower bounds for monotone computation
    (Robert Robere, University of Toronto)

    A classic counting argument due to Claude Shannon shows that almost all Boolean functions have high complexity: more formally, all but an exponentially small fraction of Boolean functions on n bits require strongly exponential (i.e. 2^Ω(n)) size circuits. On the other hand, the best lower bound on circuit size that has been proven for any explicit function is on the order of 5n - o(n), which is not even superlinear! The state of affairs is not much better for Boolean formulas, for which we merely have cubic lower bounds. Even for monotone circuits and formulas, which are much better understood, the best lower bounds for explicit functions are weakly exponential (i.e. of the form 2^{n^c} where c < 1).
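
    For intuition, here is a back-of-the-envelope version of Shannon's counting argument (our sketch, with constants suppressed):

        \[ \#\{\text{circuits of size } s\} \le 2^{O(s \log s)},
           \qquad \#\{\text{Boolean functions on } n \text{ bits}\} = 2^{2^n}, \]

    so if s log s = o(2^n), then circuits of size s can compute only a 2^{-(1-o(1)) 2^n} fraction of all functions, and almost all functions require size Ω(2^n / n).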

    In this talk, we discuss some recent work in which we have nearly matched Shannon's lower bound for computing an explicit function by monotone circuit models. In particular, we prove that a monotone variant of the SAT problem requires monotone formulas of size 2^{cn} for some universal constant c > 0. Our lower bounds are tight (up to the constant c) for any monotone Boolean function, and thus represent the first example of an explicit monotone Boolean function with strongly exponential size lower bounds. Furthermore, our techniques are very general and apply to a wide variety of monotone circuit models.

    This work is joint with Toniann Pitassi.

  • Wednesday May 24 at 12:00 in 4523, Lindstedtsvägen 5
    "Secure" execution platforms for application software — some definitions of "secure"
    (Frank Piessens, Katholieke Universiteit Leuven)

    Software applications run on top of execution platforms consisting of hardware (processors, devices, communication networks, and so forth) as well as software (operating systems, compilers, virtual machines, language runtimes, databases, and so forth). It is obviously desirable that these platforms are "secure", but it is less obvious what "secure" means in this context. It is an empirical fact that attacks against application software often rely at least to some extent on aspects of the execution platform, and often the possibility of an attack is a joint consequence of application code vulnerabilities and execution platform characteristics. For instance, a memory safety vulnerability is only a security concern if the compiler or operating system does not implement adequate countermeasures against exploitation of such vulnerabilities. As another example, an attacker sniffing the network is less of a concern if either the application or the platform provides adequate cryptographic protection.

    So what exactly should we require of a platform for it to be secure? This seminar will discuss the problem of platform security by proposing and analyzing a number of different formal definitions for platform security, building on classical concepts from programming language theory.

  • Monday May 22 at 10:00 (note the time!) in 1537, Lindstedtsvägen 3
    Part I: Future-proof software + Part II: How to test the universe? - Efficient testing of software product lines
    (Ina Schaefer, Technische Universität Braunschweig)

    Part I: Modern software systems are extremely long-lived and evolve over time in order to adapt to changing user requirements, application contexts or technological changes. This leads to challenges for software design, implementation, quality assurance and re-engineering in order to make those software systems future-proof. In this talk, I will present an overview of ongoing research in my group at the Institute of Software Engineering and Automotive Informatics at TU Braunschweig addressing the challenges of long-living and evolving software systems. I will focus on two areas of our work:

    1. applying machine learning techniques for efficient system-level regression testing and
    2. similarity-based variability model mining in order to transform grown software systems into well-structured software product lines.

    Part II: Software product line engineering has gained considerable momentum in recent years, both in industry and in academia. A software product line is a family of software products that share a common set of features. Software product lines challenge traditional analysis, test and verification techniques in their quest of ensuring correctness and reliability of software. Simply creating and testing all products of a product line is usually not feasible, due to the potentially exponential number of valid feature combinations. In this talk, I present different strategies for software product line testing, focusing on our recent work on incremental test case selection and test case prioritization for efficient product line testing.

  • Monday May 8 at 12:00 in 4523, Lindstedtsvägen 5
    Information-theoretic thresholds
    (Amin Coja-Oghlan, Goethe-Universität)

    Vindicating a sophisticated but non-rigorous physics approach called the cavity method, we prove a formula for the mutual information in statistical inference problems induced by random graphs. This general result implies the conjecture on the information-theoretic threshold in the disassortative stochastic block model [Decelle et al.: Phys. Rev. E (2011)] and allows us to pinpoint the exact condensation phase transition in random constraint satisfaction problems such as random graph coloring, thereby proving a conjecture from [Krzakala et al.: PNAS (2007)]. As a further application we establish the formula for the mutual information in Low-Density Generator Matrix codes as conjectured in [Montanari: IEEE Transactions on Information Theory (2005)]. Joint work with Florent Krzakala, Will Perkins and Lenka Zdeborova.

  • Monday Apr 24 at 12:00 in 4523, Lindstedtsvägen 5
    Dynamic (minimum) spanning forest with worst-case update time
    (Thatchaphol Saranurak, Theory Group, KTH)

    We will discuss recent algorithms for dynamically maintaining a (minimum) spanning forest of a graph undergoing edge insertions and deletions. This problem has played a central role in the study of dynamic graph algorithms.

    We provide the "first polynomial improvement" over the long-standing O(sqrt{n}) bound of [Frederickson STOC'83, Eppstein et al. FOCS'92] for the dynamic spanning forest problem. Independently, Wulff-Nilsen showed the first polynomial improvement for the dynamic minimum spanning forest problem. Both works will be presented at STOC'17.

    The crucial new techniques concern expanders:

    1. an algorithm for decomposing a graph into a collection of expanders in near-linear time, and
    2. an algorithm for "repairing" the expansion property of an expander after deleting some edges of it.

    These techniques can be of independent interest.

  • Monday April 10 at 12:00 in 4523, Lindstedtsvägen 5
    Complexity results in automated planning
    (Meysam Aghighi, Linköping University)

    Planning is the problem of finding a sequence of actions that achieve a specific goal from an initial state. This problem is computationally hard in the general case. Propositional planning is PSPACE-complete and first-order planning is undecidable.

    Cost-optimal planning (COP) is a type of planning where each action has a cost and the goal is to find a satisfying plan with minimal cost. We show how under different restrictions, COP varies from being polynomial-time solvable to PSPACE-complete. Net-benefit planning (NBP) is a type of planning where the goal is to find a plan that maximizes the achieved utility minus the cost spent. We present a method for efficiently compiling NBP into the ordinary plan existence problem.

    We also show the effect of numeric domains for action costs in COP using parameterized complexity. Planning is PSPACE-complete regardless of whether action costs are positive or non-negative, integer or rational. We distinguish between these cases and show that COP is W[2]-complete for positive integer costs, but it is para-NP-hard if the costs are non-negative integers or positive rationals.

  • Monday Mar 27 at 12:00 in 4523, Lindstedtsvägen 5
    Complexity theory beyond deterministic exponential time and applications
    (Igor Carboni Oliveira, Charles University)

    In the first part of the talk, I'll survey a variety of connections between non-uniform circuit lower bounds, non-trivial proof systems, non-trivial learning algorithms, and variants of natural properties. In particular, I plan to discuss how one can prove new unconditional lower bounds for standard complexity classes such as REXP, ZPEXP, BPEXP, and NEXP by designing algorithms with a non-trivial running time.

    After the break, we will focus on a recent application of some of the complexity-theoretic techniques behind these results. I'll describe in more detail recent progress on the problem of efficiently generating large canonical prime numbers, via pseudodeterministic pseudorandomness. If there is enough time and interest, I'll also discuss some non-constructive aspects of the result, and connections to derandomization.

    Based on joint work with Rahul Santhanam.

  • Thu Mar 16 at 13:15 (note the time!) in 4523, Lindstedtsvägen 5
    Dag-like communication and its applications
    (Dmitry Sokolov, St. Petersburg Department of V. A. Steklov Institute of Mathematics)

    In this talk we consider a generalization of communication protocols to the dag-like case (instead of the classical one, where protocols correspond to trees). We prove an analogue of the Karchmer-Wigderson theorem for this model and Boolean circuits. We also consider a relation between this model and the communication PLS games proposed by Razborov in 1995, and simplify the proof of Razborov's analogue of the Karchmer-Wigderson theorem for PLS games.

    We also consider a dag-like analogue of real-valued communication protocols and adapt a lower bound technique for monotone real circuits to prove a lower bound for these protocols. We use real-valued dag-like communication protocols to generalize the ideas of the "feasible interpolation" technique, which helps us prove a lower bound for the Cutting Planes proof system (CP) and adapt it to "random CP" by using a recent result of Krajíček.

    Our notion of dag-like communication games allows us to use the Raz-McKenzie transformation from the paper of Göös and Pitassi, which yields a lower bound on the monotone real circuit size of the CSPSAT problem.

  • Monday Mar 13 at 12:00 in 4523, Lindstedtsvägen 5
    A nearly tight sum-of-squares lower bound for the planted clique problem
    (Aaron Potechin, Institute for Advanced Study)

    The sum of squares (SoS) hierarchy, a method for deciding the feasibility of polynomial equations over the reals, is an exciting frontier of complexity theory. SoS is exciting because it can be applied to a wide variety of combinatorial optimization problems and is in fact conjectured to be optimal for many important problems. However, its performance is only partially understood.

    In this talk, I will describe recent work proving sum of squares lower bounds for the planted clique problem, a classic average case problem. In the first part of the talk, I will describe the sum of squares hierarchy, planted clique, and pseudo-calibration, a key idea for the lower bound. In the second part of the talk, I will describe the main technical ideas needed in the proof.

  • Friday Mar 10 at 12:00 in 4523, Lindstedtsvägen 5
    Finding little graphs inside big graphs (in parallel)
    (Ciaran McCreesh, University of Glasgow)

    I'll give a high level overview of the best practical algorithms for three families of subgraph problems: maximum clique (finding the largest group of people, all of whom are friends with everyone else in the group), subgraph isomorphism (finding a pattern graph inside a target graph), and maximum common subgraph (finding what two graphs have in common). These problems are all NP-hard, but nonetheless we can solve some instances involving graphs with tens of thousands of vertices, and millions of edges. I'll discuss the kinds of graphs that make subgraph problems easy to solve in practice, and then explain how to generate "really hard" instances (and why this matters). Finally, I'll present an overview of my research on parallelising these algorithms to exploit multicore hardware: with the right work-splitting mechanisms, we can get strong, consistent, reproducible speedups up to at least the 64 core mark. This may come as a surprise to those familiar with parallel SAT, CP or MIP solvers, so I'll explain what makes these subgraph algorithms different.

    After the break, we will have an in-depth look at a state-of-the-art maximum clique algorithm due to Tomita et al. This algorithm is easy to understand from a correctness perspective, but it features several design choices which are hard to justify theoretically. We'll discuss three questions, and I'll show some empirical work which suggests that they share a common answer:

    • How exactly does this algorithm behave on random graphs, and why?
    • Why is it so much better for the main loop in the algorithm to iterate backwards rather than forwards?
    • Why does introducing parallelism via randomised work-stealing behave so erratically, and how can we do better?

  • Monday Mar 6 at 13:15 (note the time!) in 4523, Lindstedtsvägen 5
    SAT-based learning to verify liveness of randomised parameterised systems
    (Philipp Rümmer, Uppsala University)

    We consider the problem of verifying liveness for systems with a finite, but unbounded, number of processes, commonly known as parameterised systems. Typical examples of such systems include distributed protocols (e.g. for the dining philosopher problem). Unlike the case of verifying safety, proving liveness is still considered extremely challenging, especially in the presence of randomness in the system. We introduce an automatic method of proving liveness for randomised parameterised systems under arbitrary schedulers. Viewing liveness as a two-player reachability game (between Scheduler and Process), our method is a CEGAR approach that synthesises a progress relation for Process that can be symbolically represented as a finite-state automaton. The method constructs a progress relation by means of a suitable Boolean encoding and incremental SAT solving. Our experiments show that our algorithm is able to prove liveness automatically for well-known randomised distributed protocols, including Lehmann-Rabin Randomised Dining Philosopher Protocol and randomised self-stabilising protocols (such as the Israeli-Jalfon Protocol). To the best of our knowledge, this is the first fully-automatic method that can prove liveness for randomised protocols.

    Joint work with Anthony W. Lin.

  • Monday Jan 23 at 12:00 in 4523, Lindstedtsvägen 5
    Cumulative space in black-white pebbling and resolution
    (Susanna F. de Rezende, Theory Group, KTH)

    The space usage of a computation is usually defined as the maximum memory it requires, and this is essentially everything one would want to know if the computational model is static. But what if we are computing in an environment where we can allocate memory dynamically? Then it makes a big difference whether we only have one single spike of high memory consumption or whether we consistently need large amounts of memory throughout the whole computation.

    In this talk we will explore a complexity measure, namely cumulative space, which allows us to measure this difference by accounting for the sum of memory usage over all time steps rather than just the maximum. It was introduced for pebble games by [Alwen and Serbinenko '15] and used to propose a more robust definition of cryptographic functions that are hard to invert (memory-hard functions).

    We will present some results regarding the cumulative space of another pebble game, namely the black-white pebble game, and relate this to the cumulative space complexity of refuting formulas in the resolution proof system.

    This is joint work with Joël Alwen, Jakob Nordström and Marc Vinyals.

  • Wednesday Jan 18 at 12:00 in 1537, Lindstedtsvägen 3
    The QRAT proof system
    (Martina Seidl, Johannes Kepler University Linz)

    We present QRAT, a novel proof system for quantified Boolean formulas. With QRAT it is now possible to produce certificates for all reasoning techniques implemented in state-of-the-art QBF preprocessors. Such certificates are not only useful to efficiently validate the correctness of a result, but they also allow the extraction of Skolem functions, i.e., winning strategies for satisfiable formula instances.

    In this talk, we first introduce the rules of QRAT, then we show how they can be used to express some powerful reasoning techniques implemented in modern preprocessors, and finally we outline how to extract a Skolem function from a QRAT proof.

Theory reading group meetings autumn 2016

During the autumn of 2016 we kept ourselves quite busy with the course DD2442 Seminars on Theoretical Computer Science: Proof Complexity, which served pretty much as a long series of proof complexity seminars.

  • Monday Nov 28 at 12:00 in 4523, Lindstedtsvägen 5
    Space in proof complexity
    (Marc Vinyals, Theory Group, KTH)

    Space in proof complexity measures the amount of memory that a verifier needs to check a proof, which imposes a lower bound on the memory a SAT solver needs to generate a proof. Space is very well understood for the most common proof system, resolution, but less so in other proof systems. In this talk we will survey some recent results about space in other settings: stronger proof systems such as polynomial calculus and cutting planes on the one hand, and a weaker "CDCL" proof system that is closer to the actual capabilities of SAT-solving algorithms on the other hand. We will even explore alternative definitions of space. The proof techniques will lead us to discussing adjacent areas of computational complexity such as pebble games and communication complexity.

  • Monday Nov 21 at 12:00 in 4523, Lindstedtsvägen 5
    Strong size lower bounds in regular resolution via games
    (Ilario Bonacina, Theory Group, KTH)

    The Strong Exponential Time Hypothesis (SETH) says that solving the SATISFIABILITY problem on k-CNF formulas in n variables requires running time 2^(n(1 - c_k)), where c_k goes to 0 as k goes to infinity. Beck and Impagliazzo (2013) proved that regular resolution cannot disprove SETH; that is, they proved that there are unsatisfiable k-CNF formulas in n variables such that each regular resolution refutation of them has size at least 2^(n(1 - c_k)), where c_k goes to 0 as k goes to infinity. We give a different and simpler proof of this lower bound, based on the known characterizations of width and size in resolution, and our technique in fact works for a proof system stronger than regular resolution. The problem of finding k-CNF formulas for which such strong size lower bounds can be proved in general resolution is still open.

    This is joint work with Navid Talebanfard.

  • Monday Nov 7 at 12:00 in 4523, Lindstedtsvägen 5
    Graph-based pseudo-industrial random SAT instance generators
    (Jesús Giráldez Crú, Theory Group, KTH)

    The Boolean Satisfiability problem (SAT) is the canonical NP-complete problem. However, Conflict-Driven Clause Learning (CDCL) SAT solvers are nowadays able to efficiently solve many industrial, or real-world, SAT instances, arising from hardware and software verification, cryptography, planning or scheduling, among others. On the other hand, relatively small random SAT instances are still too hard. The common intuition to explain the success of CDCL on industrial instances is the existence of some hidden structure in them (whereas random formulas would not exhibit such structure). In some works, this structure is studied by representing SAT instances as graphs and analyzing certain graph features, showing that these features are shared by the majority of existing industrial SAT instances. Some examples are the scale-free structure and the community structure.

    Unfortunately, the process of developing and testing new SAT solving techniques (specialized for industrial problems) is constrained by the limited number of existing industrial benchmarks. Therefore, the generation of random SAT instances that realistically capture some computational properties of such industrial instances is an interesting open problem.

    In this talk, we review some models for pseudo-industrial random SAT instance generation: the scale-free model and the community attachment model, which are related to well-known concepts in real-world complex networks, namely preferential attachment and high modularity, respectively. In the scale-free model, the number of variable occurrences k follows a power-law distribution P(k) \propto k^{-\alpha}. With the community attachment model, it is possible to generate SAT instances with clear community structure (i.e., high modularity). These models will be reviewed from the perspectives of both graphs and SAT instances. Finally, we discuss some models for generating complex networks (not yet adapted to SAT instances) that may overcome the limitations of the previous models.
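
    As a rough illustration of the scale-free idea, the following toy sketch (our own, with made-up parameter names; not the actual generator from the literature) samples clause variables so that occurrence counts approximately follow a power law:

        import random

        def scale_free_clause(num_vars, k, beta):
            # Pick k distinct variables with P(variable i) proportional to
            # i^(-beta); across many clauses, the number of occurrences of
            # each variable then roughly follows a power-law distribution.
            weights = [i ** (-beta) for i in range(1, num_vars + 1)]
            chosen = set()
            while len(chosen) < k:
                chosen.add(random.choices(range(1, num_vars + 1),
                                          weights=weights)[0])
            # Attach a random polarity to each chosen variable.
            return [v if random.random() < 0.5 else -v for v in chosen]

        # Example: a 3-clause over 100 variables with exponent beta = 0.8.
        print(scale_free_clause(100, 3, 0.8))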

  • Monday Oct 17 at 12:00 in 4523, Lindstedtsvägen 5
    Simulation theorem and fork-lift
    (Sagnik Mukhopadhyay, Tata Institute of Fundamental Research, Mumbai)

    Recently, proving theorems of the form that the communication complexity of a composed function f \circ g is essentially of the order of the decision tree complexity of f times the communication complexity of g has received a lot of attention. In particular, Göös-Pitassi-Watson (2015) simplified the proof of such a theorem for deterministic complexity due to Raz-McKenzie (1997), which worked only when g is the Indexing function. They used this theorem to settle a long-standing open problem in communication complexity. The Raz-McKenzie theorem needs the size of the Indexing gadget to be at least p^{20}, where p is the number of instances of Index.

    We identify a simple sufficient condition on g for proving such deterministic simulation theorems. Using this, we show that CC(f \circ IP) = Omega(DT(f) \cdot n), provided n = Omega(log p), where IP is the inner-product function. This gives an exponential improvement over the gadget size of Raz-McKenzie.

    We also prove a tight lower bound on the randomized communication complexity of OrdSearch \circ IP in terms of the randomized decision tree complexity of the function OrdSearch, which is Omega(log p). Proving a randomized simulation theorem remains elusive, and we will discuss the hurdles that need to be overcome.

    This is joint work with Arkadev Chattopadhyay (TIFR Mumbai), Michal Koucky and Bruno Loff (Charles University, Prague).

Theory reading group meetings spring 2016

  • Monday May 2 at 12:00 in 4523, Lindstedtsvägen 5
    Dynamic primal-dual algorithms for vertex cover and matching
    (Sayan Bhattacharya, Institute of Mathematical Sciences Chennai)

    Consider a scenario where we are given an input graph G = (V, E) with n nodes and m edges. The node-set of the graph remains unchanged over time, but the edge-set is dynamic. Specifically, at each time-step an adversary either inserts an edge into the graph, or deletes an already existing edge from the graph. The problem is to maintain an approximately maximum matching (resp. minimum vertex cover) in this dynamic setting.

    We present a clean primal-dual algorithm for this problem that has O(log n/\epsilon^2) amortized update time. The approximation ratio of the algorithm is (2+\epsilon) for minimum vertex cover and (3+\epsilon) for maximum matching. We also show how to extend this result to the dynamic b-matching and set-cover problems. This is the first application of the primal-dual framework in a dynamic setting.
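
    For intuition about the primal-dual connection, here is a static toy sketch (ours, not the dynamic algorithm from the talk): the endpoints of any maximal matching form a 2-approximate vertex cover, since every matched edge forces at least one of its endpoints into any cover.

        def greedy_matching_and_cover(edges):
            # Build a maximal matching greedily; taking both endpoints of
            # every matched edge yields a vertex cover of size at most
            # twice the optimum.
            matched = set()
            matching = []
            for u, v in edges:
                if u not in matched and v not in matched:
                    matching.append((u, v))
                    matched.update((u, v))
            return matching, matched

        # Example: the path 1-2-3-4.
        print(greedy_matching_and_cover([(1, 2), (2, 3), (3, 4)]))
        # -> ([(1, 2), (3, 4)], {1, 2, 3, 4})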

    Joint work with M. Henzinger and G. F. Italiano (based on papers that appeared in SODA 2015 and ICALP 2015).

  • Monday April 25 at 12:00 in 4523, Lindstedtsvägen 5
    Memory-hard functions and parallel graph pebbling
    (Joël Alwen, Institute of Science and Technology Austria)

    There is growing interest in the security community in Memory-Hard Functions (MHFs). These require moderate amounts of memory to compute on a general-purpose single-processor (i.e. sequential) machine, but cannot be repeatedly computed with significantly less memory amortized per instance on dedicated custom hardware (i.e. a parallel generalized circuit). For example, such functions are used for password-hashing and key-derivation to prevent brute-force attacks being cost-effectively implemented on custom circuits and in proofs-of-work for more egalitarian decentralized cryptocurrencies.

    In (STOC 2015), Alwen and Serbinenko showed that, in order to construct a secure MHF, it suffices to find a constant in-degree directed acyclic graph with high cumulative pebbling complexity in a simple game of parallel pebbling. Conversely, a wide class of candidate MHF constructions from the literature is given by fixing some particular (constant in-degree) DAG, and an efficient way to pebble such a DAG immediately leads to a break of the MHF construction (i.e. a method of computing the MHF with low parallel memory complexity).

    The first part of this talk will be aimed at providing an overview of this area of research. In particular, we will cover the following:

    • The motivation for and definition of MHFs.
    • The close connection to a certain parallel pebbling game over DAGs, and a new pebbling complexity measure called the cumulative complexity (CC) of the DAG (see the toy computation after this list).
    • What won't work: line graphs, bit-reversal graphs, superconcentrators and stacks of superconcentrators.
    • A powerful parallel pebbling algorithm with low CC. In particular, we show how this gives us a non-trivial general upper bound on the CC of any DAG of a fixed size.
    • A method for lower-bounding the CC of a DAG using a well-studied combinatorial property of the DAG called its depth-robustness.
    • Finally, we conclude with two strong positive results: a pair of constant in-degree DAGs enjoying very high CC. Indeed, the second has maximal CC among constant in-degree DAGs of equal size; moreover, it can be sequentially pebbled with this same CC. Thus we obtain a provably secure MHF.
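
    To pin down the CC measure from the list above: in the parallel pebbling game, the cumulative complexity of a pebbling is the sum, over all time steps, of the number of pebbles on the DAG. A toy computation (our sketch):

        def cumulative_complexity(pebbling):
            # A pebbling is a sequence of pebble configurations (sets of
            # nodes); its CC is the total number of pebble-steps used.
            return sum(len(config) for config in pebbling)

        # Example: sequentially pebbling the path 0 -> 1 -> 2, removing
        # the pebble from a node's predecessor once the node is pebbled.
        print(cumulative_complexity([{0}, {0, 1}, {1, 2}]))  # 5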

    To demonstrate the power of these tools we will also briefly describe their implications for several of the most important MHF candidate constructions from the literature including the winner and several of the finalists of the recent multi-year international Password Hashing Competition. For each candidate we will see an attack strongly invalidating the conjectured security of the construction. We will also see a (weak) security proof for the construction showing that the attack is (essentially) optimal.

    The second part of this talk will focus on some of the most important proof techniques underlying these results. In particular we will cover the following:

    • The "meta-node" technique for analysing the CC of random DAGs.
    • A method for in-degree reduction in graphs.
    • Lower-bounding CC using depth-robustness.
    • Using the "dispersal" property of a graph to lower-bound its CC.
    • Stacks of depth-robust graphs with maximal parallel CC which can nevertheless be sequentially pebbled with the same CC.

  • Monday April 18 at 12:00 in 4523, Lindstedtsvägen 5
    Experimenting with CDCL SAT solvers and glue clauses
    (Laurent Simon, Université de Bordeaux)

    Tackling NP-complete problems in practice was believed hopeless 20 years ago. However, with the introduction of Conflict-Driven Clause Learning algorithms (CDCL for short) in the late 1990s, we observed one of the most fascinating victories against hard problems. Despite these impressive results, however, the underlying reasons for these successes are only partially known. We will begin this talk by presenting a set of experiments showing why SAT solvers are so hard to study in practice.

    In the second part of the talk, we will focus on one of the many ingredients of SAT solvers: the concept of glue clauses and the Literal Block Distance (LBD). This measure of the quality of learnt clauses was introduced in 2009 and is now used in most CDCL solvers. However, despite its interest, this measure is not fully understood. We will present the concept of glue clauses as it was stated five years ago, and develop new insights into what may explain its performance, for instance by trying to find correlations between the blocks, as defined by the LBD measure, and communities.
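
    As a quick illustration of the measure (our sketch, assuming a decision_levels map from each variable to the decision level at which it was assigned):

        def lbd(clause, decision_levels):
            # Literal Block Distance: the number of distinct decision
            # levels among the variables of a learnt clause. Clauses with
            # LBD 2, the "glue" clauses, are considered the most valuable
            # and are protected from clause-database cleanups.
            return len({decision_levels[abs(lit)] for lit in clause})

        # Example: a learnt clause whose variables were assigned at
        # decision levels 3, 3 and 7 has LBD 2.
        print(lbd([4, -9, 12], {4: 3, 9: 3, 12: 7}))  # 2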

  • Monday April 4 at 12:00 in 4523, Lindstedtsvägen 5
    Unconditional lower bounds for data structures
    (Kasper Green Larsen, Aarhus University)

    In the first part of this talk, we will introduce and motivate the cell probe model for proving data structure lower bounds. We will then survey some of the recent techniques for proving lower bounds in this model, with an emphasis on the results obtained by the speaker and co-authors. The talk will highlight the current limitations of our techniques and we will also briefly discuss work by the speaker on lower bounds in more restricted models of computation.

    The second part of the talk is more technical and will be based on a FOCS'15 paper joint with Raphaël Clifford (Bristol) and Allan Grønlund (Aarhus). The main focus here is a new type of threshold lower bound proved for the well-studied Multiphase Problem. The Multiphase Problem, presented by Patrascu at STOC'10, was one of the problems that really sparked the recently very popular discipline of proving conditional lower bounds. Our focus is on proving unconditional lower bounds for the Multiphase Problem in the regime of parameters where we can actually make useful statements. More specifically, we show that any data structure for the Multiphase Problem which insists on a very fast query time of o(lg n / lg lg n) must have update time n^{1-o(1)}. This is a whole new type of lower bound, as previous techniques could only prove n^{eps} update time lower bounds, even when insisting on O(1) query time. We will also briefly touch on new lower bounds we prove for matrix-vector multiplication.

  • Monday March 21 at 12:00 in 4523, Lindstedtsvägen 5
    Deterministic communication vs. partition number
    (Marc Vinyals, Theory Group, KTH)

    Alice is given a clique in a graph and Bob an independent set in the same graph. How much communication do they need to decide if these two sets of vertices intersect? This seemingly innocent question is connected to deep topics in communication complexity and analysis of Boolean functions.

    In a breakthrough paper in FOCS 2015, Göös, Pitassi and Watson solved this and other problems by proving lower bounds in query complexity, and then giving an explicit way of amplifying query complexity lower bounds to communication complexity lower bounds. This solved a problem that had been open since 1979, and the paper has already generated a long (and growing) list of follow-up works that have made progress on other long-standing open problems in different areas of communication complexity and query complexity.

    In this seminar, we will go over the GPW paper. During the first hour we will review the new results, and after the break we will present a detailed proof of their main theorem.

  • Wednesday March 16 at 12:00 in 4523, Lindstedtsvägen 5
    Structural restrictions of CNF formulas: applications and limitations
    (Florent Capelli, Université Paris Diderot)

    It is well known that restricting the clauses of CNF formulas, as in 2-SAT or Horn-SAT, yields easy instances of the SAT problem. This is, however, not the case for harder problems such as #SAT, the problem of counting the satisfying assignments of a CNF formula: #2-SAT is already as hard as the general case. Fortunately, restrictions on the way the variables interact with the clauses have been a successful approach to finding large classes of formulas for which #SAT is solvable in polynomial time.

    In the first part of this lunch seminar, I will give a broad picture of what can be done with these structural restrictions of CNF formulas. I will first present how such restrictions are defined and give an overview of the tractability results they enable for #SAT. I will then leverage these results for the broader problem of knowledge compilation, that is, the problem of finding a good data structure representing a formula F that supports queries such as model counting or enumeration in polynomial time. This naturally raises the question of finding hard instances for such algorithmic techniques. We reformulate this question as proving lower bounds in knowledge compilation, and answer it by giving new exponential lower bounds on the compilation of certain families of CNF formulas.

    In the second and more technical part of the talk, I will either present the algorithmic techniques in more detail or give a complete proof of one of the lower bounds mentioned above, depending on what the audience prefers. Most of the results presented in this talk were conceived in collaboration with Simone Bova, Stefan Mengel and Friedrich Slivovsky.

  • Monday March 14 at 12:00 in 4523, Lindstedtsvägen 5
    Verification of bit-vector arithmetic using finite integer algebras
    (Priyank Kalla, University of Utah)

    Finite-precision integer arithmetic plays an important role in many hardware and software systems, minimizing resource usage while maintaining the necessary precision. However, operating on these bit-vector (BV) data types can introduce subtle, unforeseen errors, causing incorrect function or even security vulnerabilities. With the widespread use of finite-precision arithmetic, from multimedia DSP to embedded automotive control, it is imperative to devise new techniques to efficiently model and verify such systems at higher levels of abstraction, raising the abstraction from bits to words while maintaining precision.

    A bit-vector of size m represents integer values reduced (mod 2^m). Therefore, BV arithmetic can be modeled as a system of polynomial functions over Z_{2^m}, and number-theoretic and algebraic techniques can be devised for verification. In this talk, I will describe decision procedures for verification of bit-vector arithmetic that lie at the crossroads of number theory and commutative algebra, such as canonical simplification of polynomial functions, Newton's p-adic iterations, etc. We will look at the challenge of making such techniques practical, and also discuss their potential for integration with SMT solvers.
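
    As a toy illustration of the (mod 2^m) semantics (our sketch of a randomized sanity check, not one of the decision procedures from the talk):

        import random

        def bv_equiv(f, g, m, trials=10000):
            # Randomized check that two bit-vector expressions agree
            # modulo 2^m on random inputs.
            mask = (1 << m) - 1
            return all(f(x) & mask == g(x) & mask
                       for x in (random.getrandbits(m) for _ in range(trials)))

        # Example: two's-complement negation, -x = ~x + 1 (mod 2^m), m = 8.
        mask8 = (1 << 8) - 1
        print(bv_equiv(lambda x: -x, lambda x: (x ^ mask8) + 1, 8))  # True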

  • Tuesday March 8 at 12:00 in 4523, Lindstedtsvägen 5
    SAT-enabled verification of state transition systems
    (Karem Sakallah, University of Michigan and Qatar Computing Research Institute)

    Modern conflict-driven clause-learning (CDCL) SAT solvers, introduced twenty years ago, have had a profound impact in many domains by enabling the solution of large-scale problems that were thought to be out of reach. It is now routine for modern SAT and SAT modulo Theories (SMT) solvers to tackle decision problems containing millions of variables and constraints. Verification of complex hardware and software systems is now increasingly facilitated by the automated reasoning capabilities of modern SAT technology.

    In this seminar I argue that the CDCL paradigm is now sufficiently mature and attempts to improve it further can only yield incremental gains in performance and capacity. Instead, I propose to combine it with two equally powerful concepts to allow for scalable reasoning about exponentially-large state transition systems. The first concept, pioneered by the IC3 and later PDR incremental induction reachability solvers, culminates a decades-old quest for solving the so-called state explosion problem in model checking. The second concept, CounterExample-Guided Abstraction Refinement (CEGAR for short), provides an adaptive computational framework for managing complexity by a) judicious elimination of irrelevant details (abstraction/over-approximation) and by b) automatically filtering any false positives/spurious counterexamples (refinement).

    After briefly describing the salient features of these two concepts I will illustrate their use, along with an underlying SAT/SMT engine, on two example applications of state transition systems: sequential hardware verification and detection of cross-site scripting (XSS) in PHP web servers. In both cases the goal is to show that all states reachable from a good initial state satisfy a given safety property or to produce a counterexample trace demonstrating violation of the property.

Theory reading group meetings autumn 2015

  • Monday December 14 at 12:00 in 4523, Lindstedtsvägen 5
    Linear temporal logic satisfiability checking
    (Kristin Yvonne Rozier, University of Cincinnati)

    Formal verification techniques are growing increasingly vital for the development of safety-critical software and hardware. Techniques such as requirements-based design and model checking have been successfully used to verify systems for air traffic control, airplane separation assurance, autopilots, logic designs, medical devices, and other functions that ensure human safety. Formal behavioral specifications written early in the system-design process and communicated across all design phases increase the efficiency, consistency, and quality of the system under development. We argue that to prevent introducing design or verification errors, it is crucial to test specifications for satisfiability.

    In 2007, we established LTL satisfiability checking as a sanity check: each system requirement, its negation, and the set of all requirements should be checked for satisfiability before being utilized for other tasks, such as property-based system design or system verification via model checking. We demonstrated that LTL satisfiability checking reduces to model checking; an extensive experimental evaluation proved that for LTL satisfiability checking, the symbolic approach is superior to the explicit approach. However, the performance of the symbolic approach critically depends on the encoding of the formula. Since 1994, there had been essentially no new progress in encoding LTL formulas as symbolic automata for BDD-based analysis. We introduced a set of 30 symbolic automata encodings, demonstrating that a portfolio approach utilizing these encodings translates to significant, sometimes exponential, improvement over the standard encoding for symbolic LTL satisfiability checking. In recent years, LTL satisfiability checking has taken off, with others inventing exciting new methods to scale with increasingly complex systems. We revisit the benchmarks for LTL satisfiability checking that have become the de facto industry standard and examine the encoding methods that have led to leaps in performance. We highlight the past and present, and look to the future of LTL satisfiability checking, a sanity check that now has an established place in the development cycles of safety-critical systems.

  • Monday November 30 at 12:00 in 4523, Lindstedtsvägen 5
    Model checking, SAT and bit-vectors + Evaluating CDCL restart schemes
    (Armin Biere, Johannes Kepler University Linz)

    The lunch seminar part is titled "Model checking, SAT and bit-vectors", with the following abstract.

    Both SAT solving and Model Checking are considered to have saved the reputation of formal methods. We will highlight how their recent history is actually closely related. We further discuss important developments in both domains, mostly from the historical and practical point of view, but then delve into the complexity of deciding bit-vector logic. This logic is the most important theory for bit-precise reasoning with SMT solvers and has many practical applications in testing and verification of both hardware and software. Related to solving certain bit-vector problems is the challenge of making bit-level arithmetic reasoning work.

    After the break, there will be a more technical presentation on evaluating restart schemes for CDCL SAT solvers, which is based on joint work with Andreas Fröhlich.

    Modern CDCL (conflict-driven clause learning) SAT solvers are used for many practical applications. One of the key ingredients of state-of-the-art CDCL solvers is an efficient restart scheme. The main contribution of this work is an extensive empirical evaluation of various restart strategies. We show that optimal static restart intervals are correlated not only with the satisfiability status of a certain instance, but also with the more specific problem class of the given benchmark. We further compare uniform restart intervals with the performance of non-uniform restart schemes, such as Luby restarts. Finally, we revisit the dynamic restart strategy used in Glucose and propose a new variant thereof, which is based on the concept of exponential moving averages. The resulting implementation in Lingeling improves state-of-the-art performance in SAT solving.
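
    For reference, the Luby sequence mentioned above can be computed as follows (a standard textbook recurrence in our sketch; actual solvers multiply each term by a base restart interval):

        def luby(i):
            # i-th term (1-indexed) of the Luby sequence 1 1 2 1 1 2 4 ...
            # If i = 2^k - 1 the term is 2^(k-1); otherwise recurse into
            # the prefix of length 2^(k-1) - 1 in which i falls.
            k = 1
            while (1 << k) - 1 < i:
                k += 1
            while (1 << k) - 1 != i:
                k -= 1
                if i > (1 << k) - 1:
                    i -= (1 << k) - 1
            return 1 << (k - 1)

        print([luby(i) for i in range(1, 16)])
        # [1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8]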

  • Monday November 23 at 12:00 in 4523, Lindstedtsvägen 5
    Lower bounds: from circuits to QBF proof systems
    (Ilario Bonacina, Theory Group, KTH)

    A general and long-standing belief in the proof complexity community asserts that there is a close connection between progress in lower bounds for Boolean circuits and progress in proof size lower bounds for strong propositional proof systems. Although there are famous examples where a transfer of ideas and techniques from circuit complexity to proof complexity has been effective, a formal connection between the two areas has never been established so far. Here we provide such a formal relation between lower bounds for circuit classes and lower bounds for Frege systems for quantified Boolean formulas (QBF). Using the full spectrum of state-of-the-art circuit complexity lower bounds, we will prove lower bounds for very strong QBF proof systems (e.g. for what we call AC0[p]-FREGE + \forall red). Such lower bounds correspond, in the propositional case, to major open problems in proof complexity.

    This talk is based on joint work with Olaf Beyersdorff and Leroy Chew (ECCC TR15-133 and ITCS'16, to appear).

  • Monday November 9 at 12:00 in 4523, Lindstedtsvägen 5
    Oblivious and online matching via continuous linear programming
    (Fei Chen, Theory Group, KTH)

    Variants of the maximum matching problem have wide applications in the real world. Motivated by a kidney exchange program, where kidney transfer is expected to be performed right after patients and donors pass the compatibility tests, the oblivious matching problem was proposed, allowing only greedy matching algorithms. Motivated by online advertising, where for each keyword arriving online the advertising system should immediately decide which ad to display to maximize the profit, the online matching setting was proposed, which requires the algorithm to maintain a matching in an online fashion.

    We study the oblivious and online matching problems. For oblivious matching, we prove that the Ranking algorithm has a performance ratio of at least 0.523 on arbitrary graphs. For edge-weighted online bipartite b-matching, we give a procedure to construct an asymptotically optimal algorithm. The analysis of both problems relies on a linear programming framework. We use a continuous linear programming approach to analyze the limiting behavior of normal LPs. In particular, our results for online bipartite b-matching are based on a generalized secretary problem. The continuous LP gives a clear perspective on the secretary problem, from which we are able to make a connection between the secretary problem and online matching.

  • Monday October 12 at 12:00 in 4523, Lindstedtsvägen 5
    Fast and powerful hashing using tabulation + Deterministic edge connectivity in near-linear time
    (Mikkel Thorup, University of Copenhagen)

    The first part of the presentation, titled "Fast and powerful hashing using tabulation," is intended for a general audience.

    Randomized algorithms are often enjoyed for their simplicity, but the hash functions employed to yield the desired probabilistic guarantees are often too complicated to be practical. Here we survey recent results on how simple hashing schemes based on tabulation provide unexpectedly strong guarantees.

    Simple tabulation hashing dates back to Zobrist [1970]. Keys are viewed as consisting of c characters, and we have precomputed character tables h_1,...,h_c mapping characters to random hash values. A key x=(x_1,...,x_c) is hashed to h_1[x_1] ⊕ h_2[x_2] ⊕ ... ⊕ h_c[x_c]. This scheme is very fast with character tables in cache. While simple tabulation is not even 4-independent, it does provide many of the guarantees that are normally obtained via higher independence, e.g., for linear probing and Cuckoo hashing.
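
    In code, the scheme reads roughly as follows (our illustrative sketch; the table sizes and parameters are arbitrary choices):

        import random

        def make_simple_tabulation(c=4, char_bits=8, hash_bits=32):
            # One precomputed table of random values per character position.
            tables = [[random.getrandbits(hash_bits)
                       for _ in range(1 << char_bits)]
                      for _ in range(c)]
            mask = (1 << char_bits) - 1

            def h(x):
                # Split the key into c characters and XOR the lookups.
                out = 0
                for i in range(c):
                    out ^= tables[i][(x >> (i * char_bits)) & mask]
                return out

            return h

        # Example: hash 32-bit keys viewed as four 8-bit characters.
        h = make_simple_tabulation()
        print(h(0xDEADBEEF))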

    Next we consider twisted tabulation, where one input character is "twisted" in a simple way. The resulting hash function has powerful distributional properties: Chernoff-Hoeffding type tail bounds and a very small bias for min-wise hashing. This also yields an extremely fast pseudo-random number generator that is provably good for many classic randomized algorithms and data structures.

    Finally, we consider double tabulation where we compose two simple tabulation functions, applying one to the output of the other, and show that this yields very high independence in the classic framework of Carter and Wegman [1977]. In fact, with high probability, for a given set of size proportional to that of the space consumed, double tabulation gives fully-random hashing. We also mention some more elaborate tabulation schemes getting near-optimal independence for given time and space.

    While these tabulation schemes are all easy to implement and use, their analysis is not.

    After the break, there will be a more technical presentation titled "Deterministic edge connectivity in near-linear time," based on joint work with Ken-ichi Kawarabayashi.

    We present a deterministic algorithm that computes the edge-connectivity of a graph in near-linear time. This is for a simple undirected unweighted graph G with n vertices and m edges. This is the first o(mn) time deterministic algorithm for the problem. Our algorithm is easily extended to find a concrete minimum edge-cut. In fact, we can construct the classic cactus representation of all minimum cuts in near-linear time.

    The previous fastest deterministic algorithm by Gabow from STOC'91 took ~O(m+k^2 n), where k is the edge connectivity, but k could be Omega(n).

    At STOC'96 Karger presented a randomized near linear time Monte Carlo algorithm for the minimum cut problem. As he points out, there is no better way of certifying the minimality of the returned cut than to use Gabow's slower deterministic algorithm and compare sizes.

    Our main technical contribution is a near-linear time algorithm that contracts vertex sets of a simple input graph G with minimum degree d, producing a multigraph with ~O(m/d) edges which preserves all minimum cuts of G with at least 2 vertices on each side.

    In our deterministic near-linear time algorithm, we will decompose the problem via low-conductance cuts found using PageRank à la Brin and Page (1998), as analyzed by Andersen, Chung, and Lang at FOCS'06. Normally such algorithms for low-conductance cuts are randomized Monte Carlo algorithms, because they rely on guessing a good start vertex. However, in our case, we have so much structure that no guessing is needed.

  • Wednesday October 7 at 13:15 in 4523, Lindstedtsvägen 5
    Size-space trade-offs in proof complexity
    (Susanna F. de Rezende, Theory Group, KTH)

    The study of proof size, proof space, and size-space trade-offs has recently been an active line of research in proof complexity. This study was originally motivated by concerns in applied SAT solving, but has also led to questions of intrinsic interest to proof complexity.

    The resolution proof system underlying current state-of-the-art SAT solvers is now fairly well-understood in this regard, but for stronger proof systems many open problems remain. In this talk, we will give an overview of what is known and then present our current research aimed at obtaining size-space trade-offs for the cutting planes proof system.

  • Monday October 5 at 12:00 in 4523, Lindstedtsvägen 5
    An average-case depth hierarchy theorem for Boolean circuits + Circuit complexity of small distance connectivity
    (Li-Yang Tan, Toyota Technological Institute at Chicago)

    In the first hour I will speak about recent work with Ben Rossman and Rocco Servedio. We prove an average-case depth hierarchy theorem for Boolean circuits over the standard basis of AND, OR, and NOT gates. Our hierarchy theorem says that for every d ≥ 2, there is an explicit n-variable Boolean function f, computed by a linear-size depth-d formula, which is such that any depth-(d-1) circuit that agrees with f on a (1/2 + o_n(1)) fraction of all inputs must have size exp(n^{Ω(1/d)}). This answers an open question posed by Håstad in his Ph.D. thesis, and has applications in structural complexity and the analysis of Boolean functions. A key ingredient in our proof is a notion of random projections which generalizes random restrictions.

    After the break, I'd be happy to present the technical details and/or speak about related subsequent work with Xi Chen, Igor Oliveira, and Rocco Servedio on the st-connectivity problem:

    We show that any depth-d circuit for determining whether an n-node graph has an s-to-t path of length at most k must have size n^{Ω(k^{1/d}/d)}. The previous best circuit size lower bounds for this problem were n^{k^{exp(-O(d))}} [Beame, Impagliazzo, Pitassi 1998] and n^{Ω((log k)/d)} [Rossman 2014]. Our lower bound is quite close to optimal, since a simple construction gives depth-d circuits of size n^{O(k^{2/d})} for this problem. Our proof is by reduction to a new lower bound on the size of small-depth circuits computing a skewed variant of the "Sipser functions" that have played an important role in the classical circuit lower bounds of Sipser, Yao, and Håstad. A key ingredient in our proof is the use of random projections, an extension of random restrictions which allows us to obtain sharper quantitative bounds while employing simpler arguments, both conceptually and technically, than in previous works.

  • Monday September 14 at 12:00 in 4523, Lindstedtsvägen 5
    Conflict-driven clause learning and pseudo-Boolean SAT solving
    (Jan Elffers, Theory Group, KTH)

    Conflict-driven clause learning (CDCL) is the most popular method to solve the Boolean satisfiability (SAT) problem in practice. This approach is based on backtracking search and uses clause learning to avoid solving the same subproblem multiple times. I will present the core algorithm and a number of extensions and optimizations that are incorporated in modern SAT solvers. I will also present possible directions for future research aimed at improving the understanding of this method.
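
    To fix ideas, the following Python fragment sketches the backtracking-plus-unit-propagation skeleton that CDCL builds on; clause learning, watched literals, and restarts (which the talk covers) are omitted, and the encoding of literals as signed integers is just a convenient assumption:

        def unit_propagate(clauses, assignment):
            # Repeatedly assign literals forced by unit clauses.
            # Returns None on conflict, else the extended assignment.
            changed = True
            while changed:
                changed = False
                for clause in clauses:
                    if any(l in assignment for l in clause):
                        continue                      # clause already satisfied
                    unassigned = [l for l in clause if -l not in assignment]
                    if not unassigned:
                        return None                   # conflict: clause falsified
                    if len(unassigned) == 1:
                        assignment.add(unassigned[0]) # forced (unit) literal
                        changed = True
            return assignment

        def dpll(clauses, assignment=frozenset()):
            # Plain DPLL: the backtracking-search core underlying CDCL.
            result = unit_propagate(clauses, set(assignment))
            if result is None:
                return None
            variables = {abs(l) for c in clauses for l in c}
            free = variables - {abs(l) for l in result}
            if not free:
                return result                         # all variables assigned: SAT
            v = min(free)                             # naive branching heuristic
            for lit in (v, -v):
                model = dpll(clauses, result | {lit})
                if model is not None:
                    return model
            return None

        # Example: (x1 or x2) and (not x1 or x2) and (not x2 or x3)
        print(dpll([[1, 2], [-1, 2], [-2, 3]]))   # e.g. {1, 2, 3}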

    The pseudo-Boolean SAT problem is a generalization of SAT with linear constraints instead of disjunctive clauses. This area is much less well developed. One approach is to use an extension of CDCL with a modified implementation of clause learning to handle linear constraints. I will present this approach as well, and I will go through an example execution of the method on the Pigeonhole Principle. I will also outline some interesting research questions regarding pseudo-Boolean SAT solving.
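
    As an illustration of the formalism (this is the standard encoding; the exact formulation used in the talk is not specified here), the pigeonhole principle with n+1 pigeons and n holes becomes, in terms of 0-1 variables x_{i,j} meaning "pigeon i sits in hole j", the linear constraints

        \sum_{j=1}^{n} x_{i,j} \ge 1 \quad (1 \le i \le n+1),
        \qquad
        \sum_{i=1}^{n+1} x_{i,j} \le 1 \quad (1 \le j \le n),

    whereas the clausal (CNF) encoding has to replace each "at most one" constraint by the quadratically many clauses \lnot x_{i,j} \lor \lnot x_{i',j}.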

Theory reading group meetings spring 2015

  • Monday June 22 at 12:00 in 4523, Lindstedtsvägen 5
    Hardness of dynamic problems and the geometry of binary search trees
    (Thatchaphol Saranurak, Theory Group, KTH)

    This talk consists of two separate parts, both about dynamic data structures.

    The first part is about proving hardness of dynamic graph problems. In dynamic graph problems, one wants to maintain some property (e.g. connectivity, distances between nodes, maximum matching) of a graph whose edges can be inserted and deleted. While there are several techniques for proving polylogarithmic lower bounds on the update time, these techniques currently do not give polynomial (n^ε) lower bounds. One way to show hardness of such problems is therefore to assume some conjecture (e.g. SETH, 3SUM) and prove that an algorithm with fast update time would contradict the conjecture. I will survey the active development of these techniques for proving hardness based on conjectures. Then, I will talk about our new conjecture, called Online Matrix-Vector Multiplication, which tightly captures the hardness of many dynamic problems. This is joint work with Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai.

    The second part is about the geometry of binary search trees. First, I will talk about a paper by Demaine et al. which shows how the "execution log" of a binary search tree algorithm can be represented and characterized by a point set in the plane with a simple property. This characterization suggests a natural algorithm called Greedy. Next, I will talk about our work which shows, using forbidden-pattern theory, that Greedy almost solves the 30-year-old Traversal Conjecture, up to a factor depending on α(n) (the inverse Ackermann function). This is based on joint work with Parinya Chalermsook, Mayank Goswami, Laszlo Kozma, and Kurt Mehlhorn.

  • Tuesday June 2 at 12:00 in 4423, Lindstedtsvägen 5
    From graphs to matrices, and back: bridging the combinatorial and the continuous
    (Aleksander Mądry, MIT)

    Over the last several years, a new type of fast algorithm has emerged for various fundamental graph problems. A key primitive employed in these algorithms is electrical flow computation, which corresponds to solving a Laplacian linear system.

    In this talk, I will discuss how this approach enabled improvement over longstanding bounds for the maximum flow problem. This progress will also illustrate a broader emerging theme of employing optimization methods as an algorithmic bridge between our combinatorial and continuous understanding of graphs.

    Additionally, I will briefly outline how this line of work brings a new perspective on some of the core continuous optimization primitives — most notably, interior-point methods.

  • Monday May 18 at 12:00 in 4523, Lindstedtsvägen 5
    Symbol elimination for program analysis
    (Laura Kovács, Chalmers University of Technology)

    Automatic understanding of the intended meaning of computer programs is a very hard problem, requiring intelligence and reasoning. In this talk we describe applications of our symbol elimination methods in automated program analysis. Symbol elimination uses first-order theorem proving techniques in conjunction with symbolic computation methods, and derives non-trivial program properties, such as loop invariants and loop bounds, in a fully automatic way. Moreover, symbol elimination can be used as an alternative to interpolation for software verification.

  • Thursday May 7 at 12:00 in 4523, Lindstedtsvägen 5
    The notion of structure in real-world SAT solving
    (Jesús Giráldez-Cru, Artificial Intelligence Research Institute IIIA-CSIC, Barcelona)

    Modern SAT solvers have made remarkable progress on solving industrial (or real-world) SAT instances. Most of the techniques have been developed through an intensive process of experimental testing. The common wisdom in the SAT community is that these techniques succeed because they exploit some kind of hidden structure that industrial formulas actually have. Recently, there have been some attempts to analyze this structure in terms of complex networks, with the long-term aim of explaining the success of these SAT solving techniques, and possibly improving them.

    In this talk, we analyze some structural properties of SAT instances, viewed as graphs. Namely, the scale-free structure, the community structure and the self-similar structure. In addition, we explore how these features are affected during the SAT solving search by effects of variable instantiation or clause learning. Finally, we present some applications in SAT based on these notions of structure.

  • Monday May 4 at 12:00 in 4523, Lindstedtsvägen 5
    The parity of set systems under random restrictions with applications to exponential time problems
    (Thore Husfeldt, Lund University and IT University of Copenhagen)

    We reduce the problem of detecting the existence of an object to the problem of computing the parity of the number of objects in question. In particular, when given any non-empty set system, we prove that randomly restricting elements of its ground set makes the size of the restricted set system an odd number with significant probability. When compared to previously known reductions of this type, ours excel in their simplicity: For graph problems, restricting elements of the ground set usually corresponds to simple deletion and contraction operations, which can be encoded efficiently in most problems. We find three applications of our reductions:

    1. An exponential-time algorithm: We show how to decide Hamiltonicity in directed n-vertex graphs with running time 1.9999^n provided that the graph has at most 1.0385^n Hamiltonian cycles. We do so by reducing to the algorithm of Björklund and Husfeldt (FOCS 2013) that computes the parity of the number of Hamiltonian cycles in time 1.619^n.
    2. A new result in the framework of Cygan et al. (CCC 2012) for analyzing the complexity of NP-hard problems under the Strong Exponential Time Hypothesis: If the parity of the number of Set Covers can be determined in time 1.9999^n, then Set Cover can be decided in the same time.
    3. A structural result in parameterized complexity: We define the parameterized complexity class ⊕W[1] and prove that it is at least as hard as W[1] under randomized fpt-reductions with bounded one-sided error; this is analogous to the classical result NP ⊆ RP^⊕P by Toda (SICOMP 1991).

    This is joint work with Andreas Björklund and Holger Dell.
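
    As a toy illustration of the phenomenon (the actual restriction distribution and encodings in the paper are more refined; the set system, deletion probability, and survival rule below are assumptions made up for the experiment), one can check empirically that randomly restricting the ground set leaves an odd number of sets with noticeable probability:

        import random
        from itertools import combinations

        def restricted_parity(sets, keep_prob=0.5):
            # Delete each ground-set element independently; a set survives
            # the restriction if all of its elements were kept. Return the
            # parity of the number of surviving sets.
            ground = set().union(*sets)
            kept = {e for e in ground if random.random() < keep_prob}
            return sum(1 for s in sets if s <= kept) % 2

        # Example set system: all 3-element subsets of {0,...,5}.
        system = [frozenset(c) for c in combinations(range(6), 3)]
        odd = sum(restricted_parity(system) for _ in range(10000))
        print("fraction of odd outcomes:", odd / 10000)   # roughly 0.31 here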

  • Wednesday April 8 at 12:00 in 4523
    Space for random CNFs in proof complexity
    (Ilario Bonacina, Sapienza – Università di Roma)

    The aim of this talk is to give an overview of some recent results about space lower bounds in proof complexity. In particular, for random 3-CNFs we will present a (reasonably complete) proof of an optimal monomial space lower bound for Polynomial Calculus with Resolution (PCR) and an optimal total space lower bound in Resolution. We will see how to apply a fairly general combinatorial framework for proving space lower bounds in Resolution and PCR and, in more detail, the difficulties we had to overcome in the particular case of random 3-CNFs.

    A crucial point is a result independent of proof complexity: a variation of Hall's Theorem for matchings. We show that in bipartite graphs G with bipartition (L,R) and left-degree at most 3, L can be covered by certain families of disjoint paths, called VW-matchings, provided that L expands in R by a factor of 2-ε for ε > 1/23.

    This talk is mainly based on a joint work with Patrick Bennett, Nicola Galesi, Tony Huynh, Mike Molloy and Paul Wollan.

  • Monday March 23 at 12:00 in 4523, Lindstedtsvägen 5
    An ultimate trade-off in propositional proof complexity
    (Alexander Razborov, University of Chicago)

    We exhibit an unusually strong trade-off between resolution proof width and tree-like proof size. Namely, we show that for any parameter k = k(n) there are unsatisfiable k-CNFs that possess refutations of width O(k), but such that any tree-like refutation of width n^{1-ε}/k must necessarily have double exponential size exp(n^{Ω(k)}). Conceptually, this means that there exist contradictions that allow narrow refutations, but in order to keep the size of such a refutation even within a single exponent, it must necessarily use a high degree of parallelism. Viewed differently, every tree-like narrow refutation is exponentially worse not only than wide refutations of the same contradiction, but of any other contradiction with the same number of variables. This seems to significantly deviate from the established pattern of most, if not all, trade-off results in complexity theory.

    Our construction and proof methods combine, in a non-trivial way, two previously known techniques: the hardness escalation method based on substitution formulas and expansion. This combination results in a hardness compression approach that strives to preserve hardness of a contradiction while significantly decreasing the number of its variables.

  • Thursday March 12 at 12:00 in 4523, Lindstedtsvägen 5
    Limitations of algebraic approaches to graph isomorphism testing
    (Christoph Berkholz, RWTH Aachen and Theory Group, KTH)

    We investigate the power of algebraic techniques to test whether two given graphs are isomorphic. In the algebraic setting, one encodes the graphs into a system of polynomial equations that is satisfiable if the graphs are isomorphic. Afterwards, one tries to test satisfiability of this system using, for example, the Gröbner basis algorithm. In some cases this can be done in polynomial time, in particular, if the equations admit a constant degree refutation in algebraic proof systems such as Nullstellensatz or Polynomial Calculus.

    We compare this approach to other known relaxations and show that the k-dimensional Weisfeiler-Lehman algorithm can be characterized in terms of an algebraic proof system over the reals that lies between degree-k Nullstellensatz and degree-k Polynomial Calculus. Furthermore, we prove a linear lower bound on the Polynomial Calculus degree of Cai-Fürer-Immerman graphs, which holds over any field of characteristic different from two.

    This is joint work with Martin Grohe.

  • Monday January 26 at 12:00 in 4523, Lindstedtsvägen 5
    Distributed graph algorithms and lower bounds
    (Danupon Nanongkai, Theory Group, KTH)

    This talk will focus on distributed approximation algorithms for solving basic graph problems such as shortest paths, minimum spanning trees, and minimum cut. I will cover:

    • Survey of the recent progress.
    • Open problems and some related conjectures.
    • A basic technique for proving lower bounds by a reduction from two-party communication complexity based on [1].

    If time permits, I will touch on some algorithmic techniques as well (e.g. [2,3]). No background in distributed algorithms will be assumed in this talk.

    [1] Atish Das Sarma, Stephan Holzer, Liah Kor, Amos Korman, Danupon Nanongkai, Gopal Pandurangan, David Peleg, Roger Wattenhofer: Distributed Verification and Hardness of Distributed Approximation. SIAM J. Comput. 41(5): 1235-1265 (2012)
    [2] Danupon Nanongkai: Distributed Approximation Algorithms for Weighted Shortest Paths. STOC 2014: 565-573
    [3] Danupon Nanongkai, Hsin-Hao Su: Almost-Tight Distributed Minimum Cut Algorithms. DISC 2014: 439-453

Theory reading group meetings autumn 2014

  • Monday December 8 at 12:00 in 4523, Lindstedtsvägen 5
    (2+ε)-SAT is NP-hard
    (Per Austrin, Theory Group, KTH)

    We prove the following hardness result for a natural promise variant of the classical CNF satisfiability problem: given a CNF formula where each clause has width w and the guarantee that there exists an assignment satisfying at least g = w/2 - 1 literals in each clause, it is NP-hard to find a satisfying assignment to the formula (that sets at least one literal to true in each clause). On the other hand, when g = w/2, it is easy to find a satisfying assignment via simple generalizations of the algorithms for 2-SAT.

    We also generalize this to prove strong NP-hardness for discrepancy problems with small size sets.

    The talk will be elementary and (almost) self-contained.

    Joint work with Venkatesan Guruswami and Johan Håstad.

  • Monday November 3 at 12:00 in 4523, Lindstedtsvägen 5
    Polynomial identity testing of read-once oblivious algebraic branching programs
    (Michael Forbes, Simons Institute for the Theory of Computing at UC Berkeley)

    Polynomial Identity Testing (PIT) is the problem of determining whether a given algebraic circuit computes the identically zero polynomial. It is well-known that this problem can be solved with a small probability of error by testing whether the circuit evaluates to zero on random evaluation points. Recently, there has been much interest in solving this problem deterministically, because it has close connections with circuit lower bounds, and this problem is now one of the frontiers of the area of pseudorandomness.
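
    The randomized test alluded to above is just evaluation at random points, with the error bound supplied by the Schwartz-Zippel lemma; here is a minimal sketch, assuming black-box access to the polynomial as a Python function (the field size and number of trials are arbitrary illustrative choices):

        import random

        def probably_zero(poly, num_vars, trials=20, field_size=2**31 - 1):
            # Schwartz-Zippel: a nonzero polynomial of degree d evaluated
            # at a uniformly random point of S^n is zero with probability
            # at most d/|S|, so each trial catches a nonzero polynomial
            # with high probability when |S| >> d.
            for _ in range(trials):
                point = [random.randrange(field_size) for _ in range(num_vars)]
                if poly(point) % field_size != 0:
                    return False   # witness found: definitely not zero
            return True            # identically zero with high probability

        # Example: (x+y)^2 - x^2 - 2xy - y^2 is the zero polynomial.
        p = lambda v: (v[0] + v[1])**2 - v[0]**2 - 2*v[0]*v[1] - v[1]**2
        print(probably_zero(p, num_vars=2))   # True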

    In this talk we will discuss recent progress in this area, in particular focusing on a model of algebraic computation known as the "read-once oblivious algebraic branching program" which has been the focus of most PIT research for the past several years. Developing PIT algorithms for this class is a natural algebraic analogue of derandomizing small-space computation (the RL vs L question), and this class of computation naturally has a linear-algebraic flavor. I will discuss deterministic algorithms for determining if computations in this model compute the zero polynomial, and will expose the linear algebraic nature of this question. In particular, I will construct a natural pseudorandom object from linear algebra called a "rank extractor" and show how it yields the desired PIT algorithms.

    Based on joint works with Amir Shpilka and Ramprasad Saptharishi.

  • Monday October 27 at 12:00 in 1537, Lindstedtsvägen 3
    Easy generation and efficient validation of proofs for SAT and QBF
    (Marijn Heule, University of Texas at Austin)

    Several proof systems have been proposed to verify results produced by satisfiability (SAT) and quantified Boolean formula (QBF) solvers. However, existing proof systems are not very suitable for validation purposes: It is either hard to express the actions of solvers in those systems or the resulting proofs are expensive to validate. We present two new proof systems (one for SAT and one for QBF) which facilitate validation of results in a time similar to proof discovery time. Proofs for SAT solvers can be produced by making only minor changes to existing conflict-driven clause-learning solvers and their preprocessors. For QBF, we show that all preprocessing techniques can be easily expressed using the rules of our proof system and that the corresponding proofs can be validated efficiently.

  • Monday October 20 at 12:00 in 4523, Lindstedtsvägen 5
    Lifting locally consistent partial solutions to a global solution
    (Irit Dinur, Weizmann Institute of Science)

    We are given a collection of (alleged) partial views of a function. We are promised "local consistency", i.e., that the partial views agree on their intersection with probability p. The question is whether the partial views can be lifted to a global function f, i.e. whether a p' fraction of the partial views agree with f (aka "global consistency").

    This scenario captures "low degree tests" and "direct product tests", both studied for constructions of PCPs. We describe other possible settings where a lifting theorem may hold. We will relate this to questions on proving a "derandomized" parallel repetition theorem, and the sliding scale conjecture.

  • Wednesday October 1 at 12:00 in 4523, Lindstedtsvägen 5
    Parallel repetition from fortification
    (Dana Moshkovitz, MIT)

    The Parallel Repetition Theorem upper-bounds the value of a repeated (tensored) two prover game in terms of the value of the base game and the number of repetitions. In this work we give a simple transformation on games — "fortification" — and show that for fortified games, the value of the repeated game decreases perfectly exponentially with the number of repetitions, up to an arbitrarily small additive error. Our proof is combinatorial and short.

    As corollaries, we obtain:

    1. Starting from a PCP Theorem with soundness error bounded away from 1, we get a PCP with arbitrarily small constant soundness error. In particular, starting with the combinatorial PCP of Dinur, we get a combinatorial PCP with low error. The latter can be used for hardness of approximation as in the work of Håstad.
    2. Starting from the work of the author and Raz, we get a projection PCP theorem with the smallest soundness error known today. The theorem yields nearly a quadratic improvement in the size compared to previous work.

Theory reading group meetings spring 2014

  • Monday June 23 at 12:00 in 4523, Lindstedtsvägen 5
    Indistinguishability obfuscation from semantically-secure multilinear encodings
    (Rafael Pass, Cornell University and Cornell NYC Tech)

    The goal of program obfuscation is to "scramble" a computer program, hiding its implementation details while preserving functionality. Unfortunately, the "dream" notion of security, guaranteeing that obfuscated code does not reveal any information beyond black-box access to the original program, has run into strong impossibility results, and is known to be unachievable for general programs (Barak et al, JACM 2012). Recently, the first plausible candidate for general-purpose obfuscation was presented (Garg et al, FOCS 2013) for a relaxed notion of security, referred to as indistinguishability obfuscation; this notion, which requires that obfuscations of functionally equivalent programs are indistinguishable, still suffices for many important applications of program obfuscation.

    We present a new hardness assumption—the existence of semantically secure multilinear encodings—which generalizes a multilinear DDH assumption and demonstrate the existence of indistinguishability obfuscation for all polynomial-size circuits under this assumption (and the LWE assumption). We rely on the beautiful candidate obfuscation constructions of Garg et al (FOCS'13), Brakerski and Rothblum (TCC'14) and Barak et al (EuroCrypt'14) that were proven secure only in idealized generic multilinear encoding models, and develop new techniques for demonstrating security in the standard model, based on semantic security of multilinear encodings (which trivially holds in the generic multilinear encoding model).

    Joint work with Karn Seth and Sidharth Telang.

  • Monday June 16 at 12:00 in 4523, Lindstedtsvägen 5
    Formulas vs. circuits for small distance connectivity
    (Benjamin Rossman, National Institute of Informatics, Tokyo)

    Are poly-size boolean circuits strictly more powerful than poly-size boolean formulas? This question (also known as NC^1 vs. P) is one of the central open problems in complexity theory. We can also consider versions of this question for restricted classes of circuits. In the monotone setting, a super-polynomial separation of formulas vs. circuits was shown by Karchmer and Wigderson (1988). In recent work, we give the first super-polynomial separation in the (non-monotone) bounded-depth setting.

    Our main result is a lower bound showing that depth-d formulas solving the Distance-k st-Connectivity problem require size n^{Ω(log k)} for all k <= log log n and d <= log n/(log log n)^O(1). In contrast, this problem has circuits of size n^O(1) and depth O(log k) by the "recursive doubling" method of Savitch. As a corollary, it follows that depth-d circuits of size S cannot be simulated by depth-d formulas of size S^o(d) for all d <= log log log S (previously this was not known for any unbounded d → ∞).

    The first part of the talk will be a gentle introduction to the question of formulas vs. circuits and the Distance-k st-Connectivity problem. After the break, I will give an overview of the new proof technique.

  • Wednesday May 14 at 12:00 in 1537, Lindstedtsvägen 3 (joint with the Combinatorics Group)
    Separating path systems
    (Victor Falgas-Ravry, Umeå University)

    Let G be a graph on n vertices. A family F of paths in G constitutes a separating system of G if for every pair of distinct edges e,f in E(G) there exists a path p in F which contains exactly one of e and f. How small a separating path system can we find?
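
    The condition is easy to state in code; a brute-force checker, with paths represented simply as sets of edges (an illustrative choice), might look as follows:

        from itertools import combinations

        def is_separating(edges, paths):
            # For every pair of distinct edges e, f, some path (given as a
            # set of edges) must contain exactly one of them.
            for e, f in combinations(edges, 2):
                if not any((e in p) != (f in p) for p in paths):
                    return False
            return True

        # Toy example: the triangle on vertices 0, 1, 2.
        edges = [(0, 1), (1, 2), (0, 2)]
        paths = [{(0, 1), (1, 2)}, {(1, 2), (0, 2)}]   # two paths of length 2
        print(is_separating(edges, paths))              # True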

    This question arises naturally in the context of network design. The graph G represents a communication network in which one link is defective; to identify this link, we can send messages between nodes along predetermined paths. If a message does not reach its destination, then we deduce that the defective link lies on the corresponding path. A minimal separating path system thus allows us to determine precisely which link is defective using a minimal number of messages.

    We show that for asymptotically almost all n-vertex graphs, we can find a separating system containing at most 48n paths. In addition we prove some exact extremal results in the case where G is a tree.

    This is joint work with Teeradej Kittipassorn, Daniel Korandi, Shoham Letzter and Bhargav Narayanan. Similar results have recently and independently been obtained by Balogh, Csaba, Martin and Pluhar.

  • Monday May 12 at 12:00 in 4523, Lindstedtsvägen 5 (joint with the Combinatorics Group)
    Models for wireless algorithms
    (Magnús M. Halldórsson, Reykjavik University)

    The design and analysis of algorithms requires appropriate models — models that capture reality, yet are algorithmically usable; general, yet analyzable. The wireless setting has proved most challenging in this regard.

    We survey some of the recent progress on fundamental problems in the SINR (or physical) model, including link capacity and scheduling, aggregation, and the relative value of power control.

    The basic SINR model, however, still makes unrealistic assumptions that hold only in idealistic situations. We outline how to allow for arbitrary static environments while maintaining comparable performance guarantees with what holds in the basic SINR model. We might therefore be approaching an algorithmic model that captures reality with high fidelity while maintaining generality and analytic feasibility.
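
    For reference, the physical model is usually formalized as follows (standard notation, not taken from the talk): a transmission from sender s to receiver r succeeds if

        \mathrm{SINR}(s,r) = \frac{P_s / d(s,r)^{\alpha}}{N + \sum_{s' \ne s} P_{s'} / d(s',r)^{\alpha}} \ge \beta,

    where P_s is the transmission power of s, d(·,·) is the distance, α is the path-loss exponent, N is the ambient noise, and β is the threshold required for successful decoding.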

  • Thursday April 24 at 12:00 in 4523, Lindstedtsvägen 5
    Lower bounds from the strong exponential time hypothesis
    (Janne H. Korhonen, University of Helsinki)

    Exact exponential algorithmics considers the development of faster, still exponential-time, algorithms for problems that are believed to lie outside polynomial time. There have been notable successes in recent years, such as Björklund's 1.657^n time Hamiltonian path algorithm; however, for the central problem of CNF-SAT, no such strictly exponential improvement has been obtained. That is, while o(2^n) algorithms are known, there is no known algorithm with running time c^n poly(n,m) for some c < 2.

    Following the influential work of Impagliazzo, Paturi and Zane, connections between the exponential complexity of CNF-SAT and other problems have been investigated extensively. The basic idea is that if we assume CNF-SAT in fact cannot be solved in time c^n poly(n,m) for any c < 2 -- this assumption is known as the strong exponential time hypothesis -- we can then derive conditional lower bounds for other problems. These results can then be viewed as hardness results or new attacks on the complexity of CNF-SAT, depending on one's view on the strong exponential time hypothesis.

    In this talk, we will survey these connections between the strong exponential time hypothesis and other problems. In particular, we will focus on the perhaps somewhat unexpected conditional lower bounds for polynomial-time problems, and the basic strategies used in the proofs of these results.

  • Tuesday April 22 at 12:00 in 4523, Lindstedtsvägen 5
    Bi-Lipschitz bijection between the Boolean cube and the Hamming ball
    (Igor Shinkar, Weizmann Institute of Science)

    We construct a bi-Lipschitz bijection from the Boolean cube to the Hamming ball of equal volume. More precisely, we show that for all even n there exists an explicit bijection f from the n-dimensional Boolean cube to the Hamming ball of equal volume embedded in the (n+1)-dimensional Boolean cube, such that for all x and y it holds that distance(x,y) / 5 <= distance(f(x),f(y)) <= 4 distance(x,y), where distance(·,·) denotes the Hamming distance. In particular, this implies that the Hamming ball is bi-Lipschitz transitive.

    This result gives a strong negative answer to an open problem of Lovett and Viola (CC 2012), who raised the question in the context of sampling distributions in low-level complexity classes. The conceptual implication is that the problem of proving lower bounds in the context of sampling distributions will require some new ideas beyond the sensitivity-based structural results of Boppana (IPL 97).

    We also study the mapping f further and show that it (and its inverse) are computable in DLOGTIME-uniform TC0, but not in AC0. Moreover, we prove that f is "approximately local" in the sense that all but the last output bit of f are essentially determined by a single input bit.

    Joint work with Itai Benjamini and Gil Cohen.

  • Monday April 14 at 12:00 in 4523, Lindstedtsvägen 5 (joint with the Combinatorics Group)
    First order convergence of graphs
    (Daniel Král', University of Warwick)

    Nesetril and Ossona de Mendez introduced the notion of the first order (FO) convergence of graphs. This notion can be viewed as a unified notion of the existing notions of convergence of dense and sparse graphs. In particular, every FO convergent sequence of graphs is convergent in the sense of left convergence of dense graphs, and every FO convergent sequence of sparse graphs is convergent in the Benjamini-Schramm sense.

    During the talk, we will provide a brief introduction to the theory of graph limits and its applications in extremal combinatorics and theoretical computer science. We will then present our results on the FO convergence of trees, the FO convergence of matroids and we will also discuss the relation of the limit object to the convergent sequence. In particular, our results yield answers to several problems raised by Nesetril and Ossona de Mendez.

    The presented results were obtained jointly with Demetres Christofides, Frantisek Kardos, Martin Kupec, Anita Liebenau, Lukas Mach and Vojtech Tuma.

  • Monday April 7 at 12:00 in 3721, Lindstedtsvägen 25 (joint with the Combinatorics Group)
    A relative Szemerédi theorem
    (David Conlon, University of Oxford)

    The celebrated Green-Tao theorem states that there are arbitrarily long arithmetic progressions in the primes. One of the main ingredients in their proof is a relative Szemerédi theorem which says that any subset of a pseudorandom set of integers of positive relative density contains long arithmetic progressions.

    In this talk, we will discuss a simple proof of a strengthening of the relative Szemerédi theorem, showing that a much weaker pseudorandomness condition is sufficient. Our strengthened version can be applied to give the first relative Szemerédi theorem for k-term arithmetic progressions in pseudorandom subsets of Z_N of density N^{-c_k}.

    The key component in our proof is an extension of the regularity method to sparse pseudorandom hypergraphs, which we believe to be interesting in its own right. From this we derive a relative extension of the hypergraph removal lemma. This is a strengthening of an earlier theorem used by Tao in his proof that the Gaussian primes contain arbitrarily shaped constellations and, by standard arguments, allows us to deduce the relative Szemerédi theorem.

    This is joint work with Jacob Fox and Yufei Zhao.

  • Wednesday April 2 at 11:15 in 4523, Lindstedtsvägen 5 (joint with the Combinatorics Group)
    Induced matchings, arithmetic progressions and communication
    (Benjamin Sudakov, ETH Zürich)

    Extremal Combinatorics is one of the central branches of discrete mathematics which deals with the problem of estimating the maximum possible size of a combinatorial structure which satisfies certain restrictions. Often, such problems have also applications to other areas including Theoretical Computer Science, Additive Number Theory and Information Theory. In this talk we will illustrate this fact by several closely related examples focusing on a recent work with Alon and Moitra.

  • Monday March 31 at 12:00 in 3721, Lindstedtsvägen 25 (joint with the Combinatorics Group)
    The graph regularity method
    (Jacob Fox, MIT)

    Szemerédi's regularity lemma is one of the most powerful tools in graph theory, with many applications in combinatorics, number theory, discrete geometry, and theoretical computer science. Roughly speaking, it says that every large graph can be partitioned into a small number of parts such that the bipartite subgraph between almost all pairs of parts is random-like. Several variants of the regularity lemma have since been established with many further applications. In this talk, I will survey recent progress in understanding the quantitative aspects of these lemmas and their applications.

  • Monday March 24 at 12:00 in 4523, Lindstedtsvägen 5
    Improved inapproximability results for hypergraph coloring
    (Prahladh Harsha, Tata Institute of Fundamental Research, Mumbai)

    Despite the tremendous progress in understanding the approximability of several problems, the status of approximate coloring of constant-colorable (hyper)graphs is not resolved, and in fact there is an exponential (if not doubly exponential) gap between the best known approximation algorithms and inapproximability results. The best known approximation algorithms, which are a combination of combinatorial and semi-definite programming methods, require at least n^δ colors to color a 2-colorable 4-uniform hypergraph for some constant δ ∈ (0,1). In contrast, until recently, the best known hardness results could at best rule out coloring a 2-colorable hypergraph with polylog n colors (if not polyloglog n colors in some cases).

    Recently, with the discovery of the low-degree polynomial long code (aka short code of Barak et al [FOCS 2012]), there has been a super-polynomial (and in some cases, exponential) improvement in the inapproximability results. In particular, we prove quasi-NP-hardness of the following problems on n-vertex hypergraphs:

    • Coloring a 2-colorable 8-uniform hypergraph with 2^{2^{√(log log n)}} colors.
    • Coloring a 4-colorable 4-uniform hypergraph with 2^{2^{√(log log n)}} colors.
    • Coloring a 3-colorable 3-uniform hypergraph with (log n)^{1/log log log n} colors.
    These results are obtained using the low-degree polynomial code and the techniques proposed by Dinur and Guruswami [FOCS 2013] to incorporate this code for inapproximability results.

    In this talk, I'll explain the bottleneck in obtaining improved coloring inapproximability results using the long code and how derandomizations of the long code (the short code in our setting) can be used to improve the inapproximability factors.

    Joint work with V. Guruswami, J. Håstad, S. Srinivasan, G. Varma.

  • Monday March 10 at 12:00 in 3721, Lindstedtsvägen 25 (joint with the Combinatorics Group)
    Random Cayley graphs
    (Demetres Christofides, UCLan Cyprus)

    In this talk we will define various models of random Cayley graphs. We will then mention some results and partial results about random Cayley graphs and compare them with the corresponding results for the classical random graphs. We will also recall some of the proofs of the results in the classical case and pinpoint at which places these proofs fail to work in the Cayley case. Finally we will briefly discuss how some of these results in the Cayley case can be obtained.

    In the second part of the talk we will concentrate on a specific result for classical random graphs and its corresponding extension for random Cayley graphs: It is known that there is a threshold p(n) such that G(n,p) has diameter greater than 2 if p is slightly below the threshold and diameter equal to 2 if p is slightly above the threshold. In this part of the talk we will discuss how this result extends to random Cayley graphs.

  • Monday February 10 at 12:00 in 4523, Lindstedtsvägen 5 (joint with the Combinatorics Group)
    Smoothed analysis on connected graphs
    (Michael Krivelevich, Tel Aviv University)

    The main paradigm of smoothed analysis on graphs suggests that for any large graph G in a certain class of graphs, slightly perturbing the edges of G at random (usually by adding a few random edges to G) typically results in a graph with much nicer properties.

    In this talk we discuss smoothed analysis on trees, or equivalently on connected graphs. A connected graph G on n vertices can be a very bad expander, can have very large diameter and very high mixing time, and possibly has no long paths. The situation changes dramatically when εn random edges are added on top of G: the resulting graph G* has with high probability the following properties:

    • its edge expansion is at least c/log n;
    • its diameter is O(log n);
    • its vertex expansion is at least c/log n;
    • it has a linearly long path;
    • its mixing time is O(log^2 n)
    (the last three items assuming the base graph G has bounded degrees). All of the above estimates are asymptotically tight.

    Joint work with Daniel Reichman (Weizmann Institute) and Wojciech Samotij (Tel Aviv/Cambridge).

Theory reading group meetings autumn 2013

  • Monday December 16 at 12:00 in 4523
    Approximate groups and almost-periodicity
    (Olof Sisask, Department of Mathematics, KTH)

    One can characterise the finite subgroups of an abelian group — or rather cosets of subgroups — as those non-empty subsets A of the group for which the sumset A+A = {a+b : a,b in A} has the same size as A itself. What if one relaxes this condition to |A+A| < K|A| for some fixed number K — must such sets look a bit like subgroups?
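
    The doubling condition is easy to experiment with; here is a minimal sketch in Python, over the integers rather than a general abelian group (an illustrative simplification):

        def sumset(A):
            # A + A = {a + b : a, b in A}
            return {a + b for a in A for b in A}

        # An arithmetic progression has small doubling: |A+A| < 2|A|.
        A = set(range(0, 300, 3))
        print(len(sumset(A)) / len(A))   # close to 2

        # A "generic" set has |A+A| close to |A|^2 / 2 instead.
        B = {n * n for n in range(100)}
        print(len(sumset(B)) / len(B))   # much larger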

    This kind of question forms the backdrop for a fundamental result of additive combinatorics, known as Freiman's theorem, that gives a partial classification of such 'approximate subgroups'. There has been great progress in the past few years on obtaining stronger descriptions of such sets, but there are still important unsolved problems in this direction, the most famous probably being the Polynomial Freiman-Ruzsa conjecture. A positive resolution of this conjecture would provide very interesting and strong classifications of some fundamental approximate algebraic objects, such as approximate subgroups and approximate homomorphisms, and has been shown to have nice applications in other areas — among them theoretical computer science. The plan for this talk is to describe some of the background of the Polynomial Freiman-Ruzsa conjecture and some of the underlying theory, and possibly to mention some of the application areas in TCS.

  • Monday December 9 at 12:00 in 4523
    From degree to space and back: A journey through algebraic proofs
    (Mladen Mikša, Theory Group, KTH)

    Have you ever run a SAT solver on your favourite problem, only to realize too late that going cheap on RAM was a bad idea and that the solver won't be able to find a solution? Attending this talk might help you better understand your SAT solver's needs.

    In order to theoretically study a SAT solver, we model the reasoning that goes on inside the solver by a corresponding proof system and map different resources of the SAT solver to corresponding complexity measures of the proof system. Thus, we map the running time of a SAT solver to the size of the proof in the proof system, while the memory consumption is mapped to space. In the talk, we focus on the space measure in the algebraic proof system polynomial calculus, which is still poorly understood compared to resolution (the basis of most state-of-the-art SAT solvers).

    One of the most important results for our understanding of space in resolution was given by Atserias and Dalmau in 2003, who showed a relation between space and resolution width, another very useful complexity measure. As resolution width generalizes to polynomial calculus degree, a natural question is whether a relation similar to that of Atserias and Dalmau holds between polynomial calculus space and degree. In this talk we discuss some partial results towards answering that question.

    This talk is based on the paper Towards an Understanding of Polynomial Calculus: New Separations and Lower Bounds, which is a joint work with Yuval Filmus, Massimo Lauria, Jakob Nordström, and Marc Vinyals, and which was presented at ICALP 2013.

  • Monday December 2 at 12:00 in 4523
    Pebble games and complexity
    (Siu Man Chan, Princeton University)

    We will discuss space and parallel complexity, ranging from some classical results which motivated the study to some recent results concerning combinatorial lower bounds in restricted settings. The recurring theme is a collection of pebble games, and we will highlight some of their connections to Boolean complexity and proof complexity.

  • Monday November 25 at 12:00 in 4523
    Integrating cutting planes in a modern SAT solver
    (Daniel Le Berre, Université d'Artois)

    SAT solvers used on a daily basis in hardware or software verification are based on the so-called conflict-driven clause learning (CDCL) architecture. This architecture is based on a proof system equivalent to full resolution. Resolution in this context is used to derive new clauses during conflict analysis. SAT solvers can easily be extended to deal with cardinality constraints and pseudo-Boolean constraints while keeping a resolution-based proof system. A major focus has been to study the translation of those constraints into CNF, so that current SAT solvers can be reused without modification. Another approach is to extend the CDCL architecture to use cutting planes instead of resolution on cardinality or pseudo-Boolean constraints. The presentation will focus on the design of such a solver, from the specific properties of pseudo-Boolean constraints to the design of a cutting-planes-based conflict analysis procedure. Some experimental results based on the implementation of this procedure available in Sat4j will conclude the talk.

  • Wednesday November 13 at 12:00 in 1537
    The Turán problem for hypergraphs
    (Klas Markström, Umeå University)

    The classical Turán problem asks the following question: given a graph H, what is the maximum number of edges in a graph on n vertices which does not contain a copy of H? For ordinary graphs, a result of Erdős, Stone and Simonovits gives an asymptotic solution to this problem. However, the asymptotics for bipartite graphs H, and for graphs H which do not have constant size, still present problems. The latter connects to the well-known triangle removal lemma.

    A 3-graph, or 3-uniform hypergraph, consists of a vertex set and a set of edges, which are vertex sets of size 3. Unlike the Turán problem for graphs, the Turán problem for 3-graphs is still far from understood; for example, we do not know the correct asymptotics for any complete 3-graph.

    I will survey some methods and problems in this area, discussing how lower and upper bounds have been proven.

  • Monday October 14 at 12:00 in 4523
    Lower bounds (slightly) beyond resolution
    (Marc Vinyals, Theory Group, KTH)

    One of the goals of proof complexity is to show lower bounds for stronger and stronger proof systems. For the well-known resolution proof systems exponential lower bounds are known for many formula families, but for stronger proof systems many of these formulas quickly become either provably easy or very hard to analyze.

    In this talk, we will look at k-DNF resolution, an extension of resolution that works with k-DNF formulas instead of clauses, which is exponentially more powerful than resolution yet weak enough so that one can prove interesting lower bounds and see how the hardness of different formulas changes. We will show exponential lower bounds for the weak pigeonhole principle and for random k-CNF formulas as well as separations between k-DNF and (k+1)-DNF resolution for increasing k. The main technical tool is a version of the switching lemma that works for restrictions that set only a small fraction of the variables and is applicable to DNF formulas with small terms.

    This presentation is based on the paper Segerlind, Buss, Impagliazzo: A Switching Lemma for Small Restrictions and Lower Bounds for k-DNF Resolution (dx.doi.org/10.1137/S0097539703428555).

  • Monday September 30 at 12:00 in 4523
    The complexity of proving that a graph is Ramsey
    (Massimo Lauria, Theory Group, KTH)

    We say that a graph with n vertices is c-Ramsey if it does not contain either a clique or an independent set of size c*log(n). We define a CNF formula which expresses this property for a graph G. We show a superpolynomial lower bound on the length of resolution proofs that G is c-Ramsey, for every graph G. Our proof makes use of the fact that every Ramsey graph must contain a large subgraph with some of the statistical properties of the random graph.
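
    Concretely, the natural encoding (a standard one; the paper's exact formulation may differ in details) has a variable x_{uv} for every potential edge and, for every vertex set S of size c·log(n), two clauses stating that S spans neither a clique nor an independent set:

        \bigvee_{\{u,v\} \subseteq S} \lnot x_{uv}
        \qquad \text{and} \qquad
        \bigvee_{\{u,v\} \subseteq S} x_{uv}.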

    Joint work with Pavel Pudlák, Vojtech Rödl and Neil Thapen. The paper appeared at ICALP 2013.

  • Monday September 9 at 12:00 in 1537
    Average-case complexity of lattice problems
    (Rajsekar Manokaran, Theory Group, KTH)

    The average-case complexity of a problem is the complexity of solving instances of it picked from a specific distribution. In a seminal work, Ajtai [STOC '96] showed a relation between the average-case complexity and the worst-case complexity of lattice problems. This result is central to cryptography implemented using lattices. Subsequently, Micciancio and Regev [FOCS '04] improved the parameters involved in the proof while also simplifying the presentation.

    In this talk, I will describe the result along the lines of the work of Micciancio and Regev. Time permitting, I will describe the recent improvements achieved by Micciancio and Peikert [CRYPTO '13].

  • Monday August 26 at 12:00 in 1537
    Weak pigeonhole principles, circuits for approximate counting, and propositional proofs of bounded depth
    (Albert Atserias, Universitat Politècnica de Catalunya)

    The pigeonhole principle (PHP) states that m pigeons cannot sit injectively into n holes if m is bigger than n. An often quoted result of the research in propositional proof complexity is that the PHP with m = n+1 does not have small proofs in proof systems that "lack the ability to count". These include resolution [Haken] and, more generally, proof systems that manipulate formulas with a bounded number of alternations between disjunctions and conjunctions (a.k.a. bounded-depth formulas) [Ajtai]. In contrast, for proof systems that manipulate arbitrary propositional formulas, or even bounded-depth formulas with "threshold gates", the PHP with m = n+1 admits small proofs [Buss]. For weaker PHPs, where m is at least a constant factor larger than n, the situation is much less understood. On one hand it looks clear that the ability to count approximately should be enough to establish these weaker PHPs. On the other, while bounded-depth circuits exist for approximate counting [Stockmeyer, Ajtai], their mere existence is not enough to get bounded-depth small proofs since one would also need elementary (i.e. comparably complex) proofs of their correctness.

    In this talk we will survey the status of this classical problem in propositional proof complexity. Along the way we will discuss some new recent related results showing that a close variant of the weak PHP admits and requires quasipolynomial-size depth-2 proofs. To our knowledge, this is the first natural example that requires superpolynomial but not exponential proofs in a natural proof system. It also shows that the method of proof is in some sense "the right method"; a remarkable and rather unexpected fact by itself.

Theory reading group meetings spring 2013

  • Monday May 13 at 12:00 in 1537
    On fitting too many pigeons into too few holes
    (Mladen Mikša, Theory Group, KTH)

    If m pigeons want to nest in n separate pigeonholes, they will run into serious problems if m > n. Although this might seem trivial, it is one of the most extensively studied (and fascinating) combinatorial principles in all of proof complexity.

    In a breakthrough result, Haken (1985) showed that it is exponentially hard for the resolution proof system to prove that n+1 pigeons don't fit into n holes. However, what happens when the number of pigeons increases? Since it becomes increasingly obvious that not every pigeon can get a different hole, perhaps one can find shorter proofs as the number of pigeons goes to infinity? This notorious open problem was finally resolved by Raz (2001), who established that even for infinitely many pigeons one still needs proofs of exponential length. Raz's result was subsequently simplified and strengthened by Razborov.

    In the lunch seminar part of this talk, we will give an overview of Razborov's proof. The presentation will be self-contained and no prerequisites in proof complexity are required. After the break, we will go into technical details and prove the full result.

  • Monday April 22 at 12:00 in 4523
    Approximability of some constraint satisfaction problems
    (Sangxia Huang, Theory Group, KTH)

    I will talk about approximability of Constraint Satisfaction Problems (CSPs). In particular, we focus on CSPs of "sparse" Boolean predicates. This is also related to other optimization problems, such as finding maximum independent set. For CSP instances that are almost satisfiable, Siu On Chan proved recently that there is a predicate on k variables with k+1 accepting assignments that is NP-hard to approximate better than (k+1)/2^k. The case of approximation on satisfiable instances is rather different. This is also an important question, and I will describe many recent developments, including my own work.

    For the first part, I will give an overview of the state of the art and some techniques — mainly reductions for showing hardness. In the second part, I will go over some proofs in Chan's paper and prove part of the main result. The talk does not require prior experience in the PCP business.

  • Wednesday April 17 at 12:00 in 4523
    Rediscovering the joys of pebbling
    (Jakob Nordström, Theory Group, KTH)

    In the early 1970s, combinatorial pebble games played on directed acyclic graphs were introduced as a way of studying programming languages and compiler construction. These games later found a broad range of applications in computational complexity theory and were extensively studied up to ca 1985. Then they were mercifully forgotten, more or less… until the last decade, when pebbling quite surprisingly turned out to be very useful in proof complexity. In this talk, we will describe the connections between the two settings and how tight they are, present an improved reduction, and discuss the gap that remains. In order to use this reduction to obtain interesting results in proof complexity, one needs pebbling results with quite specific (and rather strong) properties. We will also discuss a new such result, which broke the 25-year hiatus in the pebbling literature by appearing in CCC '10.

    This seminar is intended to be completely self-contained. In particular, no prerequisites in proof complexity or pebbling are required. After the break, in the technical sequel we intend to have some fun and actually prove some pebbling time-space trade-off results.

  • Tuesday April 9 at 12:00 in 1537
    On the complexity of finding width-k resolution refutations
    (Christoph Berkholz, RWTH Aachen University)

    One approach to attack NP-hard satisfiability problems such as 3-SAT or the Constraint Satisfaction Problem (CSP) is to design algorithms that run in polynomial time but do not always succeed. In this talk I gently introduce the approach of searching for width-k resolution refutations for 3-SAT (also known as k-consistency test in the CSP-community). This technique can be implemented in time n^O(k), hence is in polynomial time for every fixed k.
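
    A minimal sketch of such an n^O(k) procedure, written as a plain saturation loop (this is the obvious implementation, assuming clauses are frozensets of signed-integer literals, and is not meant to reflect the k-consistency formulation from the talk):

        from itertools import combinations

        def has_width_k_refutation(clauses, k):
            # Saturate under resolution restricted to clauses of width <= k
            # and report whether the empty clause is derivable. There are
            # only n^O(k) clauses of width <= k, so for fixed k this runs
            # in polynomial time.
            derived = {c for c in clauses if len(c) <= k}
            changed = True
            while changed:
                changed = False
                for c1, c2 in combinations(list(derived), 2):
                    for lit in c1:
                        if -lit in c2:   # resolve c1 and c2 on lit
                            resolvent = frozenset((c1 - {lit}) | (c2 - {-lit}))
                            if len(resolvent) <= k and resolvent not in derived:
                                derived.add(resolvent)
                                changed = True
            return frozenset() in derived

        # x and (not x) resolve to the empty clause: a width-1 refutation.
        print(has_width_k_refutation({frozenset({1}), frozenset({-1})}, k=1))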

    One drawback of this approach is that the degree of the polynomial increases with the parameter k. My main result is a lower bound showing that this cannot be avoided: Deciding whether there is a width-k resolution refutation requires time n^{ck} for an absolute constant c>0. Furthermore, the problem is EXPTIME-complete (if k is part of the input).

  • Monday March 25 at 12:00 in 4523
    Faster Hamiltonicity
    (Per Austrin, Theory Group, KTH)

    I will talk about exact algorithms for Hamiltonicity and Travelling Salesperson. The classic Bellman/Held-Karp dynamic programming algorithm for these problems has a running time of O(n^2*2^n). This remained unbeaten for almost half a century until 2010, when Björklund gave an algorithm with running time O(2^{0.73n}) for undirected Hamiltonicity. Since then there has been a flurry of generalizations and simplifications, but many open questions remain.
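
    For reference, the classic dynamic program computes, for every vertex subset S containing the start vertex and every v in S, the length of a shortest path that starts at the start vertex, visits exactly the vertices of S, and ends at v. A minimal Python sketch over an adjacency matrix (the input format is an assumption):

        def held_karp(dist):
            # Bellman/Held-Karp dynamic programming for TSP in O(n^2 * 2^n).
            # dist[i][j] is the length of edge (i, j); returns the length
            # of a shortest tour starting and ending at vertex 0.
            n = len(dist)
            FULL = (1 << n) - 1
            INF = float("inf")
            # best[mask][v]: shortest path from 0 visiting exactly the
            # vertices in mask (which contains 0 and v), ending at v.
            best = [[INF] * n for _ in range(1 << n)]
            best[1][0] = 0
            for mask in range(1 << n):
                if not mask & 1:
                    continue                  # every path starts at vertex 0
                for v in range(n):
                    if best[mask][v] == INF:
                        continue
                    for w in range(n):
                        if mask & (1 << w):
                            continue          # w already visited
                        nm = mask | (1 << w)
                        cand = best[mask][v] + dist[v][w]
                        if cand < best[nm][w]:
                            best[nm][w] = cand
            # close the tour by returning to vertex 0
            return min(best[FULL][v] + dist[v][0] for v in range(1, n))

        dist = [[0, 2, 9, 10],
                [1, 0, 6, 4],
                [15, 7, 0, 8],
                [6, 3, 12, 0]]
        print(held_karp(dist))   # 21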

    My aim for the first part is to describe the state of the art and some general techniques that are used, and for the second part to go over one or two algorithms in detail. The talk should be accessible without any specific background knowledge and in particular you don't need to know what the Hamiltonicity and Travelling Salesperson problems are beforehand.

  • Monday March 18 at 12:00 in 4523
    Subgraphs satisfying MSO properties on z-topologically orderable digraphs
    (Mateus de Oliveira Oliveira, Theory Group, KTH)

    We introduce the notion of z-topological orderings for digraphs. We prove that given a digraph G admitting a z-topological ordering, we may count the number of subgraphs of G that satisfy a monadic second order formula φ and that are the union of k directed paths in time f(φ, k, z)·n^O(k·z). Our result implies the polynomial time solvability of a vast number of natural counting problems on digraphs admitting z-topological orderings for constant z. For instance, we are able to answer in polynomial time questions of the form "How many planar subgraphs of G are the union of k directed paths?" Concerning the relationship between z-topological orderability and other digraph measures, we observe that any digraph of directed path-width d has a z-topological ordering for z ≤ 2d + 1. Since graphs of bounded directed path-width can have both arbitrarily large undirected tree width and arbitrarily large clique width, our result provides for the first time a suitable way of partially transposing metatheorems developed in the context of the monadic second order logic of graphs of bounded undirected tree width and bounded clique width to the realm of digraph width measures that are closed under taking subgraphs and whose constant levels incorporate families of graphs of arbitrarily large tree width and arbitrarily large clique width.

  • Monday February 18 at 12:00 in 4523
    Exponential lower bounds for the PPSZ k-SAT Algorithm
    (Dominik Scheder, Aarhus University)

    In 1998, Paturi, Pudlak, Saks, and Zane presented PPSZ, an elegant randomized algorithm for k-SAT. Fourteen years on, this algorithm is still the fastest known worst-case algorithm. They proved that its expected running time on k-CNF formulas with n variables is at most 2^((1 - epsilon_k)n), where epsilon_k = Omega(1/k). So far, no exponential lower bounds at all have been known.

    We construct hard instances for PPSZ. That is, we construct satisfiable k-CNF formulas over n variables on which the expected running time is at least 2^((1 - epsilon_k)n), for epsilon_k in O(log^2 (k) / k).

    This is joint work with Shiteng Chen, Navid Talebanfard, and Bangsheng Tang.

  • Monday February 11 at 12:00 in 4523
    Modular Verification of Temporal Safety Properties of Procedural Programs
    (Dilian Gurov, Theory Group, KTH)

    Modularity as a software design principle aims at controlling the complexity of developing and maintaining large software. When applied to verification, modularity means that the individual modules are specified and verified independently of each other, while global, system-wide properties are verified relative to the specifications of the modules rather than to their implementations. Such a relativization is the key to verifying software in the presence of variability, that is, of modules whose implementation is expected to either evolve or be dynamically upgraded, or is not even available at verification time, or exists in multiple versions resulting from a software product line.

    In this tutorial I will present modular verification in the context of temporal safety properties and a program model of recursive programs that abstracts away from data. The proposed method is algorithmic and is based on the construction of maximal program models that replace the local specifications when verifying global properties.

  • Monday February 4 at 12:00 in 4523
    Boolean influence
    (Erik Aas, Department of Mathematics, KTH)

    The influence of a variable of a Boolean function is a measure of how likely that variable is to control the output of the function. I'll present some fundamental results concerning the influences of threshold functions (a special kind of Boolean function). If time permits, we will prove the Kahn-Kalai-Linial theorem, which gives a lower bound on the largest influence of a variable of a "balanced" Boolean function.
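
    Concretely, for f : {-1,1}^n → {-1,1} the influence of coordinate i is (this is the standard definition, not notation taken from the talk)

        \mathrm{Inf}_i(f) = \Pr_{x \in \{-1,1\}^n}\left[ f(x) \ne f(x^{\oplus i}) \right],

    where x^{\oplus i} denotes x with its i-th coordinate flipped; in this notation the Kahn-Kalai-Linial theorem says that every balanced f has some coordinate with \mathrm{Inf}_i(f) = \Omega((\log n)/n).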

  • Tuesday January 22 at 12:00 in 1537
    Verifying a fault-tolerant distributed aggregation protocol with the Coq proof assistant
    (Karl Palmskog, Theory Group, KTH)

    Decentralized aggregation of data from network nodes is an important way of determining system-wide properties in distributed systems, e.g. in sensor networks. A scalable way of performing such aggregation is by maintaining a network overlay in the form of a tree, with data flowing from the leaves towards the root.

    We describe a variant of the tree-based Generic Aggregation Protocol which is well suited to secure aggregation, and argue formally that the protocol has the expected behaviour for networks of arbitrary size, even in the presence of crash failures.

    Practical verification techniques for communication protocols such as model checking usually require models with bounded state space. To reason about the protocol without such undue abstraction, we encode it as a transition system in the inductive logical framework of the Coq proof assistant and perform machine-assisted proofs.

    Using Coq for verification requires knowledge of many techniques and theories unrelated to the problem domain, and we give an overview of the libraries, tools, and trade-offs involved.

Theory reading group meetings autumn 2012

  • Monday December 17 at 12:00 in 4523
    Majority is stablest: Discrete and SoS
    (Sangxia Huang, Theory Group, KTH)

    Consider a vote by n people on a Yes/No question and a voting scheme that decides the outcome of the vote based on the n inputs. The Majority is Stablest Theorem says that among all voting schemes in which the number of ways to vote leading to outcome Yes equals the number leading to outcome No, and in which no voter has large influence on the result, Majority is the stablest scheme (the one whose result is least likely to be affected by flipping a small number of inputs).
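
    In the standard formalization (the notation here is assumed, not taken from the talk), the noise stability of f : {-1,1}^n → {-1,1} at noise rate ρ ∈ (0,1) is

        \mathrm{Stab}_\rho(f) = \mathbb{E}\left[ f(x)\, f(y) \right],

    where x is uniformly random and y agrees with each coordinate of x independently with probability (1+ρ)/2; the theorem states that among balanced functions in which every coordinate has small influence, Majority asymptotically maximizes \mathrm{Stab}_\rho(f).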

    The Majority is Stablest Theorem has numerous applications in hardness of approximation and social choice theory. The proof by [Mossel, O'Donnell, Oleszkiewicz 10] uses deep results in Gaussian analysis and an Invariance Principle that allows one to deduce the discrete result from the Gaussian one.

    Recently, Anindya De, Elchanan Mossel and Joe Neeman provided a short general proof of the Majority is Stablest Theorem that does not rely on Gaussian analysis and the Invariance Principle. The new proof can also be transformed into a "Sum of Squares" proof, thus showing that the Khot-Vishnoi instance of Max-Cut does not provide an integrality gap instance for Max-Cut in the Lasserre hierarchy.

    In this seminar, we introduce basic notions of Boolean function analysis. We present the ideas of both the "Invariance Principle"-based proof and the new proof. In the reading group that follows the seminar, we discuss details of the new proof as well as (time permitting) the Sum of Squares proof of the Majority is Stablest Theorem.
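
    For reference, noise stability can be made precise as follows (standard definitions, not quoted from the abstract). The noise stability of f : {-1,1}^n -> {-1,1} at rate \rho is

        \mathrm{Stab}_\rho(f) = \mathop{\mathbb{E}}_{x \sim_\rho y}\bigl[f(x)\,f(y)\bigr],

    where each coordinate y_i independently equals x_i with probability (1+\rho)/2. The theorem states that for balanced f with all influences at most \tau and 0 \le \rho < 1,

        \mathrm{Stab}_\rho(f) \le 1 - \tfrac{2}{\pi}\arccos\rho + o_\tau(1),

    where 1 - (2/\pi)\arccos\rho is the limiting noise stability of Maj_n.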

  • Tuesday December 11 at 12:00 in 4523
    Hypercontractivity, sum-of-squares proofs, and their applications
    (Rajsekar Manokaran, Theory Group, KTH)

    In this talk, I will present the recent work of Barak, Brandao, Harrow, Kelner, Steurer and Zhou on using the "Sum of Squares" semi-definite programming hierarchy to solve the known hard instances of the Unique Games (UG) problem. The conjectured hardness of this problem, known as the UG Conjecture, has been instrumental in proving a wide variety of tight inapproximability results, hence warranting its study. [BBHKSZ '12] show that the SoS SDP hierarchy can be used to refute the conjecture on instances involving a noise operator on the Boolean cube (which has been the basis of hard instances for this problem to date).

    I will first describe the "2 to 4 norm" problem, showing how it relates to the UG conjecture, with a focus on the known family of hard instances. Then, I will describe the SDP hierarchy followed by an overview of the proof that the SDP does indeed refute the conjecture on this family of instances. I will leave the details of the proofs to the session after the seminar.
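
    For reference (standard definition, not quoted from the abstract), the 2-to-4 norm of a matrix A is

        \|A\|_{2 \to 4} = \max_{x \neq 0} \frac{\|Ax\|_4}{\|x\|_2},

    and the computational problem is to approximate this quantity, whose hardness [BBHKSZ '12] relate to the UG Conjecture.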

  • Monday November 26 at 12:00 in 4523
    Monotone submodular maximization over a matroid using local search
    (Yuval Filmus, University of Toronto)

    Maximum coverage is the relative of set cover in which, instead of trying to cover the entire universe with as few sets as possible, we try to cover as many elements as possible using a fixed number of sets. The greedy algorithm gives a 1-1/e approximation. Surprisingly, Feige proved that this ratio is optimal unless P=NP. (Sketches of both the greedy rule and the local search discussed below follow at the end of this abstract.)

    Things get more complicated when we replace the cardinality constraint on the sets with a matroid constraint (say, there are K types of sets, and we should choose at most one of each type). The greedy algorithm now gives only a 1/2 approximation. Calinescu et al. developed a sophisticated algorithm that gives a 1-1/e approximation. Their algorithm combines gradient ascent on a continuous relaxation with optimal rounding, and also works in the more general setting of monotone submodular maximization.

    We show that non-oblivious local search also gives the optimal approximation ratio. The auxiliary objective function, with respect to which the local search proceeds, gives more weight to elements covered multiple times in the current solution. We also extend our algorithm to the setting of monotone submodular maximization.

    Joint work with Justin Ward.
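
    The sketches referred to above (my own illustrations; in particular the auxiliary weights alpha below are hypothetical placeholders, not the weights derived in the paper):

        from collections import Counter

        def greedy_max_coverage(sets, k):
            # Repeatedly pick the set covering the most yet-uncovered
            # elements; this achieves a (1 - 1/e) approximation.
            covered, chosen = set(), []
            for _ in range(k):
                best = max(sets, key=lambda s: len(s - covered))
                chosen.append(best)
                covered |= best
            return chosen

        def auxiliary(solution, alpha=(0.0, 1.0, 1.5, 1.75)):
            # Non-oblivious potential: elements covered several times get
            # extra (diminishing) weight, unlike the plain coverage count.
            cover = Counter(e for s in solution for e in s)
            return sum(alpha[min(c, len(alpha) - 1)] for c in cover.values())

        def local_search(types):
            # Partition matroid: choose exactly one set per type, and
            # locally swap within a type while the potential improves.
            current = [t[0] for t in types]
            improved = True
            while improved:
                improved = False
                for i, t in enumerate(types):
                    for s in t:
                        candidate = current[:i] + [s] + current[i + 1:]
                        if auxiliary(candidate) > auxiliary(current):
                            current, improved = candidate, True
            return current

        sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}]
        print(greedy_max_coverage(sets, 2))                 # [{1, 2, 3}, {4, 5, 6}]
        print(local_search([[{1, 2}, {2, 3}], [{3, 4}]]))   # [{1, 2}, {3, 4}]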

  • Wednesday November 14 at 12:00 in 4523
    An information complexity approach to extended formulations
    (Lukáš Poláček, Theory Group, KTH)

    Lukáš will present the paper An Information Complexity Approach to Extended Formulations by Mark Braverman and Ankur Moitra (eccc.hpi-web.de/report/2012/131/). The abstract of the paper is as follows:

    We prove an unconditional lower bound that any linear program that achieves an O(n^{1-\eps}) approximation for clique has size 2^\Omega(n^\eps). There has been considerable recent interest in proving unconditional lower bounds against any linear program. Fiorini et al. proved that there is no polynomial sized linear program for traveling salesman. Braun et al. proved that there is no polynomial sized O(n^{1/2 - \eps})-approximate linear program for clique. Here we prove an optimal and unconditional lower bound against linear programs for clique that matches Håstad's celebrated hardness result. Interestingly, the techniques used to prove such lower bounds have closely followed the progression of techniques used in communication complexity. Here we develop an information theoretic framework to approach these questions, and we use it to prove our main result.

    Also we resolve a related question: How many bits of communication are needed to get \eps-advantage over random guessing for disjointness? Kalyanasundaram and Schnitger proved that a protocol that gets constant advantage requires \Omega(n) bits of communication. This result in conjunction with amplification implies that any protocol that gets \eps-advantage requires \Omega(\eps^2 n) bits of communication. Here we improve this bound to \Omega(\eps n), which is optimal for any \eps > 0.

  • Wednesday November 7 at 13:15 in E53
    Applying Boolean Groebner bases to cryptography and formal verification
    (Alexander Dreyer, Fraunhofer Institute for Industrial Mathematics ITWM)

    We will present and discuss some application examples for Boolean Groebner bases from cryptography and formal verification.

    Computing Groebner bases is of double-exponential complexity in the worst case, but thanks to very sophisticated heuristics plenty of practical problems can be solved in reasonable time.

    We will also talk about (still) open problems.

  • Tuesday November 6 at 12:00 in E53
    Basics on Boolean Groebner bases and algebraic SAT solving
    (Alexander Dreyer, Fraunhofer Institute for Industrial Mathematics ITWM)

    We will give a short introduction to the topic of Groebner bases; in particular, we will scratch the surface of Boolean polynomials, i.e., polynomials with coefficients in {0,1} and a fixed degree bound (per variable) of one.

    This is accompanied by a brief presentation of the data structure proposed for Boolean polynomials in our software framework PolyBoRi, for computing with POlynomials in BOolean RIngs. These polynomials are the algebraic representation of Boolean functions, which enables us to use techniques from computational algebra (like the Groebner basis formalism) for solving Boolean problems. Their special properties allow for representing them effectively as binary decision diagrams, but this calls for new heuristics and algorithms.

    Finally, the talk will connect Groebner bases with the classical SAT solver approach to the satisfiability problem of propositional logic.
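
    As a small illustration of this algebraic view of SAT (my own sketch, assuming sympy's groebner accepts the modulus option; PolyBoRi itself uses specialized data structures instead), an unsatisfiable formula yields the Groebner basis {1}:

        from sympy import symbols, groebner

        x, y = symbols('x y')
        # Each clause contributes a polynomial that vanishes exactly on the
        # assignments satisfying it; field equations force x, y into {0, 1}.
        polys = [
            (1 + x) * (1 + y),   # clause (x OR y): nonzero iff x = y = 0
            x,                   # clause (NOT x)
            y,                   # clause (NOT y)
            x**2 + x,            # field equation x^2 = x over GF(2)
            y**2 + y,
        ]
        G = groebner(polys, x, y, modulus=2, order='lex')
        print(G)  # basis containing 1: the clauses have no common root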

  • Tuesday October 16 at 12:00 in 4523
    Optimality of size-degree tradeoffs for polynomial calculus
    (Massimo Lauria, Theory Group, KTH)

    Polynomial calculus is a way of refuting unsatisfiable CNF formulas by translating them to polynomials and proving that these polynomials have no common root. To show lower bounds on the size of such proofs, one usually proves strong lower bounds on degree, and then uses a general size-degree trade-off theorem saying that very high degree implies very high size.

    There is an annoying gap in this theorem, however, in that fairly high degree lower bounds, up to sqrt(n), cannot be shown to imply anything about size (the precise shape of the trade-off is recalled at the end of this abstract). A natural question is whether this is inherent or whether the theorem could be strengthened so that somewhat strong degree lower bounds would yield somewhat strong size lower bounds.

    We rule out this possibility by proving that the size-degree trade-off in its current form is essentially optimal. We do so by studying formulas encoding properties of linear orderings, which are known to have small proofs, and showing that these formulas require sqrt(n) degree.

    Joint work with Nicola Galesi.
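
    The trade-off referred to above has the following shape (standard statement, not quoted from the abstract, ignoring the dependence on the degree of the input polynomials): if every polynomial calculus refutation of a formula over n variables requires degree d, then every refutation requires size

        S \geq \exp\bigl(\Omega(d^{2}/n)\bigr),

    which is nontrivial only once d grows beyond sqrt(n), explaining the gap discussed above.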

  • Tuesday October 2 at 12:00 in 4523
    On the virtue of succinct proofs: Amplifying communication complexity hardness to time-space trade-offs in proof complexity
    (Jakob Nordström, Theory Group, KTH)

    Motivated by the satisfiability problem and SAT solving, the last decade has seen active research into time-space trade-offs in proof systems for propositional logic. So far most of the research has focused on the resolution proof system operating with CNF formulas, whereas stronger systems using algebraic and geometric methods of reasoning have remained beyond reach.

    In this work, we report the first trade-off results for algebraic and geometric proof systems. Our results are obtained by combining combinatorial pebble games, extensively studied in the 1970s and 1980s, with recently developed tools in communication complexity based on information theory.

    No prior knowledge of proof complexity or communication complexity will be assumed. This talk presents joint work with Trinh Huynh published in STOC '12.

  • Tuesday September 11 at 12:00 in 4523
    Amazingly dense Ruzsa-Szemeredi graphs, plus applications
    (Joshua Brody, Aarhus University)

    An induced matching of a graph G=(V,E) is a matching M such that no two edges of M are joined by an edge in G. A Ruzsa-Szemeredi graph is a graph G, together with an edge coloring, such that the set of edges of any particular color forms an induced matching in G (a small checker making this definition concrete follows at the end of this abstract). Researchers have found surprising applications of Ruzsa-Szemeredi graphs in a wide range of areas, including communication over shared channels, linearity testing, and communication complexity. For these applications, we generally want graphs that (1) are as dense as possible, but (2) use as few colors as possible.

    In this talk, I'll present a recent construction of Alon, Moitra, and Sudakov from STOC 2012 which seems too good to be true. They construct a graph which is nearly complete, but can be decomposed into a slightly superlinear number of induced matchings. This is essentially the best one can hope for. Time permitting, I'll also present an application or two.

    This talk assumes only basic knowledge of graph theory (i.e., I'll start by describing what induced matchings are and go from there).

    This talk mostly covers work from the STOC 2012 paper Nearly Complete Graphs Decomposable into Large Induced Matchings and their Applications by Noga Alon, Ankur Moitra, and Benny Sudakov.
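
    The small checker referred to above (my own sketch, not from the talk):

        def is_induced_matching(edges, M):
            # M is an induced matching of G = (V, edges) if its edges are
            # pairwise vertex-disjoint and no edge of G joins two of them.
            verts = [v for e in M for v in e]
            if len(verts) != len(set(verts)):      # not even a matching
                return False
            matched, Mset = set(verts), {frozenset(e) for e in M}
            return all(frozenset(e) in Mset
                       for e in edges if set(e) <= matched)

        edges = [(1, 2), (3, 4), (2, 3)]
        print(is_induced_matching(edges, [(1, 2), (3, 4)]))  # False: (2, 3) joins them
        print(is_induced_matching(edges, [(1, 2)]))          # True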

Published by: Jakob Nordström <jakobn~at-sign~kth~dot~se>
Updated 2017-10-13