Slides of the talk: [ pptx ]
Abstract:
We investigate the biggest possible gaps between quantum and classical algorithms in the query model of computation (which encompasses most of the known quantum algorithms).
For partial functions, we exhibit a property-testing problem called Forrelation with the following properties:
- it can be solved using 1 quantum query;
- it requires \Omega(\sqrt{N}/\log N) queries for randomized algorithms.
We also show that this separation is close to optimal: any 1-query quantum algorithm can be simulated by a randomized algorithm that makes O(\sqrt{N}) queries, and any t-query quantum algorithm whatsoever can be simulated by an O(N^{1-1/(2t)})-query randomized algorithm.
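For reference, the Forrelation quantity itself (defined by Aaronson and Ambainis; the definition is not restated in this abstract) is, for f, g : \{0,1\}^n \rightarrow \{-1,1\} and N = 2^n,

    \Phi_{f,g} = 2^{-3n/2} \sum_{x,y \in \{0,1\}^n} f(x) (-1)^{x \cdot y} g(y),

and the promise problem is to decide whether |\Phi_{f,g}| is close to zero or \Phi_{f,g} is bounded away from zero by a constant.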
For total functions, much smaller gaps between different models of computation are achievable (due to the fact that the algorithm must output a decisive answer on every input). Before our work, the biggest known gap for total functions was the quadratic gap achieved by Grover's search algorithm.
We improve on this, showing a function that can be computed by a quantum algorithm making m queries but requires \Omega(m^4/\log^c m) queries for deterministic algorithms. We also substantially improve the biggest known advantage for exact quantum algorithms (algorithms that always output the correct answer) to nearly quadratic (m queries for an exact quantum algorithm vs. \Omega(m^2/\log^c m) queries for classical algorithms), and solve two longstanding open questions about relations between classical models of computation:
- we show a function that can be computed by a randomized algorithm with m queries but requires \Omega(m^2/\log^c m) queries deterministically, improving over a result by Snir from 1986;
- we show the first example of a function for which randomized algorithms that are allowed to make a mistake with a small probability are better than zero-error randomized algorithms.
Slides of the talk: [ pdf ]
Abstract:
In the monotonicity testing problem, the task is to distinguish whether
a given Boolean function is monotone or is far from monotone, using as
few queries to the function as possible. This is one of the most
natural problems in property testing. The problem was originally defined in
1998; after some initial progress in the early 2000s, it withstood further
attempts at understanding its query complexity until a very recent
burst of activity.
In this talk, I survey some recently obtained results, including (a) a
complete characterisation of non-adaptive randomised query complexity by
Chen, De, Servedio, and Tan [STOC'15], and Khot, Minzer and Safra
[FOCS'15]; and (b) the first polynomial lower bound in the adaptive
setting, which is joint work with Eric Blais [STOC'16].
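As a point of reference, the sketch below implements the classic non-adaptive "edge tester" for monotonicity. It is only the simple folklore baseline with an O(n/\epsilon) query budget, not any of the algorithms surveyed in the talk, and the constant used for the number of trials is an arbitrary choice of mine.

    import random

    def edge_tester(f, n, eps, trials_constant=2):
        """Reject if a violated hypercube edge is found; otherwise accept.

        f maps tuples in {0,1}^n to {0,1}.  A monotone f is always accepted;
        a function eps-far from monotone is rejected with good probability
        once the number of trials is a large enough multiple of n/eps.
        """
        trials = int(trials_constant * n / eps) + 1
        for _ in range(trials):
            x = [random.randint(0, 1) for _ in range(n)]
            i = random.randrange(n)
            lo, hi = list(x), list(x)
            lo[i], hi[i] = 0, 1                 # the two endpoints of an edge in direction i
            if f(tuple(lo)) > f(tuple(hi)):     # violation: f(lo) = 1 but f(hi) = 0
                return False
        return True

    # The anti-dictator function f(x) = 1 - x_0 is far from monotone,
    # so this call returns False with overwhelming probability.
    print(edge_tester(lambda x: 1 - x[0], n=8, eps=0.1))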
Abstract: Since its introduction by Birk and Kol in 1998, the problem of index coding has been generalized in a number of directions. It is a problem that has aroused much interest in recent years; from the theoretical perspective, its equivalence to network coding has established it as an important area of network information theory. In the classical case, a central broadcaster has a data file x. There are n users, each of whom already possesses some subset of the components of x as side information and each of whom requests some component of the file. The index coding problem is to determine the minimum number of transmissions required so that the demands of all users can be met, given that data may be encoded prior to broadcast. Here we present new bounds on the optimal rate for different instances of the index coding problem using linear programming methods.
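To make the setting concrete, the toy script below brute-forces the minimum number of broadcasts for a small instance, restricted to scalar linear codes over GF(2); the instance, the function names and this brute-force approach are my own illustration, not the linear programming machinery of the talk.

    from itertools import product

    def rank_gf2(vectors):
        """Rank over GF(2); each vector is packed into a Python int bitmask."""
        pivots = {}                          # highest set bit -> basis vector
        for v in vectors:
            while v:
                h = v.bit_length() - 1
                if h not in pivots:
                    pivots[h] = v
                    break
                v ^= pivots[h]               # eliminate that bit and continue
        return len(pivots)

    def in_span(vec, vectors):
        return rank_gf2(list(vectors) + [vec]) == rank_gf2(list(vectors))

    def min_linear_broadcasts(n, wants, side_info):
        """Fewest GF(2) linear combinations of x_0..x_{n-1} that let every user i
        recover x_{wants[i]} from the broadcasts together with its side information.
        Exhaustive search: only feasible for tiny n."""
        for r in range(n + 1):
            for rows in product(range(1, 1 << n), repeat=r):
                if all(in_span(1 << wants[i],
                               list(rows) + [1 << j for j in side_info[i]])
                       for i in range(n)):
                    return r
        return n

    # Three users on a directed cycle: user i wants x_i and already knows x_{(i+1) % 3}.
    # Two broadcasts (e.g. x_0 + x_1 and x_1 + x_2) suffice, while one does not.
    print(min_linear_broadcasts(3, wants=[0, 1, 2], side_info=[[1], [2], [0]]))   # -> 2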
Slides of the talk: [ pdf ]
Abstract:
Cellular automata (briefly, CA) are synchronous distributed systems on
regular grids, where each site is connected with a finite number of
neighboring sites, each neighborhood having the same shape. The sites
are finite state automata, whose input alphabet is made of the
possible states of the neighborhood, and which are updated
synchronously: this induces in a natural way a transformation of the
global state of the grid.
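As a concrete illustration of the global map induced by a local rule, here is a minimal sketch (my own toy example, not taken from the talk) of a one-dimensional CA on a cyclic grid, where every cell is updated synchronously from the states of itself and its two nearest neighbours; the particular local rule is just an example.

    def ca_step(config, local_rule):
        """One synchronous update of a cyclic one-dimensional cellular automaton."""
        n = len(config)
        return [local_rule(config[(i - 1) % n], config[i], config[(i + 1) % n])
                for i in range(n)]

    # Example local rule: sum of the three-cell neighbourhood modulo 2.
    xor_rule = lambda left, centre, right: (left + centre + right) % 2

    config = [0, 1, 0, 0, 1, 1, 0, 0]
    for _ in range(3):
        config = ca_step(config, xor_rule)
        print(config)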
The inability of a cellular automaton to correct finitely many errors
in finite time is referred to in the literature as pre-injectivity. As an
"almost dual", Siamak Taati introduced post-surjectivity, which can be
expressed as follows: for every pair of source-target configurations,
every finite block of errors in the target can be obtained by setting
a suitable finite block of errors in the source. While pre-injectivity
is a weakening of injectivity, post-surjectivity is, under suitable
conditions, a strengthening of surjectivity.
This "almost dualization" also extends to a remarkable result and a
famous conjecture in cellular automata theory. First, just as those CA
which are both injective and surjective have an inverse CA, so do those
which are both post-surjective and pre-injective. Next, for a very
large class of symmetry groups of the grid (no counterexamples are
known!), just as every injective CA is surjective, so every post-surjective
CA is pre-injective.
Slides of the talk: [ pptx ]
Abstract: Polynomial-time constant-space quantum Turing machines (QTMs) and logarithmic-space probabilistic Turing machines (PTMs) recognize uncountably many languages with bounded error (Say and Yakaryilmaz 2014, arXiv:1411.7647). In this paper, we investigate more restricted cases of both models that still recognize uncountably many languages with bounded error. We show that, on unary languages, double logarithmic space is enough for PTMs with a sweeping head, and logarithmic space is enough with a one-way head. For quantum models on unary languages, we obtain middle logarithmic space for counter machines. For binary languages, arbitrarily small non-constant space is enough for PTMs even when only a counter is used as memory. For counter machines restricted to polynomial time, we obtain the same result for linear space. For constant-space QTMs, we follow the result for a restricted sweeping head, known as restarting realtime.
Slides of the talk: [ pdf ]
Abstract: We present a hybrid encryption scheme that is chosen ciphertext secure in the quantum random oracle model. The scheme is a combination of an asymmetric and a symmetric encryption scheme that are secure in a weak sense. It is a slight modification of the Fujisaki-Okamoto transform that is secure against classical adversaries. In addition, we modify the OAEP-cryptosystem and prove its security in the quantum random oracle model based on the existence of a partial-domain one-way injective function secure against quantum adversaries.
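For orientation, the sketch below mirrors the shape of the classical Fujisaki-Okamoto combination mentioned above. The primitives are toy, insecure placeholders of my own (the "asymmetric" part is just keyed XOR with sk = pk), the hashes G and H stand in for the random oracles, and nothing here reflects the quantum-random-oracle modifications that the talk is actually about.

    import hashlib, os

    def H(data):  return hashlib.sha256(b"H" + data).digest()    # derives encryption randomness
    def G(sigma): return hashlib.sha256(b"G" + sigma).digest()   # derives the symmetric key

    def xor(a, b): return bytes(x ^ y for x, y in zip(a, b))

    # Toy placeholder primitives (NOT secure; they only make the control flow runnable).
    def asym_enc(pk, sigma, rand):            # deterministic once (sigma, rand) is fixed
        return rand + xor(sigma, hashlib.sha256(pk + rand).digest())

    def asym_dec(sk, c):                      # here sk == pk, purely for the sketch
        rand, body = c[:32], c[32:]
        return xor(body, hashlib.sha256(sk + rand).digest())

    def sym_enc(key, m): return xor(m, hashlib.sha256(key + b"stream").digest()[:len(m)])
    def sym_dec(key, c): return sym_enc(key, c)          # XOR stream cipher: enc == dec

    def hybrid_enc(pk, m):
        sigma = os.urandom(32)
        c1 = asym_enc(pk, sigma, H(sigma + m))            # randomness bound to (sigma, m)
        c2 = sym_enc(G(sigma), m)
        return c1, c2

    def hybrid_dec(sk, pk, c1, c2):
        sigma = asym_dec(sk, c1)
        m = sym_dec(G(sigma), c2)
        if asym_enc(pk, sigma, H(sigma + m)) != c1:       # re-encryption check
            return None                                   # reject malformed ciphertexts
        return m

    pk = sk = os.urandom(32)
    c1, c2 = hybrid_enc(pk, b"hello")
    assert hybrid_dec(sk, pk, c1, c2) == b"hello"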
Slides of the talk: [ pptx ]
Abstract: In this presentation I will introduce a new method for counting objects in images, intended for counting similarly sized and mostly round objects. Unlike many other algorithms with the same purpose, the proposed method does not rely on identifying every object; instead, it uses statistical data obtained from the image. The method is evaluated on images of human bone cells, oranges and pills, achieving good accuracy. Its strengths are its ability to deal with touching and partly overlapping objects, its ability to work with different kinds of objects without prior configuration, and its good performance.
Slides of the talk: [ pdf ]
Abstract:
The paper examines hierarchies for nondeterministic and deterministic ordered read-k-times branching programs (k-OBDDs). The currently known hierarchies for deterministic k-OBDD models of branching programs for k=o(n^{1/2}/\log^{3/2} n) were proved by B. Bollig, M. Sauerhoff, D. Sieling, and I. Wegener in 1998; their lower bound technique was based on a communication complexity approach. For nondeterministic k-OBDDs it is known that if k is constant, then polynomial-size k-OBDDs compute the same functions as polynomial-size OBDDs (a result of Brosenne, Homeister and Waack, 2006). At the same time, the currently known hierarchies for nondeterministic read-k-times branching programs for k=o(\sqrt{\log{n}}/\log\log{n}) were proved by Okolnishnikova in 1997, and those for probabilistic read-k-times branching programs for k \leq \log n/3 were proved by Hromkovic and Sauerhoff in 2003.
We show that increasing k makes polynomial-size nondeterministic k-OBDDs more powerful when k is not constant. Moreover, we extend the hierarchy for probabilistic and nondeterministic k-OBDDs to k=o(n/\log n). These results extend the hierarchies for read-k-times branching programs, although k-OBDDs have a more regular structure. The lower bound techniques we propose are a ``functional description'' of the Boolean function represented by a nondeterministic k-OBDD and a communication complexity technique. We present similar hierarchies for superpolynomial- and subexponential-width nondeterministic k-OBDDs.
Additionally, we expand the hierarchies for deterministic k-OBDDs using our lower bounds for k=o(n/\log n). We also analyze similar hierarchies for superpolynomial- and subexponential-width k-OBDDs.
Slides of the talk: [ pdf ]
Abstract: We show an equivalence between 1-query quantum algorithms and representations by degree-2 polynomials. Namely, a partial Boolean function f is computable by a 1-query quantum algorithm with error bounded by \epsilon<1/2 iff f can be approximated by a degree-2 polynomial with error bounded by \epsilon'<1/2. This result holds for two different notions of approximation by a polynomial: the standard definition of Nisan and Szegedy and the approximation by block-multilinear polynomials recently introduced by Aaronson and Ambainis (STOC'2015). We also show two results for polynomials of higher degree. First, there is a total Boolean function which requires \tilde{\Omega}(n) quantum queries but can be represented by a block-multilinear polynomial of degree \tilde{O}(\sqrt{n}). Thus, in the general case (for an arbitrary number of queries), block-multilinear polynomials are not equivalent to quantum algorithms. Second, for any constant degree k, the two notions of approximation by a polynomial (the standard and the block-multilinear) are equivalent. As a consequence, we solve an open problem by Aaronson and Ambainis (STOC'2015), showing that one can estimate the value of any bounded degree-k polynomial p:{0, 1}^n \rightarrow [-1, 1] with O(n^{1-\frac{1}{2k}}) queries.
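As a reminder (the definition follows Aaronson and Ambainis and is not restated in the abstract): a block-multilinear polynomial of degree k splits its variables into k blocks x^{(1)}, ..., x^{(k)} of n variables each and has the form

    p(x^{(1)}, \dots, x^{(k)}) = \sum_{i_1, \dots, i_k} a_{i_1 \dots i_k} x^{(1)}_{i_1} x^{(2)}_{i_2} \cdots x^{(k)}_{i_k},

so that every monomial contains exactly one variable from each block; boundedness means |p| \le 1 on all of \{-1,1\}^{kn}.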
Abstract: We consider functions with sets of binary vectors of length n as domains. Given a function f, a protocol F computing f is a function outputting a sequence of message vectors M_1, ..., M_r such that for any input sets S_A and S_B the value of the last message vector M_r equals the value of the function on the union of the sets, i.e. M_r = f(S_A \cup S_B). The cooperative two-party function computation problem is the problem of finding a protocol F which minimizes the communication complexity \sum_{i=1}^r |M_i|. We give lower bounds on the communication complexity of protocols computing certain functions.
Slides of the talk: [ odp ]
Abstract:
DARPA's Brandeis program seeks to develop the technical means to protect
the private and proprietary information of individuals and enterprises.
The studied tools include secure computation, differential privacy, and
domain-specific reduction of data features. Much effort is also put
into letting the data owners understand, state their limits for, and
control the use of their data. The tools are evaluated by integrating
them into larger systems.
Our team is working on measuring the privacy losses these systems
entail, and the reductions of the losses when various privacy-enhancing
technologies are employed. In my talk, I will describe the definitional
framework we are working on, the model of systems we are using, and the
analyses we have developed. Due to the size of systems we want to
analyse, the emphasis has been on compositionality of privacy
definitions. Several existing measures of similarity are special cases
of our definitions, including statistical distance and differential privacy.
The work I'll be describing has been jointly performed by researchers
and engineers in Cybernetica, and in the Software Engineering group at
the University of Tartu.
Slides of the talk: [ pptx ]
Abstract:
A NIZK shuffle argument enables a mix-server to prove in zero
knowledge that she has correctly shuffled and rerandomized her input
ciphertexts. We propose a new random oracle-less NIZK shuffle
argument. It has a simple structure, where the first verification
equation ascertains that the prover has committed to a permutation
matrix, the second verification equation ascertains that the same
permutation was used to permute the ciphertexts, and the third
verification equation ascertains that input ciphertexts were
``correctly'' formed. The new argument has 4 times more efficient
verification than the previously most efficient shuffle argument by
Fauzi and Lipmaa (CT-RSA 2016). Compared to the Fauzi-Lipmaa shuffle
argument, we (i) remove the use of knowledge assumptions and prove
our scheme is sound in the generic bilinear group model, and (ii)
prove standard soundness, instead of culpable soundness.
authors: Prastudy Fauzi, Helger Lipmaa and Michał Zając
Accepted to ASIACRYPT 2016
link: http://kodu.ut.ee/~lipmaa/papers/flz16/
Slides of the talk: [ pdf ]
Abstract: Analyzing the behaviour of a concurrent program is made difficult by the number of possible executions, since to check that the program satisfies a property P we would need to check that each of the executions satisfies it. According to trace theory, if two actions a and b are independent of each other (meaning that for every state, executing ab or ba gives the same result) then the executions sabt and sbat are considered equivalent. If the property that we are interested in is invariant under this equivalence relation then it is sufficient to check that the property is satisfied for a member of each of the equivalence classes. This talk presents a way to generate only the canonical representatives of all of the possible executions of a program and shows how it can be extended to the TSO, PSO and RMO memory models.
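The toy code below illustrates the equivalence relation described above on a two-thread example; it takes as canonical representative the lexicographically least member of each equivalence class, found by exhaustively applying adjacent swaps of independent actions. This is only my illustration of the underlying notion, not the generation algorithm of the talk (which avoids enumerating whole classes) and not its memory-model extensions.

    from itertools import permutations

    def equivalence_class(word, independent):
        """All executions reachable from `word` by swapping adjacent independent actions."""
        seen, frontier = {word}, [word]
        while frontier:
            w = frontier.pop()
            for i in range(len(w) - 1):
                if independent(w[i], w[i + 1]):
                    swapped = w[:i] + (w[i + 1], w[i]) + w[i + 2:]
                    if swapped not in seen:
                        seen.add(swapped)
                        frontier.append(swapped)
        return seen

    def canonical(word, independent):
        return min(equivalence_class(word, independent))

    # Two threads: thread a performs a1 then a2, thread b performs b1 then b2;
    # actions of different threads are independent, actions of the same thread are not.
    independent = lambda x, y: x[0] != y[0]
    executions = {p for p in permutations(("a1", "a2", "b1", "b2"))
                  if p.index("a1") < p.index("a2") and p.index("b1") < p.index("b2")}
    representatives = {canonical(e, independent) for e in executions}
    print(len(executions), len(representatives))   # 6 interleavings fall into 1 class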
Slides of the talk: [ pdf ]
Abstract: In this talk I will present a classification of two-qubit commuting Hamiltonians in terms of their computational complexity. Suppose one has a two-qubit commuting Hamiltonian H which one can apply to any pair of qubits, starting in a computational basis state. We prove a dichotomy theorem: either this model is efficiently classically simulable or it allows one to sample from probability distributions which cannot be sampled from classically unless the polynomial hierarchy collapses. Furthermore, the only simulable Hamiltonians are those which fail to generate entanglement. This shows that generic two-qubit commuting Hamiltonians can be used to perform computational tasks which are intractable for classical computers under plausible assumptions. Our proof makes use of new postselection gadgets and Lie theory.
Slides of the talk: [ pdf ]
Abstract: Quantum walks are quantum counterparts of classical random walks. They have been useful for designing quantum algorithms that outperform their classical versions for a variety of search problems. Most of the papers, however, consider a search space containing a single marked element only. We show that if the search space contains more than one marked element, their placement may drastically affect the performance of the search. More specifically, we study search by quantum walks on general graphs and demonstrate a wide class of configurations of marked vertices, for which search by quantum walk needs \Omega(N) steps, that is, it has no speed-up over the classical exhaustive search.
Slides of the talk: [ pdf ]
Abstract: The last few decades have seen the rise of several programming languages that use indentation for structuring code; the most popular examples are Haskell and Python. Recently, a new approach has been proposed to extend context-free grammars and parsing expression grammars for specifying indentation in an elegant way using binary relations. We show the applicability of this approach elsewhere, e.g. to specifying correct handling of operator priorities in expressions, analyze the limits of the approach and, finally, draw connections between the approach and attribute grammars.
Slides of the talk: [ pdf ]
Abstract:
Quantum walks have been shown to disperse much faster than
their classical analogues on various graphs, reaching distant vertices
up to exponentially faster in some cases. For some starting states of a
quantum walk, though, it is possible that the walk doesn't disperse at
all, but remains localized at the starting state with high probability.
We show a class of such starting states, which exhibit localization for
a wide variety of graphs with low electric resistance.
Similarly, quantum search on a graph can allow finding marked vertices
quadratically faster than classical search. Unlike classical search,
however, additional marked vertices do not always make finding one of
them easier. For some graphs, additional marked vertices can cause the
quantum search to remain close to its starting state instead of finding
the marked vertices. We give a general characterization of the graphs
and configurations of marked vertices for which this occurs.
Slides of the talk: [ pdf ]
Abstract:
In d-level random access codes (RACs) two parties communicate higher-dimensional systems. For such tasks one can construct quantum RACs with a bigger advantage over classical RACs than for the previously considered RACs with a binary alphabet. First, we present a brief overview of d-level RACs. Then we show that a variant of d-level RACs can have interesting implications in an operational revelation of preparation contextuality of maximally mixed states in higher dimensions. We introduce a new class of information processing tasks, namely d-level parity oblivious random access codes, and obtain bounds on the success probabilities of performing such tasks in any preparation noncontextual theory. These bounds constitute noncontextuality inequalities for any value of d. For d = 3, using a set of mutually asymmetric biased bases, we show that the corresponding noncontextual bound is violated by quantum theory. We also show quantum violation of the inequalities for some other higher values of d.
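For orientation (this is the standard setting, not restated in the abstract): in an n^{(d)} \rightarrow 1 random access code, Alice receives a string x \in \{0, \dots, d-1\}^n and encodes it into a single d-level system (a dit classically, a qudit quantumly); Bob receives an index y \in \{1, \dots, n\} and must output x_y. A common figure of merit is the success probability averaged over uniformly random x and y (worst-case variants are also used):

    p = \frac{1}{n d^n} \sum_{x, y} \Pr[\text{Bob outputs } x_y \mid x, y].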
References:
http://arxiv.org/abs/1607.05490
http://arxiv.org/abs/1510.03045
http://arxiv.org/abs/1506.05174
Slides of the talk: [ pdf ]
Abstract: Minimum Pearson Distance (MPD) detection offers resilience against unknown channel gain and varying offset. MPD detection is used in conjunction with a set, S, of codewords having specific properties. In this work, we study the properties of the codewords of S, compute the size of S, and derive its redundancy for asymptotically large values of the codeword length n.
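For reference (these are the standard definitions, not restated in the abstract): the Pearson distance between a received word r and a candidate codeword v of length n is

    \delta(r, v) = 1 - \rho_{r,v}, \qquad \rho_{r,v} = \frac{\sum_{i=1}^{n} (r_i - \bar{r})(v_i - \bar{v})}{\sqrt{\sum_{i=1}^{n} (r_i - \bar{r})^2} \sqrt{\sum_{i=1}^{n} (v_i - \bar{v})^2}},

and the MPD detector outputs a codeword of S minimizing \delta. Since \rho_{r,v} is unchanged when r is replaced by a r + b with a > 0, the detector is immune to unknown gain and offset, which is the resilience referred to above.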
Slides of the talk: [ pdf ]
Abstract: The author has devised a technology that allows hiding both source and destination addresses in IP packets at the network level (layer 3). Source IP spoofing for anonymization over UDP (SIPSA) is a proposal for a protocol that, in many network environments, would allow two hosts on the network to hide both their source and destination addresses while still being able to send and receive information. While a proof-of-concept tool that allows testing SIPSA is freely and publicly available on GitHub, this presentation focuses on the operating principles and theory behind the technology.
Slides of the talk: [ pdf ]
Abstract: We present a reinterpretation of the Kameda-Weiner method of finding a minimal nondeterministic finite automaton (NFA) of a language, in terms of atoms of the language. We introduce a method to generate NFAs from a set of languages, and show that the Kameda-Weiner method is a special case of it. Our method provides a unified view of the construction of several known NFAs, including the canonical residual finite state automaton and the atomaton of the language.
Slides of the talk: [ pdf ]
Abstract: We address the question of explicitly constructing quasi-uniform codes from groups. We determine the size of the codebook, the alphabet and the minimum distance as a function of the corresponding group, both for abelian and for some nonabelian groups. Potential applications include the design of storage codes.
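For orientation, one standard construction of quasi-uniform codes from groups (due to Chan and Yeung; the abstract does not spell out which construction is used, so this is only a reference point) associates to a finite group G and subgroups G_1, \dots, G_n the code

    C = \{ (g G_1, g G_2, \dots, g G_n) : g \in G \},

whose i-th symbol alphabet is the set of left cosets of G_i in G; with g uniform in G the induced codeword distribution is quasi-uniform, and parameters such as the codebook size and minimum distance are determined by the subgroup structure.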
Slides of the talk: [ pptx ]
Abstract: Commitment schemes are a fundamental primitive in cryptography. Their security (more precisely, the computational binding property) is closely tied to the notion of collision-resistance of hash functions. Classical definitions of binding and collision-resistance turn out to be weaker than expected when used in the quantum setting. We present strengthened notions (collapse-binding commitments and collapsing hash functions), explain why they are "better", and show how they can be realized under standard assumptions.
Slides of the talk: [ pdf ]
Abstract: This talk is concerned with language design and semantics for effects in functional programming. I will propose interaction morphisms as a means to specify how an effectful computation is to be run, provided a state in a context. An interaction morphism of a monad T and a comonad D is a family of maps TX x DY -> X x Y natural in X and Y and subject to some equations. Interaction morphisms turn out to be a natural concept with a number of neat properties. In particular, they are in a bijective correspondence with carrier-preserving functors between the categories of coalgebras of D and stateful runners of T (monad morphisms from T to state monads), but considerably more can be said.
Slides of the talk: [ pdf ]
Abstract: In this talk we investigate ways of introducing partiality as an effect in Martin-Löf type theory. We introduce a class of monads that we call join classifying monads. These are monads for partiality, in the sense that their Kleisli categories possess the structure of Cockett and Guo's join restriction categories. The fundamental example of a join classifying monad is Capretta's delay monad quotiented by weak bisimilarity. Inhabitants of the delay datatype are delayed values, which can be non-terminating and not return a value at all. Two delayed values are weakly bisimilar if they differ by a finite amount of delay. We show that the quotiented delay monad is the initial join classifying monad. Intuitively, this means that the Kleisli category of such a monad represents the “closest” framework to Martin-Löf type theory in which it is possible to define possibly non-terminating programs.
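For orientation (this is Capretta's definition, only summarized in the abstract): the delay datatype is the coinductive type D A with constructors now : A \rightarrow D A and later : D A \rightarrow D A, so an inhabitant is either an immediate value or a computation postponed by one step, possibly forever; weak bisimilarity identifies two delayed computations exactly when either both diverge or both return equal values after stripping a finite number of later steps.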
Slides of the talk: [ pdf ]
Abstract:
Affine automata were recently introduced as quantum-like non-linear
classical automata that mimic quantum interference classically. They
can be more powerful than probabilistic and quantum automata in
different language acceptance modes. In this talk, we discuss the
motivation for the model, its basic properties, and the main results.
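For orientation (the definition is due to Díaz-Caro and Yakaryılmaz and is only alluded to above): an affine automaton keeps a state vector v \in \mathbb{R}^k whose entries sum to 1 but may be negative, each input symbol acts on it by an affine matrix (a real matrix whose columns each sum to 1), and at the end of the input the acceptance value is obtained by the nonlinear weighting operator

    f(v) = \frac{\sum_{i \in A} |v_i|}{\sum_{j=1}^{k} |v_j|},

where A is the set of accepting states; the negative entries are what allow cancellation effects analogous to quantum interference.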
References:
http://arxiv.org/abs/1602.04732
http://arxiv.org/abs/1602.05432
http://arxiv.org/abs/1602.07967
Slides of the talk: [ pdf ]
Abstract: In this work, we analyze the failing sets of the interval-passing algorithm (IPA) for compressed sensing. The IPA is an efficient iterative algorithm for reconstructing a $k$-sparse nonnegative $n$-dimensional real signal $\vec{x}$ from a small number of linear measurements $\vec{y}$. In particular, we show that the IPA fails to recover $\vec{x}$ from $\vec{y}$ if and only if it fails to recover a corresponding binary vector of the same support, and also that only positions of nonzero values in the measurement matrix are of importance for success of recovery. Based on this observation, we introduce termatiko sets and show that the IPA fails to fully recover $\vec x$ if and only if the support of $\vec x$ contains a nonempty termatiko set, thus giving a complete (graph-theoretic) description of the failing sets of the IPA. Finally, we present an extensive numerical study showing that in many cases there exist termatiko sets of size strictly smaller than the stopping distance of the binary measurement matrix; even as low as half the stopping distance in some cases.