Slides of the talk: [ pdf ]
Abstract:
We define quantum hashing as a quantum generalization of classical
hashing. We introduce the concept of a quantum hash generator and offer a
design that allows one to build a large number of different quantum
hash functions. The construction is based on the composition of a classical
epsilon-universal hash family and a given family of functions --
quantum hash generators.
The relationship between epsilon-universal hash families and
error-correcting codes makes it possible to build a large number of
different quantum hash functions. In particular, we present a quantum
hash function based on Reed-Solomon codes, and we prove that this
construction is optimal in terms of the number of qubits needed.
Using the relationship between epsilon-universal hash families and
Freivalds' fingerprinting schemes, we present an explicit quantum hash
function and prove that this construction is optimal with respect to
the number of qubits needed.
Slides of the talk: [ ppt ]
Abstract:
A quantum algorithm is exact if, on any input data, it outputs the
correct answer with certainty (probability 1) - in contrast to the
usual model in which a quantum algorithm is allowed to output an
incorrect answer with small probability. Coming up with exact
quantum algorithms is a difficult task because we have to ensure that
no amplitude ends up in a state corresponding to an incorrect answer -
on any input.
We present the first example of a Boolean function f(x_1, ...,
x_N) for which exact quantum algorithms have a superlinear advantage over
deterministic algorithms. Any deterministic algorithm that computes
our function must use N queries but an exact quantum algorithm can
compute it with O(N^{0.8675...}) queries.
We also present several examples where exact quantum algorithms outperform deterministic algorithms by smaller amounts.
The talk is based on papers:
[1] A. Ambainis. Superlinear advantage for exact quantum algorithms. STOC 2013 and arXiv:1211.0721.
[2] A. Ambainis, J. Iraids, J. Smotrovs. Exact Quantum Query Complexity of EXACT and THRESHOLD. TQC 2013 and arXiv:1302.1235.
[3] A. Ambainis, J. Gruska, S. Zheng. Exact quantum algorithms have advantage for almost all Boolean functions. Quantum Information & Computation, to appear. arXiv:1404.1684.
Slides of the talk: [ pdf ]
Abstract:
Non-trivial analysis problems require complete lattices with
infinite ascending and descending chains. In order to compute
reasonably precise post-fixpoints of the resulting systems of
equations, Cousot and Cousot have suggested accelerated fixpoint
iteration by means of widening and narrowing.
The strict separation into phases, however, may unnecessarily give up
precision that cannot be recovered later. While widening is also
applicable if equations are non-monotonic, this is no longer the case
for narrowing. A narrowing iteration to improve a given post-fixpoint,
additionally, must assume that all right-hand sides are monotonic.
The latter assumption, though, is not met in the presence of widening. It
is also not met by equation systems corresponding to context-sensitive
interprocedural analysis, possibly combining context-sensitive
analysis of local information with flow-insensitive analysis of
globals.
As a remedy, we present a novel operator that combines a given
widening operator with a given narrowing operator. We present adapted
versions of round-robin as well as of worklist iteration, local, and
side-effecting solving algorithms for the combined operator and prove
that the resulting solvers always return sound results and are
guaranteed to terminate for monotonic systems whenever only finitely
many unknowns (constraint variables) are encountered.
This is joint work with Helmut Seidl and Vesal Vojdani and was presented
at PLDI 2013.
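The idea can be illustrated on the interval domain. The following is a
minimal, hypothetical Python sketch (not the solver from the paper): it uses
the standard interval widening and narrowing operators and one plausible way
of combining them into a single operator that widens while values still grow
and narrows once they stop growing.

```python
# Minimal sketch (not the PLDI 2013 solver): standard interval widening and
# narrowing, plus one plausible combined operator that narrows when the new
# value is below the current one and widens otherwise.

NEG_INF, POS_INF = float("-inf"), float("inf")

def leq(a, b):                       # interval inclusion: a is contained in b
    return b[0] <= a[0] and a[1] <= b[1]

def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):                     # unstable bounds jump to infinity
    return (a[0] if a[0] <= b[0] else NEG_INF,
            a[1] if a[1] >= b[1] else POS_INF)

def narrow(a, b):                    # only infinite bounds are refined
    return (b[0] if a[0] == NEG_INF else a[0],
            b[1] if a[1] == POS_INF else a[1])

def combined(a, b):                  # the combined operator
    return narrow(a, b) if leq(b, a) else widen(a, b)

def rhs(x):
    # loop-head equation for "i = 0; while (i <= 99) i = i + 1;"
    guard = (x[0], min(x[1], 99))                    # meet with i <= 99
    body = (guard[0] + 1, guard[1] + 1) if guard[0] <= guard[1] else None
    return join((0, 0), body) if body else (0, 0)

x = (0, 0)                           # value contributed by "i = 0"
while True:
    new = combined(x, rhs(x))
    if new == x:
        break
    x = new
print(x)                             # (0, 100): a precise post-fixpoint
```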
Slides of the talk: [ pdf ]
Abstract:
A span program is a linear-algebraic model of computation that computes
Boolean functions. There is a method for obtaining quantum query algorithms
of similar complexity from span programs, which makes the span program a
promising model for creating new quantum algorithms.
In this talk we will get acquainted with span programs and see example algorithms for some graph problems.
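To illustrate the model, recall the standard definition (which the talk
presumably follows): a span program consists of a target vector and a set of
input vectors, each labelled by a condition on one input variable; it accepts
an input iff the target lies in the span of the vectors whose labels are
satisfied. A small hypothetical numpy sketch for the two-bit OR function:

```python
# A small illustrative span program (standard definition assumed; this is
# not code from the talk). The program is given by a target vector and
# input vectors, each labelled by a condition "x_i equals b"; it accepts
# x iff the target lies in the span of the vectors whose labels hold on x.
import numpy as np

def accepts(target, labelled_vectors, x):
    available = [v for (i, b), v in labelled_vectors if x[i] == b]
    if not available:
        return False
    A = np.column_stack(available)
    # target is in the column span of A iff appending it keeps the rank
    rank_A = np.linalg.matrix_rank(A)
    rank_At = np.linalg.matrix_rank(np.column_stack([A, target]))
    return rank_At == rank_A

# Span program for OR(x0, x1): target [1], one vector [1] available when
# x0 = 1 and another vector [1] available when x1 = 1.
target = np.array([1.0])
vectors = [((0, 1), np.array([1.0])), ((1, 1), np.array([1.0]))]

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, accepts(target, vectors, x))   # accepts exactly when x0 or x1
```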
Slides of the talk: [ pdf ]
Abstract:
Frequency computation was introduced by G. Rose in 1960. An algorithm
(m,n)-computes a function if for every n-tuple of distinct inputs the
algorithm outputs an n-tuple of results such that at least m of them
are correct.
It has been proven by B. A. Trakhtenbrot that if the frequency m/n
<= 1/2 then a continuum of sets is (m,n)-computable, otherwise only
recursive sets can be (m,n)-computed.
In the original definition, any m of the n outputs may be the
correct ones. We generalize the notion of frequency
computability by demanding a specific structure for the correct
answers.
We show that if this structure is described in terms of finite
projective planes then even a frequency of O(1/sqrt(n)) ensures
recursivity of the computable set. We also show that with overlapping
structures this frequency cannot be significantly decreased. We also
introduce the notion of graph frequency computation and prove
sufficient conditions for a graph G such that a continuum of sets can
be G-computed.
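To make the basic definition concrete, here is a toy Python check of the
(m,n)-computation condition on one particular tuple of distinct inputs (an
illustration of the definition only, not of the constructions in the talk):

```python
# Toy illustration of the (m,n)-computation condition: on an n-tuple of
# distinct inputs the algorithm outputs n answers, of which at least m
# must be correct. Checker and example are hypothetical, not from the talk.

def satisfies_mn(answers, inputs, in_set, m):
    """At least m of the answers agree with membership in the reference set."""
    correct = sum(1 for x, a in zip(inputs, answers) if a == in_set(x))
    return correct >= m

# Example: the set of even numbers, and a toy algorithm that answers "even?"
# correctly except that it always claims the largest input is even; on any
# 3 distinct inputs it is therefore correct at least 2 times out of 3.
def toy_algorithm(inputs):
    biggest = max(inputs)
    return [True if x == biggest else x % 2 == 0 for x in inputs]

inputs = (3, 8, 11)                                # n = 3 distinct inputs
answers = toy_algorithm(inputs)
print(answers)                                     # [False, True, True]
print(satisfies_mn(answers, inputs, lambda x: x % 2 == 0, m=2))  # True
```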
Slides of the talk: [ ppt ]
Abstract: Rūsiņš Freivalds has recently introduced the idea of using p-adic numbers in Turing machines and finite automata to describe random branching of the process of computation. In the last two years different types of ultrametric finite automata and Turing machines have been explored. In this talk I would like to show some results obtained in research that I have participated in. This research has covered one-way, two-way and multihead ultrametric finite automata as well as ultrametric Turing machines.
Slides of the talk: [ ppt ]
Abstract:
The notion of non-malleable codes was introduced in [6]. Informally, a
code is non-malleable if the message contained in a modified codeword
is either the original message, or a completely unrelated value. In
contrast to error-correction and error-detection, non-malleability can
be achieved for very rich classes of modifications.
We will give a short survey of the recent advances in this area. We
will focus on the result of [5], where an information-theoretically
non-malleable code in the split-state model for one-bit messages was
constructed. We also plan to talk about the recent exciting work [1],
which contains a construction in the split-state model for multi-bit
messages. If time permits, we will also give a brief overview of
[2,3,4].
[1] Divesh Aggarwal, Yevgeniy Dodis, Shachar Lovett. Non-malleable Codes from Additive Combinatorics. STOC 2014.
[2] Eshan Chattopadhyay, David Zuckerman. Non-Malleable Codes Against Constant Split-State Tampering. FOCS 2014.
[3] Mahdi Cheraghchi, Venkatesan Guruswami. Capacity of Non-Malleable Codes. ITCS 2014.
[4] Mahdi Cheraghchi, Venkatesan Guruswami. Non-Malleable Coding Against Bit-wise and Split-State Tampering. TCC 2014.
[5] Stefan Dziembowski, Tomasz Kazana, Maciej Obremski. Non-Malleable Codes from Two-Source Extractors. CRYPTO 2013.
[6] Stefan Dziembowski, Krzysztof Pietrzak, Daniel Wichs. Non-Malleable Codes. ICS 2010.
Abstract:
A set of ciphertexts is a shuffle of another set of
ciphertexts if the corresponding sets of plaintexts are permutations of
each other. Non-interactive zero-knowledge shuffle arguments are
required to prove the correctness of shuffles, which is important in a
variety of applications. For instance, shuffle arguments are used in
e-voting to guarantee security against malicious voting servers. In
such an application, a server has to potentially shuffle hundreds of
thousands of encrypted ballots. In real elections, it is paramount
that this does not take too much time.
Up to now, only two shuffle arguments in the standard model (i.e.,
without assuming a random oracle) have been proposed. Both arguments
are relatively inefficient compared to random-oracle based shuffle
arguments. We propose a new non-interactive perfect zero-knowledge
shuffle argument that optimizes the result of Lipmaa and Zhang (J. of
Comput. Security, 2013). The resulting shuffle argument is
significantly more efficient; in fact, it approaches the efficiency of
random-oracle-based shuffle arguments.
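To make the notion of a shuffle concrete, here is a hypothetical toy sketch
(with toy parameters, no security, and not the argument from the talk):
ElGamal ciphertexts are re-randomized and permuted, so the output decrypts to
a permutation of the original plaintexts while the individual ciphertexts look
unrelated to the inputs.

```python
# Toy ciphertext shuffle (tiny parameters, no security; illustration of the
# notion of a shuffle only, not the zero-knowledge argument from the talk).
import random

p, g = 23, 2            # 2 generates a subgroup of order 11 modulo 23
x = 5                   # secret key
h = pow(g, x, p)        # public key

def enc(m):
    r = random.randrange(1, 11)
    return (pow(g, r, p), (m * pow(h, r, p)) % p)

def rerandomize(c):
    c1, c2 = c
    s = random.randrange(1, 11)
    return ((c1 * pow(g, s, p)) % p, (c2 * pow(h, s, p)) % p)

def dec(c):
    c1, c2 = c
    return (c2 * pow(pow(c1, x, p), p - 2, p)) % p

def shuffle(ciphertexts):
    perm = list(range(len(ciphertexts)))
    random.shuffle(perm)
    return [rerandomize(ciphertexts[i]) for i in perm]

msgs = [3, 9, 13, 16]                       # elements of the subgroup
cts = [enc(m) for m in msgs]
shuffled = shuffle(cts)
print(sorted(dec(c) for c in shuffled))     # same multiset of plaintexts
print(sorted(msgs))
```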
Abstract:
In the modern world there is often a need to analyze big data sets
which can be represented by graphs. Examples include social networks,
electrical schemes and telecommunication networks. Visualization of
such data can significantly improve one’s ability to effectively work
with information represented by these networks or schemas.
All graph visualizations can be divided into two groups: static, where the
whole graph is drawn entirely on the screen, and dynamic, where only a part
of the graph is presented on the screen and the user is able to manipulate
the graph by adding or removing some of its parts.
In this talk I will describe common ideas of both static and
dynamic graph visualization, as well as present a compact orthogonal graph
layout algorithm designed at the Institute of Mathematics and Computer
Science, University of Latvia, and our experience in designing a dynamic
social graph visualization tool.
Slides of the talk: [ pdf ]
Abstract:
Functional reactive programming (FRP) makes it possible to
deal with temporal aspects in a declarative fashion. Traditional
approaches to FRP do not support working with resources like GUI
widgets or files. As a result, programming with effects is not fully
declarative, does not adequately match the properties of resources,
and has performance issues. Therefore, it is reasonable to extend
FRP with support for resources.
In this talk, I present an abstract categorical semantics for such an
extended form of FRP and argue that certain models of FRP with
resources correspond to abstract process categories, which are
categorical models of ordinary FRP.
Abstract:
Ubiquitous use of the Internet and mobile technologies combined with
dropping data storage and processing costs have enabled new forms of
communications and data-driven innovations. However, they have also
created unprecedented challenges for privacy, with companies, policy
makers, and individuals struggling in their search for approaches that
could enable innovation while avoiding privacy harms.
In this talk, I will present algorithmic research that
demonstrates how these seemingly conflicting goals may be achieved,
even when the data being collected about individuals is constantly
changing and expanding. I will first show that merely restricting data
sharing is insufficient to protect privacy. I will then motivate and
introduce a formal and rigorous privacy guarantee that has emerged in
recent years -- differential privacy. Finally, I will present
algorithms that enable useful search data releases and social
advertising computations with differential privacy.
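As a standard textbook illustration of a differentially private release (not
one of the algorithms from the talk), here is the Laplace mechanism applied
to a counting query:

```python
# Textbook Laplace mechanism (illustration of differential privacy only,
# not the search-data or advertising algorithms from the talk).
# A counting query has sensitivity 1: adding or removing one individual
# changes the count by at most 1, so Laplace noise of scale 1/epsilon
# gives epsilon-differential privacy.
import numpy as np

def private_count(records, predicate, epsilon, sensitivity=1.0):
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 62, 58, 31, 47]
print(private_count(ages, lambda a: a >= 40, epsilon=0.5))
```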
Slides of the talk: [ pdf ]
Abstract:
Consider a wireless network where each node possesses some
subset of a larger collection of files. The goal of the data exchange
protocol is to deliver all the files to all the nodes in the minimum
number of transmissions and communication rounds.
A special case of that scenario where the underlying network is a
complete graph was studied by S. El Rouayheb, A. Sprintson and
P. Sadeghi (ITW 2010). In this work, we consider the general case,
where the underlying network is arbitrary. By generalizing the
techniques in the work of El Rouayheb et al., we propose an algebraic
framework, which yields an efficient schedule of coded transmissions.
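To illustrate the benefit of coded transmissions in the complete-graph
special case, here is a toy example (an illustration only, not the algebraic
framework of the talk): with three nodes, each missing a different file, two
broadcasts suffice where three uncoded transmissions would be needed.

```python
# Toy data-exchange instance on a complete graph (illustration only, not
# the algebraic framework from the talk). Node i holds every file except
# f_i; files are integers and coding is bitwise XOR.
f1, f2, f3 = 0b1010, 0b0110, 0b1111
has = {1: {2: f2, 3: f3},           # node 1 misses f1
       2: {1: f1, 3: f3},           # node 2 misses f2
       3: {1: f1, 2: f2}}           # node 3 misses f3

# Transmission 1: node 3 broadcasts f1 XOR f2 (it holds both files).
t1 = has[3][1] ^ has[3][2]
has[1][1] = t1 ^ has[1][2]          # node 1 cancels its copy of f2, gets f1
has[2][2] = t1 ^ has[2][1]          # node 2 cancels its copy of f1, gets f2

# Transmission 2: node 1 (which holds f3) broadcasts f3 uncoded for node 3.
has[3][3] = has[1][3]

files = {1: f1, 2: f2, 3: f3}
assert all(has[node][j] == files[j] for node in has for j in files)
print("all nodes hold all files after 2 transmissions (uncoded would need 3)")
```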
Slides of the talk: [ ppt ]
Abstract:
Quantum nonlocality, or quantum entanglement, is widely known in quantum game
theory. In some games the payoff of players properly equipped with entangled
quantum bits can be up to exponentially larger than that of ordinary players.
However, all these games are quite artificial and, besides, they are fully
“cooperative”: there are no opponents as such, and all players strive for the
same goal.
We consider a class of simple games that simulate one important
aspect of the game of bridge. We determine optimal (classical)
strategies for this class of games and show how the effect of quantum
nonlocality can improve the players’ performance.
Our simplified game of bridge favorably differs from the games previously studied
in quantum game theory. Firstly, it is derived from a quite natural problem
(the original game of bridge); secondly, there is an apparent presence of
competition in the game (because it is a zero-sum one!); and, finally, its
analysis does not require a deep understanding of the heavy mathematical
formalism of quantum information theory.
Slides of the talk: [ pdf ]
Abstract:
We propose two embarrassingly parallel methods for
obliviously evaluating functions having a special form and working
over fixed-point real numbers. First, we will consider a method that
works for monotone functions $f$ such that $f\circ g=h$ for some $g$
and $h$ that are easy to compute on fixed-point numbers and where
$g$ is strictly monotone. Such functions include $f(x)=\frac{1}{x}$
and $f(x)=\sqrt{x}$.
Second, we will describe a method for obliviously evaluating functions
defined as $\int_a^{x} f(t)dt$ where $x$ is private. Both methods rely
on point-counting techniques rather than evaluation of series, allowing
the result to be obtained in fewer rounds of computation at the price of
more communication per round. Since the complexity of oblivious
computing methods (like secret-shared multi-party computation (SMC)) is
largely determined by the round complexity, this approach has the
potential to give better performance compared to series-based
approaches.
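A hypothetical illustration of the point-counting idea (one plausible
reading; the construction presented in the talk may differ): for
$f(x)=\sqrt{x}$ one may take $g(y)=y^2$ and $h(y)=y$, since $g$ is strictly
monotone for $y \ge 0$ and easy to evaluate on fixed-point numbers. On a
fixed-point grid with step $\delta$ we have
$\#\{k \in \mathbb{N} : (k\delta)^2 \le x\} = \lfloor \sqrt{x}/\delta \rfloor + 1$
for $x \ge 0$, so $\sqrt{x}$ can be recovered up to the grid resolution by
obliviously counting the grid points $k\delta$ with $(k\delta)^2 \le x$; all
of these comparisons are independent and can be evaluated in parallel in a
single round.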
Slides of the talk: [ pdf ]
Abstract:
In this paper, we describe efficient protocols to perform in
parallel many reads and writes in private arrays according to private
indices. The protocol is implemented on top of the Arithmetic Black
Box (ABB) and can be freely composed to build larger
privacy-preserving applications. For a large class of secure
multiparty computation (SMC) protocols, we believe our technique to
have better practical and asymptotic performance than any previous
ORAM technique that has been adapted for use in SMC.
Our ORAM technique opens up a large class of parallel algorithms for
adaptation to run on SMC platforms. In this paper, we demonstrate how
the minimum spanning tree (MST) finding algorithm by Awerbuch and
Shiloach can be executed without revealing any details about the
underlying graph (besides its size). The data accesses of this
algorithm heavily depend on the location and weight of edges (which
are private) and our ORAM technique is instrumental in their
execution. Our implementation should be the first-ever realization of
a privacy-preserving MST algorithm.
Abstract: In Eurocrypt 2013, Gennaro et al. proposed an efficient non-adaptive short QAP-based NIZK argument for CIRCUIT-SAT, where non-adaptivity means that the CRS depends on the statement to be proven. While their argument can be made adaptive by using universal circuits, this increases the prover computation by a logarithmic multiplicative factor. Following the QAP-based approach, we propose an efficient product argument and then use it, together with a modified shift argument of Fauzi et al., in the modular framework of Groth to design an adaptive short NIZK argument for SUBSET-SUM and several other NP-complete languages that has the same complexity parameters as the QAP-based non-adaptive argument. This results in the first adaptive short NIZK arguments for NP in which the prover computation is dominated by a linear number of cryptographic operations. We also construct the most efficient known range argument.
Slides of the talk: [ pdf ]
Abstract:
Sequence analysis consists of characterizing non-random content of a
sequence. For example, if a sequence is a concatenation of several
strings, one may be interested in finding all maximal substrings that
occur exactly once in each string. This and many similar sequence
analysis tasks can be solved optimally in O(n) time using *suffix
trees*, where n is the length of the sequence drawn from alphabet
{1,2,...,c}, where c<=n. Suffix tree solutions take O(n log n) bits
of space, but the space can be improved to O(n log c) bits by using
*compressed suffix trees*; however, the combined construction and
analysis time is no longer linear.
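To make the example task concrete, here is a hypothetical brute-force sketch
(quadratic, and without the maximality filter, so it is not the suffix-tree
solution described below); it lists the substrings that occur exactly once in
each of the given strings:

```python
# Brute-force illustration of the example task (substrings occurring
# exactly once in every string); maximality is not enforced and the
# running time is far from the O(n)-time suffix-tree solution in the talk.
from collections import Counter

def substring_counts(s):
    counts = Counter()
    for i in range(len(s)):
        for j in range(i + 1, len(s) + 1):
            counts[s[i:j]] += 1
    return counts

def unique_in_each(strings):
    per_string = [substring_counts(s) for s in strings]
    candidates = set(per_string[0])
    return {w for w in candidates
            if all(c.get(w, 0) == 1 for c in per_string)}

print(sorted(unique_in_each(["abcab", "cabba"]), key=len, reverse=True))
```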
This talk describes a linear-time, O(n log c)-bit solution to
sequence analysis tasks like the one above. The core of the solution is
establishing a correspondence between interval pairs in a
space-efficient *bidirectional Burrows-Wheeler index* and nodes of a
suffix tree, and a way to space-efficiently explore the suffix tree
using only the former.
The talk is based on the recent work published in ESA 2013
(Belazzougui, Cunial, Kärkkäinen, Mäkinen) and STOC 2014 (Belazzougui).
Slides of the talk: [ ppt ]
Abstract: The National Open Access Research Data Archive MIDAS [1], with metadata based mainly on the CERIF standard data model [2], is currently under development by Vilnius University together with its partner Vilnius University Hospital Santariškių Klinikos (Santariskes Clinics). The purpose of this project is to establish a national research data archive infrastructure that enables the collection and storage of research and empirical data and ensures free, easy and convenient access to the research data and metadata. Project duration: 30 months (January 2012 – June 2014; now extended until April 2015). Currently, only the first iteration of development is completed. So now is the time to show the MIDAS data archive prototype to various researchers, including participants of “Theory Days at Ratnieki”, who represent different Computer Science fields, and to receive their comments, improvement suggestions, etc.
Abstract: In today's dynamic attack landscape, detecting possible attacks is too slow and exceeds the limits of human imaginative capability. Emerging security risks and multi-step attacks demand tool support to predict, prioritise, and prevent complex attacks systematically. In the TREsPASS 7th Framework project, attack navigators are being developed to analyse and visualise information security risks in dynamic organisations, as well as possible countermeasures. To this end, the project combines knowledge from technical sciences, social sciences, and state-of-the-art industry processes and tools.
Abstract: This paper studies the possibility of achieving IND-CPA security in privately outsourcing linear equation systems over real numbers. The particular task is solving a full-rank $n\times n$ system $Ax = b$. Since the most complex part of this task is inverting $A$, the problem can be reduced to outsourcing a square matrix inverse computation. Although this task is trivial for matrices over finite fields, it is not so easy for matrices over real numbers. We study the class of affine transformations for matrices over real numbers, find out which forms are possible at all, and find some properties of the transformation and the set of the initial matrices that have to be satisfied in order to make the initial matrices perfectly (or at least statistically) indistinguishable after applying the transformation.
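A minimal numpy sketch of the underlying masking idea, assuming a purely
multiplicative transformation $A \mapsto PAQ$ is used (the work analyzes which
transformations and input distributions actually give indistinguishability;
this sketch makes no such security claim):

```python
# Minimal sketch of outsourcing a matrix inverse via multiplicative masking
# A -> B = P A Q (no security claims; the talk analyzes when and how such
# transformations can hide A at all).
import numpy as np

n = 4
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n))            # private full-rank matrix
b = rng.normal(size=n)                 # private right-hand side

P = rng.normal(size=(n, n))            # random invertible masks kept secret
Q = rng.normal(size=(n, n))

B = P @ A @ Q                          # masked matrix sent to the server
B_inv = np.linalg.inv(B)               # computed by the untrusted server

A_inv = Q @ B_inv @ P                  # unmask: (PAQ)^-1 = Q^-1 A^-1 P^-1
x = A_inv @ b                          # solve Ax = b locally

print(np.allclose(A @ x, b))           # True
```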
Slides of the talk: [ pdf ]
Abstract: The essence of Bell's theorem is that, in general, quantum statistics cannot be reproduced by a local hidden variable (LHV) model. This impossibility is strongly manifested when analyzing the singlet state statistics for Bell-CHSH violations. In this work, we provide various subsets of two-outcome POVMs for which a local hidden variable model can be constructed for the singlet state.
Slides of the talk: [ pdf ]
Abstract:
We study codes over permutations, which can be used for
error correction in data transmission and storage, for example in flash
memories and power line communications. For $S_n$, the set of
permutations of $n$ elements, one can define various distance
metrics. A code is a subset in which every two elements are at least
some fixed distance apart. We are looking at the construction of such
codes for different metrics. I will give an overview of what kind of metrics one
can define. I will then talk about results and ideas for concrete
metrics, such as the Hamming and Ulam metrics. If a permutation is
written as an $n$-element sequence of the numbers $1...n$, then the
Hamming metric counts in how many positions two permutations differ
and the Ulam metric is defined as $n$ minus the length of the longest
common subsequence.
This is very much a work in progress and a cooperation with Vitaly
Skachek and Faruk Gologlu.
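As a concrete illustration of the two metrics defined above (a direct
transcription of the definitions, not of the code constructions):

```python
# Hamming and Ulam distances on permutations, written directly from the
# definitions in the abstract (illustration only, not a code construction).

def hamming(p, q):
    # number of positions in which the two permutations differ
    return sum(1 for a, b in zip(p, q) if a != b)

def ulam(p, q):
    # n minus the length of the longest common subsequence of p and q
    n = len(p)
    lcs = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(n):
            lcs[i + 1][j + 1] = (lcs[i][j] + 1 if p[i] == q[j]
                                 else max(lcs[i][j + 1], lcs[i + 1][j]))
    return n - lcs[n][n]

p = (1, 2, 3, 4, 5)
q = (2, 1, 3, 5, 4)
print(hamming(p, q))   # 4: positions 1, 2, 4 and 5 differ
print(ulam(p, q))      # 2: a longest common subsequence, e.g. (1, 3, 4), has length 3
```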
Abstract: This is about using monads to analyze effectful computations. We want to promote two ideas. First, to describe computations using a store of a specific kind, e.g., a stack, one can apply special monads more fine-grained than the state monad, allowing one to track different intensional aspects of the effects of such computations. Second, it makes sense to also pay attention to the comonads corresponding to the Lawvere theories of these monads: their coalgebras serve as stack implementations.
Slides of the talk: [ pdf ]
Abstract: We consider a relatively new descriptional complexity measure for deterministic finite automata (DFAs), BC-complexity, as an alternative to state complexity. We prove that for two DFAs with the same number of states the BC-complexity can differ exponentially. In some cases the minimization of a DFA can lead to an exponential increase in BC-complexity; on the other hand, the BC-complexity of DFAs with a large state space obtained by some standard constructions (determinization of an NFA, language operations) is reasonably small. Our main result, however, is an analogue of the “Shannon effect” for finite automata: almost all DFAs with a fixed number of states have BC-complexity close to the maximum.
Slides of the talk: [ pdf ]
Abstract: We will discuss the relationship between the definability of graphs in first-order logic and the computational complexity of the graph isomorphism and homomorphism problems. If every graph in a class of graphs C is definable with k variables, then the isomorphism problem for C is solvable in polynomial time by the (k-1)-dimensional Weisfeiler-Lehman algorithm. Moreover, definability with finitely many variables and logarithmic quantifier depth implies that isomorphism can be tested even in parallel logarithmic time. The existential-positive fragment of k-variable logic corresponds to the algorithmic techniques for homomorphism testing (i.e., constraint satisfaction) known as k-Consistency Checking. Examining the expressibility of this logic, we are able to estimate the time efficiency of this approach to constraint satisfaction.
Abstract: Degenerate perturbation theory is a "textbook tool" of quantum mechanics, famously used to derive the spectra of atoms in the presence of an external electric field (i.e., the Stark effect). In this talk, we show that it can also be used to analyze quantum computing algorithms, specifically quantum search on graphs. Using it, we show two intuitions to be false: neither global symmetry nor high connectivity is necessary for fast quantum search.
Slides of the talk: [ pdf ]
Abstract:
Low-density parity-check (LDPC) codes are widely used in
communications due to their excellent practical performance. The error
probability of an LDPC code under iterative decoding on the binary
erasure channel is determined by a class of combinatorial objects
called stopping sets. Stopping sets of small size are the reason for
decoder failures. The stopping redundancy is defined as the minimum
number of rows in a parity-check matrix of the code such that the
matrix contains no small stopping sets.
Han, Siegel and Vardy derive upper bounds on the stopping redundancy
of general binary linear codes by using probabilistic analysis. For
many families of codes, these bounds are the best currently known. We
improve on the results of Han, Siegel and Vardy by modifying their
analysis. Our approach is different in that we judiciously select the
first and the second rows in the parity-check matrix, and then proceed
with the probabilistic analysis. Numerical experiments confirm that
our bounds are superior to those of Han, Siegel and Vardy for two
codes: the extended Golay code and the quadratic residue code of
length 48.
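To make the central object concrete, here is a hypothetical brute-force
sketch using the standard definition of a stopping set (a set of variable
nodes such that no check node is connected to the set exactly once); it
illustrates the definitions only, not the probabilistic analysis from the
talk:

```python
# Brute-force enumeration of small stopping sets of a parity-check matrix,
# using the standard definition: a set S of columns is a stopping set if
# no row of H has exactly one 1 inside S. Illustration only; the talk's
# results concern probabilistic bounds, not enumeration.
from itertools import combinations

def is_stopping_set(H, S):
    return all(sum(row[j] for j in S) != 1 for row in H)

def small_stopping_sets(H, max_size):
    n = len(H[0])
    return [S for size in range(1, max_size + 1)
              for S in combinations(range(n), size)
              if is_stopping_set(H, S)]

# Parity-check matrix of the [7,4] Hamming code.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

print(small_stopping_sets(H, max_size=3))
```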