Symposium Univ Bordeaux

Algebra, Codes and Networks / June 16 - 20, 2014


Invited talks

Andrew Barron

Optimal Rate Communication by Fast Sparse Superposition Codes.   Slides

We present properties of our recently developed sparse superposition codes for the additive white Gaussian noise channel.  With a fast adaptive successive decoder, these codes achieve nearly exponentially small error probability at any fixed rate R less than the Shannon capacity.  This is joint work with Antony Joseph and Sanghee Cho.
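
As background, a sparse superposition codeword has the form x = Aβ, where A is an n × LM Gaussian design (dictionary) matrix and β has exactly one nonzero entry in each of L sections of size M, so the message is the choice of those L column indices. The following is only a minimal encoder sketch in Python (numpy assumed, flat power allocation, toy parameters of my own choosing), not the variable-power-allocated codes and adaptive successive decoder of the talk:

    import numpy as np

    def sparse_superposition_encode(indices, A, L, M, P):
        """Map a message (one column index per section) to a codeword x = A @ beta.

        beta has one nonzero entry, of value sqrt(P/L), in each of the L sections
        of size M, so E[x_i^2] is approximately P when A has i.i.d. N(0, 1) entries.
        (Flat power allocation only; illustrative.)
        """
        beta = np.zeros(L * M)
        for section, j in enumerate(indices):
            beta[section * M + j] = np.sqrt(P / L)
        return A @ beta

    # Hypothetical toy parameters.
    n, L, M, P = 64, 8, 16, 1.0           # rate = L*log2(M)/n = 0.5 bits per channel use
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, L * M))   # Gaussian dictionary matrix
    msg = rng.integers(0, M, size=L)      # one index per section
    x = sparse_superposition_encode(msg, A, L, M, P)
    y = x + rng.standard_normal(n)        # AWGN channel with unit noise variance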

Ronald Cramer

Optimal Algebraic Manipulation Detection Codes

Algebraic manipulation detection (AMD) codes, introduced at EUROCRYPT 2008, may, in some sense, be viewed as keyless combinatorial authentication codes that provide security in the presence of an oblivious, algebraic attacker. Their original applications included robust fuzzy extractors, secure message transmission and robust secret sharing. In recent years, however, a rather diverse array of additional applications in cryptography has emerged. In this work we consider, for the first time, the natural regime of arbitrary positive constant error probability ε in combination with messages of unbounded binary length l. Adapting a known bound to this regime, it follows that the binary length ρ of the tag satisfies ρ ≥ log l + Ω_ε(1). We shall call AMD codes meeting this lower bound optimal. Owing to our refinement of the mathematical perspective on AMD codes, which focuses on symmetries of codes, we propose novel constructive principles. Our main result is an efficient randomized construction of optimal AMD codes based on a careful adaptation of certain asymptotically good quasi-cyclic codes.

Joint work with Carles Padró and Chaoping Xing.
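
For readers unfamiliar with AMD codes, the classic (non-optimal) EUROCRYPT 2008 construction encodes a message s ∈ F_q^d as (s, x, f(s, x)) with f(s, x) = x^(d+2) + Σ_{i=1..d} s_i x^i for a random x ∈ F_q; an oblivious additive manipulation goes undetected only if a nonzero low-degree polynomial in x vanishes, which happens with probability at most (d+1)/q. A toy sketch in Python over a prime field (illustrative only; not the optimal quasi-cyclic construction of the talk):

    import random

    P = 1_000_003   # toy prime field size q; the scheme needs d + 2 not divisible
                    # by the field characteristic, which holds here
    D = 4           # message length d, in field elements

    def tag(s, x):
        """f(s, x) = x^(d+2) + sum_i s_i * x^i over F_q."""
        d = len(s)
        t = pow(x, d + 2, P)
        for i, si in enumerate(s, start=1):
            t = (t + si * pow(x, i, P)) % P
        return t

    def amd_encode(s):
        x = random.randrange(P)
        return (s, x, tag(s, x))

    def amd_verify(s, x, t):
        return tag(s, x) == t

    # Honest encodings verify; an additive (algebraic) manipulation is caught
    # except with probability at most (d + 1) / q over the choice of x.
    s = [random.randrange(P) for _ in range(D)]
    s_enc, x_enc, t_enc = amd_encode(s)
    assert amd_verify(s_enc, x_enc, t_enc)
    delta_s = [1] + [0] * (D - 1)                 # offset chosen obliviously of x
    tampered = [(a + b) % P for a, b in zip(s_enc, delta_s)]
    print(amd_verify(tampered, x_enc, t_enc))     # almost always False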

Alexandros Dimakis

Coding Theory for Large-Scale Storage.   Slides

Modern distributed storage systems are deploying coding techniques to provide high reliability with limited storage overhead. It was recently discovered that network coding techniques can improve the maintenance (rebuild) properties of such distributed coded systems compared to standard Reed-Solomon codes. We will cover the developing theory and practice of distributed storage codes. We will focus on recent developments and connections to locally decodable codes. Finally, we will discuss recent implementations running over Hadoop.
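
As a point of reference for the rebuild problem, repairing a single lost fragment of an (n, k) Reed-Solomon code requires contacting k surviving nodes and downloading k full fragments; regenerating and locally repairable codes aim to reduce this cost. A toy Python sketch of RS single-node repair by Lagrange interpolation over a prime field (hypothetical parameters, illustrative only):

    P = 257                      # toy prime field
    n, k = 8, 4                  # (n, k) RS code: n evaluations of a degree < k polynomial

    def poly_eval(coeffs, x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

    def lagrange_eval(points, x):
        """Evaluate the unique degree < len(points) interpolating polynomial at x over F_P."""
        total = 0
        for j, (xj, yj) in enumerate(points):
            num, den = 1, 1
            for m, (xm, _) in enumerate(points):
                if m != j:
                    num = num * (x - xm) % P
                    den = den * (xj - xm) % P
            total = (total + yj * num * pow(den, P - 2, P)) % P
        return total

    data = [10, 20, 30, 40]                                # k data symbols = coefficients
    fragments = {i: poly_eval(data, i) for i in range(1, n + 1)}

    lost = 3
    survivors = [(i, fragments[i]) for i in fragments if i != lost][:k]   # k fragments must be read
    assert lagrange_eval(survivors, lost) == fragments[lost]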

Michael Gastpar

Lattice codes for Gaussian Multiple-User Channel.   Slides

Lattice codes can be applied to many Gaussian multiple-user channel settings. In this talk, we survey previous results, including the Gaussian single-user channel capacity problem, the Compute-and-Forward approach, and the integer-forcing architecture for MIMO systems. Then, we present recent and emerging results. One of them concerns the standard Gaussian multiple-access channel, where there is already a wealth of classical strategies that attain parts or all of the capacity region. Lattice codes enable a novel strategy, leveraging an idea similar to Compute-and-Forward. This strategy has many desirable properties, including the fact that only single-stream decoding is needed and that a significant portion of the dominant face of the capacity region can be achieved in this fashion.
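
To give a flavour of Compute-and-Forward in its simplest form: if two users transmit points of the integer lattice and the channel adds them with integer gains plus noise, the receiver can decode an integer combination of the messages directly, without decoding either message individually. A one-dimensional toy sketch in Python (the actual scheme uses high-dimensional nested lattice codes, dithering, and an MMSE scaling step, all omitted here):

    import random

    p = 17                                   # messages live in Z_p
    def encode(m):                           # toy: map a message to a point of the lattice Z
        return m                             # no shaping, no dither

    m1, m2 = random.randrange(p), random.randrange(p)
    x1, x2 = encode(m1), encode(m2)
    noise = random.gauss(0, 0.1)             # AWGN, small variance for the toy example
    y = 1 * x1 + 1 * x2 + noise              # integer channel gains a1 = a2 = 1

    # The receiver quantizes to the nearest lattice point and reduces mod p,
    # recovering the combination a1*m1 + a2*m2 mod p -- not m1 or m2 themselves.
    decoded_combination = round(y) % p
    print(decoded_combination == (m1 + m2) % p)   # True unless |noise| > 1/2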

Swastik Kopparty

Multiplicity Codes.   Slides

Multiplicity codes are algebraic error-correcting codes generalizing classical polynomial evaluation codes, and are based on evaluating polynomials and their derivatives. These codes have improved rate and admit better local decoding, list-decoding and local list-decoding algorithms than their classical counterparts. I will give an overview of these codes and their decoding algorithms.
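
To illustrate the basic idea, a univariate multiplicity code of order 2 evaluates a polynomial together with its first (Hasse) derivative at every field point, so each codeword position carries two field symbols and the degree can be pushed up to roughly twice the number of evaluation points, improving the rate over plain evaluation codes. A small Python sketch over a prime field (univariate, order-2 case only; toy parameters):

    P = 13                                  # toy prime field F_P

    def poly_eval(coeffs, x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

    def poly_deriv(coeffs):
        """Formal (= first Hasse) derivative of the polynomial over F_P."""
        return [(i * c) % P for i, c in enumerate(coeffs)][1:]

    def mult_encode(coeffs):
        """Order-2 multiplicity encoding: at each a in F_P, store (f(a), f'(a))."""
        d = poly_deriv(coeffs)
        return [(poly_eval(coeffs, a), poly_eval(d, a)) for a in range(P)]

    # A degree-15 message polynomial: 16 coefficients, more than the 13 field points,
    # yet the 13 positions carry 26 field symbols in total, giving rate 16/26.
    message = list(range(1, 17))
    codeword = mult_encode(message)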

Damien Stehlé

Secure lattice codes for the Gaussian wiretap channel.   Slides

We propose a new scheme of wiretap lattice coding that achieves semantic security and strong secrecy over the Gaussian wiretap channel. It relies on discrete Gaussian distributions over lattices, and achieves the secrecy capacity to within half a nat under mild conditions. No a priori distribution of the message is assumed, and no dither is used in our proposed schemes. The key tools are the lattice smoothing parameter and the flatness factor: the flatness factor characterizes the convergence of the conditional output distributions corresponding to different messages and leads to an upper bound on the information leakage. We introduce the notion of secrecy-good lattices, and propose the flatness factor as a design criterion of such lattices. Joint work with Jean-Claude Belfiore and Cong Ling.
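
The flatness factor mentioned above measures how far the lattice Gaussian (a Gaussian density summed over all translates by lattice points) is from the uniform density over a fundamental region; when it is small, different messages induce nearly indistinguishable output distributions. A numerical toy in Python for the one-dimensional lattice cZ (parameters c and sigma are chosen only for illustration):

    import math

    def flatness_factor(c, sigma, terms=200, samples=1000):
        """Maximum deviation of the lattice Gaussian of cZ from the uniform
        density 1/c over one fundamental period [0, c)."""
        def lattice_gaussian(x):
            return sum(
                math.exp(-((x - k * c) ** 2) / (2 * sigma ** 2))
                for k in range(-terms, terms + 1)
            ) / (math.sqrt(2 * math.pi) * sigma)
        uniform = 1.0 / c
        return max(
            abs(lattice_gaussian(i * c / samples) / uniform - 1)
            for i in range(samples)
        )

    # The factor decays rapidly once sigma exceeds the smoothing parameter of the lattice.
    for sigma in (0.2, 0.4, 0.8):
        print(sigma, flatness_factor(c=1.0, sigma=sigma))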

Vinod Vaikuntanathan

Lattices, cryptography, and how to compute blindfolded.   Slides

Lattices have revolutionized cryptography in recent years, helping us construct cryptographic primitives that one could only dream of before. I will survey these exciting developments, focusing on methods for computing on encrypted data, such as fully homomorphic encryption and functional encryption.
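
As a hint of how computing on encrypted data works, here is a toy symmetric LWE-style encryption of single bits that is additively homomorphic: adding two ciphertexts componentwise yields an encryption of the XOR of the plaintexts, as long as the accumulated noise stays small. This sketches only the homomorphism idea with hypothetical, insecure parameters; it is not the fully homomorphic or functional encryption schemes surveyed in the talk.

    import random

    Q, N, NOISE = 2 ** 16, 32, 4          # modulus, dimension, noise bound (toy, insecure)

    def keygen():
        return [random.randrange(Q) for _ in range(N)]

    def encrypt(sk, bit):
        a = [random.randrange(Q) for _ in range(N)]
        e = random.randrange(-NOISE, NOISE + 1)
        b = (sum(ai * si for ai, si in zip(a, sk)) + e + bit * (Q // 2)) % Q
        return (a, b)

    def decrypt(sk, ct):
        a, b = ct
        centered = (b - sum(ai * si for ai, si in zip(a, sk))) % Q
        return 1 if Q // 4 < centered < 3 * Q // 4 else 0

    def add(ct1, ct2):
        """Homomorphic addition: decrypts to the XOR of the two plaintext bits."""
        (a1, b1), (a2, b2) = ct1, ct2
        return ([(x + y) % Q for x, y in zip(a1, a2)], (b1 + b2) % Q)

    sk = keygen()
    c0, c1 = encrypt(sk, 0), encrypt(sk, 1)
    print(decrypt(sk, add(c0, c1)))       # 1  (= 0 XOR 1), noise permitting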

Mary Wootters

List-decodability of randomly punctured codes.   Slides

List decoding of error correcting codes is a relaxation of the (standard) unique decoding problem, where the receiver is allowed to output a short list of possible messages, rather than a unique correct answer.  This relaxation allows for error correction even at very large error rates.  Which codes are optimally list-decodable?  The well-known Johnson bound implies that any code with good distance has good (but not necessarily optimal) list-decodability.  We provide a randomized result of this type: random puncturings (in fact, a general class of random operations) of codes with good distance are near-optimally list-decodable, with high probability.  As corollaries, we answer two long-standing open questions, showing that (1) random linear codes are (nearly) optimally list-decodable, and that (2) there exist Reed-Solomon codes which are list-decodable beyond the Johnson bound.  This is joint work with Atri Rudra.
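
For orientation, the Johnson bound invoked above can be stated as follows (a standard fact quoted only as background, in its large-alphabet form):

    any code of relative distance δ is (ρ, L)-list-decodable, with list size L polynomial in the block length, whenever ρ ≤ 1 − √(1 − δ).

For a Reed-Solomon code of rate R (relative distance δ = 1 − R) this guarantees list decoding only up to radius 1 − √R, whereas list-decoding capacity would allow radii approaching 1 − R; result (2) above shows that some Reed-Solomon codes can indeed be list-decoded beyond the 1 − √R radius.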

Sergey Yekhanin

Local erasure coding for data storage.   Slides

Historically, most large distributed storage systems (e.g., Hotmail) have been using replication to provide reliability against machine failures. Today, however, as the amount of stored data reaches multiple exabytes, keeping a few copies of data around is becoming prohibitively expensive. Therefore more and more systems are adopting erasure coding in place of replication. Local Reconstruction Codes (LRCs) are a new class of erasure-correcting codes designed specifically for applications in data storage. Built upon the rich mathematical theory of locally decodable codes developed in the theory community, LRCs provide a high level of reliability and allow data fragments to be reconstructed quickly in typical failure scenarios. LRCs have recently been deployed by Windows Azure Storage as well as a few other production systems. In this talk we will discuss the motivation behind local reconstruction codes and cover the main technical challenges and tradeoffs in the design of these codes. (Based on joint papers with Brad Calder, Michael Forbes, Parikshit Gopalan, Cheng Huang, Bob Jenkins, Jin Li, Aaron Ogus, Huseyin Simitci, and Yikang Xu.)
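
To make the locality idea concrete, here is a toy local-reconstruction layout in Python: six data fragments are split into two local groups of three, each with an XOR local parity, so a single lost data fragment is rebuilt from three reads instead of six. (Global parities, which real LRCs add to survive worst-case failure patterns, are omitted; this is only an illustration, not the code deployed in Windows Azure Storage.)

    import os
    from functools import reduce

    def xor(blocks):
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    FRAG = 16                                         # fragment size in bytes (toy)
    data = [os.urandom(FRAG) for _ in range(6)]       # six data fragments
    groups = [data[:3], data[3:]]                     # two local groups of three
    local_parity = [xor(g) for g in groups]           # one XOR parity per group

    # Rebuild a single lost fragment from its local group only: 3 reads, not 6.
    lost = 4                                          # index of the lost fragment
    group_id = lost // 3
    peers = [f for i, f in enumerate(data) if i // 3 == group_id and i != lost]
    rebuilt = xor(peers + [local_parity[group_id]])
    assert rebuilt == data[lost]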


