Monday, November 28, 2016

Interval arithmetic and practical lattice reduction

IALatRed in a Nutshell

Lattice reduction is fundamental in computational number theory and in computer science, especially in cryptography. The celebrated Lenstra–Lenstra–Lovász reduction algorithm (called LLL or L3) has been improved in many ways over the past decades and remains one of the central tools for reducing lattice bases. In particular, its floating-point variants — where the long-integer arithmetic required by Gram–Schmidt orthogonalization is replaced by floating-point arithmetic — are now the fastest known. Yet the running time of these floating-point versions is mostly determined by the precision needed to perform sound computations: theoretical lower bounds are large, whereas the precision actually needed on average is much lower. IALatRed is a proof-of-concept of adaptive precision versions of LLL, of one of its variants, Potential-LLL, and of the BKZ reduction. In these algorithms, floating-point arithmetic is replaced by Interval Arithmetic. The certification property of interval arithmetic enables runtime detection of precision defects in numerical computations and, accordingly, makes it possible to run the reduction algorithms with guaranteed, nearly optimal precision. As such, these adaptive reduction algorithms run faster than the state-of-the-art implementations, while still being provable.

A brief history of lattice reduction

Lattices are defined as discrete additive subgroups of $\mathbb{R}^n$, i.e. the integer span $L(\mathbf{b_1}, \ldots, \mathbf{b_d}) = \bigoplus_{i=1}^d \mathbb{Z} \mathbf{b_i}$ of a linearly independent family of vectors $\mathbf{b_1}, \ldots, \mathbf{b_d}$ in $\mathbb{R}^n$. Such a family is called a basis of the lattice, and it is not unique. Nevertheless, all bases of a given lattice have the same number of elements, $d$, which is called the *dimension* of the lattice. Among the infinitely many bases of a $d$-dimensional lattice with $d \geq 2$, some have interesting properties, such as having reasonably small vectors and low orthogonality defect. Finding such reduced bases is the goal of *lattice reduction* theory, which has been crucial in several fields of computer science and mathematics, especially in cryptology. For instance, lattices have been used over the last decades to break many public-key cryptosystems, including knapsack cryptosystems and, in specific settings, RSA thanks to Coppersmith's method.

The problem of finding good bases goes back to the early works of Lagrange and Gauss on two-dimensional lattices, and the introduction of what is now called the Gauss algorithm, even though it was first described by Lagrange. This procedure can be seen as a two-dimensional extension of the well-known Euclidean algorithm for computing the greatest common divisor of two integers. In 1850, Hermite published the first reduction algorithm for arbitrary dimension. A century later, in 1982, Lenstra, Lenstra and Lovász designed the LLL algorithm, with polynomial factorization as an application, after the celebrated work of Lenstra on integer programming. This algorithm is a milestone in the history of lattice reduction, being the first whose running time is polynomial in arbitrary dimension. It has since been improved in multiple ways, among others by Kaltofen, Schnorr, and Gama and Nguyen, decreasing the time complexity or improving the quality of the reduction.
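To make the two-dimensional case concrete, here is a minimal Python sketch of the Lagrange–Gauss reduction (an illustration written for this post, not the IALatRed code): it repeatedly subtracts the rounded projection of the longer basis vector onto the shorter one, in direct analogy with the Euclidean algorithm on integers.

```python
from fractions import Fraction

def dot(u, v):
    """Inner product of two integer 2-vectors."""
    return u[0] * v[0] + u[1] * v[1]

def lagrange_gauss(u, v):
    """Reduce a basis (u, v) of a planar lattice with integer coordinates.

    Loop invariant: (u, v) always generates the same lattice, since we only
    swap vectors and subtract integer multiples of one from the other.
    """
    while True:
        # Keep u as the shorter of the two vectors.
        if dot(u, u) > dot(v, v):
            u, v = v, u
        # Nearest integer to <u,v>/<u,u>, computed exactly with Fraction.
        m = round(Fraction(dot(u, v), dot(u, u)))
        if m == 0:
            return u, v  # v cannot be shortened further: basis is reduced
        v = (v[0] - m * u[0], v[1] - m * u[1])
```

For instance, `lagrange_gauss((1, 0), (4, 1))` returns the reduced basis `((1, 0), (0, 1))` of $\mathbb{Z}^2$.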

A bird's eye view of Interval Arithmetic

Interval arithmetic represents each real number by an interval containing it, whose endpoints are floating-point numbers. Arithmetic operations, in particular the basic operations $+$, $-$, $\times$, $\div$, can be redefined in this context. The main interest of this representation lies in its certification property: if real numbers are represented by intervals, the interval resulting from the evaluation of an algebraic expression contains the exact value of the evaluated expression.
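A toy Python sketch of this idea (not a production interval library) can be built from `math.nextafter` (Python 3.9+), which rounds each endpoint outward by one ulp so that the exact real result is always enclosed:

```python
import math

class Interval:
    """Toy interval: float endpoints, operations rounded outward."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Round the lower endpoint down and the upper endpoint up,
        # so the exact real sum is guaranteed to lie inside.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        # The extrema of a product lie among the four endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(p), -math.inf),
                        math.nextafter(max(p), math.inf))

    def contains(self, x):
        return self.lo <= x <= self.hi

    def width(self):
        return self.hi - self.lo
```

Evaluating an expression with such intervals yields a certified enclosure of its exact value, and the width of the resulting interval directly measures how much precision was lost along the way.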

While the birth of lattice reduction is well dated, that of Interval Arithmetic is not. For some authors, it was introduced by R. Moore in 1962 in his PhD thesis. For others, it can be dated back to 1958 and an article of T. Sunaga describing an algebraic interpretation of the lattice of real intervals, or even earlier, to 1931 and a proposal in the PhD thesis of R.C. Young at Cambridge. Nonetheless, its development and industrial applications had to wait until 1980 and the impetus of U. Kulisch in Karlsruhe, which led IBM to develop a specific instruction set and a compiler natively integrating Interval Arithmetic. Its main asset — computing directly on sets — is nowadays used to deterministically determine the global extrema of a continuous function, or to determine the zeroes of a function and prove their existence. Another application of Interval Arithmetic is the ability to detect a lack of precision at run time in numerical algorithms, thanks to the guarantees it provides on computations.

Mixing the two of them: IALatRed

This last application leads to the design of adaptive precision numerical algorithms. In particular, we transformed the celebrated LLL algorithm into an adaptive precision version, obtaining a provable reduction algorithm which in some cases even runs faster than the state-of-the-art implementation FpLLL of D. Stehlé et al. We also applied this technique to a stronger variant of LLL, called Potential-LLL, first introduced by Fontein et al., with similar results, as well as to the stronger BKZ algorithm.
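The adaptive precision mechanism can be sketched as follows, with Python's standard `decimal` module standing in for multiprecision interval arithmetic (this is an illustration of the pattern, not the IALatRed implementation): evaluate the quantity twice with outward roundings at some working precision; if the resulting enclosure straddles zero, a precision defect is detected and the computation is retried with doubled precision. The sketch assumes the expression is built only from context operations, each feeding monotonically into the result, so that the two roundings really bound the exact value.

```python
from decimal import Decimal, Context, ROUND_FLOOR, ROUND_CEILING

def sign_certified(f, start_prec=8, max_prec=2048):
    """Certify the sign of the exact value computed by f.

    f(ctx) must evaluate the quantity using the operations of the given
    decimal context ctx, so that ROUND_FLOOR / ROUND_CEILING produce a
    lower / upper bound on the exact result.
    """
    prec = start_prec
    while prec <= max_prec:
        lo = f(Context(prec=prec, rounding=ROUND_FLOOR))    # lower bound
        hi = f(Context(prec=prec, rounding=ROUND_CEILING))  # upper bound
        if lo > 0:
            return 1
        if hi < 0:
            return -1
        prec *= 2  # enclosure straddles 0: precision defect, so retry
    raise ValueError("could not certify the sign within max_prec")

# Toy use: certify that sqrt(2) exceeds the approximation 1.41421356237,
# a decision that requires roughly 12 significant digits.
s = sign_certified(lambda ctx: ctx.subtract(ctx.sqrt(Decimal(2)),
                                            Decimal("1.41421356237")))
```

In the reduction algorithms, the same loop wraps the sign and size decisions of the Gram–Schmidt computations, so the precision grows only when and where the certification actually fails.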


