Wednesday, February 3, 2010

Barnes-Wall Lattices - I

Hello again Everybody,

Today we discuss this paper by Daniele Micciancio and Antonio Nicolosi.

Grab your own copy, though; it will be helpful. Assuming you have a copy at hand, you can follow along with this post. I will be discussing the authors' work in relaxed chunks, explaining things in a little more detail than the paper does, which I hope is helpful.

You might notice that this post is labeled post I. There will be follow-up posts as I understand more about the topic.

So, time to start. Here is what the Barnes-Wall lattice means (paraphrased from the paper). For any positive integer $n$, $BW^n$ is the $N = 2^n$ dimensional lattice over the Gaussian integers $G = \mathbb{Z}[i]$ generated by the rows of the $n$-fold Kronecker product

$BW^n = \begin{pmatrix}1&1\\0&\phi\end{pmatrix}^{\otimes n}$

Here, $\phi$ is the Gaussian integer $1 + i$ and $\otimes$ denotes the Kronecker product. Note that the paper writes this product with a different symbol.

Note that $\phi$ is the prime of least squared norm in $G$: $|\phi|^2 = 2$.

You can now go through the paper to the point where the authors note that $BW^{n+1} = \{[u,\ u+\phi \cdot v]:\ u, v \in BW^n\}$.
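A quick numerical sanity check may help here. The following sketch is my own illustration (not code from the paper): it builds the generator matrix of $BW^n$ as the $n$-fold Kronecker power of $\begin{pmatrix}1&1\\0&\phi\end{pmatrix}$ and then verifies that vectors of the form $[u,\ u+\phi \cdot v]$ with $u, v \in BW^n$ do land in $BW^{n+1}$.

```python
import numpy as np

PHI = 1 + 1j  # the Gaussian prime phi = 1 + i

def bw_generator(n):
    """N x N generator matrix of BW^n, with N = 2^n."""
    base = np.array([[1, 1], [0, PHI]], dtype=complex)
    G = np.array([[1]], dtype=complex)
    for _ in range(n):
        G = np.kron(G, base)  # n-fold Kronecker power of the base matrix
    return G

def in_lattice(z, G):
    """True iff z is a Gaussian-integer combination of the rows of G."""
    c = np.linalg.solve(G.T, z)  # solve z = c @ G for the coefficients c
    return np.allclose(c, np.round(c.real) + 1j * np.round(c.imag))

rng = np.random.default_rng(0)
Gn, Gn1 = bw_generator(2), bw_generator(3)  # dimensions 4 and 8
for _ in range(20):
    a = rng.integers(-3, 4, (2, 4)) + 1j * rng.integers(-3, 4, (2, 4))
    u, v = a[0] @ Gn, a[1] @ Gn  # two random points of BW^2
    assert in_lattice(np.concatenate([u, u + PHI * v]), Gn1)
```

The membership test works because the generator matrix is triangular with nonzero diagonal, so the coefficient vector is unique; a point is in the lattice exactly when those coefficients are Gaussian integers.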

Now you are invited to have a look at the algorithm presented in the paper for parallel Bounded Distance Decoding - that is - at the algorithm ParBW.

Next, I will prove that the algorithm works correctly. It is here that the relaxed exposition of this post should help. To simplify the situation, we first make a few simple observations, following the authors.

But before that, we restate the problem the authors are going to address: given a vector $s \in C^n$ that is within distance $d_{min}/2$ of some lattice point, find the lattice point $z$ closest to it.

(1) Any vector $z \in BW^n$ can be written as $[z_0, z_1]$ where $z_0$ and $z_1$ belong to $BW^{n-1}$.

(2) The converse of (1) is not true. Huh... you did not really think it was that easy. In fact, if $z_0$ and $z_1$ belong to $BW^{n-1}$, it is $[z_0,\ z_0 + \phi \cdot z_1]$ that belongs to $BW^n$.

(3) The squared $\ell_2$-norm of $[s_0, s_1]$ satisfies $||[s_0, s_1]||^2 = ||s_0||^2 + ||s_1||^2$. That's true no matter where you split the vector $s$. We split $s$ down the middle, so that $s_0$ and $s_1$ (and likewise $z_0$ and $z_1$ above) lie in $C^{n-1}$ (resp. $BW^{n-1}$) whenever $s$ lies in $C^n$ (resp. $z$ lies in $BW^n$).
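A tiny check of observation (3), again my own illustration: splitting a complex vector in half splits its squared norm additively.

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.normal(size=8) + 1j * rng.normal(size=8)  # a random vector in C^8
s0, s1 = s[:4], s[4:]                              # split down the middle
# ||s||^2 = ||s0||^2 + ||s1||^2
assert np.isclose(np.linalg.norm(s) ** 2,
                  np.linalg.norm(s0) ** 2 + np.linalg.norm(s1) ** 2)
```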

(4) There is an automorphism of $BW^n$, that is, a distance-preserving linear transform that maps points of $BW^n$ to other points of $BW^n$. The authors define $T[z_0, z_1] = \phi/2 \cdot [z_0 - z_1,\ z_0 + z_1]$.

A better way to visualize this, as Chris told me, is to write down the corresponding matrix and check that it is unimodular (its determinant is a unit in $G$). The transformation is captured by the following matrix, which acts on the column vector $[z_0, z_1]$ from the left and produces the column vector on the right side above.

\[\phi/2 \cdot \begin{pmatrix}1&-1\\1&\ 1\end{pmatrix} \] as you might check.

The determinant of this matrix is $2 \cdot \phi^2/4 = \phi^2/2 = i$, a unit in $G$. Indeed, it's unimodular.
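The claims about $T$ can be checked numerically. This is my own sketch: the $2 \times 2$ matrix $\phi/2 \cdot \begin{pmatrix}1&-1\\1&1\end{pmatrix}$ has determinant $i$ (a unit in $G$) and is unitary, so it preserves the norm of any vector it acts on.

```python
import numpy as np

PHI = 1 + 1j
T = (PHI / 2) * np.array([[1, -1], [1, 1]], dtype=complex)

assert np.isclose(np.linalg.det(T), 1j)        # determinant is i, a unit in G
assert np.allclose(T.conj().T @ T, np.eye(2))  # T is unitary, hence an isometry
```

Unitarity is exactly the distance-preservation claim: $||T w|| = ||w||$ for every $w = [z_0, z_1]$.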

Just for clarification's sake, we write $z_- = \phi/2 \cdot (z_0 - z_1)$ and $z_+ = \phi/2 \cdot (z_0 + z_1)$. Similarly we define $s_-$ and $s_+$.

(5) Now, this is a pretty important observation: given the image of a point $z$ under one of the transformations we list below, it is possible to recover the point.

So, consider the following transformation, which is not distance preserving: the one that maps $[z_0, z_1]$ to $[z_0,\ \phi/2 \cdot (z_0 + z_1)] = [z_0, z_+]$. It is captured by this matrix
\[P = \begin{pmatrix}1&0\\\phi/2&\phi/2\end{pmatrix} \]
you can come up with the corresponding inverse matrix, $P^{-1}$ which is really lovely.

It is precisely the matrix that does the transformation you see in line 9 of the function ParBW.

Looking ahead, a quick calculation reveals that
\[P^{-1} = \begin{pmatrix}1&0\\-1&2/\phi\end{pmatrix} \]
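We can verify numerically that $P$ and the claimed inverse really are inverses, and that $z_1$ can be recovered from $z_0$ and $z_+$ (my own sketch, not code from the paper):

```python
import numpy as np

PHI = 1 + 1j
P = np.array([[1, 0], [PHI / 2, PHI / 2]], dtype=complex)
P_inv = np.array([[1, 0], [-1, 2 / PHI]], dtype=complex)

assert np.allclose(P @ P_inv, np.eye(2))  # P_inv really inverts P
assert np.allclose(P_inv @ P, np.eye(2))

# Forward: [z0, z1] -> [z0, z_+]; backward: recover [z0, z1] exactly.
z0, z1 = 3 + 1j, -2 + 5j
z0_out, z_plus = P @ np.array([z0, z1])
assert np.allclose(P_inv @ np.array([z0_out, z_plus]), [z0, z1])
```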

So, if you apply $P$ to some lattice point in $BW^n$ and then follow it with $P^{-1}$, you recover the point. The same holds for all the other transformations.

With these remarks in place, the algorithm is straightforward to analyze (assuming that SeqBW works correctly, of course). We will talk about SeqBW in a later post.

Just follow the algorithm. Recurse when the authors say. In case you are following the paper (you definitely should), notice that there is a minor slip when the authors say "The algorithm recursively computes four $N/2$-dimensional vectors $z_*$ (for the symbol $* \in \{0, 1, +, -\}$) with the property that if $s_*$ is within squared distance $N/2$ from $BW^{n-1}$, then $z_* = \bar{z_*}$"

The statement should end in  "...squared distance $N/8$ from $BW^{n-1}$, then $z_* = \bar{z_*}$".

Anyway, now the idea is pretty clear. We calculated $s_-$ and $s_+$ as fail-safe guarantees (in a sense that will become clear soon enough).

The transformation taking $[s_0, s_1]$ to $[s_-, s_+]$ being distance preserving, the shortest distance of $s$ from a lattice point is the same as the shortest distance of $[s_-, s_+]$ from the transformed lattice point (which is again a lattice point in the original lattice!).

Thus, by observation (4) above, we are cruising smoothly. At least one among $s_0, s_1$, and at least one among $s_-, s_+$, is within squared distance $N/8$ of $BW^{n-1}$: the total squared distance of $s$ from the lattice is at most $N/4$, and by observation (3) it splits across the two halves.

By applying the inverse transforms we learned of in (5), we can recover four candidate close lattice points. The one closest to $s$ is the answer, since we assumed that $s$ was within the decoding radius of some lattice point.
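The combining step can be sketched as follows. This is my own reconstruction, not the paper's code; `combine` and its argument names are hypothetical, and the recursive calls producing the four decoded halves are assumed to have already happened.

```python
import numpy as np

PHI = 1 + 1j

def combine(s, z0, z1, z_minus, z_plus):
    """Pick the candidate lattice point closest to s = [s0, s1],
    given the four recursively decoded halves."""
    s = np.asarray(s, dtype=complex)
    candidates = [
        np.concatenate([z0, z1]),                     # both direct halves
        np.concatenate([z0, 2 / PHI * z_plus - z0]),  # z1 recovered from z0, z_+
        np.concatenate([2 / PHI * z_minus + z1, z1]), # z0 recovered from z1, z_-
        # both halves recovered from z_-, z_+ by inverting T:
        np.concatenate([(z_plus + z_minus) / PHI, (z_plus - z_minus) / PHI]),
    ]
    return min(candidates, key=lambda z: np.linalg.norm(s - z))
```

The recovery formulas just invert the definitions: since $z_\pm = \phi/2 \cdot (z_0 \pm z_1)$, we get $z_1 = (2/\phi) z_+ - z_0$, $z_0 = (2/\phi) z_- + z_1$, and $[z_0, z_1] = [(z_+ + z_-)/\phi,\ (z_+ - z_-)/\phi]$. At least one candidate is assembled entirely from correctly decoded halves, and that one wins the final minimization.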

This hopefully covers the function ParBW clearly. Allow me to catch my breath; I will blog about the remaining portions soon.

Thanks and Have a Good Day
-Akash
