Hyperdense coding

Is it possible to encode n (random) bits into fewer than n bits?

Certainly not: that can be proven easily (see below). But in our paper in the IEEE Transactions on Information Theory (vol. 54, no. 8, Aug. 2008, pp. 3687-3692), we have shown that something striking can be done.

What is this?

Superdense coding was invented in a quantum-theoretical context (using Einstein-Podolsky-Rosen entangled pairs) by Bennett and Wiesner in 1992 (Phys. Rev. Lett., vol. 69, pp. 2881-2884, 1992).

They described how one can encode n bits into n/2 quantum bits.

In our paper we give an encoding that is denser by an unbounded factor, without using quantum bits or quantum theory at all.

In our paper we describe how to do the seemingly impossible in a certain sense: in a feasible-looking model of data storage and retrieval (Probabilistic Memory Cells, PMC’s), we give a recipe to
-         encode n bits into n PMC’s,
-         encode these n PMC’s into k PMC’s, where k is much less than n,
-         store or transmit these k PMC’s,
-         decode this small number of k PMC’s back into n PMC’s,
-         retrieve the n original bits from these n PMC’s.

By observing (reading) a PMC, only a bounded number of bits can be retrieved (say, 3).

This means that storing or transmitting the k PMC’s instead of the n original bits requires far fewer resources. We call this “hyperdense coding”.


An application of this is the matrix multiplication result: computing the PMC representation of the matrix product AB (for n x n matrices A, B) with only about n^2 multiplications, instead of the much more than n^2 multiplications used by the best algorithms known today.
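For scale, the schoolbook algorithm uses exactly n^3 scalar multiplications. The Python sketch below (a baseline illustration only, not the paper’s method) counts them explicitly:

```python
def matmul_count(A, B):
    """Schoolbook n x n matrix product; returns (C, number of scalar multiplications)."""
    n = len(A)
    muls = 0
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for t in range(n):
                C[i][j] += A[i][t] * B[t][j]
                muls += 1  # one scalar multiplication per innermost step
    return C, muls

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C, muls = matmul_count(A, B)
print(C, muls)  # n = 2 gives n^3 = 8 scalar multiplications
```

Even the asymptotically fastest known algorithms stay well above n^2 multiplications, which is what makes a near-n^2 count for the PMC representation remarkable.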

Why is this result “impossible”?


It is impossible, in general, to encode n bits into k bits, where k is less than n, and then decode these k bits back into the original n bits, since:

There are 2^n possible length-n bit sequences, since every bit can assume one of two possible values, 0 or 1;

There are 2^k possible length-k bit sequences;

Not all of the 2^n length-n sequences can be recovered from the 2^k length-k encodings: every length-k sequence can decode to at most one length-n sequence (the decoding procedure uses only the length-k sequence as its input, and nothing else, so it cannot produce more than one length-n sequence from one code of length k < n). Therefore we get at most 2^k decoded sequences in total, which is less than 2^n.
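This counting argument can be checked by brute force for small n and k. The Python sketch below uses an arbitrary, purely illustrative decoder; the conclusion holds for any fixed decoding function:

```python
from itertools import product

n, k = 3, 2  # try to recover n = 3 bits from k = 2 bits

# A hypothetical decoder: any fixed function from length-k
# bit strings to length-n bit strings would do here.
def decode(code):
    return (code * 2)[:n]  # e.g. repeat the code and truncate

codes = [''.join(bits) for bits in product('01', repeat=k)]
decoded = {decode(c) for c in codes}

# The decoder's image has at most 2^k elements ...
assert len(decoded) <= 2 ** k
# ... which is strictly fewer than the 2^n possible originals.
assert len(decoded) < 2 ** n
print(f"{len(decoded)} decodable sequences out of {2 ** n} possible")
```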

How is it made possible?


The k PMC’s in the middle hold the information content of the n original PMC’s; the information just cannot be gained easily from them.
The main point, in a more mathematical setting, is as follows:
First, we encode the n bits into n linear functions; then these n linear functions into k linear functions; and finally, these k linear functions into n linear functions, always using linear transformations.
One might say that all these linear functions are linearly independent, so
-         the first n have dimension n,
-         the second k have dimension k and
-         the last n have dimension n again.
This is simply impossible, since linear transformations cannot increase the dimension (from k to n).
However, the numbers modulo 6 do not form a field, so dimension and linear independence cannot be defined in the classical way; this is what makes our results possible.
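The failure of classical linear algebra modulo 6 shows up already in one dimension. The Python sketch below exhibits a nonzero “vector” annihilated by a nonzero scalar, so linear independence cannot be defined the way it is over a field:

```python
MOD = 6

# Over a field, c * v = 0 with v != 0 forces c = 0.
# Modulo 6 this fails: 2 and 3 are nonzero zero divisors.
c, v = 2, 3
assert c % MOD != 0 and v % MOD != 0
assert (c * v) % MOD == 0  # 2 * 3 = 6 = 0 (mod 6)

# Consequently even a single nonzero vector, e.g. (3, 0), is
# "dependent" in the classical sense: 2 * (3, 0) = (0, 0) (mod 6).
vec = (3, 0)
scaled = tuple((c * x) % MOD for x in vec)
print(scaled)  # (0, 0)
```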
Please note that this is just the flavor of the mathematical proof, and in no way a proof itself; the paper that appeared in the IEEE Transactions on Information Theory contains the proof (see the references).
In the paper we refer to numerical examples, available on the web, in the form of a Maple worksheet: http://www.cs.elte.hu/~grolmusz/supporting.mws




-         This is not a data compression result: we retrieve the exact original bits at the end, and the hyperdense coding is independent of the distribution of the input bits. This means that we do not “zip” the data as some popular computer programs do; those programs can compress only redundant data, such as texts written in natural languages (and not randomly chosen sequences), or images containing large areas of the same color and shade.
-         Our tools are simple linear transformations; however, we use linear transformations NOT over some field (as is usual in mathematics and its applications) but over the ring of residues modulo some small composite, non-prime-power number, say 6.
-         We do not use quantum theory at all; however, the term and idea of the observation of a PMC (in the article) clearly come from quantum theory.
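Even though the residues modulo 6 do not form a field, perfectly well-behaved linear transformations over them do exist. The Python sketch below (an illustrative matrix, not one from the paper) checks that a 2 x 2 matrix whose determinant is a unit mod 6 is invertible, and in this case is even its own inverse:

```python
MOD = 6
M = [[2, 3], [3, 2]]  # det = 2*2 - 3*3 = -5 = 1 (mod 6), a unit

def matmul_mod(A, B, mod=MOD):
    """Matrix product with entries reduced modulo `mod`."""
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) % mod
             for j in range(n)] for i in range(n)]

# The determinant is a unit mod 6, so M is invertible over Z_6.
assert (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % MOD == 1
square = matmul_mod(M, M)
print(square)  # M * M = identity (mod 6), so M is its own inverse
```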


Vince Grolmusz: Modular Representations of Polynomials: Hyperdense Coding and Fast Matrix Multiplication. IEEE Transactions on Information Theory, vol. 54, no. 8, Aug. 2008, pp. 3687-3692. DOI: 10.1109/TIT.2008.926346

Vince Grolmusz: On some applications of randomized memory. Proceedings of GRACO 2005, Angra dos Reis, Brazil, 27-29 April 2005. Electronic Notes in Discrete Mathematics, vol. 19, 2005, pp. 203-209

Vince Grolmusz: Defying Dimensions Modulo 6. Proceedings of Computability in Europe (CiE) 2005: "New Computational Paradigms", Amsterdam, June 2005, pp. 78-81; also as ECCC Report TR03-058

Vince Grolmusz: Near Quadratic Matrix Multiplication Modulo Composites. ECCC Report TR03-001


U.S. Patent 7,606,847: Dense and randomized storage and coding of information