Hyperdense coding
Is it possible to encode n (random) bits into fewer than n bits?
Certainly not: that can be proven easily (see below). But we have
shown that something striking can be done, in our paper in IEEE
Transactions on Information Theory, Volume 54, Issue 8,
Aug. 2008, pages 3687–3692.
What is this?
Superdense Coding was invented in a quantum-theoretical context (using
Einstein-Podolsky-Rosen entangled pairs) by Bennett and Wiesner in 1992
(Phys. Rev. Lett., vol. 69, pp. 2881–2884, 1992). They described how one
can encode n bits into n/2 quantum bits. In our paper we give a vastly
denser encoding, without using quantum bits or quantum theory at all.
In our paper we describe how to do the impossible, in a certain sense:
in a feasible-looking model of data storage and retrieval
(Probabilistic Memory Cells, PMCs), we give a recipe to

- encode n bits into n PMCs,
- encode these n PMCs into k PMCs, where k is much less than n,
- store or transmit these k PMCs,
- decode this small number of k PMCs into n PMCs,
- retrieve the n original bits from these n PMCs.

By observing (i.e., reading) a PMC, only a bounded number of bits can be
retrieved (say 3). That means that storing or transmitting the k PMCs
instead of the n original bits would require far fewer resources. We call
this "hyperdense coding".
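The bounded-read constraint above can be sketched in a few lines. This is only a toy stand-in, not the paper's actual (probabilistic) cell model: it assumes a cell holds a single residue mod 6, so that one observation reveals at most log2(6) < 3 bits.

```python
import math

MODULUS = 6  # illustrative choice; the paper works with residues mod 6


class PMC:
    """Toy stand-in for a Probabilistic Memory Cell (names hypothetical):
    the cell holds one residue mod 6, and a single observation reveals
    one such value, i.e. at most log2(6) < 3 bits of information.
    The paper's PMCs behave probabilistically; this sketch models only
    the bounded-information-per-read constraint."""

    def __init__(self, value):
        self._value = value % MODULUS

    def observe(self):
        return self._value


# One read yields about 2.585 bits, within the "say 3" bound in the text.
bits_per_observation = math.log2(MODULUS)
```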
An application of this is a matrix multiplication result: computing the
PMC representation of the matrix product AB (for n x n matrices A, B)
with only slightly more than n^2 multiplications, instead of the much
more than n^2 multiplications required by the best algorithms known
today.
Why is this result “impossible”?
It is impossible, in general, to encode n bits into k bits, where k is
less than n, and then decode these k bits into the original n bits,
since:

- There are 2^n possible length-n bit sequences, since every bit can
assume one of the two possible values, 0 or 1;
- There are 2^k possible length-k bit sequences;
- Not all of the 2^n length-n sequences can be recovered from the 2^k
encoded length-k sequences, since every length-k sequence can encode at
most one length-n sequence (the decoding procedure uses just the
length-k sequence as its input, and nothing else, so it cannot decode
more than one length-n sequence from one code of length k < n); thus we
get at most 2^k decoded sequences in total, which is less than 2^n.
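The counting argument can be verified exhaustively for tiny sizes. The sketch below (illustrative sizes n = 3, k = 2) enumerates every possible deterministic decoder and confirms that none can ever produce more than 2^k distinct length-n outputs:

```python
from itertools import product

n, k = 3, 2
codes = list(product((0, 1), repeat=k))    # all 2^k = 4 length-k codewords
strings = list(product((0, 1), repeat=n))  # all 2^n = 8 length-n bit strings

# A deterministic decoder is just a function from codewords to length-n
# strings. Enumerate every one of the 8^4 = 4096 possible decoders and
# count how many distinct length-n strings each can ever output.
max_reachable = max(len(set(decoder))
                    for decoder in product(strings, repeat=len(codes)))

# max_reachable equals 2^k = 4, which is less than 2^n = 8: no decoder,
# however clever, can recover all 8 possible inputs from 2-bit codes.
```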
How is it made possible?
The k PMCs in the middle hold the information content of the n original
PMCs; the information just cannot be extracted from them easily. The
main point, in a more mathematical setting, is as follows: first, we
encode the n bits into n linear functions; then these n linear functions
into k linear functions; and at the end, these k linear functions into n
linear functions, always using linear transformations.
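The shape of this three-stage pipeline can be written as matrix composition mod 6. A minimal sketch, with random matrices as placeholders (the paper uses carefully constructed transformations, and the sizes n = 4, k = 2 are illustrative only):

```python
import random

M = 6        # the composite modulus used in the paper
n, k = 4, 2  # toy sizes; the actual construction achieves k much smaller than n


def matmul(A, B):
    """Multiply two matrices with entries reduced mod M."""
    return [[sum(a * b for a, b in zip(row, col)) % M for col in zip(*B)]
            for row in A]


random.seed(1)
E1 = [[random.randrange(M) for _ in range(n)] for _ in range(n)]  # n bits -> n functions
E2 = [[random.randrange(M) for _ in range(n)] for _ in range(k)]  # n functions -> k
D = [[random.randrange(M) for _ in range(k)] for _ in range(n)]   # k functions -> n

x = [[1], [0], [1], [1]]     # the n input bits, as a column vector
stage1 = matmul(E1, x)       # values of n linear functions
stage2 = matmul(E2, stage1)  # compressed to k values
stage3 = matmul(D, stage2)   # expanded back to n values
```

Over a field, the composed map D·E2·E1 would have rank at most k, so it could never be injective on 2^n inputs; the point of the paper is that mod 6 this rank obstruction breaks down.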
One might say that all these linear functions are linearly independent,
so

- the first n have dimension n,
- the second k have dimension k, and
- the last n have dimension n again.

This is simply impossible, since linear transformations cannot increase
the number of dimensions of the linear functions (from k to n).
However, numbers modulo 6 do not form a field, so dimension and linear
independence cannot be defined in the classical way; that is what makes
our results possible.
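A folklore toy example (not the paper's construction) shows how dimension misbehaves modulo 6: inside Z_6 itself, a "1-dimensional" module, the elements 2 and 3 each lie outside the other's span.

```python
M = 6


def span(v):
    """All scalar multiples of v in Z_6."""
    return {(c * v) % M for c in range(M)}


# span(2) == {0, 2, 4} and span(3) == {0, 3}: neither element lies in the
# span of the other, so 2 and 3 behave like two "linearly independent"
# vectors inside the one-dimensional module Z_6 -- impossible over a
# field, where a 1-dimensional space holds at most one independent vector.
```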
Please note that this is just the flavor of the mathematical proof, and
in no way a proof itself; the paper, which appeared in the IEEE
Transactions on Information Theory, contains the proof (see the
references). In the paper we refer to numerical examples, available on
the web in the form of a Maple worksheet:
http://www.cs.elte.hu/~grolmusz/supporting.mws
Background:

- This is not a data compression result: we retrieve the exact original
bits at the end, and the hyperdense coding is independent of the
distribution of the input bits. This means that we do not "zip" the data
as some popular computer programs do; those programs can compress only
texts written in natural languages (not randomly chosen sequences), or
images containing large areas of the same color and shade.

- Our tools are simple linear transformations; however, we use linear
transformations NOT over some field (as is usual in mathematics and its
applications) but over the ring of residues of a small composite number
that is not a prime power, say 6.

- We do not use quantum theory at all; however, the term and the idea of
the observation of a PMC (in the article) clearly come from quantum
theory.
Publications:
Vince Grolmusz: Modular Representations of Polynomials: Hyperdense
Coding and Fast Matrix Multiplication. IEEE Transactions on Information
Theory, Volume 54, Issue 8, Aug. 2008, pages 3687–3692; Digital Object
Identifier 10.1109/TIT.2008.926346
Vince Grolmusz: On some applications of randomized memory. Proceedings
of GRACO2005, Angra dos Reis, Brazil, 27–29 April 2005. Electronic Notes
in Discrete Mathematics, Vol. 19, 2005, pp. 203–209
Vince Grolmusz: Defying Dimensions Modulo 6. Proceedings of
Computability in Europe (CiE) 2005: "New Computational Paradigms", held
in Amsterdam in June 2005, pages 78–81; also as ECCC Report TR03-058
Vince Grolmusz: Near Quadratic Matrix Multiplication Modulo Composites.
ECCC Report TR03-001
Patent:
U.S. Patent 7606847: Dense and randomized storage and coding of
information