Reversible Computing FAQ
(under construction)
Basic questions:
- What is reversible computing?
- Can I make any deterministic computation reversible by just keeping a copy of the initial state and all inputs?
- Who invented reversible computing?
- Can it really compute using zero energy?
- Can it really make the energy dissipation of a computation arbitrarily small?
- Can it be practically implemented?
- Doesn't it increase computational complexity?
- Is physics reversible?
Adiabatic circuits:
- Who invented adiabatic circuits?
- Is it possible to pipeline reversible circuits?
- What about frictional effects, such as electrical resistance?
- What about leakage effects, such as subthreshold conduction or tunneling?
- What about errors and error correction?
- How well do adiabatic circuits scale?
- Which works better, retractile cascades or reversible pipelines?
Answers to some technical objections:
- What about von Neumann '57?
- What about Landauer '61?
- What about Mead & Conway '80?
- What about Wolpert & Noyes '92?
- What about Nardiello '93?
- What about Shizume '95?
- What about Matsueda/Goto/Loe '96?
- What about Smith '99?
What the Heck is Reversible Computing anyway?
Reversible computing, in a general sense, means computing using
reversible operations, that is, operations that can be easily and exactly
reversed, or undone. In technical terms, a reversible operation
performs a bijective transformation of its local configuration space.
When this kind of reversibility is maintained at the lowest level, in
the physical mechanisms of operation of our bit-devices (such as transistors),
it avoids dissipating the energy that is associated with the bits of information
that are being manipulated. This can help to reduce the overall energy
dissipation of computations, which can in turn increase battery life or
processing speed in heat-limited systems.
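The bijectivity requirement can be made concrete with a toy sketch (my own illustration, not part of the FAQ): a reversible gate such as controlled-NOT maps distinct inputs to distinct outputs and can be exactly undone, while an ordinary AND gate merges inputs and so destroys information.

```python
# A reversible operation is a bijection on its configuration space:
# distinct inputs always map to distinct outputs, so it can be undone.
# Illustrative comparison of a reversible CNOT with an irreversible AND.

def cnot(a, b):
    """Controlled-NOT: flips b iff a is 1. Reversible (its own inverse)."""
    return (a, a ^ b)

def and_gate(a, b):
    """Ordinary AND: several inputs map to output 0; it cannot be undone."""
    return a & b

inputs = [(a, b) for a in (0, 1) for b in (0, 1)]

cnot_outputs = [cnot(a, b) for a, b in inputs]
assert len(set(cnot_outputs)) == len(inputs)   # bijective: no collisions

# Applying CNOT twice restores the original input: it is self-inverse.
assert all(cnot(*cnot(a, b)) == (a, b) for a, b in inputs)

and_outputs = [and_gate(a, b) for a, b in inputs]
assert len(set(and_outputs)) < len(inputs)     # information is destroyed
```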
By about 2050 (perhaps sooner), nearly all computing will be
heat-limited, as bit energies approach the absolute thermodynamic lower
bound of kT ln 2, where only one bit's worth of physical information
(physical entropy) is used to encode each logical bit. Reversible
computing is a necessity in order to further increase a system's
rate of computation per unit power consumed beyond the point where that
limit has been reached.
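The kT ln 2 figure can be made concrete: at room temperature (about 300 K, my choice of example temperature) it comes to roughly 3 × 10^-21 joules per bit erased. A quick check using the standard value of Boltzmann's constant:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0            # room temperature, kelvin

# Landauer's bound: minimum dissipation per bit of information erased.
E_min = k_B * T * math.log(2)
print(f"kT ln 2 at {T:.0f} K = {E_min:.3e} J per bit")
```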
Furthermore, when reversibility is maintained at the highest
levels, in one's computer architectures, programming languages, and algorithms,
it provides opportunities for interesting applications such as bi-directional
debuggers, rollback mechanisms for speculative executions in parallel and
distributed systems, and error and intrusion detection techniques.
Finally, the two types of reversibility (low-level and high-level) are
deeply connected, because, as it turns out, achieving the maximum possible
computational performance for a given rate of bit dissipation generally
requires explicit reversibility not only at the lowest level, but at all
levels of computing--in devices, circuits, architectures, languages, and
algorithms (a strongly conjectured, but not yet formally proven, result; call
it Frank's Law).
Why can't I make any deterministic computation reversible
by just keeping a copy of the initial state and all inputs?
Although it's true that keeping all inputs makes the sequence of global
state transitions a bijective function, this is not what we mean by reversible
computing. Remember, a reversible computation is one in which individual
operations are reversible in the sense that they can be easily undone--i.e.,
their inverses can be easily computed. Undoing operations is necessary
in order to uncompute previous states of the computation, and recover their
energy. Keeping a history of all inputs does not make it easy
to undo an operation - in fact, in that scenario, undoing an operation
would generally require re-running the entire computation from the beginning.
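The contrast can be sketched in code (a toy construction of my own, not from the FAQ): each step of the computation records just enough local "garbage" to invert that one step, so that after copying the output, the whole computation can be cheaply uncomputed step by step, leaving no history behind.

```python
# Toy embedding of an irreversible step into a reversible one, plus
# uncomputation of the resulting undo trail (illustrative only).

def step_forward(x):
    """One irreversible step: x -> x // 2. Also returns the one bit of
    'garbage' (x % 2) needed to invert just this step locally."""
    return x // 2, x % 2

def step_backward(y, garbage):
    """Exact local inverse of step_forward."""
    return 2 * y + garbage

def run_and_uncompute(x0, n_steps):
    # Forward pass: compute, keeping a per-step undo trail.
    trail = []
    x = x0
    for _ in range(n_steps):
        x, g = step_forward(x)
        trail.append(g)
    output = x            # copy the output...
    # Reverse pass: ...then undo each step, consuming the trail.
    for g in reversed(trail):
        x = step_backward(x, g)
    assert x == x0        # initial state recovered; no history remains
    return output

print(run_and_uncompute(45, 3))   # 45 -> 22 -> 11 -> 5
```

Keeping only the initial input, by contrast, offers no such per-step inverse: recovering any intermediate state means replaying the whole forward computation.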
Who invented reversible computing?
The credit should be shared. [Landauer '61] was the first to describe
what we call the Landauer embedding, which is the naive technique
for transforming irreversible computations into equivalent reversible ones,
but he thought his machines could not reversibly get rid of their undo
trails. [Lecerf '63] first described reversible Turing machines in formal
detail, and invented the Lecerf reversal technique to uncompute histories,
but he was unaware of the thermodynamic applications, and his machines
did not save their outputs, so were not very useful. [Bennett '73]
reinvented Lecerf reversal and added the Bennett trick of copying
the output before uncomputing the undo trail, thereby proving for the first
time that reversible computations could avoid entropy generation.
Fredkin [Fredkin & Toffoli '78] reinvented reversible computing in
the form of conservative logic circuits, and proved they were universal.
Toffoli [Toffoli '80] invented the Toffoli gate (also called the
controlled-controlled-not
gate), which is perhaps the most convenient universal reversible logic
primitive. All these pioneering developments together incrementally
set the stage for the field of reversible computing; no one person was
unilaterally responsible.
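For concreteness, the Toffoli gate maps (a, b, c) to (a, b, c XOR (a AND b)): it is its own inverse, and with the third input fixed to 0 it computes AND without erasing its inputs. A short sketch (my own illustration):

```python
def toffoli(a, b, c):
    """Controlled-controlled-NOT: flips c iff both a and b are 1."""
    return (a, b, c ^ (a & b))

bits = (0, 1)
triples = [(a, b, c) for a in bits for b in bits for c in bits]

# Self-inverse: applying the gate twice is the identity.
assert all(toffoli(*toffoli(a, b, c)) == (a, b, c) for a, b, c in triples)

# With c = 0, the third output is a AND b, computed reversibly.
assert all(toffoli(a, b, 0)[2] == (a & b) for a in bits for b in bits)
```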
Can reversible computing really dissipate absolutely zero energy?
Of course not. Any non-equilibrium physical system (whether
a computer or a rock) dissipates energy at some rate, however small,
because of the always non-zero probability of interaction between parts
of the system (or between a part of the system and a part of its environment)
that are at different temperatures, or in different physical states.
Even the very atoms in a computer (or a rock) are very slowly dissipating
away into its surrounding environment. Furthermore, no real
physical system is ever in absolutely perfect equilibrium, since the only
true equilibrium state would be if the entire universe were uniformly filled
with a gas of elementary particles that was neither expanding nor contracting,
but such a state is unstable in general relativity, even with a nonzero
cosmological constant. An equilibrium scenario is not a possible
future state of our universe.
Okay, then can reversible computing really make the energy dissipation of a computation an arbitrarily small non-zero amount?
Only insofar as the computer can be arbitrarily well isolated from unwanted
interactions, errors, and energy leakage (which are all really different
ways of describing the same thing). Even a superconducting reversible
computer with the best possible thermal insulation still faces the possibility
of being struck by a high-energy cosmic ray - or an uncharted asteroid
(an asteroid can be thought of as an extremely high-energy cosmic
ray). Even if you bury the computer at the center of a cold planet,
a black hole could always swing through the solar system and swallow it
up. Objects like rogue black holes passing through can be viewed
as merely a very extreme variety of thermal noise, one which illustrates
the impossibility of ever getting rid of unwanted interactions completely.
Even with total control over the state of the universe, there will always be
a small residual rate of quantum tunneling out of any non-equilibrium state
space. However, it may be that, as the universe grows and
cools, we will be able to compute with ever-smaller lower bounds on the
rate of energy dissipation per operation, and thus perform infinitely many
computations with only finite energy, a scenario explored by [Dyson '79],
and objected to by [Krauss & Starkman '99]. [Dyson '01] thinks
the argument may hinge on whether a true analog basis for computing is
possible. However, even if Dyson is correct in the long run, if one
chooses a particular era (the next billion years, say), it is not clear
whether we could ever control enough of the environment to achieve any
desired nonzero rate of dissipation, however small. But, despite
all these caveats, it may yet be possible to set up reversible computations
that dissipate such amazingly tiny amounts of energy that the dissipation
is not a barrier to anything that we might wish to do with them - I call
such computations ballistic. We are a long way from achieving
ballistic computation, but we do not yet know of any fundamental
reasons that forbid it from ever being technically possible.
Can it be practically implemented?
Does it increase time complexity?
Does it increase space complexity?
Does it increase overall computational complexity?
Adiabatic circuits:
What does "adiabatic" mean?
Who invented adiabatic circuits?
Is it possible to pipeline reversible circuits?
What about frictional effects, such as electrical resistance?
What about leakage effects, such as subthreshold conduction or tunneling?
What about errors and error correction?
How well do adiabatic circuits scale?
Which is more efficient, retractile cascades or reversible pipelines?
Answers to some technical objections:
What about von Neumann '57?
What about Landauer '61?
What about Mead & Conway '80?
What about Wolpert & Noyes '92?
What about Nardiello '93?
What about Shizume '95?
What about Matsueda/Goto/Loe '96?
What about Smith '99?