If reading Nietzsche is a matter of finding his “sapphires
in the mud,” then reading Turing's Cathedral by George Dyson is like being handed a crystal bowl full of
sapphires finely cut and polished. Every
paragraph is a gem, but their individual beauty is obscured by the lack of
aesthetic structure in the collective whole.
A book must be something more than a collection of great paragraphs wrapped
in a clever cover.
That said, Turing’s
Cathedral does offer some intriguing insights into the history of
computing, the history of the atomic bomb, and the history of science in the
1930s and ‘40s, particularly the question of how so many preeminent European
scientists (Einstein and Gödel
among them) ended up at the newly founded Institute for Advanced Study in Princeton, New Jersey. As it turns out, all three histories converge
in one man, John von Neumann, who is often declared the father of the computer, but
could just as well be called co-creator of the atomic bomb and guardian of the
great lights that Nazi Germany wished to extinguish. Von Neumann was a gifted mathematician and
logician whose interest lay primarily in the problems of shockwave
propagation. His real genius, however,
lay in his ability to bring together even greater minds than his own, and apply
them to the task of building a logic machine powerful enough to win World War
II. For all of his genius though, not
even von Neumann could have forecast the effects his computing machine would
have on human society as its shockwave rolls out across the face of this
planet.
Perhaps his greatest move was to appropriate the mind of
Alan Turing, the brilliant young British mathematician who helped crack Nazi
Germany’s Enigma codes. Turing had also
published his seminal 1936 article, “On Computable Numbers,” describing a
conceptual logic machine that could take a set of encoded instructions and act
upon them, including the modification of its own instructions, which would then
change the behavior of the machine from that point on. This so-called Turing Machine would read and
write its instructions on an impractically long paper tape, but that did not bother von
Neumann in the slightest: there were other men with better ideas about
data storage. What made Turing the central
piece of von Neumann’s mental jigsaw puzzle was the simplicity of his idea, for
only something that simple was ever going to work as a practical computing machine.
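For the curious, here is a toy sketch, in modern Python, of the kind of machine Turing described: a tape, a read/write head, and a table of rules. The rule table is my own made-up example (it just appends a 1 to a block of 1s), and unlike Turing’s universal machine it keeps its instructions in a hard-coded table rather than encoded on the tape itself.

```python
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Step through the rule table until the machine reaches the halt state."""
    cells = dict(enumerate(tape))           # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")       # "_" stands for a blank cell
        write, move, state = rules[(state, symbol)]
        cells[head] = write                 # write a symbol, possibly rewriting the tape
        head += 1 if move == "R" else -1    # move the head one cell right or left
    return "".join(cells[i] for i in sorted(cells))

# A made-up rule table: scan right over a block of 1s, then append one more 1.
rules = {
    ("start", "1"): ("1", "R", "start"),    # keep moving right over the 1s
    ("start", "_"): ("1", "R", "halt"),     # at the first blank, write a 1 and stop
}

print(run_turing_machine(rules, "111"))     # prints "1111"
```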
But it’s not like all of these great minds woke up in
Princeton, New Jersey one day and decided to build a computer. Other institutions were working on the
problem, and a number of companies were building the components—vacuum tubes,
etc.—that von Neumann’s machine would utilize.
IBM was producing a number of devices that we now see as precursors to
the computer, and punched card data storage went back to Herman Hollerith’s
tabulating machines, built for the U.S. Census of 1890 (Hollerith’s company later became part of IBM). Before
that was Charles Babbage’s plan for a steam-driven difference engine (never
actually built) and Blaise Pascal’s mechanical device that we would now call a
calculator.
Any genesis of the concept
of a computing machine, however, is better begun with the ideas of Gottfried
Wilhelm Leibniz, who is best known as the man not credited with inventing the calculus. Leibniz actually published first, but Sir Isaac
Newton had worked out his version earlier; the priority dispute was settled in Newton’s favor, and that was that. Even so, modern calculus notation is based
largely on the simpler, cleaner form that Leibniz used, not Newton’s. Minimalism was a hallmark of Leibniz, and
that brings us to his ideas on a computing machine, which he actually thought
of in those terms. Though he did, in
fact, design a mechanical computing machine (one that used gravity-driven balls
dropping through channels, where we now use electrons flowing through
circuits), Leibniz’ most important contributions can be filed under the heading
“Less is More.” First, he was an early
proponent of the binary numbering system (using only zeroes and ones) in place
of our commonly used decimal system of zero through nine. Leibniz believed God works solely in binary,
so he thought we probably should too.
Second, he put forth a theory that fewer rules will always result in
greater diversity. We now see this
clearly demonstrated in the human genome, in which but four nucleotide bases are
combined according to a very small set of rules, yet they produce a vast array
of human beings, each a unique specimen within fuzzy classes of gender, race,
etc.
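Here is a quick sketch of both ideas in Python, with the particular numbers chosen arbitrarily for illustration:

```python
# Leibniz's first idea: any number can be written with nothing but zeroes and ones.
for n in (2, 7, 42):
    print(n, "in binary is", format(n, "b"))      # e.g. 42 -> 101010

# His second idea, in miniature: a tiny alphabet still multiplies into enormous
# diversity. Four letters (like the bases A, C, G, T) give 4**n strings of length n.
print(4 ** 10)                                    # 1,048,576 distinct ten-letter strings
```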
The final idea that led to the creation of a true computing
machine is the concept of recursion,
which, as far as I know, is not credited to any one individual, but has been
with us for some time. In computer
operations, recursion basically means taking the result from some mathematical
function and running it back through the same function; then taking that result
and running it through again, over and over and over. Some recursive functions produce trivial
results, that is, they eventually produce only zero ad infinitum. Others are unbounded, meaning the results
quickly shoot off into infinity, which is difficult for humans to work
with. Still others are non-trivial and
bounded—and those are the ones that we find useful.
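Here is a sketch of those three behaviors in Python, using three simple functions I picked myself (none of them come from the book): feed each output back in as the next input and watch what happens.

```python
def iterate(f, x, times=10):
    """Run x through f repeatedly, collecting each intermediate result."""
    results = []
    for _ in range(times):
        x = f(x)
        results.append(round(x, 4))
    return results

print(iterate(lambda x: x / 2, 1.0))              # trivial: decays toward zero
print(iterate(lambda x: x * x, 2.0, times=6))     # unbounded: shoots off toward infinity
print(iterate(lambda x: 3.7 * x * (1 - x), 0.5))  # bounded and non-trivial: stays between 0 and 1
```

The third one, a form of the logistic map, is the useful kind: it never settles to zero and never blows up, yet it never simply repeats itself either.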
It does not, however, require a machine to process recursive
functions. The word
“computor” was, in fact, originally spelled with an “o”; moreover, it was a job title, not
a machine. Even into the 1940s computors
were employed to do the grunt work of higher mathematics. They were typically graduate students or the
wives of university professors. They
were bright and meticulous, but the work was tedious and error-prone, and the
process of running a complex function through numerous iterations could be
exceedingly slow, even using the mechanical adding machines of the day. A problem as complex as a nuclear chain
reaction, even if you could employ hundreds of computors to work on it, would
take months or years. By then the war
could be over—you lose.
But with the development of a true computer, known as the ENIAC (for Electronic Numerical Integrator And Computer), von Neumann had a machine that could replace a whole pool of computors and process the data required to analyze nuclear reactions in days or weeks rather than months or years. In spite of the urban legend that says an atomic bomb can be built using off-the-shelf components and plans downloaded from the Internet, triggering and sustaining a nuclear reaction is actually quite difficult; that’s why only a handful of nations have, so far, managed to pull it off. Timing is everything, and the timing is in nanoseconds—lots and lots of them that have to be run through a recursive wave propagation function that is non-trivial and bounded until the fuel is spent. The first nation to build a computing machine capable of handling this analysis would be the first to produce an atomic bomb—we win.
Now I’d like to return to those punched cards used for
storing data, and interrupt with a personal story.
I am sure children everywhere have used playing cards to
build a structure, an actual house of cards.
It’s fun, but your structure is limited by the number of cards in a deck:
52, maybe 54 with jokers. My friend
Paul, however, had boxes of IBM punch cards that his dad had brought home from
work. Paul and I would sit on the garage
floor and build structures for hours on end, using these oversized cards with
all the window holes in them. We didn’t
build houses of cards; we built fortresses and skyscrapers! Our structures were limited not by the number
of cards in a deck, but by our ability to gingerly add each additional card to a
structure without toppling it. Then, if
we did manage to keep the structure erect until boredom finally set in, we
could do one of two things: we could selectively remove noncritical cards until
the structure eventually collapsed, or—even better—we could kick it, blowing the thing to
smithereens. Boys will be boys.
And really, von Neumann and crew were doing the same thing
with punched cards at Princeton. We now
see punched cards as quaint, representing a bygone era of computers that were
slow and cumbersome in comparison to today’s virtual machines existing
somewhere in the cloud—wherever that is.
In reality, punch cards were a worthy data storage device that served us
well for nearly a century. Punch cards
were cheap and plentiful. If handled and
stored carefully, they were quite stable.
When your program had run its course, you could put your batch of cards,
representing the results of your work, in a box and take them home with
you. You could touch them, look at them,
and see with your own eyes how the data had played out. Depending on the type of problem you were
working on, you could rearrange the cards by hand if you wanted to. Of course, you could also spill the box and
have a real mess—or a new idea.
Most importantly, you could come back the next day and run
your program again, this time using that same box of cards as the input to
today’s run. Maybe that batch of cards
represents a nuclear reaction at time t,
so now we’ll see what happens at t’. Maybe it represents the human population this
year, and now we’ll forecast next year’s population by running that data
through as the initial conditions for future population growth. Maybe our recursive function then shows that
the population will actually decline next year.
It all depends on how the propagation function handles the current
conditions fed into it.
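In modern dress, that workflow looks something like the sketch below: write today’s results to a file, then read them back as tomorrow’s initial conditions. The growth function and the file name are hypothetical, invented just to show the round trip.

```python
import os

def grow(population, rate=0.01, capacity=9_000_000_000):
    """One year of a made-up logistic propagation function."""
    return population + rate * population * (1 - population / capacity)

STATE_FILE = "population.txt"                 # today's "box of cards"

# Read yesterday's output if it exists; otherwise start from a fresh guess.
if os.path.exists(STATE_FILE):
    with open(STATE_FILE) as f:
        population = float(f.read())
else:
    population = 8_000_000_000.0

population = grow(population)                 # run this year's step

with open(STATE_FILE, "w") as f:              # box up the results for the next run
    f.write(str(population))

print(f"Population after this run: {population:,.0f}")
```

Run it again tomorrow and it picks up where it left off; and if the population ever overshoots the capacity, the very same function will show a decline the next year.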
In everyday life, we find ourselves surrounded by recursive
functions. Unthinking analysis of one’s
work life might suggest that it’s the same thing, day in and day out, but you
know it’s not: today’s problems arose out of yesterday’s solutions, and
tomorrow will depend entirely on what happens today. In education we talk of constructivism, but that’s simply a matter of building upon
students’ previous learning, an obviously recursive process. In fact, our very lives are recursive: for
better or worse, the person who woke up this morning is not the one who woke up
yesterday. Maybe we’re a bit wiser;
maybe we’re a bit sicker—maybe both, the illness having provided some insight. And karma—well, that’s just recursion on a
cosmic scale.
The sciences still have some propagation problems that we
struggle to analyze fast enough to be of any use. Weather is probably the best example. With its incredibly large set of continuously
changing, interdependent data points, weather has proved frustratingly
difficult to model in real time, and even our best models can forecast accurately only
a couple of days into the future. Beyond
a week or so out, you might as well use the Farmer’s
Almanac.
It’s getting better all the time though. Why?
Because yesterday’s computer showed us how to build a better one today. Turing and von Neumann and all of the great
minds in Turing's Cathedral gave us
mighty high shoulders to stand on, just as Leibniz and so many others had done
for them.