Paul Allen: The Singularity Isn't Near

By Paul G. Allen and Mark Greaves

October 12, 2011
The Singularity Summit approaches this weekend in New York. But the Microsoft cofounder and a
colleague say the singularity itself is a long way off.


Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a
tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human
capabilities. They call this tipping point the singularity, because they believe it is impossible to
predict how the human future might unfold after this point. Once these machines exist, Kurzweil and
Vinge claim, they’ll possess a superhuman intelligence that is so incomprehensible to us that we cannot
even rationally guess how our life experiences would be altered. Vinge asks us to ponder the role of
humans in a world where machines are as much smarter than us as we are smarter than our pet dogs
and cats. Kurzweil, who is a bit more optimistic, envisions a future in which developments in medical
nanotechnology will allow us to download a copy of our individual brains into these superhuman
machines, leave our bodies behind, and, in a sense, live forever. It’s heady stuff.

While we suppose this kind of singularity might one day occur, we don’t think it is near. In fact, we
think it will be a very long time coming. Kurzweil disagrees, based on his extrapolations about the rate
of relevant scientific and technical progress. He reasons that the rate of progress toward the
singularity isn’t just a progression of steadily increasing capability, but is in fact exponentially
accelerating—what Kurzweil calls the “Law of Accelerating Returns.” He writes that:

So we won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years
of progress (at today’s rate). The “returns,” such as chip speed and cost-effectiveness, also increase
exponentially. There’s even exponential growth in the rate of exponential growth. Within a few
decades, machine intelligence will surpass human intelligence, leading to The Singularity … [1]

By working through a set of models and historical data, Kurzweil famously calculates that the
singularity will arrive around 2045.
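To make the flavor of this extrapolation concrete, here is a toy sketch of the arithmetic (our own illustration with an assumed doubling period, not Kurzweil's published calculation): if the rate of progress doubled every decade, a single century would contain on the order of ten thousand "years of progress at today's rate."

    # Toy sketch (assumed doubling model, not Kurzweil's actual calculation):
    # if the rate of progress doubles every decade, how many "years of progress
    # at today's rate" fit into one century?
    DOUBLING_PERIOD_YEARS = 10
    decades = 10
    total = sum((2 ** k) * DOUBLING_PERIOD_YEARS for k in range(decades))
    print(total)  # 10230 -- the same order of magnitude as Kurzweil's 20,000-year figure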

This prediction seems to us quite far-fetched. Of course, we are aware that the history of science
and technology is littered with people who confidently assert that some event can’t happen, only to be
later proven wrong—often in spectacular fashion. We acknowledge that it is possible but highly
unlikely that Kurzweil will eventually be vindicated. An adult brain is a finite thing, so its basic
workings can ultimately be known through sustained human effort. But if the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, not the inevitable unfolding of a specific exponential rate of progress dictated by the Law of Accelerating Returns.

Kurzweil’s reasoning rests on the Law of Accelerating Returns and its siblings, but these are not
physical laws. They are assertions about how past rates of scientific and technical progress can
predict the future rate. Therefore, like other attempts to forecast the future from the past, these
“laws” will work until they don’t. More problematically for the singularity, these kinds of
extrapolations derive much of their overall exponential shape from supposing that there will be a
constant supply of increasingly more powerful computing capabilities. For the Law to apply and the
singularity to occur circa 2045, the advances in capability have to occur not only in a computer’s
hardware technologies (memory, processing power, bus speed, etc.) but also in the software we
create to run on these more capable computers. To achieve the singularity, it isn’t enough to just run
today’s software faster. We would also need to build smarter and more capable software programs.
Creating this kind of advanced software requires a prior scientific understanding of the foundations
of human cognition, and we are just scraping the surface of this.

This prior need to understand the basic science of cognition is where the “singularity is near”
arguments fail to persuade us. It is true that computer hardware technology can develop amazingly
quickly once we have a solid scientific framework and adequate economic incentives. However,
creating the software for a real singularity-level computer intelligence will require fundamental
scientific progress beyond where we are today. This kind of progress is very different from the
Moore's Law-style evolution of computer hardware capabilities that inspired Kurzweil and Vinge.
Building the complex software that would allow the singularity to happen requires us to first have a
detailed scientific understanding of how the human brain works that we can use as an architectural
guide, or else create it all de novo. This means not just knowing the physical structure of the brain,
but also how the brain reacts and changes, and how billions of parallel neuron interactions can result in
human consciousness and original thought. Getting this kind of comprehensive understanding of the
brain is not impossible. If the singularity is going to occur on anything like Kurzweil’s timeline, though,
then we absolutely require a massive acceleration of our scientific progress in understanding every
facet of the human brain.

But history tells us that the process of original scientific discovery just doesn’t behave this way,
especially in complex areas like neuroscience, nuclear fusion, or cancer research. Overall scientific
progress in understanding the brain rarely resembles an orderly, inexorable march to the truth, let
alone an exponentially accelerating one. Instead, scientific advances are often irregular, with
unpredictable flashes of insight punctuating the slow grind-it-out lab work of creating and testing
theories that can fit with experimental observations. Truly significant conceptual breakthroughs don't
arrive when predicted, and every so often new scientific paradigms sweep through the field and
cause scientists to reëvaluate portions of what they thought they had settled. We see this in
neuroscience with the discovery of long-term potentiation, the columnar organization of cortical
areas, and neuroplasticity. These kinds of fundamental shifts don’t support the overall Moore’s Law-
style acceleration needed to get to the singularity on Kurzweil’s schedule.

The Complexity Brake

The foregoing points to a basic issue with how quickly a scientifically adequate account of human
intelligence can be developed. We call this issue the complexity brake. As we go deeper and deeper in
our understanding of natural systems, we typically find that we require more and more specialized
knowledge to characterize them, and we are forced to continuously expand our scientific theories in
more and more complex ways. Understanding the detailed mechanisms of human cognition is a task
that is subject to this complexity brake. Just think about what is required to thoroughly understand
the human brain at a micro level. The complexity of the brain is simply awesome. Every structure has
been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be.
It is not like a computer, with billions of identical transistors in regular memory arrays that are
controlled by a CPU with a few different elements. In the brain every individual structure and neural
circuit has been individually refined by evolution and environmental factors. The closer we look at the
brain, the greater the degree of neural variation we find. Understanding the neural structure of the
human brain is getting harder as we learn more. Put another way, the more we learn, the more we
realize there is to know, and the more we have to go back and revise our earlier understandings. We
believe that one day this steady increase in complexity will end—the brain is, after all, a finite set of
neurons and operates according to physical principles. But for the foreseeable future, it is the
complexity brake and arrival of powerful new theories, rather than the Law of Accelerating Returns,
that will govern the pace of scientific progress required to achieve the singularity.

So, while we think a fine-grained understanding of the neural structure of the brain is ultimately
achievable, it has not shown itself to be the kind of area in which we can make exponentially
accelerating progress. But suppose scientists make some brilliant new advance in brain scanning
technology. Singularity proponents often claim that we can achieve computer intelligence just by
numerically simulating the brain “bottom up” from a detailed neural-level picture. For example,
Kurzweil predicts the development of nondestructive brain scanners that will allow us to take a
precise snapshot of a person's living brain at the subneuron level. He suggests that these scanners would
most likely operate from inside the brain via millions of injectable medical nanobots. But, regardless
of whether nanobot-based scanning succeeds (and we aren’t even close to knowing if this is possible),
Kurzweil essentially argues that this is the needed scientific advance that will gate the singularity:
computers could exhibit human-level intelligence simply by loading the state and connectivity of each
of a brain’s neurons inside a massive digital brain simulator, hooking up inputs and outputs, and
pressing “start.”

However, the difficulty of building human-level software goes deeper than computationally modeling
the structural connections and biology of each of our neurons. “Brain duplication” strategies like
these presuppose that there is no fundamental issue in getting to human cognition other than having
sufficient computer power and neuron structure maps to do the simulation.[2] While this may be true
theoretically, it has not worked out that way in practice, because it doesn’t address everything that
is actually needed to build the software. For example, if we wanted to build software to simulate a
bird’s ability to fly in various conditions, simply having a complete diagram of bird anatomy isn’t
sufficient. To fully simulate the flight of an actual bird, we also need to know how everything
functions together. In neuroscience, there is a parallel situation. Hundreds of attempts have been
made (using many different organisms) to chain together simulations of different neurons along with
their chemical environment. The uniform result of these attempts is that in order to create an
adequate simulation of the real ongoing neural activity of an organism, you also need a vast amount of
knowledge about the functional role that these neurons play, how their connection patterns evolve,
how they are structured into groups to turn raw stimuli into information, and how neural information
processing ultimately affects an organism’s behavior. Without this information, it has proven
impossible to construct effective computer-based simulation models. Especially for the cognitive
neuroscience of humans, we are not close to the requisite level of functional knowledge. Brain
simulation projects underway today model only a small fraction of what neurons do and lack the detail
to fully simulate what occurs in a brain. The pace of research in this area, while encouraging, hardly
seems to be exponential. Again, as we learn more and more about the actual complexity of how the
brain functions, the main thing we find is that the problem is actually getting harder.
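To make the point concrete, here is a deliberately minimal sketch (our own illustration, with every numeric value assumed rather than measured) of the kind of neuron-level simulation these strategies envision. Even with a complete structural "wiring diagram" in hand, the simulation cannot run until the functional quantities (synaptic strengths, time constants, thresholds, input statistics) are filled in, and it is exactly this functional knowledge that we largely lack.

    import numpy as np

    # Toy leaky integrate-and-fire network (illustrative only; all values assumed).
    N = 100
    rng = np.random.default_rng(0)
    wiring = rng.random((N, N)) < 0.1        # structural map: which neurons connect

    # None of the following comes from the structural map; each is a functional
    # parameter that anatomy alone does not supply.
    weights = np.where(wiring, 0.5, 0.0)     # synaptic strengths (assumed)
    tau_ms, threshold, dt_ms = 20.0, 1.0, 1.0

    v = np.zeros(N)                          # membrane potentials
    for _ in range(200):
        spikes = v >= threshold
        v[spikes] = 0.0                      # reset neurons that just fired
        external = 0.1 * rng.random(N)       # stand-in for sensory drive (assumed)
        v += (dt_ms / tau_ms) * (-v) + weights.T @ spikes.astype(float) + external

Computational neuroscience has many far more sophisticated variants of this kind of model; the point is simply that the hard part is not the simulation loop but knowing what numbers to put into it.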

The AI Approach

Singularity proponents occasionally appeal to developments in artificial intelligence (AI) as a way to
get around the slow rate of overall scientific progress in bottom-up, neuroscience-based approaches
to cognition. It is true that AI has had great successes in duplicating certain isolated cognitive tasks,
most recently with IBM’s Watson system for Jeopardy! question answering. But when we step back,
we can see that overall AI-based capabilities haven’t been exponentially increasing either, at least
when measured against the creation of a fully general human intelligence. While we have learned a
great deal about how to build individual AI systems that do seemingly intelligent things, our systems
have always remained brittle—their performance boundaries are rigidly set by their internal
assumptions and defining algorithms, they cannot generalize, and they frequently give nonsensical
answers outside of their specific focus areas. A computer program that plays excellent chess can’t
leverage its skill to play other games. The best medical diagnosis programs contain immensely
detailed knowledge of the human body but can’t deduce that a tightrope walker would have a great
sense of balance.

Why has it proven so difficult for AI researchers to build human-like intelligence, even at a small
scale? One answer involves the basic scientific framework that AI researchers use. As humans grow
from infants to adults, they begin by acquiring a general knowledge about the world, and then
continuously augment and refine this general knowledge with specific knowledge about different
areas and contexts. AI researchers have typically tried to do the opposite: they have built systems
with deep knowledge of narrow areas, and tried to create a more general capability by combining
these systems. This strategy has not generally been successful, although Watson's performance on
Jeopardy! indicates that paths like this may yet have promise. The few attempts that have been made to
directly create a large amount of general knowledge of the world, and then add the specialized
knowledge of a domain (for example, the work of Cycorp), have also met with only limited success. And
in any case, AI researchers are only just beginning to theorize about how to effectively model the
complex phenomena that give human cognition its unique flexibility: uncertainty, contextual
sensitivity, rules of thumb, self-reflection, and the flashes of insight that are essential to higher-
level thought. Just as in neuroscience, the AI-based route to achieving singularity-level computer
intelligence seems to require many more discoveries, some new Nobel-quality theories, and probably
even whole new research approaches that are incommensurate with what we believe now. This kind of
basic scientific progress doesn’t happen on a reliable exponential growth curve. So although
developments in AI might ultimately end up being the route to the singularity, again the complexity
brake slows our rate of progress, and pushes the singularity considerably into the future.

The amazing intricacy of human cognition should serve as a caution to those who claim the singularity
is close. Without having a scientifically deep understanding of cognition, we can’t create the software
that could spark the singularity. Rather than the ever-accelerating advancement predicted by
Kurzweil, we believe that progress toward this understanding is fundamentally slowed by the
complexity brake. Our ability to achieve this understanding, via either the AI or the neuroscience
approaches, is itself a human cognitive act, arising from the unpredictable nature of human ingenuity
and discovery. Progress here is deeply affected by the ways in which our brains absorb and process
new information, and by the creativity of researchers in dreaming up new theories. It is also
governed by the ways that we socially organize research work in these fields, and disseminate the
knowledge that results. At Vulcan and at the Allen Institute for Brain Science, we are working on
advanced tools to help researchers deal with this daunting complexity, and speed them in their
research. Gaining a comprehensive scientific understanding of human cognition is one of the hardest
problems there is. We continue to make encouraging progress. But by the end of the century, we
believe, we will still be wondering if the singularity is near.

Paul G. Allen, who cofounded Microsoft in 1975, is a philanthropist and chairman of Vulcan, which
invests in an array of technology, aerospace, entertainment, and sports businesses. Mark Greaves is a
computer scientist who serves as Vulcan’s director for knowledge systems.

[1] Kurzweil, “The Law of Accelerating Returns,” March 2001.

[2] We are beginning to get within range of the computer power we might need to support this kind of
massive brain simulation. Petaflop-class computers (such as IBM’s BlueGene/P that was used in the
Watson system) are now available commercially. Exaflop-class computers are currently on the drawing
boards. These systems could probably deploy the raw computational capability needed to simulate the
firing patterns for all of a brain's neurons, though such a simulation would currently run many times
more slowly than real time.
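For a rough sense of scale, here is a back-of-envelope estimate (our own, using widely cited order-of-magnitude figures rather than numbers from this article):

    # Rough back-of-envelope (assumed order-of-magnitude figures):
    neurons       = 1e11   # ~100 billion neurons
    synapses_each = 1e4    # ~10,000 synapses per neuron
    rate_hz       = 1.0    # ~1 spike per second on average
    ops_per_event = 10     # a few operations per synaptic event
    print(neurons * synapses_each * rate_hz * ops_per_event)  # ~1e16 ops/s, i.e., ~10 petaflops

Under these assumptions, a petaflop-class machine would indeed fall short of real time by roughly an order of magnitude, consistent with the claim above.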

UPDATE: Ray Kurzweil responds here

