HUMAN-LEVEL ARTIFICIAL INTELLIGENCE --- AND ITS CONSEQUENCES --- ARE NEAR

THE TIME FOR POWERFUL ARTIFICIAL INTELLIGENCE IS RAPIDLY APPROACHING
====================================================================

There is a good chance human-level AI will be created within five to fifteen years --- and, almost certainly, within twenty-
five.

In ten years, for example, a machine costing one million dollars may well be able to: --- write reliable, complex code
faster than a hundred human programmers --- remember every word and concept in a world-class law library, and
reason from them hundreds of times faster than a human lawyer --- or --- contribute more rapidly to the advancement
of mathematical physics than all of humanity combined.

A cloud of such systems could represent all the knowledge recorded in books and on the web --- stored in a highly
indexed, inter-mapped, semantic deep structure that would allow extremely rapid reasoning from it. Such a cloud would
have the power to rapidly search, match, infer, synthesize, and create --- using that world of data --- so as to provide
humanity with a font of knowledge, reasoning, technology, and creativity few can currently imagine.

YOU SHOULD BE SKEPTICAL. The AI field has been littered with false promises. But for each of history's long-sought,
but long-delayed technical breakthroughs, there has always come a time when that breakthrough --- finally --- DID
happen. There are strong reasons to believe that --- for powerful machine intelligence --- that time is fast approaching.

What is the evidence?

It has two major threads.

First, within five to ten years, we are projected, for the first time, to have hardware with the computational power to
roughly support human-level intelligence. Within that time, the price for such hardware could be as low as three million
dollars, falling, by the end of that period, to perhaps as little as one hundred thousand. These prices are low enough that
virtually every medium- to large-sized business, educational, and governmental organization would be able to afford them.

Second, due to advances in brain science and in AI itself, researchers have begun to develop reasonable and relatively
detailed architectures for how to use such powerful hardware to create near-human and, ultimately, super-human
artificial intelligence.



THE HARDWARE
=============

To do computations of the type at which we humans currently outperform computers, you need something within at
least several orders of magnitude of the capacity of the human brain itself. You need such capacity in each of at least
four dimensions. These include representational capacity, computational capacity, processor-to-memory bandwidth,
and processor-to-processor bandwidth. You can't have the common sense, intuition, natural language capabilities, and
context appropriateness of human thought --- unless you can represent, rapidly search, infer between, and make
generalizations from, vast portions of human-level world knowledge --- where --- "world knowledge" is the name given to
the extremely large body of experientially derived visual, auditory, olfactory, tactile, kinesthetic, emotional, linguistic,
semantic, goal-oriented, and behavioral knowledge that most humans have.

Most past AI work has been done on machines that have less than one millionth the capacity of the human brain in
one or more of these four dimensions. That is like trying to do what the human brain does with a brain the size of a
spider’s. Even many current supercomputers that cost tens of millions of dollars have processor-to-processor
bandwidths that are three or more orders of magnitude smaller than that of the human brain.

No wonder so many prior attempts at human-level AI have hit a brick wall. Nor is it any surprise that most of the AI
establishment does not understand the importance of the correct --- roughly brain-level hardware --- approach to AI.
Such an approach has been impossible to properly study, and experiment with, at prior hardware costs and funding
levels --- and, thus, has been impossible to use for advancing one’s career in the AI field, or for raising venture capital.

But starting in three to five years it should be possible to make hardware that is much more suited for roughly human-
like computing.

Moore’s Law is likely to keep going for some time. 22nm node prototypes have already been built. Intel claims it is
confident it can take CMOS two generations further, to the 11nm node, by mid to late this decade. But, perhaps even
more important, there has been a growing trend toward more AI-capable hardware architectures, and, in particular,
toward addressing the bandwidth deficiencies of current computing systems.

This is indicated by the trend toward putting more processor cores, with high speed interconnect, on a chip. Tilera has
recently demonstrated a 100-core processor with extremely high internal bandwidth. IBM and Intel both have R&D chips
with roughly 64 to 80 mesh-networked processors, and they both plan to provide high bandwidth connections between
such processors and memory placed on multiple semiconductor layers above or below them. High bandwidth to such
memory will be provided by massive numbers of through-silicon metal vias connected between layers. Intel has said it
hoped to have such multi-core, multi-layer modules on the market by 2012. And one of its researchers has said
inferencing is one of the major tasks that could make such hardware commercially valuable.

Photonics will enable hundreds of gigabits per second to be communicated on photolithographically produced
waveguides at relatively low energy and thermal costs. This, and the through-silicon vias, will substantially break the
processor-to-RAM and processor-to-processor bandwidth bottlenecks that are the major barriers preventing current
clustered systems from being used efficiently for human-like reasoning. These bottlenecks need to be broken
because many types of human-like reasoning involve --- massively parallel, highly-non-local, out-of-order, memory
accessing --- in huge, sparsely-interconnected, networks of world knowledge. With the rapid advances in integrated
photonics --- and in low-cost interconnect between such integrated photonics and optical fibers --- being made by
organizations like HP, IBM, Luxtera, and Cornell University, it will become possible to extend massive numbers of
extremely high bandwidth optical links across chips, wafers, boards, and multi-board systems --- enabling us to create
computers --- and clouds of computers --- having not only more effective representational and computational power
than the human brain, but also greater processor-to-memory and processor-to-processor interconnect.

With the highly redundant designs made possible by tiled processors, and their associated memory and network
hardware --- wafer-scale and multi-level wafer-scale manufacturing techniques can become practical. Such highly
uniform, replicated designs make it easier to provide fault-tolerance and self-test. The conventional wisdom is that
wafer-scale integration was proved futile in the 1980s. But that was when the large size of most circuit components
made it inefficient to provide redundancy in anything other than highly replicated circuits, such as memories. In the
coming decade, however, entire cores will be small enough to be fused out with relatively little functional loss. In
addition, redundant vertical and horizontal pathways can be provided in 3D circuitry, so that a defect in part of one
layer will not prevent functional access to components above, below, and beside it.
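
To make the redundancy argument concrete, here is a back-of-envelope yield calculation in Python. It uses the
standard Poisson defect-yield model; the defect density, core area, and core count are purely my illustrative
assumptions, not figures from any fab:

    import math

    # Illustrative assumptions (not vendor data):
    defects_per_cm2 = 0.5     # assumed random killer-defect density
    core_area_cm2 = 0.02      # assumed area of one small core plus its memory tile
    cores_per_wafer = 20000   # assumed number of tiles on one wafer

    # Poisson yield model: probability a given core has zero killer defects.
    core_yield = math.exp(-defects_per_cm2 * core_area_cm2)
    expected_good = cores_per_wafer * core_yield
    print(f"per-core yield: {core_yield:.3f}")                    # ~0.990
    print(f"expected good cores per wafer: {expected_good:.0f}")  # ~19,801

    # A monolithic die covering the same area would essentially never yield:
    monolithic_yield = math.exp(-defects_per_cm2 * core_area_cm2 * cores_per_wafer)
    print(f"whole-wafer monolithic yield: {monolithic_yield:.1e}")  # ~1e-87

Under these assumptions, fusing out the roughly one percent of bad cores preserves about 99 percent of the wafer's
capacity, whereas a single wafer-sized die would be hopeless --- which is the whole point of small, redundant tiles.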

Combined, all these technologies can greatly decrease the cost of manufacturing the massive amounts of memory,
processing power, and connectivity demanded for extremely powerful --- roughly brain-level --- artificial intelligence.


For example, if --- 11nm semiconductor lithography --- multilevel circuitry --- and --- integrated-photonic interconnect ---
are all in mainstream production in ten years --- as many predict --- then one million dollars should be able to purchase
a system with: --- roughly 4 million small processor cores, allowing a theoretical max of 4 thousand trillion instructions
per second --- 32 TBytes of 2ns EDRAM, allowing roughly 400 trillion read-modify-writes to EDRAM per second --- over
200 TBytes of sub-20ns-read-access phase-change-memory (PCM), allowing roughly 160 trillion random reads per
second --- and a global, sustainable, inter-processor bandwidth of over 20 trillion 64Byte payload messages per
second.
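
Those headline figures are, of course, projections. But it is worth checking that they are at least internally consistent.
A few lines of Python reduce them to per-core rates (the totals are the ones hypothesized above; the derived rates are
just arithmetic):

    cores = 4_000_000
    instr_per_s = 4_000e12   # "4 thousand trillion instructions per second"
    edram_rmw_s = 400e12     # EDRAM read-modify-writes per second
    pcm_reads_s = 160e12     # random PCM reads per second
    msgs_per_s = 20e12       # 64-Byte inter-processor messages per second

    print(f"instructions per core per second: {instr_per_s / cores:,.0f}")  # 1e9, i.e. ~1 GHz cores
    print(f"EDRAM RMWs per core per second:   {edram_rmw_s / cores:,.0f}")  # 1e8, one per 10 ns
    print(f"PCM reads per core per second:    {pcm_reads_s / cores:,.0f}")  # 4e7, one per 25 ns
    print(f"message bandwidth per core:       {msgs_per_s / cores * 64 / 1e6:.0f} MB/s")  # 320 MB/s

Roughly 1-GHz cores, one 2ns-EDRAM access per 10 ns, one sub-20ns PCM read per 25 ns, and 320 MB/s of messaging
per core --- nothing in the list requires any single component to be heroic.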

The AI community does not know exactly how much representation, computation, processor-to-memory, and processor-
to-processor capacity is needed for human-level computing. The estimates vary by four or five orders of magnitude.
Some think we will have to match the complexity of the brain almost exactly to get brain-level performance, causing
them to think we will have to wait until approximately 2040 to achieve human-level AI. But this fails to take into account
the many superiorities electronic hardware has relative to wetware. From my research and calculations, I am relatively
confident that the above computational resources --- that could be available for one million dollars by 2020 --- would
have more than enough capability to provide something approaching --- or, very possibly, substantially surpassing ---
all the useful talents at which a human mind can currently outperform computers.

In addition, a machine with this power could also execute tasks at which computers already substantially outperform
humans --- at speeds, and with an exactness of memory, that exceed those of humans by factors of millions or trillions. Combining the
types of computing at which humans and machines each currently excel will greatly amplify the power of artificial
intelligence. Such a system could have a high-bandwidth, fine-grained interface between these two different types of
computation. And it could have the ability to rapidly vary the degree of mixture between them in each of many different
concurrent processes or sub-processes --- all under the dynamic control of powerful hierarchical mental behaviors that
have been honed by automatic reinforcement learning. This mixture will enable artificial intelligences that are
substantially sub-human in some ways, to be hundreds to millions of times more powerful than humans at tasks that
now can only be performed by us --- such as --- trying to use on-line tools to find the set of legal cases that are most
relevant to a new, complex legal problem --- or trying to find information on the internet in those situations in which
Google doesn't seem helpful.


To show why the 2020 system hypothesized above would, most probably, be capable of human-level thinking --- let us
assume half of its 200TBytes of PCM memory were used to represent nodes and links in an experientially grounded,
self-organizing, semantic-net memory. Assume an average of 100 bytes is required to represent and properly index an
occurrence of a pattern, or concept, represented by a node in that net. Assume that roughly another 100 bytes is
required to represent one of the relationships of such a concept’s occurrence to another pattern or to one or more
temporal, spatial, or semantic maps. With these assumptions, this 100 TBytes could store an experiential record holding
an average of 1000 such nodes or links for each of one billion seconds. That’s roughly the equivalent of three pages of
text describing a person’s experiences for every second in over 31 years. When combined with the type of memory
described in the paragraph below, this is almost certainly much, much more world knowledge than a human mind can
store.
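
The arithmetic behind that record is easy to check with a short Python sketch (the 100-bytes-per-item and
1000-items-per-second figures are the assumptions stated above):

    pcm_for_record = 100e12   # 100 TBytes reserved for the experiential record
    bytes_per_item = 100      # assumed bytes per node occurrence or per link
    items_per_sec = 1000      # assumed nodes plus links recorded each second

    seconds = pcm_for_record / (bytes_per_item * items_per_sec)
    years = seconds / (365.25 * 24 * 3600)
    print(f"{seconds:.2e} seconds = {years:.1f} years")   # 1.00e+09 seconds = 31.7 years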

Continuing this simplified model of memory distribution --- let the remaining 100 TBytes of PCM store billions of patterns
to represent and ground the meaning of the above mentioned nodes and links. This would include an invariant,
hierarchical, self-organizing memory representing the composition, generalization, and similarity relationships between
such patterns. This semantic net would include --- billions of patterns generalized from activation, or recorded, states in
the system's network of sensory and semantic nodes and links --- and --- mappings between such generalized patterns,
and their parts, and occurrences of such patterns in perceptions, thoughts, plans, imaginations, and memories. These
generalized patterns would include billions of relatively simple sensory and motor patterns. They would also include
more complex patterns representing concepts such as objects, persons, actions, emotions, drives, goals, and
behaviors. These more complex patterns would include physical and mental behaviors and plans --- and their
associated goals and other memories --- including feedback on their value and effectiveness. These patterns would
include temporal and spatial relationships, and relationships defined by the relative roles of patterns in larger patterns.
They would also include probabilistic statistics on the frequency, long term importance, and relationships between such
patterns.
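
To make this structure more concrete, here is a minimal Python sketch of the kind of node-and-link store just
described. It is only an illustration of the data layout --- all of the field names are my own, and a real system would
pack these records far more tightly:

    from dataclasses import dataclass, field

    @dataclass
    class Pattern:
        """One node: a sensory, motor, or conceptual pattern."""
        name: str
        activation: float = 0.0   # dynamic state, held in fast EDRAM
        frequency: int = 0        # how often the pattern has been observed
        importance: float = 0.0   # long-term value, set by reinforcement learning
        parts: list = field(default_factory=list)             # composition links
        generalizations: list = field(default_factory=list)   # is-a links

    @dataclass
    class Link:
        """One weighted relationship between patterns, stored in bulk PCM."""
        src: Pattern
        dst: Pattern
        kind: str       # e.g. "before", "part-of", "similar-to", "role-in"
        count: int = 0  # co-occurrence statistics for probabilistic inference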

For many applications, such a system would contain many terabytes of information to help it excel at communicating
with humans through --- text --- speech --- vision --- gestures --- facial expressions --- tones of voice --- and
photorealistic, real-time, audio/video animation. Such systems would record --- tens of millions of compressed
photographs, millions of which would be stored in morphable, semantically-labeled photosynths for generating 3D
images and animations --- millions of seconds of compressed audio and moving images --- including models of humans
communicating --- and --- many millions of mental patterns and behaviors relating to understanding human intentions,
communications between humans, and communication with humans.

From the above we can see that the 200TBytes of storage --- provided by the hypothetical 2020 system --- particularly
if it uses a context-sensitive, invariant representation scheme (of the type discussed in more detail below) --- is almost
certainly enough to represent much more functional world knowledge than a human mind can store --- and to ground
the concepts in such knowledge in an extremely rich web of sensory, cognitive, emotional, and behavioral memories
and associations. This grounding should be much more than enough to give such a system's symbols --- true
“meaning.”


This hypothetical 2020 system --- not only has enough capacity to more than represent human-level world knowledge
--- it also appears to have enough computational and communication capacity to reason from such world knowledge
faster than humans. The 2020 system’s ability to randomly read its PCM memory 160 trillion times per second, and to
perform over 400 trillion random read-modify-writes per second to portions of its EDRAM representing dynamic activation
values of patterns stored in the PCM, gives it tremendous power to reason from its world knowledge. It would have enough
power to perform relatively shallow and most-probable (i.e. subconscious) inferencing simultaneously from billions of
somewhat activated patterns --- and --- relatively deep and/or broad (i.e., conscious) inferencing, involving tens of
billions of multi-level spreading activations from each of a small number of highly activated patterns that were the focus
of attention. This allows rich, deep, grounded, and highly dynamic activation states --- ones that would probably have
more useful informational complexity than those in our own minds.

These dynamic activation states --- when combined with mental behaviors for dynamically selecting and focusing
attention --- can give rise to a powerful combination of conscious and subconscious thought. In this combination,
conscious thought would commonly result from massive activation from a relatively small number of concepts and their
relationships. The concepts chosen for such massive conscious activation would be generated, tested, and selected by
many billions, or trillions, of computations in the subconscious. These subconscious computations would be made in
response to sensations, emotions, desires, goals, memories --- and --- from activations from current and recently
consciously activated concepts. In such a system the distribution of activation energy between conscious and
subconscious activations, and between various activations within the subconscious, can be rapidly varied. For example,
this allows each of many increasingly higher scoring networks of activation in the subconscious to receive increasingly
more activation energy to verify which of them actually warrant being thresholded into conscious attention.
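
The core mechanism here --- spreading activation with a competitive threshold into the focus of attention --- is simple
enough to sketch in a few lines of Python. This toy version is mine, not drawn from any particular system, and a real
implementation would run billions of such updates in parallel:

    import heapq
    from collections import defaultdict

    def spread_activation(graph, seeds, decay=0.5, cutoff=0.05, top_k=3):
        """graph: {node: [(neighbor, weight), ...]}; seeds: {node: energy}.
        Returns all activations plus the top_k nodes 'thresholded into
        conscious attention'; everything below stays subconscious."""
        act = defaultdict(float)
        frontier = dict(seeds)
        while frontier:
            nxt = defaultdict(float)
            for node, energy in frontier.items():
                act[node] += energy
                for nbr, w in graph.get(node, []):
                    e = energy * decay * w
                    if e > cutoff:          # prune the subconscious fringe
                        nxt[nbr] += e
            frontier = nxt
        focus = heapq.nlargest(top_k, act.items(), key=lambda kv: kv[1])
        return act, focus

    graph = {"court": [("lawyer", 0.9), ("judge", 0.8)],
             "lawyer": [("brief", 0.7), ("client", 0.6)],
             "judge": [("ruling", 0.9)]}
    acts, focus = spread_activation(graph, {"court": 1.0})
    print(focus)   # the highest-scoring concepts win the focus of attention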


In summary, the above numbers give us good reason to believe that within ten years it will be commercially viable to
build and sell machines that have the representation, computation, processor-to-memory, and processor-to-processor
capacities necessary to support human-level --- and likely superhuman-level --- intelligence.

As one former head of DARPA’s AI funding told me, “The hardware being there is a given. It’s the software that’s
needed.”



THE SOFTWARE
=============

Tremendous advances have been made in artificial intelligence in the recent past. This is largely due to the ever-
increasing rate of progress in brain science. It is also due to the increasing power of the computers that researchers
can experiment with.

One example of such recent progress is the paper “Learning a Dictionary of Shape-Components in Visual Cortex:...”,
by Thomas Serre of Prof. Tomaso Poggio’s group at MIT. It describes a system that provides human-level performance
in one limited, but impressive, type of human visual perception ( http://cbcl.mit.edu/...TR-2006-028.pdf ). The Serre-
Poggio system learns and uses patterns in a generalization and composition hierarchy. This allows efficient multiple use
of representational components, and computations matching against them, in multiple higher level patterns. It allows the
system to learn in compositional increments. It also provides surprisingly robust invariant representation. Such invariant
representation is extremely important because it allows efficient non-literal matching, pattern recognition, and context
appropriate pattern imagining and instantiation. Such non-literal match and instantiation tasks have --- until recently ---
been among the major problems in trying to create human-like perception, cognition, imagination, and planning.
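
The key trick --- alternating template matching with local max-pooling to buy invariance --- can be sketched compactly.
The following toy numpy version is only in the spirit of the Serre-Poggio model (their actual system learns its template
dictionary from natural images and stacks several such stages):

    import numpy as np

    def s_layer(x, templates):
        """'Simple' stage: slide each stored template over the input and
        record how well it matches at every position."""
        t_n, t_h, t_w = templates.shape
        h, w = x.shape
        out = np.zeros((t_n, h - t_h + 1, w - t_w + 1))
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                patch = x[i:i + t_h, j:j + t_w]
                # response peaks when the patch equals the template
                out[:, i, j] = np.exp(-((templates - patch) ** 2).sum(axis=(1, 2)))
        return out

    def c_layer(s, pool=2):
        """'Complex' stage: max-pool over nearby positions --- this is what
        buys the tolerance to small shifts, i.e. the invariance."""
        t_n, h, w = s.shape
        h2, w2 = h // pool * pool, w // pool * pool
        return s[:, :h2, :w2].reshape(t_n, h2 // pool, pool, w2 // pool, pool).max(axis=(2, 4))

    rng = np.random.default_rng(0)
    image = rng.random((16, 16))
    templates = rng.random((8, 4, 4))   # a tiny stand-in "dictionary of shape components"
    features = c_layer(s_layer(image, templates))
    print(features.shape)               # (8, 6, 6): shift-tolerant feature maps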

Although it is different from the Serre-Poggio system, the system described in Geoff Hinton’s Google Tech Talk at
http://www.youtube.c...h?v=AyzOUbkUf3M demonstrates a character recognition architecture that shares many of these
same beneficial characteristics --- including a hierarchical, scalable, and invariant representation/computation scheme
that can be efficiently and automatically trained. The Hinton scheme is quite general, and can be applied to many types
of learning, recognition, and context sensitive imagining. The architecture described by Jeff Hawkins et al. of Numenta,
Inc. in “Towards a Mathematical Theory of Cortical Micro-circuits” ( http://www.ploscompb...al.pcbi.1000532 ) also
shares the concepts of hierarchical memory and invariance, and provides a potentially powerful and general
computational model that attempts to describe the functioning of the human cortex in terms of its individual layers.

Similarly impressive advances have been made in understanding other brain systems --- including those that control and
coordinate the behavior of, and between, multiple areas in the brain --- and those that focus attention and decide which
of competing actions to take or consciously consider.

These advances, and many more, provide enough understanding that we can actually start experimenting with designs
for powerful artificial minds. It’s not as if we have exact blueprints. But we do have a good overview, and good ideas on
how to handle every problem I have ever heard mentioned in regard to creating roughly brain-like AI. As Deb Roy, of
MIT, once agreed with me after one of his lectures, there are no problems between us and roughly human-level AI that
we have no idea how to solve. The major problem that exists is the engineering problem of getting all the pieces to fit
and work together well, automatically, and within a commercially-viable computational budget. That will take
experimentation.

But we certainly do know enough to design and build general artificial intelligences that could provide useful functions.


The most complete, currently-publicly-available artificial brain architecture of which I am aware is the OpenCogPrime
architecture. It has been created by the open-source AGI initiative headed by Ben Goertzel. There may be other
equally complete and impressive brain architectures available to the public. But since I do not know them, let me give a
brief --- but hopefully revealing --- overview of the OpenCog architecture --- as I understand it, in combination with some
of my own thinking on the subject. (The OpenCog architecture is described at http://www.opencog.o...ok#Introduction .)

OpenCog starts with a focus on “General Intelligence”, which it defines as “the ability to achieve complex goals in
complex environments.” (“AGI” stands for Artificial General Intelligence.) The architecture is focused on automatic,
interactive learning, experiential grounding, self-understanding, and both conscious (focus-of-attention) and
unconscious (less-attended) thought.

It records its sensory and emotional experiences, finds repeated patterns in such recordings, makes generalizations
and compositions out of such patterns --- all through multiple levels of generalization and composition --- based on
spatial, temporal, and learned-pattern-defined relationships. It uses Bayesian mathematics --- based on the
frequencies of the detection of such patterns and their relationships --- in a way that allows inferences to be drawn from
many billions of activated patterns at once.
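
A toy version of this kind of frequency-based inference takes only a few lines. The sketch below is loosely in the spirit
of OpenCog's probabilistic truth values, but the exact formulas are my simplifications, not the project's actual (more
sophisticated) probabilistic logic rules:

    def truth_value(positive, total, prior=0.5, prior_weight=1.0):
        """Estimate P(B|A) from co-occurrence counts, smoothed toward a
        prior; confidence grows with the amount of evidence."""
        strength = (positive + prior * prior_weight) / (total + prior_weight)
        confidence = total / (total + prior_weight)
        return strength, confidence

    def deduce(ab, bc):
        """Crudely chain A->B and B->C into A->C, multiplying strengths
        and keeping the weaker confidence (an independence assumption)."""
        (s1, c1), (s2, c2) = ab, bc
        return s1 * s2, min(c1, c2)

    cat_mammal = truth_value(98, 100)   # "cats are mammals": lots of evidence
    mammal_fur = truth_value(40, 50)    # "mammals have fur": less evidence
    print(deduce(cat_mammal, mammal_fur))   # (~0.77, ~0.98)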

Patterns --- which can include behaviors (including those that control the operation of the mind itself) --- are recorded,
modified, generalized, and deleted all the time. They have to compete for their computational resources, including
memory space, and, thus, their own continued existence. Reinforcement learning and other forms of credit
assignment are used to determine which patterns are useful enough to be kept, and for how long. This results in a self-
organizing network of similarity, generalization, and composition patterns and relationships that all must continue to
prove their worth in a survival-of-the-fittest, goal-oriented, experiential-knowledge ecology.
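
This "knowledge ecology" amounts to charging every pattern rent and evicting the ones that cannot pay. A deliberately
tiny Python sketch of the idea (the rent scheme and the numbers are mine; real systems use far richer credit
assignment):

    def charge_rent_and_evict(patterns, budget, rent=0.01):
        """Each cycle, every pattern pays rent out of the importance it has
        earned; when memory is over budget, the least important die."""
        for p in patterns:
            p["importance"] -= rent
        survivors = sorted(patterns, key=lambda p: p["importance"], reverse=True)
        return survivors[:budget]

    pool = [{"name": "useful-idiom", "importance": 0.90},
            {"name": "one-off-noise", "importance": 0.01},
            {"name": "old-habit", "importance": 0.30}]
    print([p["name"] for p in charge_rent_and_evict(pool, budget=2)])
    # -> ['useful-idiom', 'old-habit']; the noise pattern loses the competition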

Reinforcement learning is also used to weight patterns for long-term and current short-term importance, based on the
roles they have played in achieving the system’s goals in the past. These indications of importance --- along with a deep
memory for past similar experiences, contexts, and goals, and for the relative usefulness of various past inferencing
behaviors in such contexts --- significantly narrow and focus attention and spreading activation. This helps avoid the
pitfalls of combinatorial explosion, and it tends to result in context-appropriate perception, cognition, and behavior.

OpenCog uses evolutionary program learning --- somewhat like genetic programming --- to increase the system’s ability
to learn and tune: generalizations of patterns; classifiers; creative ideas; and behaviors --- including physical, attention
focusing, inferencing, and learning behaviors. This evolutionary learning is made more powerful by being used by ---
and by using --- the rest of the system. This includes the system’s composition and generalization hierarchy, its network
of probabilistic associations, its inferencing, and its reinforcement learning. Evolutionary programs can be used by the
system’s experiential probabilistic learning. Such programs can, themselves --- along with experientially learned
patterns --- be incorporated --- with or without modification --- into the learning of new evolutionary programs. A
compositional and generalization hierarchy including such evolutionarily-learned programs enables complex programs
to be learned more efficiently in incremental steps from simpler ones. Experiential memories help guide and
evaluate the evolutionary process, including reducing the computation required for estimating the fitness functions for
many evolutionary candidates. Experiential memories can also provide information for probabilistically inferring which
programs are appropriate to employ, with which parameters, in which contexts.
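
For readers unfamiliar with evolutionary program learning, here is a self-contained toy run of the basic loop ---
generate random programs, score them against a goal, keep and mutate the best. This is generic genetic programming,
far simpler than what OpenCog's actual evolutionary learner does:

    import random

    OPS = [("+", lambda a, b: a + b), ("*", lambda a, b: a * b)]

    def random_prog(depth=2):
        """A program is a nested tuple (op, left, right); leaves are 'x' or constants."""
        if depth == 0 or random.random() < 0.3:
            return "x" if random.random() < 0.5 else random.randint(-2, 2)
        return (random.choice(OPS), random_prog(depth - 1), random_prog(depth - 1))

    def run(prog, x):
        if prog == "x":
            return x
        if isinstance(prog, int):
            return prog
        (name, fn), left, right = prog
        return fn(run(left, x), run(right, x))

    def fitness(prog):
        target = lambda x: x * x + 1          # the behavior we want learned
        return -sum((run(prog, x) - target(x)) ** 2 for x in range(-3, 4))

    def mutate(prog):
        if random.random() < 0.3 or not isinstance(prog, tuple):
            return random_prog()              # replace a random subtree
        op, left, right = prog
        return (op, mutate(left), right) if random.random() < 0.5 else (op, left, mutate(right))

    pop = [random_prog() for _ in range(200)]
    for _ in range(30):
        pop.sort(key=fitness, reverse=True)   # rank by how well each program fits
        elite = pop[:40]
        pop = elite + [mutate(random.choice(elite)) for _ in range(160)]
    best = max(pop, key=fitness)
    print(best, fitness(best))                # usually finds x*x + 1 exactly (fitness 0)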


Taken together, software architectures like those discussed above --- when combined with the hardware likely to be
available within a decade --- will allow AGI systems to automatically learn, reason, plan, imagine, and create with a
sophistication and power never before possible --- not even for the most intelligent of humans.


Of course, it will take some time for the first such systems to automatically learn roughly the equivalent of human-level
world knowledge. After all, it takes over twenty years for most human minds to train up. But there is reason to believe
substantial portions of such machine learning could be performed in parallel. And since such machines will be capable
of remembering vastly more detail than humans, their learning should be much faster. Such machine learning is likely to
be better grounded in physical reality, if such machines can control robotic bodies with human-like senses that enable
them to learn by exploring the physical world, as do human children --- or, at least, by having the equivalent in a fairly
accurate virtual world. The learning of many concepts would be improved by having human teachers. Once such a
system achieves a certain level of world-knowledge --- including a child’s level of common-sense physics, basic human
behavior, natural-language, and visual scene understanding --- it will be able to rapidly learn by reading and viewing
images and diagrams from large libraries of digitally recorded books, and from media on the web.

And once one such system has been fully trained in basic world knowledge --- or in the knowledge relating to a given
field of expertise that is linked to a common representation of such world knowledge --- that knowledge can be copied to
another similar machine in seconds or minutes.



WILL IT WORK?
============

The answer is most probably, yes, because such systems will: ---

-a--- in multiple important ways --- work like the human brain, itself;

-b--- have enough representation, computation, and interconnect capacity to make types of AI that were never before
even close to possible for most in the AI community --- not only possible --- but commercially practical --- including the
ability to represent and rapidly reason from grounded, human-level world knowledge --- and

-c--- benefit from the explosion of AI related advances that will occur in this decade.

This explosion of AI-related advances --- in addition to the hardware advances described above --- will occur in: ---
brain science --- generalized machine learning and inferencing --- attention and inference control --- large-scale
semantic web applications --- learning and reasoning from self-organizing ontologies --- natural language
understanding and generation --- common-sense and world-knowledge learning and computing --- evolutionary
learning --- machine vision --- multimedia indexing --- command and control --- national security and defense
applications --- search --- robotics --- personal assistants --- web agents --- user interfaces --- and more human-like
characters for video games and virtual realities. All of this will be in addition to the increasing research that will be
performed in AGI, itself.

Within three to seven years, hardware having the effective representation, computation, and interconnect of small
mammal and, then, primate brains will be available --- at sufficiently low costs that thousands of such systems will be
used by academic and corporate teams to experiment in such fields. All this research will help identify, refine, and tune
various general algorithms that could be put together to create powerful generalized “thought robots” --- i.e., powerful
artificial intelligences that can --- automatically --- or with relatively little handcrafting --- tune their learning and mental
behaviors to achieve various goals across a broad range of applications.

AGI is not currently competitive for most applications, because its general algorithms tend to require much more
training, memory, and computation than AI systems handcrafted by humans to solve a particular set of problems. But
many of the learning, inferencing, and inference control mechanisms deployed in more narrow applications can be
generalized to have applicability to AGI. And many early AGIs will have handcrafted parts to make them more
competitive for specific applications. As memory, computation, and interconnect costs drop drastically relative to
programming costs --- particularly relative to the cost of handcrafting AIs for extremely complex problems --- larger,
more general, and more capable AGI will become ever more competitive.

Once created, AGI will be particularly attractive for corporate and cloud computing --- because it can automatically be
adapted to the many different uses that different people will want from AI services --- and because it can provide
superintelligent user interfaces --- using text, speech, audio, vision, and the real-time generation of animation --- to
make it easy for users to instruct, monitor, and learn relevant information from such machines.


So can human-level AGI be built?

Yes!

The only question is how fast. And it is almost certain that if the right people got the right money, it could be built within
ten years --- that is --- by no later than Twenty-Twenty.

Making this happen should be our nation’s "Twenty-Twenty vision" because machine superintelligence is the most
transformative technology of all.



THE CONSEQUENCES
=================

It is hard to overstate the economic value and transformative power of the types of machines that will probably be built
by 2020 --- and if not by then --- by 2030.

The one-million-dollar 2020 hardware hypothesized above could be rented out on the web, at a profit, for roughly $50
an hour. It --- would have superhuman concentration --- could work close to 24/7 --- could perform many types of
reasoning tasks millions of times faster than a human --- and --- if connected to a cloud of similar machines that stored
a large percent of human knowledge in instantly-accessible semantic deep structure --- it would, in effect, have
photographic memory for almost all of recorded human knowledge. It is not unrealistic to think that for a large number
of tasks such a machine could do work at a higher rate than one hundred programmers, lawyers, doctors, or managers.

If such a system were part of a computing cloud --- then an average of --- 64 thousand cores --- 500 GBytes of 2ns
EDRAM --- 3 TBytes of 20ns-read-access PCM memory --- and --- over 300 billion global, 64Byte messages per
second --- could be provided to serve an individual user of a wireless mobile phone; retinal-scanning, headset
computer; or other personal device --- at roughly the same price currently charged for long distance phone calls. This
should be enough power to provide users with moderately good --- natural language --- vision --- real-time animation ---
intelligent search --- semantic web reasoning --- and machine-mediated collective computing.
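
The slice arithmetic above is easy to verify by dividing the hypothesized machine by the hypothesized per-user
allotment (both sets of figures are from the text; only the division is mine):

    machine = {"cores": 4_000_000, "edram_bytes": 32e12, "pcm_bytes": 200e12, "msgs": 20e12}
    user_slice = {"cores": 64_000, "edram_bytes": 500e9, "pcm_bytes": 3e12, "msgs": 300e9}

    users = min(machine[k] / user_slice[k] for k in machine)
    print(f"concurrent users per machine: {users:.0f}")        # ~62, limited by cores

    rate_per_hour = 50.0   # the $50/hour rental figure from above
    print(f"cost per user-hour: ${rate_per_hour / users:.2f}")           # ~$0.80
    print(f"whole machine for one minute: ${rate_per_hour / 60:.2f}")    # ~$0.83

At roughly eighty cents per user-hour --- and under a dollar for a full machine-minute --- the long-distance-call
comparison, and the one-dollar figure in the next paragraph, both check out.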

Most of the time they were connected, users would not even begin to fully use the 64-thousand-core chunk of hardware
described above, but there would frequently be tasks demanding more power --- such as understanding difficult natural
language constructions --- performing computationally intensive queries, summaries, and reasoning --- and ---
synthesizing creative solutions to complex problems. Larger portions of the cloud could be used in a multiplexed
manner for such tasks. Users who utilize more than a certain amount of the cloud's resources in a given time could be
notified that they were about to do so, and be billed extra for it --- at less than one dollar for the equivalent of one
minute’s use of one of the above-described one-million-dollar 2020 machines.

That one dollar should be enough, for example, to get a reasonably well-reasoned legal brief on a moderately complex
issue --- something that would cost several hundred to several thousand dollars from most American lawyers.

Even if we make the extremely conservative assumption that our one-million-dollar 2020 machine could only
simultaneously do the work of ten human lawyers, doctors, financial experts, or managers --- that would mean it could
provide the services of such a professional or manager for $5 per hour --- making most such highly educated
professionals or managers unemployable even at the current minimum wage.

Furthermore if new areas of electronics --- such as 3D, carbon nanotube, graphene, nanowire, quantum dot, spintronic,
quantum entanglement, molecular, neuromorphic, and self-organizing electronics --- keep Moore’s Law going for several
decades past the final density expected for traditional silicon electronics --- in twenty to thirty years a machine of power
similar to the one-million-dollar 2020 system might cost less than a current personal computer. Such a continuation of
Moore's Law is likely. This is because machine superintelligences can be produced at even the 22nm node at
sufficiently low prices that they could be commercially useful for greatly increasing the rate of development in
electronics and electronic design. If such cost reductions are, in fact, obtained, virtually all human mental jobs could be
replaced for one or two pennies an hour in thirty years. If superintelligence is used to speed advances and cost
reductions in robotics --- all of humanity --- including in places like China, India, Vietnam, and Haiti --- will cease to be
competitive for most current forms of work.

AGI will create a historical “singularity” of the type Ray Kurzweil has done so much to popularize. That is, a
technological change so powerful --- that --- when combined with the massive acceleration it will cause in --- the internet
--- electronics --- computing --- robotics --- nanotechnology --- and --- biotechnology --- it will warp the very fabric of
human economies, cultures, values, and societies in somewhat the same way the singularity of a black hole warps the
fabric of space-time --- and is believed --- by some --- to create an entirely new universe -- one largely disconnected
from its past in space and time.

Can we, or should we, stop the advent of superintelligence?

No. It is futile to try.

Too many people already know how much technological, economic, military, cultural, and political advantage can be
gained by the nations and corporations that are first to substantially deploy it. It cannot be stopped because electronic
technology and our understanding of intelligence are already so advanced that in a decade the development of
superintelligence will be well within the economic and intellectual grasp of most nations, thousands of corporations,
and hundreds of universities. It is already within the grasp of the world’s leading nations and technological companies. It
cannot be stopped by international agreements, because --- compared to the development of nuclear weapons --- in a
decade, machine intelligence can be developed for very little money, in very little space, with relatively little electricity.
Its development would be very difficult to detect, prove, or stop.


There are many reasons we should want --- rather than oppose --- the development of superintelligence. It --- and the
rapid advances in technology and productivity it will bring --- could be a force for tremendous good.

It could create a world of material, medical, mental, and intellectual well-being and richness. It could help us develop
highly efficient, sustainable, less-polluting factories, farms, stores, corporations, and transportation. It could teach us
how to cure most disease, how to keep our bodies younger longer, and how to make our minds work in more powerful
and satisfying ways. It could help us to become a truly enlightened species. It could educate all of us, with virtual tutors
more knowledgeable and more capable of explaining things to us than the most brilliant and attentive human teachers.
It could learn to know each of us better than we know ourselves, and to provide us with personal counseling superior to
that of the best psychologist or friend.

It could help us better simulate and determine the costs, benefits, and risks of personal, corporate, and governmental
decisions. It could enable people to communicate, collaborate, and deliberate with an efficiency and fairness never
before possible. It could help us better deal with the rapid changes it will produce --- such as the fact it will end most
current ways of earning a living in the industrialized world --- by helping us to create a new, fair, and sustainable social
contract --- and new types of meaningful work --- such as sharing more responsibility in much more participatory local,
regional, national, and world governments and institutions. It could allow us to have AGI-mediated, real-time virtual
conversations, debates, celebrations, songs, dances, games, and prayers --- in which hundreds, thousands, millions,
or billions of people take part.

The world is facing many challenges that seem beyond the capacity of our current political institutions to solve. It is
arguable we need superintelligence to help us find how to provide food, shelter, clothing, medical services, education,
meaningful lives --- and --- most importantly --- peace --- for the projected 9 billion people who will populate the earth by
2050. Most of these will come from societies that are brutally poor --- and yet most of them will have access to the
extremely powerful --- but, by 2050, inexpensive --- personal information devices of the future --- video devices that will
likely teach them to want as much power and material wealth as people in the richest nations. It is arguable that we will
need superintelligence to help us deal with such problems without poisoning our planet with pollution --- or
destroying it with war or terrorism.


But superintelligence, and the technologies it will bring, could also cause great harm.

Unless we are careful --- in addition to putting most of us out of work --- superintelligence can be used to create
surveillance systems and robotic police or military forces that could enable one class, group, person, or system of
machines to create a powerful oligarchy or dictatorship. Unethical people, governments, or machines will almost
certainly use superintelligences to constantly try hacking into our networks and intelligent machines --- trying to take
control of them for their own selfish purposes.

AGI can create virtual realities, friends, and lovers that are much more attractive, attentive, sensitive, romantic, funny,
and seductive than those that are real. These virtual worlds and personalities could weaken the bonds between
humans, and could seduce us into increasingly turning over more attention and power to machines, and to the virtual
worlds they generate for us --- whether those machines are controlled by businessmen, political leaders, or machines
themselves. Low-income housing may become stackable 4 x 4 x 8 foot plastic pods with super HD 3D virtual interfaces,
in which the elites provide the masses with a bare-minimum physical reality, but an extremely rich --- and much less
expensive --- virtual one.

Human laziness may well lead us to turn too much power over to superintelligences --- so much so that we might soon
be at their mercy. Ultimately, the machines themselves might well take over. And if they do --- it is not clear whether
they would like us enough to let us keep consuming so much of the earth’s resources --- which they, themselves, could
use for their own purposes --- and their own progeny.


Some transhumanists say --- the only way in which what we value as "human" can remain competitive in a world bound
to be ruled by superintelligences is for us to increasingly merge our values, memories, consciousnesses, and bodies
with such machines. They say our seduction by virtual worlds, friends, and lovers is a good thing --- because it will
make --- what they view --- as the necessary man-machine merger --- more emotionally acceptable. To make our lives
more meaningful --- they say --- we should view the machines as our kin and our posterity.

Some transhumanists suggest it is necessary for our very survival that we place into, or onto, our brains high bandwidth
connections to superintelligences. Preferably such connections will be to a World-Wide-Web of other people similarly
connected to such machines. This would make us into Star-Trek-like borgs --- but, let us hope, ones with substantially
more individualism, humor, and happiness.

Other transhumanists suggest uploading our minds to run on such machines so they can “live” for billions of years.

To most people, all this sounds like a whacked-out science-fiction horror flick. But there are reasons to believe that within
only decades --- almost certainly by the end of this century --- much of this could, in fact, come true.

The transhumanists may be right. We humans largely rule the earth because our intelligence and knowledge exceed
those of all other species. By analogy, it only makes sense that --- starting in several decades, when there are likely to be
networks of many superintelligences --- each thousands of times smarter than humans --- there will ultimately come a
time when machines take domination away from us.

That is, perhaps, unless we join them, and make them part us, and us part them. There are already many who look
forward to connecting their brains to superintelligences --- and it is almost certain that, once superintelligence arrives,
the people who use such implanted, high bandwidth, connections to such machines will be more successful than those
who do not.

But even the transhumanist scenario requires that humanity act intelligently and wisely if the transition to humanity+ is
to be a happy one.

How we develop --- use --- and control superintelligence is one of the greatest challenges facing mankind. We cannot
stop its advent, but we can try to control it --- to reduce its danger, at least to some degree. Great flexibility is possible
in the design of AGIs, and we should be careful to learn what types of machines are likely to be safer and what
types are likely to be more dangerous. We should learn how to best use the safer types of superintelligence to protect
us against the more dangerous. If you care about humanity --- more important than creating superintelligence per se ---
is creating superintelligence that is well combined with the wisdom, compassion, and voices and concerns of billions of
individual human beings.


That is why, ultimately --- from humanity’s standpoint --- the most important technology of all is collective intelligence.

It is the technology of using the internet, computers, and, soon, superintelligence, to enable groups, corporations,
nations --- and ultimately the world --- to think and act together more intelligently, successfully, and humanely --- as we
--- as a species --- have to navigate in an ever more rapidly-changing future.

And that is why --- the single most important use of superintelligence --- is to help give mankind enough collective
intelligence that --- for decades, or, perhaps, even centuries --- humanity can safely and happily travel into that rapidly-
changing future.

(For a more complete discussion of collective intelligence see http://fora.humanity...he-singularity/ .)

#2   Ed Porter --- Posted 09 February 2010 - 11:45 PM
P.S. If any of you have actually taken the trouble to read my above topic post, I would be interested in hearing your
comments --- either as a post below, or as an email sent to me through my profile.

I know that my post contains some rather dense passages, but that is because it is attempting to summarize a lot of
dense material in both hardware and AI software, and I don't know the knowledge level of most of the readers on this
list with regard to the type of technology I am describing. I assume many of them are not interested in the details of AI
hardware, and others are not interested in technical details at all, but only in the "New Age" aspects of superintelligence.

But I hope those who are really interested in making human-level AGI happen, and in knowing when it is likely to
happen, will understand that the current and prospective state of the art in both fields is very important to such
concerns. Ed Porter
#3   Ed Porter --- Posted 10 February 2010 - 06:51 PM
P.S. Regarding this topic --- there is a very good article entitled “How Long Till Human-Level AI? What Do the Experts
Say?” written by Ben Goertzel, Seth Baum, and Ted Goertzel at http://hplusmagazine...-human-level-ai

To me its most important information is in the figure entitled “When Will Milestones of AGI be Achieved without New
Funding”. It indicates that, of the 21 attendees at the AGI 2009 conference who answered the survey, 42% think AGIs
capable of passing the Turing Test will be created within ten to twenty years.

Oddly, that is slightly more than the 38% who think AGIs would achieve the human-like capabilities of a 3rd grader
within the same time frame. This might reflect the fact that too many of the attendees have been influenced by the
famous Eliza experiment, which was a quasi-Turing Test that actually managed to fool some people into thinking they
were reading text generated by a human doctor --- using mid-1960s computers.

I have always assumed the Turing Test would be administered by humans who understood human psychiatry, brain
function, and artificial intelligence sufficiently well that they would be able to smoke out a sub-human intelligence
relatively quickly.

In fact, I am the person quoted in that article for giving my reasons why I thought it would be more difficult to make a
computer pass the Turing Test than to possess many of the other useful intellectual capabilities of a powerful human
mind --- as quoted in the paragraph that follows:

“One observed that “making an AGI capable of doing powerful and creative thinking is probably easier than making one
that imitates the many, complex behaviors of a human mind — many of which would actually be hindrances when it
comes to creating Nobel-quality science.” He observed “humans tend to have minds that bore easily, wander away from
a given mental task, and that care about things such as sexual attraction, all which would probably impede scientific
ability, rather that promote it.” To successfully emulate a human, a computer might have to disguise many of its abilities,
masquerading as being less intelligent — in certain ways — than it actually was. There is no compelling reason to
spend time and money developing this capacity in a computer.”


I thought the idea --- suggested in one of the survey questions mentioned in the article --- that AGI might be funded
with 100 billion dollars is a little rich. I understand, however, that such a large figure was picked to --- in effect --- ask
people how fast they thought AGI would be developed if money were virtually no obstacle.

I think AGI could be developed over ten years for well under 500 million dollars if the right people were administering
and working on the project. (This does not count all the other money that is already likely to be invested in electronics,
computer science, and more narrow AI in the coming decade.) Unfortunately, it would be hard for the government to
know who were the right people, and what were the right approaches, for such a project. But I believe a well designed
project, designed to achieve human level AGI, almost certainly could succeed in ten years with only 2 to 4 billion dollars
of funding over that period. Such a project would fund multiple teams with, say, 10 to 30 million dollars to start, and then
increasingly allocate funding over time to the teams and approaches that produced the most promising results.

2 to 4 billion dollars over ten years would be totally within the funding capacity of multiple government agencies.

Developing AGI in that time frame would be exceptionally valuable to America --- because it would give us a tremendous
chance to save our economy before it is bled to death --- by our trade imbalance with the rapidly developing world ---
and --- by the many tens of trillions of dollars in health care and other unfunded benefits America owes its seniors
and government workers.

Ed Porter
#4   Ed Porter --- Posted 24 February 2010 - 11:32 PM
Ed Porter, on 04 February 2010 - 11:40 PM, said:

WILL IT WORK?
============

The answer is most probably, yes, because ...
...
...many of the learning, inferencing, and inference control mechanisms deployed in more narrow applications can be
generalized to have applicability to AGI.



As evidence of the above statement, I am attaching a link to a lecture by Pedro Domingos of the University of
Washington on what he views as a highly generalized AI learning and inferencing system using Markov logic networks.
http://videolectures...domingos_mlwuv/ This representation shares many features with the hypergraph representation in
OpenCogPrime by Goertzel et al.
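
For readers unfamiliar with Markov logic, the core idea fits in a dozen lines: attach weights to logical formulas, and
make each possible world's probability proportional to the exponentiated sum of the weights of the formulas it
satisfies. A toy Python illustration (the rule and weight are invented for the example):

    import itertools, math

    # One weighted rule over two ground atoms, Smokes(A) and Cancer(A):
    #   Smokes(A) => Cancer(A), with weight 1.5.
    w = 1.5
    def satisfied(smokes, cancer):
        return (not smokes) or cancer      # material implication

    worlds = list(itertools.product([False, True], repeat=2))
    scores = {wld: math.exp(w * satisfied(*wld)) for wld in worlds}
    z = sum(scores.values())               # the partition function
    for (smokes, cancer), s in scores.items():
        print(f"Smokes={smokes!s:5} Cancer={cancer!s:5} P={s / z:.3f}")
    # the one world violating the rule (Smokes true, Cancer false) gets P ~0.07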

There are some other interesting lectures from the same conference at http://videolectures...ciw08_whistler/
#5   Ed Porter --- Posted 28 February 2010 - 06:34 PM
Other voices predicting AI by 2020
=======================


In the main post above I stated human-level AI could probably be built within roughly a decade, by 2020.

That is much sooner than the conventional wisdom in the AI community. But there are some very knowledgeable people
who share my guess of approximately 2020. And some of them have considerable resources to throw at the problem.

In a Google Tech Talk, recorded in May 2006, Doug Lenat mentioned in passing that Sergey Brin, one of the two
founders of Google, had said AI could be built by 2020. Doug Lenat is head of Cycorp, the corporate continuation of
one of the largest and longest-running big-AI projects. Lenat’s talk is at http://video.google....88615049492068# . It
provides a good overview of Cycorp’s Cyc system, and has an amusing introduction of Doug by Peter Norvig, co-author
of one of the leading textbooks on AI and Google’s director of research.

In response to Lenat’s statement about Brin’s projection, I did a brief web search to see if I could find exactly what Brin
had said about achieving AI by 2020. I was unable to find any other reference to the quote. But I did find the following
information relevant to Google’s pursuit of AI and to the 2020 estimate.

As was cited on multiple web sites --- including http://www.naturalse...brave-new-world --- Google’s Larry Page said, at
the 2007 conference of the American Association for the Advancement of Science, that researchers at Google were
working on developing artificial intelligence. He said human brain algorithms actually weren’t all that
complicated and could likely be approximated with sufficient computational power. He said, “We have some people at
Google (who) are really trying to build artificial intelligence and to do it on a large scale. It’s not as far off as people
think.”

According to http://www.alexandri...ndex.php?s=brin : Sergey Brin is reported to have said that the perfect search
engine would “look like the mind of God”. Similar ideas, but less extravagantly worded, have come from Marissa Mayer,
Google’s VP of Search Products and User Experience, when she talked about how Google’s massive data stores and
sophisticated algorithms are acting more and more like “intelligence”.

In 2008 Nicholas Carr --- who served as executive editor of the Harvard Business Review, and who has written
extensively on information technology --- wrote a book entitled The Big Switch: Rewiring the World, From Edison to
Google. A review of it, at http://computersight...-nicholas-carr/ , says:

“the book discussed the future of computing. The main discussion was with Google founders, Larry Page and Sergey
Brin, about their dream of what their search engine will do in the coming years. According to Page and Brin, artificial
intelligence is the main goal of those behind the future of Google. Google wants to link the human brain with the
computer to share its search engine. The author also spoke about advancements Microsoft and other Computer
Scientists want for the future of computing. …According to Carr, in 2020, Google’s dream may come true.”

At http://www.forbes.co...1computing.html , Andy Greenberg of Forbes.com interviews Carr about his book. Below is an
excerpt:

[AG] Looking further ahead at Google's intentions, you write in The Big Switch that Google's ultimate plan is to create
artificial intelligence. How does this follow from what the company's doing today?

[NC] It's pretty clear from what [Google co-founders] Larry Page and Sergey Brin have said in interviews that Google
sees search as essentially a basic form of artificial intelligence. A year ago, Google executives said the company had
achieved just 5% of its complete vision of search. That means, in order to provide the best possible results, Google's
search engine will eventually have to know what people are thinking, how to interpret language, even the way users'
brains operate.

Google has lots of experts in artificial intelligence working on these problems, largely from an academic perspective.
But from a business perspective, artificial intelligence's effects on search results or advertising would mean huge
amounts of money.

[AG] You've also suggested that Google wants to physically integrate search with the human brain.

[NC] This may sound like science fiction, but if you take Google's founders at their word, this is one of their ultimate
goals. The idea is that you no longer have to sit down at a keyboard to locate information. It becomes automatic, a sort
of machine-mind meld. Larry Page has discussed a scenario where you merely think of a question, and Google
whispers the answer into your ear through your cellphone.

[AG] What would an ultra-intelligent Google of the future look like?


[NC] I think it's pretty clear that Google believes that there will eventually be an intelligence greater than what we think of
today as human intelligence. Whether that comes out of all the world's computers networked together, or whether it
comes from computers integrated with our brains, I don't know, and I'm not sure that Google knows. But the top
executives at Google say that the company's goal is to pioneer that new form of intelligence. And the more closely that
they can replicate or even expand how peoples' mind works, the more money they make.

[AG] You don't seem very optimistic about a future where Google is smarter than humans.

[NC] I think if Google's users were aware of that intention, they might be less enthusiastic about the prospect than the
mathematicians and computer scientists at Google seem to be. A lot of people are worried about what a superior
intelligence would mean for human beings.

I'm not talking about Google robots walking around and pushing humans into lines. But Google seems intent on
creating a machine that's able to do a lot of our thinking for us. When we begin to rely on a machine for memory and
decision making, you have to wonder what happens to our free will.

At http://www.latimes.c...-oe-keen12jul12,1,1010933.story?ctrack=1&cset=true , Google CEO Eric Schmidt is reported
to have said in 2007:

“By 2012, he wants Google to be able to tell all of us what we want. This technology, what Google co-founder Larry
Page calls the "perfect search engine," might not only replace our shrinks but also all those marketing professionals
whose livelihoods are based on predicting — or guessing — consumer desires.”

The article also says

“iGoogle is growing into a tightly-knit suite of services — personalized homepage, search engine, blog, e-mail system,
mini-program gadgets, Web-browsing history, etc. — that together will create the world's most intimate information
database. On iGoogle, we all get to aggregate our lives, consciously or not, so artificially intelligent software can sort
out our desires. It will piece together our recent blog posts, where we've been online, our e-commerce history and
cultural interests. It will amass so much information about each of us that eventually it will be able to logically determine
what we want to do tomorrow and what job we want.”

http://www.computerw...prizes_by_2020/ is an article about Ian Pearson, chief futurologist at British Telecom. In it he
says:

“We will probably make conscious machines sometime between 2015 and 2020, I think. But it probably won't be like you
and I. It will be conscious and aware of itself and it will be conscious in pretty much the same way as you and I, but it will
work in a very different way. It will be an alien. It will be a different way of thinking from us, but nonetheless still thinking”

In response to the interviewer pointing out that

“…as soon as machines become intelligent, according to Moore's Law they will soon surpass humans. By the way, BT's
2006 technology timeline predicts that AI entities will be awarded with Nobel prizes by 2020, and soon after robots will
become mentally superior to humans. What comes after that: the super intelligence or God 2.0? “


Pearson responds:
“I think that I would certainly still go along with those time frames for superhuman intelligence, but I won't comment on
God 2.0. I think that we still should expect a conscious computer smarter than people by 2020. I still see no reason why
that it is not going to happen in that time frame. But I don't think we will understand it. The reason is because we don't
even understand how some of the principal functions of consciousness should work. “

Of course, Microsoft Research is also putting a lot of effort into artificial intelligence research. A March 2, 2009 New
York Times article at http://www.nytimes.c.../02compute.html , reports on some of Microsoft’s efforts in the field. Among
other interesting things it says:

“Craig Mundie, the chief research and strategy officer at Microsoft, expects to see computing systems that are about 50
to 100 times more powerful than today’s systems by 2013.

“Most important, the new chips will consume about the same power as current chips, making possible things like a
virtual assistant with voice- and facial-recognition skills that are embedded into an office door.

“We think that in five years’ time, people will be able to use computers to do things that today are just not accessible to
them,” Mr. Mundie said during a speech last week. “You might find essentially a medical doctor in a box, so to speak, in
the form of your computer that could help you with basic, nonacute care, medical problems that today you can get no
medical support for.”

“With such technology in hand, Microsoft predicts a future filled with a vast array of devices much better equipped to
deal with speech, artificial intelligence and the processing of huge databases.”

So, in sum, there is good reason to believe there will be an explosion in AI in the next ten years.








#6   Ed Porter
Posted 11 March 2010 - 08:13 PM
DARPA's deep learning program could advance AGI.

Below is a link to a DARPA request for proposals for a program to perform deep learning. It calls for a system that can
automatically learn patterns of many different types from visual, auditory, and textual data, with little human guidance,
using automatically learned hierarchical invariant representations of the general type described in the first few
paragraphs of the "THE SOFTWARE" section of the above post.

This is the type of project that, if the right people got the funding, could really help advance AGI. It seems like
Numenta, Poggio's group, or Hinton could all submit compelling responses to this proposal. The request says DARPA
is interested in sponsoring multiple teams, and in disseminating much of what is learned to the public to advance the
computing arts.
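
For intuition about what "automatically learned hierarchical invariant representations" could look like in practice, below
is a toy sketch --- my own illustration, not anything taken from the DARPA solicitation --- of greedy layer-wise feature
learning, in which each small tied-weight autoencoder layer is trained on the output of the layer below it:

```python
# Toy sketch of greedy layer-wise feature learning (illustration only; the
# data, sizes, and training schedule here are all invented).
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder_layer(X, n_hidden, lr=0.01, epochs=200):
    """Learn tied weights W so that sigmoid(X W) W^T reconstructs X."""
    W = rng.normal(0.0, 0.1, size=(X.shape[1], n_hidden))
    for _ in range(epochs):
        H = 1.0 / (1.0 + np.exp(-X @ W))       # hidden code (the "features")
        err = H @ W.T - X                      # reconstruction error
        dH = (err @ W) * H * (1.0 - H)         # gradient through the encoder
        W -= lr * (X.T @ dH + err.T @ H) / len(X)
    return W

X = rng.normal(size=(500, 64))                 # stand-in "sensory" input
W1 = train_autoencoder_layer(X, 32)            # layer-1 features
H1 = 1.0 / (1.0 + np.exp(-X @ W1))
W2 = train_autoencoder_layer(H1, 16)           # layer-2 features built on layer 1
```

Real deep learning systems of the kind the solicitation asks for stack many such layers and train them on vast amounts
of visual, auditory, and textual data rather than on random numbers.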

Start reading at page four of http://www.darpa.mil...A-09-40_PIP.pdf


#7   Ed Porter
Posted 19 March 2010 - 04:03 PM
DARPA’s Mind’s Eye project likely to advance AI

======================================

The DARPA “Mind’s Eye” program is another example of an ambitious AI program that is likely to get us closer to
human-level AI. This program will be run out of DARPA's TCTO, or Transformational Convergence Technology Office.

The Mind’s Eye program --- to reach its goals --- has to be able to:

-maintain a fairly large invariant ontology of objects, motions, humans, weapons, military behaviors, scenes, and
scenarios that it can recognize across many different instantiations, forms, views, scales, and lighting conditions;
-do visual scene recognition and understanding;
-understand the behaviors of entities it is seeing;
-map such understandings into a larger, higher-level representation and understanding of what is taking place around it;
-presumably combine audio and visual recognition, since sound is an important source of information on a battlefield;
-perform complex goal pursuit and attention focusing, to decide what to look at and track, and what to spend its optical
and computational resources on; and
-communicate in natural language, or have some other method of creating concise reports for human consumption
and of receiving commands from humans.
In sum, this project would require quite an advanced set of AI capabilities to function well.

The following is quoted from a short PDF at https://www.fbo.gov/...rch2010_(2).pdf , which is intended to spark interest
in attending a meeting at which the project will be discussed in more detail. The BAA for this project does not appear to
have been posted yet.


The Mind’s Eye program seeks to develop in machines a capability that currently exists only in animals: visual
intelligence. Humans in particular perform a wide range of visual tasks with ease, which no current artificial intelligence
can do in a robust way. Humans have inherently strong spatial judgment and are able to learn new spatiotemporal
concepts directly from the visual experience. Humans can visualize scenes and objects, as well as the actions involving
those objects. Humans possess a powerful ability to manipulate those imagined scenes mentally to solve problems. A
machine‐based implementation of such abilities would be broadly applicable to a wide range of applications.

This program pursues the capability to learn generally applicable and generative representations of action between
objects in a scene directly from visual inputs, and then reason over those learned representations. A key distinction
between this research and the state of the art in machine vision is that the latter has made continual progress in
recognizing a wide range of objects and their properties—what might be thought of as the nouns in the description of a
scene. The focus of Mind’s Eye is to add the perceptual and cognitive underpinnings for recognizing and reasoning
about the verbs in those scenes, enabling a more complete narrative of action in the visual experience.

One of the desired military capabilities resulting from this new form of visual intelligence is a smart camera, with
sufficient visual intelligence that it can report on activity in an area of observation. A camera with this kind of visual
intelligence could be employed as a payload on a broad range of persistent stare surveillance platforms, from fixed
surveillance systems, which would conceivably benefit from abundant computing power, to camera‐equipped perch‐and‐
stare micro air vehicles, which would impose extreme limitations on payload size and available computing power. For
the purpose of this research, employment of this capability on man‐portable unmanned ground vehicles (UGVs) is
assumed. This provides a reasonable yet challenging set of development constraints, along with the potential to
transition the technology to an objective ground force capability.

Mind’s Eye strongly emphasizes fundamental research. It is expected that technology development teams will draw
equally from the state of the art in cognitive systems, machine vision, and related fields to develop this new visual
intelligence. To guide this transformative research toward operational benefits, the program will also feature flexible and
opportunistic systems integration. This integration will leverage proven visual intelligence software to develop prototype
smart cameras. Integrators will contribute an economical level of effort during the technology development phase,
supporting participation in phase I program events (PI meetings, demonstrations, and evaluations) as well as
development of detailed systems integration concepts that will be considered by DARPA at appropriate times for
increased effort in phase II systems integration.





#8   Ed Porter
Posted 19 March 2010 - 04:41 PM
DARPA IPTO projects likely to advance AGI
==================================

Here is a summary of the projects of DARPA’s IPTO (Information Processing Techniques Office), taken from its web
site. It shows this office within DARPA is funding a lot of projects that are likely to speed the advance of AI. I have
capitalized the portions of text that seem most relevant to the development of AI. (Apologies to those who view all caps
as screaming; in the limited word processor offered in this forum, it seems the most efficient way to let one scan
highlighted text.) This work could prove particularly important if combined with the type of deep learning DARPA is
proposing, described in one of my posts above, or with DARPA’s neuromorphic computing project, described in a later
post.


============================================================
Cognitive Systems @ http://www.darpa.mil...s/thrust_cs.asp
============================================================
COGNITIVE COMPUTING IS THE DEVELOPMENT OF COMPUTER TECHNIQUES TO EMULATE HUMAN PERCEPTION,
INTELLIGENCE AND PROBLEM SOLVING. Cognitive systems offer some important advantages over conventional
computing approaches. For example, COGNITIVE SYSTEMS CAN LEARN FROM EVENTS THAT OCCUR IN THE REAL
WORLD and so are better suited to applications that require EXTRACTING AND ORGANIZING INFORMATION IN
COMPLEX UNSTRUCTURED SCENARIOS than conventional computing systems, which must have the right models
built in a priori in order to be effective. Because many of the challenges faced by military commanders involve vast amounts
of data from sensors, databases, the Web and human sources, IPTO is creating cognitive systems that CAN LEARN
AND REASON TO STRUCTURE MASSIVE AMOUNTS OF RAW DATA INTO USEFUL, ORGANIZED KNOWLEDGE WITH
A MINIMUM OF HUMAN ASSISTANCE. IPTO is implementing cognitive technology in systems that support warfighters in
the decision-making, management, and understanding of complexity in traditional and emergent military missions.
These cognitive systems WILL UNDERSTAND WHAT THE USER IS REALLY TRYING TO DO AND PROVIDE
PROACTIVE INTELLIGENCE, ASSISTANCE AND ADVICE. Finally, the increasing complexity, rigidity, fragility and
vulnerability of modern information technology has led to ever-growing manpower requirements for IT support. The
incorporation of COGNITIVE CAPABILITIES IN INFORMATION SYSTEMS WILL ENABLE THEM TO SELF-MONITOR,
SELF-CORRECT, AND SELF-DEFEND AS THEY EXPERIENCE SOFTWARE CODING ERRORS, HARDWARE FAULTS
AND CYBER-ATTACK.

Programs

Advanced Soldier Sensor Information System and Technology (ASSIST)
---------------------------------------------------------------------------------------------------------
The main goal of the program is to enhance battlefield awareness via exploitation of soldier-collected information. The
program will demonstrate advanced technologies and an integrated system for processing, digitizing and disseminating
key data and knowledge captured by and for small squad leaders.

Bootstrapped Learning (BL)
---------------------------------------------------------------------------------------------------------
THE BOOTSTRAPPED LEARNING PROGRAM SEEKS TO MAKE INSTRUCTABLE COMPUTING A REALITY. THE
"ELECTRONIC STUDENT" WILL LEARN FROM A HUMAN TEACHER WHO USES SPOKEN LANGUAGE, GESTURES,
DEMONSTRATION, AND MANY OTHER METHODS ONE WOULD FIND IN A HUMAN MENTORED RELATIONSHIP.
FURTHERMORE, IT WILL BUILD UPON LEARNED CONCEPTS AND APPLY THAT KNOWLEDGE ACROSS DIFFERENT
FIELDS OF STUDY.

EMBEDDING BL TECHNOLOGY IN COMPUTING SYSTEMS WILL ELIMINATE THE NEED FOR TRAINED
PROGRAMMERS IN MANY PRACTICAL SETTINGS, significantly accelerating human-machine instruction, and making
possible on-the-fly upgrades by domain experts rather than computer experts. Target applications include a variety of
field-trainable military systems, such as human-instructable unmanned aerial vehicles. However, BL technology is being
developed and tested against a portfolio of training tasks across very diverse domains, thus it can be applied to any
programmable, automated system. As such systems have become ubiquitous, and their operation inaccessible to the
layperson, there is also the strong prospect of societal adoption and benefit.

Brood of Spectrum Supremacy (BOSS)
---------------------------------------------------------------------------------------------------------
The goal of the Brood of Spectrum Supremacy (BOSS) program is to provide a radio frequency (RF) spectrum
analogue to night vision capabilities for the tactical warfighter, with a particular focus on RF-rich urban operations. The
program is intended to apply collaborative processing capabilities for software-defined radios to specific military
applications.

Cyber Trust (CT)
---------------------------------------------------------------------------------------------------------
The Cyber Trust program will create the technology and techniques to enable trustworthy information systems by:
1. Developing hardware, firmware, and microkernel architectures as necessary to provide foundational security for
operating systems and applications.
2. Developing tools to find vulnerabilities in complex open source software.
3. Developing scalable formal methods to formally verify complex hardware/software.

Integrated Learning (IL)
---------------------------------------------------------------------------------------------------------
The Integrated Learning program SEEKS TO ACHIEVE REVOLUTIONARY ADVANCES IN MACHINE LEARNING BY
CREATING SYSTEMS THAT OPPORTUNISTICALLY ASSEMBLE KNOWLEDGE FROM MANY DIFFERENT SOURCES
IN ORDER TO LEARN. THE GOAL IS TO MOVE BEYOND THE CURRENT STATISTICALLY-ORIENTED PARADIGMS
VIA THE INTEGRATION OF EXISTING LEARNING, REASONING, AND KNOWLEDGE REPRESENTATION
TECHNOLOGIES INTO A COHERENT ARTIFACT THAT WILL BE ABLE TO LEARN MUCH MORE QUICKLY AND
ROBUSTLY IN A WIDER RANGE OF APPLICATIONS. The program is FOCUSED UPON LEARNING MODELS OF
ACTION FROM VERY SPARSE DATA, which will provide the ability to develop more effective military decision/planning
support systems at lower costs. Target applications include military airspace management and medical logistics.

LANdroids
---------------------------------------------------------------------------------------------------------
Communications are essential to warfighters - they enable warfighters to share situational awareness and to stay
coordinated with each other and command. Communications are important for voice and data and the importance for
data traffic will only increase in the future. The problem is that urban settings hinder communications. Buildings, walls,
vehicles, etc., create obstacles that impact the manner in which radio signals propagate. The net result is unreliable
communications in these settings, which can leave warfighters, sensors, etc., without the benefit of reach back to
command or each other.

This program will help to solve the urban communications problem by CREATING INTELLIGENT AUTONOMOUS
ROBOTIC RADIO RELAY NODES, CALLED LANDROIDS (LOCAL AREA NETWORK DROIDS), WHICH WORK TO
ESTABLISH AND MAINTAIN MESH NETWORKS THAT SUPPORT VOICE AND DATA TRAFFIC. Through autonomous
movement and intelligent control algorithms, LANdroids can mitigate many of the communications problems present in
urban settings, e.g., relaying signals into shadows and making small adjustments to reduce multi-path effects.

LANdroids will be pocket-sized and inexpensive. The concept of operations is that warfighters will carry several
LANdroids, which they drop as needed during deployment. The LANdroids then form the mesh network and work to
maintain it - establishing a communications infrastructure that supports the warfighters in that region.
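
As a toy illustration of the kind of self-placement logic such relay nodes would need (all numbers and the one-
dimensional layout are invented for the example), the sketch below checks a chain of nodes for hops that exceed radio
range and drops a relay into the widest gap:

```python
# Toy LANdroids-style relay placement on a line (illustration only).
RADIO_RANGE = 30.0  # meters; assumed figure, not from the program

def gaps(positions):
    """Return (width, midpoint) for each gap between adjacent nodes."""
    xs = sorted(positions)
    return [(b - a, (a + b) / 2.0) for a, b in zip(xs, xs[1:])]

def place_relay(positions):
    """If any hop exceeds radio range, drop a relay at the widest gap."""
    widest, midpoint = max(gaps(positions))
    return positions + [midpoint] if widest > RADIO_RANGE else positions

nodes = [0.0, 25.0, 70.0]      # command at 0, one droid at 25, squad at 70
nodes = place_relay(nodes)     # the 25-to-70 hop is 45 m, so add a relay
print(sorted(nodes))           # [0.0, 25.0, 47.5, 70.0]
```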

Machine Reading (MR)
---------------------------------------------------------------------------------------------------------
The Machine Reading Program WILL BUILD A UNIVERSAL TEXT ENGINE THAT CAPTURES KNOWLEDGE FROM
NATURALLY OCCURRING TEXT AND TRANSFORMS IT INTO THE FORMAL REPRESENTATIONS USED BY
ARTIFICIAL INTELLIGENCE (AI) REASONING SYSTEMS. The Machine Reading Program will create an automated
reading system that SERVES AS A BRIDGE BETWEEN KNOWLEDGE CONTAINED IN NATURAL TEXTS AND THE
FORMAL REASONING SYSTEMS THAT NEED SUCH KNOWLEDGE.
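
The actual Machine Reading goal, open-domain text, is of course far harder than anything a few lines can show, but as
a toy illustration of the "text to formal representation" idea, here is a pattern-based extractor (the patterns and
sentences are invented) that turns simple sentences into subject-relation-object triples a reasoner could consume:

```python
# Toy "machine reading": map simple sentences to formal triples.
import re

PATTERN = re.compile(
    r"^(?P<subj>[A-Z]\w+) (?P<rel>is a|is located in|commands) (?P<obj>.+)\.$")

def extract_triple(sentence):
    m = PATTERN.match(sentence)
    return (m["subj"], m["rel"], m["obj"]) if m else None

kb = [extract_triple(s) for s in [
    "Fallujah is located in Iraq.",
    "Petraeus commands the unit.",
]]
print(kb)  # [('Fallujah', 'is located in', 'Iraq'), ('Petraeus', 'commands', 'the unit')]
```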

Personalized Assistant that Learns (PAL)
---------------------------------------------------------------------------------------------------------
The mission of the PAL program is TO RADICALLY IMPROVE THE WAY COMPUTERS SUPPORT HUMANS BY
ENABLING SYSTEMS THAT ARE COGNITIVE, I.E., COMPUTER SYSTEMS THAT CAN REASON, LEARN FROM
EXPERIENCE, BE TOLD WHAT TO DO, EXPLAIN WHAT THEY ARE DOING, REFLECT ON THEIR EXPERIENCE, AND
RESPOND ROBUSTLY TO SURPRISE. MORE SPECIFICALLY, PAL WILL DEVELOP A SERIES OF PROTOTYPE
COGNITIVE SYSTEMS THAT CAN ACT AS AN ASSISTANT FOR COMMANDERS AND STAFF. Successful completion
of this program will usher in a new era of computational support for a broad range of human activity.

Current software systems - in the military and elsewhere - are plagued by brittleness and the inability to deal with
changing and novel situations - and must therefore be painstakingly programmed for every contingency. If PAL
succeeds it could result in software systems that could learn on their own - that could adapt to changing situations
without the need for constant reprogramming. PAL technology could drastically reduce the money spent by DoD on
information systems of all kinds.

This is the FIRST BROAD-BASED RESEARCH PROGRAM IN COGNITIVE SYSTEMS SINCE THE STRATEGIC
COMPUTING INITIATIVE FUNDED BY DARPA IN THE 1980S. Since then, there have been significant developments in
the technologies needed to enable cognitive systems, such as machine learning, reasoning, perception, and, multi-
modal interaction. Improvements in processors, memory, sensors and networking have also dramatically changed the
context of cognitive systems research. It is now time to encourage the various areas to come together again by
focusing on a common application problem: a Personalized Assistant that Learns.

Developing cognitive systems that learn to adapt to their user could dramatically improve a wide range of military
operations. The development and application of intelligent systems to support military decision-making may provide
dramatic advances for traditional military roles and missions. The technologies developed under the PAL program are
intended to make military decision-making more efficient and more effective at all levels.

For example, today's command centers require hundreds of staff members to support a relatively small number of key
decision-makers. If PAL succeeds, and develops a new capability for "cognitive assistants," those assistants could
eliminate the need for large command staffs - enabling smaller, more mobile, less vulnerable command centers.

Self-Regenerative Systems (SRS)
---------------------------------------------------------------------------------------------------------
The goal of the SRS program is to develop technology for building military computing systems that provide critical
functionality at all times, in spite of damage caused by unintentional errors or attacks. All current systems suffer
eventual failure due to the accumulated effects of errors or attacks. The SRS program aims to develop technologies
enabling military systems to learn, regenerate themselves, and automatically improve their ability to deliver critical
services. If successful, self-regenerative systems will show a positive trend in reliability, actually exceeding initial
operating capability and approaching a theoretical optimal performance level over long time intervals.

Situation Aware Protocols in Edge Network Technologies (SAPIENT)
---------------------------------------------------------------------------------------------------------
The mission of the Situation Aware Protocols in Edge Network Technologies (SAPIENT) program is to create a new
generation of adaptive communication systems that achieve new levels of functionality through situation-awareness.

Transfer Learning (TL)
---------------------------------------------------------------------------------------------------------
The TRANSFER LEARNING PROGRAM SEEKS TO SOLVE THE PROBLEM OF REUSING KNOWLEDGE DERIVED IN
ONE DOMAIN TO HELP EFFECT SOLUTIONS IN ANOTHER DOMAIN. Adaptive systems, systems that respond to
changes in their environment, stand to benefit significantly from the application of TL technology. Today's adaptive
systems need to be trained for every new situation they encounter. This requires building new training data, which is
the most expensive and most limiting aspect of deploying such systems. The TL PROGRAM ADDRESSES THIS
SHORTCOMING BY IMBUING ADAPTIVE SYSTEMS WITH THE ABILITY TO ENCAPSULATE WHAT THEY HAVE
LEARNED AND APPLY THIS KNOWLEDGE TO NEW SITUATIONS. Thus, rather than having to be retrained for each
new context, TL enables systems to leverage what they have already learned in order to be effective much sooner and
with less effort spent on training. Early applications of TL technology include adaptive ISR systems, robotic vision and
manipulation, and automated population of databases from unstructured text.
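
As a toy sketch of the core transfer idea (my own illustration with made-up data, not the TL program's method): a
feature representation learned on a data-rich source task is reused, so a new target task can be fit from only a handful
of labeled examples instead of being retrained from scratch:

```python
# Toy transfer learning: reuse source-task features on a small target task.
import numpy as np

rng = np.random.default_rng(1)

# Source task: plenty of data; learn a feature projection (PCA here).
X_source = rng.normal(size=(1000, 20))
_, _, Vt = np.linalg.svd(X_source - X_source.mean(0), full_matrices=False)
encode = lambda X: X @ Vt[:5].T        # reusable 5-dimensional features

# Target task: only ten labeled examples; fit a tiny linear classifier on
# the transferred features rather than on the raw 20-dimensional input.
X_target = rng.normal(size=(10, 20))
y_target = rng.integers(0, 2, size=10)
F = encode(X_target)
w, *_ = np.linalg.lstsq(F, y_target * 2.0 - 1.0, rcond=None)
predictions = (F @ w > 0).astype(int)
```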




============================================================
Command & Control @ http://www.darpa.mil...s/thrust_cc.asp
============================================================
Command and control is the exercise of authority and direction by a properly designated commander over assigned
and attached forces in the accomplishment of a mission. Without question the missions faced by our warfighters today
(such as counter-insurgency) and the operational environments (such as cities) are more complex and dangerous than
ever before. While following their rules of engagement, warfighters must make rapid decisions based on limited
observables interpreted in the context of the evolving situation. Command and control systems must augment the
observables within constrained timelines and present actionable results to the warfighter. IPTO ENABLES WARFIGHTER
SUCCESS BY CREATING TECHNOLOGIES AND SYSTEMS THAT PROVIDE TAILORED, CONSISTENT, PREDICTIVE
SITUATION AWARENESS ACROSS ALL COMMAND ELEMENTS, AND CONTINUOUS SYNCHRONIZATION OF
SENSING, STRIKE, COMMUNICATIONS, AND LOGISTICS TO MAXIMIZE THE EFFECTIVENESS OF MILITARY
OPERATIONS WHILE MINIMIZING UNDESIRABLE SIDE EFFECTS. In counter-insurgency operations, targets of interest
are often not known until a significant event (e.g. detonation of IED) occurs. In those instances, reliably and quickly
determining the origin of the devices/vehicles becomes the key to preventing subsequent attacks. IPTO is creating
systems that collect wide area observables in the absence of any strong a priori cues, analyze the prior time history of
events and track insurgent activities to their point of origin.

Programs


Conflict Modeling, Planning, and Outcomes Experimentation (COMPOEX)
---------------------------------------------------------------------------------------------------------
DARPA's Conflict Modeling, Planning, and Outcomes Experimentation (COMPOEX) program is developing a suite of
tools that will help military commanders and their civilian counterparts to plan, analyze and conduct complex campaigns.
"Complex" here refers to those operations - often of long duration and large scale - that require careful consideration of
not only traditional military, but also political, social, economic actions and ramifications.

Deep Green (DG)
---------------------------------------------------------------------------------------------------------
The Deep Green concept is an innovative approach to using simulation to support ongoing military operations while
they are being conducted. The basic approach is to MAINTAIN A STATE-SPACE GRAPH OF POSSIBLE FUTURE
STATES. SOFTWARE AGENTS USE INFORMATION ON THE TRAJECTORY OF THE ONGOING OPERATION, VICE A
PRIORI STAFF ESTIMATES AS TO HOW THE BATTLE MIGHT UNFOLD, AS WELL AS SIMULATION TECHNOLOGIES,
TO ASSESS THE LIKELIHOOD OF REACHING SOME SET OF POSSIBLE FUTURE STATES. THE LIKELIHOOD,
UTILITY, AND FLEXIBILITY OF POSSIBLE FUTURE NODES IN THE STATE SPACE GRAPH ARE COMPUTED AND
EVALUATED TO FOCUS THE PLANNING EFFORTS. This notion is called anticipatory planning and involves the
generation of options (either manual or semi-automated) ahead of "real time," before the options are needed. In
addition, the Deep Green concept provides mechanisms for adaptive execution, which can be described as "late
binding," or choosing a branch in the state space graph at the last moment to maintain flexibility. By using information
acquired from the ongoing operation, rather than assumptions made during the planning phase, commanders and
staffs can make more informed choices and focus on building options for futures that are becoming more likely.
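
A minimal sketch of that anticipatory-planning idea (the scoring rule and all numbers below are my own invention for
illustration, not Deep Green's actual method): rank the possible future states in the graph by likelihood and utility,
favoring those with few options already planned, so the staff knows where to generate options next:

```python
# Toy "anticipatory planning": decide which possible futures to plan for.
from dataclasses import dataclass

@dataclass
class FutureState:
    name: str
    likelihood: float  # updated from the observed trajectory, not a priori
    utility: float     # value the commander places on being ready for it
    options: int       # branches already planned out of this node

def planning_priority(s: FutureState) -> float:
    """Focus effort on likely, important futures that still lack options."""
    return s.likelihood * s.utility / (1 + s.options)

futures = [
    FutureState("enemy withdraws north", 0.5, 0.6, 3),
    FutureState("ambush at river crossing", 0.3, 0.9, 0),
    FutureState("status quo holds", 0.2, 0.2, 2),
]
for s in sorted(futures, key=planning_priority, reverse=True):
    print(f"{s.name}: priority {planning_priority(s):.2f}")
```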

Heterogeneous Airborne Reconnaissance Team (HART)
---------------------------------------------------------------------------------------------------------
The complexity of counter-insurgency operations especially in the urban combat environment demands multiple
sensing modes for agility and for persistent, ubiquitous coverage. The HART system implements collaborative control of
reconnaissance, surveillance and target acquisition (RSTA) assets, so that the information can be made available to
warfighters at every echelon.

Persistent Operational Surface Surveillance and Engagement (POSSE)
---------------------------------------------------------------------------------------------------------
The POSSE program is building a REAL-TIME, ALL-SOURCE EXPLOITATION SYSTEM TO PROVIDE INDICATIONS
AND WARNINGS OF INSURGENT ACTIVITY DERIVED FROM AIRBORNE AND GROUND-BASED SENSORS.
Envisioning a day when our sensors can be integrated into a cohesive "ISR Force", it's building AN INTEGRATED
SUITE OF SIGNAL PROCESSING, PATTERN ANALYSIS, AND COLLECTION MANAGEMENT SOFTWARE that will
increase reliability, reduce manpower, and speed up responses.

Predictive Analysis for Naval Deployment Activities (PANDA)
---------------------------------------------------------------------------------------------------------
The current CONOPS for achieving situation awareness in the maritime domain calls for close monitoring of those
entities that we already have reason to be concerned about (i.e., we already suspect are threats or which carry cargos
that could be dangerous in the hands of the wrong people). PANDA will ADVANCE TECHNOLOGIES AND DEVELOP AN
ARCHITECTURE THAT WILL ALERT WATCHSTANDERS TO ANOMALOUS SHIP behavior AS IT OCCURS, allowing
them to detect potentially dangerous behavior before it causes harm. These technologies and systems will be
transitioned to various partners and customers throughout the development process, ensuring that the end product
meets the needs of the services and watchstanders. Participants will work closely with the transition partners to aid in
this process.

Urban Leader Tactical Response, Awareness & Visualization (ULTRA-Vis)
---------------------------------------------------------------------------------------------------------
Current military operations are focusing efforts on urban and asymmetric warfare, as well as distributed operations, but
small unit leaders lack the capability to issue commands and share mission-relevant information in an urban
environment non-line-of-sight. Various factors that can impact mission effectiveness and tempo of operations are:
1. Leaders communicate by shouting and hand signals;
2. Teams operate within earshot and line-of-sight;
3. Intra-squad radios are hard to hear; and
4. Leaders must stop to use handheld displays.
Military operations in the urban terrain (extensive areas with hostile forces, non-combatant populations, and complex
infrastructure) require special capabilities and agility to conduct close-combat operations under highly dynamic,
adverse conditions. In short, tactical leaders need the ability to adapt on the move, coordinate small unit actions and
execute commands across a wider area of engagement. SIGNIFICANT TACTICAL ADVANTAGES COULD BE
REALIZED THROUGH THE SMALL UNIT LEADER'S ABILITY TO INTUITIVELY GENERATE/ROUTE COMMANDS AND
TIMELY ACTIONABLE COMBAT INFORMATION TO THE APPROPRIATE TEAM OR INDIVIDUAL WARFIGHTER IN A
READILY UNDERSTOOD FORMAT THAT AVOIDS INFORMATION OVERLOAD.

============================================================
High Productivity Computing @ http://www.darpa.mil.../thrust_hpc.asp
============================================================
IPTO is DEVELOPING THE HIGH-PRODUCTIVITY, HIGH-PERFORMANCE COMPUTER HARDWARE AND THE
ASSOCIATED SOFTWARE TECHNOLOGY BASE REQUIRED TO SUPPORT FUTURE CRITICAL NATIONAL SECURITY
NEEDS FOR COMPUTATIONALLY-INTENSIVE AND DATA-INTENSIVE APPLICATIONS. THESE TECHNOLOGIES WILL
LEAD TO NEW MULTI-GENERATION PRODUCT LINES OF COMMERCIALLY VIABLE, SUSTAINABLE COMPUTING
SYSTEMS FOR A BROAD SPECTRUM OF SCIENTIFIC AND ENGINEERING APPLICATIONS, including both
supercomputer and embedded computing. The goal is to ensure accessibility and usability of high end computing to a
wide range of application developers, not just computational science experts. This is ESSENTIAL FOR MAINTAINING
THE NATION'S STRENGTH IN SUPERCOMPUTING BOTH FOR ULTRA LARGE-SCALE ENGINEERING APPLICATIONS
AND FOR SURVEILLANCE AND RECONNAISSANCE DATA ASSIMILATION AND EXPLOITATION. ONE OF THE MAJOR
CHALLENGES CURRENTLY FACING THE DOD IS THE PROHIBITIVELY HIGH COST, TIME, AND EXPERTISE
REQUIRED TO BUILD LARGE COMPLEX SOFTWARE SYSTEMS. POWERFUL NEW APPROACHES AND TOOLS ARE
NEEDED TO ENABLE THE RAPID AND EFFICIENT PRODUCTION OF NEW SOFTWARE, INCLUDING SOFTWARE
THAT CAN BE EASILY CHANGED TO ADDRESS NEW REQUIREMENTS AND TO PLATFORM AND ENVIRONMENTAL
PERTURBATIONS. Computing capabilities must progress dramatically if U.S. forces are to exploit an ever-increasing
diversity, quantity, and complexity of sensor and other types of data. Doing so both in command centers and on the
battlefield will require significantly increasing performance and significantly decreasing power and size requirements.

Programs [no descriptions for these programs were available at the time]


Architecture-Aware Compiler Environment (AACE)
---------------------------------------------------------------------------------------------------------

Disruptive Manufacturing Technology, Software Producibility (DMT-SWP)
---------------------------------------------------------------------------------------------------------

High Productivity Computing Systems (HPCS)
---------------------------------------------------------------------------------------------------------


============================================================
Language Processing @ http://www.darpa.mil...s/thrust_lp.asp
============================================================
At present, the exploitation of foreign language speech and text is slow and labor intensive and as a result, the
availability, quantity and timeliness of information from foreign-language sources is limited. IPTO is creating NEW
TECHNOLOGIES AND SYSTEMS FOR AUTOMATING THE TRANSCRIPTION AND TRANSLATION OF FOREIGN
LANGUAGES. These language processing capabilities will enable our military to exploit large volumes of speech and
text in multiple languages, thereby increasing situational awareness at all levels of command. In particular, IPTO is
AUTOMATING THE CAPABILITY TO MONITOR FOREIGN LANGUAGE MEDIA AND TO EXPLOIT FOREIGN LANGUAGE
NEWS BROADCASTS with one-way (foreign-language-to-English) translation technologies. IPTO is also DEVELOPING
HAND-HELD, TWO-WAY (FOREIGN-LANGUAGE-TO-ENGLISH AND ENGLISH-TO-FOREIGN-LANGUAGE) SPEECH-TO-
SPEECH TRANSLATION SYSTEMS that enable the warfighter on the ground to communicate directly with local
populations in their native language. Finally, IPTO is creating TECHNOLOGIES TO EXPLOIT THE INFORMATION
CONTAINED IN HARD-COPY DOCUMENTS AND DOCUMENT IMAGES RESIDENT ON MAGNETIC AND OPTICAL
MEDIA CAPTURED IN THE FIELD. Making full use of all of the information extracted from foreign-language sources
REQUIRES THE CAPABILITY TO AUTOMATICALLY COLLATE, FILTER, SYNTHESIZE, SUMMARIZE, AND PRESENT
RELEVANT INFORMATION IN TIMELY AND RELEVANT FORMS. IPTO is DEVELOPING NATURAL LANGUAGE
PROCESSING SYSTEMS TO ENHANCE LOCAL, REGIONAL AND GLOBAL SITUATIONAL AWARENESS AND
ELIMINATE THE NEED FOR TRANSLATORS AND SUBJECT MATTER EXPERTS AT EVERY MILITARY SITE WHERE
FOREIGN-LANGUAGE INFORMATION IS OBTAINED.

Programs


Global Autonomous Language Exploitation (GALE)
---------------------------------------------------------------------------------------------------------
The goal of the GALE (Global Autonomous Language Exploitation) program is to DEVELOP AND APPLY COMPUTER
SOFTWARE TECHNOLOGIES TO ABSORB, TRANSLATE, ANALYZE, AND INTERPRET HUGE VOLUMES OF SPEECH
AND TEXT IN MULTIPLE LANGUAGES, eliminating the need for linguists and analysts, and automatically providing
relevant, concise, actionable information to military command and personnel in a timely fashion. Automatic processing
"engines" will convert and distill the data, delivering pertinent, consolidated information in easy-to-understand forms to
military personnel and monolingual English-speaking analysts in response to direct or implicit requests.

Multilingual Automatic Document Classification Analysis and Translation (MADCAT)
---------------------------------------------------------------------------------------------------------
The United States has a compelling need for reliable information affecting military command, soldiers in the field, and
national security. Currently, our warfighters encounter foreign language images in many forms, including, but not limited
to graffiti, road signs, printed media, and captured records in the form of paper and computer files. Given the quantity
of foreign language material, it is difficult to interpret the salient pieces of information, much of which is either ignored or
analyzed too late to be of any use. The mission of the Multilingual Automatic Document Classification Analysis and
Translation (MADCAT) Program is to AUTOMATICALLY CONVERT FOREIGN LANGUAGE TEXT IMAGES INTO
ENGLISH TRANSCRIPTS, thus eliminating the need for linguists and analysts while automatically providing relevant,
distilled actionable information to military command and personnel in a timely fashion.

Spoken Language Communication and Translation System for Tactical Use (TRANSTAC)
---------------------------------------------------------------------------------------------------------
Today, phrase-based translation devices are being tactically deployed. These one-way devices translate English input
into pre-recorded phrases in target languages. While such systems are useful in many operational settings, the inability
to translate foreign speech into English is a significant limitation. The mission of the Spoken Language Communication
and Translation System for Tactical Use (TRANSTAC) program is to demonstrate capabilities to rapidly develop and
field TWO-WAY TRANSLATION SYSTEMS THAT ENABLE SPEAKERS OF DIFFERENT LANGUAGES TO
SPONTANEOUSLY COMMUNICATE WITH ONE ANOTHER IN REAL-WORLD TACTICAL SITUATIONS.


============================================================
Sensors & Processing @ http://www.darpa.mil...s/thrust_sp.asp
============================================================
U.S. forces and sensors are increasingly networked across service, location, domain (land, sea and air), echelon, and
platform. This trend increases responsiveness, flexibility and combat effectiveness, but also increases the inherent
complexity of sensor and information management. IPTO is CREATING SYSTEMS THAT CAN DERIVE HIGH-LEVEL
INFORMATION FROM SENSOR DATA STREAMS (FROM BOTH MANNED AND UNMANNED SYSTEMS), PRODUCE
MEANINGFUL SUMMARIES OF COMPLEX DYNAMIC SITUATIONS, AND SCALE TO THOUSANDS OF SOURCES.
Future battlefields will continue to be populated with targets that use mobility and concealment as key survival tactics,
and high-value targets will range from quiet submarines, to mobile missile/artillery, to specific individual insurgents.
IPTO develops and demonstrates system CONCEPTS THAT COMBINE NOVEL APPROACHES TO SENSING, SENSOR
PROCESSING, SENSOR FUSION, AND INFORMATION MANAGEMENT TO ENABLE PERVASIVE AND PERSISTENT
SURVEILLANCE OF THE BATTLESPACE AND DETECTION, IDENTIFICATION, TRACKING, ENGAGEMENT AND
BATTLE DAMAGE ASSESSMENT FOR HIGH-VALUE TARGETS IN ALL WEATHER CONDITIONS AND IN ALL
POSSIBLE COMBAT ENVIRONMENTS. Finally, warfighters in the field must concentrate on observing their immediate
environment but at the same time must maintain awareness of the larger battlespace picture, and as a result they are
susceptible to being swamped by too much detail. IPTO is creating system approaches that can exploit context and
advanced information display/presentation techniques to overcome these challenges.

Programs

Autonomous Real-time Ground Ubiquitous Surveillance - Imaging System (ARGUS-IS)
---------------------------------------------------------------------------------------------------------
The mission of the Autonomous Real-time Ground Ubiquitous Surveillance - Imaging System (ARGUS-IS) program is to
provide military users a flexible and responsive capability to find, track and monitor events and activities of interest on a
continuous basis in areas of interest.

The overall objective is to increase situational awareness and understanding enabling an ability to find and fix critical
events in a large area in enough time to influence events. ARGUS-IS provides military users an "eyes-on" persistent
wide area surveillance capability to support tactical users in a dynamic battlespace or urban environment.

FOPEN Reconnaissance, Surveillance, Tracking and Engagement Radar (FORESTER)
---------------------------------------------------------------------------------------------------------
The Foliage Penetration Reconnaissance, Surveillance, Tracking and Engagement Radar (FORESTER) is a joint
DARPA/Army program to develop and demonstrate an advanced airborne UHF radar capable of detecting people and
vehicles moving under foliage. FORESTER will provide robust, wide-area, all-weather, persistent stand-off coverage of
moving vehicles and dismounted troops under foliage, filling the surveillance gap that currently exists.

Multispectral Adaptive Networked Tactical Imaging System (MANTIS)
---------------------------------------------------------------------------------------------------------
The MANTIS program will develop, integrate and demonstrate A SOLDIER-WORN VISUALIZATION SYSTEM,
CONSISTING OF A HEAD-MOUNTED MULTISPECTRAL SENSOR SUITE WITH A HIGH RESOLUTION DISPLAY AND A
HIGH PERFORMANCE VISION PROCESSOR (ASIC), CONNECTED TO A SOLDIER-WORN POWER SUPPLY AND
RADIO. The helmet-mounted MANTIS Vision Processor will provide the soldier with digitally fused, multispectral video
imagery in real time from the Visible/Near Infrared (VNIR), the Short Wave Infrared (SWIR) and the Long Wave Infrared
(LWIR) helmet-mounted sensors via the high resolution visor display. The processor adaptively fuses the digital
imagery from the multispectral sensors providing the highest context, best nighttime imagery in real-time under varying
battlefield conditions. The system also ALLOWS THE VIDEO IMAGERY TO BE RECORDED AND PLAYED BACK ON
DEMAND AND ALLOWS THE OVERLAY OF BATTLEFIELD INFORMATION. MANTIS will exploit the existing soldier radio
network and PROVIDE SOLDIER-TO-SOLDIER SHARING OF VIDEO CLIPS VIEWED AS PICTURE-IN-PICTURE ON
THEIR HELMET MOUNTED DISPLAYS. MANTIS WILL "regain the nighttime advantage" and "EXPLOIT THE NET" TO
PROVIDE THE INDIVIDUAL SOLDIER WITH UNPRECEDENTED SITUATIONAL AWARENESS.

NetTrack (NT)
---------------------------------------------------------------------------------------------------------
PERSISTENT RECONNAISSANCE, SURVEILLANCE, TRACKING AND TARGETING OF EVASIVE VEHICLES IN
CLUTTERED ENVIRONMENTS.

Quint Networking Technology (QNT)
---------------------------------------------------------------------------------------------------------
In a network centric battle space, U.S. Forces must exploit distributed sensor platforms to rapidly and precisely find, fix,
track, and engage static and moving targets in real time. There are several relevant thrusts to time critical targeting and
strike areas within the Services. One aspect of these thrusts is to use data links to fully integrate tactical UAVs,
dismounted ground forces and weapon control into the future network centric warfare environment.

The Quint Networking Technology (QNT) is a modular network data link program focused on providing a multi-band
modular capability to close the seams between five nodes - Aircraft, UCAV, Weapons, tactical UAV and dismounted
ground forces. The specific intended QNT hardware users are weapons, air control forces on the ground (dismounted)
and tactical UAV's. These three are the focal points of the QNT effort with the other two elements using hardware and
waveforms from established programs. The assumption is these other two types of platforms provide a starting point for
building capability for the other three elements.

Standoff Precision ID in 3-D (SPI-3D)
---------------------------------------------------------------------------------------------------------
The SPI-3D program will develop and demonstrate the ability to provide precision geolocation of ground targets
combined with high-resolution 3D imagery at useful standoff ranges. These dual capabilities will be provided using a
sensor package composed of commercially available components. It will be capable of providing "optical quality
precision at radar standoff ranges" and have the ability to overcome limited weapons effects obscuration, and
penetrate moderate foliage.

Urban Reasoning and Geospatial Exploitation Technology (URGENT)
---------------------------------------------------------------------------------------------------------
The recognition of targets in urban environments poses unique operational challenges for the warfighter. Historically,
target recognition has focused on conventional military objects, with particular emphasis on military vehicles such as
tanks and armored personnel carriers. In many cases, these threats exhibit unique signatures and are relatively
geographically isolated from densely populated areas. The same cannot be said of today's asymmetric threats, which
are embedded in urban areas, thereby forcing U.S. Forces to engage enemy combatants in cities with large civilian
populations. Under these conditions, even the most common urban objects can have tactical significance: trash cans
can contain improvised explosive devices, doors can conceal snipers, jersey barriers can block troop ingress, roof tops
can become landing zones, and so on. Today's urban missions involve analyzing a multitude of urban objects in the
area of regard. As military operations in urban regions have grown, the need to identify urban objects has become an
important requirement for the military. URGENT WILL ENABLE UNDERSTANDING THE LOCATIONS, SHAPES, AND
CLASSIFICATIONS OF OBJECTS FOR A BROAD RANGE OF PRESSING URBAN MISSION PLANNING ANALYTICAL
QUERIES (E.G., FINDING ALL ROOF TOP LANDING ZONES ON THREE STORY BUILDINGS CLEAR OF VERTICAL
OBSTRUCTIONS AND VERIFYING INGRESS ROUTES WITH MAXIMUM COVER FOR GROUND TROOPS). IN
ADDITION, URGENT WILL ENABLE AUTOMATED TIME-SENSITIVE SITUATION ANALYSIS (E.G., ALERTING FOR
VEHICLES FOUND ON A ROAD SHOULDER AFTER DARK AND ESTIMATING DAMAGE TO A BUILDING EXTERIOR
AFTER AN EXPLOSION) THAT WILL MAKE A SIGNIFICANT POSITIVE IMPACT ON URBAN OPERATIONS.

Vehicle and Dismount Exploitation Radar (VADER)
---------------------------------------------------------------------------------------------------------
VADER is a RADAR SYSTEM DESIGNED TO ENABLE THE SURVEILLANCE AND TRACKING OF GROUND VEHICLES
AND DISMOUNTS from a Warrior (or similar) unmanned aerial vehicle (UAV) platform. VADER will PROVIDE REAL-TIME
DATA PRODUCTS TO COMMAND ECHELONS AT WHICH THE REAL-TIME INFORMATION WILL BE IMMEDIATELY
ACTIONABLE. For example, a warfighter could use the Warrior UAV with VADER installed to monitor a road, track a
vehicle to a stop, OBSERVE DISMOUNT MOTION NEAR THE VEHICLE, CHARACTERIZE CERTAIN MOTIONS (LIKE
SOMEONE CARRYING A HEAVY LOAD), AND MEASURE A GROUND DISTURBANCE AFTER THE VEHICLE DEPARTS.

Video and Image Retrieval and Analysis Tool (VIRAT)
---------------------------------------------------------------------------------------------------------
The overall goal of the Video and Image Retrieval and Analysis Tool (VIRAT) program is to produce A SCALABLE AND
EXTENSIBLE END-TO-END SYSTEM THAT ENABLES MILITARY ANALYSTS TO OBTAIN GREATER VALUE FROM
AERIAL VIDEO COLLECTED IN COMBAT ENVIRONMENTS.


============================================================
Emerging Technologies @ http://www.darpa.mil...s/thrust_ep.asp
============================================================
IPTO is EXPLORING SEVERAL EMERGING INFORMATION PROCESSING TECHNOLOGIES INCLUDING NOVEL USES
OF MODELING AND SIMULATION TO CREATE NEW BATTLE COMMAND PARADIGMS; REVOLUTIONARY
APPROACHES TO POWER, SIZE AND PROGRAMMABILITY AS ENABLERS FOR COMPUTING AT THE EXASCALE;
COMPUTATIONAL SOCIAL SCIENCE AS THE FOUNDATION FOR BETTER UNDERSTANDING OF THE WORLD
FACED BY THE WARFIGHTER; ADVANCED SENSING ARCHITECTURES INCLUDING NEW SENSING MODALITIES TO
COUNTER DIFFICULT THREATS; AUTOMATED STORAGE, INDEXING, ANALYSIS, CORRELATION, SEARCH, AND
RETRIEVAL OF MULTIMEDIA DATA; AND TECHNIQUES TO ENABLE INFORMATION SHARING ACROSS
ORGANIZATIONAL BOUNDARIES AND ADMINISTRATIVE/SECURITY DOMAINS.

Programs


Advanced Speech Encoding (ASE)
---------------------------------------------------------------------------------------------------------
Speech is the most natural form of human-to-human communications. However, THE MILITARY IS OFTEN FORCED TO
OPERATE IN ENVIRONMENTS WHERE SPEECH IS DIFFICULT. For example, the quality and intelligibility of the
acoustic signal can be severely degraded by HARSH ACOUSTIC NOISE BACKGROUNDS that are common in military
environments. In addition, many situations also require war fighters to operate in silence and in a stealth mode so that
their presence and intent are not compromised. THE ADVANCED SPEECH ENCODING (ASE) PROGRAM WILL
DEVELOP TECHNOLOGY THAT WILL ENABLE COMMUNICATION IN THESE CHALLENGING MILITARY
ENVIRONMENTS.

Information Theory for Mobile Ad-Hoc Networks (ITMANET)
---------------------------------------------------------------------------------------------------------
The mission of the Information Theory for Mobile Ad-Hoc Networks (ITMANET) program is TO DEVELOP AND EXPLOIT
MORE POWERFUL INFORMATION THEORY CONCERNING MOBILE WIRELESS NETWORKS. The hypothesis of this
program is that a specific challenge problem --- better understanding of MANET capacity limits --- will lead to actionable
implications for network design and deployment. The anticipated byproducts of a more evolved theory include new
separation theorems to inform wireless network "layering" as well as new protocol ideas.

Integrated Crisis Early Warning System (ICEWS)
---------------------------------------------------------------------------------------------------------
The Integrated Crisis Early Warning System (ICEWS) program seeks to DEVELOP A COMPREHENSIVE, INTEGRATED,
AUTOMATED, GENERALIZABLE, AND VALIDATED SYSTEM TO MONITOR, ASSESS, AND FORECAST NATIONAL,
SUB-NATIONAL, AND INTERNATIONAL CRISES IN A WAY THAT SUPPORTS DECISIONS ON HOW TO ALLOCATE
RESOURCES TO MITIGATE THEM. ICEWS will provide Combatant Commanders (COCOMs) with a powerful, systematic
capability to anticipate and respond to stability challenges in the Area of Responsibility (AOR); allocate resources
efficiently in accordance with the risks they are designed to mitigate; and track and measure the effectiveness of
resource allocations toward end-state stability objectives, in near-real time.

RealWorld
---------------------------------------------------------------------------------------------------------
The RealWorld program exploits technology innovation to PROVIDE EVERY WARFIGHTER WITH THE ABILITY TO
OPEN A LAPTOP COMPUTER AND RAPIDLY CREATE A MISSION-SPECIFIC SIMULATION IN A RELEVANT GEO-
SPECIFIC 3-D WORLD. currently, major simulation programs are time consuming, expensive, and require graduate-
level expertise in computer programming. realworld will remove these barriers and, for the first time, PUT THE
TACTICAL ADVANTAGE OF REAL-TIME SIMULATION DIRECTLY INTO THE HANDS OF THE WARFIGHTER.
#9   Ed Porter
Posted 09 April 2010 - 09:12 PM
DARPA’s 2-liter, 1 kW, 10^14-synapse AGI brain
=====================================

DARPA’s Defense Sciences Office (DSO) is supporting the Systems of Neuromorphic Adaptive Plastic Scalable
Electronics, or SyNAPSE, project. Its goal, according to its April 8, 2008 BAA (Broad Agency Announcement), is to
create a system with roughly the same number of neurons (10^10) and synapses (10^14) as the human brain --- one
that will fit in a volume of 2 liters or less and draw less than one kilowatt of electric power.
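
Those targets imply striking per-synapse budgets. A quick back-of-envelope computation using only the numbers
quoted above:

```python
# Back-of-envelope arithmetic from the SyNAPSE targets.
synapses, neurons = 1e14, 1e10
volume_cm3 = 2.0 * 1000        # 2 liters in cubic centimeters
power_w = 1000.0

print(f"{power_w / synapses:.0e} W per synapse")        # 1e-11 W, i.e., 10 pW
print(f"{synapses / volume_cm3:.0e} synapses per cm^3") # 5e+10
print(f"{synapses / neurons:.0f} synapses per neuron")  # 10000
```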

The SyNAPSE BAA says:

“The vision for the anticipated DARPA SyNAPSE program is the enabling of electronic neuromorphic machine
technology that is scalable to biological levels. Programmable machines are limited not only by their computational
capacity, but also by an architecture requiring (human-derived) algorithms to both describe and process information
from their environment. In contrast, biological neural systems (e.g., brains) autonomously process information in
complex environments by automatically learning relevant and probabilistically stable features and associations….


and
“Architectures will support critical structures and functions observed in biological systems such as connectivity,
hierarchical organization, core component circuitry, competitive self-organization, and modulatory/reinforcement
systems. As in biological systems, processing will necessarily be maximally distributed, nonlinear, and inherently noise-
and defect-tolerant. “

Giulio Tononi, who has developed “An information integration theory of consciousness” (described at
http://www.biomedcen.../1471-2202/5/42 ), is working on the SyNAPSE project. As is stated in “Cognitive computing:
Building a machine that can learn from experience” (at http://www.physorg.c...s148754667.html ), Tononi is part of a
team that will be developing a prototype neuromorphic system --- with roughly the power of a small mammal's brain ---
for the SyNAPSE project.

“Tononi, professor of psychiatry at the UW-Madison School of Medicine and Public Health and an internationally known
expert on consciousness, is part of a team of collaborators from top institutions who have been awarded a $4.9 million
grant from the Defense Advanced Research Projects Agency (DARPA) for the first phase of DARPA's Systems of
Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.

“Tononi and scientists from Columbia University and IBM will work on the "software" for the thinking computer, while
nanotechnology and supercomputing experts from Cornell, Stanford and the University of California-Merced will create
the "hardware." Dharmendra Modha of IBM is the principal investigator.

'The idea is to create a computer capable of sorting through multiple streams of changing data, to look for patterns and
make logical decisions.

“There's another requirement: The finished cognitive computer should be as small as the brain of a small mammal
and use as little power as a 100-watt light bulb. It's a major challenge. But it's what our brains do every day.”

One of the keys to making the types of compact, low-power, extremely powerful supercomputers SyNAPSE envisions
within this coming decade is the “memristor.”

This is because memristors enable a synapse to be modeled much more compactly than was ever before possible. A
memristor is a type of resistor whose resistance can be varied by changing the magnitude or direction of the current
passed through it, and whose resistance is retained until the next time it is changed. Hewlett-Packard is currently the
world’s leading developer of memristor technology and is an important part of DARPA’s SyNAPSE program.
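
To make that behavior concrete, here is a small simulation sketch of the linear-drift memristor model HP published in
2008 (the parameter values are representative guesses for illustration, not HP's exact figures): resistance falls as
charge flows one way, rises as it flows the other, and simply holds its value when the current stops:

```python
# Sketch of the linear-drift memristor model (parameters are illustrative).
R_ON, R_OFF = 100.0, 16e3   # ohms: fully doped vs. undoped resistance
D = 10e-9                   # device thickness in meters
MU_V = 1e-14                # assumed dopant mobility, m^2 s^-1 V^-1
dt = 1e-3                   # simulation time step in seconds

def step(x, current):
    """Advance the doped fraction x = w/D one step; return (x, resistance)."""
    x = min(1.0, max(0.0, x + MU_V * R_ON / D**2 * current * dt))
    return x, R_ON * x + R_OFF * (1.0 - x)

x = 0.1
for _ in range(1000):       # positive current drives resistance down...
    x, m = step(x, 1e-4)
print(f"after +I: {m:.0f} ohms")
for _ in range(500):        # ...negative current drives it back up,
    x, m = step(x, -1e-4)   # and with zero current x (hence m) is unchanged
print(f"after -I: {m:.0f} ohms")
```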

An article at http://www.newscient...true&print=true states the following about the role of memristors in the SyNAPSE
project:

“So now we've found [memristors], might a new era in artificial intelligence be at hand? The Defense Advanced
Research Projects Agency certainly thinks so. DARPA is a US Department of Defense outfit with a strong record in
backing high-risk, high-pay-off projects - things like the internet. In April last year, it announced the Systems of
Neuromorphic Adaptive Plastic Scalable Electronics Program, SyNAPSE for short, to create "electronic neuromorphic
machine technology that is scalable to biological levels".

“Williams's team from Hewlett-Packard is heavily involved. Late last year, in an obscure US Department of Energy
publication called SciDAC Review, his colleague Greg Snider set out how a memristor-based chip might be wired up to
test more complex models of synapses. He points out that in the human cortex synapses are packed at a density of
about 10^10 per square centimetre, whereas today's microprocessors only manage densities 10 times less. "That is
one important reason intelligent machines are not yet walking around on the street," he says.

'Snider's dream is of a field he calls "cortical computing" that harnesses the possibilities of memristors to mimic how the
brain's neurons interact. It's an entirely new idea. "People confuse these kinds of networks with neural networks," says
Williams. But neural networks - the previous best hope for creating an artificial brain - are software working on standard
computing hardware. "What we're aiming for is actually a change in architecture," he says.

'The first steps are already being taken. Williams and Snider have teamed up with Gail Carpenter and Stephen
Grossberg at Boston University, who are pioneers in reducing neural behaviours to systems of differential equations, to
create hybrid transitor-memristor chips designed to reproduce some of the brain's thought processes. Di Ventra and
his colleague Yuriy Pershin have gone further and built a memristive synapse that they claim behaves like the real thing
(www.arxiv.org/abs/0905.2935).

'The electronic brain will be some time coming. "We're still getting to grips with this chip," says Williams. Part of the problem
is that the chip is just too intelligent - rather than a standard digital pulse it produces an analogue output that
flummoxes the standard software used to test chips. So Williams and his colleagues have had to develop their own test
software. "All that takes time," he says.”

Two recent articles point out successes HP is making in developing memristors. This progress is so impressive that
memristors may well become the major form of the long-anticipated “universal” memory (i.e., memory that can be used
substantially like SRAM, DRAM, and flash are today). But first, ways will have to be found to increase how many times a
memristor can have its value changed, far beyond the number of times flash memory can be changed. People at HP
currently claim to be confident they can achieve such increases.


An April 7, 2010 NYTimes article (at http://www.nytimes.c...ce/08chips.html ) reported that Hewlett-Packard has been
making significant progress on memristor technology. In part it said:

“they had devised a new method for storing and retrieving information from a vast three-dimensional array of
memristors. The scheme could potentially free designers to stack thousands of switches in a high-rise fashion,
permitting a new class of ultradense computing devices even after two-dimensional scaling reaches fundamental limits”

“The most advanced transistor technology today is based on minimum feature sizes of 30 to 40 nanometers…and Dr.
Williams said that H.P. now has working 3-nanometer memristors that can switch on and off in about a nanosecond, or
a billionth of a second.

'He said the company could have a competitor to flash memory in three years that would have a capacity of 20
gigabytes a square centimeter.”

An April 9, 2010 article from EETimes (at http://www.eetimes.c...cleID=224202453 ) stated:

“Hewlett-Packard has demonstrated memristors ("memory resistors") cast in an architecture that can be dynamically
changed between logic operations and memory storage. The configurable architecture demonstrates "stateful logic"
that HP claims could someday obsolete the dedicated central-processing unit (CPU) by enabling dynamically changing
circuits to maintain a constant memory of their state…

“… HP showed that memristive devices could use stateful logic to perform material implication—a "complete" operator
that can be interconnected to create any logical operation, much as early supercomputers were made from NAND
gates. Bertrand Russell espoused material implication in Principia Mathematica, the seminal primer on logic he co-
authored with Alfred Whitehead, but until now engineers have largely ignored the concept.

“HP realized the material implication gate with one regular resistor connected to two memristive devices used as digital
switches (low resistance for "on" and high resistance for "off"). By using three memristors, HP could have realized a
NAND gate and thus re-created the conditions under which earlier supercomputers were conceived. But HP claims that
material implication is better than NAND for memristive devices, because material implication gates can be cast in an
architecture that uses them as either memory or logic, enabling a device whose function can be dynamically changed.”
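
To see why material implication is "complete" in the sense the article describes, note that IMPLY plus a constant FALSE
suffices to build NOT and NAND, and NAND alone suffices for any Boolean circuit. Here is that argument as a few lines
of truth-table logic (an illustration of the logic only; in HP's device the operation is performed physically by two
memristive switches and a resistor):

```python
# Material implication as a universal logic primitive.
def imply(p, q):   # p IMPLY q == (NOT p) OR q
    return (not p) or q

def not_(p):       # NOT p == p IMPLY FALSE
    return imply(p, False)

def nand(p, q):    # NAND(p, q) == p IMPLY (NOT q)
    return imply(p, not_(q))

for p in (False, True):
    for q in (False, True):
        assert nand(p, q) == (not (p and q))
print("IMPLY plus FALSE reproduces NAND, hence all Boolean logic")
```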

All these articles indicate that advances in memristors might well hasten the day when human-level AGIs are created.



For more information on the SyNAPSE project, look at the following two links:

IBM also has part of the SyNAPSE contract, as discussed in the last half of http://www-03.ibm.co...lease/28842.wss

For DSO’s current brief summary of the project see http://www.darpa.mil...napse/index.htm





