The Singularity and human brain capacity

“Hans Moravec provides the following similar chart, which uses a different but overlapping set of historical computers and plots trend lines (slopes) at different points in time. As with the figure above, the slope increases with time, reflecting the second level of exponential growth.

If we project these computational performance trends through this next century, we can see in the figure below that supercomputers will match human brain capability by the end of this decade and personal computing will achieve it by around 2020, or possibly sooner, depending on how conservative an estimate of human brain capacity we use.

The exponential growth of computing is a marvelous quantitative example of the exponentially growing returns from an evolutionary process. We can express the exponential growth of computing in terms of accelerating pace: it took ninety years to achieve the first MIPS per thousand dollars; now we add one MIPS per thousand dollars every five hours.

IBM’s Blue Gene/P supercomputer is planned to have one million gigaflops (billions of floating-point operations per second), or 10^15 calculations per second when it launches in 2007. That’s one tenth of the 10^16 calculations per second needed to emulate the human brain. And if we extrapolate this exponential curve, we get 10^16 calculations per second early in the next decade.

As discussed above, Moore’s Law narrowly refers to the number of transistors on an integrated circuit of fixed size and sometimes has been expressed even more narrowly in terms of transistor feature size. But the most appropriate measure to track price-performance is computational speed per unit cost, an index that takes into account many levels of “cleverness” (innovation, which is to say, technological evolution). In addition to all of the invention involved in integrated circuits, there are multiple layers of improvement in computer design (for example, pipelining, parallel processing, instruction look-ahead, instruction and memory caching, and many others).

The human brain uses a very inefficient electrochemical, digital-controlled analog computational process. The bulk of its calculations are carried out in the interneuronal connections at a speed of only about two hundred calculations per second (in each connection), which is at least one million times slower than contemporary electronic circuits. But the brain gains its prodigious powers from its extremely parallel organization in three dimensions. There are many technologies in the wings that will build circuitry in three dimensions.

We might ask whether there are inherent limits to the capacity of matter and energy to support computational processes. This is an important issue, but we won’t approach those limits until the end of this century. It is important to distinguish between the S-curve that is characteristic of any specific technological paradigm and the continuing exponential growth that is characteristic of the ongoing evolutionary process within a broad area of technology, such as computation.

Specific paradigms, such as Moore’s Law, do ultimately reach levels at which exponential growth is no longer feasible. But the growth of computation supersedes any of its underlying paradigms and is for present purposes an ongoing exponential.”

Ray Kurzweil, The Singularity Is Near, Chapter Two: A Theory of Technology Evolution: The Law of Accelerating Returns
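
To make the arithmetic in the quoted passage a bit more concrete, here is a minimal back-of-the-envelope sketch in Python. The ~10^16 calculations-per-second brain estimate, the 200 calculations per second per connection, and the Blue Gene/P figure of ~10^15 cps in 2007 come from the passage above; the neuron and connection counts and the one-year doubling time are assumptions chosen for illustration, not Kurzweil's exact published parameters.

```python
import math

# Rough functional estimate of brain capacity: neurons x connections x rate.
neurons = 1e11                  # assumed ~100 billion neurons
connections_per_neuron = 1e3    # assumed ~1,000 connections per neuron
calcs_per_connection = 200      # ~200 calculations/second per connection (from the passage)
brain_cps = neurons * connections_per_neuron * calcs_per_connection
print(f"Estimated brain capacity: {brain_cps:.0e} cps")  # ~2e16 cps

# Extrapolating supercomputer performance from Blue Gene/P (~1e15 cps in 2007),
# under an assumed doubling time of roughly one year.
start_year, start_cps = 2007, 1e15
doubling_time_years = 1.0
years_needed = doubling_time_years * math.log2(brain_cps / start_cps)
print(f"Supercomputers reach brain scale around {start_year + years_needed:.0f}")
```

With these assumed numbers the crossover lands around 2011, which is consistent with the passage's claim that supercomputers match human brain capability "by the end of this decade" or "early in the next decade"; a more conservative brain estimate or a slower doubling time pushes the date out accordingly.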
