[00:57:59] <Robin> I do worry that the basement AI that takes over the world gets too large a fraction of attention from people who are willing to seriously think about the future
[00:58:02] <Hank> why do you think the chances of the basement AI are low? Michael Wilson would argue it's a larger engineering challenge than some perceive
[00:58:22] <Robin> economists have studied economic growth a lot
[00:58:23] <mitkat> damn! many of us are in the same boat as far as sharing transhumanist interests with others
[00:58:39] <Robin> the basement AI scenario is a growth scenario, and it is built on growth assumptions
[00:58:56] <Robin> mitkat, no interest at school
[00:59:09] <Hank> so hard-take off is infeasible
[00:59:14] <Hank> from an economic pov
[00:59:17] <Robin> I'd say unlikely
[00:59:18] <kanzure> Robin: But what about the transhumanist student groups at universities?
[00:59:24] <Robin> hard to prove about any such thing
[00:59:34] <Robin> kanzure, none at my univ
[00:59:43] <mitkat> healthy skepticism, lack of introduction to the topics in their circles, or something else responsible, Robin?
[00:59:47] <Robin> growth now is a process of the whole world contributing
[00:59:54] <mitkat> for their disinterest at school
[00:59:58] <Robin> billions of tiny improvements spread across the world
[01:00:01] <kanzure> I've been talking on WTA and the extropian mailing list recently about spawning more research groups. Universities might be a good place to start. Lots of young, motivated students.
[01:00:16] <Robin> most industries are very dependent on the rest of the world for their improvement
[01:00:22] <kanzure> (not just for AI- many other transhumanist projects)
[01:00:42] <Robin> so the idea that one machine in a basement can outgrow the entire rest of the world put together
[01:00:52] <Robin> that seems on its face unlikely
[01:00:58] <Hank> well
[01:01:02] <A941> What's the time in the USA? When will our guest arrive?
[01:01:18] <Robin> A, your guest has been here for 2.5 hours
[01:01:24] <mitkat> lol
[01:01:30] <A941> damn
[01:01:37] <mitkat> A941, Robin's stayed much longer than asked, like a champ
[01:01:39] <Hank> if you are researching and developing new faster, smaller computers, having an AI to help speed things up
[01:01:57] <Hank> that's a positive feedback cycle that can do something
[01:02:04] <Robin> hank today the computer industry grows because the world helps
[01:02:32] <Robin> hank, we already have that feedback loop going
[01:02:42] <Robin> the world economy has all those feedback loops going
[01:02:48] <Robin> and that is why we are growing
[01:02:49] <Hank> so having 1 human worth of additional scientific output won't do any good
[01:03:05] <mitkat> Robin, is there any aspect of activism you think is the most important for a young person who wants to "do something" about spreading the idea of transhumanism, or even just raising awareness of the future?
[01:03:05] <Robin> it will add a tiny drop to the world, like the rest of us do
[01:03:20] <Hank> but a human has constant brain power
[01:03:24] <Hank> no access to source
[01:03:27] <Hank> etc
[01:03:31] <Robin> mitkat, I don't see much need for activism, actually
[01:03:36] <mitkat> we have a lot of members who want to do something to help the meme(s) and besides going to school, there must be other ways to raise consciousness
[01:03:44] <Robin> what we need is to understand the future better
[01:03:55] <Robin> and perhaps to help along some of the development paths
[01:04:44] <Robin> hank, there are dramatically diminishing returns to most improvements...what?
I reiterate:
what?
3.1: Advantages of minds-in-general
From the standpoint of computer science it may seem like breathtaking audacity if I dare to predict any advantages for AIs in advance of their construction, given past failures. But from the standpoint of evolutionary psychology, the human mind has surprising flaws to match its surprising strengths. If discussing the potential advantages of "AIs" strikes you as too audacious, then consider what follows, not as discussing the potential advantages of "AIs", but as discussing the potential advantages of minds in general relative to humans. One may then consider separately the audacity involved in claiming that a given AI approach can achieve one of these advantages, or that it can be done in less than fifty years.
Humans definitely possess the following advantages, relative to current AIs:
We are smart, flexible, generally intelligent organisms with an enormous base of evolved complexity, years of real-world experience, and 10^14 parallelized synapses, and current AIs are not.
Humans probably possess the following advantages, relative to intelligences developed by humans on foreseeable extensions of current hardware:
Considering each synaptic signal as roughly equivalent to a floating-point operation, the raw computational power of a human is enormously in excess of any current supercomputer or clustered computing system, although Moore's Law continues to eat up this ground [Moravec98].
Human neural hardware - the wetware layer - offers built-in support for operations such as pattern recognition, pattern completion, optimization for recurring problems, et cetera; this support was added from below, taking advantage of microbiological features of neurons, and could be enormously expensive to simulate computationally to the same degree of ubiquity.
With respect to the holonically simpler levels of the system, the total amount of "design pressure" exerted by evolution over time is probably considerably in excess of the design pressure that a reasonably-sized programming team could expect to personally exert.
Humans have an extended history as intelligences; we are proven software.
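The raw-computational-power comparison above can be made concrete as a back-of-envelope estimate, using the 10^14-synapse figure quoted in the text and the ~200 Hz neural limiting speed mentioned later in this section. This is a rough Moravec-style sketch, not a precise measurement:

```python
# Back-of-envelope estimate of human "raw computational power",
# treating each synaptic signal as roughly one floating-point operation.
# Both input figures come from the text; the product is only an order-of-magnitude guide.
synapses = 1e14          # parallelized synapses (figure quoted in the text)
spike_rate_hz = 200      # limiting speed of neural spike trains (figure quoted in the text)

brain_ops_per_sec = synapses * spike_rate_hz   # ~2e16 ops/sec
print(f"Estimated brain throughput: {brain_ops_per_sec:.1e} ops/sec")
```

On this crude accounting the brain delivers on the order of 10^16 operations per second, which is the sense in which it "enormously exceeds" the supercomputers contemporary with the text.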
Current computer programs definitely possess these mutually synergetic advantages relative to humans:
Computer programs can perform highly repetitive tasks without boredom.
Computer programs can execute complex extended tasks without making that class of human errors caused by distraction or short-term memory overflow in abstract deliberation.
Computer hardware can perform extended sequences of simple steps at much greater serial speeds than human abstract deliberation or even human 200Hz neurons.
Computer programs are fully configurable by the general intelligences called humans. (Evolution, the designer of humans, cannot invoke general intelligence.)
These advantages will not necessarily carry over to real AI. A real AI is not a computer program any more than a human is a cell. The relevant complexity exists at a much higher layer of organization, and it would be inappropriate to generalize stereotypical characteristics of computers to real AIs, just as it would be inappropriate to generalize the stereotypical characteristics of amoebas to modern-day humans. One might say that a real AI consumes computing power but is not a computer. This basic distinction has been confused by many cases in which the label "AI" has been applied to constructs that turn out to be only computer programs; but we should still expect the distinction to hold true of real AI, when and if achieved.
The potential cognitive advantages of minds-in-general, relative to human minds, probably include:
New sensory modalities. Human programmers, lacking a sensory modality for assembly language, are stuck with abstract reasoning plus compilers. We are not entirely helpless, even this far outside our ancestral environment - but the traditional fragility of computer programs bears witness to our awkwardness. Minds-in-general may be able to exceed human programming ability with relatively primitive general intelligence, given a sensory modality for code.
Blending-over of deliberative and automatic processes. Human wetware has very poor support for the realtime diversion of processing power from one subsystem to another. Furthermore, a computer can burn serial speed to generate parallel power but neurons cannot do the reverse. Minds-in-general may be able to carry out an uncomplicated, relatively uncreative track of deliberate thought using simplified mental processes that run at higher speeds - an idiom that blurs the line between "deliberate" and "algorithmic" cognition. Another instance of the blurring line is coopting deliberation into processes that are algorithmic in humans; for example, minds-in-general may choose to make use of top-level intelligence in forming and encoding the concept kernels of categories. Finally, a sufficiently intelligent AI might be able to incorporate de novo programmatic functions into deliberative processes - as if Gary Kasparov could interface his brain to a computer and write search trees to contribute to his intuitive perception of a chessboard.
Better support for introspective perception and manipulation. The comparatively poor support of the human architecture for low-level introspection is most apparent in the extreme case of modifying code; we can think thoughts about thoughts, but not thoughts about individual neurons. However, other cross-level introspections are also closed to us. We lack the ability to introspect on concept kernels, focus-of-attention allocation, sequiturs in the thought process, memory formation, skill reinforcement, et cetera; we lack the ability to introspectively notice, induce beliefs about, or take deliberate actions in these domains.
The ability to add and absorb new hardware. The human brain is instantiated with a species-typical upper limit on computing power and loses neurons as it ages. In the computer industry, computing power continually becomes exponentially cheaper, and serial speeds exponentially faster, with sufficient regularity that "Moore's Law" [Moore97] is said to govern its progress. Nor is an AI project limited to waiting for Moore's Law; an AI project that displays an important result may conceivably receive new funding which enables the project to buy a much larger clustered system (or rent a larger computing grid), perhaps allowing the AI to absorb hundreds of times as much computing power. By comparison, the 5-million-year transition from Australopithecus to Homo sapiens sapiens involved a tripling of cranial capacity relative to body size, and a further doubling of prefrontal volume relative to the expected prefrontal volume for a primate with a brain our size, for a total sixfold increase in prefrontal capacity relative to primates [Deacon90]. At 18 months per doubling, it requires 3.9 years for Moore's Law to cover this much ground. Even granted that intelligence is more software than hardware, this is still impressive.
Agglomerativity. An advanced AI is likely to be able to communicate with other AIs at much higher bandwidth than humans communicate with other humans - including sharing of thoughts, memories, and skills, in their underlying cognitive representations. An advanced AI may also choose to internally employ multithreaded thought processes to simulate different points of view. The traditional hard distinction between "groups" and "individuals" may be a special case of human cognition rather than a property of minds-in-general. It is even possible that no one project would ever choose to split up available hardware among more than one AI. Much is said about the benefits of cooperation between humans, but this is because there is a species limit on individual brainpower. We solve difficult problems using many humans because we cannot solve difficult problems using one big human. Six humans have a fair advantage relative to one human, but one human has a tremendous advantage relative to six chimpanzees.
Hardware that has different, but still powerful, advantages. Current computing systems lack good built-in support for biological neural functions such as automatic optimization, pattern completion, massive parallelism, etc. However, the bottom layer of a computer system is well-suited to operations such as reflectivity, execution traces, lossless serialization, lossless pattern transformations, very-high-precision quantitative calculations, and algorithms which involve iteration, recursion, and extended complex branching. Also in this category, but important enough to deserve its own section, is:
Massive serialism: Different 'limiting speed' for simple cognitive processes. No matter how simple or computationally inexpensive, the speed of a human cognitive process is bounded by the 200Hz limiting speed of spike trains in the underlying neurons. Modern computer chips can execute billions of sequential steps per second. Even if an AI must "burn" this serial speed to imitate parallelism, simple (routine, noncreative, nonparallel) deliberation might be carried out substantially (orders of magnitude) faster than more computationally intensive thought processes. If enough hardware is available to an AI, or if an AI is sufficiently optimized, it is possible that even the AI's full intelligence may run substantially faster than human deliberation.
Freedom from evolutionary misoptimizations. The term "misoptimization" here indicates an evolved feature that was adaptive for inclusive reproductive fitness in the ancestral environment, but which today conflicts with the goals professed by modern-day humans. If we could modify our own source code, we would eat Hershey's lettuce bars, enjoy our stays on the treadmill, and use a volume control on "boredom" at tax time.
Everything evolution just didn't think of. This catchall category is the flip side of the human advantage of "tested software" - humans aren't necessarily good software, just old software. Evolution cannot create design improvements which surmount simultaneous dependencies unless there exists an incremental path, and even then will not execute those design improvements unless that particular incremental path happens to be adaptive for other reasons. Evolution exhibits no predictive foresight and is strongly constrained by the need to preserve existing complexity. Human programmers are free to be creative.
Recursive self-enhancement. If a seed AI can improve itself, each local improvement to a design feature means that the AI is now partially the source of that feature, in partnership with the original programmers. Improvements to the AI are now improvements to the source of the feature, and may thus trigger further improvement in that feature. Similarly, where the seed AI idiom means that a cognitive talent coopts a domain competency in internal manipulations, improvements to intelligence may improve the domain competency and thereby improve the cognitive talent. From a broad perspective, a mind-in-general's self-improvements may result in a higher level of intelligence and thus an increased ability to originate new self-improvements.
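The compounding structure of recursive self-enhancement can be illustrated with a deliberately toy model (not a model from the text): if each improvement an agent makes is proportional to its current intelligence, gains multiply rather than merely add.

```python
# Toy illustration of compounding self-improvement (assumption: each round's
# improvement is proportional to current intelligence, with a fixed gain factor).
def self_enhance(intelligence: float, rounds: int, gain: float = 0.1) -> float:
    """Each improvement originates partly from the already-improved system."""
    for _ in range(rounds):
        intelligence += gain * intelligence   # improvement scales with current level
    return intelligence

# Compounding: 1.0 grows to ~2.59 in ten rounds (1.1**10),
# versus 2.0 if each round added a fixed +0.1 independent of current level.
print(self_enhance(1.0, rounds=10))
```

The contrast with fixed additive gains is the whole content of the toy model; it says nothing about whether real self-improvement curves are in fact proportional rather than diminishing.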
3.2: Recursive self-enhancement
...etc
Edited by CSstudent, 27 November 2007 - 02:33 AM.