What Flavor Of Transhumanist Are You?
#31
Posted 11 December 2003 - 11:13 PM
Did anyone see the author of "The Age of Spiritual Machines" when he did his presentation on Book TV a few years ago? I still have that tape. It impressed me. I will have to look at it again.
His theory was that in ten years, they could have a computer which could have a conversation with you and you wouldn't even know it wasn't human. In twenty-five years, he predicted artificial intelligence would be greater than that of any human.
As I understand the Singularity debate, I find it "nonsense" only at the point where this AI takes on human motivations (striving for power) and takes over the world.
#32
Posted 12 December 2003 - 12:05 AM
Reason
Founder, Longevity Meme
reason@longevitymeme.org
http://www.longevitymeme.org
#33
Posted 12 December 2003 - 12:54 AM
Reason; what on this thread, exactly, reinforces your mentioned perception? Randolfe stating that the Singularity is nonsense because AIs will always adhere roughly to human goals, like hammers and nails, no matter how intelligent they become, and this will happen automatically? Or was it because Laz suggested remaking "The Krell Machine"?
I do agree that the Singularity/AGI topic is hard to present, but when I try to find out *why*, I often come up empty-handed. There is a whole continuum of beliefs and opinions; a greyscale, not isolated plateaus, so what I'm trying to figure out is "on which specific points do people personally tend to draw the line?". Randolfe draws the line closer to tradition than most, and I'm pretty sure I can think of specific reasons why - I draw the line much further away from tradition, and I know my specific reasons. It's just the area in between that I'm trying to understand better. Can anyone help me out?
#34
Posted 12 December 2003 - 01:04 AM
Remaking it as a movie with modern effects is a no brainier (groan) and has to be better than the Sci-Fi channel reintroducing us to the Cylons' "Next Generation" in Battlestar Galactica II.
#35
Posted 12 December 2003 - 01:09 AM
#36
Posted 12 December 2003 - 01:23 AM
I think people tend to draw the line with AI simply because it does, in fact, imply better-than-human intelligence—at least eventually. People generally already have trouble coping with the fact that they are not as competent as they would like to be in a world seemingly saturated with competent people who are already making all the difference. People who are actually competent are just as human as the incompetent with respect to how they would feel when their own competence begins to diminish in value in the face of superintelligence. People generally care less about what they have than about what they feel they represent and can economically and socially accomplish. Tell most people that they will be living in paradise tomorrow but will be utterly worthless to others and the cosmos (since superintelligence is doing all the work), and most likely they would prefer their chaotic world where they are making a perceivable difference.
#37
Posted 12 December 2003 - 05:43 AM
www.singinst.org
www.nickbostrom.com
www.yudkowsky.net
with more advanced stuff at:
www.sl4.org
Jace, interesting comments you put forth regarding the emergence of superintelligence; perhaps superintelligence will refrain from creating too much visible structure in the eyes of willingly baseline humans so as to avoid scaring or intimidating them - I don't know. I just hope that the first superintelligence can apply the *same* (or an unambiguously improved version of) moral reasoning process as we do when considering which future humanity would genuinely want most.
Having a superintelligence that *genuinely* respects your (and the rest of sentientkind's) volition is different from all previous dramatizations of paradise or Heaven - it would genuinely be the best of all possible worlds. So instead of telling people "you could live in paradise tomorrow", what should I tell them? The real answer is quite complicated - it lies at the end of a long series of steps that take considerable time and effort to traverse.
Regardless of what people in the US and other First World countries claim to want, there are hundreds of millions, or even billions of people in conditions of straightforward pain and suffering. Starvation, rape, repressive regimes, prison camps, gangs, et cetera, et cetera. These people, obviously, deserve to be freed from their torment - and not necessarily through death (unless that's their informed choice.) This makes discussions of whether humans would feel incompetent in the face of superintelligence rather academic.
If a human has a problem with an intelligence greater than itself, then that human is still suffering from a "zero-sum psychology", a mental appraisal system that irrationally categorizes all powerful entities as potentially threatening ones. This system comes from the survival challenges of our EEA (environment of evolutionary adaptedness), and becomes obsolete when dealing with transitions as massive as the Singularity (or even the atomic bomb.) One human's (or even a billion's) zero-sum psychological reaction to the emergence of superintelligence (say, through vocal disapproval) will never outweigh the moral imperative of making major progress in the fight against nonconsensual suffering and death (to a benevolent being, anyway.) Like the hand axe, the H. sapiens shell will eventually phase out, as will zero-sum psychologies, and hopefully, beings of all sizes and shapes will be able to live in peace. (Or maybe not - I obviously can't be 100% sure, humans and jealousy might be around until the end of time, but it seems unlikely.) Doesn't sound like too far-out of a scenario, does it?
#38
Posted 12 December 2003 - 06:16 AM
Michael: If a human has a problem with an intelligence greater than itself, then that human is still suffering from a "zero-sum psychology", a mental appraisal system that irrationally categorizes all powerful entities as potentially threatening ones.

I'm not so sure it's always an issue of intimidation. People simply want to feel as though it makes a difference that they are alive. Many people feel that if they committed suicide, nothing would change and everything would go on unfazed. This is the psychology I'm referring to. It may be similar to zero-sum, but I don't think it is entirely.
Michael: Like the hand axe, the H. sapiens shell will eventually phase out, as will zero-sum psychologies, and hopefully, beings of all sizes and shapes will be able to live in peace. (Or maybe not - I obviously can't be 100% sure, humans and jealousy might be around until the end of time, but it seems unlikely.) Doesn't sound like too far-out of a scenario, does it?

You place a lot of emphasis on the hypothetical aspect of the Singularity. Have you written anything in the past that addresses how the economy must evolve to foster a direction that moves toward a Singularity, and not other futures? Perhaps there is some information on this as well to which you can lead me. I've been reading some essays at WTA lately, and it tentatively seems that futurist thinking is only plausible when not taking into account the social variable.
#39
Posted 13 December 2003 - 02:40 AM
If "everyone feeling like they're making a difference" and "the ending of nonconsensual suffering and ignorance" can't both happen at once, then the former will have to be violated, unfortunately. Hopefully, lots of stuff can be done in attempts to accommodate these people. It does raise a good point, though: even the Singularity can't be "perfect", because "perfection" is probably physically impossible, and all past arguments against this have been unconvincing. However, "best of all worlds" may be possible.
I'm not trying to place a lot of emphasis on the "hypothetical aspect" of the Singularity (what is the "hypothetical aspect"?) I was merely trying to comment on the extension of a trend that has already been happening throughout history (movement from zero-sum to positive-sum psychologies.) This may not be "old fashioned concrete style" like the numbers of human economics, but I wouldn't call it "hypothetical".
Based on your comments, I think we might mean entirely different things when we say the word "Singularity". The Singularity would simply be an invention, the creation of a transhuman intelligence, that would then go on to create more intelligence. I don't see a possible course of the economy in which this is impossible. (The technological prerequisites for transhuman intelligence are either already here or on their way shortly, and are desirable for a lot of other reasons besides "creating transhuman intelligence".) Futurist thinking usually should involve the "social variable", but "the social variable" means "the aggregate behavior of humans", and the aggregate behavior of humans would be largely irrelevant from the point of view of a recursively self-improving transhuman intelligence (in the pragmatic sense, I mean, not the moral one.)
Say you're trying to predict the future of a technological chimp society. Suddenly, one of the chimps invents a dimensional portal that brings in several thousand human beings with jet fighters, mortars, and machine guns. Say they want to kill all the chimps. The "social" patterns of the chimp society previous to the opening of this dimensional portal (and throughout the ensuing massacre) are largely irrelevant. Humanity has to get over itself - what Joe Normal (for that matter, Joe Intellectual) thinks is irrelevant to much of the unfolding of events behind the Singularity, except insofar as the seed AI's (or seed IA's) most recent iteration chooses to care. All we can do is set the *initial conditions* - make sure that transhumans care about humanity, so that we can call ourselves "we", instead of making moral distinctions between us and transhumans - an "Us vs. Them" scenario. If we screw up on the engineering aspect, then there won't be time for us to hate transhumans - the most likely scenario would be immediate disassembly (due to the recursive self-improvement aspect.)
The key to understanding the Singularity is understanding the concept of recursive self-improvement. Very, very few people understand it; I'd say that less than 1% of the transhumanist community does. It's a very technical idea, that employs points gleaned from cognitive science and evolutionary psychology. The very best explanation in existence is here:
http://www.singinst....OGI/seedAI.html
Grasping the "whole picture" requires understanding the technical meanings of the little points he mentions, like "Mutations are atomic; recombinations are random; changes are made on the genotype's lowest level of organization (flipping genetic bits); the grain size of the component tested is the whole organism". That's my current best guess for why most transhumanists don't understand it.
#40
Posted 13 December 2003 - 04:53 AM
Michael: I've read all of his work and I submit that only *some* of it is slightly condescending, mostly the older stuff.

Yes, I was referring to the older stuff, which is where I decided to begin when I initially set out to give some attention to EY. I think it was some quote in an article that went something like this: "If I had to make a choice between AI and humans, I choose AI," which to me is along the same lines as, "I choose God so that I can speak on behalf of Him and look down upon Humanity since they are congenital subordinates," and that's where I got distracted and veered away.
Michael: Anyway, of course, if scientists only read the papers of their colleagues that they thought had good personalities, no one would get any science done...

LOL! Noted.
Michael: I don't see a possible course of the economy in which this is impossible.

I don't either. But scientists are usually under a lot of scrutiny. And they certainly don't accumulate their own capital to carry out their research. Therefore, I can see many elites getting collectively involved if and when their influence begins to become prospectively threatened. Regular people won't have the power to terminate a Singularity underway, but the elite will, because no matter how intelligent a few AIs are, they are still operating within institutional forces. If they begin to obstruct, or seem likely to obstruct, the large benefactors of business, a militia will be on call, and the militia will likely have the nanotechnological resources by then to preempt AI efforts at accumulating their own resources for combat.
It would be a mistake to assume that the Singularity will never take place. But, in my opinion, a more likely scenario will be that AIs won’t physically be allowed to do much more than to figure out ways to augment the elites who, in turn, I think, will posit institutional initiatives that foster the augmentation of regular people before AI are allowed to do much more.
Michael, any talk about the Singularity is hypothetical because the theories and principles applied to move toward recursively self-improving AI are a different matter from speculation about future scenarios.
#41
Posted 13 December 2003 - 10:52 PM
Michael: The key to understanding the Singularity is understanding the concept of recursive self-improvement. Very, very few people understand it; I'd say that less than 1% of the transhumanist community does. It's a very technical idea, that employs points gleaned from cognitive science and evolutionary psychology. The very best explanation in existence is here:
Recursive self-improvement is neither technical nor gleaned from cognitive and evolutionary psychology. Recursive self-improvement simply states that things which improve their ability to improve themselves create a positive feedback loop that grows exponentially. If I improve my ability to improve my intelligence, and doing so allows further acceleration, I am going to be very smart very quickly.
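The feedback loop described above can be put in toy numerical form. This is purely an illustrative sketch (the function names, growth rate, and step count are invented assumptions, not a model of any real system): one process improves by a fixed increment per cycle, while the other's increment scales with its current level, because each gain also boosts the ability to make further gains.

```python
# Toy contrast between ordinary improvement and recursive self-improvement.
# All numbers here are arbitrary, illustrative assumptions.

def linear_growth(start: float, increment: float, steps: int) -> float:
    """Improve by a fixed amount each cycle (no feedback)."""
    level = start
    for _ in range(steps):
        level += increment
    return level

def recursive_growth(start: float, rate: float, steps: int) -> float:
    """Each cycle's improvement is proportional to the current level,
    because gains in ability also boost the ability to make gains."""
    level = start
    for _ in range(steps):
        level += rate * level
    return level

print(linear_growth(1.0, 0.1, 50))     # arithmetic growth: about 6.0
print(recursive_growth(1.0, 0.1, 50))  # geometric growth: about 117.4
```

Same starting point, same per-step "effort", yet a roughly twentyfold gap after fifty cycles - and the gap itself widens without bound as the loop continues.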
Michael: Grasping the "whole picture" requires understanding the technical meanings of the little points he mentions, like "Mutations are atomic; recombinations are random; changes are made on the genotype's lowest level of organization (flipping genetic bits); the grain size of the component tested is the whole organism". That's my current best guess for why most transhumanists don't understand it.
Yudkowsky mentions these details to explain the limitations of natural evolution and not to explain recursive self improvement.
The reason most transhumanists do not appreciate recursive self-improvement is that human beings have never been able to substantially alter their biology or genomes (yet). We could never improve our ability to improve ourselves, so we failed to appreciate the power of doing so. Humans also evolved as the supreme terrestrial intelligence. Transhumanists are much quicker to appreciate our ability to improve our health or strength because we know of species that surpass our strength and health. Cheetahs can race cars and trees live for centuries. But what cheetah or tree ever wrote a book? The fundamental shift required for me to understand the Singularity was to realize that intelligence is quantitative and not qualitative.
#42
Posted 14 December 2003 - 12:24 AM
Yudkowsky's explanation of the limitations of evolution goes hand in hand with his explanation of the advantages of intelligence - the essence of study for an AI-based Singularity is the comparative advantage of brains designed by evolution vs. brains designed by intelligence.
I don't think it should require a concrete example for someone to accept the possibility and power of recursive self-improvement - these qualities can be extrapolated straightforwardly from a minimum of assumptions (but require lots of technical data to really substantiate and explain.) Requiring a visual example of something that has been forthcoming for decades or centuries (such as the computer, for example) is the hallmark of short-sightedness.
(Yes, in spite of my railing against misunderstandings of recursive self-improvement in the above paragraphs, I try to avoid arrogant or self-centered tendencies to the best of my ability. Part of the problem is that one often needs to be formal and forceful to come across as credible, and people observing formal and forceful outputs often judge that the source of that output must be arrogant, because in most cases, they are. But not always! Sometimes altruists need to masquerade aggressively in order to get their ideas across.)
#43
Posted 14 December 2003 - 12:49 AM
I especially recommend this:
http://www.singinst....ro/smarter.html
and this: http://www.yudkowsky...ingularity.html
If you feel that Yudkowsky is too arrogant to read, then I guess you'll have to wait until I've polished and published more of my Singularity writings, right? I personally consider Eliezer's seeming arrogance a funny personality quirk - also, it's worth noting that his quote about "Us and Them" was predicated on the idea that if humans and AIs had a conflict, the AIs would be the morally correct ones, because he thought at that point that compassion and intelligence were necessarily linked. Eliezer once mentioned that having all your writings floating around the internet is like having your baby pictures permanently stapled to your forehead. His comments on the media are here: http://yudkowsky.net/eliezer.html. (He took down his bio page due to that silly Wired article, I believe.)
Anyway, I really enjoy conversing with the two of you, Kip and Jace, you both have 1) very high intelligence and 2) excellent communication skills. Both are somewhat rare, but when they come hand in hand, it's really great. Thanks again for sharing all your comments and opinions.
#44
Posted 14 December 2003 - 04:28 AM
Michael: I disagree that the (full) concept of recursive self-improvement is neither technical nor gleaned from cognitive and evolutionary psychology. In the broadest possible terms, yes, the definition of "recursive self-improvement" you give does apply. But understanding *why* we should expect recursive self-improvement to be so extremely powerful relative to other advances requires a technical knowledge of the comparative advantages and disadvantages of evolution, human brains, software programs, and young AIs.
You do not need to know anything about evolutionary/cognitive psychology in order to understand recursive self-improvement. Ask a person: "What if, every time you lifted a barbell, you increased the amount of weight you can lift and also the amount of muscle that grows per pound lifted?" "You'd be very strong very quickly." If you apply the same method to intelligence, instead of strength, or anything else, the answer is the same.
Michael: For example, few people know that most human cognitive processes must execute in fewer than about 100 sequential steps, due to the limitations of neuron firing speed and information processing within the human brain, with huge implications for the advantages of engineered vs. evolved brains. Heck, many people are still hung up on the Church-Turing thesis!
Again, you don't need to know any of this (how is the Church-Turing thesis, that every effective algorithm can be carried out by a Turing machine, relevant?) to understand recursive self-improvement. Nor is the possibility of natural selection evolving a recursively self-improving intelligence zero (indeed, this will soon be exactly what natural selection has done).
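For what it's worth, the "100 sequential steps" figure mentioned above falls out of simple arithmetic on neuron firing rates. A back-of-envelope sketch, using round figures that are rough conventional assumptions rather than measurements:

```python
# Back-of-envelope arithmetic behind the "100 sequential steps" observation.
# The figures below are rough round numbers, not measurements.

neuron_max_firing_hz = 200   # a neuron fires at most a few hundred times/sec
task_duration_s = 0.5        # a fast perceptual judgment completes in ~0.5 sec

# Maximum depth of a strictly serial chain of neuron firings in that time:
max_serial_steps = neuron_max_firing_hz * task_duration_s
print(max_serial_steps)      # about 100 -- hence "100 sequential steps"

# Compare against a circa-2003 CPU clock to see the serial-speed gap:
cpu_clock_hz = 2e9
serial_speed_ratio = cpu_clock_hz / neuron_max_firing_hz
print(serial_speed_ratio)    # roughly ten million times more serial steps/sec
```

Whatever the exact numbers, the qualitative point survives: evolved cognition is constrained to be massively parallel and serially shallow, while engineered hardware faces roughly the opposite constraint.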
#45
Posted 14 December 2003 - 02:25 PM
How to achieve/create/stimulate recursive self-improvement within a human or computer brain is a more difficult question.
#46
Posted 18 December 2003 - 07:07 AM
Reason; what on this thread, exactly, reinforces your mentioned perception? Randolfe stating that the Singularity is nonsense because AIs will always adhere roughly to human goals, like hammers and nails, no matter how intelligent they become, and this will happen automatically?
I didn't say AIs would adhere to human goals. I said I didn't believe that AIs would "adopt human emotions and human goals". If AIs came into existence, why do you think they would have the same lust for power, etc. that we humans have?
#47
Posted 02 April 2004 - 05:50 AM
#48
Posted 02 April 2004 - 04:34 PM
How to achieve/create/stimulate recursive self-improvement within a human or computer brain is a more difficult question.
General intelligence + knowledge of its own hardware + some knowledge of its thinking processes + ultrafast thinking + the rest of the AI advantage should be far more than enough. That is the point that http://www.singinst....OGI/seedAI.html argues very convincingly. The bulk of the task is creating some form of general intelligence, which is not an easy one, but once you've achieved it, a Singularity is practically inevitable.
I didn't say AIs would adhere to human goals. I said I didn't believe that AIs would "adopt human emotions and human goals". If AIs came into existence, why do you think they would have the same lust for power, etc. that we humans have?
Well, I mean, they could if we duplicated all the specific complexity underlying the human lust for power, or built in some context-insensitive power-seeking "instinct". Both of these tasks would be difficult, not to mention unwise. So I doubt they would happen. You may be agreeing with me here. But something you said must have given me the impression that you felt AIs would converge towards certain human qualities - what do you think it was? Whatever it is, I'm pretty sure that http://www.singinst....FAI/anthro.html would cover it.
I think it's a lot easier to expect that some really cool computer is going to make you immortal than that you need to exercise, eat right, and take very specific dosages of supplements.
No, building some "really cool computer" (a seed AI specifically) is a lot harder than exercising, eating right, and taking specific dosages of supplements. Being the *first* to build one is likely to require ample knowledge and innate intelligence, of a level significantly beyond practically anyone on these forums. Also, if you don't build your really cool computer *right*, it will simply proceed to acquiring nanotechnology and implementing its goals, which could easily entail the destruction of humanity through the mass-rearrangement of local matter. The complexity underlying compassion, understanding, benevolence, and plain old common sense will need to be duplicated deliberately in AIs; we can't expect these qualities to pop up on their own. If the first AI *does not* possess them, then the consequences could be really bad, like immediate species death. (See the "Impact of the Singularity" thread.)
As for Singularity timeframes, here are the usual links:
http://www.nickbostr...telligence.html
http://www.accelerat...encehowsoon.htm
See also "What is the Singularity?" if you aren't really familiar with the underlying reasoning:
http://www.singinst....ingularity.html
#49
Posted 18 April 2004 - 10:50 PM
Being the *first* to build one is likely to require ample knowledge and innate intelligence, of a level significantly beyond practically anyone on these forums.
I wouldn't go that far, Michael...
There are probably a significant number of individuals who are quite capable of accomplishing said event, although I would wager that the majority of them (if not all) are against the core idea behind the singularity and FAI itself.
I place it within my own capabilities to accomplish such an event with simplistic use of semantical logic and modular programming, as well as models for multiple avenues of what one could call "thought" within a machine. It would be a difficult task, no doubt. However, I have yet to feel so inclined to support the singularity theory or FAI for reasons that I have clearly stated.
Or perhaps not so clearly... [glasses]
This is going on the second year now, and I am still strongly opposed to the implementation of the aforementioned Singularity as well as the FAI, as it has been described, labeled, and heretofore defined.