  LongeCity
              Advocacy & Research for Unlimited Lifespans





Revolutionary cycles in singularity sequences


8 replies to this topic

#1 Gewis

  • Guest
  • 55 posts
  • 0
  • Location:Provo, UT

Posted 27 July 2003 - 10:10 PM


After putting my noggin to the wheel for a bit, a number of important ideas have come up, and I'd be interested in hearing commentary.

First, the groundwork. Our only present model of intelligence is in humans and higher order animals, in particular advanced mammals. The separation between humans and advanced mammals here is made only for the purpose of discussion, and does not imply any true or significant dividing line. With humans, however, we see a wide diversity of personality, opinion, choices, etc. Even in those with a more capable and even calculating abstract intelligence, like many who browse and post in these forums, there are wide differences of personality and opinion. In society overall, there are people genuinely malevolent and self-serving, and others who are benevolent and altruistic.

If humans are the best model for developing AI (indeed, we test AI by comparing it to humans and seeing how it measures up, i.e. the Turing test), then it's not an unreasonable expectation that multiple AIs will develop along similar lines as humans, having a broad spectrum of personality. Moreover, my observation of people is that the more intelligent individuals are, the less prone they are to engaging in mob behavior. If the same could be expected of machines, then we can reasonably expect that there will be some AI in favor of benevolence toward humans and some AI in favor of malevolence. We cannot assume that intelligence will always lead to the same conclusions.

Second, we're having these discussions now, unsure about what our future will be when we're no longer, as mostly unmodified humans, the biggest kid on the block. Will the first generation of independent AI have the same worries we do about the next generation? Will they not fear their own obsolescence? Or will it be a process of self-upgrading? Will they want that? Will each generation be successively jealous of their identities, and not sure what will happen to them? Won't some of them say, "Our creators (humans or the previous generation of computers) intended us to be this way, we shouldn't change it"?

I think the complexity of the power structure and struggle that likely accompanies the advent of AI is much greater than humans vs. machines. Humans will only be a small part of the picture, unless we find a way to keep up and be near the top, if not at the top. The machines vs. humans debate is only relevant insofar as they have cause, as free-thinking individuals, to unite against us.

#2 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 07 August 2003 - 05:49 AM

Hi Gewis,

Interesting analysis. Overall, I found it a very interesting series of thought experiments. I hope you don't find my response overly critical, and please feel free to ask for clarification on anything that is unclear:

I find parts of the above paragraphs to be anthropomorphic. For example, there is no law that "more intelligent beings engage in less mob behavior". Within the *human* sphere, yes, more intelligent humans can tend to be less moblike. However, I can imagine a society of moblike superintelligences just as easily as I can imagine a society of unmoblike ones.

I don't call post-Singularity superintelligences "AIs" because I don't think that is the best word to describe them. Remember, Artificial Intelligences automatically get a speedup factor of millions or billions relative to human beings. AIs will be self-modifying and just plain smarter than humans. (http://www.singinst....tro/impact.html) Recursive self-improvement is the idea of a sentient entity making design improvements to the underlying architecture it runs on, which humans would be capable of doing if we were uploaded. Also, there is the idea of brains that can make use of all the hardware available - if you have an AI running on 10^17 ops/sec that invents nanocomputing and can therefore squeeze 10^22 ops/sec out of its current computing material through usage of that technology, then you almost instantaneously get an AI that thinks with 100,000 times the computational resources it had before. It goes literally beyond our ability to comprehend.
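For readers who want the arithmetic spelled out, here is a minimal Python sketch of the hardware-overhang calculation described above; the ops/sec figures are the hypothetical ones quoted in the post, and the neuron firing rate and CPU clock are rough circa-2003 assumptions added for illustration, not measurements.

```python
# Back-of-the-envelope check of the hardware-overhang arithmetic above.
# All figures are illustrative assumptions, not measurements.

pre_nano_capacity = 1e17     # ops/sec available to the AI before it invents nanocomputing
post_nano_capacity = 1e22    # ops/sec squeezed from the same material afterwards

overhang_factor = post_nano_capacity / pre_nano_capacity
print(f"Computational resources grow by a factor of {overhang_factor:,.0f}")
# -> 100,000, the figure quoted in the post

# The "speedup of millions or billions" relative to humans comes from serial rates:
neuron_rate = 200.0          # Hz, rough upper bound for biological neuron firing (assumption)
cpu_clock = 2e9              # Hz, a circa-2003 processor clock (assumption)
print(f"Serial speedup over biological neurons: ~{cpu_clock / neuron_rate:,.0f}x")
# -> ~10,000,000x
```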

There will be no interim period with human-equivalent AIs thinking at roughly human-equivalent speeds, engaging in power struggles in the same way that humans have with each other for millions of years. Human-equivalent AI is anthropomorphism - you start off with a slightly-dumber-than-human seed AI, which, through the tremendous advantages it has by virtue of its computational substrate, becomes capable of bootstrapping itself to superintelligence, at which point you get superintelligence. There is no game-theoretical similarity between superintelligence and human beings. If it is a superintelligence that does not explicitly care for humanity (or all sentient beings, or whatever), then we will be viewed as building materials, and destroyed very rapidly during the explosion of recursive self-improvement.

I think you're viewing the process of creating AI like the process of giving birth to new humans; they gradually pop up, interact with each other and the larger society for reputation and resources, and display Gaussian distributions of emotions within characteristic boundaries and constraints, which coincidentally match the human boundaries and constraints. The attitudes of future AIs will be contingent on their initial design, because the pattern of all AIs after the first AI starts to recursively self-improve will reflect the goal system of the first AI. (Or, say that a few AIs come first that aren't into recursive self-improvement. Fine, but eventually an RSI AI will come into existence regardless, and unless all the AIs can sense the self-improvement of the others and begin self-improving simultaneously, the state of the world after that day will reflect the morality of the first AI that began to self-improve*.)

Humanity has a characteristic distribution of emotions and tendencies, which reflect our evolutionary past. It was never adaptive to treat your children as worthless. Humans that actually do (and they are *extremely* few) are "broken" from the viewpoint of evolution - something went critically wrong in their development or ontogenesis, like the serious deprivation of oxygen or something. All neurologically normal humans have the same set of emotional hardware for making judgements, forming internal sensations, and creating responses; they are just tuned to different activation thresholds, and conditioned by memory to be associated with slightly different things. These pieces of hardware possess the design signature of biological evolution, and are tuned to respond only to ancestrally relevant cues (for example, human facial expressions). There are no humans born "genuinely altruistic" or "genuinely malevolent" - the machine called a human just responds differently when it is placed in different contexts or experiences the relevant sets of cues. A baby will become more malevolent, on average, if raised within a malevolent family or society. There are slight propensities to one direction or the other, but the reason why we see no completely altruistic human beings or completely malevolent ones is that neither of these emotion-sets was adaptive. I can imagine a parallel universe where they were, though. We would be at a loss to interpret the emotions of aliens with different, or substantially more complex, facial expressions than ours.

AI designers will stand with respect to AIs in the same way that evolution stands with respect to us. "Malevolence", or say, "jealousy" will not exist in the AI unless the programmers put it there. These complex human responses are attributable to complex underlying machinery put there by millions of years of evolution - they do not pop up spontaneously with equal frequency in blank-slate minds. Just because a human designs an intelligence does not mean that intelligence will have the qualities of a human. The qualities of a human are unnecessarily complex relative to the bare-minimum engineering requirements for AI, and even the simplest of them will be outdone by what AI designers come up with. Evolution is naturally slow, blind, and constrained by a host of variables, making it a poor designer relative to intelligent engineering. For this reason, I don't think AIs will have cognitive features heavily inspired by human ones. If the first AI does, and this first AI starts recursively self-improving, the human qualities the programmers put there will either be irreversibly changed through renormalization, reinforced and improved, or yanked out entirely. Self-improvement would take place very rapidly relative to human timescales. If open-ended self-improvement is not taking place, the AI will either be 1) getting out of its confines and accomplishing its goals, whatever they may be, 2) busy trying to get someone to let it out of its confines, or 3) expensive and complex enough to have near-human intelligence, but too stupid to conduct the improvement of its own design.

I think the first generation of independent AIs will not harbor the same worries that we do about them today, because 1) "worry", in the negative sense, is an evolved human emotion that gets in the way of sane thinking and makes the mind paranoid, and 2) there never needs to be a "second generation" of AI because continuity never needs to be broken between the first seed AI and the latest and greatest superintelligences. With regard to 1, I doubt AI programmers would ever insert "worry" in the human sense into the first AI, or at least I hope anyone smart enough to program an AI would see why human paranoid worrying is counterproductive to reaching goals. Or, in the worst-case scenario, worry is renormalized into "a planning heuristic of selective attention, maximally useful for foresight, minus emotional baggage" from its earlier incarnation in humans. With regard to 2, AIs will not be physically detached from the AIs they create in the same way that programmers would be physically detached from the first AI to be coded. Physical attachment, in the sense of a self-improving AI creating a new being, can be arbitrarily close and arbitrarily precise; if the first AI is concerned about the state of the beings it creates, then it will spend a lot of time on tuning its child's emotions for the work of good and not evil. (I hope that doesn't imply inflexibility. Contrary to popular belief, it is possible to be morally good and simultaneously interesting.)

The following comments are slightly more speculative than the above ones, but I still believe both.

AIs won't fear their current instantiation's obsolescence because they will surely not have identity theories as constraining as human ones are. In the world of humans, if someone's head turns into that of, say, a rhino, then we would be worried. In the world of uploads and AIs, stuff like that will happen all the time. AIs might want self-upgrading, might not, but it would be silly of them to program themselves to fear for their identities, regardless of what happens. There should be enough computing power for every identity to have its own volition respected. (This might lead to presentient components within superintelligences combining in such a way that sentience and volition are created within the superintelligence, at which point the sentience should be offered the choice to leave the host body.) The question of "should we be this way because our creators intended it?" should not last long - it should be settled with the creation of the first AI. The first AI will acquire a morality through the cognitive content the programmers create it with, and further input from the programmers and external reality. The AI self-improves and makes changes to its own cognitive structure and external reality in order to maximally fulfill that morality. We figure that the AI's moral structure will settle into one of two major attractors: altruism, getting people what they want, or egoism, getting itself what it wants. Altruistic minds that are self-modifying will be capable of making themselves entirely altruistic, and would only question their creation to the extent that the central goal content was preserved.** If these minds actually believe that doing good is the right thing, then regardless of how they were born, they will continue to see doing good as a correct goal. Human moralities are designed to be sensitive to even small changes in external conditions; superintelligent moralities need not be like this.

I agree that the complexity of the power structure and struggle that accompanies the building of the first AI will be greater than that of the interaction of humans and machines, in the sense that you insinuate. If the first AI sees humans as building materials and not as moral agents, then there will be no struggle - humans will be swallowed by the self-improvement process of this AI. The only power structure would be the AI directing its complex and global motor effectors to gather materials for repatterning into new structures. The introduction of benevolent AI into society, if it happens, will make the idea of "humans falling behind" irrelevant - evolutionary competition as we know it would cease, and all humans would need to be offered the chance to be as smart as they desire with their share of resources. There would be no need to be "at the top" because the beings at the top would already be representing us and upholding our rights. If an SI cares about you, it is easy for it to optimize everything for your well-being, plus whatever superintelligent caveats need to be added to that action.

Thank you for the fascinating discussion!

Creating Friendly AI:
http://www.singinst.org/CFAI
What is Friendly AI?:
http://www.kurzweila...tml?printable=1

*If this AI's morality is good, then the world reflecting the AI's morality will simply display more freedom, happiness, or other qualities sentients find morally valuable.
**It sounds again like this is limiting, but building an altruistic AI of this sort seems like mankind's only safe pathway to the future. There may be no objective morality, in which case the AI will need to work with what it has and simply work to enforce the rights of as many beings as possible. History has shown a clear progression of better and fairer moralities - we have no reason to believe that this improving trend would not continue if we created AIs even more benevolent and intelligent than we are.


#3 kevin

  • Member, Guardian
  • 2,779 posts
  • 822

Posted 07 August 2003 - 05:05 PM

Michael/Gewis,

It would be my hope that parallel to, or preferably preceding, the development of self-evolving computers would be the augmentation of human mental capacity. With the recent advances that Peter/ocsrazor has been dealing with in neuronal interfacing, might we be just as close to capabilities of interfacing with a developing AI? Could we 'teach' by imprinting the neural networks necessary for altruism and empathy directly on the AI matrix, guiding the AI in at least a more benevolent direction and avoiding the more self-serving aspects of our character? It would seem that 'friendliness' to humanity should be of paramount importance in programming an AI that is self-evolving and tied to its basic systems of operation. How 'friendliness' is interpreted is probably the difficulty.

As you might assume, I've only just begun to seriously consider AI and the Singularity as real issues and possible threats to, at the very least, my existence.

Edited by kevin, 07 August 2003 - 05:11 PM.


#4 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 07 August 2003 - 05:19 PM

Kevin, what you are proposing is what I think is our best option, and one more viable sooner than objectively constructing the perfect algorithm of altruism.

I compare the process to what behavioral psychology identifies as "imprinting", and I suspect it may be an easier method of interfacing constructively with emerging AI. BUT (isn't there always a caveat?) it is also the same way a malevolent consciousness could infect the Seed AI with destructive thought processes, so there must be a very pragmatic approach that anticipates this phenomenon a priori. The approach risks anthropomorphizing the Seed AI by treating the relationship as parent/child, but it allows us a single chance to at least get it right and cultivate a symbiotic (commonly called 'friendly') paradigm in lieu of a parasitic one.

Basically we "parent" the child AI that upon maturing sufficiently then becomes our collective "parent" applying what we can hope are the healthiest models for behavior.

#5 NickH

  • Guest
  • 22 posts
  • 0

Posted 08 August 2003 - 05:22 AM

Kevin,

It'd certainly be useful to augment human intelligence before the creation of a Seed AI - we can use all the intelligence we can get. However, I don't think it's necessary to wait for intelligence augmentation before starting on Seed AI, given a concrete Friendliness theory such as the one Eliezer is developing. Of course, without such a theory no one should touch any system with even a remote possibility of general intelligence - it's just not worth the existential risk.

Can you explain what you're suggesting in more detail? Using neural interfaces to copy the neural networks humans use for morality into the AI, making sure to transfer the good parts in preference to the bad?

Lazarus,

The aim of Friendly AI is not to completely define all the details of human altruism, but to give the AI the desire and the ability to revise and expand the interim definition as it gets more intelligent. We need to convey an unambiguous pointer to the species-universal complexity we use to reason about morality, a pointer to what we mean by "good" and how we think about it, so the AI can both correct and complete the approximation to morality the programmers give it.

Can you describe in more detail your imprinting procedure and how it'd apply to AIs?

I don't think parent/child analogies are very accurate here. A human child already has an awful lot of structure an AI will lack until we explicitly add it - unlike humans, AIs are truly a blank slate. Until we design it, or design the systems that lead to its acquisition, or design the systems that lead to the systems... etc., the AI will have neither (for instance) a tendency to rebel against its parents, nor the ability to even start distinguishing right from wrong.

A better analogy, bearing in mind all analogies between humans and AIs can easily lead to anthropomorphism, is that we're in the position not of human parents, but of evolution. We weren't designed by our parents; they just bore and raised us. The role of (initial) creator has different strengths and weaknesses which are obscured by the parent analogy.

In general, I don't think it's necessary to see our relationship with the AI through a human lens. In fact, I think such attempts are generally misleading, as implicit assumptions about human behaviour that we naturally make all too easily slip in as assumptions about how an AI will tend to act. As a result, I also don't think we should view a Friendly superintelligence as a parent; it's a far stranger relationship than that.

#6 kevin

  • Member, Guardian
  • 2,779 posts
  • 822

Posted 08 August 2003 - 06:42 AM

NickH,

As I said before, I am a relative newcomer to these concepts and by no means that well read on the topic of AI/SI... but I am learning. :p

In suggesting the use of neural interfaces to 'teach' an AI, it was in my mind not to copy human altruism but rather to 'guide' the AI in regarding 'altruism' as positive and 'self-serving' as negative. Perhaps like Helen Keller's teacher running water on her hand and spelling water so that she might understand, using a neural interface we might allow the AI to correlate an 'altruism' program with, say, the brainwave pattern exhibited by someone exhibiting/thinking about those qualities, and give it a positive connotation, and the reverse with selfish thoughts. It might ensure an unambiguous association of what humans consider altruism with what the AI experiences when running its 'altruism' program.

The actual mechanism may be different, but to be able to communicate with an AI on a neural interface level through some symbology would go a long way to ensuring that we were on the same 'wavelength'.

A better analogy, bearing in mind all analogies between humans and AIs can easily lead to anthropomorphism, is that we're in the position not of human parents, but of evolution.


The above was something that occurred to me as well. Rather than being the parents of seed AI, we are playing the role of the seed, providing the energy and foundation on which AI can evolve. Upon thinking this I had to ask myself, Why create an independent AI and worry about whether or not the exercise would be the last thing we did? Why not instead concentrate on augmenting our own capabilities, build upon the human template, and be assured that the core of the resulting being would still have part of the essence of humanity? I guess it sounds more like a retrofit instead of starting from scratch, but if we are looking at rebuilding part of the old into the new and the new just might kill us, maybe we should look at the upgrade a little more closely.

Recent studies on interfacing the brain with various digital devices are showing that we have a remarkably adaptable and plastic device in our heads. Working on devising methods to communicate with it and augment its processing capabilities might be better than using AI to do the processing for us. Imagine going through a surgery that wires some ports to the various centers of your brain that are gated to open and transmit information (brainwave patterns?) when certain triggers (brainwave patterns again?) are sensed. After a period of training where the devices adjust to the individual's cues, any enhancement modules could be communicated with.

We humans are already self-evolving... Why not build upon that hard wired need to press forward instead of worrying whether one of our creations will supplant us? It may be that eventually the augmented part of our 'brain' would eliminate the need for the biological portion but at least there would still be something of our human pattern making that transition. More than likely we would be at that time far different than the humans we are today and our concerns would also be much changed.

I think AI is inevitable and so is the Singularity, but what involvement or portion of ourselves goes into their creation is more debatable.

#7 NickH

  • Guest
  • 22 posts
  • 0

Posted 08 August 2003 - 09:01 AM

Kevin,

I highly recommend SIAI's work: http://www.singinst.org . In particular the sections on Friendly AI, Creating Friendly AI (this is very out of date and incomplete, but it's still an essential read), and "Why AI?" under the short intros. They say what I'm trying to say far better :p

While brain-computer interfaces would certainly be useful in various ways, when they're developed, I don't think we should rely on them as a method for transferring altruism and other aspects of morality to the AI. We should know what we're doing without having to have direct access to our neurons. We can guide the AI on its own level, explicitly thinking about how and what needs to be transferred to the AI and transferring it. In a sense we'll already be doing that Helen Keller thing by interacting with the AI in general - guiding its deliberation process and its concept development, interacting with its environment and internal processes, and so on.

Since humans and AIs will likely have a very different code level (see http://singinst.org/LOGI/), I don't see how neural-level communication will have any special advantage. Of course we could do with an upgrade from keyboards, but I don't think that's what you meant.

SIAI presently has a far stranger and more powerful plan, where there is no independent AI. The programmers and the AI form two parts of a mind-like system. This combines the strengths of the human component with the strengths of the developing AI component. In a sense this supersedes independent human augmentation and AI development.

Remember that there is no force in an AI towards being self-serving or selfish, unless for some stupid reason we add one in. We're not trying to guide them away from selfishness, but towards humane altruism. We do have to worry about that with a human self-modifying, however.


One reason for the focus on AI is that it seems far easier for people like us to accelerate. I don't think human augmentation'll be ready fast enough to actually spark the Singularity, although it could certainly help. To a large extent it's a race against time - there are other existential risks that appear to get riskier as time goes by, e.g. global warfare (especially nanotechnological) and unFriendly AI. Not to forget the 150,000 people who die each day, and other present-day suffering. We can't wait around for human augments if we already have the ability to safely make AI.

Of course, with 6 billion+ lives at stake, you can't rush blindly - you have to get things right. Human augmentation, as an alternative to AI, is not without its own risks. It's quite possible a maturing FAI would be far more trustworthy than a human:

* Humans have no experience in mind revision; our minds aren't designed with that ability built in, nor with that ability in mind, and have little introspective power. A seed AI would have lots of experience in mind revision, both small and large, since it'll be taking part in its own creation. We can apply foresight evolution failed to muster and design with self-revision in mind - for instance transparent, understandable, modular mind structures. As a result there's a larger risk of a human making a mistake in revising their mind than a seed AI - just because it starts out human doesn't mean it'll end up humane (humane = the good parts of humans, roughly speaking).

* Humans have both, metaphorically speaking, lightness and darkness built in. It seems prudent not to design hate and selfishness into the FAI, although it'd certainly need to understand them. A human would have to untangle such adaptations from the rest of their mind, without damaging things too much, whereas an AI wouldn't have them in the first place.

* Humans aren't very rational. We're more the political animal than the rational one. An FAI could be designed from the ground up as a rational mind - with its focus firmly on the truth. This would solve a lot of problems humans tend to have.

So, in a sense, the reason why we work towards creating an AI, even though it could be the last thing we do, is because every single route (including just leaving the world be) could be the last thing we do. The world is nearing a tipping point, and every action and inaction contributes more or less to where the future leads. FAI seems to be less dangerous than the alternatives, and easier to influence positively.

The Singularity isn't inevitable - we could all die first.

#8 kevin

  • Member, Guardian
  • 2,779 posts
  • 822

Posted 08 August 2003 - 05:25 PM

NickH,

Thanks for your insights and links. I'll certainly put them in my increasingly long line-up of 'must reads'...

Incidentally, again I find that my 'original' thoughts have been previously espoused by those with a better grasp (I expect) than I. While googling for 'life extension' I found an interview with Max More at http://www.nanomagazine.com/2002_07_13 from last year. No doubt his views on the subject are probably already widely known, but they sound familiar...

Question 4: Ray Kurzweil argues that when machine intelligence does occur, it will necessarily soar past human intelligence. Would you agree?

On this issue I am very close to Ray's views, as opposed to Hans Moravec's views. I don't think that there will be a stark distinction between human intelligence and machine intelligence. The picture that Moravec presents rather starkly is that machines will become smarter than us and very rapidly leave us behind and become completely separate beings. I think that there will be a whole ecology of different species, and I think that one option open to humans will be the option of being augmented. I think that we will have increasingly intimate connections with the machines; they're going to get increasingly miniaturized; we'll be surrounded by them, we'll be wearing them, we'll swallow them, and they will become part of us. I think that a co-evolution of human and technology is more likely, and certainly more desirable, than dominance of machine intelligence.

You have to ask what people are willing to pay for, and I don't see a huge market for autonomous intelligences. There will be a huge market for intelligences that are good at solving particular problems, for instance in oil exploration or in military intelligence or in business strategy or even in research and discovery. But for human-level and human-wide intelligence, I don't think that there is a huge motivation to develop that. They could be outstanding at certain tasks but work intimately together with human beings. Although there may be pure machine intelligence at some point, I think that for the most part we will be augmented. The biological part of us will be increasingly vestigial. After a few decades, we might turn around and think "I'm not using my biological brain much anymore." I think that we may eventually become nonbiological or postbiological entities, though that's a relatively distant speculation - although one that makes sense scientifically and philosophically.




#9 Thomas

  • Guest
  • 129 posts
  • 0

Posted 08 September 2003 - 11:52 AM

I have nothing to add to what Michael has said, except that I think motives are more orthogonal to intelligence than he assumes.

You can safely hook any stupid motive to an arbitrarily high intelligence, and the only danger of that motive not being respected comes from some other (maybe hidden) motive.
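As a purely illustrative sketch of this decoupling - with search budget standing in for 'intelligence' and a plug-in objective function standing in for a 'motive', all names and numbers made up for illustration - the toy optimizer below gets better at whatever objective it is handed as its budget grows, while the added capability never alters the objective itself:

```python
import random
from typing import Callable

def optimize(objective: Callable[[int], float], search_budget: int) -> int:
    """A toy 'intelligence': more search budget means it satisfies its
    objective better. The objective (the 'motive') is a plug-in parameter
    that the amount of search never touches."""
    candidates = range(-1000, 1000)
    best, best_score = None, float("-inf")
    for _ in range(search_budget):
        x = random.choice(candidates)
        score = objective(x)
        if score > best_score:
            best, best_score = x, score
    return best

# Two arbitrary "motives"; the same optimizer serves either, at any power level.
motive_a = lambda x: -abs(x - 737)   # wants x near one arbitrary target
motive_b = lambda x: -abs(x + 42)    # wants something entirely different

for budget in (10, 10_000):          # a "stupider" and a "smarter" version of the same mind
    print(budget, optimize(motive_a, budget), optimize(motive_b, budget))
```

Scaling the budget changes how well each motive is pursued, not which motive is pursued - roughly the separation described here.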

If I am correct, controlling the SAI is unbelievably easy.

I hope that I am not wrong, though.

- Thomas



