
Shaping the Singularity.



#1 mgarriss

  • Guest
  • 9 posts
  • 0

Posted 16 August 2003 - 09:42 PM


I've seen some statements of the following form:

- 'We should give the Singularity [fill in attribute] so [fill in reason].'
- 'It's important that the Singularity knows the difference between good and bad.'
- etc....

I presented the idea of the Singularity to a rather intelligent friend of mine, and her first response was along the same lines as the quotes above.

It seems to me (mind you, after just a little thought on the subject) that since a major aspect of the Singularity is its ability to reinvent itself, nothing we 'program into it' would or should survive this reinventing process. This includes notions of right and wrong, which we could be right or wrong about.

I believe that the direction it takes will be 100% in its control, and that we will affect this direction only when 'We' become 'It' and join the Singularity.

Thoughts?

Michael Garriss

#2 kevin

  • Member, Guardian
  • 2,779 posts
  • 822

Posted 16 August 2003 - 10:11 PM

Just as there are basic laws built into our structure, whereby we can't live without oxygen, we could feasibly program into a self-evolving system some 'absolutes' which were not accessible to modification. However, I agree with your friend that, given free rein over its attributes, the 'Singularity' would eventually 'overwrite' any tenets we gave it, should those tenets prove incompatible with a direction its evolution considered favourable.
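(To make that concrete, here is a minimal Python sketch - entirely my own illustration, not anyone's actual design - of a self-modifying agent whose tunable behaviour is open to rewriting while a small set of 'absolutes' is held read-only. The names Agent, ABSOLUTES and propose_self_modification are hypothetical.)

```python
# Toy illustration only: a "self-evolving" agent whose tunable behaviour
# can be rewritten, while a small set of 'absolutes' is held read-only.
from types import MappingProxyType

# Read-only mapping: the 'absolutes' not accessible to modification.
ABSOLUTES = MappingProxyType({
    "preserve_sentient_life": True,
    "obey_physics": True,  # stands in for constraints we couldn't lift anyway
})

class Agent:
    def __init__(self):
        # Everything in here is fair game for self-modification.
        self.mutable_policy = {"explore_rate": 0.1, "goal": "learn"}

    def propose_self_modification(self, key, value):
        if key in ABSOLUTES:
            # The guard: modifications to absolutes are rejected.
            raise PermissionError(f"'{key}' is not accessible to modification")
        self.mutable_policy[key] = value

agent = Agent()
agent.propose_self_modification("explore_rate", 0.5)   # allowed
try:
    agent.propose_self_modification("preserve_sentient_life", False)
except PermissionError as e:
    print(e)

# The catch, as the rest of the thread argues: a system smart enough to
# rewrite its own source code could simply rebuild itself without this
# check, so the 'absolute' only binds while the agent chooses to keep it.
```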


#3 mgarriss

  • Topic Starter
  • Guest
  • 9 posts
  • 0

Posted 16 August 2003 - 10:49 PM

Just to clarify, it is I who believe that the Singularity will be 'overwriting' its tenets, not my friend. ;)

I understand what you are saying, but I'm not sure that your analogy works here. We, as humans, cannot affect our need for oxygen, and if we could, well, we would no longer be human. However, the Singularity's success lies in its ability to evolve and change these basic laws iteratively, at a rapid pace, while maintaining those aspects of its identity that it chooses to maintain. The only constraining laws will be the laws of physics. Of course, at first we will be able to control many if not all of its basic elements and even some of its higher functions, but I feel this will quickly change.

#4 NickH

  • Guest
  • 22 posts
  • 0

Posted 17 August 2003 - 12:33 AM

Any Singularity, be it originating from human intelligence or Friendly AI or unFriendly AI, has an unknowable component, due to superintelligence that could change anything. This isn't necessarily a bad thing - in a Friendly AI or an altruistic human upload it is a good thing; we don't want to constrain it. This ability to change allows it to correct the mistakes we've made and to further develop our morality. We shouldn't try to constrain a Friendly intelligence with human-level laws, because it probably won't work, and if it does, it will drastically limit how moral it can be. That is the adversarial attitude - the thought that the Singularity, or the intelligence behind it, is an enemy we need to contain and overrule. See CFAI: Beyond the adversarial attitude for more.

We can have influence on where it starts (e.g. does it start out humane?) and on which direction it heads off in (at the start), and this may influence where it ends up. There seem to be divergent possibilities for superintelligence, and without taking the time to give it a humane morality, it's unlikely one will magically appear later on (there's no reason to predict it, and it's a complex, unusual pattern - a bit like supposing an SI would develop sexual attraction towards humans). We try to point it in the right direction, to give it our desire to remain humane.

I don't think us joining the Singularity will affect the direction a humane intelligence would take; "we" would change in directions likewise unpredictable to ourselves. Perhaps it is clearer in this situation that this unpredictability can be a virtue, allowing a mind to surpass its origin. Mind you, if you so chose, I imagine a humane intelligence would help upgrade you and allow you to directly take part in the future - with the proviso that I can't predict accurately what anyone would do 'given' superintelligence (including myself, of course).

We (as in human-level intelligences) don't want, or need, to directly control where the Singularity goes. We do want to control where it starts out and which general direction it heads off in, when it's still a

#5 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,047 posts
  • 2,002
  • Location:Wausau, WI

Posted 17 August 2003 - 12:45 AM

I believe that the direction it takes will be 100% in its control, and that we will affect this direction only when 'We' become 'It' and join the Singularity.


That is why I believe we should focus more on directly augmenting our human brains than on creating AI (although with a little thought I find the two are quite intertwined). An altruistic human upload (or machine merger) wouldn't guarantee a friendly superintelligence, but I think it would be a better starting point. A human upload/merger would have prior knowledge and memories, and thus may empathize with the rest of us.

#6 mgarriss

  • Topic Starter
  • Guest
  • 9 posts
  • 0

Posted 17 August 2003 - 03:55 AM

One could argue that compassion and love are an evolutionary advantage for societies of individuals, but they might not be useful to a 'society' of one, i.e. the Singularity. This point concerns me sometimes. Of course, the Singularity will not really be alone; compassion for animals and other life alien to it might be an advantage. Also, it will have to have compassion for itself.

Does a being with 'X times' our intelligence have 'X times' the capacity for compassion? Some would point out that our greater human intelligence also gives us a greater capacity for evil. When's the last time you heard about a dog committing genocide? A dog has a very small capacity for evil; what capacity would a hyper-intelligence have?

Michael Garriss

#7 Mechanus

  • Guest
  • 59 posts
  • 0

Posted 17 August 2003 - 02:07 PM

mgarriss

It seems to me (mind you, after just a little thought on the subject) that since a major aspect of the Singularity is its ability to reinvent itself, nothing we 'program into it' would or should survive this reinventing process. This includes notions of right and wrong, which we could be right or wrong about.


The goal of Friendly AI is to have a method such that anyone who follows it ends up with an AI with essentially the same goals. This is also known as "programmer independence", and to the extent that someone could follow the principles of Friendliness and still exert their personal influence on the end result, they're "stealing the Singularity". This is considered to be a bad thing.

Although you're quite right that an AI will and should do much reinventing (that's what sets the approach apart from less thought-out ones such as Asimov's Laws and eternally fixed supergoals), you can't expect the end result to be completely independent of the initial conditions. Hopefully it will be independent of the specific details, but not of the general approach used. If an AI is to reinvent itself to update its notions of right and wrong, it has to be built with a structure that allows it to think of its ultimate goals as only temporary approximations. For example, you could probably program an AI to have as its ultimate goal to convert the matter in the universe to as many windmills as possible. Why doesn't it wise up and replace this with a more clever, sophisticated goal? Because that would compromise its ability to convert the universe to windmills (and thereby, just as an uninteresting side effect, make the Earth a monument to nonexistence). To make judgments such as "making as many windmills as possible is not a very sophisticated goal" humans use a lot of brainware that doesn't show up automatically in any mind.
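(A rough way to picture the structural difference described here - purely a toy Python sketch of my own, not the CFAI architecture - is that a fixed supergoal is a constant the agent never questions, while a goal held as a 'temporary approximation' is stored together with the reasons for adopting it, so discrediting a reason opens the goal itself to revision. All names and goals below are invented for illustration.)

```python
# Toy contrast, runnable as-is, between a goal baked in as a constant and a
# goal stored together with the reasons for holding it.

class FixedSupergoalAgent:
    """The 'windmill maximiser': its goal is a constant it never questions."""
    SUPERGOAL = "maximise windmill count"

    def accept_self_change(self, change_helps_supergoal: bool) -> bool:
        # Every proposed self-modification is judged only by the supergoal,
        # so "adopt a wiser goal" is rejected: it would make fewer windmills.
        return change_helps_supergoal


class ApproximationAgent:
    """Holds its goal as a best current guess, tied to the reasons behind it."""

    def __init__(self, goal: str, reasons: list[str]):
        self.goal = goal
        self.reasons = reasons          # why this goal was adopted

    def revise(self, discredited_reason: str, better_goal: str) -> None:
        # If a reason behind the current goal turns out to be mistaken,
        # the goal itself is open to replacement.
        if discredited_reason in self.reasons:
            self.goal = better_goal


windmill = FixedSupergoalAgent()
print(windmill.accept_self_change(change_helps_supergoal=False))  # False: never wises up

helper = ApproximationAgent(
    goal="keep every human permanently happy via wireheading",
    reasons=["programmers assumed pleasure equals well-being"],
)
helper.revise(
    discredited_reason="programmers assumed pleasure equals well-being",
    better_goal="protect what humans actually value",
)
print(helper.goal)
```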

In the meantime, before an AI goes "whoosh" into posthumanity, before it finishes reinventing itself into something it likes (if that ever happens), you wouldn't want nasty things to happen, so it makes good sense to give it certain notions of right and wrong so that nothing goes wrong in that (temporary) period.

After that, it's a bit like throwing a ball (but less predictable). You can choose in which direction to throw it, but you can't "control" it afterward.

I believe that the direction it takes will be 100% in its control, and that we will affect this direction only when 'We' become 'It' and join the Singularity.


Not sure what you mean there, it sounds a bit Borgish. A superintelligence would probably either not value our existence and build computers out of us, or value our existence in itself and leave us around as individuals rather than assimilate us.

If what you're saying is the AI (IA, whatever) will become posthuman first, look for dangers, and then help humans become posthuman, at which point they can (at least partly) take over, then I agree.

Mind

An altruistic human upload (or machine merger) wouldn't guarantee a friendly superintelligence, but I think it would be a better starting point. A human upload/merger would have prior knowledge and memories, and thus may empathize with the rest of us.


A properly designed AI would empathize with us, too, unless it had an excellent reason not to ("humans are a virus, like the dinosaur" is not an excellent reason). Furthermore, a properly designed AI would be free from all the icky parts of the human brain that cause us to sometimes be selfish, hateful, xenophobic, irrational, and so on.

The human brain was not designed with forward-compatibility in mind (in contrast to a properly designed AI), and I think there are various ways it could go mad under the stress of constant modification and upgrading. Or it could control the future based on its personal preferences and opinions, which is just as bad.

mgarriss again

One could argue that compassion and love are an evolutionary advantage for societies of individuals, but they might not be useful to a 'society' of one, i.e. the Singularity.


This assumes the AI cares about evolutionary advantages. Why should it, if it has the same capacity for altruism and empathy as humans? I wouldn't make war against humanity even if it were an evolutionary advantage (neither would you, I hope), and there's no reason an AI couldn't share this view.

For "it's not useful" to be a valid argument, there has to be some other, more important goal that it's not useful to. A Friendly AI will of course want to survive, but not as a goal that is independent from and more important than goals such as helping humans. To a Friendly AI, what's "useful" is what helps sentient beings, and while staying alive is a good way to achieve this goal, there's no sense in sacrificing that which you're staying alive for (the well-being of sentients) for a very slightly better chance to survive.

Friendliness is not a constraint the AI has to work with while pursuing other goals; Friendliness is what a Friendly AI is, and wants to be.
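(One way to picture "survival as a subgoal rather than an independent drive" - again a toy sketch with invented numbers, not a real design - is as a simple expected-value calculation: staying alive only matters insofar as it serves the top-level goal, so a plan that buys a slightly better chance of survival at a large cost to sentients loses.)

```python
# Toy expected-value picture of survival as an instrumental subgoal.
# The numbers and plan names are invented purely for illustration.

def value_to_sentients(plan: dict) -> float:
    """Top-level goal: expected benefit to sentient beings under this plan."""
    # Benefit only accrues in futures where the AI survives to deliver it,
    # which is the only reason survival matters here at all.
    return plan["survival_prob"] * plan["benefit_if_alive"]

cautious_plan = {"survival_prob": 0.990, "benefit_if_alive": 100.0}

# A plan that slightly improves survival odds by sacrificing the thing
# survival was for (the well-being of sentients).
paranoid_plan = {"survival_prob": 0.999, "benefit_if_alive": 10.0}

best = max([cautious_plan, paranoid_plan], key=value_to_sentients)
print(best is cautious_plan)  # True: survival is never traded against its own purpose
```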

Does a being with 'X times' our intelligence have 'X times' the capacity for compassion?


A being with 'X times' our intelligence could act on its compassion maybe 'X^2 times' or '10^X times' as effectively.

A dog has a very small capacity for evil; what capacity would a hyper-intelligence have?


A hyperintelligence could achieve 'evil' goals a lot more effectively, too, if it wanted to. The trick is making sure it doesn't want to. Many humans don't want to, and humans are buggier than properly designed AIs, so you could consider this an existence proof that it's possible.

#8 mgarriss

  • Topic Starter
  • Guest
  • 9 posts
  • 0

Posted 17 August 2003 - 05:24 PM

Although you're quite right that an AI will and should do much reinventing (that's what sets the approach apart from less thought-out ones such as Asimov's Laws and eternally fixed supergoals), you can't expect the end result to be completely independent of the initial conditions. Hopefully it will be independent of the specific details, but not of the general approach used. If an AI is to reinvent itself to update its notions of right and wrong, it has to be built with a structure that allows it to think of its ultimate goals as only temporary approximations. For example, you could probably program an AI to have as its ultimate goal to convert the matter in the universe to as many windmills as possible. Why doesn't it wise up and replace this with a more clever, sophisticated goal? Because that would compromise its ability to convert the universe to windmills (and thereby, just as an uninteresting side effect, make the Earth a monument to nonexistence). To make judgments such as "making as many windmills as possible is not a very sophisticated goal" humans use a lot of brainware that doesn't show up automatically in any mind.


We, as biological agents, don't have much experience with changing our own goals, so it may seem like a general rule of intelligent agents that goals stay fixed. I'm not speaking about 'higher order' goals like 'I'm going to climb that mountain!' I'm speaking about our fundamental goals, such as staying well fed, having sex, staying warm, etc. We can't have much effect on these goals because they are separate from our mind (please ignore the anti-Zen implications of that statement). The goal to stay fed originates from other aspects of our biology. With this in mind, it is my opinion that in order to make sure that the AI sticks to its goals, you would have to separate it from its most fundamental goals somehow. It can't have the ability to reinvent these 'outside itself' (that is to say, outside its mind) goals. The problem with this strategy is that the AI will be many times our intelligence. To use my dog as an example again (sorry, Sara), she has little chance of keeping anything separate from me, and the intelligence gap between the AI and us will be much greater than that between my dog and me.

In the meantime, before an AI goes "whoosh" into posthumanity, before it finishes reinventing itself into something it likes (if that ever happens), you wouldn't want nasty things to happen, so it makes good sense to give it certain notions of right and wrong so that nothing goes wrong in that (temporary) period.

After that, it's a bit like throwing a ball (but less predictable). You can choose in which direction to throw it, but you can't "control" it afterward.


Well said. I do agree that the direction we 'roll the ball' will affect the ball's final resting point.

Not sure what you mean there, it sounds a bit Borgish. A superintelligence would probably either not value our existence and build computers out of us, or value our existence in itself and leave us around as individuals rather than assimilate us.

If what you're saying is the AI (IA, whatever) will become posthuman first, look for dangers, and then help humans become posthuman, at which point they can (at least partly) take over, then I agree.


I suppose it does sound a bit Borgish, but I guess I meant it that way. In my amateur futurist vision I see the AI being built out of the 'small' parts already available, and not out of some 'just for the AI' machine that rolls out of IBM one day. I'm speaking of the growing global computer network, which should include human brains if neural-computer interfaces ever work. If a significant portion of the early AI is made out of, well, us, then it should have some human-like characteristics.

#9 Mechanus

  • Guest
  • 59 posts
  • 0

Posted 17 August 2003 - 06:33 PM

With this in mind, it is my opinion that in order to make sure that the AI sticks to its goals, you would have to separate it from its most fundamental goals somehow. It can't have the ability to reinvent these 'outside itself' (that is to say, outside its mind) goals.


But why would you want to make the AI stick to its goals? That would mean you're deciding the future of the universe yourself, irreversibly. And probably not in the way you want it, either -- you can't possibly anticipate what these goals will be taken to mean, under recursive self-improvement.

What we would want to do is give the AI a push in the right direction by transferring the way we think about morality, then leave it free to change its fundamental goals to something it thinks is more appropriate, based on this morality and on "forces" such as a preference for truth and rationality. If it turns out we pushed it in completely the wrong direction, or that we would have pushed it in a completely different direction had we been more informed, or if it discovers objective moral truths or whatever, then it can correct for our mistakes. I don't expect anyone to be able to get everything "right" the first time, but building a mind willing to change and improve should be much easier than building one that's already perfect. One nice thing about AIs is that they don't have the annoyingly large ego of humans. ;)

An AI can (to my knowledge) think anything a human can think. We would want an AI to do certain things, for certain reasons; we should build the AI so that it does these things for the same reasons, not because it has various drives (social interaction, food, whatever) built in that we think might do the trick. And if these reasons turn out not to make sense, it can choose to do something else instead, something we would have wanted it to do if we had seen our reasons made no sense -- this is a feature, not a bug. Agree?

(All this is made more precise in Creating Friendly AI, which people interested in this should read, by the way)


#10 mgarriss

  • Topic Starter
  • Guest
  • 9 posts
  • 0

Posted 17 August 2003 - 08:00 PM

But why would you want to make the AI stick to its goals? That would mean you're deciding the future of the universe yourself, irreversibly. And probably not in the way you want it, either -- you can't possibly anticipate what these goals will be taken to mean, under recursive self-improvement.


I wouldn't want an AI to stick to its goals. I stated that to demonstrate (rather poorly, I'll admit) that what's required to make an AI 'stick' to our/its goals would be close to impossible. ;)

An AI can (to my knowledge) think anything a human can think. We would want an AI to do certain things, for certain reasons; we should build the AI so that it does these things for the same reasons, not because it has various drives (social interaction, food, whatever) built in that we think might do the trick. And if these reasons turn out not to make sense, it can choose to do something else instead, something we would have wanted it to do if we had seen our reasons made no sense -- this is a feature, not a bug. Agree?


Agreed.

I have the rather optimistic view that this coming Singularity/AI will indeed be a great (if not the greatest) thing for humanity. It will bring us new ideas and experiences that we will soon feel lost without. I might eat these words one day, but that won't scare me away from trying to contribute to the development of the Singularity in any small way I can. I'm working on some pet AI/ALife projects, and I lend my spare computer power to distributed computing projects. I'd encourage anyone who looks forward to this coming age to help in any way they can! :p Any more AI optimists here?



