  LongeCity
              Advocacy & Research for Unlimited Lifespans





I Dunno.


8 replies to this topic

#1 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 15 September 2003 - 08:23 AM


...

Edited by Jace Tropic, 07 November 2003 - 09:33 AM.


#2 celindra

  • Guest
  • 43 posts
  • 0
  • Location:Saint Joseph, TN

Posted 15 September 2003 - 09:57 AM

My, oh, my.

> I know I’m not adding anything profound. But I don’t care. Anyone halfway paying attention has probably already figured out I use illicit drugs.

Hey, let's not disparage drugs here with terms such as "illicit."

> I think AI or GAI or whatever should be created with one thing in mind: choices—ours. We should create inanimate servants that figure out how to allow us to live as long or as little as we want, to learn as much or as little as we want, and to do as much or as little as we want.

Agreed.

> What, so it’s better to be robocentric?

Only if you are schizophrenic and have no connection with your own self.

> What if I and other people said that all we wanted was to live on Earth until the Sun begins heating up, and that we wanted some robots to pick us up and drop us off in the next star system conducive to life, and so on?

I'd say go for it. Sounds like a preferable situation compared to our current existence.

> My vision for AI is to give me freedom—the way I define freedom—not the way anyone wants to define it for me—and designer governances for everyone else to decide for themselves what freedom means to them.

It sounds as if you've encountered some of the Friendly AI camp. [sfty]

Look, the way I define freedom is me doing whatever the hell I want, whenever the hell I want, and for whatever reason I want. I'm sure you'd define it the same way. Unfortunately, the only way you can guarantee that is to become that first superintelligence. Otherwise, you're at the mercy of a superior intellect with who-knows-what powers.

> *very deep, sad sigh* I jes wanna die.

Please don't. It's so messy.

#3 kevin

  • Member, Guardian
  • 2,779 posts
  • 822

Posted 15 September 2003 - 01:28 PM

A good post to my mind, Jace, underlining how proponents of AI may need to improve their message. Many feel that shedding the shell of biologically governed impulses would be of benefit. I personally do not feel it's possible to create a 'self-aware' entity that doesn't include a capacity for error. If the Singularity becomes omnipotent and all-knowing, well, then I would guess there would be little point in continuing our journey; we would have hit the jackpot. I don't like it when someone spoils a movie by telling me the ending, and I don't want to skip over all the juicy parts. Given the choice, I'll take the simulation, thank you... unless there was something better than... th-th-th-that's all folks...

> To say that the Singularity will pretty much take care of itself after some initial directive; that we cannot whip superintelligent, albeit fundamentally dead, objects into acting on our behalf; that we will be creating superintelligences, yet cannot exhale and be complacent because mere, useless humans will somehow still need to be on high-alert status; and that we are being anthropocentric otherwise, is downright ludicrous.

I think it would be difficult for anyone to say that the possibility of the Singularity is zero; consequently, complacency isn't an option and a high alert is necessary. Our technologies thus far have created many wonderful life-extinguishing agents that are barely leashed by our current political systems, and AI will be no different, especially as the military will be one of its chief users. High alert is totally called for... sadly.

> There is every reason to be cautious and diligent every step of the way until AIs become the ultimate problem solvers; for it couldn’t happen otherwise.

...which is, I think, one of the most important messages the AI camp should be focusing on for us. Uploading and the like are vapourware until even low-level AI is accomplished, and the techniques are only just being discovered and applied now with inferior technology. The threat is not imminent. It would be wise, however, to be able to recognize a threat as it becomes more plausible, and with technology advancing at a furious pace, it is not difficult to see how developments in both programming and hardware might come together to produce a real threat without much advance warning.

I"m in total agreement that AI should be designed with one goal in mind, the advancement of 'our' patterns... not for the creation of another 'life form' that might threaten our existence. Things have a way of slipping out of our grasp however, especially if inherent in the design is the necessary randomness that a self-evolving entity must incorporate in order to advance itself. Vigilance should be maintained until we know more what we're dealing with and describe consciousness in order to better control it.


#4 Mechanus

  • Guest
  • 59 posts
  • 0

Posted 15 September 2003 - 01:56 PM

> I think AI or GAI or whatever should be created with one thing in mind: choices—ours. We should create inanimate servants that figure out how to allow us to live as long or as little as we want, to learn as much or as little as we want, and to do as much or as little as we want.

First off, I'm not sure who you mean by "ours". If you mean biological humans or a specific subset of biological humans, then I can't agree. All sentience should be included -- including only humanity is as arbitrary as including only men with beards, or the Luxembourgian people.

To be sufficiently effective at solving problems and at judging what we actually want, an AI should be at least as intelligent as humans and have their capacity for moral thought; otherwise you end up either with an incompetent AI (= AS) or with malevolent-genie-type scenarios, where the letter instead of the spirit of the instructions is obeyed. If such a being helps humans, it will be of its own free will, and it should be treated with the same respect as a human.

> that we will be creating superintelligences, yet cannot exhale and be complacent because mere, useless humans will somehow still need to be on high-alert status

I'm not sure I see what you mean there. Before superintelligences exist, it's a good idea for humans not to be complacent, because so many things could happen, some very good, some very bad, and because humans are the only ones around to bring these events about. After superintelligences exist, it may be a good idea for humans to be complacent -- if they're still alive, then that probably means the superintelligences value the continued existence of humans for its own sake, and there is no threat.

> and that we are being anthropocentric otherwise, is downright ludicrous. What, so it’s better to be robocentric? Fuck that.

That doesn't follow. There's no reason to be either anthropocentric or robocentric.

Of course, it's better to robomorphize robots than to anthropomorphize them, which is something else, having to do with how we imagine their minds to be and how we predict they will act. Robots don't behave like humans, they behave like robots -- they just don't behave like stereotypical sci-fi robots, which is why the word "robot" should probably no longer be used for superintelligences. (Also, there's the fact that "robot" describes the body of such a being more than the mind, which is relevant here. "AI" is better, except that for transhuman intelligences it need not matter whether they started out as metal (artificial) or as meat (natural), and a whole new set of intuitions not captured by the word "AI" is needed.)

> So maybe it’s likely that it’s inevitable that we will be uploaded into AI systems; and therefore, the best choices we can make today are those that recognize all inevitabilities and make the best of them.

There is no direct logical connection between the Singularity and everyone being uploaded. The creation of transhuman intelligence, by accelerating technology much more quickly than we're used to, would make uploading possible, assuming it can be done at all. If a superintelligence does not have your welfare in mind, though (say, if all it wants is to make park benches), then it will almost certainly have more use for people as recycled computing matter than as the same minds they used to be. If it does care about your welfare, it will not upload you if you don't want to be uploaded (unless that turns out to be the obvious fair and humanitarian thing to do, which I assume is not your opinion) -- otherwise, the AI designers will have failed (or humanity will have been doomed to extinction ever since someone picked up a rock to use as a tool).

> Well, I think making the best of prospective smarter-than-human intelligence is to simply aim at making smarter-than-human problem solvers—nothing more. Humans already inspire enough problems to solve. We don’t need any better-than-human thoughts to think of bigger problems.

How do you solve problems without thoughts, though? And why is it a bad thing to think of bigger problems, if they exist? Thinking about big problems normally doesn't create big problems. Asteroids might still hit the Earth if we never thought about them, for example, but we might prevent this by thinking hard about them, developing the necessary technologies, etc. (Actually, I have no idea whether that's a realistic example, but you get the idea.)

If you mean problems peculiar to superintelligent beings (say, the speed of light limiting one's brain size), why would it be a bad thing to think of these problems -- if you choose to stay humanly intelligent, why should others be prevented from encountering problems on their way through posthumanity?

> But all everyone—everyone—really wants is choices and today’s problems, such as death and violence and suffering, eradicated now.

How can you speak for everyone? Giving people choices sounds good, eradicating death and violence and suffering sounds good, but why couldn't people want more (personal growth, or unending pleasure, or solving fundamental questions of science and mathematics, or preventing the death of the universe, or something)?

> Are we wrong to have designed smarter-than-human problem solvers, not illusionary feelers demanding liberty, to give us freedom not only to indulge in infinite knowledge and awesome technology, but also to be able to say, “Well, jeez, I would really like my life right now if only people would just get along, if people weren’t always so miserable, if the standards of living for everyone ranged from at least very comfortable on up, and if I could do trivial things without the underpinning requirement that I must be one of the economy’s whores in order to survive”?

That seems quite compatible with most or all of the ideas I've seen on AI ethics. Living on Earth 2.0 for the next few billion years is certainly not what I would want; there is no reason, though, why posthuman incomprehensibilities and a utopian cishuman world couldn't exist in the same reality.

I doubt it's at all possible to create an AI able and willing to solve problems and to understand what we want, but not able or willing to uplift to posthumanity those of us who like that sort of thing.

> My vision for AI is to give me freedom—the way I define freedom—not the way anyone wants to define it for me—and designer governances for everyone else to decide for themselves what freedom means to them.

That doesn't sound very different from what, for example, SingInst is trying to do (though keep in mind that the idea with Friendly AI is not to try for one particular kind of outcome without the possibility of anything different -- the idea is to try for an AI that can make humane moral choices, and these humane moral choices are expected to lead to certain kinds of outcomes).

"Freedom, the way I define freedom" is a concept that requires a lot of careful thought, though. What if someone defines himself to be free to blow up the universe? I hope you agree this should be prevented. And I'll bet that from any simple, reasonably well-defined axiomatization of what humans should or should not be free to do, one can deduce consequences you and (almost?) everyone else would see as horrors. I think this really is a complex problem that requires a fully functional, intelligent, humane mind rather than a simplified thought-slave.

(Note: usual disclaimers on unforeseen barriers apply)

#5 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 15 September 2003 - 02:25 PM

This thread on "AI, Slavery, and you" overlaps this discussion quite nicely and I suggest a general review.

#6 patrick

  • Guest
  • 37 posts
  • 0

Posted 15 September 2003 - 03:45 PM

DeSade: I have no wish to live in anyone's perfect world but my own.

King Mob: Exactly. That's why we're trying to pull off a trick that'll result in everyone getting exactly the kind of world they want. Everyone including the enemy.

- The Invisibles, Say You Want a Revolution

#7 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 16 September 2003 - 01:14 AM

I entirely agree with what Mechanus has to say. A few points:

1) Thinking of Singularity stuff as directly related to AI is somewhat misleading. The Singularitarians started off by saying "hey, if we were smarter, or knew people who were smarter than any human who has ever lived, then we could solve a lot of problems faster". It's simply transhumanism applied to intelligence rather than body structure - quite simple. *Then*, after careful analysis, some of us said "wow, it looks like there's a very high chance AI will be here before IA, regardless of what anyone would prefer, so we'd damn well better make sure the first AI has enough moral structure to coexist with us peacefully and improve the lives of ALL sentient beings". Only after that was the Singularity Institute formed. If it turned out tomorrow that the chances were very good that IA would come first, then I, Eliezer, and all the other serious Singularitarians would put all the AI stuff on hold and pursue the quickest, safest path to the Singularity through IA.

2) I agree that transhuman intelligence, whether that be me, a Friendly AI, or someone on this forum, should improve verself with our wishes in mind. But guess who "our" should be: all sentient beings, including cyborgs, humans, AIs, many vertebrate animals, and anything else that turns out to have qualia.

3) A transhuman intelligence wouldn't "automatically" "take care of itself" after "some initial directive" (i.e., its creation), but the caretaking process is too complicated for us to peek into, in the same way that a monkey wouldn't be much help in advising what actions I should perform throughout the day. In order to see Singularity issues clearly, we need to understand how utterly dumb we would be in comparison to even a weakly transhuman intelligence.

4) There's nothing wrong with finding new problems to solve. Problem-solving is what fun is all about, and (as Mechanus indicated) the problems will be there whether we choose to pay attention to them or not.

5) A robot is not an AI. An AI is not a superintelligence. A correctly built AI would be one of us, and it wouldn't be very polite (or logical) to keep alienating it by calling it an "AI" continuously, in the same way that we don't continuously refer to someone's race, descent, or clothing while we're talking to them or about them.

#8 Sophianic

  • Guest, Immortality
  • 197 posts
  • 2
  • Location:Canada

Posted 16 September 2003 - 02:08 PM

Take heart, Jace.

There is Nothing Wrong With Humanism

(Comment: this essay is as "beautiful" as it is brilliant)

An excerpt ...

"We live in a time that is deeply pessimistic about the human condition. For many people, human activity and human reason are themselves the sources of most of the ills of the world. Half a millennium ago, Descartes viewed reason as ‘the noblest thing we can have because it makes us in a certain way equal to God and exempts us from being his subject.’ Today, many view human reason as a tool for destruction rather than betterment. As the biologist David Ehrenfield put it in his emblematically titled book The Arrogance of Humanism, what he objects to is ‘a supreme faith in human reason – its ability to confront and solve the many problems that humans face, its ability to rearrange both the world of Nature and the affairs of men and women so that human life will prosper.’

There is a widespread feeling that every impression that humans make upon the world is for the worse. For many, the attempt to master nature has led to global warming and species depletion. The attempt to master society, many argue, led to Auschwitz and the gulags. The result of all this has been a growth of anti-humanism, of despair about human capacities, a view of human reason and agency as forces for destruction rather than for betterment.

A prime expression of such pessimism is the denigration of the human subject and of human agency. Historically, humanism - a desire to place human beings at the centre of philosophical debate; a view of human reason as a tool through which to understand both the natural and the social world; a conviction that humankind could achieve freedom, both from the constraints of nature and the tyranny of other humans, through the agency of its own efforts - was the philosophy at the heart of both the scientific revolution and the Enlightenment. Today, though, such a view is often dismissed as arrogant, naive, even irrational."

#9 bacopa

  • Validating/Suspended
  • 2,223 posts
  • 159
  • Location:Boston

Posted 24 September 2003 - 01:31 AM

I agree completely, Sophianic: we live in an age that is afraid of itself. Human beings literally don't want to "think" too much for fear of screwing things up. We're increasingly reliant on automation and technology, which of course all of you guys seem to be as well, but for better reasons. It's sad that we can't stop thinking of ourselves as flawed creatures. The denigration of the human subject and agency is happening everywhere now: college, work, you name it. It would be nice to appreciate the greatness of normal human beings as well as augmented ones.



