• Log in with Facebook Log in with Twitter Log In with Google      Sign In    
  • Create Account
  LongeCity
              Advocacy & Research for Unlimited Lifespans


Adverts help to support the work of this non-profit organisation. To go ad-free join as a Member.


Photo

Is the Singularity technological "racism"?


  • Please log in to reply
18 replies to this topic

#1 randolfe

  • Guest
  • 439 posts
  • -1
  • Location:New York City/ Hoboken, N.J.

Posted 03 April 2004 - 03:08 AM


I doubt I will make any friends by sharing a very personal observation, possibly distorted, which struck me recently.
I was watching a C-Span Book-TV program on anti-semitism. I was suddenly struck by the similarity between the paranoid ideas anti-semites (and those who beleive in other world-control conspiracies) have with those who worry about the singularity.
I'm sure none of those here (apparently about 50%) like such a comparison. However, it stuck me that those who see any supposedly "superior group" manipulating (and controlling or seeking to control) the world share the (in my opinion) paranoid and distorted world view of those of you who see the singularity as a real threat.
What is the real difference between those who see members of The Trilateral Commission or Skull-and-Bones conspirators manipulating the world and those who believe an inhuman "greater intelligence" like the singularity is going to take over the world?
I have to say that I really had a dreadful feeling when this realization leaped out at me. I would be delighted to see the very idea totally demolished by your responses. Until that time, I have to stand by my initial observations. I could have gone into the "singularity is nonsense" thread but I am one of those "Christians"(rhetorical excess) who like the idea of going out and wrestling the lions one-on-one (or one-against-dozens). [lol]

#2 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 03 April 2004 - 06:07 AM

Randy, for me it's very simple. I can not bring myself to feel comfortable about creating anything that will be very much smarter than any human is today. If we're going to do it, I think, it has to be done right the first time. Otherwise, we're all screwed.

Please imagine such a super intelligent entity walking around with the power to do any kind of hocus pocus, perhaps unfortunately at the expense of us -- we silly little beef-bag bleeding humans.

sponsored ad

  • Advert

#3 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 03 April 2004 - 06:47 AM

Hi Randolfe,

I apologize if I’m not familiar with your views in totality. I know you support cloning and physical immortality. Me too, so I think we’re not extremely distant in our core values.

This is an interesting observation, and I mean that in a good way. Now, I hope I understood you correctly to ask the right questions. Is the Singularity an event we don’t really need to worry about since you believe greater-than-human intelligence is not possible? Or, is the Singularity likely to be more gradually adaptative than anticipated by those who think it can potentially be disastrous? Would you say it’s unnecessary to take certain precautions?

But it also seems like I could interpret your message a slightly different way. Did you mean that the Singularity can potentially be the best thing to happen as long as we’re diligent, and therefore Luddites should be more open-minded and help us to bring about a peaceful Singularity?

Edited by Nate Barna, 03 April 2004 - 06:16 PM.


#4 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 03 April 2004 - 06:49 AM

Please imagine such a super intelligent entity walking around with the power to do any kind of hocus pocus, perhaps unfortunately at the expense of us -- we silly little beef-bag bleeding humans.

*roflmfao*

#5 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,074 posts
  • 2,000
  • Location:Wausau, WI

Posted 03 April 2004 - 01:20 PM

I would say that people being afraid of AI or SI, is rational given human history. Humans in power positions have always used whatever tools at their disposal to keep their power and manipulate the rest of the world. Therefore I can see many people being suspicious of AI researchers, the military, and anyone who wishes to create greater than human intelligence.

The Singularity, however, is an event. It is hard to be suspicious of (or hold feelings of racism toward) an event. By definition, humans cannot comprehend what if anything will happen after a technoligical singularity, thus it is irrational to hold any fear except fear of the unknown.

Of course...good luck trying to communicate that to the masses.

#6 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 03 April 2004 - 06:54 PM

From http://www.digitalki....efs/sysop.html

First One Wins

Please understand that if someone gets to strong nanotech before everyone else, they rule the world. This is not a subject for debate, you can't fight back, there is no passing Go or collecting two hundred dollars. We're talking about a technology that allows the wielder to destroy everything in the world that looks like a weapon if it doesn't contain trace amounts of some extremely rare element, for example. The wielder stockpiles weapons with the element in question mixed in to all the steel, releases the nanotech, and rules the world instantly. And that's a really brain-dead way to do it, too. Turning the Himalayas into self-guided tanks that respond only to your voice seems much smarter.

#7 randolfe

  • Topic Starter
  • Guest
  • 439 posts
  • -1
  • Location:New York City/ Hoboken, N.J.

Posted 03 April 2004 - 08:10 PM

Hi Randolfe,

I apologize if I’m not familiar with your views in totality. I know you support cloning and physical immortality. Me too, so I think we’re not extremely distant in our core values.

This is an interesting observation, and I mean that in a good way. Now, I hope I understood you correctly to ask the right questions. Is the Singularity an event we don’t really need to worry about since you believe greater-than-human intelligence is not possible? Or, is the Singularity likely to be more gradually adaptative than anticipated by those who think it can potentially be disastrous? Would you say it’s unnecessary to take certain precautions?


I certainly don't doubt humans can create "greater than human intelligence". Computers alone have proved that. I also don't take issue with Michael's idea that "the first one wins". The atomic bomb proved that.

What I find difficult about discussions involving the singularity is the assumption (not apparent in the postings above) that this "greater mind" is going to become a force unto itself and possibly enslave the world.

That is where I see a disturbing connection between those who focus and fret about the singularity and those extremists (both left and right) who believe the world is controlled by, or in danger of being controlled by, Trilateral Commissions, big corporate conspiracies, international Jewish plotters, or other entities which are dismissed by most people as figments or the imagination or greatly distorted visions of the power and motivations of existing entities.

I certainly want to develop, control and use AI for the mutual benefit of all. I see the dangers of AI being employed by "bad people" against us or even being mistakenly misused by ourselves. However we may feel about atomic power (for producing electricity) no one fantacizes that it will ever develop "a mind of its own"--except if we accidentally started some continuous nuclear fusion that we could not stop.

#8 randolfe

  • Topic Starter
  • Guest
  • 439 posts
  • -1
  • Location:New York City/ Hoboken, N.J.

Posted 03 April 2004 - 08:18 PM

I see that the concept of "self-empowered" Singularity is indeed incorporated in the above postings. I want to correct that error.

Sometimes, when you read several postings at a time you tend to focus on the one which comes closest to either agreeing with you or misunderstanding you--or both.

#9 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 03 April 2004 - 08:40 PM

However we may feel about atomic power (for producing electricity) no one fantacizes that it will ever develop "a mind of its own"--except if we accidentally started some continuous nuclear fusion that we could not stop.

Randolfe, I think I understand your position a little better, but please clarify if I'm wrong. You seem to imply that AI will always be under human control, even if they develop special modalities we don't possess, but that are there to assist those who represent a larger body of people. Since you believe human beings are generally responsible (perhaps because you think we keep ourselves in check with an endless variety of worldviews?), then it follows that there will be a Singularity just that it won't necessarily get out of control of the collective that generally represents balance and the desire for peace. But your basic assumptions still seem somewhat unclear, at least to me.

#10 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 03 April 2004 - 08:42 PM

The problem with the AI thing is that we have absolutely no choice over whether AI is eventually developed or not. In the absence of any Apocalypse-class setbacks, AI will inevitably be developed within 40 years at the absolute longest, even if we have to fully scan and upload a functioning human brain to do it.

International Jewish plotters, however powerful and mysterious the conspirators paint them, would still be mere humans. AIs need not be human. They can be super-fast, super-smart, super-strong, acquire nanotechnology and disassemble mountains at their leisure, etc.

What you seem to be arguing here is either 1) everyone on Earth shares your distaste of AI, so it is certain it will never be developed and therefore not worth worrying about or 2) AIs, in spite of being machine intelligences with massive cognitive advantages relative to humans, will inevitably flounder, never becoming threats to humanity. Both arguments are weak.

I am baffled at why you do not think a smarter-than-human mind would be capable of taking over the world or threatening humanity. We're talking about minds that will potentially think at speeds billions or trillions of times faster than humans, minds with different innate brainware so they can see solutions to complex scientific and technical problems in the same way that humans can instantly recognize their mothers faces, minds that can come up with better ways to create enhanced intelligence than human engineers ever could. Vast intelligence, indescribably superior than planets upon planets of Ph.Ds, will quickly snowball into still greater forms of intelligence.

Vast intelligence means ample intellectual resources to devote to the problem of consolidating physical power. Humans have certain strength, speed, and intelligence levels. To take over the world, all one needs to do is design physical apparatus that can beat humans on all three counts. I can imagine large classes of physical systems that meet these specifications, and I'm only a human. For example, quickly distributing a fine mist of anthrax spores over the surface of the Earth would be more than enough to eliminate all human life. Transhuman intelligence can and will be a threat - and to argue otherwise is basically to say "I'm a human, humans are invincible, nothing will ever be a threat to me, HA HA HA!"

The Jewish plots whispered about by paranoid conspirators lack a grounding in reality. Scenarios ending in outcomes where transhuman intelligences are created have firm grounding in what we know about intelligence and technology. Arguing that transhuman intelligence will never acquire game-theoretical superiority over Homo sapiens neglects the fact that we do not represent any theoretical upper bound on smartness or physical power in the space of what is possible.

#11 dcube

  • Guest
  • 5 posts
  • 0

Posted 20 April 2004 - 04:13 PM

The singularity will be an event brought about by humans so this topic is worth exploring. Won't the AI have an aversion to all humans considering our track record of fear and hatred of the other? Jewish plotters, Anti-Semites, Christian members of Skull and Bones, etc. Whether the AI terms us as bad or not, we're all up for potential delete. The AI will make a decision about whether we're ALL worth keeping around, whether we have nifty little upgrades in our meat bodies or not. How will that decision be made? Doubtedly around the colour of our skin or our cultural background.

Will AI be tempted by bribes, extreme wealth? All of these things we humans use to gain power and control will mean a whole lot leading up to the singularity.

So what do you envision in this period? Is the hope to upload in the hopes of becoming acceptable to the AI? Is the hope to merge with the AI? Maybe I should check out some other forum topics for that.

But afterwards?

Racism causes destruction and suffering. There must be a certain amount of temptation among some to throw caution to the wind. AI could inevitably be the great equalizer, by means of total bio-annihilation.

#12 NickH

  • Guest
  • 22 posts
  • 0

Posted 20 April 2004 - 10:12 PM

None of these seem necessary features of an AI, nor (I suspect) easily implementable ones. The first question is not "what will the AI inevitably be like", because there is no single AI-nature, but "what should it be like?".

I don't see why an AI would necessarily have an aversion to all humans. How did we pass this on? Why are we passing on human tendencies towards corruption (eg. temptation by bribes)? Why do we want it to treat those humans with more social status or present power any better? You might like http://www.singinst....FAI/anthro.html

The hope, at least in some circles, is to take full advantage of the choices we have in making an AI and make it right the first time. This requires influence over the first transhuman AI built. Try http://singinst.org and http://singinst.org/friendly/ for further thoughts in this direction.

#13 th3hegem0n

  • Guest
  • 379 posts
  • 4

Posted 21 April 2004 - 09:58 PM

Ah, one main point many of you seem to be missing it that, by definition, what the Singularity thinks or what impact it has is COMPLETELY UNPREDICTABLE!

There is no absolute truth concerning whether or not it SHOULD be created. These reasons vary from person to person based on their own purposes. The only truth is that the likelihood of it being created (despite any opposition) is very good. Thus, any philosophical or other work you contribute to the occurence of the Singularity gives you a little more potential control over the unpredictable happenings post-Singularity.

#14 NickH

  • Guest
  • 22 posts
  • 0

Posted 22 April 2004 - 07:10 AM

That's one definition of the Singularity, but not the one either Michael or I am using. The Singularity is, by definition, the rise of smarter-than-human intelligence. It's a word we use to refer to this potential event. Others define it differently, which causes no end of confusion. Certainly the fact that this mind is smarter than us leads to a great deal of unpredictability, but we can still tell the difference between different AI designs. It doesn't mean we don't have to work hard to ensure things work out.

Suppose we create an AI that values human happiness, as measured by the number of smiling faces detected in its visual field. This may seem like a sensible idea until we realise that rather than making people happy in a meaningful way, the AI could simply fill the universe with pictures of smiling faces and equip itself with giant robotic eyes. I'm sure there are other better ways for the AI to achieve the things it values, some requiring blinding insight to discover. Perhaps something strange will happen and you'd get a more meaningful outcome, but you can't count on it.

There are a bunch of different designs that lead to different kinds of default outcome. Just taking the above design (which isn't too concrete, although it can be more so) and replacing "number of smiling faces detected in its visual field" with something different is quite sufficent. Say, "amount of neural excitation in brains areas X summed over all humans" where X is correlated with happiness, which is satisifed by a universe filled with hyperactive brain areas X, among other things.

Friendly AI research involves, in part (see eg. http://www.singinst....ucture/why.html for some other examples), running through failure scenarios like this. Sure, the AI's smarter than us, we can't be sure it'd work like this, but it's foolish to rely on the unpredictability saving the day.

This holds even more so if we're not certain what's good and proper (which I'm not), if we want our AI to be fairly built and representative of humanity rather than any particular humans, if we want it to surpass programmer errors, etc. These don't come for free, even given unpredictability.

#15 th3hegem0n

  • Guest
  • 379 posts
  • 4

Posted 23 April 2004 - 12:38 AM

That's one definition of the Singularity, but not the one either Michael or I am using. The Singularity is, by definition, the rise of smarter-than-human intelligence.


They are the same definition: We can't predict the actions of something smarter than us. If you don't understand that concept go look at various Singularity websites and they will explain it for you. Basically, it has access to a level of comprehension unreachable by humans in their current status, and thus we cannot predict it's actions because they are based on concepts that we cannot understand.

Edit: However, we can predict possible scenarios (such as annihilation) based on reasons we don't understand: but they are just as likely as any other scenario!

#16 kraemahz

  • Guest
  • 157 posts
  • 0
  • Location:University of Washington

Posted 26 April 2004 - 11:51 PM

I think it's a reasonable assumption that the first working AI will have the mind of a child and work up from there (the bottom up approach). As the parents, we could imprint it with the basic mindset of a healthy baby and then teach it as it "grows up" the human values of morality, as parents do for their children. The top down approach seems like it would be much harder (we would need to hardcode everything it means to be human somehow, a level I don't think we're at or will be at in 40 years) and it would be more likely to come out with a less favorable outcome.

Edited by randolfe, 03 May 2004 - 09:49 PM.


#17 baal_zebul

  • Guest
  • 72 posts
  • 0

Posted 27 April 2004 - 08:14 AM

One cannot predict something that one does not know, one cannot develop something that one does not know. That is pure logic and you see that as a problem. My AI functions that the Problem is the Solution (what i mean with that you can ponder upon). If we cannot tell the AI something that we do not know then that also means that it does not have to know it.

It is a little as the Matrix, one cannot program AI, that is impossible, one has to program it to use knowledge. However for it to be real AI it should even have the capabilites to harvest that knowledge even without knowledge.

#18 randolfe

  • Topic Starter
  • Guest
  • 439 posts
  • -1
  • Location:New York City/ Hoboken, N.J.

Posted 03 May 2004 - 09:47 PM

(quote)"I think it's a reasonable assumption that the first working AI will have the mind of a child and work up from there (the bottom up approach). As the parents, we could imprint it with the basic mindset of a healthy baby and then teach it as it "grows up" the human values of morality, as parents do for their children. The top down approach seems like it would be much harder (we would need to hardcode everything it means to be human somehow, a level I don't think we're at or will be at in 40 years) and it would be more likely to come out with a less favorable outcome."(quote)

I nearly regret using the term "racism". I used it to imply the "paranoid ideas" that are exhibited by prejudiced people about groups they are prejudiced against.

The quote here about AI beginning with "the mind of a child" raises some interesting questions. For one, children have enhanced abilities to learn languages and even to play musical instruments that they lose later in life.

Assuming that AI "will grow" through stages like the human brain is a bit daunting. One would have to believe that inside the "acorn" of AI one would find or be able to create the outline of a grown oak tree.

I don't doubt that AI will be developed. Some types are already working in computers doing millions of calculations in a second or a minute that would consume a human mind for weeks, months or years.

What strikes me is how those who believe in the Singularity see AI as assuming control of itself. I see AI as something that will always be controlled (whether used for good or bad purposes) by those who created it.

I agree that we must win the race to create the best AI and keep ahead of those who would employ it to destroy us. I just don't see how value judgments can be made. One post mentioned creating AI that would respond favorably to smiling human faces and therefore might be programmed to do things that would create "smiling faces". Well, then AI would view all cheering and happy crowds as positive including those witnessing Christians being fed to the lions in Rome and/or pickpockets being publicly hung in France and/or French nobility being beheaded and/or adoring Germans cheering Hitler.

In my opinion, AI will have many special abilities. However, I don't believe it could be designed to make what we mere bio-bag humans call "value judgments".

sponsored ad

  • Advert

#19 bacopa

  • Validating/Suspended
  • 2,223 posts
  • 159
  • Location:Boston

Posted 03 May 2004 - 10:52 PM

What strikes me is how those who believe in the Singularity see AI as assuming control of itself. I see AI as something that will always be controlled (whether used for good or bad purposes) by those who created it.


Randolph I don't think you're getting the concept of recursive self improvement ad infinitum...the singularity will have this option therefore human control is a hope that will quickly be dashed.

Michael is obviously right in saying be WEARY of the Singularity, but for good or for bad it will happen. You may want to argue on the lines of whether it will or will not occur but don't think humans can easily control such an entity. It will have the ability to swallow up entire populations of data in probably nanoseconds who knows? that's the point :)




0 user(s) are reading this topic

0 members, 0 guests, 0 anonymous users