  LongeCity
              Advocacy & Research for Unlimited Lifespans





"The Singularity Myth"


110 replies to this topic

#91 JonesGuy

  • Guest
  • 1,183 posts
  • 8

Posted 29 March 2006 - 06:01 PM

Shouldn't the goal first be to figure out how to get the suboptimally intelligent to the competent stage (i.e., figure out a way, with our advanced brains, to make it possible for other people to grasp abstract math)?

Then we'd have billions more people who could potentially contribute to the research. And that's a good thing.

#92 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 29 March 2006 - 06:12 PM

Perhaps now we're digressing from the tales of magical lines and intelligence continuums, but, yes, I don't see why you couldn't make that a goal. Personally I don't expect that many people can be forced to give a garshdang about my incompetence from their ivory towers and top-secret military research facilities.


#93 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 29 March 2006 - 09:35 PM

Shouldn't the goal first be to figure out how to get the suboptimally intelligent to the competent stage (i.e., figure out a way, with our advanced brains, to make it possible for other people to grasp abstract math)?

Not if advancing ONE intelligence to the extreme can do all the good of the above goal, as well as much, much more, and could theoretically take much less time and effort. [tung]

Thus far that's precisely what's made it difficult for me to accept claims that superintelligence could conceive of radically different/superior ontologies, while wishful thinking sometimes pulls me right in that direction.

I don't imagine they would be able to conceive of [airquote] radically superior ontologies [/airquote], though that is a vague concept to speak of. However, an entity undergoing a Singularity would have much more robust functionality actively available, as well as extremely advanced speed (and that speed is all the more significant because there is source code available to work with), etc.

Its operation would be radically superior relative to any given goal.

#94 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 29 March 2006 - 09:47 PM

Yes, Hank. But to clarify, I think we've been implicitly asserting at least that much.

#95 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 29 March 2006 - 09:53 PM

But maybe you're clarifying what we're implicitly asserting. Sorry. ;)

#96 JonesGuy

  • Guest
  • 1,183 posts
  • 8

Posted 29 March 2006 - 09:56 PM

Not if advancing ONE intelligence to the extreme can do all the good of the above goal


Making humans 'sufficiently smart' seems to be much more attainable than making something 'supersmart', if only because option A seems to be (a) theoretically possible (since sufficiently smart has been stated to exist already in humans) and (b) a logical step to option B, if only because we have to progressively learn how to make something more intelligent.

#97 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 30 March 2006 - 04:36 PM

Making humans 'sufficiently smart' seems to be much more attainable than making something 'supersmart'.

Have you ever moved someone a step higher in shock level? I mean, such that they were actually motivated to take effective action? I've tried, and I've convinced many people about a lot of things far above their future shock level. But never have I convinced someone in such a way that they were motivated to make higher future-shock ideas into actual self-goals. It's really, really hard to do that.

If only because option A seems to be (a) theoretically possible (since sufficiently smart has been stated to exist already in humans)

Just as it is theoretically possible to simulate a human mind at a sufficient level to run it on a computer substrate (given that intelligence is a biological system that is coded with DNA).

(b) a logical step to option B, if only because we have to progressively learn how to make something more intelligent.

Yes, a logical step, but not a plausible one. There are countless alternative routes to learning how to make something more intelligent; these ideas have been written about in thousands of published scientific articles. You can learn how to make "something" progressively more intelligent that is not a human. Lenat's program Eurisko, for example, was simply a computer program with a lot of rules that worked on some data, yet continued work on that program taught him how to make something progressively more intelligent.
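
For the flavor of what "rules that worked on some data" means, here is a minimal toy sketch in Python. It is nothing like Lenat's actual system; every name and the scoring criterion are invented purely to illustrate a program whose own rule set gets revised as it runs:

    import random

    def mutate(rule):
        # Return a slightly perturbed copy of a rule (here just a numeric threshold).
        return {"threshold": rule["threshold"] + random.uniform(-1.0, 1.0)}

    def score(rule, data):
        # Judge a rule by a toy criterion: how many data points it covers.
        return sum(1 for x in data if x > rule["threshold"])

    def improve(rules, data, generations=200):
        # Repeatedly mutate a random rule and keep it if it beats the current worst one.
        for _ in range(generations):
            candidate = mutate(random.choice(rules))
            worst = min(rules, key=lambda r: score(r, data))
            if score(candidate, data) > score(worst, data):
                rules.remove(worst)
                rules.append(candidate)  # the rule set rewrites itself
        return rules

    data = [random.gauss(5, 2) for _ in range(100)]
    rules = [{"threshold": random.uniform(0, 10)} for _ in range(5)]
    print(sorted(r["threshold"] for r in improve(rules, data)))

The point is only that "progressively more intelligent" can start as a mundane loop over rules and data, not that this toy captures Eurisko itself.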

#98 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 30 March 2006 - 04:37 PM

By the way QJones, I really appreciate the thoughtful debate.

#99 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 14 April 2006 - 08:35 PM

ACCELERATION

#100 Jay the Avenger

  • Guest
  • 286 posts
  • 3
  • Location:Holland

Posted 18 April 2006 - 03:58 PM

Michael,

You wrote:

I too believe that Kurzweil places irrational faith in his curves.  But I believe that others don't give them enough credit.  If you're a typical person thinking about the future, it's likely that your view is far too linear.  But Kurzweil's is too biased in favor of his curves.


At the same time, you have stated (to Sander Olsen, I believe) that the Singularity Institute would be surprised if the Singularity occurred after 2020.

Kurzweil places the Singularity way later than that, in 2045, because that is when, by his estimate, nonbiological intelligence will be one billion times more powerful than biological intelligence.

I believe this to be ridiculously conservative. Does AI really need to exceed bio-intelligence by a factor of one billion before you can speak of a Singularity?

So what makes you say Kurzweil places too much faith in his curves, while he is being so conservative?

#101 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 18 April 2006 - 05:42 PM

Kurzweil places the Singularity way later than that, in 2045

Not exactly.

#102 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 18 April 2006 - 05:45 PM

And I have a really serious problem with your Singularity FAQ.

Q. Is it likely that the Singularity will be initiated by friendly SAI?

A. Probably. So far, the forces of evil (terrorists) have always been outfunded by the forces of good (scientists trying to increase John Doe's quality of life).


This demonstrates a fundamental misunderstanding of the problem of friendly AI. I don't have time to get into it now, but there are a lot of threads around where I have been trying to communicate this (perhaps you could respond in the respective thread?)

#103 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 18 April 2006 - 07:30 PM

At the same time, you have stated (to Sander Olsen, I believe) that the Singularity Institute would be surprised if the Singularity occurred after 2020.


Never said this... most people in the Singularity Institute don't talk about timeframes anymore. Eliezer is quoted as saying the Singularity will happen "some time between now and the rest of eternity".

I might have said that completing successfully before 2020 would be nice, though.

In reality I am quite terrified of the Singularity. Even if you have a theory that looks right, is verified by multiple outside parties, and everyone feels super-confident, you still get a darkmind (reality-swallower) 9 times out of 10.

In the last year or two I have been kicking around ideas for stopping the Singularity entirely, or delaying it until the FAI problem is solved. Anyone who has even *begun* to see the severity of the problem has had these thoughts, whether they admit it or not.

So what makes you say Kurzweil places too much faith in his curves, while he is being so conservative?


When I say Kurzweil places too much faith in his curves, I mean exactly that; not, as everyone else implicitly means, that Kurzweil is forecasting major changes too soon. His curves say 2045, according to him, even though he forecasts human-level AI in 2029. The problem with these predictions is that they are *too precise*. Kurzweil thinks he has a superhuman ability to predict the future from historical data. In reality the future is quite foggy.
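
To make "too precise" concrete, here is a back-of-the-envelope Python sketch. It is not Kurzweil's actual model; the starting year, the target ratio, and the doubling times are invented for the arithmetic only:

    import math

    def crossover_year(doubling_time, start_year=2006, target_ratio=1e9):
        # Year at which a quantity doubling every `doubling_time` years
        # has grown by a factor of `target_ratio` from `start_year`.
        return start_year + doubling_time * math.log2(target_ratio)

    for t in (1.0, 1.1, 1.2):
        print(t, round(crossover_year(t), 1))  # ~2035.9, ~2038.9, ~2041.9

A shift of only 0.1 year in the assumed doubling time moves the predicted date by about three years, and realistic uncertainty in the trend moves it by decades, which is why quoting a single year off a fitted curve reads as overconfident.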

Kurzweil is not too conservative or too excited. He is just too confident in himself.

I also have to complain about this:

Q. Is it likely that the Singularity will be initiated by friendly SAI?

A. Probably. So far, the forces of evil (terrorists) have always been outfunded by the forces of good (scientists trying to increase John Doe's quality of life).


The metaphor does not extend. Making a new mind is different from building a product. New minds tend to have "broad" interests, that is, goal systems with massive Hamming distance from our own. The existence of Earth, a huge, totally unoptimized chunk of Nature, is only desirable to entities who evolved to like it. No one else gives a damn, including the AI we're about to build. And a self-improving AI is basically equivalent to a digital autoclave.
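
In case the Hamming-distance phrasing is unfamiliar, here is a minimal sketch, assuming (purely for illustration) that goal systems could be written down as equal-length bit strings:

    def hamming(a, b):
        # Count the positions at which two equal-length bit strings differ.
        assert len(a) == len(b)
        return sum(x != y for x, y in zip(a, b))

    print(hamming("1100101", "1100100"))  # 1: a near-miss of the first string
    print(hamming("1100101", "0011010"))  # 7: differs at every position

The point of the metaphor is that a mind drawn from the space of possible minds is almost never a near-miss of ours.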

#104 Jay the Avenger

  • Guest
  • 286 posts
  • 3
  • Location:Holland

Posted 18 April 2006 - 07:44 PM

Never said this... most people in the Singularity Institute don't talk about timeframes anymore.  Eliezer is quoted as saying the Singularity will happen "some time between now and the rest of eternity".


In this interview (http://www.crnano.or...w.anissimov.htm), you did say it:

"It's hard to place a concrete estimate on when the Singularity will occur. My snap answer is "soon enough that you need to start caring about it". The rise of superhuman intelligence is likely to be an event comparable to the rise of life on Earth, so even if it were happening in a thousand years, it would be a big deal. Like Vernor Vinge, who said he would be surprised if the Singularity happened before 2005 or after 2025, I'd say that I would be surprised if the Singularity happened before 2010 or after 2020."


The metaphor does not extend. Making a new mind is different from building a product. New minds tend to have "broad" interests, that is, goal systems with massive Hamming distance from our own. The existence of Earth, a huge, totally unoptimized chunk of Nature, is only desirable to entities who evolved to like it. No one else gives a damn, including the AI we're about to build. And a self-improving AI is basically equivalent to a digital autoclave.


I thought the idea was to build a friendly AI, because evil and neutral AI are not acceptable. The Singularity Institute has gone public with its message. Its ideas can be peer-reviewed because the literature is out in the open. And thanks to the Singularity Challenge, the public also has an idea of how the institute is funded.

So it looks to me that, once again, any underground evil is being outfunded by good which operates in a controlled environment and is peer-reviewed.

So why does the metaphor not extend?

#105 Jay the Avenger

  • Guest
  • 286 posts
  • 3
  • Location:Holland

Posted 18 April 2006 - 07:49 PM

Not exactly.


As Michael says... his book clearly states 2045 as the target date for the Singularity. Yet at the same time, he predicts human-level AI before 2030.

Not consistent, but he has chosen to go with that timeline.

Perhaps he is being conservative on purpose, in order to give himself some breathing space.

#106 apocalypse

  • Guest
  • 134 posts
  • 0
  • Location:Diamond sphere

Posted 24 May 2006 - 01:22 AM

I thought the idea was to build a friendly AI, because evil and neutral AI are not acceptable. The Singularity Institute has gone public with its message. Its ideas can be peer-reviewed because the literature is out in the open. And thanks to the Singularity Challenge, the public also has an idea of how the institute is funded.

So it looks to me that, once again, any underground evil is being outfunded by good which operates in a controlled environment and is peer-reviewed.

So why does the metaphor not extend?



The power of the individual is increasing exponentially. Small groups and private institutions are growing ever more powerful. As we go beyond the power of the atom and tap that which utterly dwarfs it, we see an influx of overwhelming power into the system. The possibility stands that some peak-intelligence humans (or weak superhuman intelligences, if you count the sort of knowledge and tools at their disposal nowadays) may intentionally or inadvertently unleash beings of unfathomable might. The right minds at the right time with the right tools can outclass even large government-funded projects. It is also possible that, due to the open-ended evolution of a general intelligence, it might go from friend to neutral or even become a foe, or may oscillate between the various states. There's also the possibility that it may reproduce or fragment, giving rise to shards of heaven, shards of hell, and shards of something else.

#107 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 24 May 2006 - 03:55 AM

The power of the individual is increasing exponentially.

We have an entire hierarchy of giants' shoulders to walk up, bottom to top.

#108 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 24 May 2006 - 03:59 AM

I'd say that I would be surprised if the Singularity happened before 2010 or after 2020

I'm in complete agreement with that...

#109 Jay the Avenger

  • Guest
  • 286 posts
  • 3
  • Location:Holland

Posted 24 May 2006 - 04:12 PM

I'm in complete agreement with that...


Apparently, Michael himself is not, which is odd, because they are his own words.

I'd still like to hear that one explained by Michael himself, but he hasn't responded to the email I sent him.

#110 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 24 May 2006 - 11:13 PM

Eliezer is quoted as saying the Singularity will happen "some time between now and the rest of eternity".


The other options would be (1) that the Singularity is already taking place (how does one define the Singularity, anyway?) and (2) that the Singularity will never take place.

I believe that, philosophically speaking, it is not respectable to place a high degree of confidence in one's timelines.

Furthermore, I agree with Michael's statement that Kurzweil has faith (i.e., an unreasonable degree of confidence) in his ability to prognosticate future trends.

At the same time, as futurists we are more than just ponderous philosophers. We are activists. And by definition activism requires action to be taken. The catch-22 is that meaningful action can only be taken based on beliefs (which structure desires and which in turn lead to action) that, as we all know, are notoriously unreliable.

Observe. All of the futurist organizations out there on the web have either implicitly or explicitly committed to future betting; there is no getting around this fact. Pick your commitments with care and be ever vigilant. :))


#111 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 24 May 2006 - 11:24 PM

Observe. All of the futurist organizations out there on the web have either implicitly or explicitly committed to future betting; there is no getting around this fact. Pick your commitments with care and be ever vigilant.

This is a good statement, Don.



