  LongeCity
              Advocacy & Research for Unlimited Lifespans




Valid criticism of the Singularity?


18 replies to this topic

#1

  • Lurker
  • 0

Posted 16 March 2008 - 10:19 PM


First of all, this is my first post in this forum - I'm so glad that I found it! I have been looking for something like this for a while now...

Anyways, my question is this:

What is (in your opinion) the most valid argument against the singularity?

I have been obsessively curious about these ideas for several years now and have read hundreds of scientific papers and articles, several books, and watched hundreds of lectures on the topic, and have yet to find a criticism that is anything other than sheer incredulity or just name-calling, etc. Bruce Sterling and Douglas Hofstadter come to mind, but I have read many others' critiques and none of them contain a shred of evidence contradicting the science or engineering behind the idea of a technological singularity.

I'm becoming worried that the singularity is inevitable and am really very interested in reading logical and well-thought-out criticisms.

thanks!

#2 Athanasios

  • Guest
  • 2,616 posts
  • 163
  • Location:Texas

Posted 16 March 2008 - 11:03 PM

Here are some that Vinge put out in an article "What if the Singularity Does not Happen":
http://www.kurzweila...rt0696.html?m=1


#3

  • Lurker
  • 0

Posted 17 March 2008 - 12:17 AM

Here are some that Vinge put out in an article "What if the Singularity Does not Happen":
http://www.kurzweila...rt0696.html?m=1


Thanks - yeah, I have read that and listened to his lecture at Long Now. The thing is, Vinge coined the term Singularity and he still thinks it is going to happen in the next few decades. Do you know of a scientist who just doesn't think it is possible?

Thanks again for the link.

#4 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 17 March 2008 - 01:04 AM

I think there aren't really any sound arguments against the coming of the singularity. Ray Kurzweil (who I believe to be the best-known singularitarian) has made predictions that have so far been mostly accurate, and if everything keeps happening as predicted, we will reach the singularity. But a lot can go wrong between now and then, so there really is no way to make a sound argument against it right now; we will have to wait and see whether anything goes wrong.

#5 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 17 March 2008 - 01:07 AM

But as of now, I do believe that the chances of the singularity happening, sooner or later, are very, very high. The main question I have is whether I will still be alive by then, so keeping myself alive for as long as I can is my main goal.

#6 Andrew Shevchuk

  • Guest
  • 75 posts
  • 0
  • Location:Tucson, AZ

Posted 17 March 2008 - 02:59 AM

I can't point offhand to a scientist who would say it's definitely impossible. Someone here probably could, though.

Honestly, I think by now there is a ton more evidence for the Singularity than against it, especially if we're defining the Singularity as the creation of human-level AGI. I don't think that there is a real technological or engineering justification against that goal anymore, just an ingenuity and implementation barrier that we're slowly tackling and will eventually break through. Eliezer and Ben at SIAI have talked about how the computational threshold for AGI is well below that of human intelligence and is probably already in place. I agree with them. It's more a matter of the right algorithm at this point.

Of course, anyone who believes the Singularity is possible has made a few key assumptions from the start. This includes statements like "Human-level AGI is possible" and "we can understand intelligence well enough to create one." These are the kinds of statements that are best proven with an existence proof. There may be no infallible logical or mathematical proofs of these statements and therefore there is room for skepticism in the minds of some scientists.

I think the logic that lets one conclude the Singularity is possible is straightforward, but others seem to disagree for rather nebulous reasons. Often they have a problem with one of the two statements above, but it seems their explanation can always be reduced to a lack of real knowledge about the subject or a stubborn and persistent case of irrational doubt overriding logic. Many people base their arguments on the complexity of the human brain and how little we understand it. These are irrelevant; it is well known that we're trying to create a functionally equivalent AGI independent of our own cognitive design for both efficiency and friendliness reasons. If the goal actually involved fully comprehending our own cognitive design then things become slightly stickier due to the "how can the brain understand itself?" argument. But it doesn't. We're trying to create the same functionality with a simpler design, and it seems we understand the brain's functionality rather well.

My logical justification (and perhaps the justification of others) is this: The fact that a zero-intelligence process (evolution) created our level of functionality with this design virtually guarantees that an intelligent process could emulate that functionality with a superior design. Evolution is a very slow optimization process; intelligence is a very fast optimization process. Our design is still clearly sub-optimal in a multitude of ways and can be rapidly optimized via intelligence (although it would also be optimized via evolution with enough time). The thing is that optimizing our existing functionality also lowers the learning curve for developing more advanced functionality, hence the hard take-off scenario. Oh the irony that scientists believe we exist due to a zero-intelligence process and zealots believe that we exist due to a maximally-intelligent being.

The best argument against the Singularity? I've already hinted that I think it's inevitable....unless we annihilate ourselves first.

#7 affinity

  • Guest
  • 44 posts
  • 1
  • Location:Northwest

Posted 28 May 2008 - 09:55 PM

The thought of a technological singularity sounds like divinity: a computer AI that evolves itself to help us faster.

So take your pick: immorality, the destruction of our humanity, the lack of control, self-inflated interests. It may not even work until we ourselves evolve a little more.

Technology is not entirely predictable. Economic and sociological events play a role in its development. Christianity stifled intellectual thinking early on, as did other events.

The most valid argument against the singularity, I feel, is that it relies too much on things we cannot control and do not yet know. We know much about the body. Why do we cry? Why do I yawn? Those questions have answers, but a vast number of other biological functions, such as consciousness, go mostly unexplained. It may take only one year to make the singularity, but centuries to understand ourselves. Perhaps neither, or both.

If the singularity comes I hope it is a slow transition to a positive advancement. Otherwise I hope AI and our own body modification and prosthesis merges peacefully and we advance together harmoniously.

I have the same feelings on the end of the world and any other spiritual doomsday/deliverance fantasies. Hope for the best.

In the time in between, you gain nothing from waiting. Learn and create.

The road to immortality and knowledge should be a self propelled journey of knowledge and community strengths and equality.

It is without doubt that computers will be sentient years from now, and without doubt that if we do not follow suit in their likeness we will be only grains of sand compared to their knowledge.

I am not trying to slam the Singularity down as impossible. I have yet to read more, and need more, to see where all these strong opinions come from. There are, however, drawbacks with anything. In the course of immortality we may lose ourselves.

#8 Cyberbrain

  • Guest, F@H
  • 1,755 posts
  • 2
  • Location:Thessaloniki, Greece

Posted 28 May 2008 - 10:20 PM

What is (in your opinion) the most valid argument against the singularity?

Wait, are you asking for arguments against the technical plausibility of a technological singularity or arguments against the need for a technological singularity?

As far as technicalities go, there are many computer scientists who could argue against the singularity (one of my professors among them).

As for arguments against the need to bring forth a singularity, I think the bio- and eco-luddites speak for themselves :)

#9 affinity

  • Guest
  • 44 posts
  • 1
  • Location:Northwest

Posted 28 May 2008 - 10:26 PM

What is (in your opinion) the most valid argument against the singularity?

Wait, are you asking for arguments against the technical plausibility of a technological singularity or arguments against the need for a technological singularity?

As far as technicalities go, there are many computer scientists who could argue against the singularity (one of my professors among them).

As for arguments against the need to bring forth a singularity, I think the bio- and eco-luddites speak for themselves :)



That's a good point; I assumed Mike meant "against" in relation to the possibility, not so much the need. The idea, the concept, far outweighs the risks. But whether it will actually work or not is the other end of the spectrum.

Assumptions are deadly.

#10 JonesGuy

  • Guest
  • 1,183 posts
  • 8

Posted 29 May 2008 - 12:56 AM

My main concern, which is totally unprovable, is that our innovation curve might slow down below some critical threshold as the population ages.

#11 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 29 May 2008 - 03:41 AM

My main concern, which is totally unprovable, is that our innovation curve might slow down below some critical threshold as the population ages.



I don't think that will happen. But one way or the other, I think the age of retirement should be increased now, to 75 years old minimum. If you want to retire sooner, build some goddamn patrimony instead of carelessly spending your money on useless stuff!

#12 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,074 posts
  • 2,000
  • Location:Wausau, WI

Posted 29 May 2008 - 07:03 AM

unless we annihilate ourselves first.


Some sort of worldwide catastrophe. That might stop progress.

#13 modelcadet

  • Guest
  • 443 posts
  • 7

Posted 29 May 2008 - 07:38 AM

They have answers but like a vast number of other biological functions such as consciousness they go unanswered mostly.


Consciousness is a rhetorical, not biological, function. :)

The term "singularity" has become a little loaded lately. Many actors simply assume it means the creation of a "human-level" AGI or whatnot. In many respects, our machines have already reached human-level capacities; in most cases they have surpassed us, or are not even comparable. It's notable that, with respect to developments in AGI, most of these innovations are defined in the negative. I'd argue that, by some definitions, we've already reached the "Singularity," with the advent of the Cloud, the meme of the market's invisible hand, etc.

The main argument against the most common definition of the Singularity I have found is the "frog in the pot" argument. Our machines will never outpace us, because *they are us.* We'll never have AGI's smarter than humans, because there will be humans who become cyborgs. Like the invention of fiat or the vaccine or whatever else, people won't recognize such innovations as a "Singularity," although from the perspective of those before adoption of such technologies, it will certainly seem we'd hit an asymptote.

I actually buy this argument. I don't really think there will be a "technological Singularity." As we continue to create more knowledge, our abilities to process that knowledge also increase.

But that's all I can really say. I'm glad you've found this forum. You'll find many people who share your interests here. I, personally, am much more interested in Ben's work over Aubrey's. These forums are hospitable to many interests, many of which I don't share (cryonics comes to mind: sorry Shannon).

#14 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 29 May 2008 - 06:27 PM

The main argument against the most common definition of the Singularity I have found is the "frog in the pot" argument. Our machines will never outpace us, because *they are us.* We'll never have AGI's smarter than humans, because there will be humans who become cyborgs. Like the invention of fiat or the vaccine or whatever else, people won't recognize such innovations as a "Singularity," although from the perspective of those before adoption of such technologies, it will certainly seem we'd hit an asymptote.

I actually buy this argument. I don't really think there will be a "technological Singularity." As we continue to create more knowledge, our abilities to process that knowledge also increase.



I have a hard time buying this argument completely. I think we may be underestimating the difficulties that could arise in trying to "update ourselves" into machines as smart as SAIs. We probably won't be able to become cyborgs or machines overnight. There will be a gap between the time we create SAIs and the time we manage to become as smart as they are. This may take some time...

#15 lunarsolarpower

  • Guest
  • 1,323 posts
  • 53
  • Location:BC, Canada

Posted 30 May 2008 - 03:11 AM

The reason I don't really buy the singularity hype is that I don't see it as being some all-amazing moment when all hopes and fears play out in a cataclysm of drama. Let's look at the prediction that computers will equal human intelligence by 2029 and surpass it by 2034 and thereafter the future becomes instantly unpredictable. That strikes me as malarkey. I don't argue that computers will eventually exceed the computational power of the human intellect. In many areas such as mathematics and possibly chess they already have. As time goes on computers will be capable of exceeding more and more of the intellectual feats of humanity. However computers will not reach some magic point after which nothing already known still applies. They will lead mankind in some areas while lagging in others for perhaps hundreds of years although the areas where they fall short will continue to be worked on.

A virtual mind capable of independently inventing and innovating would certainly be a phenomenal technology superior even to the promise of the RepRap project. However I don't think life would instantly become unrecognizable. Basic pieces of economic theory would still apply. Newtonian physics would still be the main substrate in which "real" life is played out. Certainly many difficult problems that have dogged humanity for centuries as well as new ones we didn't even realize we had would be solved. However this will take place over the course of time just as it does now.

If holodeck technology is discovered that completely sucks in humanity as happened to these two, I would consider that a form of "singularity". In that scenario civilization would only continue in the form of Amish or other deliberately primitive societies. I don't think that is what most are referring to when they reference the technological singularity though.

One of my key problems with those focused exclusively on the singularity is that they seemingly put such a mystical faith in its ability to right all wrongs, cure all ails or destroy the world that the concept seems counter to the more likely scenario that progress will continue to build upon the shoulders of our current giants and many great prizes will be won through dedication and hard work rather than all appearing as a consequence of a single sudden advance.

Don't misunderstand me though. I am a huge fan of eliminating drudgery in the world. I would like nothing better than to have the economy transformed so that no one needed to work and only those who wished could do so. I suspect that life will continue a halting and unpredictable ascendancy over the next several hundred years.

One thing I think is really cool that I haven't heard anyone specifically mention is how although basic sciences such as materials science do not advance at the pace of Moore's law, they do advance over time. In the 1990s while attention was focused on the tremendous progress being made in information technology, physical sciences were not standing still. Even though much of the "wealth" created by the dot com boom was subsequently destroyed, we retain the advances made in areas such as battery composition, transparent aluminum oxynitride and carbon nanotubes although society as a whole was not intently focused on these areas.

In summary: progress - yes, nearly without limit, singularity - not likely.

#16 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 30 May 2008 - 04:07 AM

The reason I don't really buy the singularity hype is that I don't see it as being some all-amazing moment when all hopes and fears play out in a cataclysm of drama. Let's look at the prediction that computers will equal human intelligence by 2029 and surpass it by 2034 and thereafter the future becomes instantly unpredictable. That strikes me as malarkey. I don't argue that computers will eventually exceed the computational power of the human intellect. In many areas such as mathematics and possibly chess they already have. As time goes on computers will be capable of exceeding more and more of the intellectual feats of humanity. However computers will not reach some magic point after which nothing already known still applies. They will lead mankind in some areas while lagging in others for perhaps hundreds of years although the areas where they fall short will continue to be worked on.



That's why Kurzweil established a gap between 2029, when AIs get as smart as humans, and 2045, when he thinks the singularity will happen. I also think Kurzweil doesn't see the singularity as "one single moment that changes everything". He says that for the people living through it, it will not look as magical as it does to us now. Only some time after the singularity has started may unenhanced people have trouble keeping up with all the innovation. All in all, what Kurzweil means by the singularity is a time of technological development faster than anything we've seen before. I think that's quite accurate, and there's nothing wrong with that.

#17 lunarsolarpower

  • Guest
  • 1,323 posts
  • 53
  • Location:BC, Canada

Posted 30 May 2008 - 06:19 AM

I also think Kurzweil doesn't see the singularity as "one single moment that changes everything". He says that for the people living through it, it will not look as magical as it does to us now. Only some time after the singularity has started may unenhanced people have trouble keeping up with all the innovation. All in all, what Kurzweil means by the singularity is a time of technological development faster than anything we've seen before.


OK. So maybe what Kurzweil calls the singularity and thinks will take place from ~2029-2045 I call the enlightenment and think is taking place over the span of ~1500 AD - 2500 AD. He has just chosen a special segment of it where he calculates that a specific technological milestone will occur. The extrapolations he makes from his calculation are where I differ from his opinion.

I suspect it will be almost impossible for him to win long bet #1. Machines may be capable of many valuable tasks by 2029 that they cannot do now, but I suspect they'll require ~100 additional years to become flawless at emulating human foibles. Indeed, their ability to do so would seem to be one of the less important accomplishments we should look for them to make.

Maybe I'm just so forward looking that it already doesn't appear magical :)

Edited by lunarsolarpower, 30 May 2008 - 06:21 AM.


#18 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 30 May 2008 - 06:47 AM

I also think Kurzweil doesn't see the singularity as "one single moment that changes everything". He says that for the people living through it, it will not look as magical as it does to us now. Only some time after the singularity has started may unenhanced people have trouble keeping up with all the innovation. All in all, what Kurzweil means by the singularity is a time of technological development faster than anything we've seen before.


OK. So maybe what Kurzweil calls the singularity and thinks will take place from ~2029-2045 I call the enlightenment and think is taking place over the span of ~1500 AD - 2500 AD. He has just chosen a special segment of it where he calculates that a specific technological milestone will occur. The extrapolations he makes from his calculation are where I differ from his opinion.



I understand your thinking that any significant technological advance we make could be viewed as the beginning of a new singularity. But when you think about it, we have never made an advance as great as the creation of an SAI would be. All the advances we have made until now have been because of intelligence and the increase in order and complexity. The creation of SAIs would multiply our rate of advancement many times over, because our intelligence (our capacity to create greater complexity and order) would have increased many times too.


#19 affinity

  • Guest
  • 44 posts
  • 1
  • Location:Northwest

Posted 30 May 2008 - 10:04 AM

It seems that any sentient AI would need to experience the universe and manipulate it in order to learn. It may do this quickly, but the laws of physics would be a great drag.

Processing data is a feat in and of itself, but not much like what a human brain does, mechanically speaking.

On topic: opinions against the singularity are opinions much like those against nirvana or heaven. My own opinion is that individuality is too strong a personality trait to rely wholly upon, however limited an opinion that is.



