  LongeCity
              Advocacy & Research for Unlimited Lifespans


Who Believes Kurzweil?



#31 justinb

  • Guest
  • 726 posts
  • 0
  • Location:California, USA

Posted 09 November 2005 - 09:30 PM


For the record Justin, although Berkeley's school of thought may be internally consistent, it still begs the question, "from what does the mental arise?" And therein lies the problem. We are left once again with a "causa sui" answer which is, obviously, inadequate. So we can delve further into simulation scenarios, or other similarly speculative and unsubstantiated metaphysics -- or we can come back to reality and recognize that there is almost certainly an objective reality waiting to be discovered. In this light, the pragmatism embraced by James might be the most effective means of attaining "truth".
----------------

Nate, time is limited right now, but I'll try to address your comments in the next day or two.


You still don't understand the inherent problem. Oh well.

I see that you have edited (correction, replaced) your initial response in favor of a less tactful and more 'reactive' repartee.  How enlightened.


At least I have to try to be an asshole. [lol]

#32 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei

Posted 09 November 2005 - 09:40 PM

Justin

At least I have to try to be an asshole.



That is strictly a matter of opinion. [thumb]

I had almost forgotten why I'd given up internet communications in the first place. Thank you for reminding me Justin. Now, if you'll excuse me, I'm going back to my life of frivolity.


#33 justinb

  • Guest
  • 726 posts
  • 0
  • Location:California, USA

Posted 09 November 2005 - 11:33 PM

What Justin wants is a theory that says no one's smarter than he is, and his vacuous high-IQ associates, and can't be.


How wrong you are. I am very well aware that many people are smarter than me. I have no idea what you mean by this.... [huh]

Edited by justinb, 10 November 2005 - 10:50 PM.


#34 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 10 November 2005 - 02:46 PM

I think Nate just made a bunch of flak because he doesn't like the fact that a lot of the things he holds dear are nowhere near being guaranteed.

Earlier, referring to you, I said:

You demand scientifically sound discourse and guarantees. Something's gotta go.

Do you know what that means? It means that guarantees are not associated with scientific literacy. Obviously I want to be scientifically literate. Therefore, obviously I choose to manage uncertainty and don't seek guarantees.

Again, you are being aimlessly argumentative. Your blemished ego denies you both salient and subtle counsel, and that's why you're a waste of time and henceforth will be ignored.

#35 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei

Posted 10 November 2005 - 09:30 PM

Nate, I'd like to continue our conversation via PM. ;))

#36 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,039 posts
  • 2,000
  • Location:Wausau, WI

Posted 10 November 2005 - 10:33 PM

I believe Ray Kurzweil because he has been right before, and I see the trends he described in a couple of his books happening right before my very eyes. What I see from most of his critics is "sour grapes". Seriously. His critics usually have nothing substantive to say, and none have themselves made accurate predictions of the future.

And Don, the "new religion" thing you brought up is a non sequitur to the argument. I would personally not like to see another religion, but it has nothing to do with whether his predictions are right.

There is also the problem of defining progress. Sure, you can say the internet is nothing but fancy radio, TV, and telephone all co-mingled into a new presentation. Then of course radio and telephony are nothing more than a fancy, souped-up telegraph. And of course the telegraph is really just a semi-creative way of hooking up wire and magnets. So really we haven't had an iota of progress in two or three hundred years. If that is the argument... it is silly. Hey, how about this one: "there has been no medical progress since the first caveman cracked open his neighbor's skull to relieve a headache a few hundred thousand years ago." Because, of course, the fancy tools we use nowadays are just specialized rocks.


#37 justinb

  • Guest
  • 726 posts
  • 0
  • Location:California, USA

Posted 10 November 2005 - 10:54 PM

Again, you are being aimlessly argumentative. Your blemished ego denies you both salient and subtle counsel, and that's why you're a waste of time and henceforth will be ignored.


You are ignoring many problems with immortality. Anyway, I do apologize for any undue words I might have used. If you would like to address my points via PM or MSN so we can get them out of the way in our own private fashion, then let's do so. I would like to remain friends, and you must realize that I am going through a very difficult time in my life. Anything that I might say is not meant as a personal attack. I want to address legitimate concerns of mine. You might view them as trivial, and I would like to know why, in private though, so others won't interrupt our communication.

#38 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 12 November 2005 - 01:21 AM

His critics usually have nothing substantive to say, and none have themselves made accurate predictions of the future

Perhaps it is because the future is not accurately predictable? When enough people guess blindly, one of them is bound to be right a couple of times. This does not say anything about who will be right in the future.
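
A toy illustration of that point (the forecaster count and hit rate below are made-up numbers, not anything from the thread): even if each individual prediction has only a small chance of being right, the odds that at least one of many guessers scores a hit are high.

```python
# Toy model: N forecasters, each with a small independent chance p of being
# right about a given prediction. What are the odds that at least one of them
# is right? (Both numbers below are illustrative assumptions.)

def p_at_least_one_correct(n_forecasters: int, p_correct: float) -> float:
    """Probability that at least one of n independent guessers is right."""
    return 1.0 - (1.0 - p_correct) ** n_forecasters

# 1,000 futurists, each with a 1-in-200 chance per prediction: ~99% odds
# that somebody gets it right, even though no individual is reliable.
print(round(p_at_least_one_correct(1000, 0.005), 3))
```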

#39 manowater989

  • Topic Starter
  • Guest
  • 96 posts
  • 0

Posted 13 November 2005 - 05:07 AM

This has gotten intensely dense and divergent. Not that the discussion hasn't been interesting, although some of it, I'm sorry to say, didn't strike me as very valuable discourse. But I just wanted to know, basically, whether people thought that radical technological leaps of the type he predicts will be forthcoming anywhere near the timeframes he lays out, or, as others have suggested earlier, whether they are more likely to take far longer to appear.

#40 Richard Leis

  • Guest
  • 866 posts
  • 0
  • Location:Tucson, Arizona

Posted 14 November 2005 - 03:32 AM

I'm sticking with my agreement with Kurzweil's timeline, if not sooner. ;)

#41 manowater989

  • Topic Starter
  • Guest
  • 96 posts
  • 0

Posted 14 November 2005 - 04:36 AM

Really? Hmm, I hadn't previously thought most of you were that optimistic. Let's make this a poll: who generally agrees with this? Are there many major doubters? I remember, when I first started getting interested in transhumanism circles and found Kurzweil's site, someone saying something to the effect of "I wouldn't go near anything that starts with 'kurz' and ends with 'weil' ". Has that position shifted, or was I misled in ever believing that it was widespread?

#42 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,039 posts
  • 2,000
  • Location:Wausau, WI

Posted 14 November 2005 - 07:37 AM

I think Kurzweil's predictions are pretty good based on what I see today. Although the specific details are not going to be correct, his highlighting of exponential growth in progress is correct.

What I see from the critics is just a redefining of progress. They say the advance of computer and communication speed is not progress...ergo Kurzweil is wrong.

#43 jamesg

  • Guest
  • 2 posts
  • 0

Posted 14 November 2005 - 10:19 AM

I also believe Kurzweil is too conservative. If you chart out the computational power of supercomputers and project through 2015, you see we hit human-level intelligence at around 2013. In fact, several companies have announced they will have human-level intelligent (10 petaflops) supercomputers in 2011. Computers of these types will do much more to accelerate the "singularity" than anything we've seen before, and they'll only get more powerful each year.

I believe it's clear there will be explosions of interest in AI when we are on the boundary of a computer that's as smart as a human. It won't be like now, where hardly anybody has heard of today's supercomputers. When they are as powerful as a human brain and it's within the realm of possibility of having one as smart as a human, I believe EVERYONE will be hyped about it, and that hype will lead to an ever-increasing super-acceleration of AI development, especially with experts on CNN talking about what kinds of things we could do with a human-intelligence-level computer. I believe there will be frenzies about such computers designing nanobots that could extend our lives indefinitely, or having conversations about the nature of the universe or religion, etc. I think people are just not prepared for how big an event it will be; even the "singularitarians" don't see it, from what I can tell.

After those programs receive proper programming in a couple of years' time or sooner, they'll start designing nanobots, and humans will start building those nanobots. After we build nanobots, our computational capacities will increase to near their max (for our solar system) very quickly, within months to a year, as nanobots would have the capability to turn all dumb matter in our solar system into computers in that time frame. After that we will have to wait years for them to travel to other stars and then convert more mass into computers. But it's obvious Moore's law will not continue smoothly: we will have a huge jump in CPU power when we make nanobots, way past what Moore's law would dictate, then a sudden slowdown for years as we wait for the nanobots to reach the first stars, then more huge jumps. But we will be way ahead of Moore's law and Kurzweil's predictions.

I think Kurzweil says what he says because history backs him up; he's clearly ignoring the potential of nanobots in order to sound "more sane". It's also a more defensible position, since you can just point to Moore's law and say "see, I'm right", but it's a little dishonest.
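For what it's worth, here is a minimal sketch of the kind of chart-it-out extrapolation described above. The 2005 baseline figure and the one-year doubling time are assumptions chosen for illustration; only the 10-petaflop "human level" threshold comes from the post itself.

```python
# Assumed baseline: ~0.28 petaflops for a top 2005 supercomputer, doubling
# roughly every year. The 10-petaflop "human level" figure is the one used
# in the post above; everything else here is an illustrative assumption.

START_YEAR = 2005
START_PFLOPS = 0.28        # assumed 2005 baseline (petaflops)
DOUBLING_TIME_YEARS = 1.0  # assumed doubling time for top machines
TARGET_PFLOPS = 10.0       # "human level" threshold named in the post

def projected_pflops(year: int) -> float:
    """Extrapolated top supercomputer performance in petaflops."""
    return START_PFLOPS * 2 ** ((year - START_YEAR) / DOUBLING_TIME_YEARS)

for year in range(START_YEAR, 2016):
    note = "  <-- crosses 10 PFLOPS" if (
        projected_pflops(year) >= TARGET_PFLOPS
        and projected_pflops(year - 1) < TARGET_PFLOPS
    ) else ""
    print(f"{year}: {projected_pflops(year):8.2f} PFLOPS{note}")
```

Under these assumed numbers the 10-petaflop line is crossed around 2011, which is in the same ballpark as the projections discussed in the post; changing the baseline or doubling time shifts the crossing year accordingly.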

#44 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 14 November 2005 - 03:35 PM

someone saying something to the effect of "I wouldn't go near anything that starts with 'kurz' and ends with 'weil' "

That was me and as you can see above I have not changed my mind. I'm no computer scientist though.

#45 th3hegem0n

  • Guest
  • 379 posts
  • 4

Posted 15 November 2005 - 01:07 AM

Hypothetical future scenarios are not something to prove or disprove with a static argument. They are something to make happen.


Beautiful statement there.

Limiting our ambitions by *highly uncertain* measures of prediction is foolish and lazy.

#46 bgwowk

  • Guest
  • 1,715 posts
  • 125

Posted 15 November 2005 - 04:33 AM

Mind wrote:

Although the specific details are not going to be correct, his highlighting of exponential growth in progress is correct.

And exponential progress is a new idea? C'mon guys! You're writing as though Ray invented futurism! On my own bookshelf there are books going back as far as 1936 (The Next Hundred Years by C.C. Furnas) that make stunningly accurate predictions about the future, with exponential economic and technological growth a cornerstone of most of them. Even the idea of "escape velocity" in life extension, although popularized by Aubrey de Grey, dates back to at least the 1970s. Hans Moravec wrote lots about hyperintelligent machine evolution back in the 1980s. And then there's Drexler. Lots of people have been having great thoughts about the future for many years.

Incidentally, Kurzweil doesn't say progress is strictly exponential, but that it sometimes pauses at plateaus before resuming. It's hard not to notice that although computers are 100,000 times faster, the speed of personal travel hasn't increased in 40 years.
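As a quick sanity check on that comparison (taking the post's two figures at face value), a 100,000-fold improvement over 40 years works out to an average doubling time of roughly 2.4 years:

```python
import math

# Figures as stated in the post, taken at face value.
speedup_factor = 100_000
elapsed_years = 40

doubling_time = elapsed_years / math.log2(speedup_factor)
print(f"implied average doubling time: {doubling_time:.1f} years")  # ~2.4
```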

Are there many major doubters?

Okay, I'll bite. I think Ray is too optimistic, especially about biological problems.

---BrianW

#47 randolfe

  • Guest
  • 439 posts
  • -1
  • Location:New York City/ Hoboken, N.J.

Posted 15 November 2005 - 08:36 PM

John said a few posts back:

"His critics usually have nothing substantive to say, and none have themselves made accurate predictions of the future

Perhaps it is because the future is not accurately predictable? When enough people guess blindly, one of them is bound to be right a couple of times. This does not say anything about who will be right in the future."

I would take exception to the idea that previous success in predicting the future is "NOT" a valid indicator for future success. Obviously, to have been right before required an understanding of social and scientific trends. To be right about the future will require the same basic skills.

There may not be a 100% correlation. However, having been proven right in the past does dramatically increase the chances you will be proven right in the future.
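
A hedged, toy way to put that in numbers (the hit rates and prior below are invented for illustration, not data from the thread): if a forecaster is either "skilled" or merely "guessing", each observed correct prediction shifts the odds toward "skilled", which raises the expected chance the next prediction is also right, without requiring anything like a 100% correlation.

```python
# Toy Bayesian model (all numbers are invented assumptions): a forecaster is
# either "skilled" (60% hit rate) or "guessing" (10% hit rate), 50/50 prior.
# Conditioning only on observed correct predictions, each past hit raises the
# posterior that they are skilled, and with it the chance the next call lands.
P_HIT_SKILLED = 0.6
P_HIT_GUESSING = 0.1

def prob_next_correct(past_hits: int, prior_skilled: float = 0.5) -> float:
    """Chance the next prediction is right, given past_hits correct ones seen."""
    w_skilled = prior_skilled * P_HIT_SKILLED ** past_hits
    w_guessing = (1.0 - prior_skilled) * P_HIT_GUESSING ** past_hits
    post_skilled = w_skilled / (w_skilled + w_guessing)
    return post_skilled * P_HIT_SKILLED + (1.0 - post_skilled) * P_HIT_GUESSING

for hits in (0, 1, 3, 5):
    print(f"{hits} past hits -> {prob_next_correct(hits):.3f} chance next is right")
```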

#48 apocalypse

  • Guest
  • 134 posts
  • 0
  • Location:Diamond sphere

Posted 19 November 2005 - 07:47 AM

My problems with Kurzweil lie with the great complexity barrier and the laws of physics. Many individuals assume we'll still be making grand discoveries centuries from now; that may be the case if somehow we couldn't go beyond human-level intellects. Currently the complexity of our tools, and of the information that is now at our disposal, is growing exponentially. We've had to become ever more specialized, and it's now becoming nigh impossible for any single individual to keep up with much more than just his field of expertise. Even amongst the various fields, it is now only a few who are able to truly excel and dominate their particular field, given the vast amount of knowledge that has been accumulated in almost all of them.

IF we can significantly exceed the human intellect limitation and cope with the ever-increasing complexity, we should hit another wall: the laws of physics. Unless revolutionary insights that allow us to basically modify and alter the physical laws to our whims lie in store, then with the laws as they are, we'll probably reach the limits of what is technologically possible shortly after posthuman superintelligence arrives, probably within a century or two at most.

I agree with him that we are in store for a great wave of change as it is, given that in less than 50 years we are likely to possess the ability to enhance or transcend the limitations of the human intellect, our tools are becoming ever better, and our information/knowledge of the world is increasing exponentially. What I have trouble seeing is the exponential progress continuing indefinitely without further discoveries hinting at new laws or ways to change the laws themselves. Some of the comments I've heard from Kurzweil hint that he believes higher intellects will find ways to get around the laws as we continue to progress, but that is not necessarily the case.

#49 boundlesslife

  • Life Member in cryostasis
  • 206 posts
  • 11

Posted 28 November 2005 - 05:40 PM

My problems with Kurzweil lie with the great complexity barrier and the laws of physics. Many individuals assume we'll still be making grand discoveries centuries from now; that may be the case if somehow we couldn't go beyond human-level intellects. Currently the complexity of our tools, and of the information that is now at our disposal, is growing exponentially. We've had to become ever more specialized, and it's now becoming nigh impossible for any single individual to keep up with much more than just his field of expertise. Even amongst the various fields, it is now only a few who are able to truly excel and dominate their particular field, given the vast amount of knowledge that has been accumulated in almost all of them.

It seems reasonable to suppose that we need to transcend the limitations of "brains as they are" to cope with advances in the amount of information available, and, more importantly perhaps, to be able to clearly visualize complex solutions that today, to the best of us, may be quite hazy.

IF we can significantly exceed the human intellect limitation and cope with the ever-increasing complexity, we should hit another wall: the laws of physics. Unless revolutionary insights that allow us to basically modify and alter the physical laws to our whims lie in store, then with the laws as they are, we'll probably reach the limits of what is technologically possible shortly after posthuman superintelligence arrives, probably within a century or two at most.

That's true, and some things we presently imagine to be practical may turn out to be "practically" impossible, such as the repair by nanotechnology of a hundred trillion synapses, so as to restore, atom by atom, the cryopreserved human brain, with even approximate "atom by atom" fidelity. (As just one illustration of a possible difficulty, information fundamentally necessary to such a standard as "atom by atom" fidelity might be virtually "reduced to noise", even in what appears to be a well preserved, vitrified brain, at least, it is not presently possible to demonstrate otherwise.)

At the same time, perhaps it will be possible to reconstruct even a somewhat damaged brain by nanotechnology to emulate the original, where sufficient memory is conserved to satisfy the practical goals of "reanimation". These realizations, both as to the impracticality of repairing biological brains and the comparative ease and desirability of "uploading", might be of such a complexity that they could only be reached by intellects brought forth in the attempt to generate these emulations, as suggested in an early story titled Nothing's Impossible.

And, if the very worst were to occur, such that the transhumanist's vision of identity survival by way of information alone (including some measure of genetic information) were to be all that was possible, and uploading "didn't work" either, then perhaps the scenario presented in another early fictional work (Travelling) might be more realistic:

In either case, personality survival by one standard or another might be achieved (though it might not be recognized as such by many living today). One thing we must take into account is that the standards by which "personality survival" are measured in the future will be those accepted then, not those that are most prevalent now, even among "transhumanists".

I agree with him that we are in store for a great wave of change as it is, given that in less than 50 years we are likely to possess the ability to enhance or transcend the limitations of the human intellect, our tools are becoming ever better, and our information/knowledge of the world is increasing exponentially. What I have trouble seeing is the exponential progress continuing indefinitely without further discoveries hinting at new laws or ways to change the laws themselves. Some of the comments I've heard from Kurzweil hint that he believes higher intellects will find ways to get around the laws as we continue to progress, but that is not necessarily the case.

If we get a "spike" that seems to be an upward leap of almost unmeasurable steepness at some point, perhaps that would be enough specification of a "singularity" for any practical purpose.

Certainly, the central figure in the above story (Nothing's Impossible) found, looking back historically, that when the entire world "rushed to upload" and they all moved their identities into "hyperbrains" over a period of just a few years, something took place that, looking forward from today, we would be likely to describe as a "singularity".

boundlesslife


#50 Karomesis

  • Guest
  • 1,010 posts
  • 0
  • Location:Massachusetts, USA

Posted 29 November 2005 - 04:22 PM

In response to manowater's initial post, I would say that someone's belief or disbelief is irrelevant. What is relevant, however, is what the data suggests and someone's interpretation of that data. Belief implies an opinion, which is worthless absent expertise in the matter being discussed.

The data suggests that Kurzweil is generally correct, in light of his past predictive prowess as well as the current model of accelerated progress. It would seem that, aside from a global catastrophe, our species is headed toward the most dramatic and profound progress of its evolutionary history... the Singularity. It is our convergence, or lack thereof, which will ultimately determine the fate of Homo sapiens.



