
Who Believes Kurzweil?



#1 manowater989

  • Guest
  • 96 posts
  • 0

Posted 06 November 2005 - 12:16 AM


First, sorry about the long absence. Second, sure, most of us would like to, but who actually feels scientifically comfortable taking Ray Kurzweil at his word at all? Discuss.

#2 reason

  • Guardian Reason
  • 1,101 posts
  • 241
  • Location:US

Posted 06 November 2005 - 04:23 AM

My take: he's fine on generalities, about as likely as any futurist to be right on specifics, and wrong on timing.

http://www.fightagin...ives/000612.php


#3 emerson

  • Guest
  • 332 posts
  • 0
  • Location:Lansing, MI, USA

Posted 06 November 2005 - 09:14 AM

I'd have to go statement by statement to be really comfortable saying I believe or disbelieve anything he says. I don't care how much any human might know, I'm never going to get to the point of believing any speculation based solely on the fact that it's coming from them.

#4 manowater989

  • Topic Starter
  • Guest
  • 96 posts
  • 0

Posted 06 November 2005 - 04:08 PM

I'm not saying that you would. In fact, I'm kind of suggesting the opposite: what I'm getting at, as reason suggested in his linked commentary, is whether his saccharine-sweet timescales are believable. It's all about the 21st century, I think: how soon within it do we start seeing these real changes?

For the 20th century, you could arguably say that the biggest paradigm shifts came around the 1920s-1930s. The major advances of the 50s, 80s, and 90s were merely extensions of those. The 21st century's major shift will come, as the 20th's did, when things cease to look more or less like the previous century. We have the internet now, but that's not THAT fundamentally different from a combination of phones and televisions; our early 21st-century America still looks like a slightly souped-up 20th-century world.

So, really, when will we see things start to change as much in this century? Will it be the 2020s? The 2040s or 50s? The 2070s? The 2090s? Kurzweil set the date for the Singularity at 2048, if I remember correctly, and most find that too early. 2058? Are people more comfortable with that? How about the first nanomachines that can build simple objects out of base molecules? The first AI that can pass the Turing test? I just really want to know everyone's take, even though, obviously, none of us can know for certain. Agree or disagree with Kurzweil's timetables, and BY HOW MUCH?

#5 manowater989

  • Topic Starter
  • Guest
  • 96 posts
  • 0

Posted 07 November 2005 - 01:02 AM

I really want to know what people think about this, the timescales.

#6 Richard Leis

  • Guest
  • 866 posts
  • 0
  • Location:Tucson, Arizona

Posted 07 November 2005 - 07:38 AM

Kurzweil argues that this is a difficult task for most people because we think linearly while reality progresses exponentially. Do I agree with his timetables? Depends on the moment ;) Frankly, for the past few weeks I've been thinking that the Technological Singularity will come by 2015 and that the really significant changes you are talking about will start occurring in the next few years.

Otherwise, I am comfortable with a date around 2048.
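
The gap between linear intuition and exponential reality is easy to demonstrate with a little arithmetic. Here is a quick Python sketch (the base value, yearly increment, and doubling time are arbitrary placeholders, not figures from anyone's actual forecast) showing how fast the two projections diverge:

[code]
# Contrast a linear projection (add a fixed increment each year)
# with an exponential one (double every few years). The constants
# are arbitrary; only the shape of the divergence matters.

def linear(base, step, years):
    return base + step * years

def exponential(base, doubling_time, years):
    return base * 2 ** (years / doubling_time)

base = 1.0           # capability today, arbitrary units
step = 1.0           # linear gain per year, matched to early growth
doubling_time = 2.0  # years per doubling

for years in (2, 10, 20, 40):
    print(f"{years:>2} yrs: linear = {linear(base, step, years):>4.0f}, "
          f"exponential = {exponential(base, doubling_time, years):>9.0f}")
[/code]

After two years the two projections barely differ; after forty, they differ by more than four orders of magnitude. That is the mistake Kurzweil says linear thinkers keep making.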

One thought exercise I like to try is taking one of today's hyped top-of-the-line consumer electronics devices and extrapolating what comes next. For example, what is next for the Apple iPod now that it supports video? Wireless, increased PDA functionality, Mac OS X, Flash or SVG, high-definition video, games? None of these are that futuristic or difficult. What about by 2010? A 3D interface, OLED or ePaper, eBook support, Internet, voice recognition, cellular, VoIP? Is any of this too fantastic? Okay, now what about by 2015? An intelligent-agent interface and/or brain-machine interface, holographic projection? A little more fantastic, but bits and pieces are already available or well on their way. We are only at 2015 and the exercise is too easy.

How about capacity? Right now the top-of-the-line iPod has a 60 GB hard drive, and the Nano has a 4 GB flash chip. Let's take a conservative doubling of capacity every two years (it has actually been observed to occur nearly annually of late) and see where that leads us:

Year | Hard Drive | Flash
2005 | 60 GB | 4 GB
2007 | 120 GB | 8 GB
2009 | 240 GB | 16 GB
2011 | 480 GB | 32 GB
2013 | 960 GB | 64 GB
2015 | ~2 TB | 128 GB

What does it mean in 2015 to have a portable device with approximately 2 TB of storage, with at least the capability of today's home and business personal computers? Is this significant at all? What if we were too conservative and the same device could instead store 32 TB of data? Does that make a difference?
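
For anyone who wants to check the compounding, here is a small Python sketch (the helper function is purely illustrative) that regenerates the table above and the more aggressive annual-doubling case:

[code]
# Regenerate the capacity table by straight compounding, starting
# from the 2005 figures quoted above: a 60 GB iPod hard drive and
# a 4 GB Nano flash chip.

def project(start_gb, years_elapsed, doubling_years):
    """Double start_gb once per doubling_years over years_elapsed."""
    return start_gb * 2 ** (years_elapsed // doubling_years)

for year in range(2005, 2017, 2):
    elapsed = year - 2005
    hd = project(60, elapsed, 2)  # conservative: doubling every 2 years
    fl = project(4, elapsed, 2)
    print(f"{year} | {hd} GB hard drive | {fl} GB flash")

# The aggressive case (doubling annually, as has actually been
# observed lately) lands the 2015 device in the tens of terabytes:
print("2015, annual doubling:", project(60, 10, 1), "GB")  # 61,440 GB
[/code]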

This exercise is not necessarily meant to provide answers, but to help you wrap your mind around exponential growth and to get you thinking about whether Kurzweil and others are spouting bs or not. It is also not meant to suggest that the Apple iPod will still be around in 10 years. The concept of a portable media player may have evolved into something else entirely by then, or turned into a dead end.

Personally, when I run these exercises in my head, and then start combining the separate threads together, the Singularity does not seem so fantastic. In fact, it starts to feel a little too pedestrian and simple.


#7 Jay the Avenger

  • Guest
  • 286 posts
  • 3
  • Location:Holland

Posted 07 November 2005 - 05:29 PM

I think Kurzweil is being way too conservative on purpose. His idea needs to be defensible and marketable, which is why he can't afford to shock people too hard.

Putting the Singularity off way into the future is a good way of dampening the blow.

At the rate human-brain reverse engineering is progressing, I'd say it is entirely possible to have a Singularity by 2020. However, if it hasn't occurred by 2030, I'm going to eat my underpants.

#8 bgwowk

  • Guest
  • 1,715 posts
  • 125

Posted 07 November 2005 - 07:45 PM

Make sure you keep them well-laundered. In 2030, baby boomers will be dying in droves, consoled only by really, really good virtual-reality adult entertainment (which, unlike rejuvenation or brain-computer interfacing, won't be controlled by the FDA).

---BrianW

Edited by bgwowk, 08 November 2005 - 12:41 AM.


#9 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 08 November 2005 - 10:28 PM

Ah, Kurzweil, Kurzweil, Kurzweil... so good for transhumanity, and yet also so very bad for transhumanity. At this point I view him as a net positive for the movement as a whole, but that might change with time.

manowater

Agree or disagree with Kurzweil's timetables, and BY HOW MUCH?


Kurzweil is beholden to his timetables, and that is a great deal of the problem with him. Whether the methodology used to construct his speculation is meticulous is, quite frankly, irrelevant. There are simply too many variables in projecting future trends, and this fact alone should make one cautious in discussing any timetables whatsoever... that is, unless one is willing to go beyond the bounds of respectable philosophical inquiry.

No, Kurzweil's time frames are not really what I take issue with. Don't get me wrong: from my somewhat "conservative" transhumanist perspective, Kurzweil is way over the top in his prognostications, but as with any futurist speculation, a great deal of the opinion we as individuals espouse is grounded in *intuition*. And I would further contend that the veracity of said intuition has very little to do with brute "IQ" or analytical capability, but everything to do with flexibility of thought and the acknowledgment of one's limited understanding of objective reality. After all, Yudkowsky, Goertzel, and Kurzweil all probably have genius-level IQs, yet they disagree fundamentally on the Singularity and AI-related issues. A philosopher does not necessarily a technologist make.

But I digress. My main problem with Kurzweil is his convoluted values and his suspect motives. If you don't understand what I mean by this, then you too are a part of the problem....


pssstt....hey kids, can I sell ya on some of that new religion??

#10 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 09 November 2005 - 12:11 AM

Justin, if those thoughts help you justify apathy, so be it. Meanwhile, the respectable fields of cognitive science, neuroscience, neurophilosophy, and philosophy of neuroscience, aided by exponentially increasing computational power, continue to move forward, with no fundamentally unbreakable barriers yet in sight.

#11 Richard Leis

  • Guest
  • 866 posts
  • 0
  • Location:Tucson, Arizona

Posted 09 November 2005 - 12:16 AM

justinb:

What do computer speed and storage capacity have to do with creating an artificial intellect?


It is a starting point for this discussion and for future research. Simply put, are we able to build an artificial substrate of sufficient capability to emulate human intelligence? Kurzweil argues that we will succeed.

Is this a matter of technological progression? Of course it is. Even a "coherent model of mind" will require technological progression, with advancement in hardware as a starting point. The man, woman, or other who develops a "coherent model of mind" will not do so in a vacuum devoid of technological tools.

Everyone here that honestly thinks "strong" A.I. is just around the corner is operating on assumptions that are based on further assumptions.


So what? Progress continues regardless, with the matter being researched from all sides, assumptions based on further assumptions included. Wrong assumptions will be discarded and correct assumptions will feed into future research. Anyone who honestly thinks "strong" A.I. is NOT just around the corner is also operating on assumptions that are based on further assumptions. Technological progression will show us one way or the other.

#12 Richard Leis

  • Guest
  • 866 posts
  • 0
  • Location:Tucson, Arizona

Posted 09 November 2005 - 12:36 AM

DonSpanton:

My main problem with Kurzweil is his convoluted values and his suspect motives. If you don't understand what I mean by this, then you too are a part of the problem....

pssstt....hey kids, can I sell ya on some of that new religion??


If all transhumanists and other technology progressives thought this was a new religion, then there might be cause for this suspicion. I cannot know what Kurzweil truly thinks, but when he talks about the spirituality of machines, it seems to me he is defending against those who insist on viewing technological progress as cold and unfeeling. Cast the products of technological progress in a spiritual light and you fend off one avenue of attack by critics.

I find much of value in Kurzweil's work but there are a few beliefs he has in common with many transhumanists that bug me. One example: the insistence that humans merging with technology is a transcendent event in a spiritual, sum-greater-than-its-parts sense. There is nothing transcendent about any of this unless the definition of transcendence is restricted to "surpassing others." Transhumans and posthumans will be the result of rapid technological progression, not spiritual transcendence. That would be like calling complex weather patterns a transcendent phenomenon, when they are instead a study in complexity. The posthuman historian will not need hand-waving to describe the evolution of humans and technology into posthuman forms.

Kurzweil's support of defense measures and of restricting some knowledge (such as the genome of the Spanish Flu virus) are other positions I do not share. However, I am not about to disregard everything the guy says.

#13 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 09 November 2005 - 01:24 AM

The Singularity certainly does mean more than "surpassing human intelligence." It means launching us toward the absolute mergence of thought and being. This implies a lot, and we can modify our current thoughts and actions accordingly.

I'm not sure what your point is. No one with a sufficient understanding of the Singularity is suggesting that "all we need to do is wait." Enough information, knowledge, and wisdom exist to continue improving upon the management of information, increasing our knowledge, and enhancing our wisdom, while we can be reasonably certain that such advances can facilitate further advances. We stop advancing when we can't, not when we guess we probably won't be able to.

Hypothetical future scenarios are not something to prove or disprove with a static argument. They are something to make happen.

#14 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 09 November 2005 - 02:12 AM

1. Determinism is not a problem. Plenty of pleasurable experiences can occur in a deterministic universe.

2. Our personalities depend on a dynamic process of signals (information/forces), not necessarily on the material you associate with the human nervous system.

3. Cosmology does not take into account a universe saturated with intelligent processes. Don't assume entropy will kill us if you don't supply the rigor you demand.

You are being aimlessly argumentative. Your concerns have been thoroughly put to rest all over the place. You just need to look, instead of indulging in fragmentary and distorted conceptions and self-defeat.

#15 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 09 November 2005 - 02:48 AM

You demand scientifically sound discourse and guarantees. Something's gotta go.

The fact of the matter is you need problems other than strong AI to concern yourself with. Otherwise take heed in your own self-confidence or die.

#16 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 09 November 2005 - 03:03 AM

I don't demand anything except that people stop operating on assumed outcomes of certain enterprises.

How else is there to operate? Not expecting the outcome sought? That's an irresponsible way to proceed. Indeed, there's a separate argument that enterprises can't proceed without having expected outcomes.

Don't expect it to work ...

So basically you're saying it's better to engender X and not expect X than to engender X and expect X. What a trivial difference!

#17 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 09 November 2005 - 03:57 AM

You're still digressing, now with a straw man. Expectations are not the same as blind enthusiasm. Quit wasting everyone's time.

#18 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 09 November 2005 - 04:24 AM

... I am simply saying that erroneous expectations are created by blind enthusiasm in the immortalist meme.

You were saying more than that, all of which were sufficiently invalidated. What you're "simply saying" is pointless.

I really need to work on my articulatory skills.

Stop babbling and get to it then.

#19

  • Lurker
  • 0

Posted 09 November 2005 - 05:59 AM

Why have you decided to delete your posts, Justin?

#20 Richard Leis

  • Guest
  • 866 posts
  • 0
  • Location:Tucson, Arizona

Posted 09 November 2005 - 06:19 AM

Justinb made a good point about a need for a hard science approach to the Technological Singularity theory. Singularity literature is replete with grandiose images of the future. It would also be nice to hear more new voices, preferably via peer-reviewed research papers.

#21 justinb

  • Guest
  • 726 posts
  • 0
  • Location:California, USA

Posted 09 November 2005 - 06:45 AM

Why have you decided to delete your posts, Justin?


I don't know. (Really, I don't.)

I have been "out of it" for a while now. I think Nate just kicked up a bunch of flak because he doesn't like the fact that a lot of the things he holds dear are nowhere near guaranteed. Plus, there are many problems with immortality: entropy and the lack of free will, to name just two out of dozens if not hundreds.

If we ever increase our intellects to the outer limits of human capacity, I believe we will be horrified by several facts and either commit suicide or go insane.

Or, a more colorful way we might die: enhance ourselves to such a degree that we lose our personalities, killing ourselves in an out-of-control spiral toward posthumanism as we slowly watch ourselves wane away into nothingness. It would most likely take only a short time to do this, though.

#22 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 09 November 2005 - 07:20 AM

Justinb made a good point about a need for a hard science approach to the Technological Singularity theory.

What's this theory supposed to explain? That we can't be specific about post-Singularity states, that we need to be responsible, that transhuman intelligence is the last invention Homo sapiens need to make? We already know this. What Justin wants is a theory that says no one is, or can be, smarter than he and his vacuous high-IQ associates.

It would also be nice to hear more new voices, preferably via peer-reviewed research papers.

Do you know about the Journal of Evolution and Technology and the upcoming Future of Humanity Institute (among a number of other risk-management organizations that account for the Singularity)? Which new voices are you looking for, and as opposed to whom?

#23 Richard Leis

  • Guest
  • 866 posts
  • 0
  • Location:Tucson, Arizona

Posted 09 November 2005 - 03:43 PM

I am not all that certain that we do already know this. Even if we did, there is yet much work to be done to flesh out a Theory of the Technological Singularity.

The links you provided are exactly what I was hoping for: new voices joining the better-known, all discussing the Singularity and its implications.

I sense there is some other debate going on here, but I will stick with the topic of the original post. I believe much of what Kurzweil theorizes. However, there is more work to be done, whether or not the Singularity is inevitable.

#24 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 09 November 2005 - 05:42 PM

Richard, I'm glad those links were helpful.

I believe much of what Kurzweil theorizes. However, there is more work to be done, whether or not the Singularity is inevitable.

Yes. I mean the same.

#25 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 09 November 2005 - 06:53 PM

enoosphere

If all transhumanists and other technology progressives thought this was a new religion, then there might be cause for this suspicion. I cannot know what Kurzweil truly thinks, but when he talks about the spirituality of machines, it seems to me he is defending against those who insist on viewing technological progress as cold and unfeeling. Cast the products of technological progress in a spiritual light and you fend off one avenue of attack by critics.


If you want to know what Kurzweil "truly thinks" then go to Barnes&Noble and read his new testament lying out there on the front display case.

Ray Kurzweil: We need a new religion.

Um actually, no, we don't Ray.

Religion is an ambiguous term, to be sure, but one that (to me at least) represents the large-scale unification of belief and the establishment of dogma; the ultimate subjugation of the will, the death of the free thinker. And for what? To satisfy the all-too-human psychological need for certainty and meaning?

History teaches us that religion leads to erroneous assessments and tragedy.

I find much of value in Kurzweil's work but there are a few beliefs he has in common with many transhumanists that bug me. One example: the insistence that humans merging with technology is a transcendent event in a spiritual, sum-greater-than-its-parts sense. There is nothing transcendent about any of this unless the definition of transcendence is restricted to "surpassing others." Transhumans and posthumans will be the result of rapid technological progression, not spiritual transcendence. That would be like calling complex weather patterns a transcendent phenomenon, when they are instead a study in complexity. The posthuman historian will not need hand-waving to describe the evolution of humans and technology into posthuman forms.


But could not the amplification of the human mind to some unprecedented level of ultra intelligence result in the radical reassessment of our values? Could this not, in a sense, be considered...transcendent?

And also, if I may put forward one more rhetorical question, does transcendence necessarily demand a positive valuation?

Technologists often grant positive value to technological progress, but I have found that frequently their justification for a positive assessment is based on satisfying needs residing exclusively within the biological paradigm. This simply will not do, for with a drastic redesign of the human mind, and the instantiation of various types of meta-programming, the current amalgam of urges, impulses, and base logic that are selected for or against in human societies (LL uses the term 'human selection') will finally give way to, or be eclipsed more fully by, the memetic paradigm. This is not to say that various forms of hedonism will not still be fashionable, only that such impulses' total influence over evaluative processes will be greatly reduced, if not eliminated entirely.

However, I am not about to disregard everything the guy says.


I read Kurzweil with due diligence.

#26 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 09 November 2005 - 07:09 PM

1. Determinism is not a problem. Plenty of pleasurable experiences can occur in a deterministic universe.


Yes

2. Our personalities depend on a dynamic process of signals (information/forces), not necessarily on the material you associate with the human nervous system.


Yes

3. Cosmology does not take into account a universe saturated with intelligent processes. Don't assume entropy will kill us if you don't supply the rigor you demand.


Yes

#27 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 09 November 2005 - 07:22 PM

Oh, and one more thing, Justin.

Berkeley's inverted-monist ontology may be impenetrable, but it is also the laziest of philosophies.

#28 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 09 November 2005 - 08:20 PM

Hi, Don. I'm still coming to grips with your apparent, somewhat radical shift in perspective, so I don't know if I'm totally getting you. But here goes.

If you want to know what Kurzweil "truly thinks" then go to Barnes&Noble and read his new testament lying out there on the front display case.

Ray Kurzweil: We need a new religion.

Um actually, no, we don't Ray.

I think this is a bit out of context. Recall what he says on page 370:

George Gilder has described my scientific and philosophical views as "a substitute vision for those who have lost faith in the traditional object of religious belief." Gilder's statement is understandable, as there are at least apparent similarities between anticipation of the Singularity and anticipation of the transformations articulated by traditional religions.

But I did not come by my perspective as a result of searching for an alternative to customary faith. The origin of my quest to understand technology trends was practical: an attempt to time my inventions and to make optimal tactical decisions in launching technology enterprises. Over time this modeling of technology took on a life of its own and led me to formulate a theory of technology evolution. It was not a huge leap from there to reflect on the impact of these crucial changes on social and cultural institutions and on my own life. So, while being a Singularitarian is not a matter of faith but one of understanding, pondering the scientific trends I've discussed in this book inescapably engenders new perspectives on the issues that traditional religions have attempted to address: the nature of mortality and immortality, the purpose of our lives, and intelligence in the universe.

Later on, he generalizes even further, and more effectively, by stating that the principle we would want to keep from traditional religion is simply the respect for human consciousness, where he uses 'human' with a more comprehensive and enlightened meaning.

Religion is an ambiguous term, to be sure, but one that (to me at least) represents the large-scale unification of belief and the establishment of dogma; the ultimate subjugation of the will, the death of the free thinker. And for what? To satisfy the all-too-human psychological need for certainty and meaning?

If subjugating the will to religion is the death of the free thinker, then so is tenaciously remaining aimless. The free thinker needs wisdom and knowledge for direction. There must be some element of discipline and commitment here.

But could not the amplification of the human mind to some unprecedented level of ultra intelligence result in the radical reassessment of our values? Could this not, in a sense, be considered...transcendent?

No. Not really. There's a fundamental pattern in intelligence that we can all recognize. Intelligence is the process toward merging thought and being. Even those who are anti-intelligence are trying to be right about something. The mere act of trying to be right is an attempted step toward merging thought and being. One's values can probabilistically either move one away or move one closer to the mergence of thought and being. Anti-intelligence or ignorance are more likely to move one away. Intelligence or intellectual endorsement are more likely to move one closer.

And also, if I may put forward one more rhetorical question, does transcendence necessarily demand a positive valuation?

It looks that way.

Technologists often grant positive value to technological progress, but I have found that frequently their justification for a positive assessment is based on satisfying needs residing exclusively within the biological paradigm. This simply will not do, for with a drastic redesign of the human mind, and the instantiation of various types of meta-programming, the current amalgam of urges, impulses, and base logic that are selected for or against in human societies (LL uses the term 'human selection') will finally give way to, or be eclipsed more fully by, the memetic paradigm. This is not to say that various forms of hedonism will not still be fashionable, only that such impulses' total influence over evaluative processes will be greatly reduced, if not eliminated entirely.

In this context "hedonism" would be more appropriately replaced with "eudaemonism." But even that's an insufficient description of the aims of responsible intelligence. I disagree that the needs of intelligence reside exclusively within the biological paradigm. We can reasonably anticipate that as we merge with nonbiological technology, and later become totally nonbiological, we will still operate with survivability, the concept of technology, and intelligence enhancement paradigms.

It's simply irresponsible to ignore fundamental patterns other than memetic ones, which are in fact facilitated by these other fundamental patterns.

#29 justinb

  • Guest
  • 726 posts
  • 0
  • Location:California, USA

Posted 09 November 2005 - 09:04 PM

Oh, and one more thing, Justin.

Berkeley's inverted-monist ontology may be impenetrable, but it is also the laziest of philosophies.


On the contrary. But since you have very uncouth and unclever "quips" or tautological answers for everything, I wouldn't expect you to understand. Not to mention your rampant use of "proofs."

Perhaps it is time for you guys to stop relying on other people and think for yourselves.

Oh, a good place to start would be to actually understand what the second law of thermodynamics means. It seems very few people here actually do.

Edited by justinb, 10 November 2005 - 07:42 AM.


To book this BIOSCIENCE ad spot and support Longecity (this will replace the google ad above) - click HERE.

#30 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 09 November 2005 - 09:25 PM

I see that you have edited (correction: replaced) your initial response in favor of a less tactful and more 'reactive' repartee. How enlightened.

For the record, Justin: although Berkeley's school of thought may be internally consistent, it still begs the question "from what does the mental arise?" And therein lies the problem. We are left once again with a "causa sui" answer, which is obviously inadequate. So we can delve further into simulation scenarios, or other similarly speculative and unsubstantiated metaphysics; or we can come back to reality and recognize that there is almost certainly an objective reality waiting to be discovered. In this light, the pragmatism embraced by James might be the most effective means of attaining "truth".
----------------

Nate, time is limited right now, but I'll try to address your comments in the next day or two.



