Who Believes Kurzweil?
#1
Posted 06 November 2005 - 12:16 AM
First, sorry about the long absence. Second, sure, most of us would like to, but who actually feels scientifically comfortable taking Ray Kurzweil at his word at all? Discuss.
#2
Posted 06 November 2005 - 04:23 AM
http://www.fightagin...ives/000612.php
#3
Posted 06 November 2005 - 09:14 AM
#4
Posted 06 November 2005 - 04:08 PM
#5
Posted 07 November 2005 - 01:02 AM
#6
Posted 07 November 2005 - 07:38 AM
Otherwise, I am comfortable with a date around 2048.
One thought exercise I like to try is taking one of today's hyped top of the line consumer electronics and extrapolating what is next. For example, what is next for the Apple iPod now that it supports video? Wireless, increased PDA functionality, Mac OS X, Flash or SVG, high definition video, games? None of these are that futuristic or difficult. What about by 2010? 3D interface, OLED or ePaper, eBook support, Internet, voice recognition, cellular, VoIP? Is any of this too fantastic? Okay, now what about by 2015? Intelligent agent interface and/or brain-machine interface, holographic projection? A little more fantastic but bits and pieces are already available or well on their way. We are only at 2015 and the exercise is too easy.
How about capacity? Right now the top of the line iPod has a 60 GB hard drive, and the Nano has a 4 GB flash chip. Let's take a conservative doubling of capacity every two years (it has actually been shown to occur nearly annually now) and see where that leads us:
Year | Hard Drive | Flash
2005 | 60 GB | 4 GB
2007 | 120 GB | 8 GB
2009 | 240 GB | 16 GB
2011 | 480 GB | 32 GB
2013 | 960 GB | 64 GB
2015 | 1920 GB (~2 TB) | 128 GB
What does it mean in 2015 to have a portable device with approximately 2 TB of storage, with at least the capability of all current home and business personal computers? Is this significant at all? What if we were too conservative and the same device can instead store 32 TB of data? Does that make a difference?
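For what it's worth, the doubling arithmetic is easy to check. Here is a minimal sketch (the function and parameter names are mine, not from the post); a conservative doubling every two years takes the 60 GB hard drive of 2005 to 1920 GB (~2 TB) by 2015, and the 4 GB flash chip to 128 GB:

```python
def capacity_gb(base_gb, year, start_year=2005, doubling_period=2):
    """Project storage capacity under a fixed doubling period (in years)."""
    doublings = (year - start_year) // doubling_period
    return base_gb * 2 ** doublings

# 2005 starting points from the post: 60 GB hard drive, 4 GB flash.
for year in range(2005, 2016, 2):
    print(year, capacity_gb(60, year), capacity_gb(4, year))
```

Shorten the doubling period to one year (as the post notes has been observed lately) and the 2015 figures grow another five doublings, which is where the more aggressive multi-TB scenarios come from.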
This exercise is not necessarily meant to provide answers, but instead to try to wrap your mind around exponential growth and get you to start thinking about whether Kurzweil and others are spouting bs or not. This exercise is also not meant to suggest that the Apple iPod will still be around in 10 years. The concept of a portable media player may have evolved into something else entirely by then, or turned into a dead end.
Personally, when I run these exercises in my head, and then start combining the separate threads together, the Singularity does not seem so fantastic. In fact, it starts to feel a little too pedestrian and simple.
#7
Posted 07 November 2005 - 05:29 PM
Putting the Singularity off way into the future is a good way of dampening the blow.
At the rate human-brain reverse engineering is happening, I'd say it is entirely possible to have had a Singularity by 2020. However, if it hasn't occurred by 2030, I'm going to eat my underpants.
#8
Posted 07 November 2005 - 07:45 PM
---BrianW
Edited by bgwowk, 08 November 2005 - 12:41 AM.
#9
Posted 08 November 2005 - 10:28 PM
manowater
Agree or disagree with Kurzweil's timetables, and BY HOW MUCH?
Kurzweil is beholden to his "timetables," and that is a great deal of the problem with him. Whether the methodology used to construct his speculation is meticulous is, quite frankly, irrelevant. There are simply too many variables in projecting future trends, and this fact alone should make one cautious in discussing any timetables whatsoever... that is, unless one is willing to go beyond the bounds of respectable philosophical inquiry.
No, Kurzweil's time frames are not really what I take issue with. I mean, don't get me wrong: from my somewhat "conservative" transhumanist perspective, Kurzweil is way over the top in his prognostications, but as with any futurist speculation, a great deal of the opinion we as individuals espouse is grounded in *intuition*. And I would further contend that the veracity of said intuition has very little to do with brute "IQ" or analytical capability, but everything to do with flexibility of thought and the self-acknowledgment of one's limited understanding of our objective reality. After all, Yudkowsky, Goertzel, and Kurzweil all probably have genius-level IQs, but they nonetheless disagree fundamentally on the Singularity and AI-related issues. A philosopher does not, necessarily, a technologist make.
But I digress. My main problem with Kurzweil is his convoluted values and his suspect motives. If you don't understand what I mean by this, then you too are a part of the problem....
pssstt....hey kids, can I sell ya on some of that new religion??
#10
Posted 09 November 2005 - 12:11 AM
#11
Posted 09 November 2005 - 12:16 AM
What does computer speed and storage capacity have to do with creating an artificial intellect?
It is a starting point for this discussion and for future research. Simply put, are we able to build an artificial substrate of sufficient capability to emulate human intelligence? Kurzweil argues that we will succeed.
Is this a matter of technological progression? Of course it is. Even a "coherent model of mind" will require technological progression, with advancement in hardware a starting point. The man or woman or other that develops a "coherent model of mind" will not do so in a vacuum devoid of technological tools.
Everyone here that honestly thinks "strong" A.I. is just around the corner is operating on assumptions that are based on further assumptions.
So what? Progress continues regardless, researching the matter from all sides, including assumptions based on further assumptions. Wrong assumptions will be discarded and correct assumptions will go into future research. Anyone that honestly thinks "strong" A.I. is NOT just around the corner is also operating on assumptions that are based on further assumptions. Technological progression will show us one way or the other.
#12
Posted 09 November 2005 - 12:36 AM
My main problem with Kurzweil is his convoluted values and his suspect motives. If you don't understand what I mean by this, then you too are a part of the problem....
pssstt....hey kids, can I sell ya on some of that new religion??
If all transhumanists and other technology progressives thought this was a new religion, then there might be a call for this suspicion. I cannot know what Kurzweil truly thinks, but when he talks about the spirituality of machines, it seems to me he is defending against those who insist on viewing technological progress as cold and unfeeling. Cast the products of technological progress in a spiritual light and you fend off one avenue of attack by critics.
I find much of value in Kurzweil's work but there are a few beliefs he has in common with many transhumanists that bug me. One example: the insistence that humans merging with technology is a transcendent event in a spiritual, sum-greater-than-its-parts sense. There is nothing transcendent about any of this unless the definition of transcendence is restricted to "surpassing others." Transhumans and posthumans will be the result of rapid technological progression, not spiritual transcendence. That would be like calling complex weather patterns a transcendent phenomenon, when they are instead a study in complexity. The posthuman historian will not need to use hand-waving to describe the evolution of humans and technology into posthuman forms.
Kurzweil's support of defense measures and of restricting some knowledge (such as the genome of the Spanish flu virus) are other positions I do not share. However, I am not about to disregard everything the guy says.
#13
Posted 09 November 2005 - 01:24 AM
I'm not sure what your point is. No one with a sufficient understanding of the Singularity is suggesting that "all we need to do is wait." Enough information, knowledge, and wisdom exist to continue improving upon the management of information, increasing our knowledge, and enhancing our wisdom, while we can be reasonably certain that such advances can facilitate further advances. We stop advancing when we can't, not when we guess we probably won't be able to.
Hypothetical future scenarios are not something to prove or disprove with a static argument. They are something to make happen.
#14
Posted 09 November 2005 - 02:12 AM
1. Determinism is not a problem. Plenty of pleasurable experiences can occur in a deterministic universe.
2. Our personalities depend on a dynamic process of signals (information/forces), not necessarily on the material you associate with the human nervous system.
3. Cosmology does not take into account a universe saturated with intelligent processes. Don't assume entropy will kill us if you don't supply the rigor you demand.
You are being aimlessly argumentative. Your concerns have been thoroughly put to rest all over the place. You just need to look, instead of indulging in fragmentary and distorted conceptions and self-defeat.
#15
Posted 09 November 2005 - 02:48 AM
The fact of the matter is you need problems other than strong AI to concern yourself with. Otherwise take heed in your own self-confidence or die.
#16
Posted 09 November 2005 - 03:03 AM
I don't demand anything except that people stop operating on assumed outcomes of certain enterprises.
How else is there to operate? Not expecting the outcome sought? That's an irresponsible way to proceed. Indeed, there's a separate argument that enterprises can't proceed without having expected outcomes.
Don't expect it to work ...
So basically you're saying it's better to engender X and not expect X than to engender X and expect X. What a trivial difference!
#17
Posted 09 November 2005 - 03:57 AM
#18
Posted 09 November 2005 - 04:24 AM
I am simply saying that erroneous expectations are created by blind enthusiasm in the immortalist meme.
You were saying more than that, all of which was sufficiently invalidated. What you're "simply saying" is pointless....
I really need to work on my articulatory skills.
Stop babbling and get to it then.
#19
Posted 09 November 2005 - 05:59 AM
#20
Posted 09 November 2005 - 06:19 AM
#21
Posted 09 November 2005 - 06:45 AM
Why have you decided to delete your posts, Justin?
I don't know. (Really, I don't.)
I have been "out of it" for a while now. I think Nate just made a bunch of flak because he doesn't like the fact that a lot of the things he holds dear are nowhere near being guaranteed. Plus, there are many problems with immortality; entropy and the lack of free will, to name just two out of dozens if not hundreds.
If we ever increase our intellects to the outer limits of human capacity, I believe we will be horrified by several facts and either commit suicide or go insane.
Or a colorful way we might die is to enhance ourselves to such a degree that we lose our personalities and end up killing ourselves in an out-of-control spiral towards posthumanism, slowly watching ourselves wane away into nothingness. It would most likely take only a short time period to do this, though.
#22
Posted 09 November 2005 - 07:20 AM
Justinb made a good point about a need for a hard science approach to the Technological Singularity theory.
What's this theory supposed to explain? That we can't be specific about post-Singularity states, that we need to be responsible, that transhuman intelligence is the last invention Homo sapiens need to make? We already know this. What Justin wants is a theory that says no one's smarter than he is, and his vacuous high-IQ associates, and can't be.
It would also be nice to hear more new voices, preferably via peer-reviewed research papers.
Do you know about the Journal of Evolution and Technology and the upcoming Future of Humanity Institute (among a number of other risk-management organizations which account for the Singularity)? Which new voices are you looking for, as opposed to whom?
#23
Posted 09 November 2005 - 03:43 PM
The links you provided are exactly what I was hoping for. New voices joining the better known, all discussing the singularity and implications.
I sense there is some other debate going on here, but I will stick with the topic of the original post. I believe much of what Kurzweil theorizes. However, there is more work to be done, whether or not the Singularity is inevitable.
#24
Posted 09 November 2005 - 05:42 PM
I believe much of what Kurzweil theorizes. However, there is more work to be done, whether or not the Singularity is inevitable.
Yes. I mean the same.
#25
Posted 09 November 2005 - 06:53 PM
If all transhumanists and other technology progressives thought this was a new religion, then there might be a call for this suspicion. I cannot know what Kurzweil truly thinks, but when he talks about the spirituality of machines, it seems to me he is defending against those who insist on viewing technological progress as cold and unfeeling. Cast the products of technological progress in a spiritual light and you fend off one avenue of attack by critics.
If you want to know what Kurzweil "truly thinks," then go to Barnes & Noble and read his new testament lying out there on the front display case.
Ray Kurzweil: We need a new religion.
Um, actually, no, we don't, Ray.
Religion is an ambiguous term, to be sure, but one that (to me at least) represents the large-scale unification of belief and the establishment of dogma; the ultimate subjugation of the will, the death of the free thinker. And for what? To satisfy the all-too-human psychological need for certainty and meaning?
History teaches us that religion leads to erroneous assessments and tragedy.
I find much of value in Kurzweil's work but there are a few beliefs he has in common with many transhumanists that bug me. One example: the insistence that humans merging with technology is a transcendent event in a spiritual, sum-greater-than-its-parts sense. There is nothing transcendent about any of this unless the definition of transcendence is restricted to "surpassing others." Transhumans and posthumans will be the result of rapid technological progression, not spiritual transcendence. That would be like calling complex weather patterns a transcendent phenomenon, when they are instead a study in complexity. The posthuman historian will not need to use hand-waving to describe the evolution of humans and technology into posthuman forms.
But could not the amplification of the human mind to some unprecedented level of ultra intelligence result in the radical reassessment of our values? Could this not, in a sense, be considered...transcendent?
And also, if I may put forward one more rhetorical question, does transcendence necessarily demand a positive valuation?
Technologists often grant positive value to technological progress, but I have found that frequently their justification for a positive assessment is based on the satisfying of needs residing exclusively within the biological paradigm. This simply cannot do, for with a drastic redesign of the human mind, and the instantiation of various types of meta-programming, the current amalgam of urges, impulses, and base logic that are selected for or against in human societies (LL uses the term 'human selection') will finally give way to, or be eclipsed more fully by, the memetic paradigm. This is not to say that various forms of hedonism will not still be fashionable, only that such impulses' total influence over evaluative processes will be greatly reduced, if not eliminated entirely.
However, I am not about to disregard everything the guy says.
I read Kurzweil with due diligence.
#26
Posted 09 November 2005 - 07:09 PM
1. Determinism is not a problem. Plenty of pleasurable experiences can occur in a deterministic universe.
Yes
2. Our personalities depend on a dynamic process of signals (information/forces), not necessarily on the material you associate with the human nervous system.
Yes
3. Cosmology does not take into account a universe saturated with intelligent processes. Don't assume entropy will kill us if you don't supply the rigor you demand.
Yes
#27
Posted 09 November 2005 - 07:22 PM
Berkeley's inverted-monist ontology may be impenetrable, but it is also the laziest of philosophies.
#28
Posted 09 November 2005 - 08:20 PM
If you want to know what Kurzweil "truly thinks," then go to Barnes & Noble and read his new testament lying out there on the front display case.
Ray Kurzweil: We need a new religion.
Um, actually, no, we don't, Ray.
I think this is a bit out of context. Recall what he says on page 370:
George Gilder has described my scientific and philosophical views as "a substitute vision for those who have lost faith in the traditional object of religious belief." Gilder's statement is understandable, as there are at least apparent similarities between anticipation of the Singularity and anticipation of the transformations articulated by traditional religions.
But I did not come by my perspective as a result of searching for an alternative to customary faith. The origin of my quest to understand technology trends was practical: an attempt to time my inventions and to make optimal tactical decisions in launching technology enterprises. Over time this modeling of technology took on a life of its own and led me to formulate a theory of technology evolution. It was not a huge leap from there to reflect on the impact of these crucial changes on social and cultural institutions and on my own life. So, while being a Singularitarian is not a matter of faith but one of understanding, pondering the scientific trends I've discussed in this book inescapably engenders new perspectives on the issues that traditional religions have attempted to address: the nature of mortality and immortality, the purpose of our lives, and intelligence in the universe.
Religion is an ambiguous term, to be sure, but one that (to me at least) represents the large-scale unification of belief and the establishment of dogma; the ultimate subjugation of the will, the death of the free thinker. And for what? To satisfy the all-too-human psychological need for certainty and meaning?
If the death of the free thinker is subjugating the will to religion, then so is clinging tenaciously to aimlessness. The free thinker needs wisdom and knowledge for direction. There must be some element of discipline and commitment here.
But could not the amplification of the human mind to some unprecedented level of ultra intelligence result in the radical reassessment of our values? Could this not, in a sense, be considered... transcendent?
No, not really. There's a fundamental pattern in intelligence that we can all recognize. Intelligence is the process toward merging thought and being. Even those who are anti-intelligence are trying to be right about something. The mere act of trying to be right is an attempted step toward merging thought and being. One's values can probabilistically move one either away from or closer to the merging of thought and being. Anti-intelligence or ignorance is more likely to move one away; intelligence or intellectual endorsement is more likely to move one closer.
And also, if I may put forward one more rhetorical question, does transcendence necessarily demand a positive valuation?
It looks that way.
Technologists often grant positive value to technological progress, but I have found that frequently their justification for a positive assessment is based on the satisfying of needs residing exclusively within the biological paradigm. This simply cannot do, for with a drastic redesign of the human mind, and the instantiation of various types of meta-programming, the current amalgam of urges, impulses, and base logic that are selected for or against in human societies (LL uses the term 'human selection') will finally give way to, or be eclipsed more fully by, the memetic paradigm. This is not to say that various forms of hedonism will not still be fashionable, only that such impulses' total influence over evaluative processes will be greatly reduced, if not eliminated entirely.
In this context "hedonism" would be more appropriately replaced with "eudaemonism." But even that is an insufficient description of the aims of responsible intelligence. I disagree that the needs of intelligence reside exclusively within the biological paradigm. We can reasonably anticipate that as we merge with nonbiological technology, and later become totally nonbiological, we will still operate with survivability, the concept of technology, and intelligence-enhancement paradigms.
It's simply irresponsible to ignore fundamental patterns other than memetic ones, which are in fact facilitated by these other fundamental patterns.
#29
Posted 09 November 2005 - 09:04 PM
Oh, and one more thing Justin.
Berkeley's inverted-monist ontology may be impenetrable, but it is also the laziest of philosophies.
On the contrary. But since you have very uncouth and unclever "quips" or tautological answers for everything, I wouldn't think you would understand. Not to mention your rampant use of "proofs."
Perhaps it is time for you guys to stop relying on other people and think for yourselves.
Oh, a good place to start would be to actually understand what the second law of thermodynamics means. It seems very few people here actually know.
Edited by justinb, 10 November 2005 - 07:42 AM.
#30
Posted 09 November 2005 - 09:25 PM
For the record, Justin, although Berkeley's school of thought may be internally consistent, it still begs the question, "from what does the mental arise?" And therein lies the problem. We are left once again with a "causa sui" answer, which is, obviously, inadequate. So we can delve further into simulation scenarios, or other similarly speculative and unsubstantiated metaphysics -- or we can come back to reality and recognize that there is almost certainly an objective reality waiting to be discovered. In this light, the pragmatism embraced by James might be the most effective means of attaining "truth".
----------------
Nate, time is limited right now, but I'll try to address your comments in the next day or two.