  LongeCity
              Advocacy & Research for Unlimited Lifespans





"The Singularity Myth"



#31 Richard Leis

  • Guest
  • 866 posts
  • 0
  • Location:Tucson, Arizona

Posted 19 March 2006 - 01:20 AM

There is a lot in Kurzweil's ideas to critique, but personal attacks and value judgements add nothing to the conversation. "His delusion," "insert himself," "celebrity-seeking," "way overstates," etc. are all value judgements, saying more about one person's judgement of Kurzweil than about the validity of Kurzweil's ideas.

What frustrates me in all debates is how the participants often feel they have the authority to make pronouncements about each other. You and I have absolutely no authority to judge Kurzweil. Instead, we have the authority to logically and reasonably discuss his ideas and why we do or do not agree with them. "I do not agree with him because he is a quack" is not a valid point.

If I were to say "I respect Kurzweil because he follows his passions" then I would be making some sort of value judgement about how people of passion should be listened to and people with no passion should be ignored. But do you honestly care if and why I respect Kurzweil? Why should we sit around and compare the objects of our respect when instead we could be discussing their ideas? There is so much more information per unit in a discussion of ideas than in a comparison of personal beliefs and judgements.

#32 Richard Leis

  • Guest
  • 866 posts
  • 0
  • Location:Tucson, Arizona

Posted 19 March 2006 - 01:38 AM

Since Kurzweil is a figurehead, the credibility of the futurist movement is directly tied to his credibility. We have every right to scrutinize him.


Where did this idea that we can judge people in debate come from? I see it here in this forum, in blogs, and in comments in the mediasphere. We absolutely do not have any right to scrutinize a person. We have every right to scrutinize their ideas.

Of what relevance is it to say "He is stupid" in reference to President Bush, "Ayn Rand cheated on her husband" in reference to the validity of objectivism, or "He's celebrity-seeking" in reference to Kurzweil? Ideas should stand up on their own, separate from the person who uttered them. I cannot dismiss a person just because I don't happen to agree with some part of their lifestyle, or agree with them simply because most people think they are a genius, for example. A serial killer or dictator may provide insightful ideas just as a scientist or parent may provide bad ones.

By focusing on criticism of Kurzweil's ideas, we can potentially forge better futurist memes. By focusing on criticism of Kurzweil, I guess we could potentially forge a better Kurzweil, but that seems a little outside the scope of whatever mission we have set ourselves upon.

sponsored ad

  • Advert

#33 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei

Posted 19 March 2006 - 02:30 AM

Two separate issues, Richard. The first is whether Kurzweil's speculations are sound. The second is whether Kurzweil, as a figurehead, is beneficial to the movement. Admittedly the second issue is off topic for this thread, but saying that it is always an inappropriate discussion misses the point.

#34 Richard Leis

  • Guest
  • 866 posts
  • 0
  • Location:Tucson, Arizona

Posted 19 March 2006 - 03:36 AM

Okay, I see the difference. Does it matter to the movement whether Kurzweil is a figurehead, or whether he is beneficial?

#35 kevin

  • Member, Guardian
  • 2,779 posts
  • 822

Posted 19 March 2006 - 04:40 AM

How would nano-assemblers be able to coordinate their actions and exert a sophisticated level of control over the physical world without a tremendous amount of computational power supporting them?


"You" are a great example of a bunch of nano-assemblers coordinating their actions to exert a sophisticated level of control over the physical world, although I'm not too sure at how much computational power is driving the process... :)

#36 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 19 March 2006 - 04:53 AM

Kevin, hmm, in this case the coordinating computing power seems to emerge from intrinsic properties of the nanoassemblers themselves, in a way much unlike any control circuits yet dreamt up by their final product... That does make it hard to estimate, but it's still imo another shortcoming of Kurzweil's predictions -- he seems to underestimate the power of the biological systems he wishes our machines to surpass. Compare e.g. the number of dendritic connections per neuron or the intra-neuronal computation speed given by Kurzweil with estimates from the primary literature.
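
As a sketch of the comparison John is pointing at: the figures below are illustrative order-of-magnitude assumptions (a Kurzweil-style low-end estimate versus synapse counts nearer the primary literature), not numbers taken from this thread.

[code]
# Back-of-envelope comparison of human-brain processing estimates.
# All figures are illustrative assumptions, not authoritative values.

def brain_ops_per_sec(neurons, connections_per_neuron, signals_per_sec):
    # Crude model: total ops/sec ~ neurons x connections x signaling rate.
    return neurons * connections_per_neuron * signals_per_sec

# Kurzweil-style low estimate: 10^11 neurons, ~10^3 connections, ~200 Hz.
low = brain_ops_per_sec(1e11, 1e3, 2e2)     # ~2e16 ops/sec

# Higher estimate with ~10^4 synapses per neuron and faster signaling.
high = brain_ops_per_sec(1e11, 1e4, 1e3)    # ~1e18 ops/sec

print(f"low:  {low:.0e} ops/sec")
print(f"high: {high:.0e} ops/sec")
print(f"disagreement: {high / low:.0f}x")   # ~50x
[/code]

On a steadily doubling hardware curve, a 50x disagreement in the target shifts the predicted crossover date by five or six doubling periods, so even a "small" underestimate of biology moves the timeline by years.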

#37 dangerousideas

  • Guest
  • 60 posts
  • 0
  • Location:Alberta, Canada

Posted 19 March 2006 - 05:10 AM

Our "gut" tells us it must be wrong - so we will rationalize relentlessly until we have convinced ourselves that it is.  It takes real courage not to rationalize away the insights that don't match our instincts.  So is it a higher level of gullibility, or a higher level of intellectual honesty?  I wonder...


Heh. I'd have to say, Dangerous, that I couldn't disagree more. My belief, which I would contend is supported by a large body of evidence, is that human beings have an innate tendency to "buy their own bs". Very rarely do human minds have the proper heuristics in place that would allow them to make an honest appraisal of the intellectual positions they espouse.


I agree entirely! And I do also agree that the body of evidence that human beings have an innate tendency to "buy their own BS" is indeed impressive. I am simply arguing that an important part of that BS is what their "gut" - their preconceived notions and instincts - is telling them. I am suggesting that it is the "strangeness" of the predicted result that leads to rationalized denial.

Dangerous...

#38 kevin

  • Member, Guardian
  • 2,779 posts
  • 822

Posted 19 March 2006 - 05:42 AM

John Schloendorn: Kevin, hmm, in this case the coordinating computing power seems to emerge from intrinsic properties of the nanoassemblers themselves, in a way much unlike any control circuits yet dreamt up by their final product... That does make it hard to estimate, but it's still imo another shortcoming of Kurzweil's predictions -- he seems to underestimate the power of the biological systems he wishes our machines to surpass. Compare e.g. the number of dendritic connections per neuron or the intra-neuronal computation speed given by Kurzweil with estimates from the primary literature.


I'm not sure I agree with you, John, that Kurzweil is underestimating the complexity of biology, but even if he is, I tend to think that not all the complexity is useful, especially when it comes to the phenomenon of 'intelligence'. My previous post was only to draw attention to the fact that the relationships between molecular entities can themselves provide the computational power necessary. We are talking about 'self-assembling' systems, after all; this 'intelligence' would have to be built into the physical entities themselves, and having a computer somehow centrally coordinate these relationships is unnecessary and ultimately less desirable in terms of flexibility. The modelling of evolutionary organisms in silico, projects like Virtual Cell, and even the new brain-column modelling being done by IBM will allow us to see how we might construct biological systems from scratch that are self-maintaining and evolving, hopefully more "intelligently designed" than what we are dealing with now. I'm sure there are more than a few *potential* flaws in Kurzweil's arguments and perhaps even some outright inconsistencies, but overall, as enoosphere says, if you accept even a little the direction that he is leading, there is really nowhere else to turn. The power of exponential growth is ultimately impossible to counter, even if he is off by a bit in his estimates.

This technological transcendence he proposes does not help the 100,000 people who died today, or will die tomorrow, however. If listening to such a message of technological transcendence induces a 'transhuman' to sit back and let the ongoing atrocity of suffering and death continue while they wait for their reward, I question what use they would have been without the message to begin with.

#39 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands

Posted 19 March 2006 - 11:00 AM

His dubious claims about medicine and biology and his celebrity-seeking do have some bearing on the validity of his argument, if only because he will throw discredit on the quest for radical life extension when his health finally crashes, well before he reaches 100.


He's not the only one representing life extension developments, which are very premature and as yet unproven; this means that a heuristic approach is inevitable for any individual who wants to practice them at this moment in time.

Far more worrying is that the supplement industry uses similar claims in its marketing, seducing the general public into using unproven technology while leaving them unaware that the background research is premature and unproven. This makes it very easy for governmental organisations to justify restrictive interventions against the application of LE technologies, which has a far more destructive effect.

#40 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands

Posted 19 March 2006 - 11:36 AM

As a novice in the LE and strong AI fields, I find the following two related questions very intriguing.

How can someone claim that we could reach immortality within a few decades from now? In my opinion we must understand the complexity of biological systems in general before we can ever dream of being able to influence them in a predictive manner. We should not only understand the technological issues at the molecular level, which we are trying to do now, but also the more abstract processes that are carried out by our systems of organs, up to the highest level of semantics, the mind-body interaction. We know almost nothing about that. As an example, we cannot even cure a "simple" patient who has rheumatoid arthritis. About the only treatment we have here is to suppress the patient's entire immune system in a very crude and barbaric way.

Something similar holds for our understanding of how to build "biological" systems from scratch, which could be based on existing biological structures combined with nanotechnology. I agree that a coordinating central computing instance or entity that controls all of our nano stuff is not needed, and is even unwanted, in such a highly distributed environment. But this still requires us to understand the complete pyramid of physical and logical processes that manifest themselves in such an environment. We need to understand all the interaction going on here at the different levels, regardless of the fact that we do not have to build a central high-level coordinating entity. We still need to know exactly what's going on.

#41 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 19 March 2006 - 05:28 PM

How can someone claim that we could reach immortality within a few decades from now?


Because we wouldn't be the same "we" that we are now.

Currently our brains are crappy, slow, full of errors and distractions, etc. The more we improve our intelligence, the easier solving incredibly huge problems becomes.

It takes real courage not to rationalize away the insights that don't match our instincts.

Am I the only one who can actually see the significance of this statement? Read Judgment Under Uncertainty by Kahneman and Tversky. Our instincts are crap, so if you tend to rationalize according to your instincts, you are going to be incredibly, incredibly wrong, a lot.

#42

  • Lurker
  • 0

Posted 19 March 2006 - 11:08 PM

If we're to plan ahead with the Technological Singularity in mind, then perhaps we should approach its prospect with self-honesty. Rather than reflexively disregarding criticism, it would seem in our best interests to actively seek it out and then determine whether our "educated bet" on the singularity has successfully (albeit tentatively) weathered such counter-arguments.

We can't treat the technological singularity like we do hard scientific theories because of the relatively large degree of indeterminacy involved, but to the extent that current debate can inform us, maybe it should be pursued.

It's been suggested that highly specialized scientists don't have the macroscopic view of technological progress required to properly criticize the technological singularity. They may not, but their collective insights are valuable in helping determine whether a Kurzweilian macroscopic interpreter of scientific and technological progress has erred in some portion of his or her predictions.

Edited by cosmos, 20 March 2006 - 02:48 AM.


#43 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 20 March 2006 - 12:08 AM

The main point Kurzweil is always making is that progress is exponential, not linear. When retro-futuristic devices (flying cars, robotic maids) don't come to pass, it isn't because exponential progress isn't occurring, but because, upon closer inspection, these devices prove to possess inferior cost/benefit ratios. Other avenues of research are pursued instead, and exponential progress persists.
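
A minimal illustration of the gap between the two mental models, with hypothetical numbers (a capability that doubles yearly versus a linear forecaster adding a fixed amount per year):

[code]
# Linear vs. exponential extrapolation of a capability that doubles yearly.
# Starting value and growth rates are hypothetical, purely to show the shape.

start = 1.0            # capability in arbitrary units at year 0
linear_gain = 1.0      # linear model: add one unit per year

for year in (5, 10, 20, 30):
    linear = start + linear_gain * year
    exponential = start * 2 ** year       # doubling every year
    print(f"year {year:2d}: linear {linear:4.0f}   exponential {exponential:>13,.0f}")

# By year 30 the linear forecast is low by about seven orders of magnitude,
# which is Kurzweil's explanation for why intuitive predictions undershoot.
[/code]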

Progress really will continue exponentially (unless there is some show-stopping disaster), even if you, the person reading this, don't do anything about it. Geniuses can have a large impact, accelerating a specific advance by as much as a few years or even (in the most isolated cases) a decade, but ultimately individuals don't matter that much. It's not a feel-good message, but it's what the evidence says. :(

Even though this can encourage an air of complacency, we shouldn't pooh-pooh an idea based on its laziness-encouraging factor alone. The truth or falsity of an idea is independent from that. Also, people that are looking for excuses to be lazy will always find them, and people that are self-motivated will always get work done, so a book by a futurist will never make that much of a difference one way or another.

I too believe that Kurzweil places irrational faith in his curves. But I believe that others don't give them enough credit. If you're a typical person thinking about the future, it's likely that your view is far too linear. But Kurzweil's is too biased in favor of his curves.

It used to be that young people were too enthusiastic about the future, and old people were too pessimistic. Now that we're nearing the knee of the curve, both old and young are too pessimistic! Many young people in the 70s would have called the global Internet poppycock. Young, enthusiastic people in the early 90s wouldn't have envisioned BitTorrent.

The Singularity concept is pretty well-defined, as long as you look at the literature outside Kurzweil. It's the creation of smarter-than-human intelligence. Superintelligence means that you can't predict the rate of progress. Superintelligence running on superfast substrates means that millions of years of progress could occur in a few hours. The sudden creation of self-improving superintelligence will be far more impressive than the incremental emergence of Homo sapiens civilization.

One of the biggest sources of skepticism about Kurzweil's claims, I think, is that people are just about as happy as they've always been, despite exponentially advancing technology. This is because the human brain is set up to experience a characteristic mix of happy and sad, pretty much independent of technological circumstances.

Until we change our brain, this mix shall remain.

#44 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands

Posted 20 March 2006 - 12:11 AM

It's been suggested that highly specialized scientists don't have the macroscopic view of technological progress required to properly criticize the technological singularity. They may not, but their collective insights are valuable in helping determine whether a Kurzweilian macroscopic interpreter of scientific and technological progress has erred in some portion of his or her predictions.


That's exactly the point I was trying to make. The pyramid that forms the entire space of levels of knowledge and perception between the macro and micro levels is largely unknown. We are gathering knowledge about specialized micro issues of a very specialised physical or molecular "low level" character.

I assume it’s very hard or even impossible to prove the scientific merit of Kurzweil’s visionary macro level approach. We simply don’t have the knowledge, the measured facts, to link all or most of his visions to reality. But maybe he’s right. I surely hope there’s a lot of truth in it.

But the way we treat some health issues right now, which reflects our current state of knowledge, does not inspire a lot of optimism. Then again, that might be the reason to put sufficient strength into his predictions, in the form of some kind of faith, to keep them from being blown away by "pessimistic" individuals like me. I think his way of promoting his vision is the only way to get noticed and possibly have some influence.

#45 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei

Posted 20 March 2006 - 12:51 AM

"You" are a great example of a bunch of nano-assemblers coordinating their actions to exert a sophisticated level of control over the physical world, although I'm not too sure at how much computational power is driving the process...


I would say that there is a great deal of computational power present, but you do bring up a few interesting ideas Kevin.

My previous post was only to draw attention to the fact that the relationships between molecular entities can themselves provide the computational power necessary. We are talking about 'self-assembling' systems, after all; this 'intelligence' would have to be built into the physical entities themselves, and having a computer somehow centrally coordinate these relationships is unnecessary and ultimately less desirable in terms of flexibility.


I am not sure how you arrived at the conclusions you did about my original statement. In effect, what I said is, "nanobots will be advanced AI systems." Obviously, most control over their interaction with each other and the external world would be maintained at the unit level for, as you said, reasons of coordination and flexibility.

You state that "the relationships between molecular entities can themselves provide the computational power necessary", but this is too general a statement. Which molecular entities? Heck, I'm a molecular entity! [lol] Do you mean structures consisting of nanobots, nanobots themselves, structures within nanobots...? This strikes me as in many ways analogous to the current debate within neurophilosophy over the difference (or lack thereof) between form (structure) and function.

If one is to be reductionist about it, computation itself is nothing other than complex physical patterns. The actual qualitative nature of such physical patterns is completely open to development and technological evolution. The computers of tomorrow will certainly not be the computers of today, whether we are talking about quantum dot physical manipulation, parallel 3D computing, etc. Different tracks of technological evolution are apparently converging on one another, and in the future this will blur the lines between computation and nano-scale manipulation.

But my original point, mostly unstated, was simply this: nanobots would, by their very nature, exhibit a level of intelligence and control over their physical environment that far exceeds the capabilities of today's biological cells. Whether an executive level of control, represented by a hierarchical structuring, will be necessary for such novelties as *utility fog* or *polymorphic capabilities* is hard to say. In the long run I doubt it will be. If one accepts a functionalist position in philosophy of mind, then the physical substrate is only important in that it allows for distributive mental processes to take place on it (and also, some would argue, in how it affects these processes, and thus is part of the overall process). One of the big differences between the physical structure supporting consciousness now and in the future is that the nanobots of the future would serve a dual purpose, performing as both the mental and the mechanical architecture of an entity.

#46 kevin

  • Member, Guardian
  • 2,779 posts
  • 822

Posted 20 March 2006 - 02:24 AM

Thanks for clarifying, Don, and for your characteristic good-natured ability to take a poke.. :) couldn't help myself

#47 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 20 March 2006 - 03:44 AM

Superintelligence running on superfast substrates means that millions of years of progress could occur in a few hours.

Not sure -- is intelligence all it takes? In order to effect technological progress, I would suppose a superintelligence would still need scientific knowledge. And knowledge comes only from doing at least some non-simulated real-world experiments, which can be planned only upon the results of previous experiments. But experiments take a constant minimal time, simply because the physical processes they are supposed to test take time. So I don't see how technological progress can become indefinitely fast, even when someone is smarter than us. The question is, of course, how fast progress can become before running into this sort of limit, and I would submit that there is only one way to find out.
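
John's objection has the same shape as Amdahl's law: if some fraction of each research cycle is physical experimentation that cannot be accelerated, that fraction caps the overall speedup no matter how fast the thinking becomes. A toy sketch, with made-up fractions:

[code]
# Amdahl-style bound on research speedup when experiments stay slow.

def overall_speedup(experiment_fraction, thinking_speedup):
    # The share of wall-clock time spent on physical experiments is fixed;
    # only the remaining "thinking" portion accelerates.
    return 1.0 / (experiment_fraction + (1.0 - experiment_fraction) / thinking_speedup)

for frac in (0.5, 0.1, 0.01):   # assumed experiment shares of a research cycle
    # Even with effectively infinite thinking speed, the cap is 1/frac.
    print(f"experiments {frac:4.0%} of cycle -> max speedup ~{overall_speedup(frac, 1e12):.0f}x")
[/code]

Brian's reply a few posts down is, in effect, an argument that the experiment fraction itself shrinks as more of science moves into simulation.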

#48 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei

Posted 20 March 2006 - 03:53 AM

In order to effect technological progress I would suppose a superintelligence would still need scientific knowledge. And knowledge comes only from doing at least some non-simulated real-world experiments, which can be planned only upon the results of previous experiments. But experiments take a constant minimal time, simply because the physical processes they are supposed to test take time.


That's an excellent point John, and one that just added to my level of understanding.

Thanks!
Don

#49 bgwowk

  • Guest
  • 1,715 posts
  • 125

Posted 20 March 2006 - 04:22 AM

Michael Anissimov wrote:

The Singularity concept is pretty well-defined, as long as you look at the literature outside Kurzweil. It's the creation of smarter-than-human intelligence. Superintelligence means that you can't predict the rate of progress. Superintelligence running on superfast substrates means that millions of years of progress could occur in a few hours.

But what kind of progress? Mathematical theory? If we had a superintelligence with unlimited computing capacity RIGHT NOW, the belief that we could just let it read the Internet and have major world problems solved "in a few hours" is indefensible. Solving practical problems faced by *humans* is what most people mean by progress.

If the "singularity" means the sudden emergence of a supercivilization in information space, that would be a stupendous historical event. Its significance to human affairs would compare to the invention of writing, the printing press, and the Internet. But on the human level, it will just be another information source, albeit an unprecedently powerful one. Solution of *human* problems outside information space will still take years, and in some cases generations. A case in point is the solution to problems that most humans don't even see as problems (e.g. biological aging).

---BrianW

#50 Brian

  • Guest
  • 29 posts
  • 1
  • Location:Earth

Posted 20 March 2006 - 04:31 AM

Not sure -- is intelligence all it takes? In order to effect technological progress, I would suppose a superintelligence would still need scientific knowledge. And knowledge comes only from doing at least some non-simulated real-world experiments, which can be planned only upon the results of previous experiments. But experiments take a constant minimal time, simply because the physical processes they are supposed to test take time. So I don't see how technological progress can become indefinitely fast, even when someone is smarter than us. The question is, of course, how fast progress can become before running into this sort of limit, and I would submit that there is only one way to find out.


John, perhaps you saw the recent news article in the past week about how some researchers have now simulated an entire virus, atom by atom, from fundamental calculations, for a short period of simulated time.

This is just one example of more and more experimentation moving purely into the non-physical realm. Eventually, with enough computing power, software tools, and starting/fundamental knowledge, almost all experiments will move completely into the non-physical zone. This will speed up research dramatically (indeed, it already is) while also reducing costs. By the way, this is touched on in Kurzweil's book. The only areas that will still require physical experimentation will be research into unknown fundamentals, but even here, in many areas, the physical research is being dramatically sped up by robotics/automation techniques, and this trend will also continue.
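
For scale, the virus result Brian mentions is presumably the satellite tobacco mosaic virus simulation published around this time (roughly a million atoms for tens of nanoseconds of simulated time); the numbers below are order-of-magnitude assumptions in that spirit, meant only to show why "a short period of simulated time" is the operative phrase:

[code]
# Rough cost of atom-by-atom (molecular dynamics) simulation.
# All numbers are order-of-magnitude assumptions, not the published figures.

atoms = 1e6              # a small virus plus surrounding water
timestep_s = 1e-15       # femtosecond integration step, typical for MD
simulated_s = 50e-9      # ~50 nanoseconds of simulated time
ops_per_atom_step = 1e3  # force evaluation, neighbor search, integration

steps = simulated_s / timestep_s               # 5e7 timesteps
total_ops = steps * atoms * ops_per_atom_step  # ~5e16 operations
print(f"{steps:.0e} steps, ~{total_ops:.0e} operations")

# Simulating one real *second* of the same system needs ~2e7 times more work,
# which is why this argument leans so heavily on continued hardware growth.
[/code]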

#51

  • Lurker
  • 0

Posted 20 March 2006 - 05:29 AM

Fundamental physical research probably cannot be simulated because it's the basis upon which simulations are created, in many cases. "Real world" improvements in theory-laden apparatus sensitivity lead to more precise observations of phenomena, which lead to more opportunities for currently accepted theories to face risky tests and, if found inadequate, to be supplanted by more observationally consistent theories.

edit: replaced "accurate" with "precise"

Edited by cosmos, 20 March 2006 - 06:08 AM.


#52 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei

Posted 20 March 2006 - 06:00 AM

Fundamental physical research probably cannot be simulated because it's the basis upon which simulations are created


Indeed, this is a crucial insight. It doesn't mean that Brian's line of reasoning isn't valid, but it does mean that there are constraints on what simulation can provide us with. Of course, increased automation will allow us to speed up even real-world exploration of design space, but arguing for a billion-fold increase (1,000,000x365x4ish) in the pace of technological progress doesn't seem justified in light of this... [airquote] ugly fact [/airquote].
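
DJS's parenthetical arithmetic, unpacked (reading "millions of years" as one million and "a few hours" as about six; both readings are mine, and the "x4ish" factor works out to 24/6):

[code]
# The speedup implied by "a million years of progress in a few hours".
years_of_progress = 1_000_000
hours_of_progress = years_of_progress * 365 * 24  # ~8.8e9 hours of work
wall_clock_hours = 6                              # "a few hours", assumed
print(f"implied speedup: ~{hours_of_progress / wall_clock_hours:.1e}x")  # ~1.5e9
[/code]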

#53 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands

Posted 20 March 2006 - 06:48 AM

But my original point, mostly unstated, was simply this: nanobots would, by their very nature, exhibit a level of intelligence and control over their physical environment that far exceeds the capabilities of today's biological cells. Whether an executive level of control, represented by a hierarchical structuring, will be necessary for such novelties as *utility fog* or *polymorphic capabilities* is hard to say.

Stem cells developing into functional cells are not truly polymorphic, but it seems to come close? This "polymorphic" development takes a lot of time and is a "one off" occasion, therefore not real polymorphism, I assume. But a more important question for me would be this. You seem to suggest that only nanobots with sufficient coordination and communication skills of their own could eliminate the need for a hierarchical controlling entity. The only advantage of not needing such a "super bot" would be that it does not have to be created. In either case we need to have and develop all the knowledge involved in all the interactive processes; in either case we need to understand all possible processes. This could even be more difficult in a truly distributed environment without a controlling hierarchy. And the irony seems to be (to me at least) that insight into and knowledge about these processes in biology is already missing, so what can we possibly do with nanobots in this regard on short notice?

One of the big differences between the physical structure supporting consciousness now and in the future is that the nanobots of the future would serve a dual purpose, performing as both the mental and the mechanical architecture of an entity.

What do you think the (mechanical) aggregation level of a nanobot would be, compared to a cell?

#54 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 20 March 2006 - 09:17 AM

Brian Wowk,

Why can't a superintelligence conduct many experiments in a nanosecond that would take human scientists an hour? Chemistry and physics phenomena occur on atomic distances and timescales. Reductionism allows us to answer biological and social science questions with sufficient chemical and physical knowledge, along with a few easily-observable heuristics.

Nanotech computers will be much faster than today's best, with computing power many orders of magnitude greater than that of human neurons on a kilogram-by-kilogram basis. Incredibly complex systems will be simulated rapidly from basic principles. If I'm an AI that runs on hardware equivalent to billions of human brains, then (if I wanted to) I could simulate the creativity and brainpower of billions of human geniuses to solve a problem quickly. (Or better yet, simulate transhuman intelligences, which might be superior to human geniuses in the way that humans are superior to bugs.)

The "singularity" means your ability to predict what the entity cannot or cannot do disappears - to say what a superintelligence can or cannot do is just silly, because how on Earth could you predict what a superintelligence can achieve? The whole idea is that you're crossing an intelligence gap larger than the gap between H. sapiens and other homonid species, if not massively larger... why are you talking as if H. sapiens is the final word and that genius H. sapienses are about at the ceiling of problem-solving ability?

The emergence of AI is not really "just another information source" if an AI is programmed to use nanotechnology to fabricate billions of cyborgs that look exactly like humans, but with AI brains... this would be possible quite quickly if AIs gained access to molecular manufacturing. I'm not saying that an AI would do this, but that it would theoretically have the ability - so why are you still thinking of AI as some abstract thing stuck on early-2000s-era Internet, without any robotics whatsoever?

Human problems are in the information space - are you thinking of human beings as some sort of special outside system, impossible to simulate with sufficient computing power? And why would superintelligences not be radically better at solving human problems than humans are? An entirely reprogrammable brain would be able to experiment with a number of different cognitive processing methods in order to find the one best suited to deriving solutions to human problems.

#55 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 20 March 2006 - 05:32 PM

The "singularity" means your ability to predict what the entity can or cannot do disappears

Right, and therefore there are no grounds to make concrete, quantitative predictions like "millions of years of progress could occur in a few hours". The singularity is a very exciting and promising avenue of research, but I believe both in terms of scientific respectability and raising public interest, quantitative predictions would better be left to the imagination of the audience.
[Edit: fixed your "cannot" issues in the quote]

#56 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 20 March 2006 - 05:40 PM

Right, and therefore there are no grounds to make concrete, quantitative predictions like "millions of years of progress could occur in a few hours"


Like Anissimov said, when you introduce nanotechnology and nanocomputing, any computation or physical operation can be carried out many, many orders of magnitude more quickly, so even from our measly human perspective, we know AT LEAST that much is possible.

#57 bgwowk

  • Guest
  • 1,715 posts
  • 125

Posted 20 March 2006 - 08:33 PM

Michael Anissimov wrote:

The emergence of AI is not really "just another information source" if an AI is programmed to use nanotechnology to fabricate billions of cyborgs that look exactly like humans, but with AI brains... this would be possible quite quickly if AIs gained access to molecular manufacturing. I'm not saying that an AI would do this, but that it would theoretically have the ability - so why are you still thinking of AI as some abstract thing stuck on early-2000s-era Internet, without any robotics whatsoever?

Because we are still stuck on an early-2000s-era Internet with an economy based on bulk manufacturing, and yet people are saying "The Singularity" could happen tomorrow! Claims that the world is going to become unrecognizable in a few decades are divorced from any sense of how things happen in the real world. Human factors aside, they are divorced from considerations of physical technology, resource availability, energy availability, transportation, and ultimately physics itself. The only "million years of progress in a few hours" that occurs in this century will be in the form of self-amusement of AI beings, not solutions of human problems. It will take generations for humans to even decide what they really want from technologies that approach the limits of physical law.

I'm sorry, Michael, but I've been hearing this stuff -- this same AI-nanotech-remakes-the-world-overnight stuff -- for decades, and it's getting really tiresome. Even if it were true, as a purely political issue, Singularity proponents should be more sensitive to how crazy their prognostications sound to people who've experienced how hard it is to make actual physical products and services. The whole thing smacks of religious millennialism to an incredible degree, and that comes from someone who himself has been practically labeled a millennialist for sympathizing with cryonics and molecular nanotech. Yes, many great and wonderful things are coming, things beyond which the world may be inscrutable, and we have to prepare for them (e.g. Foresight and Singinst), but this will not happen overnight.

---BrianW

Edited by bgwowk, 20 March 2006 - 10:21 PM.


#58 Mind

  • Life Member, Director, Moderator, Treasurer
  • 18,997 posts
  • 2,000
  • Location:Wausau, WI

Posted 20 March 2006 - 10:25 PM

John: The singularity is a very exciting and promising avenue of research, but I believe both in terms of scientific respectability and raising public interest, quantitative predictions would better be left to the imagination of the audience.


I would say Kurzweil has been doing a lot of imagining. He has been [airquote] plugged in [/airquote] to progress and technological development for a few decades and has been successful. Lately he has been trying to put some scientific/logical heft behind what he has been imagining (or reasoning inductively). Maybe he has not been too successful in that endeavor. Still, I think it is a benefit to have him talking about the speed of progress. People have to accept the idea of, and adapt to, rapid change. Too often, people who are afraid of change react violently when it comes knocking on their door.

#59 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands

Posted 20 March 2006 - 10:26 PM

I have to agree with bgwowk, although I would not place myself in the realm of singularity opponents. On the other hand, I have a lot of experience in the Information Technology business, and it's indeed almost hilarious how much effort we sometimes need to get the simplest concepts operational. To add to that, I'm not stupid, nor are the other members of the team I work in. And after the technology is made practical, there will be political, social, communicational, individual, commercial or what-ever-ial issues standing in the path to success.

I understand that at the dawn of the singularity our human limitations will fade away, but for this dawn to start, our limitations could very well be .... eeuuhm, well, limiting. :)


#60 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 20 March 2006 - 10:44 PM

whole thing smacks of religious millennialism to an incredible degree, and that comes from someone who himself has been practically labeled a millennialist for sympathizing with cryonics and molecular nanotech.

It seems like you didn't really learn the lesson, then, did you?

this same AI-nanotech-remakes-the-world-overnight stuff -- for decades

Really? Well, I haven't been alive for >1 decade (unless you count womb-time). It seems implausible that anybody would be discussing these issues >=2 decades ago. Of course I wasn't there, so I could be wrong. Vinge didn't even coin the term "singularity" until the 1990s.

Claims that the world is going to become unrecognizable in a few decades are divorced from any sense of how things happen in the real world

Did you even read my prior post about how things can happen many, many orders of magnitude faster on the nano-scale? Have you taken into account the fact that "difficult" approaches "trivial" as intelligence approaches infinity?

It will take generations for humans to even decide what they really want from technologies that approach the limits of physical law.

Unless you design a mechanism that can reliably extrapolate the volition of humans and use it to define a Friendly AI. Even if you don't do that, there are millions of obvious things we want out of technology that we don't have.

Even if it were true, as a purely political issue, Singularity proponents should be more sensitive to how crazy their prognostications sound to people who've experienced how hard it is to make actual physical products and services.

Yeah, that's an issue. But just because making small things happen in the real world is hard for a human does not imply that these things will be hard for a recursively self-improving AI. People will either reject the idea because it "sounds ridiculous", or they will investigate it and eventually realize that it is, at least, plausible.

The problem isn't that Singularitarians don't understand the difficulties that must be overcome in the real world; the problem is that everyone else doesn't understand *how much* easier these difficult things would be with dramatically enhanced intelligence, and how incredibly simple (although not easy with our measly human minds) it is to create an AGI. The Singularity CAN happen within the next few years, and the Singularity CAN dramatically alter the world faster than we intuitively expect.



