  LongeCity
              Advocacy & Research for Unlimited Lifespans





"The Singularity Myth"


110 replies to this topic

#61 JonesGuy

  • Guest
  • 1,183 posts
  • 8

Posted 20 March 2006 - 10:59 PM

Okay, side question.

How does Kurzweil expect to benefit from spreading his futurist ideas? He gets money by selling to a stable market. How else, though? What social change is he trying to engender to make his life better?

#62 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands
  • NO

Posted 20 March 2006 - 11:08 PM

Really? Well, I haven't been alive for >1 decade (unless you count womb-time). It seems implausible that anybody would be discussing these issues >=2 decades ago. Of course I wasn't there, so I could be wrong. Vinge didn't even coin the term "singularity" until the 1990s.


AI concepts, environments and implementations have been around since the eighties of the previous century. They never lived up to their promise. That could very well change, of course.
But it's always good to [airquote] fly with both feet on the ground [/airquote]. :)


#63 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 21 March 2006 - 01:33 AM

Stem cells developing into functional cells is not truly polymorphic, but it seems to come close? This "polymorphic" development takes a lot of time and is a "one-off" occasion, therefore not real polymorphism, I assume.


While I think that nanobots possessing polymorphic capabilities are a strong possibility, this was not the point I was trying to make. A polymorphic entity or an environment constructed of utility fog both represent potential applications of nanobot technology. As such, I was referring to collective or *emergent* properties of nanobot systems.

But a more important question for me would be this. You seem to suggest that only by creating nanobots that have sufficient coordination and communication skills themselves could we eliminate the need for a hierarchical controlling entity.


Perhaps we should better define what we mean when we say "hierarchical structuring". In terms of cognition we now know that there is nothing of the sort. Functionalism has thoroughly refuted "Cartesian dualism" (as well as Cartesian materialism for that matter). There is no "one place" where consciousness comes together. Cognition, which currently operates off of a biological substrate, is a distributive process. Likewise, the cognition of the future, which would take place on some type of synthetic substrate, would also be a distributive process.

So, within a nanobot swarm where would consciousness reside? If it is distributive, the answer would be *everywhere*. There would no longer be a distinction between musculature, CNS, etc. As I said in my previous post, the mental and the mechanistic would merge.

You asked how much more coordination and control a nanobot would have in comparison to a biological cell. Let me ask you, Brainbox: can you, by conscious thought, change your skin color? Can you morph your retinas to give yourself hawk vision? Can you increase your body's density, thereby decreasing its aggregate size? Of course, these are rhetorical questions, but I think they illustrate nicely the point I was trying to make. In our current form we are extremely limited morphologically. As polymorphs, we would possess a degree of morphological freedom unprecedented in the biological world.

Edited by DonSpanton, 21 March 2006 - 01:54 AM.


#64 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 21 March 2006 - 02:39 AM

AI concepts, environments and implementations have been around since the eighties of the previous century.

I know, I'm definitely aware of that. It's fortunate for us, because now that we have attacked the problem from so many different angles we are going through stages of generalization in theory completely unlike those highly specialized AI systems of the past. The mathematical foundations of the year 2006 are ripe for AGI development. Take for example the work of Hutter, Burgin, Pearl, and others.
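For readers wondering what these "mathematical foundations" look like, one concrete example is Hutter's AIXI model. The equation below is a sketch of its standard published formulation (my paraphrase, not something quoted in this thread): at step k the agent picks the action that maximizes expected future reward up to horizon m, weighting every program q that reproduces the interaction history on a universal Turing machine U by 2 raised to minus the program's length.

[code]
% Sketch of Hutter's AIXI action selection (standard formulation, paraphrased).
% a = actions, o = observations, r = rewards, m = horizon,
% U = universal Turing machine, \ell(q) = length of program q.
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl[ r_k + \cdots + r_m \bigr]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
[/code]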

#65 bgwowk

  • Guest
  • 1,715 posts
  • 125

Posted 21 March 2006 - 02:52 AM

hankconn wrote:

Really? Well, I haven't been alive for >1 decade (unless you count womb-time). It seems implausible that anybody would be discussing these issues >=2 decades ago. Of course I wasn't there, so I could be wrong. Vinge didn't even coin the term "singularity" until the 1990s.

In the 1980s, the Singularity was called the Assembler Breakthrough (look it up). It was all supposed to be a done deal by now. Remember that AI goes back to the 1960s, and the idea of self-replicating machines to von Neumann himself. In fact, now that I think of it, the general idea of AI and associated machinery making worlds unrecognizable goes back at least 50 years to the sci-fi movie Forbidden Planet.

http://www.grg.org/charter/Krell2.htm

More generally, the idea that technology can suddenly and quickly solve all human problems dates back to the early 19th century, when some people predicted the steam engine was going to do it. No lie.

The Singularity CAN happen within the next few years...

This is the kind of craziness that gives Singularitarianism a bad name.

---BrianW

#66 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands
  • NO

Posted 21 March 2006 - 05:30 AM

I know, I'm definitely aware of that. It's fortunate for us, because now that we have attacked the problem from so many different angles we are going through stages of generalization in theory completely unlike those highly specialized AI systems of the past. The mathematical foundations of the year 2006 are ripe for AGI development. Take for example the work of Hutter, Burgin, Pearl, and others.


Thanks for mentioning these names. To be honest, I have some reading up to do to put my comments into perspective, since I hate always being on the negative flip-side. I have also never worked with AI professionally.

But the practical approach should be mentioned as well.

Hopefully the more generic AI concepts will be more successful.

But in current development methodologies, verification and validation of the algorithms you dream up is very important, to name just one example. This reviewing and testing could partly be done on a generic level, but to be validated correctly a system also needs to be fed with real-life problems. For example, a generic solution for use within a control system for a nuclear power plant will have to be tested thoroughly. And, believe me, before I allow some form of AI to be connected to my brain, I will ask for even more proof. This means that an analysis should be made of possible (worst-case) scenarios that could occur. These scenarios should be fed into such a system to observe its behaviour before it is actually deployed. This is very time consuming. So an efficiently developed and implemented generic algorithm still needs to be reviewed and tested against a lot of low-level scenarios.
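A minimal sketch of the kind of scenario-based validation described above, assuming a hypothetical controller object with a respond() method and a hand-built list of worst-case scenarios; the names, fields and values are illustrative only and do not refer to any particular AI framework:

[code]
# Minimal scenario-based validation harness (illustrative sketch only).
# "Controller" and its respond() method are hypothetical stand-ins for
# whatever generic AI component is being validated.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    sensor_readings: dict       # e.g. {"core_temp_c": 650, "coolant_flow": 0.0}
    acceptable_actions: set     # actions judged safe for this scenario

def validate(controller, scenarios):
    """Feed each (worst-case) scenario to the controller and record every
    case where its chosen action falls outside the pre-analysed safe set."""
    failures = []
    for sc in scenarios:
        action = controller.respond(sc.sensor_readings)
        if action not in sc.acceptable_actions:
            failures.append((sc.name, action))
    return failures

# Usage: build the scenario list from a worst-case analysis, then require
# zero failures before the controller goes anywhere near real hardware.
#   scenarios = [Scenario("loss of coolant", {...}, {"scram"}), ...]
#   assert validate(my_controller, scenarios) == []
[/code]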

Bummer. Here we are, still at the low, time-consuming level of perception.

To look at it from the positive side: to bypass this low level of perception partly or entirely, a whole new set of methodologies needs to be developed before these new generic AI implementations can become practically useful. We need a lot of additional organisational structure and infrastructure. This takes a lot of time as well.

Is anyone aware of any thoughts in this area? I would like to know.

The faith of all the singularity proponents alone will not prove this pudding.

Edited by brainbox, 21 March 2006 - 05:49 AM.


#67 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands
  • NO

Posted 23 March 2006 - 03:56 AM

"Kick"

I happen to like this thread quite a bit. A small kick in an attempt to attract some attention.....

:)

#68 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 23 March 2006 - 03:59 AM

*Bump* would be the proper terminology. :))

#69 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands
  • NO

Posted 23 March 2006 - 04:05 AM

Hmmm, bump would be a downwards movement, whereas I associate a kick with a more positive upwards intention.....
But I will adapt, as always...

:)

#70 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 23 March 2006 - 04:14 AM

*Unsolicited event* <=

(and all other posts by this post's author)

[g:)]

Edited by Nate Barna, 23 March 2006 - 11:21 AM.


#71

  • Lurker
  • 0

Posted 26 March 2006 - 11:26 AM

Coincidentally, soon after the discussion on pg. 3 regarding the future of scientific research I came across this Futurepundit blog entry.
Computers To Start Formulating And Testing Hypotheses?

Citing the following Economist article most prominently.
Computing the future

#72 advancedatheist

  • Guest
  • 1,419 posts
  • 11
  • Location:Mayer, Arizona

Posted 26 March 2006 - 02:57 PM

Okay, side question.

How does Kurzweil expect to benefit from spreading his futurist ideas?  He gets money by selling to a stable market.  How else, though?  What social change is he trying to engender to make his life better?


For one thing, Kurzweil expresses loneliness in TSIN, and he wants to find new friends. Refer to pp. 370-71:

Being a Singularitarian has often been an alienating and lonely experience for me because most people I encounter do not share my outlook. Most "big thinkers" are totally unaware of this big thought. In a myriad of statements and comments people typically evidence the common wisdom that human life is short, that our physical and intellectual reach is limited, and that nothing fundamental will change in our lifetimes. I expect this narrow view to change as the implications of accelerating change become increasingly apparent, but having more people with whom to share my outlook is a major reason that I wrote this book.



#73 emerson

  • Guest
  • 332 posts
  • 0
  • Location:Lansing, MI, USA

Posted 26 March 2006 - 04:14 PM

some people predicted the steam engine was going to do it.  No lie.


Funny, but the more I think about it, the more I think they might have been right. Not in the specifics, but the rapid transportation of goods is one of the main things that allowed our current level of prosperity. To someone eking out an existence in the Dust Bowl, people in the modern Western middle class really have had all their problems solved.

#74 advancedatheist

  • Guest
  • 1,419 posts
  • 11
  • Location:Mayer, Arizona

Posted 26 March 2006 - 04:16 PM

More generally, the idea that technology can suddenly and quickly solve all human problems dates back to the early 19th century, when some people predicted the steam engine was going to do it.  No lie.


For example, despite all the propaganda about "the New Economy" allegedly made possible by IT, after investing well over $1 trillion into computers and digital communications, American businesses still see the "productivity paradox." We hear about all these enormous increases in the "productivity" of the service part of the economy, including retail, but I still have to wait in line at Wal-Mart, even when I use the automated checkout stands.

#75 psudoname

  • Guest
  • 116 posts
  • 0

Posted 26 March 2006 - 06:00 PM

MichaelAnissimov

The main point Kurzweil is always making is that progress is exponential, not linear.  When retro-futuristic devices (flying cars, robotic maids) don't come to pass, it isn't because exponential progress isn't occurring, but because upon closer inspection, these devices are proven to possess inferior cost/benefit ratios.  Other avenues of research are pursued instead, and exponential progress persists.

Progress really will continue exponentially (unless there is some show-stopping disaster), even if you, the person reading this, don't do anything about it.  Geniuses can have a large impact, accelerating a specific advance by as much as a few years or even (in the most isolated cases) a decade, but ultimately individuals don't matter that much.  It's not a feel-good message, but it's what the evidence says.  :(

...

I too believe that Kurzweil places irrational faith in his curves.  But I believe that others don't give them enough credit.  If you're a typical person thinking about the future, it's likely that your view is far too linear.  But Kurzweil's is too biased in favor of his curves.


I agree, Kurzweil seems to think the whole of human progress can be described as e^kt, which is simplistic to say the least.
Seems to me that Kurzweil knows less about mathematical modeling than my lecturer on it, who at least knows about logarithmic curves. I'm sure better models of progress could be made, but they could still go very wrong (a nuclear war or theocratic government could put an end to exponential growth).

The Singularity concept is pretty well-defined, as long as you look at the literature outside Kurzweil.  It's the creation of smarter-than-human intelligence.  Superintelligence means that you can't predict the rate of progress.  Superintelligence running on superfast substrates means that millions of years of progress could occur in a few hours.  The sudden creation of self-improving superintelligence will be far more impressive than the incremental emergence of Homo sapiens civilization.


Kurzweil seems to define the singularity as the point when $1000 of computer is a billion times as intelligent as the human race is now. This will happen in 2045, according to Kurzweil.

This is

a) completely arbitrary

b) given to far too high a degree of precision

c) ignoring the effect that superintelligence will have on the rate of progress

I don't think Kurzweil understands the singularity. The idea of the singularity is not about exponential progress, it's about almost asymptotic progress.
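To make the exponential-versus-asymptotic distinction concrete, here is a small illustration (my own sketch with made-up constants, not anything from Kurzweil or the posters): e^kt grows fast but stays finite for every finite t, whereas hyperbolic growth 1/(t_c - t) actually diverges at a finite time t_c, which is closer to the "almost asymptotic progress" being described.

[code]
# Exponential vs. hyperbolic ("finite-time singularity") growth.
# k and t_c are arbitrary values chosen purely for illustration.
import math

k = 0.3      # exponential growth rate (arbitrary)
t_c = 10.0   # time at which the hyperbolic curve blows up (arbitrary)

def exponential(t):
    return math.exp(k * t)      # finite for every finite t

def hyperbolic(t):
    return 1.0 / (t_c - t)      # diverges as t approaches t_c

for t in [0, 5, 9, 9.9, 9.99, 9.999]:
    print(f"t={t:6}:  exp={exponential(t):10.2f}   hyp={hyperbolic(t):10.2f}")

# The exponential keeps a constant doubling time; the hyperbolic curve's
# doubling time shrinks toward zero as t_c is approached, which is the
# "asymptotic" behaviour the post is pointing at.
[/code]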

Until we change our brain, this mix shall remain.



Well, we can already change our brain chemistry with drugs. Best to wait till we have nanotech, though.

#76 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 26 March 2006 - 07:35 PM

Being a Singularitarian has often been an alienating and lonely experience for me because most people I encounter do not share my outlook. Most "big thinkers" are totally unaware of this big thought. In a myriad of statements and comments people typically evidence the common wisdom that human life is short, that our physical and intellectual reach is limited, and that nothing fundamental will change in our lifetimes. I expect this narrow view to change as the implications of accelerating change become increasingly apparent, but having more people with whom to share my outlook is a major reason that I wrote this book.


Well that's certainly reasonable, considering that's exactly how I would describe my situation, except my book will be fiction, and it's not here yet.

#77 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 28 March 2006 - 12:37 PM

The faith of all the singularity proponents alone will not prove this pudding.


You're talking about whether or not a particular AI will behave as its design specifies. When Singularity proponents say that it's likely that future AIs will behave predictably, it's not really due to faith, but because they assume that our ability to check and verify code will increase along with our ability to create it.

Why must the word "faith" always be invoked whenever anyone is a proponent of some powerful technology? To our ancestors, the power of electricity might seem magical and godlike. This doesn't mean that Edison and other inventors had "faith" in electricity like someone has faith in a religion.

This is the kind of craziness that gives Singularitarianism a bad name.


The Singularity could happen in a few years, or even tomorrow. It only takes one smarter-than-human intelligence. As soon as a greater intelligence is on the scene, human predictions about what it can or cannot accomplish are automatically disqualified.

We have absolutely no idea how difficult it is to build a self-improving AI or augment a human brain. All we know is that we haven't done it yet. "No idea" doesn't mean that it will happen in the distant future. It means we truly don't know. It could happen in the distant future, it could happen tomorrow.

The idea of the singularity is not about exponential progress, it's about almost asymptotic progress.


Asymptotic change, yes, asymptotic progress, not necessarily...

#78 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 28 March 2006 - 07:12 PM

it could happen tomorrow

Hmm, so you obviously don't think one has to start with an artificial construct of a complexity rivalling the brain, do you...? Do you think you can just take a piece of fairly simple code on one of today's computing machines, teach it what "self-improve" means, tell it to do it, give it an interface to some real-world resources and let it figure the rest, is that the idea?

#79 JonesGuy

  • Guest
  • 1,183 posts
  • 8

Posted 28 March 2006 - 07:12 PM

Isn't the internet something that rivals the human brain as far as computing power is concerned? Or are we not there yet?

#80 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 28 March 2006 - 07:14 PM

Nobody even knows the computing power of the brain; all I do know is that Kurzweil vastly and possibly deliberately underestimates it (as argued way above).

#81 bgwowk

  • Guest
  • 1,715 posts
  • 125

Posted 28 March 2006 - 07:59 PM

The computing power (storage capacity, processing speed, etc.) of a human brain can be bounded within one or two orders of magnitude by basic physical constraints. It's pretty clear that Moore's law will cause personal computers to pass it in a small number of decades. But that does not automatically mean computers will become smarter than people. The algorithms to emulate a human mind don't yet exist.
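As a back-of-the-envelope illustration of the "small number of decades" claim, here is a rough calculation using assumed figures (roughly 10^16 operations per second as one commonly cited brain-equivalent estimate, roughly 10^11 ops/s per $1000 of hardware around 2006, and a two-year price-performance doubling time; none of these numbers come from the post itself):

[code]
# Rough Moore's-law crossover estimate. Every input below is an assumption
# chosen for illustration, not a figure taken from the thread.
import math

ops_per_1000usd_2006 = 1e11   # assumed ops/sec per $1000 of hardware in 2006
brain_estimate_ops   = 1e16   # one commonly cited brain-equivalent estimate
doubling_years       = 2.0    # assumed price-performance doubling time

doublings_needed = math.log2(brain_estimate_ops / ops_per_1000usd_2006)
years_needed = doublings_needed * doubling_years
print(f"~{doublings_needed:.1f} doublings, roughly {years_needed:.0f} years "
      f"(around {2006 + round(years_needed)})")

# Shifting either estimate by a full order of magnitude moves the answer by
# only ~7 years, which is why bounding the brain "within one or two orders
# of magnitude" still yields a crossover within a few decades.
[/code]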

Believing that human intelligence will never be met or exceeded artificially is pretty indefensible. Conversely, believing in a hard timeline for AGI, or that "there will be a million years of progress in a few hours" anytime soon is equally indefensible.

---BrianW

#82 JonesGuy

  • Guest
  • 1,183 posts
  • 8

Posted 28 March 2006 - 08:19 PM

Believing that human intelligence will never be met or exceeded artificially is pretty indefensible.


I worry that we're not smart enough to discover how to make ourselves smarter. Have you ever interacted with someone with a lower IQ, due to damage? Could you ever imagine a race populated by people of that intelligence somehow figuring out how to become smarter? I can't.

And so I worry that we're also not smart enough, we just think we are. Of course, I could be wrong, and human intelligence is somehow past the magical line that needs to be crossed before the entire process is possible.

#83 bgwowk

  • Guest
  • 1,715 posts
  • 125

Posted 28 March 2006 - 11:24 PM

And so I worry that we're also not smart enough, we just think we are. Of course, I could be wrong, and human intelligence is somehow past the magical line that needs to be crossed before the entire process is possible.

Intelligence is not really a continuum. There is in fact a "magical line". That line is abstract mathematics. Just as a species that has developed writing can store unlimited amounts of information independent of their brain capacity, I believe that a species intelligent enough to develop and use mathematics can effectively understand any amount of complexity that is understandable independent of their brain capacity. It's purely a question of speed.

---BrianW

Edited by bgwowk, 29 March 2006 - 12:56 AM.


#84 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 29 March 2006 - 12:05 AM

I worry that we're not smart enough to discover how to make ourselves smarter.


Human beings are certainly "smarter" than the mechanism of natural selection, yet through small, dumb, incremental steps it managed to-- [airquote] discover [/airquote] us. Go figure.

#85 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 29 March 2006 - 01:46 AM

Human beings are certainly "smarter" than the mechanism of natural selection, yet through small, dumb, incremental steps it managed to-- discover us. Go figure.

[lol]



... [sfty]

#86 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 29 March 2006 - 01:58 AM

hehe. Biological or technological evolution; in both cases the underlying process is algorithmic, with the lead end taking "intuitive leaps" into uncharted design space.

#87 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 29 March 2006 - 04:23 PM

Intelligence is not really a continuum. There is in fact a "magical line". That line is abstract mathematics. Just as a species that has developed writing can store unlimited amounts of information independent of their brain capacity, I believe that a species intelligent enough to develop and use mathematics can effectively understand any amount of complexity that is understandable independent of their brain capacity. It's purely a question of speed.

Usually that's my intuitive judgment as well. Thus far that's precisely what's made it difficult for me to accept claims that superintelligence could conceive of radically different/superior ontologies, while wishful thinking sometimes pulls me right in that direction. Well, wishful thinking and vaguely wild ideas about possible worlds with absolutely no regularity except for self-presumably enduring conscious agents through constantly radically modifying environments, embodiments, and memories. But perhaps that's not a picnic. And it still doesn't necessarily eliminate the possibility that 'consciousness,' 'environment,' 'embodiment,' and 'memory' can be abstracted from this all…

[Edit: inserted "/superior"]

Edited by Nate Barna, 29 March 2006 - 05:13 PM.


#88 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 29 March 2006 - 04:43 PM

In any case, I don't see why it wouldn't be an acceptable idea to allow for the possibility of this intelligence continuum (let the subconscious work on it, at least), if there's even the minutest chance of preparing for it.

#89 JonesGuy

  • Guest
  • 1,183 posts
  • 8

Posted 29 March 2006 - 05:32 PM

Intelligence is not really a continuum


It sure seems to be. Various people can master a task at different rates, and some people seem incapable of learning a task, regardless of effort. I posit that there is some minimal intelligence required to figure out how to make greater-than-human intelligence (or augment human intelligence above what we have now). And I don't know if we have that level of intelligence.

I see no way to prove either side other than actually augmenting intelligence (so I support this research, of course).


#90 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 29 March 2006 - 05:39 PM

It sure seems to be. Various people can master a task at different rates, and some people seem incapable of learning a task, regardless of effort.

Various people haven't crossed the magical line of abstract mathematics, for whatever reasons. Regardless, I think the magical line is where the subtle debate begins, because if there is an intelligence continuum, the differences we perceive among members of H. sapiens are overwhelmingly negligible.

[Edit: clarification.]

Edited by Nate Barna, 29 March 2006 - 09:44 PM.




