The Singularity



#1 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 31 March 2006 - 05:54 PM


The Singularity is very likely going to be started by a mind on a computer substrate.

The Singularity can help humanity, in general, far faster and more effectively than anything else.

The Singularity is an existential risk.

A mind on a computer substrate can become far smarter than humans, even with less computing power.

A mind smarter than the smartest human mind is unpredictable, e.g. I can't predict its chess moves, although I can predict it will win.

The only plausible defense against a bad Singularity is a good Singularity that either happens faster, or earlier in time.

A mind can be built or loaded onto a computer any day now, and the probability of that day being today is increasing as time goes on.

I can do something that decreases the risk posed by the Singularity and at the same time increases the probability of a good Singularity.

I should do something ...


Explain arguments for or against the above.

#2 lunarsolarpower

  • Guest
  • 1,323 posts
  • 53
  • Location:BC, Canada

Posted 31 March 2006 - 06:24 PM

The Singularity can help humanity, in general, far faster and more effectively than anything else.


Progress is not easy. Intelligence and creativity are not the only ingredients necessary to change the world. While an advanced intelligent entity would be useful, it would not by itself be capable of changing everything. Now if it were able to control a vast army of replicating machines, had access to large audiences via media pathways, and were able to conduct research in the real world, then it would have a shot at really changing the world.

Sometimes I think people get so focused on the intelligence quotient of the box that it's like the old MHz bragging-rights thing. Intelligence is meaningless without application, and I don't see machines having large amounts of relevant input on the world without access to large streams of information about the world. If you want general intelligence, the substrate has to have access to general information/data.

A mind can be built or loaded onto a computer any day now, and the probability of that day being today is increasing as time goes on.


I think the timeframe is much farther out for the simple reason that most current efforts lack the equivalent of sensory perception. If you watched the contestants in the DARPA Grand Challenge, you noticed that the most successful contestants were covered with myriad sensors. Intelligence cannot be general unless its data flow is general. (Disclaimer: after general intelligence has been achieved, I can imagine paring it down to be generally intelligent in the absence of general data.)


#3 RighteousReason

  • Topic Starter
  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 31 March 2006 - 08:05 PM

While an advanced intelligent entity would be useful, it would not by itself be capable of changing everything. Now if it were able to control a vast army of replicating machines,

...That's the point. It can research and develop faster than any human if it is smarter than any human.

Intelligence is meaningless without application, and I don't see machines having large amounts of relevant input on the world without access to large streams of information about the world. If you want general intelligence, the substrate has to have access to general information/data.


What is "general information/data" as opposed to "information/data", and why do you need it?

you watched the contestants in the DARPA Grand Challenge

Robotics and weak AI are not on the path to a Singularity. None of these approaches can even theoretically build a strong AI.

#4 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 31 March 2006 - 08:48 PM

I agree, Hank. Except that 'smartness' should be clarified and "IA probably won't beat AGI" articulated. Please see Why Friendly AI?

[g:)]

#5 stevethegreat

  • Guest
  • 34 posts
  • 0
  • Location:Doesn't really matter

Posted 01 April 2006 - 12:41 AM

Why won't we "embody" the Singularity in Homo sapiens?

I mean, all the technological advancements we have made were meant to assist us. A lower intelligence than ours can very well play the role of the assistant, but an intelligence higher than ours (strong AI - SAI) has no way to play that role. A big problem is that most people can't understand that AI and SAI are different entities and function very differently relative to Homo sapiens. I think that is my point: while AI is completely predictable, SAI is quite the opposite, completely random, or at least it seems that way to a lower intelligence, the same way we are unpredictable to animals.

The only way for SAI to assist us is to become us. Otherwise it is a wild animal, regardless of how Friendly or non-Friendly it is; in fact, that is of no relevance.
In the end, I don't believe that SAI will be of any use to us as long as it remains a different entity; it seems pretty useless.

#6 lunarsolarpower

  • Guest
  • 1,323 posts
  • 53
  • Location:BC, Canada

Posted 01 April 2006 - 02:01 AM

Robotics and weak AI are not on the path to a Singularity. None of these approaches can even theoretically build a strong AI.


I didn't intend to imply that they are. I guess what I am trying to say is that our intelligence - the only kind we have to base our knowledge on - is initially highly dependent on a fat pipe of streaming info. Intelligence seems to be mostly about finding patterns that have value. The more information you have to work with, the more likely it is that you can cull value from it, assuming you don't overload the agent's ability to process data, as in the case of the NSA and their omnivore email sniffer.

#7 RighteousReason

  • Topic Starter
  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 01 April 2006 - 04:07 PM

Why won't we "embody" the Singularity in Homo sapiens?

I assume you are asking one of the following:

A: Why not upload a brain?
B: Why not create devices, etc., for improving human intelligence?

These things will all happen, I assume. However, in terms of the Singularity,

1. Power has a strong tendency to corrupt humans, whereas an AGI can be designed to be verifiably Friendly.
2. It will take a long time for the basic technologies to be developed, whereas the technology for AGI is readily available.

I mean, all the technological advancements we have made were meant to assist us. A lower intelligence than ours can very well play the role of the assistant, but an intelligence higher than ours (strong AI - SAI) has no way to play that role.

Actually, AGI can help us more than anything else. An AGI can recursively improve its own intelligence, exponentially. Thus, because the AGI can become very smart, it can figure out ways to help people, for any given goal of humanity, much more efficiently than any human could. Perhaps you could clarify why you came to this conclusion?
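
A purely illustrative toy sketch of that compounding claim (my own example, not something from this thread): suppose each round of self-improvement multiplies the agent's capability by a fixed factor. Under that assumption, capability grows exponentially in the number of rounds. The function name and the 5% per-round gain are made up for the illustration.

[code]
# Toy model only: a hypothetical agent whose capability compounds each self-improvement round.
def recursive_improvement(initial_capability, gain_per_round, rounds):
    """Return the capability after each round under simple multiplicative growth."""
    capability = initial_capability
    trajectory = [capability]
    for _ in range(rounds):
        # The more capable the agent already is, the larger the absolute gain it produces next round.
        capability *= (1.0 + gain_per_round)
        trajectory.append(capability)
    return trajectory

if __name__ == "__main__":
    path = recursive_improvement(initial_capability=1.0, gain_per_round=0.05, rounds=20)
    print("After 20 rounds: %.2fx the starting capability" % path[-1])  # roughly 2.65x
[/code]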

The only way for SAI to assist us is to become us. Otherwise it is a wild animal, regardless of how Friendly or non-Friendly it is; in fact, that is of no relevance.

That's not true. An AGI can have a goal system radically different from any human's; however, if its goal system is tailored to be Friendly, it will seek out methods and implement actions that are to the benefit of humanity rather than to its detriment. It is likely its methodology will heavily involve interaction with humans, such that we know what it is doing and why, and it knows what we want and why. There is no reason for it to be anything resembling a human intelligence.

In the end, I don't believe that SAI will be of any use to us as long as it remains a different entity; it seems pretty useless.

OK, imagine if I were to create an AGI which spends a year recursively improving its own source code and hardware base, and then researches and develops molecular nanotechnology within another year, creating tools to enable immortality, lifelike virtual reality, planet-terraforming technology, etc. How useless is it then? There is no rule that says all intelligent entities must resemble humans in order for them to be useful or Friendly to humans. There is a rule that says an AGI goal system must be designed to stay Friendly under self-improvement, or else we are likely quite screwed.

#8 RighteousReason

  • Topic Starter
  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 01 April 2006 - 04:15 PM

I guess what I am trying to say is that our intelligence - the only kind we have to base our knowledge on - is initially highly dependent on a fat pipe of streaming info

We easily have the technology to stream large amounts of information to such a system, although I don't agree that the extreme amount of streaming information available to us humans is NECESSARY for intelligence. Consider Helen Keller, just as an example.

#9 stevethegreat

  • Guest
  • 34 posts
  • 0
  • Location:Doesn't really matter

Posted 02 April 2006 - 03:53 AM

That's not true. An AGI can have a goal system radically different from any human's; however, if its goal system is tailored to be Friendly, it will seek out methods and implement actions that are to the benefit of humanity rather than to its detriment. It is likely its methodology will heavily involve interaction with humans, such that we know what it is doing and why, and it knows what we want and why. There is no reason for it to be anything resembling a human intelligence.

This is exactly what I wonder about. How can we program a Friendly AGI when you yourself said that humans are not that good at planning what benefits them? Keep in mind that making a Friendly AI is a much tougher task than making the first AGI; if it is the more difficult act, how are we supposed to achieve it before we actually reach the "Singularity point"?

As for why I find it pointless to make the AGI different from us, I base it on the fact that being pleased is not our only concern; another concern of ours, a more important one, is to feel self-fulfilled. We can feel that way only if we reach a point that few other intelligences have reached. With AGI we will become mostly pleasure-seekers, but we will lose the feeling of self-fulfillment, as the AGI will achieve everything before us; in short, we will be its dear pet. I don't think we'll actually advance our lives that way, even if what lies beyond the Singularity is impossible to think of. Creating a race higher than us isn't sane, but making that race be us is a lot saner.

If we become extra-intelligent, then we will lose most of the negative traits you are concerned about and we will have no need of AGI.

#10 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 02 April 2006 - 04:31 AM

Keep in mind that making a Friendly AI is a much tougher task than making the first AGI …

Perhaps this means that the idea of Friendly AI and its reasons need to be expressed more often, at the right times, at the right places, or else you should try to alleviate the situation as best as possible in some novel way. But merely pointing this out and doing nothing about it is probably the same thing as doing nothing about it.

As for why I find it pointless to make the AGI different from us, I base it on the fact that being pleased is not our only concern; another concern of ours, a more important one, is to feel self-fulfilled. We can feel that way only if we reach a point that few other intelligences have reached. With AGI we will become mostly pleasure-seekers, but we will lose the feeling of self-fulfillment, as the AGI will achieve everything before us; in short, we will be its dear pet. I don't think we'll actually advance our lives that way, even if what lies beyond the Singularity is impossible to think of. Creating a race higher than us isn't sane, but making that race be us is a lot saner.

If we become extra-intelligent, then we will lose most of the negative traits you are concerned about and we will have no need of AGI.

This piece here might represent a stealthy bias for which I don't yet know whether there's a technical name or analysis. If we can agree provisionally that there probably will be sufficiently advanced technology, then we should be able to agree provisionally that every pattern of one's self is subject to a complete overhaul by processes originating from one's own decisions. You can hang on to patterns of yourself that generate particular mental representations of desired sub-end states to which you assign 'self-fulfillment,' but this isn't a requirement for what you now implicitly consider personhood. Furthermore, you could become an AGI, at which point all of your patterns can be products of your decisions, and therefore it could be something that you don't come to view as unnecessary.

#11

  • Lurker
  • 1

Posted 02 April 2006 - 05:13 AM

Having said all the above, can someone point me to the latest technical advances in the development of an AGI or AI of the type being discussed here? I am not talking about conceptual discussions and hypothetical conjectures, but actual research that has yielded some evidence that an event such as the Singularity is possible.

(To draw a parallel: in discussions on the prospect of immortality, I often use as practical examples the immortalized cell lines that exist in labs around the world, and as supporting evidence the built-in mechanisms of senescence that seem to be a requirement for the evolution and survival of biological communities.)

#12 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 02 April 2006 - 07:28 AM

Prometheus, first it should be said that I don't think everyone can currently be persuaded of the feasibility of AGI. I'm not suggesting that you're at this extreme, but anyone can choose to deny a possibility until it actually occurs. Still, I can't approximate how close to that extreme you are either.

Secondly, probably everyone starts out making decisions with insufficient information. One's initial set of heuristics and biases may or may not allow one to pursue an area of investigation for very long before one perceives intolerable opportunity costs from the denial of other areas of investigation that appeal to one's current set of heuristics and biases.

If you're going to make any use at all of the signposts you're shown from this area's outskirts, rather than wasting your own time, you should infer that even if you do achieve practical biological immortality tomorrow, that doesn't necessarily mean that you've maximized your adaptability/cognizance or that you're prepared to begin. Observe: 'Non-stupid cognitive agents are working to maximize their and/or their machine's adaptability/cognizance, right now.' Very little information can be extracted from this other than that it may predispose one to perceive the void: 'Perhaps that doesn't necessarily mean that my (or whoever matters) adaptability/cognizance will be considered for sufficient maximization. What should this mean to me (or whomever matters)?'

Now, if you're still interested in what you believe you're asking for, let me know and I can redundantly provide links and further counsel (I'm sure Hank Frenzy will anyway (LOL)), or you can begin Process X, which I believe is unlikely to begin from this point.

#13 RighteousReason

  • Topic Starter
  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 02 April 2006 - 03:07 PM

I base it on the fact that being pleased is not our only concern; another concern of ours, a more important one, is to feel self-fulfilled

You are missing the point, but that's ok because this is an easy point to miss.

You can program the AGI to take this, and any other dilemmas, into account. That's what Friendliness is all about: "In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted."

If we become extra-intelligent, then we will lose most of the negative traits you are concerned about and we will have no need of AGI.

Do you know of any credible scientific papers that state a correlation between being smart and being nice in humans? I have never seen such a document, and I believe it is safe to say that no document like that will ever arise.

Keep in mind that making a Friendly AI is a much tougher task than making the first AGI; if it is the more difficult act, how are we supposed to achieve it before we actually reach the "Singularity point"?

Humans have done a lot of difficult, seemingly impossible, things, and we do them all the time. We have before, we still do today, and we will continue to do what is seemingly impossible in the future. Just because a problem is hard doesn't mean we need an FAI to do it for us, although many like to say we would. An FAI just happens to be the fastest, safest option available.

#14 RighteousReason

  • Topic Starter
  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 02 April 2006 - 03:15 PM

Having said all the above can someone point me to the latest technical advances on the development of an AGI or AI of the type being discussed here?


One could point to Novamente. I'm still waiting on the book Artificial General Intelligence.

Also, these people have contributed some extremely interesting work on the problem:

Pearl
Hutter
Lenat
Voss
Yudkowsky
Kurzweil
Hofstadter
Goertzel

This is all that comes to mind immediately, but there are many more people with interesting projects, papers, and other contributions.

#15 RighteousReason

  • Topic Starter
  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 02 April 2006 - 03:20 PM

If we become extra-intelligent, then we will lose most of the negative traits you are concerned about and we will have no need of AGI.


Secondly, someone is going to create an AGI. This is an extreme existential risk. We don't have time to try to hack ourselves to compete with an intelligence on a computer substrate; it will be way, way faster. We have to make sure a Friendly AGI is created first.

#16 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 02 April 2006 - 08:11 PM

Hank, anyone can find the same stale superficial propaganda on their own, which doesn't seem to work on very many people who aren't already impressionable or the technophilic pure mathematician type. That's merely my intuition.

BTW, I hope you're a Crocker.

#17 RighteousReason

  • Topic Starter
  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 02 April 2006 - 11:28 PM

stale superficial propaganda

Ahh! [:o]

Nothing here is stale superficial propaganda! Please specify what you have a problem with so I can explain more clearly!

#18 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 02 April 2006 - 11:44 PM

I think that's the problem, Hank. It's almost too clear. ;)

That type of propaganda appeals mostly to a coherent set of intuitions. While we all probably have a coherent set of intuitions, our entire set of intuitions tends to be incoherent, as in, we tend to have conflicting, vague intuitions. A decent first step in appealing to entire sets of intuitions is perhaps deliberately trying to avoid dishing out superficial propaganda, unless, of course, we're trying to appeal to those [airquote] wise [/airquote] people who already have a full set of coherent intuitions.

#19 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 02 April 2006 - 11:48 PM

Hmm, an idea being too clear, concise, and easy to understand.

That is a criticism you don't often hear..

#20 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 02 April 2006 - 11:51 PM

And, of course, you must be someone with a full set of coherent intuitions. Excellent! ;)

#21 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 02 April 2006 - 11:55 PM

And, of course, you must be someone with a full set of coherent intuitions. Excellent! ;)


Well of course I think all of my intuitions are coherent. :) I have learned a lot from Hank's posts on this board; he always seems to know what he is talking about and to make a lot of sense (to me anyway), and I appreciate the clarity he brings.

#22 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 02 April 2006 - 11:56 PM

Well of course I think all of my intuitions are coherent. :) I have learned a lot from Hank's posts on this board; he always seems to know what he is talking about and to make a lot of sense (to me anyway).


No argument there, liveforever22.

;)

#23 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 03 April 2006 - 12:05 AM

aah, just not the clarity part? I see..

#24 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 03 April 2006 - 12:07 AM

Oh, you must have inserted it in there after I posted. Oops.

#25 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 03 April 2006 - 12:08 AM

Well, that and the fact you said this:

I think that's the problem, Hank. It's almost too clear. ;)



#26 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 03 April 2006 - 12:12 AM

I said this:

I think that's the problem, Hank. It's almost too clear. ;)

That type of propaganda appeals mostly to a coherent set of intuitions. While we all probably have a coherent set of intuitions, our entire set of intuitions tends to be incoherent, as in, we tend to have conflicting, vague intuitions. A decent first step in appealing to entire sets of intuitions is perhaps deliberately trying to avoid dishing out superficial propaganda, unless, of course, we're trying to appeal to those [airquote] wise [/airquote] people who already have a full set of coherent intuitions.

And, yes, I also said this:

uitions, our entire set of intuitions t



#27 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 03 April 2006 - 12:15 AM

umm, right..

Which led me to post:

Hmm, an idea being too clear, concise, and easy to understand.

That is a criticism you don't often hear..


At which point it seems we are reposting the entire conversation we just had. I think I had a dream that was similar to this one time.

#28 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 03 April 2006 - 12:17 AM

Agreed.

Taking things out of context led you to post that.

;)

#29 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 03 April 2006 - 12:20 AM

Aah, OK, so I misunderstood you to begin with, and you didn't think Hank was being too clear? Well then, I apologize! ;) If you had brought that to my attention after my original post, I would not have responded.


#30 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 03 April 2006 - 12:26 AM

I accept your apology. And please accept mine. I didn't assume that you might've misunderstood.



