The Singularity


36 replies to this topic

#31 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 03 April 2006 - 12:53 AM

While you misunderstand, however, it still seems like you don't understand, unless, of course, you understood and you're still upset that I haven't made you my friend and won't.

But supposing you are not upset that I haven't made you my friend and you are genuinely ignorant, I shall clarify. When I said, "It's almost too clear," although it signifies an almost literal meaning, it's said in the context of a smile and in reference to the idea of a coherent set of intuitions. If you're genuinely ignorant, perhaps, until you are further educated, it is lost on you that a full set of intuitions can contain members as sets of intuitions.

Now, if you're genuinely ignorant, it's common not to understand subtleties in language and ideas. So, it's quite all right, you are not blameworthy.

;)

#32 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 03 April 2006 - 02:17 AM

While you misunderstand, however, it still seems like you don't understand, unless, of course, you understood and you're still upset that I haven't made you my friend and won't.

But supposing you are not upset that I haven't made you my friend and you are genuinely ignorant, I shall clarify. When I said, "It's almost too clear," although it signifies an almost literal meaning, it's said in the context of a smile and in reference to the idea of a coherent set of intuitions. If you're genuinely ignorant, perhaps, until you are further educated, it is lost on you that a full set of intuitions can contain members as sets of intuitions.

Now, if you're genuinely ignorant, it's common not to understand subtleties in language and ideas. So, it's quite all right, you are not blameworthy.

;)


Wow, that was perhaps one of the most derogatory posts directed at me that I have ever seen. I will, however, respond with the rebuttal that you seek, even though it will further drive this thread off topic (which is evidently what you are angling for).

First, to clarify, I have not received a formal degree in linguistic studies or a related field (nor have you, I am willing to wager), but I believe I comprehend the English language well enough to follow what you have said. Second, while I do hold out hope that I am able (in general) to have people as my friends, that is not a requirement I have for a discussion on a message board.

So, you gave me two apparent options to pick from:
1) “you understood and you're still upset that I haven't made you my friend and won't.”
2) “you are not upset that I haven't made you my friend and you are genuinely ignorant”

I believe that neither of these is completely true. The first is the closest (IMO) in that it states that I understand. However, as I stated above, the “friend” requirement is not one that I impose (although I do hope that you do not mistake my arguments as being “unfriendly”, only logical or illogical).

Now to go through what you said, and why I took it to mean that you thought Hank was being too clear in his arguments.

I think that's the problem, Hank. It's almost too clear. :)

I interpreted this to mean that you thought his arguments were almost at the point of being too clear. I think most people who read that would think the same thing, but of course I can only speak for myself, so I consulted Dictionary.com for the definition of the word “almost”:
al•most adv.: Slightly short of; not quite
So, if something is slightly short of being “too clear”, then it is about as close to being “too clear” as it can get without actually being “too clear”. If you meant that it wasn’t clear at all, then of course that is what you would have said, so my assumption that you thought Hank was being just shy of “too clear” is a well-founded one (IMO).

You, of course, responded to what I thought with this:

When I said, "It's almost too clear," although it signifies an almost literal meaning, it's said in the context of a smile and in reference to the idea of a coherent set of intuitions.

So, you state “it signifies an almost literal meaning”, which of course I agree with; however, using the phrase “almost literal meaning” to clarify “almost too clear” is simply using a phrase to describe itself (almost = almost in this example), which does not demonstrate a grasp of “subtleties in language”.

Next you said (original post that I replied to):

That type of propaganda appeals mostly to a coherent set of intuitions.

I do not believe that what Hank said is propaganda, but since I am in no way qualified to argue merits pertaining to The Singularity, I will stay focused on the argument at hand (my apparent misunderstanding of what you said). The fact that you said “appeals mostly to a coherent set of intuitions” further strengthens my original assertion that you thought Hank’s comments were “too clear” or too easily understood. If something is intuitive, it is self-evident (to me at least) that it is easy to understand.

Next:

While we all probably have a coherent set of intuitions, our entire set of intuitions tends to be incoherent, as in, we tend to have conflicting, vague intuitions.

I concur that “our entire set of intuitions tends to be incoherent”, but that has nothing to do with what I originally said, so it is not a point of contention. Your use of the word “intuition” here tells me that you seem to agree with my previous statement that intuitive (or intuition) = easy to understand.

Next:

A decent first step in appealing to entire sets of intuitions is perhaps deliberately trying to avoid dishing out superficial propaganda, unless, of course, we're trying to appeal to those [airquote] wise [/airquote] people who already have a full set of coherent intuitions.

I will again refrain from arguing points of the Singularity, or how best to argue about the Singularity, and address only my supposed misinterpretation of what you were saying about the clarity of statements that Hank made. Your use of the phrase “superficial propaganda” further validates (in my mind) that you were saying Hank was being “too clear” or too easy to understand.


In summary, I felt that your original post was saying that Hank was “being too clear, concise, and easy to understand” (my original words). If you meant for your original comment to be sarcastic, or in some other way facetious, then it is apparent that I did not understand your intention. Again, I apologize for misunderstanding what you said, but as you can see I thought the intent of your message was clear.

I also apologize to everyone else reading this for the sidetracking of a thread that was becoming informative.


#33 RighteousReason

  • Topic Starter
  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 03 April 2006 - 02:22 AM

I think that's the problem, Hank. It's almost too clear.

...?

That type of propaganda appeals mostly to a coherent set of intuitions.

I'm almost afraid to ask, but can you specify what it is you are talking about here? I don't know anything about "coherent sets of intuitions", and so it's difficult to make a proper response.

A decent first step in appealing to entire sets of intuitions is perhaps deliberately trying to avoid dishing out superficial propaganda, unless, of course, we're trying to appeal to those  wise  people who already have a full set of coherent intuitions.

I don't think we've made any progress getting to the bottom of just what qualifies this as superficial propaganda, such that I can learn to avoid that quality. You speak of appealing to entire sets of intuitions. Can you give an example or two, and perhaps a counter-example or two? Secondly, why do you feel that this will help me connect better with my audience? I am trying to appeal to anyone, although unless you are particularly intelligent and curious, you are unlikely to be able to do the reading required to understand what I'm stressing in the first place (not "you" personally, Nate).

Also, to others: please don't let this tangent in the thread throw you off. I am still very interested in responses to the original post, as well as the subsequent discussion. I'm especially open to the kind of meta-discussion Nate brings up (ex. Hank you are being stupid -> here), as long as it is relevant and understandable, that is ;) .

#34 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 03 April 2006 - 02:31 AM

I see that liveforever22 still misunderstands and now I see that you misunderstood, Hank. While I'm sure my earlier post could use more clarification, perhaps everyone is content enough with the initial thrust of the thread.

#35 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 03 April 2006 - 08:51 PM

While an advanced intelligent entity would be useful, it would not by itself be capable of changing everything. Now if it were able to control a vast army of replicating machines, have access to large audiences via media pathways, and be able to conduct research in the real world, then, it would have a shot at really changing the world.


Of course the implication is that the advanced intelligence would be able to grant itself access to these enablers, or manufacture them from scratch, more rapidly than any human counterparts. All you need to do to "beat" humans is to develop a motor framework that can do more "work" than them, at a faster pace, directed at least as intelligently.

If you want general intelligence, the substrate has to have access to general information/data.


Imagine an AI that has exposure to nothing except the world of atoms and molecules. This AI would lack "general" intelligence, but could easily bootstrap its nanoscale knowledge into a means of making sense of the macro-scale world. For example, making a simulation of a large object based on low-res simulations of its interacting components. So an AI that starts off in one small quadrant of knowledge could very well escape that quadrant quickly, with sufficient intelligence. To many types of alien species, it might look like humans lack "general intelligence" - look at our ability to deal with math, for example! And yet we can accomplish so much.

I think the timeframe is much farther out for the simple reason that most current efforts lack the equivalent of sensory perception.


As Hank mentioned, Helen Keller possessed general intelligence, despite lacking many sensory faculties. A blind, deaf, un-smelling, un-tasting, lame baby stuck in a wheelchair could develop normal general intelligence by simple interaction with Braille. Also, there are instances of microcephaly (abnormally small brain) and hemispherectomy (removal of one half of the brain) where the subjects possess only slightly-below-average intelligence, even with 30% or less of average brain volume.

Robotics and weak AI are not on the path to a Singularity. None of these approaches can even theoretically build a strong AI.


Why not? It would just be more difficult than building a strong AI directly. Myriad specialized AIs might be condensed into a multi-functional general AI. This is a decent premise for a story.

Power has a strong tendency of corrupting humans, whereas an AGI can be designed to be Friendly, verifiably.


I'm not entirely certain about this - "Friendly" defined as "beneficial to humans and humanity; benevolent rather than malevolent; nice rather than hostile" may be verified in an AI at a less-than-human or roughly-human intelligence level. But how to verify that this behavior will continue through endless cycles of recursive self-improvement, keeping in mind that future humans will have radically evolving ideas about what "beneficial", "benevolent", etc., exactly mean?

The best we can say, I think, is that a given AI is more Friendly and more likely to remain Friendly through self-modification than any given human.

The Coherent Extrapolated Volition model helps to alleviate concerns by tying the cognitive dynamics of the AI even more closely to human preferences than ever before. But is it close enough? Or too close? The CEV document is great on generalities, but implementation specifics could easily make the difference between UFAI and FAI. Like string theory, CEV gives rise to a huge number of possible solutions, only a subset of which will correspond to "optimal success".

We may need to settle for suboptimal success due to time constraints. Or perhaps a suboptimal stopgap will need to be introduced. It may require superhuman intelligence to invent a complete FAI theory, such that no quantity of genius human foresight is sufficient to build a program that stays Friendly indefinitely.

Also, to others: please don't let this tangent in the thread throw you off.


Funny how a discussion thread on the Singularity slightly collapses into a discussion between two Singularitarians on how to best approach discussion of the Singularity... this is probably the type of thing best resolved via PM, AIM, or IRC.

In general, Hank seems to "toe the party line" in his phrasing of explanations, and Nathan tends to phrase things from a more independent point of view. Part of the problem is that young Eliezer was *so* smart that he leaped about 5-10 years ahead of everyone else in his thoughts regarding AI safety and possibilities - so there is a lot of phraseological inertia in favor of his terminology - despite the fact that many of the digerati are starting to discuss these issues in their own words lately.

#36 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 03 April 2006 - 11:14 PM

Thanks, Michael.

Funny how a discussion thread on the Singularity slightly collapses into a discussion between two Singularitarians on how to best approach discussion of the Singularity... this is probably the type of thing best resolved via PM, AIM, or IRC.

And to Hank's credit, he PMed me in the earlier stages of my thread on FAI to make a suggestion about how its quality could be improved.


#37 RighteousReason

  • Topic Starter
  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 03 April 2006 - 11:27 PM

Myriad specialized AIs might be condensed into a multi-functional general AI.

Sure, that is plausible. I think it is likely to take a lot more computing power to make that happen, and so, although it becomes more plausible every day, I think the more likely solution will be a generalized approach to building intelligence. I think what I was trying to say there was that the traditional approaches in the fields of robotics and weak AI have hit theoretical dead ends in terms of solving general intelligence. Perhaps if someone tried to integrate a large portion of these theories and applications into a single model/system, as you are proposing, they might find success.



