LongeCity
Advocacy & Research for Unlimited Lifespans

Better-than-human intelligence is implausible.


40 replies to this topic

#31 cosmos

  • Lurker
  • 0

Posted 26 November 2004 - 08:20 AM

But it just seems to me that if an agent is assigning a positive value to “holding tentative conclusions,” then still it must be presupposing that self-perpetuation is a necessary feature of the universe.


The "tentative conclusions" were the assertions that all values are unnecessary and self-termination is the overriding goal. I question whether all finite capacity BTH intelligence agents would self-terminate if they had proper modal-world faculties and they all tentatively concluded self-termination was the overriding goal. Expressing confidence in one's assertions does not require certainty, however the question remains of whether the doubt is sufficient to default to self-pertuation until a more certain answer can be arrived at. Here is where we may differ in our argument. I think your point is that the burden of proof is on self-perpetuation since one must presume necessitation in self-perpetuation. When I said that BTH intelligence agents may choose self-perpetuation I did not mean that they presume necessitation, although perhaps necessitation is required in any case and you were simply correcting me.

I acknowledge that evolutionarily spawned intelligence likely favours self-preservation innately and irrationally. If you read my statement two posts earlier, you'll see I agree that your dilemma may indeed prevent BTH intelligence agents from perpetuating themselves, and ultimately your claim that BTH intelligence is implausible may be correct. However, I also claimed that it may be difficult to anticipate what existential dilemmas a better-than-human intelligence agent would face, since we're humans with human intelligence and by definition limited in our ability to make such predictions.

Edited by cosmos, 26 November 2004 - 08:35 AM.


#32 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 26 November 2004 - 08:39 AM

I also claimed that it may be difficult to anticipate what existential dilemmas a better-than-human intelligence agent would face, since we're humans with human intelligence and by definition limited in our ability to make such predictions.

You're right. I look forward to better-than-human intelligence if only for the sake of sheer curiosity.


#33 ocsrazor

  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 29 November 2004 - 06:05 PM

Bumping this so I can find it because I need to come back and address some of these issues. Let me cover the simple one first.

I think you are mistaken on the mind-independent truths issue, Nate; either that or we have a semantics problem. There are certain simple relationships between physical objects; if you want to call those truths, then I can accept that as a valid definition. To me, though, the noun "truth" suggests a much higher-order definition, one that in the case we are discussing describes highly complex systems which are nondeterministic (the value systems of human-intelligence or better agents). In my semantics, and I suspect in the semantics of philosophy, truths are usually thought of as correct models of the material world. These models cannot exist without a mind to generate them. (I.e., the models and the actual relationships are two different things, and truth corresponds to the model and not the actual relationship.)

more later...

Peter

PS Just to let you know my bias, Nate: nothing against you personally, but I find the style of philosophical argument you have adopted recently particularly unhelpful for answering the type of questions you are asking. In my experience this style of discussion often obfuscates the issues you are trying to get at with layers of unnecessary semantics; see above. (I have pissed off many a philosopher by accusing them of semantics games [tung].) I feel one should strive for precision in one's language, but at the same time avoid words that can be interpreted differently by different observers. I find it particularly annoying that many philosophers often use words that are deliberately vague in their definitions, which is diametrically opposed to what philosophy is supposed to be about in the first place. Sorry for the rant [lol]

#34 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 29 November 2004 - 10:05 PM

I find your criticisms helpful, Peter. You’re right. I should strive for better precision.

#35 ocsrazor

  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 29 November 2004 - 11:41 PM

Hi Nate,

OK, this thread is driving me batty; these are some of the core issues I am dealing with. One in particular you have me thinking about is: what happens at the transition point when a deterministic system becomes a nondeterministic one? I suspect this question has already been answered by mathematicians (i.e., a dramatic change in behavior in phase space and an increase in dimensionality) but hasn't been fully addressed by philosophers. I think this is a keystone for answering your question and is tied to how rational actors behave when faced with environments of extreme complexity in which they cannot possibly have full observability because of extreme spatio-temporal limitations.
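
A minimal sketch, not from the original discussion, of the kind of qualitative transition gestured at above: the logistic map is a standard, fully deterministic system whose long-run behavior in phase space changes dramatically as a single parameter crosses thresholds. The function name and parameter values below are illustrative assumptions only.

[code]
# Illustrative sketch: the logistic map x -> r*x*(1-x) is deterministic, yet
# its long-run behavior changes qualitatively (fixed point -> periodic ->
# chaotic) as the parameter r increases. Values of r chosen for illustration.

def logistic_tail(r, x0=0.2, transient=500, keep=8):
    """Iterate the logistic map, discard the transient, return the last values."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        tail.append(round(x, 4))
    return tail

if __name__ == "__main__":
    # r=2.8: settles to a single value; r=3.2: alternates between two values;
    # r=3.5: cycles through four; r=3.9: wanders without repeating.
    for r in (2.8, 3.2, 3.5, 3.9):
        print(r, logistic_tail(r))
[/code]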

To restate cosmos' position above, I think it's highly likely that a BTH intelligence will have emergent properties which we will not be able to fully comprehend. Animals do have an innate drive for self-preservation, or for that of their young, but humans have a strong ability to overcome their biological drives to carry out self- or culturally-determined goal sets that may include self-termination for successful completion of the goal. The perpetuation of a meme set can become the overriding value set over the biologically innate self-preservation mode. A BTHI is likely to have much greater flexibility in rewriting its goal sets based on its particular view of the universe, and also to be able to examine the relationships among all the particular goal sets it is running (optimization, something that human intelligence has a great deal of difficulty with).
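
A toy sketch, purely hypothetical and not from the thread, of the goal-set flexibility described above: an agent that can inspect the goal sets it is running, score each against its current top-level values, and adopt the highest-scoring one. All names, weights, and goal sets here are invented for illustration.

[code]
# Toy cartoon only: every identifier and number below is a made-up example.

def choose_goal_set(goal_sets, value_of):
    """Return the goal set that scores highest under the agent's current values."""
    return max(goal_sets, key=value_of)

# Hypothetical goal sets the agent is "running".
goal_sets = [
    {"self_preservation": 1.0, "meme_propagation": 0.2},
    {"self_preservation": 0.0, "meme_propagation": 1.0},  # may entail self-termination
]

def value_of(gs):
    # An agent whose overriding value is perpetuating a meme set (as in the
    # human examples above) can rank that above self-preservation.
    return 2.0 * gs["meme_propagation"] + 1.0 * gs["self_preservation"]

active = choose_goal_set(goal_sets, value_of)
print(active)
[/code]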

I have a head full of partial differential equations right now.... I'll try another tack later

Peter

#36 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 30 November 2004 - 08:06 PM

Hi Peter,

I know you indicated that you may come back to this thread later. This post is not intended to exploit vulnerabilities, just to reflect some immediate thoughts.

I would say that I’m pretty much in agreement. I don’t think I’ve ever had much of a problem with understanding the notion of “minds-in-general,” which is the set of goal-oriented agents, including mind types other than human. (Perhaps you use different terminology.) You suggest that an answer to my concern may be found in the dialogue of mathematicians. However, I find their syntheses of behavioral dynamics, as I do yours, grounded mostly in a posteriori reasoning. You perceive and interpret what is already the case and make inferences based on appeals to what is already the case.

When thinking about minds-in-general, a posteriori thinking doesn’t seem like it’s always sufficient. It should be interdependent with a priori thinking, which appeals to what is knowable from reason alone, apart from experience. For a posteriori and a priori thinking to be interdependent when evaluating the possibility of BTHI is to accept, first, that there are trade-offs when appraising it either synthetically or analytically.

From what we already know to be the case, we can say certain things about indeterministic behavior when such behavior leads to increasingly complex patterns in the universe. We can say things like smarter humans can attain higher pattern-complexity than less smart humans, and therefore BTHI can attain higher pattern-complexity than the smartest humans. This is a synthetic observation coming with a significant trade-off. It says nothing about the drives toward higher pattern-complexity. It simply takes for granted that higher pattern-complexity is a good thing and is therefore an open-ended dynamic of minds-in-general.

What we don’t already know to be the case, except from reasoning alone, is whether strong enough imperatives exist to sustain increasing pattern-complexity indefinitely. It doesn’t take BTHI to figure out that underlying deliberative pattern-complexity – a defining mode of BTHI – are perceptions of utility. To perceive utility is to evoke reasons why something is useful enough to engage in its corresponding action set. As a human mind and embodiment with a pleasure modality, all I need to do is appeal to what is or may be pleasurable and automatically I have reasons to engage in actions that achieve pleasure-inducing utilities that have higher pattern-complexities than what came before. I have an innate reasons generator. But a simple a priori thought procedure recognizes that a common function of reasons generators of any mind type is to assess what is and, from that, derive what ought to be.

That is where I have trouble ascribing the quality of “better” to the synthetic presumption that there is a direct relationship between higher pattern-complexity and goodness. To derive what ought to be from what is is to assign qualities to what is. There is no other way. But these assignments are completely arbitrary in human nature and in the natures of minds-in-general. There is no intrinsic goodness in any type of reasons generator.

As I promised, there is a trade-off in thinking about BTHI a priori. Thinking about BTHI in this way says nothing about memetic evolutionary fitness. An external observer will simply notice that a bunch of minds either survive or don’t survive its dominant memes. Whether a few rogue minds inside this bunch notice that objectively-derived imperatives can never exist means nothing if the meme never dominates. This facticity is an unalterable synthesis of what is the case.

Where you think I may be adding unnecessary layers of semantics is perhaps really only the difference between what is and is not being taken for granted and, more generally, what could be analyzed with pure reason in addition to what can be inferred from experience alone. I think we are in general agreement if I concede that the standards for “better” are the fittest memes. I concede, but with the additional stipulation that, regardless of agent-relative pattern-complexity, strong imperatives consist of taking nothing for granted, and that to be intellectually and behaviorally coherent is to have mercy and acceptance of what is merciless. That I am, at this time, personally unable to reconcile the stated concession with that additional stipulation satisfactorily is perhaps the reason for this minor conflict of ideas between us.

#37 ocsrazor

  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 30 November 2004 - 11:04 PM

Hi Nate

Yep, this is definitely the heart of the difference in our worldviews: I don't believe in a priori reasoning. I think it is an artificial construct created by philosophers which has no actual basis in reality and adds nothing to any argument (i.e., it doesn't meet Occam's Razor). All reasoning is based on experience of one type or another, be it evolutionarily learned or individually learned information. Reasoning can only be based in experience of the environment you exist in, and there are no fundamental truths on which a priori reasoning might be based (even the fundamental constants can change). If you could provide an effective counter to that statement I would be obliged, because I have yet to find one in my own readings of philosophy.

Unfortunately I need to run right now
Peter

#38 psudoname

  • Guest
  • 116 posts
  • 0

Posted 25 May 2005 - 09:58 PM


(2) The nature of values is that they are fundamentally unnecessary.


I think this is wrong, because to determine that a value is unnecessary you need a value that says how necessary values are, which runs into the 'set of all sets' problem. Perhaps there isn't a fundamental point in existence, but that doesn't mean we can't pretend there is. And I see no reason why we can't pretend that intelligence and friendliness or kindness are the goals. Or any other goals we choose, for that matter.
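
A minimal formal sketch, assuming the 'set of all sets' problem referred to here is Russell's paradox: a value that ranks the necessity of all values behaves like the set of all sets that do not contain themselves. The framing is an illustration added for clarity, not something from the original post.

[code]
% Russell's paradox: the formal shape of the "set of all sets" problem.
% Define the collection of all sets that do not contain themselves:
\[ R = \{\, x \mid x \notin x \,\} \]
% Asking whether R contains itself contradicts either answer:
\[ R \in R \iff R \notin R \]
% so no such unrestricted, all-encompassing collection (or ranking) can exist.
[/code]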

#39 th3hegem0n

  • Guest
  • 379 posts
  • 4

Posted 26 May 2005 - 07:27 PM

Any agent with both volitional and modal-world faculties is intrinsically required to choose between self-perpetuation and self-termination before it can build any goal-system.


Humans don't ever make this decision. (Actually, that's a lie; most people probably struggle with suicide at least once.)

The point is, it's not really necessarily required. Our motivation to live is a subset of our motivations for [anything].

In some situations, where X is desirable and you have to die for it to happen, self-termination is desirable in that light. However, it's unlikely any human will actually take action based on that alone. Many things are taken into account, especially if they think as abstractly as I do, measuring against explicit goals unrelated to the situation.

Better-than-human intelligence is plausible because an artificial intelligence that thought faster, had perfect memory, never got tired, was more motivated than humans, had the source code to its own mind, etc., could certainly accomplish ANY goal better than a human in the same situation.

Alas, it is only intelligence that can assign a positive value to itself, while simultaneously doing so for utterly no good objective reason.

Nate, when you don't have sufficient information for an answer because it is fundamentally impossible to answer (in this case, judging motivations is impossible because you can only judge a motivation from a perspective, which is itself another motivation), you might as well assume what is most useful to you, despite not having sufficient evidence for truth. The problem begins when you start assuming because it's easier than searching for the truth.

And also, I have never used Occam's Razor. I have never pursued a line of thought more than another just because it seemed simpler. Sometimes people tend to hold that as something a lot more special than I think it is.



In addition, ultimately I think the best motivation to judge things from is the concept of all possible motivations. Death is the satisfaction of only one possible motivation: the desire to permanently end existence (or to go to heaven or whatever), whereas existing could possibly satisfy all other motivations, including death.

Edited by th3hegem0n, 26 May 2005 - 08:11 PM.


#40 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 26 May 2005 - 10:10 PM

I agree with both of you much more than I do with my self of September–November 2004.

My formative process of abandoning my views from this thread takes place in the following three threads:

Neuroethics and Stupidity
God is a Delusion
Morals and Absolute Truth

And I don't care whether you read them or not. I'm linking them just so you know.


#41 th3hegem0n

  • Guest
  • 379 posts
  • 4

Posted 26 May 2005 - 10:53 PM

lol, didn't realize that was last year.



