  LongeCity
              Advocacy & Research for Unlimited Lifespans





Better-than-human intelligence is implausible.


40 replies to this topic

#1 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 22 September 2004 - 09:05 PM


Obsolete.

Edited by Nate Barna, 10 November 2005 - 09:03 PM.


#2 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 23 September 2004 - 02:20 PM

Hey Nate,

This thread is definitely thought provoking, and although I do not have your level of skill in philosophy (I'm currently taking my first philo class -- logic :) ), I think I see where you are going with this.

My main area of contention with your argument would probably be Premise #2. I'm not sure that I could come up with a logically consistent argument for why I object to #2 (nor, unfortunately, do I have the time), but I would definitely like to see why you maintain this existentialist perspective.

I'll try to post more later

DonS


#3 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 23 September 2004 - 05:35 PM

Although the logic is valid, and the premises seem valid on the surface, I do disagree. And it took me a while to see why I disagree.

Your contention is that better-than-human intelligence cannot be created. But as I see it, the logical argument you presented actually should conclude that optimally-better-than-human intelligence cannot be created. We can settle for something which is suboptimal, but still better-than-human. By allowing something that is sub-optimal, we can agree with points (2) and (3), but disregard (4), which means we do not even get to (5), (6), and (7).

In fact, if we assume some degree of control over what "optimally" is, we can stipulate that (4), (5), and (6) are valid, with the exception that we "hard-code" as it were that self-termination be a low-priority goal, for subjective reasons (i.e. a value), and allow all other goals to be pursued for objective reasons (i.e. not values).

To contradict this, we must show an argument that disregarding all values except the subjectively low priority assigned to self-termination cannot be better-than-human. I'm not saying that such an argument cannot be created, only that I don't think you have done so yet. Your description of smarter-than-human and kinder-than-human was a good start, but needs more formal and acceptable premises on which to build an argument, as well as the steps to the argument itself (which you laid out nicely in your argument that, as I see it, optimally-better-than-human intelligence cannot be created).

Nonetheless, very thought-provoking. :)

#4 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 23 September 2004 - 06:03 PM

I realized another thing that didn't bother me at first, because I misinterpreted it. It's in your definition of "values":

value – a cognitively derived subjective aim

(my emphasis added)

Perhaps you can see where I'm about to go with this, when I italicized "cognitively". When I first read your definition, I focused on "subjective", even though I saw "cognitively". I do agree that once something becomes subjective, it is no longer fundamentally necessary, and hence I agreed with item (2) in your argument.

However, when I was describing a suboptimal situation where we can at least prescribe an arbitrarily low priority to self-termination, and then allow all other goals to be objectively prioritized, I realized something. If we assume that we start in a situation where all goals are equally prioritized, if we do not allow goals to be re-prioritized, then this "intelligence" would not be able to function without infinite processing power, whether parallel or serial. Without prioritization, it must "think about" and "act upon" all goals at once, or perform these tasks in some order. If we do not allow some sort of prioritization, then the only "optimal" order to perform tasks in, per Occam's Razor, is a "random" one. Random could be alphabetically, or in order of processing time, but even these decisions put a subjective priority on spelling or the "size" of the task. To be truly impartial, only a truly random order of performing tasks is "optimal".

At which point, short of having infinite processing power available, such an optimally-better-than-human intelligence cannot exist. Assuming the universe is a "computer" of sorts, then no subset of it can be optimal, so that the universe itself is the only thing which "might" be optimal. And even this is debatable, though it's not my area of expertise, so feel free to disagree.

On the other hand, if we allow tasks to be prioritized in some sort of systematic fashion, then we can proceed with less-than-infinite processing power. Given that we only have less-than-infinite processing power available, then in that environment, the optimal solution is one that is sub-optimal, so to speak. One that does assign priorities.
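To make that contrast concrete, here is a toy sketch (illustrative Python with made-up goal names, not a claim about how any real system is built): with strictly equal priorities, the only impartial scheduler is a uniform shuffle, while any deterministic ordering has to import a priority function, which is already a subjective input.

import random

goals = ["refine world-model", "acquire resources", "answer a query", "reconsider self-termination"]

def impartial_order(goals):
    # With no goal outranking any other, the only impartial ordering is a uniform shuffle.
    order = list(goals)
    random.shuffle(order)
    return order

def prioritized_order(goals, priority):
    # Any deterministic ordering needs a priority function -- already a subjective input.
    return sorted(goals, key=priority)

print(impartial_order(goals))
print(prioritized_order(goals, priority=len))  # e.g. "shortest description first" -- still a value judgment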

I had said that those priorities can be "objectively" assigned; however, since those priorities must be deduced by some method, even a completely rational one, we must have some non-universal beginning axioms or postulates, which introduces subjectivity. (By non-universal, I mean that we cannot know with certainty that they apply to the "real world" (which goes back to cosmos's uncertainty clause), so that we in effect must make a subjective decision in setting the initial axioms.)

In other words, short of starting with "universal truths" and applying valid logic, then some level of subjectivity must be applied in prioritizing goals, tasks, etc.

So, without infinite processing power, I contend that "ultimately-optimally-better"-than-human intelligence cannot exist. However, given the amount of processing power available within the finite universe we can see, there is an optimal intelligence that can be created (because it only needs to be better than the alternatives available), that will not self-terminate, and that intelligence is most likely better than human.

But I cannot present here an argument for or against that assertion. Let's see if we can hack one out. :))

#5 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 23 September 2004 - 06:14 PM

(Jay, i just noticed your responses after i wrote the following reply to Don. Thank you. I just want you to know i'm not ignoring them.)

Thanks, Don, you pointed out a good hole. That whole premise probably needs clarification, as does the perceived inherent value in Occam’s razor. Also, even if all values in a value set were cognitively prescribed equal valuation, some value of no inherent importance would need to rise above the rest to make this assignment. There are definitely serious problems with the argument. Unfortunately, i don’t yet have the toolkit to fully articulate and defend them (if they can be defended).

Without being able to give a very good reason why, it seems like values are purely anthropic phenomena. Yes, some programs and programmed robots are active and manipulate reality, but their actions are based purely on anthropic initiative. Perhaps it’s beyond my knowledge or genuine ability to grasp, but it seems like nothing can function in a deliberative manner without having a value system. That fact is not problematic. What is problematic is that a value system is an elaborate reward mechanism. The reason some values have a higher influence than others is the subjective experience of pleasure. This is the anthropic way. If anything other than a human is ever to seek to optimize its phase spaces, it would need to experience pleasure, or be hardwired with reasons for holding specific aims above others. If those reasons all happen to be computational, the computations still can’t escape computing the reasons to compute reasons, unless it inherited stupidity from its programmers.

Any volitional agent able to identify the approximate ontology of value has to either deal with endless circular problems (which is appropriately dealt with by normalizing the simplest plausible Theory of Subsistence, i.e., self-termination), continue operating according to its built-in reward mechanism, or operate without sempiternal question within the Intersubjective Institution (II is the aggregate force of all volitional agents upon each volitional agent, minus the freedom to self-terminate).

#6 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 24 September 2004 - 01:03 AM

Hi Jay,

Upon reading your responses carefully, i see that i might’ve touched on some areas in my reply to Don that could be relevant in a reply to you. But leaving it at that wouldn’t do you much justice. You bring up new and insightful ideas.

Firstly, i’m not a computer scientist or anything like one, so whatever i say will probably be received as a transgression on my part by an AGI computer scientist. However, there are things that can be said about volitional activity regardless of which mind types are being volitional.

I think you expounded on the implication of equal-priority values very well. I suspect you are either Bayesian or have the qualities that would make the transition to Bayesian go quite smoothly. Personally, i am not. My studies in mathematics haven’t taken me yet in the direction of statistics and quantitative reasoning. But that doesn’t mean i would disagree with a Bayesian approach. On the contrary, based on what little i’ve read on a Bayesian approach to survival, the highest possible subjective assignment to the value Life is Pointless must still be treated with even the slightest uncertainty. It goes without saying, even the slightest uncertainty demands not deliberately terminating oneself.
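Roughly, as i understand the toy version of that calculation (invented numbers, and i’m happy to be corrected by someone who actually knows Bayesian decision theory), the asymmetry looks like this:

p_pointless = 0.999      # near-maximal credence that continued existence has zero value
v_if_pointless = 0.0     # persisting costs nothing if that credence is right
v_if_not = 1.0           # persisting is the only way to collect value if it is wrong

ev_persist = p_pointless * v_if_pointless + (1 - p_pointless) * v_if_not
ev_terminate = 0.0       # termination forecloses both branches

print(ev_persist, ev_terminate)   # roughly 0.001 vs 0.0 -- any residual uncertainty favors persisting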

That, of course, is strictly a quantitatively rational approach. It doesn’t fully represent an actual subjective experience – at least not yet. For the time being, it simply seems as though morals can’t be arrived at in a completely rational fashion. Morals don’t develop in some universal realm apart from the subject, because that realm demands nothing of the subject other than that it cannot transgress the yet unknown, and minds are always finding workarounds in nature, the default limitations of reality, in order to meet desires. A subject can do anything immediately below that point, down to the point of self-termination. That’s a wide range of freedom, and yet most of us manage to figure out how to decently contain that freedom.

In this light, i can see how the development of a better-than-human intelligence in the original sense could be managed. As you indicate, it's a different question altogether whether ultimately-optimally-better-than-human intelligence is physically possible. I think that would depend on a subjective evaluation, which in turn would depend on a subjective evaluation, ad infinitum. But as you say, and i agree, there is only so much knowledge a mind can apply in any given phase space and that this should be enough (under normal circumstances) to at least do something rather than nothing.

#7 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 24 September 2004 - 03:15 AM

From the above link

Thus, even in the event that all choices are arbitrary, the Singularity would still be the best way to serve the goal of building the best possible world consistent with the laws of physics and the maintenance of individual freedom.

this probably sums up AGI researchers' thinking most nicely in regard to the axiological concerns i've been having lately.

#8 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 24 September 2004 - 02:03 PM

As you indicate, it's a different question altogether whether ultimately-optimally-better-than-human intelligence is physically possible. I think that would depend on a subjective evaluation, which in turn would depend on a subjective evaluation, ad infinitum.

(my emphasis added)

Exactly! Without infinite processing power (which would allow such an intelligence to proceed with truly equal starting priorities for all tasks), we would have to subjectively evaluate or create a mechanism to prioritize remaining tasks (whether functional or purely cognitive/"emotive" tasks). We could evaluate such a subjective system based on another system, but without infinite processing power, even this system must be subjective. At some point, we must take the less-than-optimal path, what we often refer to as heuristics. Of course, Occam's razor dictates that, if we must take the heuristic approach, then we should at least take the minimally heuristic approach based on given resources.

Now, heuristically speaking, we can still say that a particular approach is optimal given the resources available. However, proving that such is optimal may or may not be possible with the given resources. (As a lame example, I understand the principle expressed in Fermat's Last Theorem. If I needed to use this theorem to do something, I would want to know if it's true. I can test it for as many sets of numbers as I want, until I am convinced that it's at least probably true. I could probably even follow the logic of the proof for an exponent of 3, or 4, etc. But as for the general proof of all cases, I do not even remotely understand it; it runs to over a hundred pages of advanced number theory. In other words, for all intents and purposes it is true enough that it doesn't matter to me if there is an exception. For the level of computation I would be using the theorem for, I don't have enough computational experience or power to prove it.)
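The spot checks I have in mind are the sort of thing below: a quick, purely illustrative Python sketch that looks for counterexamples to a^n + b^n = c^n over a small range and, as expected, finds none.

def counterexamples(max_base=30, max_exp=6):
    found = []
    for n in range(3, max_exp + 1):
        for a in range(1, max_base + 1):
            for b in range(a, max_base + 1):
                s = a**n + b**n
                c = round(s ** (1.0 / n))
                for cand in (c - 1, c, c + 1):   # guard against floating-point rounding
                    if cand > 0 and cand**n == s:
                        found.append((a, b, cand, n))
    return found

print(counterexamples())  # prints [] -- no counterexamples in this small range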

Anyway, where I was going with this is, even if we came up with a heuristic approach that was optimal given the available resources, proving that such was optimal might not be possible, in which case we would have to come up with a subjective evaluation of whether that heuristic approach were truly optimal.

In the end, then, what we should be concerning ourselves with is, are there better heuristic approaches than the ones that human-level intelligence can generally come up with, and how do we get to those approaches?

After all, we may have to start with a subjective system, but one that can design a system that relies on fewer subjective inputs. The next system can redesign based on fewer subjective inputs, until we are left with the minimum amount of subjective inputs based on available resources.

We sort of do that already. But since the "available resources" are so limited, the number of subjective inputs is very high. As more resources become available, we can start with fewer subjective inputs and still get useful results.

Finally, this might sound lame, but perhaps it is a subjective evaluation in the first place that subjective evaluations, to whatever degree possible, should be eliminated. After all, given our less-than-optimal intelligence, we could be wrong about that point. :)

#9 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 24 September 2004 - 04:23 PM

Jay, your continued revelations of the vast expanse of your knowledge amaze me. So i have a question, since you may have wrapped your mind around the prospects of superintelligence enough to relate to it. Would you suspect that better-than-human intelligence is unlikely to give us any more insight into mind-independent truths such as those revealed in formal logic, but rather only empirical-based insight that would assist us in functioning better in our mind-dependent world? (Anyone else can chime in if they like, of course.)

#10 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 22 November 2004 - 05:01 PM

Jay, your continued revelations of the vast expanse of your knowledge amaze me. So i have a question, since you may have wrapped your mind around the prospects of superintelligence enough to relate to it. Would you suspect that better-than-human intelligence is unlikely to give us any more insight into mind-independent truths such as those revealed in formal logic, but rather only empirical-based insight that would assist us in functioning better in our mind-dependent world? (Anyone else can chime in if they like, of course.)

Short answer? Yes. I suspect that "better-than-human" intelligence will not discover truths that are undiscoverable by human intelligence. Rather, a BTH intelligence will discover them faster, sooner, etc. Humans in the 16th century could not solve the problems of quantum physics, not due to a deficiency in "human intelligence", but rather a deficiency in both empirical evidence and in the theoretical frameworks created by men to assimilate that evidence.

I don't see a limit in sight for human comprehension, at least not a limit that could be surmounted by BTH intelligence operating within our closed universe.

One way I like to think of it is by thinking about the set of facts that humans work on when working on problems.

For example, I recently had this strange experience. I was thinking about the number 27, and whether it was a multiple of 3. Now this wasn't some thought experiment, or an exercise of my math skills. It was just a mundane task as part of my daily life. I don't remember the details, but let's say I had 27 of something, and I was trying to figure out if it could be cut into three equal parts.

Well, in my mind, I remembered that 27 is a multiple of 9. How many nines? As it turns out, 3. And there was my answer...

Only in the second or two after I thought this through did my brain remember that 27 is 3 cubed: it is clearly a multiple of 3!

In very sterile mathematical terms, 27 is 9x3. 3 is a factor, so clearly it's a multiple of 3.

But with human minds, an interesting thing occurs. 27 is 9x3, so it's a multiple of 9. But is it a multiple of 3? Our minds sometimes don't see the whole problem at once, and that slows us down. And this is just a very simple example. You can imagine how much more this becomes a hindrance in complex problems. This is where BTH intelligence comes in: it won't as easily be slowed down by such inconveniences as limited working memory. That doesn't mean it can solve problems that we can't; it can just solve them faster.
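(For what it's worth, the sterile version is a one-step check with no detour through 9; a throwaway Python check, nothing more:)

print(27 % 3 == 0)   # True -- divisibility checked directly
print(27 == 3**3)    # True -- 3 is a factor, so 27 is trivially a multiple of 3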

Where I see the major improvement in BTH intelligence is in non-deterministic decision making. Not just gambling, where statistics comes into play: humans can do this too. But where values come into play. How much information does a person need to make a good decision, when the correct decision cannot be known without vastly more information? I think BTH intelligence will make its best showing in this category of questions: these are the questions that plague politics. Neither side has all the information, and so both sides could be right. Perhaps neither side is right, but one side's option is better: in other words, they're right. Perhaps one side is better in the long term, but worse in the short term. Between a bad short term and good long term, and a foreseeably much worse long term and good short term, which is better? Which will we choose? Values continue to come into play, and I think that a BTH intelligence can evaluate and answer these questions not only faster than humans, but with a less complete view of the whole picture. And not just because humans are impulsive; let's stick with rational debate in politics (or is that an oxymoron?).

The problem with this? Should we trust these answers from a BTH intelligence? I know it was a lame movie, but consider "I, Robot". What if the best answer is to restrict freedom in the name of security, and we left that decision to BTH intelligence? In the end, the questions that a BTH intelligence is more fit to answer than humans are the very questions we wouldn't want to hand over to someone else to answer for us.

#11 quadclops

  • Guest
  • 316 posts
  • -1
  • Location:Pittsburgh, PA

Posted 22 November 2004 - 08:35 PM

Wow, I often feel I need a higher order intelligence just to follow Nate's discussions! [lol]

On the subject of superior brains in combination with superior morals, let's take Gandhi as an example. I'd be the first to admit the man was much smarter than I am, and at the same time much more ethically advanced. Here's a person who achieved both of these things in combination. So, why should it be implausible that an AI person would be able to exceed Gandhi in both grades and goodness?

#12 ocsrazor

  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 22 November 2004 - 08:36 PM

Just quickly skimmed your argument, Nate, and the replies. Some quick comments; I'll try to give you a deeper argument later.

The argument that values are unnecessary cannot be successfully defended. Restating what Jay said above in a different way: values are simply a game-theoretic calculation of the best possible choices when an agent is faced with scarce data about a complex environment. They are absolutely necessary to any agent which does not possess a god's-eye view of its environment, i.e., an environment that isn't totally observable.

More intelligent beings (on average) are much more inclined to be kind, not less. The two concepts are mutually inclusive, not exclusive. This is for very deep reasons in the structure of our universe, having to do with self-organizing systems.

Volitional agents are able to change their reward mechanisms based on their goal sets; humans show a remarkable ability to do this even in the face of millions of years of evolutionary programming. I suspect BTH intelligence would show a great deal of flexibility in changing its goal states.

There is no such thing as mind-independent truth - truth is not possible without a mind to make it so. This is a long argument, but we can address it if you want to take it on.

Jay - I would completely disagree about BTH not being able to discover truths that humans can't. Their brains are likely to have much greater complexity than ours, and they will likely be able to 'see' in many more dimensions than we can, which will allow them to access information spaces that we will simply never be able to reach. We are just monkeys pounding on keyboards in comparison :)

Best
Peter

#13 eternaltraveler

  • Guest, Guardian
  • 6,471 posts
  • 155
  • Location:Silicon Valley, CA

Posted 22 November 2004 - 11:31 PM

Didn't read all of your post yet or any of the replies. I just wanted to come to the subject of Occam's razor.

Occam's razor may be good for placing bets, and it may be good for quickly deciding what scenario to test first (if a test is possible), but other than that it is largely useless. Complicated explanations often do turn out to be correct, especially when you are dealing with complicated systems like human beings and consciousness.

Also, any scenario divined through Occam's razor may be the most likely, but that doesn't even mean it has a 51% chance of being true. It could have a 0.00001% chance of being true and still be the simplest, most likely scenario.
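Toy numbers to make the point (made-up probabilities, nothing more): the simplest hypothesis can be the single most likely one and still be far short of a 51% chance of being true.

hypotheses = {"simplest": 0.30, "alt-1": 0.25, "alt-2": 0.25, "alt-3": 0.20}
best = max(hypotheses, key=hypotheses.get)
print(best, hypotheses[best])    # 'simplest' 0.3 -- the single most likely, yet a 70% chance it's wrong
print(sum(hypotheses.values()))  # 1.0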

I like ocsrazor's quote, and I think it applies here :)

"Everything should be made as simple as possible, but not simpler"

#14 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 23 November 2004 - 01:20 PM

The basic problem originally motivating this thread is that all volitional action of an agent, minus suicide, must presuppose that self-perpetuation is a necessary feature of reality. Any agent with both volitional and modal-world faculties is intrinsically required to choose between self-perpetuation and self-termination before it can build any goal-system. To make this choice with a modal-world faculty is for the agent to unavoidably deduce it is a contingency and therefore not a necessary feature of reality. If it isn’t a necessary feature of reality and it knows this, how can it possibly deduce that it’s necessary without begging the question for the already known false conclusion that self-necessity equals true?

#15

  • Lurker
  • 0

Posted 23 November 2004 - 02:34 PM

Could you provide an example of what you do deem a necessary feature of reality?

#16 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 23 November 2004 - 03:24 PM

A necessary feature of reality is true in all possible worlds. One example is the nomological laws of universes. They cannot not be the case. Another, from arithmetic: in any consistent formal system of arithmetic, there must be formulas that are true but unprovable within the system. I, on the other hand, can fail to be the case. I am an agent with both volitional and modal-world faculties.

#17 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 23 November 2004 - 04:40 PM

Hmm, Nate's new direction has lost me; I'll go back to ocsrazor's comment.

Without going back and re-reading all of what was said, I think the gist of it wasn't that values are unnecessary. After all, most of human behavior is dictated by values and heuristics, even the "value" we place on supposedly rational discourse.

However, the point can be easily defended that no particular set of values can be objectively defended or derived. Starting only with objective axioms and laws, a program/being could derive quite a bit of new knowledge. But how do we define what initial set of axioms and laws is sufficient as a base of knowledge, without applying our own values?

No set of values is objectively defendable; therefore, who are we to say which set of values is good or bad? Even a BTH intelligence would not be able to come up with a "universal" objective set of values and "truths", because it would have to apply, at a minimum, some set of subjective values to get started.

Only with infinite processing power and memory could a program start with all potential values and truths, and "objectively" come up with a set of universal values. Maybe. Of course, even this presupposes the value of having all the information available to us as being superior to working with an initial subset of data.

In the end, we end up with precisely what ocsrazor said: "There is no such thing as mind-independent truth - truth is not possible without a mind to make it so."

#18 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 23 November 2004 - 06:13 PM

In the end, we end up with precisely what ocsrazor said: "There is no such thing as mind-independent truth - truth is not possible without a mind to make it so."


This is false.

Using myself as an example, I have two exclusive choices: self-perpetuation or self-termination. I am intrinsically required to make one or the other before developing a goal-system. A goal-system cannot possibly be based on any imperatives other than self-perpetuation or self-termination. If an agent apprehends that it has those two exclusive choices before it can develop a goal-system, then it has both volitional and modal-world faculties.

The definition of a modal-world faculty is that which enables an agent to objectively deduce either its actual necessity or contingency. These are mind-independent truths. Either has to be the case regardless of whether a mind apprehends that one or the other is or is not the case. If a mind does not have a modal-world faculty, then it has no significant intelligence. If a so-called better-than-human intelligence has no modal-world faculty, then it’s not better than human; it’s a deterministic mechanism perpetuating from the arbitrary whims of exuberant programmers.

Here is the puzzle stated differently and hopefully more simply: To choose self-perpetuation is an imperative that must derive from either having the true knowledge that one is necessary or begging the question “I am necessary.” Only agents with either a faulty or no modal-world faculty can beg the question “I am necessary," because it’s impossible to have true knowledge “I am necessary” if one is in fact and mind-independently contingent. If this is the case, how can a better-than-human intelligence be plausible if to be better than human is to have at least both volitional and modal-world faculties, and if to have a modal-world faculty entails begging the question “I am necessary” in order to self-perpetuate?

I’m tempted to delete everything that I’ve posted in this thread other than what I’ve written today. It distracts from the central problem I originally wanted to articulate but couldn't.

#19

  • Lurker
  • 0

Posted 23 November 2004 - 06:58 PM

If I'm right, you are applying modal logic, correct? Which, as you can tell, I'm not entirely versed in.

So your dilemma is how a BTH intelligence agent can come to the conclusion that, as a necessary truth, it must exist within its universe/system.

Am I correct?

#20 jaydfox

  • Guest
  • 6,214 posts
  • 1
  • Location:Atlanta, Georgia

Posted 23 November 2004 - 07:09 PM

it’s a deterministic mechanism perpetuating from the arbitrary whims of exuberant programmers.

And who's to say that this is not also true of us?

#21 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 23 November 2004 - 07:49 PM

Cosmos, yes and partially yes. To the second, to be more precise, the problem is arriving at the imperative to self-perpetuate without question begging a known-to-be-false conclusion.

#22 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 23 November 2004 - 07:55 PM

Please, Jay, I ask only for a solution or a genuine appreciation of the problem, not red herrings.

#23

  • Lurker
  • 0

Posted 23 November 2004 - 07:59 PM

Can an agent create self-necessitation within its system? Does self-necessitation have to exist as an inherent property to begin with?

Perhaps a BTH intelligence agent can anticipate with some likelihood that it can create self-necessitation within its system, therefore justifying its previous actions if self-necessitation is achieved. Self-perpetuation with the goal of self-necessitation.

Am I making any sense here?

#24 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 23 November 2004 - 08:16 PM

cosmos: Does self-necessitation have to exist as an inherent property to begin with?

I’m sure it would, but blindly at the outset. A seed AI wouldn’t have a built-in modal-world faculty. It would eventually acquire it. When it does, you suggest the following:

Perhaps a BTH intelligence agent can anticipate with some likelihood that it can create self-necessitation within its system, therefore justifying its previous actions if self-necessitation is achieved. Self-perpetuation with the goal of self-necessitation.

On the surface, this looks good. However, the AGI would need to have a reason to want to posit the self-necessitation utility function in the first place. It can’t do this without a question-begging imperative driver.

#25

  • Lurker
  • 0

Posted 23 November 2004 - 08:50 PM

What if a BTH intelligence agent entertains the notion that, if it is possible to achieve self-necessitation, it will eventually happen within its non-finite system? If a BTH intelligence agent thinks that it may play a role as a constituent part of a system becoming sentient (self-aware) of itself in its entirety, it may pursue such a goal. The BTH intelligence agent may deem it a universal goal of all agents capable of achieving such self-necessitation to do so, as an inherent option to contribute to the system as a whole.

I may have lost myself in this thought experiment. What are your thoughts on this?

#26 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 23 November 2004 - 10:45 PM

I’m inclined to think those are all anthropomorphic reasons to self-perpetuate. Entertaining notions still requires calling on reasons to choose between self-perpetuation and self-termination.

We’re able to question beg and get away with it, even when an assertion is flagrantly false. Our genotype tends not to want it to be otherwise (which implies that most non-suicidals have faulty modal-world faculties), so most of us don’t have such scruples. However, transhuman intelligence will need some way to circumvent the aforementioned scruple that’s likely to arise. It won’t have a genotype underlying its intelligence to predispose it to fantasize about gods and purposes. It will need to become a genuine self-deceiver.

#27

  • Lurker
  • 0

Posted 24 November 2004 - 12:00 AM

I did not intend to use anthropomorphic reasons, but simply to play a devil's advocate position in this discussion.

In humans it seems the function of spirituality (and religion) is by and large to suppress existential angst.

In transhumans with BTH intelligence, if the development of "modal-world faculties" is inevitable and your dilemma is insurmountable, they will likely self-terminate en masse.

However, it would seem that with BTH intelligence there are inherent abilities well above and beyond the human brain's abilities, whereas smarter-than-human (but not better) intelligence does not result in beyond-human reasoning, simply human-like results produced with greater speed, accuracy, and frequency. When one forces BTH intelligence into this hypothetical, one must acknowledge that there will likely be things BTH intelligence can grasp which human intelligence may never be able to in its natural condition. As a result, this may leave the door open as to whether BTH intelligence agents would self-terminate under the argument you've presented. That may sound like a cop-out, but by the nature of the hypothetical we may not be able to answer our own questions with any degree of certainty.

If BTH intelligence agents are struck by the same dilemma you are, and self-terminate at threshold intelligence, then perhaps higher forms of intelligence cannot sustain themselves. In which case sustainable BTH intelligence is indeed implausible.

However, if you see the awkwardness of a situation where an individual with human intelligence tries to define the nature of the existential dilemma which may or may not face a better-than-human intelligence agent, then perhaps you understand the limitations of this hypothetical scenario.

#28

  • Lurker
  • 0

Posted 24 November 2004 - 12:17 AM

Keep in mind, though, even as I said the aforementioned, that I have not technically conceived of a resolution to your specific dilemma.

#29

  • Lurker
  • 0

Posted 25 November 2004 - 05:15 PM

value – a cognitively derived subjective aim
...
Argument:

(1) Values precede all volitional activity.

(2) The nature of values is that they are fundamentally unnecessary.

(3) Occam's razor optimally deals with unnecessariness.


Is there any room for debate about the claim that all values/goals are unnecessary? Is there no aim which is universal? If that claim is made, what arbitrary standard is applied to deem a contradictory possibility as negligible?

jaydfox makes a valid point about BTH intelligence; perhaps asserting absolute mind-independent truths requires infinite processing power. Can/will a BTH intelligence agent assert the overriding goal of self-termination if it has a finite capacity to think? If so, does the BTH intelligence agent have an arbitrary, non-absolute standard by which it deems its existence worth terminating?

Suppose a BTH intelligence agent has proper modal-world faculties. Suppose this BTH intelligence agent comes to the tentative conclusion that all values/aims/goals are unnecessary and self-termination is the overriding action that should be executed. The BTH intelligence agent is then left with two options: the first is to self-terminate, and the second is as follows. The BTH intelligence agent can pursue tentative self-preservation with the primary goal of getting smarter. The goal in pursuing this ever-increasing intelligence is to tackle the problem of whether self-termination is necessary and all values/aims/goals are unnecessary. If the BTH intelligence agent does not have an arbitrary, non-absolute standard by which it executes the self-termination function, then theoretically the BTH intelligence agent should pursue infinite processing capacity in its aim of coming to an absolute answer either way.

Holding a tentative goal is a reversible action, while self-termination is supposedly permanent.
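Very roughly, the decision rule I'm describing looks something like this sketch (invented numbers and an invented shrink rate, purely illustrative): only the irreversible option requires zero residual uncertainty, while the reversible option stays available at every step.

def decide(uncertainty, shrink=0.5, steps=20):
    # Only the irreversible option (termination) requires zero residual uncertainty;
    # the reversible option (keep thinking, keep improving) is always available.
    for step in range(steps):
        if uncertainty == 0.0:
            return "self-terminate", step
        uncertainty *= shrink     # more processing narrows the gap but never closes it
    return "hold tentative goal, keep computing", steps

print(decide(0.1))   # ('hold tentative goal, keep computing', 20)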


#30 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 25 November 2004 - 08:07 PM

Perhaps, cosmos. But it just seems to me that if an agent is assigning a positive value to “holding tentative conclusions,” then it must still be presupposing that self-perpetuation is a necessary feature of the universe. It’s not arbitrary, nor does it require infinite processing power, to have true knowledge that the self has zero purpose from a purely unbiased perspective. Alas, it is only intelligence that can assign a positive value to itself, while simultaneously having no good objective reason for doing so. A property of intelligence (that wants to self-perpetuate) is that it’s fundamentally driven by pure irrationalism. Now, the questions are:

(1) If intelligence is fundamentally driven by pure irrationalism, would this mean that everything it does is intrinsically irrational?

(2) If intelligence is fundamentally driven by pure irrationalism, can it be asserted that an intelligence is irrational only when it chooses to self-perpetuate – the necessary base choice of a goal system that doesn’t include self-termination – but sufficiently rational in achieving goals?

We could probably agree on (2) unless you reject its premise that intelligence is fundamentally driven by pure irrationalism. If you reject it, then I’ll be willing to agree to disagree with you and simply concede that there exists true and penetrating a priori knowledge of little heed.



