  LongeCity
              Advocacy & Research for Unlimited Lifespans



Neuroethics and Stupidity


30 replies to this topic

#1 ocsrazor

  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 10 March 2005 - 04:40 PM


This is just a thread for my own self amusement and to work out some ideas I have for a writing project in progress. I'll put up some questions and quotes first without biasing the thread with my opinions.

Two definitions up front:
Ethics - I tend to use the Jeremy Bentham utilitarian definition of ethics, i.e. that ethical behavior is what creates the greatest good (human happiness) for the greatest number of people.

Stupidity - I think about stupidity as an inability to internally model the real world robustly and quickly enough to be of benefit (and which is therefore possibly harmful) to an individual or to those whom his behavior affects.

Some questions to consider:

If it were possible to enhance a human brain through physical augmentation, in what situations is modification a) ethical and/or b) desirable?

Should stupidity be defined as a medical condition for which treatment is justified? (and possibly covered by a national health care system or insurance)

Is it ever ethical to enhance a person of lower intelligence against their will? For example, in situations where their level of intelligence may threaten their own life or the lives of others?

Should people in a society be subject to intelligence testing to determine whether they are qualified to participate in certain functions of that society, and then possibly be given the option of modifying themselves to reach a higher level of participation? (i.e. is it ethical to restrict their freedom in society based on their level of intelligence?)

Quotes on stupidity

Most people would die sooner than think; in fact, they do.
Bertrand Russell

Two things are infinite: the universe and human stupidity; and I'm not sure about the universe.
Albert Einstein

There is no sin except stupidity.
Oscar Wilde

Conservatives are not necessarily stupid, but most stupid people are conservatives.
John Stuart Mill ;)

At least two thirds of our miseries spring from human stupidity, human malice and those great motivators and justifiers of malice and stupidity, idealism, dogmatism and proselytizing zeal on behalf of religious or political idols.
Aldous Huxley

Stupidity cannot be cured with money, or through education, or by legislation. Stupidity is not a sin, the victim can't help being stupid. But stupidity is the only universal capital crime; the sentence is death, there is no appeal and execution is carried out automatically and without pity.
Robert Heinlein

Edited by ocsrazor, 10 March 2005 - 05:27 PM.


#2 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 10 March 2005 - 05:19 PM

This is a very good topic, Peter. I tend to agree that, given the technology, intelligence should be normalized to get rid of all the parasites and stupid people. I think the difficult part, however, would be formalizing the notion of intelligence. What is it? What does it mean to increase happiness? Is it the duty of one to be functional enough to make others happy, or at least not make them unhappy? Or is it the duty of those seeking happiness to understand and deny their illusions of sensuality, thereby not assigning magnitude-of-function qualities to others?

The notion of intelligence is value-laden, regardless of whether it’s associated with amassing accurate data; e.g., what are the reasons for amassing accurate data, and why? I would really – I mean I really, really would – like to know how one would get around that problem, so that when Eli or you integrate with your AIs before everyone else does and want to objectify your imaginary ends, imagining sentient obstacles in the process, we can finally neutralize the anti-intellectual revolt like we see today with the post-modernists.

This is with all due respect, of course, Peter. I'm just someone who might not understand many things you understand, even though I'd really like to.


#3 kevin

  • Member, Guardian
  • 2,779 posts
  • 822

Posted 10 March 2005 - 05:44 PM

Thanks for making me chuckle ocs..

I think that helping people who wish to be intellectually augmented so that they are capable of becoming more productive idea machines is fine. Forcing them to, however, isn't. Those who do not want to move from their current situation are entitled to stay right where they are, just like other evolutionary dead ends, or perhaps they will continue to occupy this niche quite happily for as long as entropy allows.

The way I see things going, today's human genius will be roughly equivalent to an amoeba in the not-distant future relative to the beings which we (or our machines) will become, beings who will have little desire to participate in our current lives, much as we are not interested in interacting with unicellular organisms in a social manner.

Nate, your question as to why we should amass data is a good one, but its answer will have to wait for hindsight, I'm afraid.

#4 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 10 March 2005 - 05:48 PM

Nate, your question as to why we should amass data is a good one, but its answer will have to wait for hindsight, I'm afraid.

I agree, Kevin, but answering that question alone would be taking it out of the context I presented.

#5

  • Lurker
  • 0

Posted 10 March 2005 - 06:00 PM

If one is so profoundly retarded that one's capacities don't exceed those of certain non-sentient animals, and one were born in this condition never having been sentient, then I don't believe there is a requirement or duty to make that individual intelligent and sentient. Does that make it unethical to impose intelligence and sentience on those individuals? No, provided that once those individuals are augmented they are given the right to choose whether to revert to their dumber form or to self-terminate.

Is it ethical to restrict freedom based on intelligence? Only in cases where there is no other alternative and their stupidity could harm or detrimentally affect others.

Is it ever ethical to enhance a person of lower intelligence against their will? For example, in situations where their level of intelligence may threaten their own life or the lives of others?


Against their will? Perhaps. The alternative could be to restrict that person's rights so that they are unable to harm others (as mentioned above).

Should people in a society be subject to intelligence testing to determine whether they are qualified to participate in certain functions of that society, and then possibly be given the option of modifying themselves to reach a higher level of participation? (i.e. is it ethical to restrict their freedom in society based on their level of intelligence?)


Yes to both questions. Intelligence tests should not be compulsory for membership in society, but they can be compulsory for certain functions within the society (careers). Restricting the rights of the less capable should only be done out of necessity. In other words, such people should be allotted the maximum rights that can safely be afforded.

(This post was written before I realized others had responded to this thread, so excuse any overlap or repetition.)

edit: These are my current personal views, subject to possible change, open to argument.

Edited by cosmos, 12 March 2005 - 01:17 AM.


#6 ocsrazor

  • Topic Starter
  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 10 March 2005 - 06:02 PM

Hi Nate,

My general operational definition of intelligence is that it is an ability to model the real world, make predictions based on that modeling, and act in such a way as to enhance one's happiness using those predictions. I don't see intelligence, if it is defined objectively, as being value-laden. I think the way we have tested intelligence in the past is value-laden, though, so better performance measures are desperately needed.

I think happiness can be formalized. I am a big fan of Csikszentmihalyi's work on finding optimal states for human beings, states based on a balance between challenge and the ability to achieve, and this has affected my thinking on what happiness is. Optimal happiness probably occurs when one learns to use sensual rewards to direct one's behavior to enhance one's overall life condition (and that of others). One of the realizations that one comes to is that maximizing the happiness (in this optimal sense) of informationally nearby others will lead to an enhanced environment for you as well.
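
As a rough numerical illustration of that challenge/ability balance (a toy sketch only, not Csikszentmihalyi's actual formulation; the happiness function and its tolerance parameter are invented for this example), a happiness score can be modeled as peaking when the challenge of a task matches the agent's ability and falling off when the task is much too easy or much too hard:

import math

# Toy model: happiness peaks when challenge matches ability (the "flow" channel)
# and decays toward boredom (too easy) or anxiety (too hard) as they diverge.
def happiness(challenge: float, ability: float, tolerance: float = 1.0) -> float:
    """Return a score in (0, 1] that is maximal when challenge == ability."""
    mismatch = challenge - ability
    return math.exp(-(mismatch ** 2) / (2 * tolerance ** 2))

if __name__ == "__main__":
    ability = 5.0
    for challenge in (1.0, 3.0, 5.0, 7.0, 9.0):
        print(f"challenge={challenge:.0f}  happiness={happiness(challenge, ability):.2f}")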

This is really interesting stuff for me, and I think it's deeply tied to the arrow-of-complexity issues I am trying to sort out for myself. There is a driving force in the universe which is causing an increasing information density, and therefore an increase in the informational connections between all systems - humans are at the local wave front of this increasing connectionism. I think this optimal human happiness I mention is an expression of that deep force which causes increasing complexity. I have made it a point to search for why this complexity is a GOOD thing in the really big sense, and for the factors that affect how fast it can go. (Too much complexity too quickly seems to be a BAD thing in the grand sense because it leads to system instability.) So the answer to your question is that if I were an AI, I would be trying to figure out how to maximize the 'happiness' and consciousness of all nearby systems, but this is just intuitive at this point, so I'm trying to formalize these thoughts.

I think the way you neutralize the revolt is by putting forth a very simple, very clear set of ethics based on the above that rings true with people at all levels of education. Of course there are many cultural system bugs in place (i.e. religion) that need to be overcome to really make this happen.

So to bring it back to stupidity, I am questioning for myself when it would be right to enhance someone's intelligence - i.e. does this actually increase their optimal happiness and their ability to help others?

#7 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 10 March 2005 - 06:17 PM

"Very intelligent people do very stupid things all the time"
is what the epitaph of Robespierre's grave should have read IMHO.

History is full of such examples.

Individual intelligence is very rarely broad enough to encompass all aspects of living, and that is why we are a social species and not just a bunch of individuals trapped in competition alone, but also seeking forms of compensation through pragmatic social methods.

Churchill remarked of democracy that it is a lousy form of governance but still the best we have, and I think that is where much of the problem lies for many, because under such a system they must share power with those that possess an equal right to be stupid.

Stupidity is both defined by perspective and also by result.

The vote of a homicidal maniac or a fool weighs as much as that of an Einstein or a Gandhi under democracy. That is why Plato came to reject the entire system that his teacher valued so much that he gave his life for it, but which Plato also saw as having *stupidly* killed him through its corruption.

Should stupidity be defined as a medical condition for which treatment is justified? (and possibly covered by a national health care system or insurance)


Stupidity is already defined as a mental condition and we normally refer to it in terms of *competency*.

I switched your order of questions intentionally because the first and third questions may be better understood as dependent conditions that way.

If it were possible to enhance a human brain through physical augmentation, in what situations is modification a) ethical and/or b) desirable?

and

Is it ever ethical to enhance a person of lower intelligence against their will? For example, in situations where their level of intelligence may threaten their own life or the lives of others?


It is both ethical and desirable only so long as it is *voluntary*; however, there is also a social value in *compulsory education*, and this returns to politics and to who controls the messages from that system, IMO.

Now I will return to laughing MAO, and I sincerely hope this rant made you feel better; it certainly did for me.

However, I think what you want to call intelligence is really wisdom, which is a far more difficult thing to teach or program for.

#8 ocsrazor

  • Topic Starter
  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 10 March 2005 - 07:44 PM

Thanks guys,

Some great responses. You hit many of the issues that I have been thinking about, specifically the problems with less capable/intelligent individuals hampering the advance of an open society as a whole. The problems we have had locally with board management, and the results of the last election here in the US, have really hammered home how a lack of critical thinking skills in even a minority of an organization can make things unpleasant for everyone. Stupidity is a human system critical instability. Also, less capable individuals tend to group together to insulate themselves from criticism, thereby avoiding selection pressures.

PS Laz, if you really want a laugh, the figurative image that has been in my head of recent events has been of a few chimpanzees having a shit-flinging fight in the middle of a really great dinner party :D

#9 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 10 March 2005 - 08:52 PM

I think I understand your perspective a little bit better, Peter. Thanks. I just have one more issue I want to bring up regarding some things you said in response.

I think happiness can be formalized. I am a big fan of Csikszentmihalyi's work on finding optimal states for human beings, states based on a balance between challenge and the ability to achieve, and this has affected my thinking on what happiness is.

So the answer to your question is that if I were an AI, I would be trying to figure out how to maximize the 'happiness' and consciousness of all nearby systems, but this is just intuitive at this point, so I'm trying to formalize these thoughts.

OK, there are several neurohackers racing to the finish line, and one of them is you. I think that’s great.

Another seeming fact is that you hold the above two basic values. I’m wondering whether they might be in conflict with each other. On one hand, you believe balancing challenge and ability is universally a good thing. On the other, you want to optimize only “nearby” systems to minimize the stupid factor.

The reason I think this matters is because, barring great advances in social engineering by transhuman intelligence, in developed societies most of the nominal intelligence is concentrated on the engine of the much larger, stupidity-infested service industries such as banking, insurance, real estate, etc., and within the microeconomies of all industries.

Even if many people become acquainted with the idea of transhuman intelligence for the first time in the next several years, they are already out of the race, especially if transhumans will only be focusing on nearby constituents. That’s fine from my perspective, of course. But if the greater populace already lost the race before even trying to enter, your value of wanting to minimize stupidity at large conflicts with the desire to see agents optimize happiness through balancing challenge and ability, since a large part of balancing challenge and ability is to have enough dollar votes to match the challenges with the abilities. Because of the pace, they have no choice but to continue wasting time getting dollar votes doing only stupid things indefinitely.

#10

  • Lurker
  • 0

Posted 10 March 2005 - 09:07 PM

This is a little bit off topic but...

http://www.reason.co...ka.shtml#008709

Leon Kass and another assault on progress.

As I asked when the report was first issued: What does Kass mean by all research? "Is this an effort to turn back the clock on such beneficial technologies as assisted reproduction or pre-implantation diagnosis of genetic diseases in embryos? Does this debate include another fruitless and contentious effort to force a national consensus on the morality of abortion and contraception? Kass lost the debate on assisted reproduction in the 1970s. Is this a way for him to reopen that debate for a second round in which he hopes to fare better?"

So the answer to my question is yes, Kass is trying to turn back the clock. It is clear that he has never given up on trying to deny the benefits of safe reproductive and biomedical technologies to future generations and that he never will.


If "Stupidity is a human system critical instability", then we're due for a system crash. [glasses]

#11 ocsrazor

  • Topic Starter
  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 10 March 2005 - 09:33 PM

Hi Nate,

I meant "nearby" in the practical sense, as in a system can't affect another system that is informationally distant. Its not a matter of want, its a matter of can't. Actually when I said that I was thinking more of things in terms of the Universal sense. As in bringing consciousness to the solar system, nearby star systems, etc., ad infinitum. I think of all of earth as an informationally local system. I dont think any human on the planet is going to be able to escape the sweep of the type of things we are talking about it.

From my perspective, the insurance and banking industries are already becoming highly automated and seem to require less and less input of intelligence over time. I will have to think about that one some more. I'm very concerned about information flows within cultural systems.

I think the perspective of a race is not the one I would select; I think we are looking at coalescing systems, not competing ones. The system I am envisioning will want to incorporate as much freely offered intelligence as possible.

#12 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 11 March 2005 - 10:51 AM

If we subscribe to utilitarianism, I would say, yes of course, cure the stupid. Incidentally, I don't. Your definition of stupidity depends on that of happiness, which I think is so complex that we cannot assess it with sufficient confidence to justify a patronizing intervention (i.e. to "amend" their condition against their will). See Kip.
A superhuman AI could (by definition) be clever enough to assess human stupidity, but I would guess it would always run into trouble in understanding the stupidity/intelligence/happiness of its own peers, or else these guys could become pretty bored with life. I would venture to say that social interaction is fun only with those whose stupidity/intelligence/happiness you cannot sufficiently understand to justify a patronizing intervention.

#13 ocsrazor

  • Topic Starter
  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 11 March 2005 - 07:51 PM

Hi John,

I don't think my definition of happiness is that complex, just defined in a way that is precise. It's a balance between the ability to make progress toward goal states and sufficient challenge to make that progress interesting to the system in question. If things are too hard or too easy, there is a fall-off in happiness.

My concern about stupidity and judging when it might be ethical to modify it has more to do with the negotiations that must occur when lower order complexity systems rub up against higher order complexity systems in battles for scarce resources. We already have experienced a taste of this in the ongoing battles for animal rights.

My gut feeling is we are going to run into these types of issues more and more frequently as humanity begins to evolve into a variety of new forms. We will want to try and give the lower order systems the benefit of the doubt, but we will also want to allow more complex systems the chance to continue to maximize their ability to use resources - information space becoming the most critical resource. This is the heart of the ethical questions I am asking.

Thank you for bringing the Kip Werking piece to my attention; I am highly critical of it, though. There are several oversimplifications that make his positions untenable. There is a critical flaw in his argument about the maximization of happiness. He assumes that happiness is directly tied to sensual (genetically based) pleasure. This is a highly questionable oversimplification, as people have long been able to override their sensual desires in favor of higher-order social achievements. Sensual pleasure-seeking still drives much of human behavior, but it isn't the basis of ethical behavior and certainly not of utilitarianism.

His ideas on behavioral prediction are also oversimplified. Even animal behavior is statistically distributed; there is no absolute prediction. All animals operate within a particular behavioral space. The more complex the nervous system, the larger the behavioral space, and the more statistical uncertainty there is about prediction. As such, his 4th conceit is a fabrication and does not meet Occam's razor - it is an unnecessary and unprovable assumption. Hard determinism does not exist in the real world, as Schroedinger's legacy so clearly points out. For the same reason, your last sentence is absolutely correct, John: only when one can get a fairly robust picture of the behavioral space of a system can one intelligently make modifications in that system. To control a highly dynamic state space, you have to have control of many, many variables at once.
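
A rough illustration of the point about behavioral space and predictive uncertainty (a toy sketch under an assumed uniform distribution over available actions; the function below is just the standard Shannon entropy formula, and the framing of "behaviors" as a discrete distribution is an assumption made for the example): the larger the space of possible behaviors, the higher the entropy of the distribution over them, and the less certain any single point prediction can be.

import math

# Toy illustration: treat a "behavioral space" as a discrete probability
# distribution over available actions. Entropy grows with the number of
# actions, so point predictions become less certain for more complex systems.
def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

if __name__ == "__main__":
    for n_actions in (2, 8, 64, 1024):
        uniform = [1.0 / n_actions] * n_actions  # maximally uncertain case
        print(f"{n_actions:5d} possible behaviors -> "
              f"{shannon_entropy(uniform):.1f} bits of predictive uncertainty")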

#14 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 11 March 2005 - 08:49 PM

Hi Peter,

I’m not sure whether Kip would want me to defend TPC, but I acknowledge that I hope not to disgracefully misrepresent him.

I don’t think its conclusions revolve so much around the intimate ties between happiness and gene propagation as around the general outcome when happiness (whether it’s derived from balancing challenges and abilities or from maximizing pleasure centers) and gene propagation (whether or not it phases out) are fully automated. This general outcome will be agents having the capacity to directly observe the unnecessariness of ethics or moral philosophies. It’s difficult for us to envision this now, in a time when we still face existential threats and possess very real-seeming sensualities. But nonetheless it’s a naturalistic fallacy to assign goodness to complex, self-enhancing systems just by observing that this occurs somewhere in reality. I would not assign the goodness quality to it; I would simply say that this behavior is permissible for no other reason than that the cosmos doesn’t tell us it’s not. Coincidentally, converging on this ideal is more likely to trump the alternatives, but this acknowledgement should have no relation to choosing the actual course.

BTW, sorry for the race analogy. It was a bad one. I think being a little indifferent to the stakes might be inherent in not having as much of an understanding of what’s going on at the forefront as those who are already there. I hope to continue improving my understanding with time. I appreciate your patience.

Edited by Nate Barna, 11 March 2005 - 09:05 PM.


#15 ocsrazor

  • Topic Starter
  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 11 March 2005 - 09:31 PM

Hmmm Nate,

My working hypothesis is that there is a deep ethics in the universe which is independent of what particular substrate you are running on. As such, I don't think greater-than-human-intelligence agents will be less aware of ethics; they will be more so. I think ethics will be a native part of any highly self-aware system (it's a necessary system property), and that is why I object to Kip's reasoning that this won't be true. There must be rules for best possible behavior when you are a member of a highly complex system.

I assign goodness to complex, self-enhancing systems because they bring consciousness into existence. I start with consciousness as my fundamental assumption of goodness and extend that both backward, to the processes that were necessary to create consciousness, and forward, to those processes which increase self-awareness and are able to bring more consciousness into being in the universe. For me the reverse is also true: processes which destroy consciousness on the whole are inherently bad.

Thank you Nate, I think you just got me to state my worldview in the shortest and most straightforward expression I have yet produced [lol]

#16 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 11 March 2005 - 11:07 PM

ocsrazor I assign goodness to complex, self-enhancing systems because they bring consciousness into existence. I start with consciousness as my fundamental assumption of goodness and extend that both backward, to the processes that were necessary to create consciousness, and forward, to those processes which increase self-awareness and are able to bring more consciousness into being in the universe. For me the reverse is also true: processes which destroy consciousness on the whole are inherently bad.

That’s a good way of looking at it, Peter. I was trying to sort that very thing out in my thread, “An Immortalist Proposition.” But instead of invoking consciousness to justify goodness, I invoked intelligence to simply observe the actual properties of arbitrariness. I still think it’s a naturalistic fallacy to propose that goodness is intrinsic to consciousness. The notion of goodness implies ought, and religion is what we want to try to get away from. I’m not sure if my invocation was any more successful, but it at least attempts to get away from implying what ought to be done on the basis of what is – the naturalistic fallacy.

If we can, on a large scale, intersubjectively agree on a particular universal description of intelligence, I think we’ll be able to agree on the reasons why some of us might just so happen to meet each other somewhere on the complex, self-enhancing systems vector without being religious zealots. Coming to a universal description of intelligence at this time, however, would be impractical. It’s possible that we may instead come to a compromise of several items of the description that relates intelligence and arbitrariness.

(1) Intelligence is a system with self-interest. “Self-interest” here means that the system is capable of regarding itself when modeling and normalizing courses of action; indeed, it couldn’t do otherwise since the system itself is what would have to act based on any action model.

(2) Intelligence is a system not only with self-interest, but one that optimizes its ability to exist and persist, since its non-existence would be a void of intelligence, and a void of intelligence is less intelligence, which is, by definition, not an intelligent course of action to model.

(3) The notion of arbitrariness is an abstraction that only a system with an ability to abstract it can perceive and understand. Arbitrariness does not exist independently of such systems, which implies that events such as ocean currents are not arbitrary. Intelligence is one such system that can abstract the notion of arbitrariness, and the notion means courses of action that are based on whim and not rationality or intelligence.

(4) Because modeling courses of action in the direction of less intelligence is not based on intelligence, it would be arbitrary to model and act on such courses of action. Therefore, the notion of non-arbitrariness is bound up in the systems of intelligence.

None of this tells us what’s good or what we ought to do, but at least we know that it’s not arbitrary to converge on the complex, self-enhancing systems vector, whereas it likely would be if we did anything more stupid.

#17

  • Lurker
  • 0

Posted 12 March 2005 - 12:23 AM

None of this tells us what’s good or what we ought to do, but at least we know that it’s not arbitrary to converge on the complex, self-enhancing systems vector, whereas it likely would be if we did anything more stupid.


Are we then left to come to the decision of what we ought to have done, in hindsight (as Kevin suggests), while we "converge on the complex, self enhancing systems vector"?

#18 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 12 March 2005 - 12:34 AM

cosmos Are we then left to come to the decision of what we ought to have done, in hindsight (as Kevin suggests), while we "converge on the complex, self enhancing systems vector"?

Cosmos, the only decisions you're left with are the ones you choose among what's physically possible. You're an agent who models and normalizes courses of action. I don't need reasons for why you choose what you choose. All I think is that being a religious zealot simply might cause your abstract system, however you abstract it, to crash. Whether or not you want that is up to you.

#19 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 12 March 2005 - 12:55 AM

In other words, I’m just representing what is the case. I will do what I will do. Others will do what they will do. In the end, some of us will converge on the complex, self-enhancing systems vector. Some of us won’t. For those who do, "those who do" is the case. For those who don’t, "those who don't" is the case. No drama, just what is the case is the case.

#20

  • Lurker
  • 0

Posted 12 March 2005 - 01:15 AM

Nate, I see your second response to my post and have changed my reply.

In other words, I’m just representing what is the case. I will do what I will do. Others will do what they will do. In the end, some of us will converge on the complex, self-enhancing systems vector. Some of us won’t. For those who do, "those who do" is the case. For those who don’t, "those who don't" is the case. No drama, just what is the case is the case.


I've emphasized the part of your post that I don't understand. Are you saying that there is no overriding obligation to act either way? There is no requirement to choose a non-arbitrary course of action over an arbitrary one? Sorry for persisting with this line of questioning, but I'm curious.

#21 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 12 March 2005 - 01:23 AM

cosmos

Nate Barna In other words, I’m just representing what is the case. I will do what I will do. Others will do what they will do. In the end, some of us will converge on the complex, self-enhancing systems vector. Some of us won’t. For those who do, "those who do" is the case. For those who don’t, "those who don't" is the case. No drama, just what is the case is the case.

I've emphasized the part of your post that I don't understand. Are you saying that there is no overriding obligation to act either way? There is no requirement to choose a non-arbitrary course of action over an arbitrary one? Sorry for persisting with this line of questioning, but I'm curious.

Yes, that's exactly what I'm saying. You have no real obligation to act either way. The only kind of obligation you might perceive would be an abstraction that might form as a result of some abstract contract you might have with yourself or others. In reality you're still not required to act on it, but you, the way your system abstracts itself, might perceive conditionals (if-then abstractions) and tend toward not violating them. But in the end, what is the case is the case, and physics isn't heartbroken either way.

#22

  • Lurker
  • 0

Posted 12 March 2005 - 01:29 AM

I appreciate the clarification.

#23 ocsrazor

  • Topic Starter
  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 12 March 2005 - 03:47 AM

Yes, that's exactly what I'm saying. You have no real obligation to act either way. The only kind of obligation you might perceive would be an abstraction that might form as a result of some abstract contract you might have with yourself or others. In reality you're still not required to act on it, but you, the way your system abstracts itself, might perceive conditionals (if-then abstractions) and tend toward not violating them. But in the end, what is the case is the case, and physics isn't heartbroken either way.


Hi Nate,

You are absolutely correct that we will tend toward the physically possible. To fight against the directionality of increasing complexity, self-organization, and consciousness (or intelligence, if you like) induces suffering in human systems. You can be an ultimate moral relativist and insist that there is no good or bad, that things just are, but there is an observable loss of information and an increase in pain in systems that go against the "flow". There are paths forward which are less painful (in the grand sense) than others, and this type of pain should be perceivable by any intelligent system on any scale. (To link this back to the original discussion, stupidity in my conception is the set of processes which cause this type of pain.)

Being a religious zealot means adopting a system of morals - a fixed ethical code. I do not believe the Universe is in any sense fixed; even the fundamental constants seem to be evolving, so this would seem to indicate that ethics should also evolve as system relationships change. The ethics I am seeking is about these relationships between systems and the most effective way to optimize those relations, and it would adapt to changing relationships. There is something about informational economy and increasing specified complexity that the Universe "likes", and it rewards systems that follow these trends with increasing access to information. You can choose not to label this as good, but it is what we perceive as good or desirable, and I don't think this is specific to our particular incarnation of intelligence.

The problem with physics is that it has no understanding of contextually specified information, and therefore does not assign value to any particular action. In the sense of Warren Weaver (of information theory fame), there is no measure of the effectiveness of the transfer of meaning anywhere in physics as yet. Current physics still denies the arrow of specified complexity for the most part, and this is a major flaw which has only recently been acknowledged. That physics isn't heartbroken over one particular course of action or another is a lack of perception of deep meaning on the part of physics.

#24 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 12 March 2005 - 04:43 AM

Nice thoughts there.

I don't think my definition of happiness is that complex

Yes, exactly that was my criticism. Mine is. Thus I am left with no choice but to play the game of life without being able to formalize how and why I play the way I play. Glad to hear this is "permissible", thanks Nate ;)

Nihilism seems to me more successful at meeting Occam's razor than a "deep ethics in the universe", even when it results from the staggering complexity of the perceivable world. (Duly recognizing that the razor is more like a guideline than a binding law.)

#25

  • Lurker
  • 0

Posted 12 March 2005 - 08:37 AM

John Schloendorn:

Nihilism seems to me more successful at meeting Occam's razor than a "deep ethics in the universe", even when it results from the staggering complexity of the perceivable world. (Duly recognizing that the razor is more like a guideline than a binding law.)


This has been a source of continual confusion for me. I think it's unlikely that a static universal ethic exists, but neither do I necessarily subscribe to extreme moral relativism.

ocsrazor:

I do not believe the Universe is in any sense fixed; even the fundamental constants seem to be evolving, so this would seem to indicate that ethics should also evolve as system relationships change. The ethics I am seeking is about these relationships between systems and the most effective way to optimize those relations, and it would adapt to changing relationships.


However, I tend to agree with this assessment of ethics (granted, my agreement remains tentative and intuitional). I suppose what ocsrazor describes could be defined as a form of situational ethics. edit: Hmmm... situational ethics as it is currently known seems to be associated with Christianity, having been developed by a priest; I didn't intend to make that association. Take "situational ethics" in this post to mean what each word denotes.

I'm thinking out loud here, feel free to ignore this post or to point out a nonsensical statement.

Edited by cosmos, 12 March 2005 - 09:06 AM.


#26 ocsrazor

  • Topic Starter
  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 12 March 2005 - 12:41 PM

Nihilism seems to me more successful at meeting Occam's razor than a "deep ethics in the universe", even when it results from the staggering complexity of the perceivable world. (Duly recognizing that the razor is more like a guideline than a binding law.)


Hi John and Cosmos,

Nihilism suffers from the same problem as physics: it's operating on a noncontextual informational worldview. It ignores the increasing complexity in the world and is based on a Newtonian-Darwinian perspective which is now severely in need of updating in the cultural imagination. It's not that nihilism meets Occam's razor; it simply ignores many of the phenomena we see in the real world, and so is a non-explanation. We do not live in a random universe, entropy does not rule, and the arrow of increasing specified complexity continually drives the formation of higher-order systems despite entropy. (As an aside, I'm a huge fan of Nietzsche, but in many ways he just gave up rather than face the really hard questions. We can forgive him because he just didn't have the intellectual toolbox we gained in the 20th century.)

I do find Situational Ethics interesting, because one of the defining principles for how systems move forward along the path of specified organization is by walking the line between organization and chaos. Systems that stray too far to either side of the line will stagnate or become unstable. Joseph Fletcher has also correctly assessed that the only contribution of Christian thought worthy of being added to our worldview is that love is a supreme law. What we know as love is another expression of the 'deep ethic' I was talking about earlier, this interesting deep 'desire' of all systems to organize into higher-order systems. The concept of love likely represents one of the most fundamental physical laws, and this is interesting because Fletcher has transmuted God into a physical law.

#27 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 12 March 2005 - 01:36 PM

Hi Peter,

I shall come at this from another angle, while trying to tie it in with the central topic. After this, I’ll give you the last word if you want, unless there are questions for me.

Humans tend to speak in vague generalizations. The notions of goodness or badness are vague generalizations. A generalization is a representation whose referents aren’t completely specified in the abstractions (observer information, or components on a component hierarchy) of the agent. The more referents that are subsumed by a generalization, the more vague the generalization, and the higher the probability its referents are too inconsistent for the sense of the generalization to be considered truth-bearing.

Granted, to accurately specify all of the referents of a generalization is typically impossible for a human mind. However, one can at least know when one’s generalizing too vaguely, that this implies probable inaccuracy, and that a more accurate specification of the dynamic referents is in order. Incidentally, we may also note that the inaccuracy of a vague generalization indicates that it may not even exist at all, but that it’s just a mystical construct, such as a god, in the attempt to explain things, albeit inaccurately.

When we begin to specify the notions of pain, goodness or badness, we use symbolic signals to represent a set of dynamic signals we can’t entirely account for. But we can still identify enough members in this set to figure out that the abstractions of pain, goodness or badness are not inherently truth-bearing notions. They are mystical manifestations that attempt to describe an unaccountable set of dynamic signals that say nothing about an independent ethical nature. We’ll be better able to conceive of this the better we become at not generalizing, especially as superintelligent agents.

Unfortunately, we must still operate in a context of agents who think their vague generalizations are truth-bearing. Those who are working on the development of transhuman intelligence, and need to garner more support or dollar votes, may still need to appeal to the mystical intuitions of the average individual. For the ends, I’d say such means are harmless. Suppose you achieve transhuman intelligence, you’ll probably be much better equipped to deal with mystical intuitions and direct them toward the opportunities, through democratic policies, to achieve your level of understanding and competency.

All the while I don’t think there is any need for the transhumanist to be a mystic herself.

#28 ocsrazor

  • Topic Starter
  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 12 March 2005 - 02:25 PM

Hi Nate,

You are running up against the limitations of philosophy here, which tends to get lost in semantic games. A scientist's definition of generalization has much different semantics from the one you have just used. In science, a generalization is an understanding of system relationships that can be applied across many systems. It is not vague in any sense of the word, and in fact becomes more likely to be correct the more system relationships it can accurately describe.

The ideas I am talking about are also not mystical in any sense; it is my strong mathematical intuition that qualities such as goodness, badness, and pain can be quantified. These are descriptors of relationships in an informational universe. What we perceive as good or bad, painful or pleasurable, are sensations that are directly tied to real-world information about how agents relate to each other and to other objects.

Constructs such as gods are very low-resolution descriptors of real-world processes. I absolutely agree they are highly inaccurate. In most cases they are simply the attribution of intelligence to physical or informational processes, or to collections of processes.

So, our key disagreement is that I believe it is possible to quantify the concepts of good and bad in terms of information flows in the Universe and that what we have come to know as good and bad is directly tied to increasing specified complexity. This is a testable hypothesis though, so there should be a resolution to these questions and an accurate description of relationships that deepen specified complexity.

To try to make myself absolutely clear: I want a highly accurate picture of which relationships between systems, at all scales, produce increasing specified complexity. I believe human brains emotionally perceive increasing specified complexity as 'good'. 'Good' is a low-resolution concept, but its directionality is correctly aligned with the arrow of complexity, and as such it is not mystical and represents the sensation of real-world informational processes. It is much like our sensation of time, which is also an informational relationship descriptor.

Your comments are helping me clarify my position Nate, so feel free to continue if you would like.

#29 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 12 March 2005 - 03:59 PM

ocsrazor You are running up against the limitations of philosophy here, which tends to get lost in semantic games.

I think this might be a slightly unfair judgment. We are successfully calibrating I think, Peter, by getting to the heart of the matter, and philosophical tools are playing an important role in this. Far from a semantic game, philosophy works to achieve cognitive clarity.

ocsrazor A scientist's definition of generalization has much different semantics from the one you have just used. In science, a generalization is an understanding of system relationships that can be applied across many systems.

I should’ve made a distinction between a useful generalization and a vague generalization that causes problems. I’m fully aware that science uses generalizations to understand system relationships. But your statements here don’t invalidate that the nature of the generalization is subsuming referents that are not explicitly specified in the generalization itself. We agree on this, and there is no reason to suggest we are operating on completely separate semantics, as if to suggest philosophy and science aren't interdependent. I simply didn’t go far enough and grant that there are useful and truth-bearing generalizations, which you refer to as – as I approximate – high-enough resolution descriptors that are qualified to be considered reified (i.e., the treatment of conceptual representations as if they are real).

ocsrazor The ideas I am talking about are also not mystical in any sense; it is my strong mathematical intuition that qualities such as goodness, badness, and pain can be quantified.

Yes, I certainly agree with you that they can be quantified. This is what I meant by “accurate specifications.” My only other proviso is that it may be possible for these particular qualities to be too vague, such that their natures don’t contain referents that correspond to an independent ethical system, which doesn’t exist in reality. We may, of course, conceive of one – or several, in the case of evolving ethics – but one would be hard-pressed to suggest that ethical universals exist independently of agents, especially in a presentation to rational policymakers and constituents.

At this point I don’t think we’re very much in disagreement. I don’t have a moral objection to how you deal with information. I think I understand how you treat information. If I’m still misunderstanding, then that might simply be an indication that I need to work toward a larger overlap in our knowledge bases. But this shouldn’t imply that philosophy is an inhibition. On the contrary, it’s a liberator, and, besides helping in a calibration, it made it possible for an agent to realize that it should know what another knows to understand the other better.


#30 ocsrazor

  • Topic Starter
  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 12 March 2005 - 08:24 PM

Hi Nate

Nate Barna
I think this might be a slightly unfair judgment. We are successfully calibrating I think, Peter, by getting to the heart of the matter, and philosophical tools are playing an important role in this. Far from a semantic game, philosophy works to achieve cognitive clarity.


I do have a chip on my shoulder about modern philosophy, which, as you can probably tell, I have rejected as a tool for getting meaningful answers to the current intellectual problems we are facing. I have been terribly disappointed by modern cognitive philosophers, who at best seem able to do little more than describe what science has already done (for at least the last 50 years), and at worst completely obfuscate issues that were fairly clear to begin with. This is where my frustration stems from. I also have a problem with professional philosophers' use of language, which for me is not maximally compressed ;)

I should’ve made a distinction between a useful generalization and a vague generalization that causes problems. I’m fully aware that science uses generalizations to understand system relationships. But your statements here don’t invalidate that the nature of the generalization is subsuming referents that are not explicitly specified in the generalization itself. We agree on this,


Absolutely, we agree on generalizations subsuming referents. I am particularly sensitive to the terminology of generalization being used negatively because I am in a field (neuroscience) which desperately needs to escape the reductionistic paradigm and start looking at system generalizations. When a mature generalization is made in science, though, it can be specifically and accurately applied to all systems which it describes.

and there is no reason to suggest we are operating on completely separate semantics, as if to suggest philosophy and science aren't interdependent.


I think science may be becoming independent of philosophy, much as philosophy became independent of religion. This has to do with the fact that I believe science has a more precise and more robust way of describing phenomena than philosophy. This is another discussion though [lol]

I simply didn’t go far enough and grant that there are useful and truth-bearing generalizations, which you refer to as – as I approximate – high-enough resolution descriptors that are qualified to be considered reified (i.e., the treatment of conceptual representations as if they are real).


Yes. Energy, space-time, information being the canonical ones.

but one would be hard-pressed to suggest that ethical universals exist independently of agents, especially in a presentation to rational policymakers and constituents.


A slight further clarification: the 'deep ethics' of which I spoke is actually the expression of a physical law of complexity and organization, which becomes ethics when you get to the level of complexity of intelligent agents.

But this shouldn’t imply that philosophy is an inhibition. On the contrary, it’s a liberator, and, besides helping in a calibration, it made it possible for an agent to realize that it should know what another knows to understand the other better.


I certainly wouldn't discourage anyone from learning philosophy. It is a very valuable tool for conditioning the mind. For the reasons I mentioned above I would suggest you be wary of thinking it can actually solve any of the really interesting BIG questions though.

Peter



