LongeCity
Advocacy & Research for Unlimited Lifespans

Ethical Love


9 replies to this topic

#1 Avatar Polymorph

  • Guest Techno-Rapture
  • 22 posts
  • 0
  • Location:Melbourne Australia

Posted 24 October 2002 - 11:48 PM


I have a new version of my book The Path of Love, retitled Ethical Love, on the internet at www.paradigm4.com.au/way/ - posted in chapters. The points will be arranged in a more hierarchical fashion in a few weeks.

Contact can be made through: avatarpolymorph@hotmail.com

This book argues for the moral integrity of human mortals, the positivity of amortality (including immortality), principles of maximum choice and minimum force, support for systems operation in the Singularity, control of negative emotions, and so on. It does not promote the notion of uploading by atomic-level map copying of neurological patterns, or of required integration into computronium virtualities. It supports flexible choice of subsystem structure - including AI structures and beyond, via upgrading techniques where desired - as well as mixed structures and reversible structures. It also supports notions of personal privacy, regardless of informational capabilities, in terms of environmental sensory impact (e.g. through dust-like or fog-like pseudo-AI structures) and virtuality/overtuality (and reverse) overlays.

Concurrently with Ethical Love, I am continuing to promote my basic 6-point plan as an IMMEDIATE approach, outside of the wider implications of amortality (called emortality by some), the Singularity and the Techno-Rapture. So far this has involved only wide-scale postering, but I hope to place my first advertisement shortly (hopefully in December), which should be fun.

"6 points

1.
INCREASED FUNDING FOR CANCER RESEARCH.

2.
INCREASED FUNDING FOR HEART CELL RESEARCH, HEART REGROWTH RESEARCH, GROWING OF REPLACEMENT HEARTS FROM SELF CELLS AND STROKE RESEARCH.

3.
INCREASED FUNDING FOR ALLEVIATING THIRD WORLD MALNUTRITION (CURRENTLY UNDER ONE BILLION IN PARTS OF AFRICA AND INDIA).

4.
INCREASED FUNDING FOR GENETIC AND PROTEOMIC THERAPIES AND INTER AND INTRA-CELLULAR NANOTECHNOLOGY (MOLECULAR COMPUTERS CONTROLLING MOLECULAR CELL REPAIR MACHINES) FOR LIFESPAN EXTENSION AND YOUTHFUL APPEARANCE, AND FREE PROVISION OF SUCH TO ALL CITIZENS (CULMINATING IN JUST OVER A DECADE).

5.
INCREASED FUNDING FOR SELF-REPRODUCING ASSEMBLER NANOTECHNOLOGY (MICRO-FACTORIES REQUIRING ONLY ENERGY, MINERALS AND INFORMATION WHICH ALLOW FOR THE PRODUCTION OF MATERIAL GOODS AND MACROBOTS WITHOUT MANUAL LABOUR) AND GUARANTEED ACCESS FOR ALL CITIZENS TO SUCH (CULMINATING IN JUST UNDER TWO DECADES).

6.
LEGISLATE AGAINST THE KILLING OF PARTIALLY AWARE ANIMALS SUCH AS APES, DOLPHINS AND DOGS IN ORDER TO ALLOW DISCUSSION OF THEIR FUTURE STATUS."

------

"ETHICAL LOVE
By Avatar Polymorph

http://www.paradigm4.com.au/way/

A book about nanotechnology in your body, augmenting your brain, living forever, virtuality, using self-reproducing nanotechnological assemblers as micro-factories, including for making macrobots. A book about love and how to assist your fellow sentients. A book about the promotion of choice and protective shielding. A book about how to reach hypertopia, personal growth, meditation, self-awareness, self-control and the joy of living within choice. A book for ethicals."


Towards Ascension!
Aumentar!
In joy
In Celebration of the Techno-Rapture
33 After Armstrong

#2 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 18 November 2003 - 03:44 AM

I highly recommend Avatar Polymorph's works. He says a lot, and I agree with the majority of it. I'm going to check out his update right away.

In Love,
Michael

#3 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 18 November 2003 - 05:12 AM

ImmInst members: does this qualify as a "troll" post?


#4 reason

  • Guardian Reason
  • 1,101 posts
  • 251
  • Location:US

Posted 18 November 2003 - 11:18 PM

Yup, a troll, and a fine example of the type.

Reason
Founder, Longevity Meme
reason@longevitymeme.org
http://www.longevitymeme.org

#5 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 19 November 2003 - 06:59 AM

Yet vote_for_bush posted the exact same thing on the "Why Work Towards the Singularity?" thread, and Laz and John_Doe didn't have a problem with it. We should reach some sort of consensus on whether posters like vote_for_bush should be allowed to post freely. Can you imagine what would happen to the overall quality of this forum if there were 10 like him, or merely 5?

#6 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 19 November 2003 - 07:17 AM

Yet vote_for_bush posted the exact same thing on the "Why Work Towards the Singularity?"


Michael, that is the point: it wasn't exactly the same thing, and he is entitled to his opinion and to express it.

Is that what you want a Super Intelligent AI to do: suppress all speech that does not conform?

#7 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 19 November 2003 - 08:37 AM

Nope; I would want an SI to be much more benevolent than I am, that's the whole point of creating one. Also, the material wealth and new space that SI could open up would change the context of problems like these appreciably.

Laz, forums like these can be *eaten from the inside* if too many trolls like this one show up. You'll see that on nearly every web page that gives advice to moderators of internet forums.

The higher our standards for members, the more valuable and productive conversation we will have.

#8 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 19 November 2003 - 08:44 AM

Nope; I would want an SI to be much more benevolent than I am, that's the whole point of creating one. Also, the material wealth and new space that SI could open up would change the context of problems like these appreciably.


Then consider this a lesson in programming :))

How would you trust it to be more benevolent than its programmers?

I know, I have heard this before: because it is Super Intelligent, it is not constrained by human foible. Well, guess what - its standards of benevolence could then mean that it decides to put us out of our misery.

Laz, forums like these can be *eaten from the inside* if too many trolls like this one show up. You'll see that on nearly every web page that gives advice to moderators of internet forums.

The higher our standards for members, the more valuable and productive conversation we will have.


This is the ancient division of the Plebeian and Patrician classes at work, and this kind of exclusion encourages the class rift just as we have within our grasp a means of literally eliminating it. Most people will seek this medium like higher ground in a flood, unless we fall prey to divisive forces that prefer exclusion and in that way minimize our effective voice.

Yes, there are thousands of now-private clubs out there effectively doing very little except pruning one another's virtual lice.

Do you want to be effective, or comfortable?

#9 AgentNyder

  • Guest
  • 166 posts
  • 1
  • Location:Australia

Posted 19 November 2003 - 10:48 AM

Translation:

1 Tax the wealthy and give money to bureaucrats.
2 Tax the wealthy and give money to bureaucrats.
3 Tax the wealthy and give money to bureaucrats.
4 Tax the wealthy and give money to bureaucrats.
5 Tax the wealthy and give money to bureaucrats.
6 Worship the beast.


Actually, this is quite a reasonable interpretation, unless Avatar Polymorph is more specific about where the funding is to originate.

#10 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 19 November 2003 - 01:18 PM

Laz, it should be possible to build AIs more benevolent than we are, because there is no rule that says beings can't create other beings more benevolent than they are, just as there is no rule that says beings can't create other beings smarter than they are. The absence of these rules is what the Singularity is supposed to exploit. These are ideas that lie at the very foundations of transhumanism: we can become smarter, nicer, and better people by pulling ourselves up by our own bootstraps, without needing supernatural assistance to improve.

Physically, a selfish mind will have a certain kind of goal system, a certain kind of pattern; a selfish mind works in a certain way, and generates and accomplishes goals based on the way it works. A pseudoselfish mind, like a human mind, fluctuates back and forth between selfish and altruistic behaviors, depending on person and context. A pseudoselfish mind works in a certain way as well. A purely altruistic mind, although one doesn't exist yet, would also work in a certain way; it would have a certain goal system, one that didn't center around itself. In evolved entities, goal systems automatically center around organisms; that's the way natural selection works. But when we create a mind from scratch, it needn't be selfish. AI allows us to write the code for the mind; it's not "tending towards" any sort of psychological state; the state is there because we program it.

If an AI had "standards of benevolence" that decided to "put humans out of their misery", then that would be a failed AI, wouldn't it? All we need is a sane, nice, altruistic AI; a successful one, that *didn't* have "standards of benevolence" that involved killing people. Why is this so hard to conceive? When you ask a friend or relative to do a certain thing, like bring you a cup of tea or something, do you expect them to come back and throw the tea in your face, or go off and never come back again, or poison your tea? Heck no!

Normal behavior requires a certain amount of underlying cognitive complexity, and that complexity *has to come from somewhere*, from a programmer, for it to exist at all. In humans it came from evolution, in AI it will come from human programmers. Sanity is something real - it can exist, minds can actually be stably sane, benevolent, and fair. All you need to do is disengage the goal system from the observer, so decisions can be made selflessly (for the AI; I'm not saying that I make decisions selflessly here, or that everyone should, or that we should use coercion to enforce selflessness; just that the *first AI* should be selfless because it occupies such an important position. The selfish AIs can come *after* existential risks are no longer a threat.)
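
To make that concrete, here is a minimal toy sketch in Python (entirely hypothetical: the WorldState class, the welfare numbers, and the agent names are invented for illustration, and this is not drawn from any real AI architecture or from Avatar Polymorph's book) of the difference between a goal system centered on the observer and one disengaged from it:

    # Toy illustration (hypothetical): a goal system "disengaged from the observer".
    # A selfish agent scores outcomes by its own welfare; a selfless agent scores
    # the same outcomes with no reference to which agent it is.

    from dataclasses import dataclass

    @dataclass
    class WorldState:
        welfare: dict  # welfare of every agent in this toy world, keyed by name

    def selfish_utility(state: WorldState, self_name: str) -> float:
        # Goal system centered on the observer: only its own welfare counts.
        return state.welfare.get(self_name, 0.0)

    def selfless_utility(state: WorldState) -> float:
        # Goal system with the observer disengaged: the evaluating agent's
        # own identity never appears in the evaluation.
        return sum(state.welfare.values())

    def choose(actions: dict, utility) -> str:
        # Pick the action whose resulting world state maximizes the given utility.
        return max(actions, key=lambda a: utility(actions[a]))

    if __name__ == "__main__":
        # Two candidate outcomes: hoard the benefits, or share them.
        actions = {
            "hoard": WorldState({"ai": 10.0, "alice": 1.0, "bob": 1.0}),
            "share": WorldState({"ai": 4.0, "alice": 6.0, "bob": 6.0}),
        }
        print(choose(actions, lambda s: selfish_utility(s, "ai")))  # -> hoard
        print(choose(actions, selfless_utility))                    # -> share

The point of the toy is only that the selfless evaluation never mentions which agent is doing the evaluating: "selfish" or "selfless" is a property of the utility function we write, not a state the program "tends towards".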

Yes, there are thousands of now-private clubs out there effectively doing very little except pruning one another's virtual lice.


Right, but I'm saying we're light years away from becoming anything like this; aren't we?



