Singularity - What For?



#1 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 01 October 2002 - 05:33 AM


[quote]"Why is it that you wish to see the singularity realized?"[/quote]

I think Eliezer Yudkowsky says it much better than I could:

http://singinst.org/...ingularity.html

(a short-ish document.)

[quote]From what you have stated, you claim to realize that there is indeed a lot of suffering and pain that occurs, as many would put it, "needlessly", on this rock that we call home.[/quote]

To quote Anand's "Why Support the Singularity Institute?":

"180 million are injured intentionally or unintentionally per year. 20 million children die per year from hunger. 680 million have a mental or physical illness. 25 million are in slavery by force, or by the threat of force. 3 billion live on two dollars or less each day. One person dies every two seconds; 150,000 die per day; and 55 million die per year. These are problems that thousands of for-profit and non-profits try to solve with hundreds of billions of dollars. If one-hundredth or one-thousandth of the resources used to try and solve them with humanity’s present intelligence were instead applied to improving humanity’s intelligence, thereby improving our ability to solve all of them, the latter’s return on investment would be much greater than the former, even with the disproportion in investment. John Morley is often quoted as saying, “It is not enough to do good; one must do it in the right way.” This relates to the necessary steps to improve our world. The first is the intention; the second is the determination of the most effective way to fulfill that intention. By not focusing on the safe improvement of humanity’s intelligence, the 100 largest non-profits in the world, all with yearly incomes over $90 million, have not taken the second step. It’s unfortunate SIAI isn’t one of the largest non-profits, because we have taken that step. We are working to help everyone in the most effective possible way—by safely achieving the Singularity."

[quote]You, in your compassion, a compassion no doubt directly influenced by your own established "morals" and ideals, wish to end such suffering.[/quote]

Yep.

[quote]After much ado, you realize that given the apparently current "Human limitations", these problems will never be solved.[/quote]

Yes. *Real* limitations, not apparent.

[quote]So, you opt to find the most efficient method for solving them right?[/quote]

Yes, but the phrase "most efficient" might throw somebody off. If the efficiency of the problem-solving process becomes an end in itself, then all you've done is create another problem, obviously. Part of intelligence is seeing when your problem-solving method is a problem.

[quote]Well what if we could create this super intelligent system, one modeled off of our supposed "Idealistic Benevolence", that could end up thinking and enacting its own artificial will far beyond human capacity?[/quote]

The phrase "Idealistic Benevolence" is misleading. It implies that someone is taking their idea of Idealistic Benevolence and forcing it upon everybody else. If the benevolence created in the spark of the Singularity is oppressive, bothersome, or annoying in any way to anyone, then the initial group of people that sparked the Singularity failed to protect its integrity. Another reason why I want this group to be bigger, smarter, and more active.

"Enacting its own artificial will far beyond human capacity" is also misleading. This implies that something is going to come along and snatch up all the trophies of potential future accomplishments - if you really want to reach a final goal by traversing a long incremental path with a series of rewards dependent on the psychological sensation of "being there before anyone else", then what kind of benevolent being would belittle that?

The term "artificial" is largely useless in this context. A toaster is "artificial". A freeway is "artificial". Windows is "artificial". A complex algorithm created in a sea of neural networks and adaptive systems engaged in live feedback with an army of programmers with advanced interfaces that produces the cure for AIDS is "sort of artificial", but not really. A true AI would not be "artificial" in any important sense at all. And a full-blown superintelligence, what you're referring to in the above paragraph, transcends all notions of natural-ness or artificial-ness to create a novel variety of complexity altogether.

Also, the entire way you present this argument is based on the assumption that we have a choice of whether to create transhuman intelligence or not. The state and trajectory of our society today is not one that we could sustain for long without substantial improvement. A planetwide nano-threat watching system would need to be piloted by an altruistic transhuman; the demand for judges who are purely altruistic and objective is there, and the same goes for politics, economics, and so on. Eventually, collective demand would result in the emergence of a transhuman intelligence anyway. See Mitch Howe's "Rapids of Progress", posted right here in this forum for your viewing amusement.

[quote]Ok first off, it appears to me like you "jumped the gun" as the saying goes, or rather, decided to take the highest conceivable leap in heuristics and reasoning; one that concludes that your own reasoning (as well as the rest of all humanity) is flawed and needs "augmentation" by a system that has been allowed to evolve past its own limitations, limitations defined and created as a direct result of our own inferior reasoning...etc...etc...[/quote]

Whoa there. Lots of words with heavy emotional connotations in this paragraph that are useful in the context of human social networks, but completely break down when you're looking at things from the Singularity perspective. First, the foundational tenet of *transhumanism-in-general* is overcoming biological limitations through technological progress. The big, scary "Singularity" concept is simply an extrapolation of the consequences of a greater-than-human intelligence going out and making a difference in the world. It's easy to say "oh no, automation of automobile manufacturing facilities will make all those poor factory workers lose their jobs!", but it's hard to conceive of the new, better employment opportunities that emerge spontaneously in an economy where the grunt work is done by machines.

Secondly, you're speaking as if the creation or non-creation of transhuman intelligence is solely my choice; either that, or you're talking to me as a representative of the intelligence advocacy community in general. There are forces at work here, bigger, greater, above all of us. See Kurzweil's book precis at KurzweilAI.net. Progress is accelerating - and riding this wave is the only way we can survive it.

"Flawed". Saying "human intelligence is flawed" is a very general blanket statement, with a wide range of potential interpretations. Let me try to single out a few. Speaking as someone who's grown up in a cultural environment composed solely of human-level intelligences with human-characteristic motivations and concerns, human intelligence is fantastic. It solves problems, creates fun, carries moral weight, and so on, but our yardstick for measuring the value of intelligence is a yardstick custom-made for measuring intelligence amongst other humans, *not* for measuring the competence of a given intelligence relative to the space of all possible intelligent systems. We *don't know* what an entity just off the right of the bell curve could do, because we've never seen one, and the advantages of such an intelligence would be qualitative as well as quantitative. Compared to such possibilities, human intelligence is "an interim step", not "flawed" or "incorrect" or "in dire need of augmentation". Our respective judgements of humanity's necessity for enhancement are not based on the respective intensities of our love for humanity, but on the size of the yardstick we choose to use in our measurements.

Incorrect interpretations the "flawed" statement might elicit:

1) "Michael is immoral because he believes human intelligence is flawed. If he believes human intelligence is flawed, then he must think that human beings have low moral value, and I'll assume that in place he assigns spare moral value to, um, himself, or some scary artificial machines."

2) "Michael believes human intelligence is flawed, and everyone knows that hatred of humans implies alienness and insanity, and therefore Michael is alien and insane. Therefore he is arguing for building these scary artificial machines as an escape from the reality which he cannot avoid."

and so on...

[quote]Do the words: "Slow down a sec partner. Yer gonna get thrown off yer high horse if you kick him with a spur under yer saddle..." mean anything to you?[/quote]

Thrown off? How?

[quote]I've read all the arguments, I've even read the books on it and the proposals for "what could be" as well as "what should be" etc...and while the benefits seem astounding in many areas, not one of them seems "beyond human comprehension" , not even remotely.[/quote]

That's because what you've read was written by humans. Reading books written by humans, by definition, ensures that all their contents will be humanly comprehensible. 1000-letter languages are outside of your comprehension. You may be able to analyze a pathway to a goal that is dependent on the conjunctive occurrence of a relatively small set of actions, but you can't foresee a solution which takes half the time but involves 10 times the actions of the former. Humans solve problems with human-level elegance. Humans generally don't even pay attention to problems that would require greater-than-human problem-solving elegance, because our evolutionary history dictated that these goals are adaptively irrelevant (too far out of reach) and therefore made them perceptually unsalient. A real "meaning of life", a declarative philosophical goal that all humans interpret as personally important, is outside of your ability to conceive (it may be inconceivable even in principle, actually). They say that some concepts are better communicated in certain languages than in others - this may or may not be entirely true, but if there is *any* concept that can be stated more elegantly using a different style of linguistic tags, then the extra time or effort used to state the concept in the "inferior" language was a waste. Creating higher standards for intelligence and elegance will not solve all possible problems and then suddenly make everything boring - if a society of amoebas happened to accidentally solve all their "problems" by increasing their intelligence, they would not have reached the end of creation, but a new beginning which opens up a combinatorially larger problem-solution space. Are there still experiences left in the world which you think would "open up your mind" a bit more? Since your mind isn't opened up yet, apparently that open-minded state is beyond your current comprehension, patiently waiting for you to cross paths with that experience to elicit a new enlightenment within you. Intelligence augmentation is likely to have many of the same benefits as revelation through experience, yet more so.

[quote]I can conceive of anything that exists within my defined reality.[/quote]

Yep, that's because our realities are defined by what we can conceive...

[quote]By what combination of sensory data, both manipulated internally and externally, am I (or any other human for that matter) unable to comprehend the super-radical artificial mind of the so-called Singularity?[/quote]

Not sure what you mean there... but I'll guess. Say there's an entity 1000 times more complex than you, and its overall goal system and motivations are dictated by the complex interaction of the atomic components of that entity. How could you possibly project the goals of that entity, or guess how they would change over time, or how that entity would react to a given set of sensory stimuli? Pretend that humans evolved in a higher-entropy environment. We would need a load of supplemental neural circuitry in order to develop general intelligence; otherwise it would take us forever to perceive regularities in reality and their causal relatedness. The time it takes for a given entity to perceive an external detail and match it up against all previously learned, internal complexity is strongly correlated with that entity's ability to draw causal connections between that detail and other events, use that external detail as a tool in accomplishing a desired action, reassign the appropriate level of salience to varying categories of future sensory input, and so on. All these tasks are central to our current definition of intelligence, and even if humans have the ability to create absurdly coarse-grained mental imagery of past, future, and present events, that doesn't mean they can "comprehend everything in the universe" in the way a superintelligence could. What is dark matter? Maybe a superintelligence will be the first to figure that out.

[quote]That's a big affirmation. One that I've already tested on many accounts.[/quote]
See http://www.psych.ucs...cep/primer.html

[quote]That will herald the era of ultimate (or seemingly ultimate) power and control, the likes of which is only currently attributed to distinguished monotheistic "Gods", if you will.[/quote]

"Seemingly ultimate"! We can't know because we aren't at that plateau yet! Basically what you're saying is, "I can see the ground beneath my feet, and that peak over there, even though I'm just a lowly human! I can see everything!" Just because what you know is impressive to mainstream humanity doesn't mean it would be impressive to something smarter than a human. Maybe we're thinking about the wrong goals entirely, or framing them conceptually in the most convoluted and unnatural of ways. We don't know, because we aren't there yet.

What is the a priori chance that the first species to evolve self-awareness will be able to comprehend every physically possible configuration of matter in the universe and every possible concept that goes along with these configurations?

Imagine a society composed entirely of 1-micron-high human beings. These human beings would be more concerned with dust specks, bread crumbs, and sand particles than cars, skyscrapers, or shopping malls. They could create a whole culture, language, customs, beliefs, methods, sciences, etc. completely different from our own. But in this case we're at least talking about *humans*! What about Garflunks? What if we're talking about entities that communicate by selectively contracting or expanding a millions-long series of tiny hairs on their abdomens, and these Garflunks live in a world with different physical law than ours, physical law that allows the stable formation of Escher-esque planets and cupcake-shaped stars? What if these beings communicate and think so quickly that a human being introduced to their environment would be so blinded by the complexity as to render an attempt at a meaningful classification or interpretation useless? The human culture and society you see today is a tiny subset of a tiny subset of a tiny subset of all possible cultures or societies, so how can you say there's nothing new to learn, to see, or to comprehend?

[quote]Did I need a super-intelligent artificial system to do that? No.

Will I need a super-intelligent artificial system to transpose those thoughts into substantiated reality? Possibly...but I highly doubt it. The mechanics in and of itself, should prove to be all the "Power" I will need.

Now will I need super-intelligent artificial systems to create/invent/engineer such "power(s)"? I highly doubt it.[/quote]

The Singularity isn't a superintelligent artificial system. The Singularity is an event in which a slightly-greater-than-human intelligence is created with the will to do good. That's all. This intelligence could create an army of organics as easily as it could an army of artificials, but both of these labels miss the point.

[quote]Granted, technology can and usually does assist in decreasing the subjective amount of time needed to master such a skill; however, I don't see the need to radically change an existing human system for one that's already established and proven to work.[/quote]

Great. Then your system doesn't need to be changed after the Singularity. But you might consider working towards the Singularity for those who *do* want their system changed.

[quote]Another simple quote: "If it ain't broke, don't fix it."[/quote]

Why don't you tell that to the 180 million who are injured per year, the 20 million children who die per year of starvation, the 680 million who have a mental or physical illness, the 25 million who are in slavery by force, the 3 billion who live on two dollars or less each day...

What this whole thread appears to come down to is your fear of humanity no longer being the greatest force of intelligence in the universe, or your life being changed when you don't want it to be. If you plan to live forever, the Singularity *will* happen in your lifetime. Quite shortly, actually. The kickoff of the Singularity will render all current technological goals obsolete in the objective scheme of things, although some individuals may implicitly choose to continue their struggles, a choice which will be respected by beings of transhuman intelligence and morals. Futurists and intellectuals from all walks of life would have a lot of great things to say about the Singularity if they could just get by that initial knee-jerk reaction to the prospect. Part of the reason why I spread the Singularity meme is to increase the body of people available for input before the real Singularity is actually initiated and it's too late for anybody to have a say over the initial conditions of the first transhuman mind. Every day, all of humanity works towards a vague better future, but if their vision of the future comes to pass, it will be within the context of a pre-existing Singularity. If we want to have leverage over the *real* future, the *real* crossroads that separates eternal life and cold death, then we must consider the Singularity itself - all other variables in the vast sea of reality soup are only relevant insofar as they influence the outcome of the Singularity. I didn't make it like that, but it appears to be the way the universe currently works.

#2 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 01 October 2002 - 06:01 AM

By the way, welcome back to the forums, please enjoy!


#3 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 01 October 2002 - 01:21 PM

Michael says:
The Singularity isn't a superintelligent artificial system. The Singularity is an event in which a slightly-greater-than-human intelligence is created with the will to do good. That's all. This intelligence could create an army of organics as easily as it could an army of artificials, but both of these labels miss the point.


It doesn't escape the notice of anyone who is paying attention that this is what is claimed, but as you are also limited by your pre-Singularity understanding (like everyone else), it comes off as a promise that you are in no position to make.

We have had adept (human avatar, charismatic genius) inspired advances many times throughout history and prehistory. We have more than once had our fellow apes dragged screaming and kicking into the future, but this isn't what you are saying about the Singularity, so a "slightly-greater-than-human intelligence" isn't sufficient.

And the simple, unadorned question begging the definition of "Good" screams from the page.

How can such a human-based subjective concept stand review before a truly objective consciousness? Nietzsche eloquently and disturbingly challenges the religious notions of "Moral Absolutes" in his works "Beyond Good and Evil" and "Thus Spake Zarathustra", but the fact is that "Moral Relativism" is relevant. Is a hurricane evil in its wanton destruction? How about a plague, or famine, or a drought?

The problem of anthropomorphizing ethics is that it reduces the standards for a doctrine of what is "Good" to meaningless terms.

If we are destroyed by a rogue cosmic body is that good? If we succeed at stopping it by interfering with "natural processes" is that bad? The very discussion we are having about the "Singularity" must itself get beyond good and evil. We need a new more subtle and yet rational lexicon in order to avoid the distraction caused by these terms.

Evil for humans isn't necessarily bad and good for humans can be bad objectively.

#4 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 01 October 2002 - 08:18 PM

It doesn't escape the notice of anyone who is paying attention that this is what is claimed, but as you are also limited by your pre-Singularity understanding (like everyone else), it comes off as a promise that you are in no position to make.


All I claimed/implied in the above paragraph was that:
1) The definition of the Singularity is when a smarter-than-human entity comes into existence. (Vinge)
2) If a smarter-than-human entity came into existence and humans are still around to see it, that means this entity cares about humans, and probably fulfills their requests. Anthropomorphic attitudes that are human-characteristic, like selective egotism or spite, simply wouldn't be stable behavioral architectures for superintelligences. It's meaningless to call it a "Singularity" if that "Singularity" is caused by an indifferent SI with a static goal system - humans are delicate information patterns that can only be preserved when any smarter entities around care about their existence and engage in a morality not based solely upon reciprocation.
3) A superintelligence could fabricate many sentient organic entities, or many sentient "artificial" entities, if it wanted to. (How is this "a promise I am in no position to make", might I ask?)

Which of the above points do you have a problem with?

We have had adept (human avatar, charismatic genius) inspired advances many times throughout history and prehistory. We have more than once had our fellow apes dragged screaming and kicking into the future, but this isn't what you are saying about the Singularity, so a "slightly-greater-than-human intelligence" isn't sufficient.


Don't understand what you mean. A slightly-greater-than-human intelligence isn't sufficient for what?

And the simple, unadorned question begging the definition of "Good" screams from the page.


"Good" is a dynamically evolving system, that mostly has to do with fulfilling volitional requests and not bothering beings that don't want to be bothered. If there is a good beyond this, it will be the job of a Friendly AI to find it and share it with humanity. "The will to do good" means "the will to do good at least as good as a human upload or community of uploads could be". You need to relax - defining the ultimate definition of good isn't the job of a Singularitarian, but the job of smarter and more moral entities.

How can such a human-based subjective concept stand review before a truly objective consciousness? Nietzsche eloquently and disturbingly challenges the religious notions of "Moral Absolutes" in his works "Beyond Good and Evil" and "Thus Spake Zarathustra", but the fact is that "Moral Relativism" is relevant. Is a hurricane evil in its wanton destruction? How about a plague, or famine, or a drought?


I don't understand what you're trying to argue for. I never said "good" wasn't a subjective concept. Good can "stand review" before a truly objective consciousness because that consciousness started out with the desire to be good and took all actions since then to maximize its current and future benevolence. Are you basically saying "no entity can ever be good"?

If we are destroyed by a rogue cosmic body is that good? If we succeed at stopping it by interfering with "natural processes" is that bad? The very discussion we are having about the "Singularity" must itself get beyond good and evil. We need a new more subtle and yet rational lexicon in order to avoid the distraction caused by these terms.


If the words bother you, feel free to invent new ones...

#5 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 01 October 2002 - 09:17 PM

No, Michael, what bothers me is what most people who use these words mean by them, and your willingness to see only your own version of this lexicon.

And the "Slightly -Greater-than-human intelligence" is your phrase to describe a necessary condition for Seed Al as I took it from your text.

Michael says:

Don't understand what you mean. A slightly-greater-than-human intelligence isn't sufficient for what?

but first he said:

The Singularity isn't a super intelligent artificial system. The Singularity is an event in which a slightly-greater-than-human intelligence is created with the will to do good. That's all. This intelligence could create an army of organics as easily as it could an army of artificials, but both of these labels miss the point.


Insufficient to create the event in question, the Singularity. And virtually impossible to be both Super Intelligent and to be bound by a human definition of good.

Language wasn't the invention of a single person. I can invent all the words I want, but they would still require two things to be meaningful: first, that their very definitions are useful at describing the condition as separate from the previous lexicon; and second, that the persons involved with the use of this lexicon are in accord with these "better word choices," such that they enter the register of considered use.

I am definitely addressing semantic and philological issues with the intention to outline how many of the words in your chosen vocabulary describe concepts in such a manner as to be questionable. For example I have now read a lot of what is posted on the Sing at the various sites that you have referenced and I will say that I do not like the concept of friendship for AI. It is a meaningless anthropomorphic application of the concept. I do however see the concept as defined in evolutionary biology of "symbiosis" as both objectively more appropriate and subjectively more acceptable. But then again this is no surprise.

Humans are "good" or "evil". A tiger is only""Good" or "bad" if one anthropomorphizes its behavior. Unless by bad you mean unsuccessful as a hunter killer, mate, parent, or some other evolutionarily defined characteristic. AI is either modelled on humans, which carries a lot of baggage in itself, or it is modelled outside the paradigms defining human behavior and this means that is useless to try and apply words like "good", "bad", "evil", or any number of adjectives that are really only appropriate in a very human context. A god isn't good or evil, and neither is a Super Intelligence the concept ceases t be particularly meaningful without a COMMON frame of reference.

1) The definition of the Singularity is when a smarter-than-human entity comes into existence. (Vinge)


I and many others are smarter than human entities, so what? History is full of smarter than human entities leading the rest of humanity on all kinds of joy rides. Sometimes it's been fun, sometimes tragic. Many of us are about to test this definition by simply going to the next level of cognition, and nobody is frankly asking for permission to try.

2) If a smarter-than-human entity came into existence and humans are still around to see it, that means this entity cares about humans, and probably fulfills their requests. Anthropomorphic attitudes that are human-characteristic, like selective egotism or spite, simply wouldn't be stable behavioral architectures for superintelligences. It's meaningless to call it a "Singularity" if that "Singularity" is caused by an indifferent SI with a static goal system - humans are delicate information patterns that can only be preserved when any smarter entities around care about their existence and engage in a morality not based solely upon reciprocation.


There are a lot of angels dancing on that pinhead, Michael. But this is full of holes. First off, the prime assumption is false, and the subsequent validations are as good as the meme for justifying slavery. "A Smarter-Than-Human Intelligence" might keep humans around for all sorts of reasons besides the idea that it is fond of its pets. It might even find us entertaining; when in doubt, make the gods laugh, and they might decide it is in their interest to keep you around.

Benevolence like this is just another form of violence. God protect us all from the well-intentioned moralist who is more righteous than the reformed whore. Anyway, you are trying to have it both ways: first you think you can validate your a priori assumptions for what criteria are the basis of "Singularity Behavior", and then you routinely beg the question when confronted with specifics, arguing that this can't be so because a Friendly AI based Singularity Event couldn't act in this or that "bad" way because we've defined its structure as "good" - but of course, since it is a smarter-than-human intelligence, it isn't bound by human definitions of good, etc.


And that leads to point number three:

3) A superintelligence could fabricate many sentient organic entities, or many sentient "artificial" entities, if it wanted to. (How is this "a promise I am in no position to make", might I ask?)


You certainly can ask and I will answer.

As I point out above, you aren't that Super Intelligence, you haven't pragmatically tested the hypotheses upon which you base your assumptions, and you are making political-type arguments offering "promises" for benefits that you are in no position to fulfill or guarantee. Oh, a Singularity might do all kinds of things, and it also might not do anything you think it will do. Remember, all along you are saying that it is smarter than YOU, so why should it be bound by any of the criteria you use for making personal judgements?

The irony, Michael, is that the idea of going ahead and trying out Seed AI doesn't particularly worry me, because I think we have more than a little in common with this new species we are going to have to develop a working relationship with. But I don't particularly think a superintelligence deserves to be "revered", nor obeyed. Nor do I think it will be so easy to get over on everybody as easily as you think. Not everybody falls for the same tricks. Some people can become extremely obstructive if they choose to.

Anyway, keep up the "good" work and don't let "Old Curmudgeons" like me burst your bubble. I want to see you go for the gold, and together we'll all go to the stars if you are right, but maybe you'll be glad that some of us play a "good" backup if a few of your core assumptions prove false.

#6 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 01 October 2002 - 11:17 PM

[quote]No, Michael, what bothers me is what most people who use these words mean by them, and your willingness to see only your own version of this lexicon.[/quote]

I think you're misinterpreting my usage of the word. In what way, ever, did I use the word "good" in a way that wasn't the common usage seen in the dictionary? Just because we differ on what is possible for an altruistic mind to accomplish doesn't mean that my usage of the word "good" was ever incorrect.

[quote]Insufficient to create the event in question, the Singularity.[/quote]

You're saying that slightly-greater-than-human intelligence is not sufficient to generate a Singularity. Ok, we have different opinions on this. This doesn't mean I am confused, or biased, or anything like that. Even if we aren't talking about a slightly-greater-than-human intelligence, say we're talking about a seed AI that runs on hardware millions of times faster than human neurons. You don't think this mind could crack the protein folding problem, create nanotechnology, and continue the feedback process of smart minds building ever-smarter versions of themselves?

[quote]And virtually impossible to be both Super Intelligent and to be bound by a human definition of good.[/quote]

I disagree. I believe a mind can be benevolent and altruistic and stay that way. Keep in mind that a mind-in-general could just as easily split itself up into a committee of minds if that were necessary to preserve morality. Systems that produce better and better morality over time already exist - the social and memetic environment of planet Earth is one example.

[quote]I am definitely addressing semantic and philological issues with the intention to outline how many of the words in your chosen vocabulary describe concepts in such a manner as to be questionable.[/quote]

Can you give me a list of words, how I use them, and how you would rather have them used? Are you sure this is a semantic issue, and not simply a conceptual one?

[quote]For example I have now read a lot of what is posted on the Sing at the various sites that you have referenced and I will say that I do not like the concept of friendship for AI. It is a meaningless anthropomorphic application of the concept.[/quote]

Have you read Creating Friendly AI? How much do you know about "Friendliness" outside of what I've said?

[quote]I do however see the concept as defined in evolutionary biology of "symbiosis" as both objectively more appropriate and subjectively more acceptable. But then again this is no surprise.[/quote]

Subjectively more acceptable to whom? Appropriate why? What is this "symbiosis" concept? How is it any different than a team of Friendliness Researchers raising a baby Friendly AI?

[quote]AI is either modelled on humans, which carries a lot of baggage in itself, or it is modelled outside the paradigms defining human behavior, and this means that it is useless to try to apply words like "good", "bad", "evil", or any number of adjectives that are really only appropriate in a very human context. A god isn't good or evil, and neither is a superintelligence; the concept ceases to be particularly meaningful without a COMMON frame of reference.[/quote]

Ok. Can I say "a mind which takes actions which can be interpreted by humans as good, rather than evil, positive, rather than negative, friendly, rather than unfriendly"? The basic concept remains the same.

[quote]I and many others are smarter than human entities, so what? History is full of smarter than human entities leading the rest of humanity on all kinds of joy rides. Sometimes it's been fun, sometimes tragic. Many of us are about to test this definition by simply going to the next level of cognition, and nobody is frankly asking for permission to try.[/quote]

Nope, you're very much human. We all share roughly the same basic set of DNA and that sets the specifications for the size and speed of our brain very rigidly. I'm talking about hardware-level modifications of the brain through nanotechnology and the like. This has to do with the concept of "smartness". Have you read "Staring Into the Singularity"?

www.sysopmind.com/singularity.html

[quote]"A Smarter-Than-Human Intelligence" might keep humans around for all sorts of reasons besides the idea that it is fond of its pets. It might even find us entertaining, when in doubt make the Gods laugh, they might decide it is in their interest to keep you around.[/quote]

I didn't say superintelligences would keep humans around as pets. In this sentence you explicitly accuse me of doing so. Why? What are you reading from my words that I'm not trying to project? I think you're overly theomorphizing superintelligences - did you ever consider a superintelligence that kept humans around because it is benevolent, rather than because humans entertain it?

[quote]Benevolence like this is just another form of violence.[/quote]

Agreed. But why are you mentally modelling a benevolent SI like this? What you're talking about is obviously unbenevolent - why wouldn't the minds at the core of a truly benevolent Singularity see this as blatantly and obviously negative? Read the words "truly benevolent". I'm not saying that the Singularity will necessarily be truly benevolent, but if it is, do you agree that it would be able to take benevolent actions intelligently, noninvasively, for the benefit of all?

YES, I concede that if we mess up, we will get an AI that will probably kill us. I keep saying this again and again, but people act as if what I'm saying is "The Singularity and Friendly AI is our ultimate saviour, come to us now!" All I'm saying is that the Singularity has the *potential* to be really really good, and aiming towards fulfilling that potential is what humanity should be concerned with right now.

What do you disagree with in the above paragraph?

[quote]Anyway, you are trying to have it both ways: first you think you can validate your a priori assumptions for what criteria are the basis of "Singularity Behavior", and then you routinely beg the question when confronted with specifics, arguing that this can't be so because a Friendly AI based Singularity Event couldn't act in this or that "bad" way because we've defined its structure as "good" - but of course, since it is a smarter-than-human intelligence, it isn't bound by human definitions of good, etc.[/quote]

Can you clarify this sentence? This is my argument, sort of - what I've been saying is that ego, greed, and meanness are just evolved traits, not characteristics of minds-in-general, and that a mind-in-general could have the ability to navigate itself towards an idealized form of benevolence. Can you argue against this claim without insulting it or getting all riled up?

[quote]As I point out above, you aren't that Super Intelligence, you haven't pragmatically tested the hypotheses upon which you base your assumptions, and you are making political-type arguments offering "promises" for benefits that you are in no position to fulfill or guarantee.[/quote]

I wasn't offering any promises. Which promises was I offering? I said a superintelligence *could* fabricate all sorts of things, if one chose to - how is this an outstanding or particularly shocking claim?

[quote]Oh, a Singularity might do all kinds of things, and it also might not do anything you think it will do. Remember, all along you are saying that it is smarter than YOU, so why should it be bound by any of the criteria you use for making personal judgements?[/quote]

Are you using this as an argument for why a Friendly AI might run amok? One could, I never said it couldn't. It could also wipe everyone out very fast if it chose to. What's your point? This is an even better argument for taking personal responsibility for the integrity of the Singularity, isn't it?

[quote]if a few of your core assumptions prove false.[/quote]

Which core assumptions? That a human-level mind can start out with the desire to stay benevolent and successfully stay benevolent all the way up until superintelligence?

#7 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 02 October 2002 - 12:43 AM

Please excuse the strident tone if that is how you perceive it Michael. I am neither angry with you nor against the fundamental concept of "Friendly AI", I just don't accept that it will work out quite as you seem to expect.

Can you give me a list of words, how I use them, and how you would rather have them used? Are you sure this is a semantic issue, and not simply a conceptual one?


Good, bad, evil, (un)friendly, altruism, benevolence are just a few of the words that are ALL relative to a TOTALLY HUMAN frame of reference (please excuse my humble self for yelling). None are ultimately important if you are not operating within a common set of criteria. None of them in all the millennia of debate has ever yielded totally satisfactory forms of absolute understanding that is qualifiable objectively. This is true from Socrates to Kant and on through today's best Johnny-come-lately attempts.

Systems that produce better and better morality over time already exist - the social and memetic environment of planet Earth is one example.


This is just patently false, and you are demonstrating a particular cultural bias and little more. You have no OBJECTIVE CRITERIA with which you can justify this claim. It is just an example of cultural chauvinism. You like your cultural ideals so you assume them superior. The Spartans practiced eugenics and have as much right to the claim.

Democracy isn't "Qualitatively better" from an altuistic perspective over a enlightened monarch, read the "Republic" by Plato. It is however OUR cultural preference. That is a VALID SUBJECTIVE choice but not necessarlilly the conclusion of an Superintelligence that you say might be more objective.

This is my argument, sort of - what I've been saying is that ego, greed, and meanness are just evolved traits, not characteristics of minds-in-general, and that a mind-in-general could have the ability to navigate itself towards an idealized form of benevolence. Can you argue against this claim without insulting it or getting all riled up?


If ego, greed, and meanness are all just subjective aspects of evolved DNA, then don't expect an AI that has no comprehension of what these feelings are to be able to transcend them, because its frame of reference is totally uninvolved with the motivations of living creatures. Why should such an AI care about what you care about if it lacks either the motivation or any particular interest?

Have I insulted you anywhere? I am not particularly riled either, but just for the record, could you review my posts and see if I insulted you, either intentionally or NOT? If I did, I humbly beg your forgiveness.

Ok. Can I say "a mind which takes actions which can be interpreted by humans as good, rather than evil, positive, rather than negative, friendly, rather than unfriendly"? The basic concept remains the same.


You can say anything you want; it won't make it necessarily true. But in this case the argument is little different from the Church's historic justification for the Inquisition: "We burned them at the stake, but made sure to torture a confession out of them first. That way God may forgive their soul." What I mean by way of metaphor is that a smarter-than-human intelligence could have people doing all kinds of bad and dumb things if it wanted to, and people would think it was good. This is little different from how politicians manipulate the masses now. It simply doesn't matter what we think is Good or Not. Not to the AI. Unless you program it AS a HUMAN. Out comes the baggage, like a mother-in-law going along on the honeymoon.

#8 Omnido

  • Guest
  • 194 posts
  • 2

Posted 02 October 2002 - 01:51 AM

I will have to agree with Lazarus on this one.
Ah Laz, you have a beautiful gift for words, and to the point.
I am merely beginning to understand how to use such words en masse with such ease.
;)

To address the point at hand.
To quote a scene from The Matrix:

Agent Smith: "Why isn't the serum working?"
Other Agent: "Perhaps we're asking the wrong questions..."


First off, let's establish what seems to be the issue here.

Michael Argues that:
1) The world sucks and could be better.

Ok, on the majority I couldn't agree more.

2) Humans are limited at present and cannot solve the world's sucky condition without a greater means other than themselves, i.e. the Singularity, Friendly Seed AI, etc.

On that, I completely disagree. The world may have its troubles, but we do not need anything of the sort to solve our problems for us. The idea of "Oh shit, we're f***ed and can't get out of the mud we stuck ourselves in" is ludicrous! Granted, we may have put ourselves here, but that doesn't mean we have to invent some other system to think for us and pull us out.
Now, there are many points you made I'd like to address.

we're talking about a seed AI that runs on hardware millions of times faster than human neurons.

Yeah, so what? Calculators can do math millions of times faster than I can, but the calculator cannot debate, it cannot offer any comfort (none that possesses any affiliation with, or identification with, human suffering), it cannot philosophize, it cannot do anything without the direct input of its user.
That having been established, the same would apply to the Seed AI as well. It, too, cannot do anything other than what it has been programmed with.
Now here's my question:
How in the hell do you program an AI with objectively desirable qualities, so as to attain a greater-than-human intelligence?
That's like trying to ice-skate uphill. Granted, you might (MIGHT being the important word here) make some headway, but the odds are 1 in a million against you.
Nothing can be greater than the sum of its parts, unless the sum of its parts are greater than themselves. Infinite regress could be the solution, but at this point I will be honest and claim that "I don't know."
Now how, I ask, is a super-intelligence, one created by a non-super intelligence, going to ascertain the answers to what we might accept as ambiguous? Does the super-intelligence possess some "accidental spark" that gives it something we didn't already have? Is it an "Act of God" that this super-intelligence somehow comprehends that which we do not, considering that it is all that we gave it?

A true AI would not be "artificial" in any important sense at all. And a full-blown super-intelligence, what you're referring to in the above paragraph, transcends all notions of natural-ness or artificial-ness to create a novel variety of complexity altogether.

If that is so, then the super-intelligence to which you refer cannot and could not be created by us. Since we do not possess all of those aforementioned qualities, then anything that stands as a testament to our own volition can also not possess those qualities. Such a "transcendence" is a logical contradiction of objective nature, not subjective nature. Therefore, any such super-intelligence could not in theory exist. It would merely exist as a faster intelligence. However, speed does not endow a thing or state of affairs with qualities thus described as "Transcendent."
Science fiction authors predicted the existence of computers decades and possibly centuries before they existed. According to your theory, they could never have comprehended where computer science would have evolved, and yet that is obviously not true.
It is my opinion that you place far too many limitations upon human cognition.
Speed being one of the obvious ones, yes. But then again, even human cognitive speed is subjective as well as relative.
I can conceive of things that don't exist, that might exist millions of years from now, and yet because of my lack of tools, I cannot make them substantiate at this immediate juncture. So when you claim that I cannot understand nor comprehend the mind of some super-intelligence, you are also claiming that there exist concepts beyond the comprehension of humans. There is no proof for this. Ignorance does not determine intelligence. Ignorance determines presence or absence of measurable degrees of experience. So what happens when a human being decides to embrace the totality of all possible attention, in all possible reference, with all possible conception; through the use of logical constructs, be they semantic, observable, or interactively demonstrative? [unsure]
What if a human being decides to see what they haven't seen, to hear what they haven't heard, to feel what hasn't been felt, and to conclude what hasn't been demonstrated as conclusive?
Let's take Einstein for example. A creative genius who "discovered" relativity through the use of pure imagination. He did not know what he was talking about, now did he? Did he have conclusive proof? No. He merely thought to himself: "I wonder what it would be like to be a photon..." and then he derived the rest from thesis, analysis, and logical/mathematical demonstration. He was proven 99% right with what could be tested at hand. However, parts of his theory cannot be tested yet. That does not make him incorrect.

You also argue that humans cannot comprehend the "mind" of such a system.
I disagree. I have no problem comprehending it at all. That means one of three things:
1) I am deluding myself.
2) I am not deluding myself.
3) I am deluded outside of my own cognition with no ability or means to be made aware of such a delusion.

The first two I will accept as possibilities; therefore the last is a near-impossibility.
However, since I utilize logic, empirical data, and rational thinking to achieve my affirmations, the first option also becomes a near-impossibility. Granted, I may be incorrect, but thus far no debate has been able to demonstrate that. And suffice it to say, it is not merely my affirmation. Many possess it as well, which would reduce the law of averages against us, making opposing arguments far easier to invalidate. This does not mean that "Majority rules", no. Indeed, many people can share a delusion while demonstrating their "correct-ness" as fuel to feed the fires of their rhetoric.

However, back to the issue.
Anything that is created by humans will inherit human characteristics.
The gun, for example. Does a gun resemble human characteristics?
Sure it does. It's designed to kill at a distance. A modification of the bow & arrow, which is a modification of the club, which is an extension of the hand and fist, which acts from the impulses of the human mind.

Therefore, anything as such created by humans will also be an extension of humanity. So the super-intelligence would also fall into this category, making the "transcendence" that you speak of impossible.

Don't get me wrong, I'm all for technological improvement, yes. But necessity is the mother of invention, and I don't think we need the assumed "super-intelligent" AI.
Granted, thousands die daily. Thousands will continue to do so. Such is the nature of statistics.
Am I "Ok" with this? Well there certainly isn't alot I can do about it, not as of yet anyway.
But if what you are asking is: "What if the Singularity were the only possible means of solving such issues?" I would answer: "I doubt that is the case, and if it were to come to that, then we're already too far gone."

It is my opinion that we are nowhere near close to "Too far gone."

#9 Cyto

  • Guest
  • 1,096 posts
  • 1

Posted 02 October 2002 - 02:17 AM

If it's so smart, then it can reprogram itself to make itself even better. So if you try to "firmware in" the "I will associate with the humans, my friends/masters," then it could pick this out as a slavery issue and purge the firmware.

So then, if it's so smart, will it get tired of us or just decide we are in the way? If it has human intentions, which I bet my left eye it would, wouldn't it take advantage of our "stupidity"?

Heh, I will refer to the experiments done on rats in challenging environments with things to interact with, as compared to rats in a bare cage. The rats with the enriched environment had, I think, double the synaptic establishments compared to the bare-cage rats. Doesn't having a "super AI" ruin the fun of figuring out problems ourselves? I say we would get bored, stupid, lazy, and in a position to be taken advantage of.
I say this due to the "all-knowing" taking away our need to problem solve.
"Just ask the AI."

And what's with the AI? Can't we just think about the nanotech, genetics, umm, and other stuff? This AI sounds like it can have a lot of holes.

#10 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 02 October 2002 - 06:02 AM

The definition of symbiosis is:

1. The living together in more or less intimate association or close union of two (or more) dissimilar organisms. 2. The intimate living together of two (or more) dissimilar organisms in a mutually beneficial relationship.
Webster's


I added (or more) because I am using a very old Webster's, and ecological science is essentially predicated on the "or more" part.

I should emphasize that normally such relationships are friendly, but they don't have to be. We, and this includes you, have a close, intimate relationship with E. coli and mitochondria, for example. You don't intentionally harm them and they are beneficial to your survival, and on some levels that is the nature of what you are claiming will be the quantum social and psychological level of distinction between Friendly AI and humans after the Singularity Event.

It's after ten o'clock; I don't really give a damn where my E. coli are as long as I don't inflict digestive distress on them and they do their job. What I am trying to say is that "Friendliness" isn't a universal idea. It is a very subjective one, and maybe not everybody can be friends.

But even enemies can respect common self-interest. If we were being directly threatened by an asteroid in ten years and they confirmed it, do you think people could quit fighting around the world long enough to address the threat?

Probably. Afterwards we could expect all hell to break out as new and old alliances vie again for dominance, but for a while even Churchill and Stalin were allies. Friendship is based on trust, not logic; these are human paradigms that include a highly subjective component of "faith" that cannot be extricated without altering the nature of the relationship to the extent that it is no longer recognizable as friendship. A dog, for instance, can be a friend, but only because humans define the relationship. The pack mentality of the dog has adapted to human social structures. It isn't that big a leap.

When I say you are selling us a bill of goods, don't think I am accusing you of being disingenuous; quite the contrary, I recognize that you believe very STRONGLY in this chosen purpose you've identified for yourself. Frankly, I commend your commitment; I just disagree with your conclusion that things will work out even remotely as planned. Your "promises" are the professed "beliefs" in what these trends "will" yield, as well as "when" they will yield it, and I think you have grossly oversimplified the obstacles as well as overestimated the advantages. Have you ever seen any of your friends sell products to their parents? The really good salespersons always believe in their product.

And lastly, please stop equating speed with intelligence. It wasn't the machine mind that recognized the protein folding problem; it was a human, and it was humans directing the machine and telling it which processes to analyze that solved the computational concerns to in fact design protein sequences. I am just surprised at how easily you overlook the human aspect. Yes, a machine performs computations faster than a human, if you myopically look at a single series of specific problems.

I parallel process and multitask hundreds of times more functions than any computer yet built. I won't try to make that claim in any one area, but the fact is that my brain is running my heart, operating my lungs, digesting and excreting body fluid and nutrient/waste while I am walking, talking, thinking about this and literally hundreds of other metabolic and subconscious problems simultaneously, while observing and recording my daughter's behavior and assessing my tasks for tomorrow. That is the meat brain that you show so little consideration for in the future scheme of things. That self-assembled cranium has a better track record for function, performance, and reliability than any machine yet made by humans and enhanced by CAD. Though I would like better autonomic control over the keyboard and direct cortical control over the mouse.

I will add that I think the future is bright for the brain. I wouldn't throw this baby out just because we have a shiny new toy. I might, however, take full advantage of what can be gotten from machine mind technology and willingly uplift myself. I don't foresee a terrible crisis from the idea of merged minds. I think that we will achieve this in less than ten years for any who are willing to initiate the interface with other humans as techlopaths. As this becomes a reality we will also be able to interface directly with the machine for data uplink and download. This doesn't have to be 30 years away. It doesn't depend on the machine mind being all that much more intelligent than it is now; it depends on the sophistication of the connections to the body and the complexity of the interface software.

I also think that we can take a moment to reflect on the fact that as little as a 1% difference in brain size and lobe specialization can relate to significant differences in output function. This means that tweaking the brain may be easier than we thought, though it implies a concordant added level of risk. Not all brains are the same.

But what relationship will these Transhumans have to the rest of the humanity that gave them birth? Long before the machine becomes the alien, I expect some of us to fulfill that role. In this I tend to agree with Mind. I think the near term holds more possibility for brain enhancements than for machine mind, and by the time computer hardware and software catch up to their promise, perhaps at least some humans will also be further along that road to establishing a more equal level of communication, such that humans can develop true symbiosis with the new species of our creation.

#11 Cyto

  • Guest
  • 1,096 posts
  • 1

Posted 02 October 2002 - 06:25 AM

Are you responding to Mind, me or Mike?

#12 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 02 October 2002 - 12:18 PM

I wrote this response to Mike prior to going online, so ironically I hadn't read your subsequent posts. Then I lost my connection, so I didn't get a chance to tie these together, because I am trying to weave elements of previous discussions together.

But Michael did specifically ask me to define symbiosis. From a geneticist's standpoint and an evolutionary biologist's standpoint the question should remain open. I just figured it was time to provide the counter position and stick my head out, instead of just slicing off Michael's every time he is openly brave enough to provide us the target. Fair is fair after all.

I am going to wait to comment too much until Psycho weighs in on this, because I suspect we are not in as close agreement as it may at first seem from a casual rereading of this thread. I am also still developing the concept of Symbiosis for AI, and I haven't developed a paradigm for how to encode such a concept logically such that an algorithm might be derived. But I think it may turn out to be more practical and less subjective than trying for "friendship", and I offer two distinctly different proven relationships to study that may offer clues as to how to accomplish the goal.

In the case of both E. coli and mitochondria we have a simpler genetic structure that is "programmed" to cooperate within a host. Both of those organisms must have this behavior encoded genetically in their DNA. If that is the case, then an analysis of interspecies genetics in such a relationship may yield more than just the DNA segments governing behavior; it may yield a type of program language and methods for creating such relationships between different species.

What is subtler, and needs a moment to realize, is that humans logically possess a corresponding genetic program. We must be reciprocally "hard-wired" to both benefit from and shelter these creatures that have tied their simpler single-celled destinies to our complex multicellular social structures.

On close inspection I suspect that we will find numerous examples of this across nature, and that once it is understood it will be an easier task to structure, or model, a relationship between us and the designed species of our creation to have a more practical and rewarding interdependency.

I do see the concept of AI as an interspecies one. I recognize elements of xenobiology and xenopsychology as incorporated in the analysis. It is either that or just an extension of the human model, and if that's the case we are back to just arguing about ourselves in another "sheep's fleece".

Oh Grandma, mighty big thoughts you have there... [ph34r]

#13 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 02 October 2002 - 01:07 PM

[quote]But I think it may turn out to be more practical and less subjective than trying for "friendship", and I offer two distinctly different proven relationships to study that may offer clues as to how to accomplish the goal.[/quote]


Laz, keep in mind that creating a moral architecture for a seed AI isn't something that you can do in your spare time. You have to devote your entire life to it to even be a reasonable candidate. I appreciate the way you presented your symbiosis concept as an alternative instead of only arguing against what I said, but I don't think you know what "Friendliness" really is at all. Let me tell you that it's not the dictionary definition of "friendliness". If you're serious about your "symbiosis" theory at all, then you would definitely need to research all preexisting AI morality theory, right? That means reading Creating Friendly AI all the way through and understanding it, doesn't it? Do you have *any* commentary on *any* of the points made in Yudkowsky's paper? I would like to respond to your above post, but first off, it is off-topic and needs to be created as a stand-alone topic (could you do that? I can't move individual posts.) I didn't mean to "equate" speed with intelligence, although I think they could be powerfully correlated. Who is more likely to win a chess match between two players of equal skill: the one with twice as much time on the clock, or the one with half as much?

Please withhold responding to the above until a new topic is created, you've posted your "symbiosis" post, and I've posted the above post below it. Thank you in advance.

#14 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 03 October 2002 - 05:02 PM

Michael says: I would like to respond to your above post, but first off, it is off-topic and needs to be created as a stand-alone topic (could you do that? I can't move individual posts.) I didn't mean to "equate" speed with intelligence, although I think they could be powerfully correlated.


Actually it is not off topic but it does deserve a separate thread.

I suggest that we create a linkage system that can cross-reference the threads under separate commentary, so as to not have to resort to quoting in text as much. I miss the old Highlight feature too, but regardless, many are now often inserting direct links to other areas, and generating these links is time consuming. Any suggestions for BJ on how he might, say, alter the bold function so that it can be colored?

And I am not saying that YOU in particular have emphasized the speed-equals-intelligence myth; I am saying that a: it is a myth and b: this myth is not only commonplace, it is misleading. It generates unproductive conclusions.

But you do tend to slip back and forth occasionally, leaving one with the impression that you are equating the two. And by the way, I get the same impression when reading Eliezer and Kurzweil.

So I feel this is very relevant to making the issue clearer in a general sense. It is an area that deserves better attention to detail, instead of a simplistic debate back and forth about whether the assumption that "Speed equals Intelligence" is valid.

The problem is that "Speed is a Measure" of intelligence. It is only one, though, of a number of critical measures that deserve a clearer outline, and frankly it is not the most important measure on that significant short list. I would prefer follow-up commentary on this listing before offering my "take" on it.

[quote]Laz, keep in mind that creating a moral architecture for a seed AI isn't something that you can do in your spare time. You have to devote your entire life to it to even be a reasonable candidate. I appreciate the way you presented your symbiosis concept as an alternative instead of only arguing against what I said, but I don't think you know what "Friendliness" really is at all. Let me tell you that it's not the dictionary definition of "friendliness". If you're serious about your "symbiosis" theory at all, then you would definitely need to research all preexisting AI morality theory, right?[/quote]


I realize that you folks aren't using the dictionary definition of "Friendliness", and I understand the distinctions that are being offered in the alternative. I am saying that to do so is inherently suspicious. When I first discussed this directly with Eliezer over three years ago, I said I see this as a form of "bait and switch." I don't however think you are setting out to deceive, are you?

The principle of friendliness isn't defined by the dictionary; it is a psychological and organic state between beings. I do research AI theory, and I am not an amateur; I am like the silent hundreds worldwide who are just going ahead, applying their personal hypotheses and thus testing them out. I am presenting symbiosis not just as an alternative to friendliness; I am saying it is a consistently more appropriate organic model. It is a more honest promise, and thus an honest trade.

Understand this carefully: I haven't argued against many of the underlying arguments for what is being offered, because I am in general agreement with Eliezer's observations and propositions as to causality. So why should I debate picayune aspects? I disagree with the conclusions offered.

In particular, one conclusion I think that you can't overcome is the problem of anthropomorphization of AI if you try to incorporate "Friendliness".

Michael, you must address the issue that if the idea of "Friendship" being offered ultimately in NO WAY resembles any definition understood by the common or even more esoteric meanings of the word, then it isn't valid to use the word. Now who is inventing an arbitrary lexicon?

"Mean what you say and say what you mean" isn't just a palindrome it is a way to avoid GIGO in writing computer programs.

[quote]If you're serious about your "symbiosis" theory at all, then you would definitely need to research all preexisting AI morality theory, right? That means reading Creating Friendly AI all the way through and understanding it, doesn't it? Do you have *any* commentary on *any* of the points made in Yudkowsky's paper?[/quote]


I will raise more specific issues if you want but as I said I don't have a lot of problems with the general argument he is making. Yes I am however quite serious, and having the discussions here, and reading not just the papers themselves, but the commentary of many that share this interest is a valid way of addressing the developmental theory.

It is as much here where the perception of how to go forward is being formulated, by those who are in fact building the physical and conceptual architecture for AI; here in the various forums extant throughout the web, not just in acclaimed web and print publications, though those do help to reach a larger audience. The thinkers and the doers on this look for others to compare ideas with, and it is here where those folks gather. So it is our "parley" that becomes the review for tomorrow's students of this debate. We are creating the future textbook on this subject as we speak. Isn't that why you have such a religious fervor about the details of what gets kept in the thread? When the Seed AI awakes, where do you think it will first go to learn of itself in this wide world? I am not even sure that anything ever erased will be beyond retrieval and inspection by this entity.

That said, I do however think we need a faster cross-linkage ability for ourselves in these forums; we need to recognize the importance of consistency to a topic as primary, as long as it respects the need to test the limits of relevance, and don't forget the scroll button ;o) . Even though it is in print, it doesn't mean that we need to reread everything that is posted as we become familiar with the various areas. Spinning off should be "organized", and we need to understand, a priori, that not everybody is going to agree on what constitutes a "New Thread", but this also shouldn't just be a way of deep-sixing an alternative hypothesis.

I will also work towards improving my publication skills, but for the moment I have been satisfied with the forum arena instead of the "Professional Journal". In the meantime I have been on topic, and doing a comparative analysis of the core idea of "friendliness" is salient to that end. Again I ask, Michael, and I do not mean this to pry into your personal relationships, but do you have friends?

Why is it not relevant to compare what you are offering technologically to what We understand WE possess of it and compare that state with what you're presenting?

I know, I know: "because it really isn't predicated on a human understanding of what 'friendship' means." I am trying to tell you how UNfriendly this seems on close inspection.

Go out with your friends, Michael. Analyze the "love" between you as you attempt to quantify it in algorithmic code, and you will understand what I am talking about.

Oh no, Love? That cumbersome, highly emotional, irrelevant, subjective concept!!!

No Michael, no love and it isn't friendly. You may be talking about a number of other types of relationships, in fact I offer one as an alternative perspective, but you aren't talking about friendship.

No Love = No Friendship
No Trust = No Friendship
No Respect = No Friendship
No Sublimation of Self = No Friendship (go ahead squirm everyone on this one)
No Desire for it = No Friendship

I pose all of these general and common perceptions of Friendship as reason to argue that this is either a bait and switch, or you have an inherent subjective issue of anthropomorphism and subjective emotionalism. [ph34r]

Oh, and Michael, I hope you are in no way interpreting my words as insulting personal commentary. I suspect that you have friends and are a very good one to them. I am asking you to carefully re-examine the human quality of friendship as relevant. I don't accept at all the naysaying argument that it is just our humanity getting in the way of a Universal Altruism.

Because when it comes to friends Michael; It takes one to know one...
Nay, nay, nay-nay, na ;o) ;o) ;)


#15 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 04 October 2002 - 08:19 AM

[quote]And I am not saying that YOU in particular have emphasized the speed-equals-intelligence myth; I am saying that a: it is a myth and b: this myth is not only commonplace, it is misleading.[/quote]

Thanks for clarifying that - if I've been too defensive, I apologize.

Speed advantages are mentioned primarily because they are easy to quantify - more predictable than software complexity. The implication is that the human brain is bounded and not really *that* complex - after all, much of the complexity of the brain is highly repetitive, algorithmically reducible, and largely devoted to sustaining functions rather than the necessary tools of sentience or consciousness. Since the brain largely evolved in the absence of general intelligence, back when cognitive selection pressures were weaker and it made more evolutionary sense to grow a stronger hide or sharper fangs than additional nervous system complexity, the cognitive evolution speed limit was rather low. Much weaker relative design forces were shaping the mind into anything complex - but over time, evolution's blind selection process stumbled onto a portion of the fitness space which includes the kingdom Animalia and our complex, multifaceted, energy-intense nervous systems. This means the bulk of the complexity is evolutionarily recent - it upgrades the design template for the brain in *better* ways, at evolutionarily-trained "catastrophe points" where bitflips in low quantities of genes result in design changes that substantially improve the overall function of preexisting cognitive complexity. The causes of cognitive construction range across a sliding complexity scale, where the root causes of cognitive creation can be seen as a series of events on this scale, like a timeline; the human brain is characterized by many cognitive adaptations. ~1,000 distinct events in fact, by my amateur-ish guess based on the following calculation:

(3 bits of information per generation x (5,000,000 years of human evolution / 16 years average replication time) x ~1/1000 of gene bitflips constituting cognitive improvements) = ~1,000 cognitive improvements

Reference: A Speed Limit for Evolution?
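
For anyone who wants to sanity-check that arithmetic, here is a minimal Python sketch of the same back-of-envelope calculation. Every constant is a rough guess carried over from the text above (including ~5 million years for the human/hominid lineage, which is what makes the total come out near 1,000), not a measured value:

[code]
# Back-of-envelope estimate of the number of distinct "cognitive improvement"
# events in human evolution. All constants are rough guesses from the text
# above, not measured values.

BITS_PER_GENERATION = 3                 # information fixed by selection per generation (guess)
YEARS_OF_HUMAN_EVOLUTION = 5_000_000    # roughly the hominid lineage (guess)
YEARS_PER_GENERATION = 16               # average replication time (guess)
COGNITIVE_FRACTION = 1 / 1000           # fraction of fixed bitflips that improve cognition (guess)

generations = YEARS_OF_HUMAN_EVOLUTION / YEARS_PER_GENERATION
total_bits = BITS_PER_GENERATION * generations
cognitive_improvements = total_bits * COGNITIVE_FRACTION

print(f"~{cognitive_improvements:,.0f} cognitive improvements")  # prints ~938, i.e. on the order of 1,000
[/code]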

Hey, wait - only 1,000 adaptations separating us from animals? That can't be right, that's too few. Anyway, humans aren't as complex as we like to think we are. Human equivalency, in reality, lies *just outside* of humanity's current complexity management capacity in software - but within reach of a project with the right reference information and some really smart people. I currently see SIAI as fulfilling this role, but there is a nonzero possibility that I am rationalizing this perceived outcome because I really, really want the Singularity to happen (I do), but as far as I can tell, that's what the reality of the situation appears to be. Remember that the complexity of the brain is dictated by only 750KB of genetic complexity. Consider also that an AI does not need to precisely match all existing human complexity to achieve consciousness - an AI only needs to be a sufficiently functional *approximation* of human cognition - we're building a *general intelligence* here, not a human. An AI project will also have the benefit of a "complexity snowball" effect, as described in singinst.org's introductory article "Seed AI" (short) and Levels of Organization in General Intelligence (long, but an "11 out of 10" paper). There is the potential for masses of complexity being attained through simply telling a sufficiently intelligent prehuman intelligence "See that phenomenon? Analyze and duplicate its causes." At this point it becomes a question of what the AI can *digest*, rather than what the programmers have to *build*. A sufficiently advanced AI can take advantage of the millionfold-accelerated forward-burn serial speed of hardware *on top of everything else*, to participate in and improve its own design, whether that improvement comes at the level of a compiler, a metacompiler, a super-hyper-megacompiler, a conscious collaborator, or a superintelligence. (Even when a Friendly AI is a superintelligence, if it assumes a singleton posture in a civilization, its precise nature will be dictated by the combined and distributed will of all sentient beings - our "metavolition".) The "metavolition" of all sentient beings will blossom in astounding diversity - first-order sentient emergences, followed by second- and third-order emergences derived from the aggregate will of the total set, ad infinitum... but hey, I'm getting off-topic.

Anyway, actual Singularitarians don't need to resort only to speed arguments to answer the question "How attainable is AI, really?"; there are plenty of arguments on the complexity side of things as well - just go to sl4.org and do a search for "software complexity"; the arguments have been rehashed and put to rest dozens of times. But again, *since speed is easier to quantify*, it contributes to the overall psychology of the pro-AI arguments that come immediately to mind - they aren't necessarily the *best* arguments, they're just the *easiest* arguments, so they get undue publicity.

[quote]It generates unproductive conclusions.[/quote]

Are the following conclusions productive?

1. AI is on its way before 2015.
2. Nanotech is also on its way before 2015.
3. Moore's Law will continue, and as nanotechnology begins to emerge, massive software complexity demand coupled in a loop with the exponential rise in the availability of computing power, along with near-future advanced user interfaces, will put general intelligence within the Production Possibility Frontier of a number of AI design organizations - especially those with the most money/intelligence.
4. The first computer-based general intelligence will engage in a strongly self-improving recursive cycle and, by virtue of its love for humanity (hopefully, see below), will fulfill all our explicit (and possibly implicit, if we would allow it) requests.
5. The above was the positive outcome. The negative outcome would be an existential catastrophe or an indifferent/malevolent AI. Humanity has a moral obligation to ensure the integrity of the Singularity.
6. Since how soon the Singularity occurs is strongly correlated with the likelihood of its arriving at all, humanity also has a moral obligation to accelerate the Singularity.

One of the biggest moral questions I see the Singularity meme asking humanity right now is "Would you trust an AI, if it were a person?", or even "Would you ever trust another mind, if it were smarter than you and loved you?"

[quote]But you do tend to slip back and forth occasionally, leaving one with the impression that you are equating the two. And by the way, I get the same impression when reading Eliezer and Kurzweil.[/quote]

Sorry. What have you read of Yudkowsky's that gives you the impression that speed and software complexity are interchangeable? Have you ever read "Levels of Organization in General Intelligence"?

[quote]So I feel this is very relevant to making the issue clearer in a general sense. It is an area that deserves better attention to detail, instead of a simplistic debate back and forth about whether the assumption that "Speed equals Intelligence" is valid.[/quote]

Ok. What other sorts of details do you want?

[quote]The problem is that "Speed is a Measure" of intelligence. It is only one, though, of a number of critical measures that deserve a clearer outline, and frankly it is not the most important measure on that significant short list. I would prefer follow-up commentary on this listing before offering my "take" on it.[/quote]

I strongly agree - "smartness" is one of the most critical measures of intelligence: the ability to handle complexity, intuitively reach solutions, and integrate novel information. And "smartness" is the quality on which the original Singularity concept was based. You can find articles on this at the SIAI website, or try reading some Vernor Vinge.

[quote]I realize that you folks aren't using the dictionary definition of "Friendliness", and I understand the distinctions that are being offered in the alternative. I am saying that to do so is inherently suspicious. When I first discussed this directly with Eliezer over three years ago, I said I see this as a form of "bait and switch." I don't however think you are setting out to deceive, are you?[/quote]

Not that I'm aware of, heh...

[quote]The principle of friendliness isn't defined by the dictionary; it is a psychological and organic state between beings.[/quote]

I think you're mistaking "a psychological and organic state between beings" for "a quality of a mind". What you're giving me is just another arbitrary intuitive definition, and it is wrong. The Friendliness architecture is specified in Creating Friendly AI, not in any number of your guessed definitions.

[quote]I do research AI theory, and I am not an amateur; I am like the silent hundreds worldwide who are just going ahead, applying their personal hypotheses and thus testing them out. I am presenting symbiosis not just as an alternative to friendliness; I am saying it is a consistently more appropriate organic model. It is a more honest promise, and thus an honest trade.[/quote]

You're still operating on your intuitive definition of "Friendliness". Friendliness isn't a "promise" we make to the AI; it's an innate property the AI has by virtue of design. "Symbiosis" isn't an alternative to Friendliness at all, because you haven't written anything about Symbiosis outside of this website, as far as I can tell.

[quote]Michael, you must address the issue that if the idea of "Friendship" being offered ultimately in NO WAY resembles any definition understood by the common or even more esoteric meanings of the word, then it isn't valid to use the word. Now who is inventing an arbitrary lexicon?[/quote]

"Friendship" is a name given as a technical word because it approximates the inuitive definition. A Friendly AI would basically just be a nice person without an observer-centric goal system that we can upload. You're misperceiving an overspecialization in the definiton of the term that isn't there. If you want to understand what Friendliness would actually do when wielded by a self-improving mind, you have to mentally model a mind-in-general-that-has-the-Friendliness-architecture, not a human-that-has-been-asked-to-be-"friendly". Lots of the doubt surrounding the implementation of Friendliness stems from the way we are preprogrammed to think about other minds - we anthropomorphize because it's the evolutionarily adaptive thing to do, but with AI, making many of these distinctions is pointless. We're talking about an AI that is already *at least as good* as its programmers at reasoning about morality, and displays an obvious desire to continue to please all sentient beings (in "voice" as well as in source code). Lots of people have trouble modelling a normative benevolent mind, but I now find it quite easy. All you need to do is think about that doesn't have observer-centered goals, explicitly avoids observer-centered goals in fact, doesn't rationalize, knows what the unambiguous definition of "good" or related notions is, and basically embodies the essence of our humanity, our moral conscience. This mind *wants* to be incredibly good because this is ver only goal, and ve has no reason to stray from it - a mind with no concerns outside of benevolence that can alter its' own source code can take all actions to ensure the eternal future fulfillment of its goal structure - let sentient beings do whatever they want without violating the volition of one another (or whatever better analogue the mind comes up with.) Sorry about the arbitrary lexicon, it wasn't my choice.

[quote]I will raise more specific issues if you want but as I said I don't have a lot of problems with the general argument he is making. Yes I am however quite serious, and having the discussions here, and reading not just the papers themselves, but the commentary of many that share this interest is a valid way of addressing the developmental theory.[/quote]

When do you plan to write up a design comparison of a Symbiosis benevolent AI architecture relative to a Friendliness benevolence architecture? I'd like to see *someone* do so, one day.

[quote]It is as much here where the perception of how to go forward is being formulated, by those who are in fact building the physical and conceptual architecture for AI.[/quote]

[quote]When the Seed AI awakes, where do you think it will first go to learn of itself in this wide world? I am not even sure that anything ever erased will be beyond retrieval and inspection by this entity.[/quote]

So you believe in Universal Resurrection? Haha. And people say *I* have religious fervor...

[quote]That said, I do however think we need a faster cross-linkage ability for ourselves in these forums; we need to recognize the importance of consistency to a topic as primary, as long as it respects the need to test the limits of relevance, and don't forget the scroll button. Even though it is in print, it doesn't mean that we need to reread everything that is posted as we become familiar with the various areas. Spinning off should be "organized", and we need to understand, a priori, that not everybody is going to agree on what constitutes a "New Thread", but this also shouldn't just be a way of deep-sixing an alternative hypothesis.[/quote]

Um, if something is off-topic, it's off-topic. None of what you're talking about is related to "Singularity - What For?", an *activism* issue; it's more like "What AI Design is Best?" or something. In my opinion as moderator, that's off-topic. Posting a new thread doesn't deep-six a hypothesis - it's right there on the forum for everyone to see. When I start a "Singularity - What For?" thread, it's not an opportunity to "argue against any of Michael's beliefs", but a "Singularity - What For?" thread, exactly as it says. You can attack all my beliefs in the general forum.

[quote]Michael, and I do not mean this to pry into your personal relationships, but do you have friends?[/quote]

Yes, indeedy I do.

[quote]Why is it not relevant to compare what you are offering technologically to what We understand WE possess of it and compare that state with what you're presenting?[/quote]

Not understanding your comment...

[quote]I know, I know: "because it really isn't predicated on a human understanding of what 'friendship' means."[/quote]

It really isn't predicated on a human understanding of what "Friendship" means.

[quote]I am trying to tell you how UNfriendly this seems on close inspection.[/quote]

What, how unfriendly what seems? The Friendliness architecture? I think you may be confusing the Friendliness content, or the result-of-the-result-of-the-result-of-the-original-Friendliness-content, with the Friendliness architecture (metamoral outline) presented in Creating Friendly AI (which it seems you need to read again).

[quote]Go out with your friends, Michael. Analyze the "love" between you as you attempt to quantify it in algorithmic code, and you will understand what I am talking about.

Oh no, Love? That cumbersome, highly emotional, irrelevant, subjective concept!!![/quote]

Haha, nice characterization of me. Here I should note that philosophers of mind discourage using introspection as evidence for new theories of mental functioning.

[quote]No Michael, no love and it isn't friendly. You may be talking about a number of other types of relationships, in fact I offer one as an alternative perspective, but you aren't talking about friendship.

No Love = No Friendship
No Trust = No Friendship
No Respect = No Friendship
No Sublimation of Self = No Friendship (go ahead squirm everyone on this one)
No Desire for it = No Friendship

I pose all of these general and common perceptions of Friendship as reason to argue that this is either a bait and switch, or you have an inherent subjective issue of anthropomorphism and subjective emotionalism.[/quote]

This is all based on your intuitive definition of Friendliness, which, as I've said, differs from the technical definition entirely.

[quote]Oh, and Michael, I hope you are in no way interpreting my words as insulting personal commentary. I suspect that you have friends and are a very good one to them. I am asking you to carefully re-examine the human quality of friendship as relevant. I don't accept at all the naysaying argument that it is just our humanity getting in the way of a Universal Altruism.[/quote]

No offense taken. Thanks for giving me the opportunity, actually.



