[quote]"Why is it that you wish to see the singularity realized?"[/quote]
I think Eliezer Yudkowsky says it much better than I could:
http://singinst.org/...ingularity.html
(a short-ish document.)
[quote]From what you have stated, you claim to realize that there is indeed alot of suffering and pain that occurs as many would put it "Needlessly", on this rock that we call home.[/quote]
To quote Anand's "Why Support the Singularity Institute?":
"180 million are injured intentionally or unintentionally per year. 20 million children die per year from hunger. 680 million have a mental or physical illness. 25 million are in slavery by force, or by the threat of force. 3 billion live on two dollars or less each day. One person dies every two seconds; 150,000 die per day; and 55 million die per year. These are problems that thousands of for-profit and non-profits try to solve with hundreds of billions of dollars. If one-hundredth or one-thousandth of the resources used to try and solve them with humanity’s present intelligence were instead applied to improving humanity’s intelligence, thereby improving our ability to solve all of them, the latter’s return on investment would be much greater than the former, even with the disproportion in investment. John Morley is often quoted as saying, “It is not enough to do good; one must do it in the right way.” This relates to the necessary steps to improve our world. The first is the intention; the second is the determination of the most effective way to fulfill that intention. By not focusing on the safe improvement of humanity’s intelligence, the 100 largest non-profits in the world, all with yearly incomes over $90 million, have not taken the second step. It’s unfortunate SIAI isn’t one of the largest non-profits, because we have taken that step. We are working to help everyone in the most effective possible way—by safely achieving the Singularity."
[quote]You, in your compassion, a compassion no doubt directly influenced by your own established "morals" and ideals, wish to end such suffering.[/quote]
Yep.
[quote]After much ado, you realize that given the apparently current "Human limitations", these problems will never be solved.[/quote]
Yes. *Real* limitations, not apparent.
[quote]So, you opt to find the most efficient method for solving them right?[/quote]
Yes, but the phrase "most efficient" might throw somebody off. If the efficiency of the problem-solving process becomes an end in itself - then all you've done is create another problem, obviously. Part of intelligence is seeing when your problem-solving method is itself a problem.
[quote]Well what if we could create this super intelligent system, one modeled off of our supposed "Idealistic Benevolence", that could end up thinking and enacting its own artificial will far beyond human capacity?[/quote]
The phrase "Idealistic Benevolence" is misleading. It implies that someone is taking their idea of Idealistic Benevolence and forcing it upon everybody else. If the benevolence created in the spark of the Singularity is oppressive, bothersome, or annoying in any way to anyone, then the initial group of people that sparked the Singularity failed to protect its integrity. Another reason why I want this group to be bigger, smarter, and more active.
"Enacting its own artificial will far beyond human capacity" is also misleading. This implies that something is going to come along and snatch up all the trophies of potential future accomplishments - if you really want to reach a final goal by traversing a long incremental path with a series of rewards dependent on the psychological sensation of "being there before anyone else", then what kind of benevolent being would belittle that?
The term "artificial" is largely useless in this context. A toaster is "artificial". A freeway is "artificial". Windows is "artificial". A complex algorithm created in a sea of neural networks and adaptive systems engaged in live feedback with an army of programmers with advanced interfaces that produces the cure for AIDS is "sort of artificial", but not really. A true AI would not be "artificial" in any important sense at all. And a full-blown superintelligence, what you're referring to in the above paragraph, transcends all notions of natural-ness or artificial-ness to create a novel variety of complexity altogether.
Also, the entire way you present this argument is based on the assumption that we have a choice about whether to create transhuman intelligence. The state and trajectory of our society today is not one we could sustain for long without substantial improvement. A planetwide nanotech threat-monitoring system would need to be piloted by an altruistic transhuman; the demand for purely altruistic and objective judges is there, as is the demand in politics, economics, etc... eventually, collective demand would result in the emergence of a transhuman intelligence anyway. See Mitch Howe's "Rapids of Progress", posted right here in this forum for your viewing amusement.
[quote]Ok first off, it appears to me like you "jumped the gun" as the saying goes, or rather, decided to take the highest conceivable leap in heuristics and reasoning; one that concludes that your own reasoning (as well as the rest of all humanity) is flawed and needs "augmentation" by a system that has been allowed to evolve past its own limitations, limitations defined and created as a direct result of our own inferior reasoning...etc...etc...[/quote]
Whoa there. Lots of words with heavy emotional connotations in this paragraph that are useful in the context of human social networks, but completely break down when you're looking at things from the Singularity perspective. First, the foundational tenet of *transhumanism-in-general* is overcoming biological limitations through technological progress. The big, scary "Singularity" concept is simply an extrapolation of the consequences of a greater-than-human intelligence going out and making a difference in the world. It's easy to say "oh no, automation of automobile manufacturing facilities will make all those poor factory workers lose their jobs!", but it's hard to conceive of the new, better employment opportunities that emerge spontaneously in an economy where the grunt work is done by machines.
Secondly, you're talking as if the creation or non-creation of transhuman intelligence is solely my choice, or else addressing me as a representative of the intelligence advocacy community in general. There are forces at work here bigger and greater than all of us. See Kurzweil's book précis at KurzweilAI.net. Progress is accelerating - and riding this wave is the only way we can survive it.
"Flawed". Saying "human intelligence is flawed" is a very general blanket statement, with a wide range of potential interpretations. Let me try to single out a few. Speaking as someone who's grown up in a cultural environment composed solely of human-level intelligences with human-characteristic motivations and concerns, human intelligence is fantastic. It solves problems, creates fun, carries moral weight, and so on, but our yardstick for measuring the value of intelligence is a yardstick custom-made for measuring intelligence amongst other humans, *not* for measuring the competence of a given intelligence relative to the space of all possible intelligent systems. We *don't know* what an entity just off the right of the bell curve could do, because we've never seen one, and the advantages of such an intelligence would be qualitative as well as quantitative. Compared to such possibilities, human intelligence is "an interim step", not "flawed" or "incorrect" or "in dire need of augmentation". Our respective judgements of humanity's necessity for enhancement are not based on the respective intensities of our love for humanity, but on the size of the yardstick we choose to use in our measurements.
Incorrect interpretations that the "flawed" statement might elicit:
1) "Michael is immoral because he believes human intelligence is flawed. If he believes human intelligence is flawed, then he must think that human beings have low moral value, and I'll assume that in place he assigns spare moral value to, um, himself, or some scary artificial machines."
2) "Michael believes human intelligence is flawed, and everyone knows that hatred of humans implies alienness and insanity, and therefore Michael is alien and insane. Therefore he is arguing for building these scary artificial machines as an escape from the reality which he cannot avoid."
and so on...
[quote]Do the words: "Slow down a sec partner. Yer gonna get thrown off yer high horse if you kick him with a spur under yer saddle..." mean anything to you?[/quote]
Thrown off? How?
[quote]I've read all the arguments, I've even read the books on it and the proposals for "what could be" as well as "what should be" etc...and while the benefits seem astounding in many areas, not one of them seems "beyond human comprehension" , not even remotely.[/quote]
That's because what you've read was written by humans. Reading books written by humans, by definition, ensures that all their contents will be humanly comprehensible. 1000-letter languages are outside of your comprehension. You may be able to analyze a pathway to a goal that is dependent on the conjunctive occurrence of a relatively small set of actions, but you can't foresee a solution which takes half the time but involves 10 times the actions of the former. Humans solve problems with human-level elegance. Humans generally don't even pay attention to problems that would require greater-than-human problem-solving elegance, because our evolutionary history dictated that these goals are adaptively irrelevant (too far out of reach) and therefore made them perceptually unsalient. A real "meaning of life" - a declarative philosophical goal that all humans interpret as personally important - is outside of your ability to conceive (it may be inconceivable even in principle, actually.) They say that some concepts are better communicated in certain languages than in others - this may or may not be entirely true, but if there is *any* concept that can be stated more elegantly using a different style of linguistic tags, then the extra time or effort used to state the concept in the "inferior" language was a waste. Creating higher standards for intelligence and elegance will not solve all possible problems and then suddenly make everything boring - if a society of amoebas happened to accidentally solve all their "problems" by increasing their intelligence, they would not have reached the end of creation, but a new beginning which opens up a combinatorially larger problem-solution space. Are there still experiences left in the world which you think would "open up your mind" a bit more?
Since your mind isn't opened up yet, that open-minded state is apparently beyond your current comprehension, patiently waiting for you to cross paths with the experience that will elicit a new enlightenment within you. Intelligence augmentation is likely to have many of the same benefits as revelation through experience, and more so.
[quote]I can conceive of anything that exists within my defined reality.[/quote]
Yep, that's because our realities are defined by what we can conceive...
[quote]By what combination of sensory data, both manipulated internally and externally, am I (or any other human for that matter) unable to comprehend the super-radical artificial mind of the so-called Singularity?[/quote]
Not sure what you mean there...but I'll guess. Say there's an entity 1000 times more complex than you, and its overall goal system and motivations are dictated by the complex interaction of the atomic components of that entity. How could you possibly project the goals of that entity, or guess how they would change over time, or how that entity would react to a given set of sensory stimuli? Pretend that humans evolved in a higher-entropy environment. We would need a load of supplemental neural circuitry in order to develop general intelligence; otherwise it would take us forever to perceive regularities in reality and their causal relatedness. The time it takes for a given entity to perceive an external detail and match it up against all previously learned, internal complexity is strongly correlated with that entity's ability to draw causal connections between that detail and other events, use that external detail as a tool in accomplishing a desired action, reassign the appropriate level of salience to varying categories of future sensory input, and so on. All these tasks are central to our current definition of intelligence, and the fact that humans can create absurdly coarse-grained mental imagery of past, present, and future events doesn't mean that they can "comprehend everything in the universe" in the way a superintelligence could. What is dark matter? Maybe a superintelligence will be the first to figure that out.
[quote]That's a big affirmation. One that I've already tested on many accounts.[/quote]
See http://www.psych.ucs...cep/primer.html
[quote]That will herald the era of ultimate (or seemingly ultimate) power and control, the likes of which is only currently attributed to distinguished monthesitic "Gods", if you will.[/quote]
"Seemingly ultimate"! We can't know because we aren't at that plateau yet! Basically what you're saying is, "I can see the ground beneath my feet, and that peak over there, even though I'm just a lowly human! I can see everything!" Just because what you know is impressive to mainstream humanity doesn't mean it would be impressive to something smarter than a human. Maybe we're thinking about the wrong goals entirely, or framing them conceptually in the most convoluted and unnatural of ways. We don't know, because we aren't there yet.
What is the a priori chance that the first species to evolve self-awareness will be able to comprehend every physically possible configuration of matter in the universe, and every possible concept that goes along with those configurations?
Imagine a society composed entirely of 1-micron-high human beings. These human beings would be more concerned with dust specks, bread crumbs, and sand particles than cars, skyscrapers, or shopping malls. They could create a whole culture - language, customs, beliefs, methods, sciences, etc. - completely different from our own. But in this case we're at least talking about *humans*! What about Garflunks? What if we're talking about entities that communicate by selectively contracting or expanding a millions-long series of tiny hairs on their abdomens, and these Garflunks live in a world with different physical laws than ours, physical laws that allow the stable formation of Escher-esque planets and cupcake-shaped stars? What if these beings communicate and think so quickly that a human being introduced to their environment would be so blinded by the complexity as to render any attempt at a meaningful classification or interpretation useless? The human culture and society you see today is a tiny subset of a tiny subset of a tiny subset of all possible cultures and societies. How can you say there's nothing new to learn, to see, or to comprehend?
[quote]Did I need a super-intelligent artificial system to do that? No.
Will I need a super-intelligent artificial system to transpose those thoughts into substantiated reality? Possibly...but I highly doubt it. The mechanics in and of itself, should prove to be all the "Power" I will need.
Now will I need super-intelligent artificial systems to create/invent/engineer such "power(s)"? I highly doubt it.[/quote]
The Singularity isn't a superintelligent artificial system. The Singularity is an event in which a slightly-greater-than-human intelligence is created with the will to do good. That's all. This intelligence could create an army of organics as easily as it could an army of artificials, but both of these labels miss the point.
[quote]Granted, technology can and usually does assist in decreasing the subjective ammout of time needed to master such a skill, however, I don't see the need to radically change an existing human system for one that's already established and proven to work.[/quote]
Great. Then your system doesn't need to be changed after the Singularity. But you might consider working towards the Singularity for those who *do* want their system changed.
[quote]Another simple quote: "If it ain't broke, don't fix it."[/quote]
Why don't you tell that to the 180 million who are injured per year, the 20 million children who die per year of starvation, the 680 million who have a mental or physical illness, the 25 million who are in slavery by force, the 3 billion who live on two dollars or less each day...
What this whole thread appears to come down to is your fear of humanity no longer being the greatest force of intelligence in the universe, or of your life being changed when you don't want it to be. If you plan to live forever, the Singularity *will* happen in your lifetime. Quite shortly, actually. The kickoff of the Singularity will render all current technological goals obsolete in the objective scheme of things, although some individuals may implicitly choose to continue their struggles, a choice which will be respected by beings of transhuman intelligence and morals. Futurists and intellectuals from all walks of life would have a lot of great things to say about the Singularity if they could just get past that initial knee-jerk reaction to the prospect.

Part of the reason why I spread the Singularity meme is to increase the body of people available for input before the real Singularity is actually initiated and it's too late for anybody to have a say over the initial conditions of the first transhuman mind. Every day, all of humanity works towards a vague better future, but if their vision of the future comes to pass, it will be within the context of a pre-existing Singularity. If we want to have leverage over the *real* future, the *real* crossroads that separates eternal life from cold death, then we must consider the Singularity itself - all other variables in the vast sea of reality soup are only relevant insofar as they influence the outcome of the Singularity. I didn't make it like that, but it appears to be the way the universe currently works.