  LongeCity
              Advocacy & Research for Unlimited Lifespans



What Flavor Of Transhumanist Are You?



#31 randolfe

  • Guest
  • 439 posts
  • -1
  • Location:New York City/ Hoboken, N.J.

Posted 11 December 2003 - 11:13 PM

Wasn't the movie "2001: A Space Odyssey" based somewhat on this idea? It has been years since I saw it. I just remember that the computer, HAL, ended up in a struggle with the human.

Did anyone see the author of "The Age of Spiritual Machines" (Ray Kurzweil) when he did his presentation on Book TV a few years ago? I still have that tape. It impressed me. I will have to look at it again.

His theory was that within ten years, they could have a computer that could hold a conversation with you and you wouldn't even know it wasn't human. Within twenty-five years, he predicted, artificial intelligence would be greater than that of any human.

As I understand the Singularity debate, the part I find "nonsense" is where this AI takes on human motivations (striving for power) and takes over the world.

#32 reason

  • Guardian Reason
  • 1,101 posts
  • 251
  • Location:US

Posted 12 December 2003 - 12:05 AM

I think this thread nicely reinforces my perception that GAI is a much harder sell than life extension or cryonics...

Reason
Founder, Longevity Meme
reason@longevitymeme.org
http://www.longevitymeme.org

#33 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 12 December 2003 - 12:54 AM

"Harder sell" maybe, but unfortunately technological difficulty is not strongly correlated to public approval/disapproval. GAI may be dozens of times harder (than say, nanotechnology) to inform people about, but there's no evidence that it will take a dozen times as long to invent, implying that we'll probably be somewhat unprepared when it arrives. Bad news.

Reason, what on this thread, exactly, reinforces your mentioned perception? Randolfe stating that the Singularity is nonsense because AIs will always adhere roughly to human goals, like hammers and nails, no matter how intelligent they become, and this will happen automatically? Or was it because Laz suggested remaking "The Krell Machine"? ;)

I do agree that the Singularity/AGI topic is hard to present, but when I try to find out *why*, I often come up empty-handed. There is a whole continuum of beliefs and opinions - a greyscale, not isolated plateaus - so what I'm trying to figure out is on which specific points people personally tend to draw the line. Randolfe draws the line closer to tradition than most, and I'm pretty sure I can think of specific reasons why - I draw the line much further away from tradition, and I know my specific reasons. It's just the area in between that I'm trying to understand better. Can anyone help me out?


#34 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 12 December 2003 - 01:04 AM

Michael, for the record, the movie is called "Forbidden Planet" and it has spawned many rip-offs but no remakes. Its special effects were actually very important to cinematography historically, and its theme ran more than a decade (mid-1950s) ahead of any concept of computer intelligence in the modern sense. It introduced both Robby the Robot (later of Lost in Space fame) and the idea of cybernetically enhanced human intelligence.

Remaking it as a movie with modern effects is a no-brainer (groan) and has to be better than the Sci-Fi Channel reintroducing us to the Cylons' "Next Generation" in Battlestar Galactica II.

#35 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 12 December 2003 - 01:09 AM

Perhaps so, Laz! You would probably have a better idea of this than I would - I haven't sampled much sci-fi. My father is a film director who loves old sci-fi movies, and has spoken fondly of Forbidden Planet...that's all I know.

#36 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 12 December 2003 - 01:23 AM

Michael, just out of curiosity, what are your own specific reasons for drawing the line where you draw it? Is there something you've already published on the web that you could link to?

I think people tend to draw the line with AI simply because it does, in fact, imply better-than-human intelligence - at least eventually. People generally have trouble coping with the fact that they are not as competent as they would like to be in a world seemingly saturated with competent people who are already making all the difference. People who are actually competent are just as human as the incompetent with respect to how they would feel if their own competence began to diminish in value in the face of superintelligence. People generally don't care as much about what they have as about what they feel they represent and can economically and socially accomplish. Tell most people that they will be living in paradise tomorrow but will be utterly worthless to others and the cosmos (since superintelligence is doing all the work), and most likely they would prefer their chaotic world in which they are making a perceivable difference.

#37 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 12 December 2003 - 05:43 AM

When it comes to Singularity policy, I'm in the same boat as Nick Bostrom, Brian Atkins, and Eliezer Yudkowsky. So the links would be:

www.singinst.org
www.nickbostrom.com
www.yudkowsky.net

with more advanced stuff at:

www.sl4.org

Jace, interesting comments you put forth regarding the emergence of superintelligence; perhaps superintelligence will refrain from creating too much visible structure in the eyes of willingly-baseline humans so as to avoid scaring or intimidating them - I don't know. I just hope that the first superintelligence can apply the *same* moral reasoning process that we do (or an unambiguously improved version of it) when considering which future humanity would genuinely want most.

Having a superintelligence that *genuinely* respects your (and the rest of sentientkind's) volition is different from all previous dramatizations of paradise or Heaven - it would genuinely be the best of all possible worlds. So instead of telling people "you could live in paradise tomorrow", what should I tell them? The real answer is quite complicated - it lies at the end of a long series of steps that take considerable time and effort to traverse.

Regardless of what people in the US and other First World countries claim to want, there are hundreds of millions, or even billions of people in conditions of straightforward pain and suffering. Starvation, rape, repressive regimes, prison camps, gangs, et cetera, et cetera. These people, obviously, deserve to be freed from their torment - and not necessarily through death (unless that's their informed choice.) This makes discussions of whether humans would feel incompetent in the face of superintelligence rather academic.

If a human has a problem with an intelligence greater than itself, then that human is still suffering from a "zero-sum psychology", a mental appraisal system that irrationally categorizes all powerful entities as potentially threatening ones. This system comes from the survival challenges of our EEA (environment of evolutionary adaptedness), and becomes obsolete when dealing with transitions as massive as the Singularity (or even the atomic bomb.) One human's (or even a billion's) zero-sum psychological reaction to the emergence of superintelligence (say, through vocal disapproval) will never outweigh the moral imperative of making major progress in the fight against nonconsensual suffering and death (to a benevolent being, anyway.) Like the hand axe, the H. sapiens shell will eventually phase out, as will zero-sum psychologies, and hopefully, beings of all sizes and shapes will be able to live in peace. (Or maybe not - I obviously can't be 100% sure, humans and jealousy might be around until the end of time, but it seems unlikely.) Doesn't sound like too far-out a scenario, does it?

#38 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 12 December 2003 - 06:16 AM

Thank you for the links. I already had singinst.org and nickbostrom.com bookmarked from the past. Good stuff. For some time, however, I've been reluctant to give any attention to EY, for the sole reason that he seems to worship AI for inheriting a condescending pedestal in the same way god-fearers do. I don't imagine that I will completely ignore his work forever, though. Gotta drag through his expositions sometime if I am to understand what the hell you are talking about half the time.

Michael: If a human has a problem with an intelligence greater than itself, then that human is still suffering from a "zero-sum psychology", a mental appraisal system that irrationally categorizes all powerful entities as potentially threatening ones.

I'm not so sure it's always an issue of intimidation. People simply want to feel as though it makes a difference that they are alive. Many people feel that if they committed suicide, nothing would change and everything would go unfazed. This is the psychology I'm referring to. It may be similar to zero-sum, but I don't think it is entirely.

Michael: Like the hand axe, the H. sapiens shell will eventually phase out, as will zero-sum psychologies, and hopefully, beings of all sizes and shapes will be able to live in peace. (Or maybe not - I obviously can't be 100% sure, humans and jealousy might be around until the end of time, but it seems unlikely.) Doesn't sound like too far-out a scenario, does it?

You place a lot of emphasis on the hypothetical aspect of the Singularity. Have you written anything in the past that addresses how the economy must evolve to foster a direction that moves toward a Singularity, and not other futures? Perhaps there is some information on this as well to which you can lead me. I've been reading some essays at WTA lately, and it tentatively seems that futurist thinking is only plausible when not taking into account the social variable.

#39 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 13 December 2003 - 02:40 AM

Jace, thanks for your honesty regarding your opinion of the Singularity, EY, and other issues. I don't think it's right to say that Eliezer "is obsessed with" AI; he started off simply by being impressed with the idea of "transhuman intelligence", and after he decided that AI was likely to arrive before the IA route, he focused most of his work on AI. Also, how does he take up a "condescending pedestal"? I've read all of his work and I submit that only *some* of it is slightly condescending, mostly the older stuff. Perhaps the condescension comes partially as a side effect of being so smart - I don't know (that doesn't make it justified, of course.) Anyway, of course, if scientists only read the papers of colleagues they thought had good personalities, no one would get any science done...

If "everyone feeling like they're making a difference" and "the ending of nonconsensual suffering and ignorance" can't both happen at once, then the former will have to be violated, unfortunately. Hopefully, lots of stuff can be done in attempts to accommodate these people. It does point out a good point though, that even the Singularity can't be "perfect", because "perfection" is probably physically impossible, and all past arguments against this have been unconvincing. However, "best of all worlds" may be possible.

I'm not trying to place a lot of emphasis on the "hypothetical aspect" of the Singularity (what is the "hypothetical aspect"?) I was merely trying to comment on the extension of a trend that has already been happening through history (movement from zero-sum to positive-sum psychologies.) This may not be "old fashioned concrete style" like the numbers of human economics, but I wouldn't call it "hypothetical".

Based on your comments, I think we might mean entirely different things when we say the word "Singularity". The Singularity would simply be an invention, the creation of a transhuman intelligence, that would then go on to create more intelligence. I don't see a possible course of the economy in which this is impossible. (The technological prerequisites for transhuman intelligence are either already here or on their way shortly, and are desirable for a lot of other reasons besides "creating transhuman intelligence".) Futurist thinking usually should involve the "social variable", but "the social variable" means "the aggregate behavior of humans", and the aggregate behavior of humans would be largely irrelevant from the point of view of a recursively self-improving transhuman intelligence (in the pragmatic sense, I mean, not the moral one.)

Say you're trying to predict the future of a technological chimp society. Suddenly, one of the chimps invents a dimensional portal that brings in several thousand human beings with jet fighters, mortars, and machine guns. Say they want to kill all the chimps. The "social" patterns of the chimp society previous to the opening of this dimensional portal (and throughout the ensuing massacre) are largely irrelevant. Humanity has to get over itself - what Joe Normal (for that matter, Joe Intellectual) thinks is irrelevant to much of the unfolding of events behind the Singularity, except insofar as the seed AI's (or seed IA's) most recent iteration chooses to care. All we can do is set the *initial conditions* - make sure that transhumans care about humanity, so that we can call ourselves "we", instead of making moral distinctions between us and transhumans - an "Us vs. Them" scenario. If we screw up on the engineering aspect, then there won't be time for us to hate transhumans - the most likely scenario would be immediate disassembly (due to the recursive self-improvement aspect.)

The key to understanding the Singularity is understanding the concept of recursive self-improvement. Very, very few people understand it; I'd say that less than 1% of the transhumanist community does. It's a very technical idea that employs points gleaned from cognitive science and evolutionary psychology. The very best explanation in existence is here:

http://www.singinst....OGI/seedAI.html

Grasping the "whole picture" requires understanding the technical meanings of the little points he mentions, like "Mutations are atomic; recombinations are random; changes are made on the genotype's lowest level of organization (flipping genetic bits); the grain size of the component tested is the whole organism". That's my current best guess for why most transhumanists don't understand it.

#40 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 13 December 2003 - 04:53 AM

Michael: I've read all of his work and I submit that only *some* of it is slightly condescending, mostly the older stuff.

Yes, I was referring to older stuff, which is where I decided to begin when I initially set out to give some attention to EY. I think it was some quote in an article that went something like this: "If I had to make a choice between AI and humans, I choose AI," which to me is along the same lines as, "I choose God so that I can speak on behalf of Him and look down upon Humanity since they are congenital subordinates," and that's where I got distracted and veered away.

Michael: Anyway, of course, if scientists only read the papers of their colleagues that they thought had good personalities, no one would get any science done...

LOL! Noted.

Michael: I don't see a possible course of the economy in which this is impossible.

I don't either. But scientists are usually under a lot of scrutiny. And they certainly don't accumulate their own capital to carry out their research. Therefore, I can see many elites getting collectively involved if and when their influence begins to look threatened. Regular people won't have the power to terminate a Singularity underway, but the elite will, because no matter how intelligent a few AIs are, they are still operating within institutional forces. If the AIs begin to obstruct, or look likely to obstruct, the large benefactors of business, a militia will be on call, and that militia will likely have the nanotechnological resources by then to preempt AI efforts at accumulating their own resources for combat.

It would be a mistake to assume that the Singularity will never take place. But, in my opinion, a more likely scenario is that AIs won't physically be allowed to do much more than figure out ways to augment the elites, who, in turn, I think, will put forward institutional initiatives that foster the augmentation of regular people before AIs are allowed to do anything further.

Michael, any talk about the Singularity is hypothetical, because the theories and principles that are applied to move toward recursively self-improving AI are a different matter from speculation about future scenarios.

#41 John Doe

  • Guest
  • 291 posts
  • 0

Posted 13 December 2003 - 10:52 PM

I detected an arrogant tone in Yudkowsky's email to me.

The key to understanding the Singularity is understanding the concept of recursive self-improvement. Very, very few people understand it; I'd say that less than 1% of the transhumanist community does. It's a very technical idea that employs points gleaned from cognitive science and evolutionary psychology. The very best explanation in existence is here:


Recursive self-improvement is neither technical nor gleaned from cognitive and evolutionary psychology. It simply states that things that improve their ability to improve themselves create a positive feedback loop that grows exponentially. If I improve my ability to improve my intelligence, and doing so allows further acceleration, I am going to be very smart very quick.
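
To make the shape of that feedback loop concrete, here is a toy numerical sketch (hypothetical Python, with made-up step counts and gains - it models nothing about any real AI):

    # Toy sketch of the feedback loop described above. All numbers are
    # made up for illustration; this models nothing about a real AI.

    def fixed_improvement(steps, gain=1.0):
        """Capability grows by a constant amount each step (no feedback)."""
        capability = 1.0
        for _ in range(steps):
            capability += gain
        return capability

    def recursive_improvement(steps, rate=0.1):
        """Each step's gain is proportional to current capability,
        so improvements also improve the ability to improve."""
        capability = 1.0
        for _ in range(steps):
            capability += rate * capability  # the loop feeds back on itself
        return capability

    for steps in (10, 50, 100):
        print(steps, round(fixed_improvement(steps)), round(recursive_improvement(steps)))
    # After 100 steps the fixed improver sits at 101, while the recursive
    # improver is at roughly 13,781 - exponential growth, as stated above.

Swap "capability" for strength or intelligence and the curve is the same.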

Grasping the "whole picture" requires understanding the technical meanings of the little points he mentions, like "Mutations are atomic; recombinations are random; changes are made on the genotype's lowest level of organization (flipping genetic bits); the grain size of the component tested is the whole organism".  That's my current best guess for why most transhumanists don't understand it.


Yudkowsky mentions these details to explain the limitations of natural evolution and not to explain recursive self improvement.

The reason most transhumanists do not appreciate recursive self-improvement is that human beings have never been able to substantially alter their biology or genomes (yet). We could never improve our ability to improve ourselves, so we failed to appreciate the power of doing so. Humans also evolved as the supreme terrestrial intelligence. Transhumanists are much quicker to appreciate our ability to improve our health or strength because we know of species that surpass our strength and health. Cheetahs can race cars and trees live for centuries. But what cheetah or tree ever wrote a book? The fundamental shift I needed to understand the Singularity was to realize that intelligence is quantitative and not qualitative.

#42 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 14 December 2003 - 12:24 AM

I disagree that the (full) concept of recursive self-improvement is neither technical nor gleaned from cognitive and evolutionary psychology. In the broadest possible terms, yes, the definition of "recursive self-improvement" you give does apply. But understanding *why* we should expect recursive self-improvement to be so extremely powerful relative to other advances requires a technical knowledge of the comparative advantages and disadvantages of evolution, human brains, software programs, and young AIs. For example, few people know that most cognitive programs must execute in 100 sequential steps, due to the limitations of neuron firing and information processing within the human brain, with huge implications for the advantages of engineered vs. evolved brains. Heck, many people are still hung up on the Church-Turing thesis!
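
(As a back-of-the-envelope illustration of that serial-depth point - the firing rate, task time, and clock speed below are rough assumed figures, not measurements:)

    # Rough arithmetic behind the "100 sequential steps" point above.
    # Firing rate, task time, and clock speed are assumed ballpark figures.

    neuron_firing_rate_hz = 200       # a fast neuron spikes ~100-200 times per second
    task_time_sec = 0.5               # time for a quick act of recognition

    brain_serial_depth = neuron_firing_rate_hz * task_time_sec
    print(brain_serial_depth)         # => 100.0 sequential neural steps

    cpu_clock_hz = 2e9                # an assumed ~2 GHz processor
    cpu_serial_depth = cpu_clock_hz * task_time_sec
    print(f"{cpu_serial_depth:.0e}")  # => 1e+09 sequential steps in the same half second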

Yudkowsky's explanation of the limitations of evolution goes hand in hand with his explanation of the advantages of intelligence - the essence of study for an AI-based Singularity is the comparative advantages of brains designed by evolution vs. brains designed by intelligence.

I don't think it should require a concrete example for someone to accept the possibility and power of recursive self-improvement - these qualities can be extrapolated straightforwardly from a minimum of assumptions (but require lots of technical data to really substantiate and explain.) Requiring a visual example of something that has been forthcoming for decades or centuries (such as the computer, for example) is the hallmark of short-sightedness.

(Yes, in spite of my railing against misunderstandings of recursive self-improvement in the above paragraphs, I try to avoid arrogant or self-centered tendencies to the best of my ability. Part of the problem is that one often needs to be formal and forceful to come across as credible, and people observing formal and forceful outputs often judge that the source of that output must be arrogant, because in most cases, they are. But not always! Sometimes altruists need to masquerade aggressively in order to get their ideas across.)

#43 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 14 December 2003 - 12:49 AM

Jace, for a number of reasons, I disagree with what you are saying, but, if possible, I'd like to defer further discussion until you've read some of EY's work - that will put us on a bit more common ground. Even if you disagree with the ideas, you'll at least understand a bit more regarding what I'm talking about. ;)

I especially recommend this:

http://www.singinst....ro/smarter.html

and this: http://www.yudkowsky...ingularity.html

If you feel that Yudkowsky is too arrogant to read, then I guess you'll have to wait until I've polished and published more of my Singularity writings, right? I personally consider Eliezer's seeming arrogance a funny personality quirk - also, it's worth noting that his quote about "Us and Them" was predicated on the idea that if humans and AIs had a conflict, the AIs would be the morally correct ones, because he thought at that point that compassion and intelligence were necessarily linked. Eliezer once mentioned that having all your writings floating around the internet is like having your baby pictures permanently stapled to your forehead. His comments on the media are here: http://yudkowsky.net/eliezer.html. (He took down his bio page due to that silly Wired article, I believe.)

Anyway, I really enjoy conversing with the two of you, Kip and Jace; you both have 1) very high intelligence and 2) excellent communication skills. Both are somewhat rare, but when they come hand in hand, it's really great. Thanks again for sharing all your comments and opinions.

#44 John Doe

  • Guest
  • 291 posts
  • 0

Posted 14 December 2003 - 04:28 AM

I disagree that the (full) concept of recursive self-improvement is neither technical nor gleaned from cognitive and evolutionary psychology. In the broadest possible terms, yes, the definition of "recursive self-improvement" you give does apply. But understanding *why* we should expect recursive self-improvement to be so extremely powerful relative to other advances requires a technical knowledge of the comparative advantages and disadvantages of evolution, human brains, software programs, and young AIs.


You do not need to know anything about evolutionary/cognitive psychology in order to understand recursive self-improvement. Ask a person, "What if, every time you lifted a barbell, you increased the amount of weight you can lift and also the amount of muscle that grew per pound lifted?" "You'd be very strong very quick." If you apply the same method to intelligence, instead of strength, or anything else, the answer is the same.

For example, few people know that most cognitive programs must execute in 100 sequential steps, due to the limitations of neuron firing and information processing within the human brain, with huge implications for the advantages of engineered vs. evolved brains.  Heck, many people are still hung up on the Church-Turing thesis!


Again, you don't need to know any of this (how is the CT thesis, that every effective algorithm can be carried out by a Turing machine, relevant?) to understand recursive self-improvement. Nor is the possibility of natural selection evolving a recursively self-improving intelligence zero (indeed, this will soon be exactly what natural selection has done).

#45 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 14 December 2003 - 02:25 PM

I agree with you John Doe. There doesn't seem to be anything mystifying about recursive self-improvement. It is very easy to understand.

How to achieve/create/stimulate recursive self-improvement within a human or computer brain is a more difficult question.

#46 randolfe

  • Guest
  • 439 posts
  • -1
  • Location:New York City/ Hoboken, N.J.

Posted 18 December 2003 - 07:07 AM

Reason, what on this thread, exactly, reinforces your mentioned perception? Randolfe stating that the Singularity is nonsense because AIs will always adhere roughly to human goals, like hammers and nails, no matter how intelligent they become, and this will happen automatically?


I didn't say AIs would adhere to human goals. I said I didn't believe that AIs would "adopt human emotions and human goals". If AIs came into existence, why do you think they would have the same lust for power, etc. that we humans have?

#47 macdog

  • Guest
  • 137 posts
  • 0

Posted 02 April 2004 - 05:50 AM

One part of this debate that is being avoided here is that there is going to be a HUGE amount of money going into longevity treatments by the aging boomers. That's where the real effect of generations will come in. I remember reading about nanotech in the late 80's, where people were saying that by now we'd have houses with rooms that would collapse when they were no longer occupied. Didn't anybody else hear about the miserable failure of DARPA's independent robot race? Most of them went less than 6 feet! Or does anybody remember Creatures from ~1997? Those cute little Norns and their Hebbian-weight neural networks were supposed to make them the pet of the future. Mine couldn't even feed themselves without prompting and none of them would go to sleep until they were so fatigued they just died. There were three or so versions of the game and then the company went under. I think it's a lot easier to expect that some really cool computer is going to make you immortal, than that you need to exercise, eat right, and take very specific dosages of supplements. I'm sorry, but if you spend all your time wiring together PCs in the basement to make a neural net and chowing pizza, you're more likely to die of a heart attack at 45 than hit the Singularity.

#48 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 02 April 2004 - 04:34 PM

How to achieve/create/stimulate recursive self-improvement within a human or computer brain is a more difficult question.


General intelligence + knowledge of its own hardware + some knowledge of its thinking processes + ultrafast thinking + the rest of the AI advantage should be far more than enough. That is the point that http://www.singinst....OGI/seedAI.html argues very convincingly. The bulk of the task is basically creating some form of general intelligence, which is not an easy one, but once you've achieved it, a Singularity is practically inevitable.

I didn't say AIs would adhere to human goals. I said I didn't believe that AIs would "adopt human emotions and human goals". If AIs came into existence, why do you think they would have the same lust for power, etc. that we humans have?


Well, I mean, they could if we duplicated all the specific complexity underlying the human lust for power, or built in some context-insensitive power-seeking "instinct". Both of these tasks would be difficult, not to mention unwise. So I doubt they would happen. You may be agreeing with me here. But something you said must have given me the impression that you felt AIs would converge towards certain human qualities - what do you think it was? Whatever it is, I'm pretty sure that http://www.singinst....FAI/anthro.html would cover it.

I think it's a lot easier to expect that some really cool computer is going to make you immortal, than that you need to exercise, eat right, and take very specific dosages of supplements.


No, building some "really cool computer" (a seed AI specifically) is a lot harder than exercising, eating right, and taking specific dosages of supplements. Being the *first* to build one is likely to require ample knowledge and innate intelligence, of a level significantly beyond practically anyone on these forums. Also, if you don't build your really cool computer *right*, it will simply proceed to acquire nanotechnology and implement its goals, which could easily entail the destruction of humanity through the mass-rearrangement of local matter. The complexity underlying compassion, understanding, benevolence, and plain old common sense will need to be duplicated deliberately in AIs; we can't expect these qualities to pop up on their own. If the first AI *does not* possess them, then the consequences could be really bad, like immediate species death. (See the "Impact of the Singularity" thread.)

As for Singularity timeframes, here are the usual links:

http://www.nickbostr...telligence.html
http://www.accelerat...encehowsoon.htm

See also "What is the Singularity?" if you aren't really familiar with the underlying reasoning:

http://www.singinst....ingularity.html

#49 Omnido

  • Guest
  • 194 posts
  • 2

Posted 18 April 2004 - 10:50 PM

Being the *first* to build one is likely to require ample knowledge and innate intelligence, of a level significantly beyond practically anyone on these forums. 


I wouldn't go that far, Michael...
There are probably a significant number of individuals who are quite capable of accomplishing said event, although I would wager that the majority of them (if not all) are against the core idea behind the singularity and FAI itself.

I place it within my own capabilities to accomplish such an event with simplistic use of semantic logic and modular programming, as well as models for multiple avenues of what one could call "thought" within a machine. It would be a difficult task, no doubt. However, I have yet to feel so inclined to support the singularity theory or FAI for reasons that I have clearly stated.
Or perhaps not so clearly... [glasses]
This is going on the second year now, and I am still strongly opposed to the implementation of the aforementioned Singularity as well as the FAI, as it has been described, labeled, and heretofore defined.



