  LongeCity
              Advocacy & Research for Unlimited Lifespans





Consequences of Singularity


7 replies to this topic

#1 Reno

  • Guest
  • 584 posts
  • 37
  • Location:Somewhere

Posted 20 November 2009 - 09:41 AM


When we reach the point where human thought is obsolete, what will become of the purpose of human beings? It seems to me that if all of what we are, think, and can achieve in a lifetime becomes something that can be simulated on a chip the size of a button, human life will no longer be original. We will no longer have purpose. Am I wrong?

#2 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 20 November 2009 - 02:01 PM

We will merge with machines and become as smart as they are. Some time after the singularity there will be no difference between AIs that were born as machines and humans who were enhanced by merging with machines.


#3 Reno

  • Topic Starter
  • Guest
  • 584 posts
  • 37
  • Location:Somewhere

Posted 20 November 2009 - 09:28 PM

If we reach the point where we're so smart that we can't see the wrong answer, wouldn't that take away free choice? We would be making the right choice all the time.

#4 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 20 November 2009 - 11:02 PM

Reno wrote:
If we reach the point where we're so smart that we can't see the wrong answer, wouldn't that take away free choice? We would be making the right choice all the time.


Why would you have to worry over everything? Never making the wrong choice wouldn't bother me in the slightest, if it was the fruit of me being smart.

#5 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 20 November 2009 - 11:17 PM

Reno wrote:
If we reach the point where we're so smart that we can't see the wrong answer, wouldn't that take away free choice? We would be making the right choice all the time.


forever freedom wrote:
Why would you have to worry over everything? Never making the wrong choice wouldn't bother me in the slightest, if it was the fruit of me being smart.

Yes, that does seem absurd :~

#6 Teixeira

  • Guest
  • 143 posts
  • -1

Posted 04 December 2009 - 12:43 PM

Reno wrote:
When we reach the point where human thought is obsolete, what will become of the purpose of human beings? It seems to me that if all of what we are, think, and can achieve in a lifetime becomes something that can be simulated on a chip the size of a button, human life will no longer be original. We will no longer have purpose. Am I wrong?


Yes, you are. You are not considering the capacity of the human brain and body to exceed its own abilities through, say, a singularity process.
We cannot assume that computers will improve all the time while human nature stands still. Besides, there is a serious problem of ontology: we and computers are dramatically different things!

#7 harris13.3

  • Guest
  • 87 posts
  • 6

Posted 04 December 2009 - 01:14 PM

forever freedom wrote:
We will merge with machines and become as smart as they are.


Agreed, but those who merge could be a minority, since most people who follow traditional ways will likely not accept such practices, especially among the religious in the United States. Furthermore, it could take some time for those technologies to reach the poorer countries of the world. Gamers, computer programmers and engineers with money to spend who want to be at the forefront of cutting-edge technology, as well as transhumanists, may be the first to take advantage. Among the general population, my guess is that the merging of humans and machines will happen in small incremental steps, out of necessity (i.e., to treat cognitive decline), since a majority will no doubt be distressed at the idea of suddenly becoming a cyborg.

Edited by Condraz23, 04 December 2009 - 01:15 PM.



#8 Vgamer1

  • Guest, F@H
  • 763 posts
  • 39
  • Location:Los Angeles

Posted 04 December 2009 - 02:49 PM

Yes, at first it would be incremental. Some people may never adopt the technology (such as the religious). That's not really our concern though. I'm sure there will be people at all levels of the spectrum. Normal humans, Methuselahs, cyborgs, uploads, etc. I have no problem with people who would choose to not enhance themselves.

Reno wrote:
If we reach the point where we're so smart that we can't see the wrong answer, wouldn't that take away free choice? We would be making the right choice all the time.


Well, free choice is only an illusion as it stands now, although I know what you mean. The illusion is pretty nice to have. I think there's a point at which I would want to stop (or dramatically slow) my enhancement, to avoid the risk of becoming too smart for my own good. Let's say there's a program I could download into my mind that would cause me to never make an error in thought again (if that's even possible). I really can't be sure what kind of conclusions I would draw. For example, if I deduced that all life is without meaning, I might be inclined to just kill myself right then and there, and the current me definitely doesn't want that.



