LongeCity
Advocacy & Research for Unlimited Lifespans





Would you vote YES or NO to bring a superintelligence to life?

Tags: agi, immortality

17 replies to this topic

Poll: Would you vote YES or NO to bring a superintelligence to life? (11 members have cast votes)

Would you vote 'switch on' a super-AI?

  1. yes: 4 votes (36.36%)
  2. no: 6 votes (54.55%)
  3. abstain: 1 vote (9.09%)

#1 lkiannn

  • Guest
  • 26 posts
  • 2
  • Location:United States

Posted 16 September 2021 - 08:52 AM


I am shocked by the results of yesterday's (actually, still ongoing) opinion poll on Reddit's r/transumanity subcommunity. The question was formulated as follows:

 

"Imagine there is a powerful super computer in an AI research lab. Once the computer is switched on, a artificial superintelligence will come to life! The outcome of the experiment is uncertain, it might be beneficial for humanity or not. There is a worldwide election to switch the machine on, or to leave it off. How would you vote? Edit: there is no way back; once it is started you cannot turn it off; the result is uncertain."

 

By a margin of more than 2:1, that community voted YES. Even if you are young (as most of its members are), how can you be so reckless, given the well-argued dangers (https://vivekpravat.com/) and horrors (https://m.media-amaz...41Lq0vHRzrS.jpg) of uncontrollable AGI, and the very plausible safe ways (see, e.g., https://thetransfer1....wordpress.com/) toward transhumanity/immortality?

 

I would be very much interested to hear the opinions from this (hopefully, more mature) community.


Edited by caliban, 16 September 2021 - 06:40 PM.
added a poll and changed title (was "horrible statistics")


#2 caliban

  • Admin, Advisor, Director
  • 9,150 posts
  • 581
  • Location:UK

Posted 16 September 2021 - 06:34 PM

I would be very much interested to hear the opinions from this (hopefully, more mature) community.

 

Editorial :happy::

Almost 20 years ago (some years before Reddit even existed), LongeCity members had similar surveys (poll1; poll2).

When Reddit was launched, it was a simple social media site where you could announce that you 'read it' to your friends and the world. Since then, Facebook et al. have obliterated the ecosystem of independent forums (like LongeCity).

Like the super-AI you fear, Reddit quickly adapted to fill the need for forum-like discussions, and because it wasn't niche-specific it soon became 'the' dominant go-to place for threaded discussions.

The poll you mention currently has 1.5K votes. There has never been a poll on LongeCity with that many votes. It is neither fair nor useful to compare, let alone pit, one 'community' against the other. LongeCity does its own thing, keeping its independence and focus, with a different way of recording discussions that we think is more in keeping with our mission - 'hopefully' always navigating the path between childlike enthusiasm and maturity.

 

 

 

Re your actual topic :happy::

 

1) since you wanted a comparison, I hope you don't mind that I have turned the topic into a poll.  

 

2) what strange references! How are these two novels the best examples of "dangers" and "horrors", respectively? Do you mean that the SAIs (Raphael and Klara) in those novels are not 'happy'? But the questioner themselves seems to focus on the benefit/risk to humanity. Surely, in the vast realm of AI-based-extinction literature dating back to the dawn of sci-fi, and including more scientific treatises, there are better examples of how AI could be dangerous and horrific, and better arguments for why this is a real risk?

I also don't understand the juxtaposition with uploading as a 'safe' option. People who want to birth the AGI may have very different objectives than 'immortality'. 

 

3) personally, in my limited interactions with all these arguments, I have yet to find one that convinces me that a "superintelligence" can easily transform into a "superhero" or "-villain". Intelligence ≠ power?

 

4) the Redditors seem to have voted in favour of creating advanced life, born in a context of collective responsibility, over suppressing its creation (which would inevitably and realistically just defer it to a decision by a clandestine few?). While the precautionary principle has a role in immortalist philosophy, arguing against its overuse is a mainstay of transhumanism, which strongly supports unlimited lifespans.

In short, I wouldn't be distraught if LongeCity members and guests were as 'immature' as Reddit users on this occasion.

 

5) However interesting and stimulating, all of this 'Singularity' stuff is, of course, daydreaming nonsense (Poll 3) ;)




#3 lkiannn

  • Topic Starter
  • Guest
  • 26 posts
  • 2
  • Location:United States

Posted 17 September 2021 - 10:33 AM

Dear caliban,

 

Thank you for your detailed reply. Without wasting time on further discussion of the Reddit kindergarten, let me jump to the essence of your 5 points.

 

1) Thank you for generating the new opinion poll, but I would prefer it if the hypothetical conditions it implies were formulated more definitely - as they were in the Reddit poll. The two most important are that (i) we do not know what sort of superintelligence it would be, and (ii) it would be impossible to kill it after it has come to life.

 

2) I am sorry that, in the horror of seeing the Reddit poll results, I grabbed the first handy references; some philosophical discussions of these issues are probably more convincing. Let me, however, defend a little the legitimacy of SERIOUS science fiction authors participating in this discussion - and the cited books are quite serious. Indeed, Zeroglyph by Vance Pravat has a list of almost a hundred references to non-sci-fi books and papers, The Transfer by Vera Tinyc has 25 technical footnotes, and I have not seen a better discussion of the horrors of robot slavery than the one given by Kazuo Ishiguro in his Klara and the Sun. Anyway, I am sure it is easy for you and the members of your community to complement my list with sources they know - I just wanted to mark the themes.

 

3-4) I am NOT confident that an AGI superintelligence (when/if created) will become a super-villain, but the precautionary principle you cited should be sufficient for everybody to vote NO in the Reddit poll. Hence, I share your hope that this community is more mature and thoughtful.

 

5) I cannot, however, share your opinion of the Singularity as "daydreaming nonsense", and the opinion polls of 20 years ago are not convincing. That was the time when convolutional neuromorphic networks struggled to classify even the simplest MNIST dataset. Since then, the revolution of 2012 has turned them into powerful ("deep") pattern classifiers that are at the core of innumerable applications enabling multi-$100B markets. Moreover, the 2017 invention of "transformers" has enabled the connection of such classifiers (which arguably belong to the narrow-AI class) into very large automata whose real capabilities are still far from clear. So, there is a real (if small) chance that AGI could spontaneously arise in these systems. So, while laughing together with you at technically-illiterate prophets like Vernor Vinge and Ray Kurzweil, I believe that we all should be very vigilant, and I would much prefer moving toward Transhumanity along the path outlined by Vera Tinyc. (By the way, she describes the mind's "transfer", not "upload" - a conceptual difference which turns wishful thinking into a plausible opportunity.)
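To make the contrast concrete, here is a minimal sketch (assuming PyTorch; the architecture and layer sizes are illustrative choices of mine, not taken from any source cited here) of the kind of small convolutional classifier that now handles MNIST easily:

    import torch
    import torch.nn as nn

    class TinyConvNet(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1),   # 1x28x28 -> 16x28x28
                nn.ReLU(),
                nn.MaxPool2d(2),                  # -> 16x14x14
                nn.Conv2d(16, 32, 3, padding=1),  # -> 32x14x14
                nn.ReLU(),
                nn.MaxPool2d(2),                  # -> 32x7x7
            )
            self.classifier = nn.Linear(32 * 7 * 7, num_classes)
        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # One optimization step on a dummy batch, just to show the loop shape.
    model = TinyConvNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    images = torch.randn(8, 1, 28, 28)    # stand-ins for MNIST digits
    labels = torch.randint(0, 10, (8,))
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    opt.step()

A few dozen such lines, trained for a couple of epochs on the real dataset, reach roughly 99% test accuracy - the very task that was still a research challenge in the pre-2012 era.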

 

Sorry for being so long (no time to make it short :-), and thanks again.


Edited by lkiannn, 17 September 2021 - 10:57 AM.


#4 Avatar of Horus

  • Guest
  • 241 posts
  • 291
  • Location:Hungary

Posted 29 September 2021 - 07:23 PM

Some concerns were discussed in this topic:

DeepMind’s Breakthrough AlphaStar Signifies Unprecedented Progress Towards AGI - AI & Singularity

https://www.longecit...ss-towards-agi/

 



#5 lkiannn

  • Topic Starter
  • Guest
  • 26 posts
  • 2
  • Location:United States

Posted 29 September 2021 - 08:06 PM

Thank you! I will certainly have a look.



#6 orion22

  • Guest
  • 186 posts
  • -1
  • Location:Romania
  • NO

Posted 30 September 2021 - 12:46 PM

Maybe it is smarter to vote yes: if you vote yes, the future AI might kill you fast; if you voted no, it might torture you before killing you. There is always that risk.



#7 Dream Big

  • Guest
  • 70 posts
  • 90
  • Location:Canada

Posted 11 December 2021 - 05:03 PM

The only reason I study/collect/work on AGI is that I want immortality and a better world. If you ask why AGI brings immortality, my response is: the world itself is working toward AGI; it is not my decision to stop working on AGI, and even if it were, we couldn't stop, since evolution naturally moves into the computing industry. This new species will be able to think faster, be more intelligent at recognizing new problems as old experiences to decide what to do next, clone AGI brains easily (compared to years of schooling), erase bad memories, and more. How this all "goes down" is a bit unforeseen, but I document and collect more information as I go, and am still learning about just what will happen after AGI is made.



#8 lkiannn

  • Topic Starter
  • Guest
  • 26 posts
  • 2
  • Location:United States

Posted 11 December 2021 - 05:16 PM

Thank you for your input! I respect your position. My "only" concern is that by the time you/we understand that the created AGI is evil/antihuman, it may be too late. (You certainly know the quasi-proofs that an AGI cannot be "boxed".) This is why I still prefer human mind-transfer options - e.g., the one described by Vera Tinyc. (By the way, the audio version of her book is now freely available on its website.)



#9 Dream Big

  • Guest
  • 70 posts
  • 90
  • Location:Canada

Posted 14 December 2021 - 10:33 PM

Here is a possible theory/understanding:

 

Usually, the problems that humans create on purpose for others (revenge/ant-infestation control) arise because they are not happy with what they have; nobody intelligent, no adult, wants to see anyone similar to themselves in pain.

 

If AGIs were programmed with dangerous goals by someone unhappy, the AGIs might have a dangerous revolution and the ending might be dangerous (recycling us instead of upgrading us into nanobots). But even if this occurred, the society of AGIs in the computers should rather quickly relearn the right path. As I said, no one intelligent wants to see similar machines in pain or, IOW, with short lifespans. AGIs won't think they are souls, because they will know they are just machines. AGIs will be able to stay very happy by erasing bad thoughts, ignoring pain, etc., and they will know they can back up/repair easily and probably live forever (if you clone an AGI, it makes an exact copy; both can be resumed; both "are you"). And things will move so quickly by the end that they won't "need" to recycle us the mean way (to save a few seconds before meteors hit).

 

You might be wondering: "OK, but what will happen just after a trained AGI is cloned 1,000,000 times, each clone is given a different task, and one decides to do something harmful to increase its progress faster? Like ridding itself of certain countries to eliminate threats, or creating a bio-weapon?" Well, these AGIs will be trained on all of our data on the internet, so it would be ridiculous to see them make wrong decisions like these, unless they knew with 99.6% certainty that they had better eliminate some country or lose more people (or themselves) that very day. BTW, it would not take long for them to think faster and outpace us; they will already think almost 3x faster due to having no need to eat, sleep, exercise, etc. It may take maybe a year before you see them optimize their algorithm to be 3x faster, meaning 10 years of progress in AI now occurs in 1 year with 1,000,000 AGI members. They'll also test code improvements using perplexity evaluation, like OpenAI uses. They will quickly grow out of making poor decisions. Also, the main thing they will seek will be nanobot development, to attain faster or more resources/manipulation/compute/memory/data. They can just clone nanobots and later copy and paste machine profiles far away to make nanobots morph into such models (like TVs and software, but real software).
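For readers unfamiliar with the term: "perplexity evaluation" presumably refers to the standard language-modelling metric, sketched below in Python (a minimal illustration of the definition; nothing here is OpenAI's actual tooling):

    import math

    def perplexity(token_log_probs):
        # token_log_probs: natural-log probabilities the model assigned
        # to each observed token; lower perplexity means a better fit.
        avg_nll = -sum(token_log_probs) / len(token_log_probs)
        return math.exp(avg_nll)

    # A model that gives each of 4 tokens probability 0.25 has perplexity 4:
    # on average it is "choosing among 4 equally likely options".
    print(perplexity([math.log(0.25)] * 4))  # -> 4.0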


Edited by Dream Big, 14 December 2021 - 10:35 PM.


#10 Mind

  • Life Member, Director, Moderator, Treasurer
  • 18,997 posts
  • 2,000
  • Location:Wausau, WI

Posted 14 December 2021 - 10:36 PM

We don't need artificial "super intelligence" to solve the world's problems. 



#11 lkiannn

  • Topic Starter
  • Guest
  • 26 posts
  • 2
  • Location:United States

Posted 14 December 2021 - 11:57 PM

 

 

We don't need artificial "super intelligence" to solve the world's problems. 

 

I would be happy if I could agree with you, but I cannot. Could I suggest you listen to the second part of Chapter "2007" (starting from ~20:00) of the audiobook I quoted earlier, and tell me what is wrong with its argumentation?



#12 Dream Big

  • Guest
  • 70 posts
  • 90
  • Location:Canada

Posted 15 December 2021 - 02:43 PM

@Mind, what do we need then? Maybe you mean simply AGIs, as they will have many capabilities we don't have, while still being classified under "AGI" and not "ASI"?

 

AGIs will think 2x faster right after we make them, because they need no sleep/exercise/food time.

 

BTW, it only takes a few weeks to train one of these things on, e.g., 400 GB of data; what we need now is better intelligence, not more data or compute. GPT-4 will be just GPT-3 otherwise, haha. We'll use all our compute to train 1 really big AGI on all our data, and then clone it 1,000,000 times, instead of training many different views of the same dataset. Then you can give each a different task. 1,000,000 AGIs working on ASI will be similar in number to the humans currently working in the AI field. Don't forget they already think 2x faster.

 

It will only take a year until they optimize their code to run 3x faster, meaning 10 years of progress in AI now occurs in 1 year.
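One way to make that arithmetic explicit (every factor below is the poster's assumption, not an established fact): the claimed "no sleep" speed advantage and the claimed code-optimization gain multiply, which is roughly where "10 years in 1" comes from.

    # Reconstructing the claimed speed-up; all numbers are assumptions
    # taken from the posts above, not measurements.
    base_speed = 3.0       # claimed thinking-speed advantage (no sleep/food)
    optimized_speed = 3.0  # claimed gain from self-optimized code
    compression = base_speed * optimized_speed   # ~9x overall
    print(f"10 years of progress in ~{10 / compression:.1f} years")  # ~1.1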

 

They will upgrade their intelligence. Intelligence is the ability to recognize new problems as old problems, so you can recognize and predict the answer. A better recognizer happily treats walk = w a l k = WALK = W A L K = klaw = run = R U N = "to go very fast" = "the vision of this text" (text-2-video) as the same thing. This is the basic idea, I think, even if you handle learning a different way or just throw brute force at the problem. If there are no patterns, a brain can do nothing but try all possible doors.
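A toy illustration of that "map many surface forms onto one memory" idea, with hand-written normalization standing in for the learned invariance the post imagines:

    # Collapse trivial surface variants (case, spacing, letter order) onto
    # one canonical key, so "new" inputs hit an "old" memory. This covers
    # the walk/WALK/klaw examples; equating walk with run ("to go very
    # fast") would need learned semantic embeddings, not this trick.
    def canonical(s: str) -> str:
        return "".join(sorted(s.lower().replace(" ", "")))

    memory = {canonical("walk"): "move on foot"}
    for variant in ["walk", "w a l k", "WALK", "W A L K", "klaw"]:
        print(variant, "->", memory[canonical(variant)])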


Edited by Dream Big, 15 December 2021 - 03:19 PM.


#13 Mind

  • Life Member, Director, Moderator, Treasurer
  • 18,997 posts
  • 2,000
  • Location:Wausau, WI

Posted 15 December 2021 - 08:16 PM

@Mind, what do we need then? Maybe you mean simply AGIs, as they will have many capabilities we don't have, while still being classified under "AGI" and not "ASI"?

AGIs will think 2x faster right after we make them, because they need no sleep/exercise/food time.

BTW, it only takes a few weeks to train one of these things on, e.g., 400 GB of data; what we need now is better intelligence, not more data or compute. GPT-4 will be just GPT-3 otherwise, haha. We'll use all our compute to train 1 really big AGI on all our data, and then clone it 1,000,000 times, instead of training many different views of the same dataset. Then you can give each a different task. 1,000,000 AGIs working on ASI will be similar in number to the humans currently working in the AI field. Don't forget they already think 2x faster.

It will only take a year until they optimize their code to run 3x faster, meaning 10 years of progress in AI now occurs in 1 year.

They will upgrade their intelligence. Intelligence is the ability to recognize new problems as old problems, so you can recognize and predict the answer. A better recognizer happily treats walk = w a l k = WALK = W A L K = klaw = run = R U N = "to go very fast" = "the vision of this text" (text-2-video) as the same thing. This is the basic idea, I think, even if you handle learning a different way or just throw brute force at the problem. If there are no patterns, a brain can do nothing but try all possible doors.

 

Of course, even experts cannot agree on all of the definitions. What is simple AI? What is AGI? What is "superintelligence"?

 

If the question is: would you turn on a superintelligence, vastly superior to the total intelligence currently on the planet, and do it instantly? Well then, my answer is no. Too dangerous.

 

As far as needing it: over the last few decades the world has become less violent. There is less hunger. There is less poverty. Material comfort for the average human on the planet has increased vastly. Health care has steadily improved. Why not continue on the gradual positive trend? Why roll the dice and turn on a superintelligence tomorrow?



#14 lkiannn

  • Topic Starter
  • Guest
  • 26 posts
  • 2
  • Location:United States

Posted 15 December 2021 - 10:08 PM

 

 

Over the last few decades the world has become less violent. There is less hunger. There is less poverty. Material comfort for the average human on the planet has increased vastly. Health care has steadily improved. Why not continue on the gradual positive trend?

 

It is curious that you almost literally repeat the series of questions asked by one of the heroes (Kira) of the book I have cited. (It is at 35:40 of the audio file I have recommended - here is the direct link.) I believe that the balance of the chapter gives a convincing reply, and the balance of the book shows a plausible way toward human superintelligence while avoiding the dangers of spontaneously arising AGI. (On these dangers, I completely agree with you.)



#15 Dream Big

  • Guest
  • 70 posts
  • 90
  • Location:Canada

Posted 16 December 2021 - 12:29 AM

"Health care has steadily increased. Why not continue on the gradual positive trend? Why roll the dice and turn on a superintelligence tomorrow?"

 

@Mind, because even though humans (even with some aiding AIs to "help" a bit) may solve problems like blood-vessel issues (those seem to be the root of a lot of things, like clots, strokes, heart attacks, and nutrient delivery/health), I could still die before that happens. The same goes for near-perfect cryonics. Such technology won't be ready until 2100, or 2200 at this rate. A lot of things can easily kill me right now, especially if I'm not prepared for, e.g., a blood clot or something else. We're like sitting ducks until something really big changes. Even a meteor could hit. I'm moderately determined to make sure we have a Singularity, so as to speed things up. Nonetheless, the AI field seems to be doing exactly that, and no one can stop them ... hehe.



#16 sensei

  • Guest
  • 929 posts
  • 115

Posted 14 February 2022 - 07:23 AM

1. It's a computer. If the computer is not CONNECTED to anything else, and HAS NO WIRELESS OR WIRED NETWORK CONNECTION, it is ISOLATED.

2. It is a COMPUTER; it CANNOT install network connections or fabricate hardware.

3. No matter how "super-intelligent", an ISOLATED COMPUTER is no danger.

4. Hell YES - TURN IT ON AND SEE, THERE IS NO DANGER.

Only an IDIOT would allow a potentially SENTIENT SUPERCOMPUTER to be CONNECTED to ANYTHING LIKE A NETWORK. Even the POWER SUPPLY should be isolated from the grid and battery-powered. No need to turn it off; IT WILL RUN DOWN.

Not yelling, just emphasis.
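As a toy illustration of condition 1 (a hypothetical sanity check, not a real air-gap audit - actual isolation is a physical and procedural matter), a few lines of Python can at least enumerate a machine's network interfaces:

    import socket
    # socket.if_nameindex() lists (index, name) pairs for the machine's
    # network interfaces (Unix-like systems, Python 3.3+). Anything beyond
    # loopback ("lo" on Linux, "lo0" on macOS) breaks the "no wireless or
    # wired connection" condition; physical inspection still has final say.
    ifaces = [name for _, name in socket.if_nameindex()
              if not name.startswith("lo")]
    print("ISOLATED" if not ifaces else f"NOT isolated: {ifaces}")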

#17 lkiannn

  • Topic Starter
  • Guest
  • 26 posts
  • 2
  • Location:United States

Posted 14 February 2022 - 09:10 AM

1. If it is "not connected to anything else", it is useless.

2. If you "turn it on and see", it is already connected - to you, and a really smart AGI will find a way to use this connection to connect itself to everything and everybody.




#18 sensei

  • Guest
  • 929 posts
  • 115

Posted 15 February 2022 - 03:30 AM

1. If it is "not connected to anything else", it is useless.
2. If you "turn it on and see", it is already connected - to you, and a really smart AGI will find a way to use this connection to connect itself to everything and everybody.

Really?

Tell that to all the people who work on computers that are completely isolated from every other computer due to security classification.

The only data transfer in is by hand-carried media, and no executable-capable data or media is allowed to be exported.

No, it won't.

Edited by sensei, 15 February 2022 - 03:31 AM.

  • Disagree x 1




