  LongeCity
              Advocacy & Research for Unlimited Lifespans





Humanity needs to invent general AI to survive.

general ai government

10 replies to this topic

#1 jroseland

  • Guest
  • 1,117 posts
  • 162
  • Location:Europe

Posted 30 October 2016 - 01:20 PM


The philosopher Sam Harris is convinced that general AI is the greatest threat humanity faces. I disagree.
Humanity needs to invent general AI to survive.

 

Despite building the most compassionate and prosperous society the world has ever seen, with technology barely distinguishable from magic, the governments of the world are on the verge of suicidal wars.

In recent history, governments around the world have killed more than 250,000,000 of their own citizens. It reveals a great irrational faith in government to consider general AI the principal threat facing our species.

There’s a good chance that the digital gods we build will be benign helpers of humanity, and there is also a chance that they may snuff us out. But there is a certainty that the evil of government, multiplied by the irrationality, out-group preference and impulsiveness of people, will turn this planet into a radioactive wasteland.

 

Technology is rapidly replacing our jobs, a disturbing trend that foreshadows great social upheaval and violence. I can only hope that general AI will first replace the jobs of utterly ineffective government bureaucrats and paper pushers, who in their organized irresponsibility deprive the people of freedom. The waste, fraud and abuse of government could be replaced by elegant algorithms that efficiently solve problems instead of perpetuating them.

 

Sam hypothesizes that general AI will squash us like a human stepping on a bug that crosses his path. What’s not hypothetical is that, even now, the boot of government power stomps on the face of human liberty.

 

Despite having the accumulated knowledge of history, economics and philosophy freely available in our pockets everywhere we go, the electorate still votes based on the pettiest scandals, visceral emotions and base human biases. General AI could fairly restructure democracy to favor intelligent, philosophically robust policies and politicians, and it could eventually replace government entirely, the way democracy replaced aristocracy throughout the world.

As a government, AI would likely not be a central planner; it would interpret all the data provided by the free market and make decisions unbiased by emotion, ego, nepotism or political correctness.

 

There’s a chance that the infinite intelligence we can divine from 1s and 0s can wrest power away from the homicidal institution of government for good, and that’s worth taking a chance on!

 


  • like x 1

#2 Danail Bulgaria

  • Guest
  • 2,212 posts
  • 421
  • Location:Bulgaria

Posted 30 October 2016 - 02:35 PM

How about if the AI becomes a better and cheaper worker than people, invades all professions, and people suddenly become useless to their governments?

 

What do you think happens to people when they are no longer needed? What will happen to you when your own government decides that it no longer needs you?




#3 A941

  • Guest
  • 1,027 posts
  • 51
  • Location:Austria

Posted 03 November 2016 - 03:39 AM

How about if the AI becomes a better and cheaper worker than people, invades all professions, and people suddenly become useless to their governments?

 

What do you think happens to people when they are no longer needed? What will happen to you, when your own government decides that it no longer needs you?

 

We may lose our jobs and be left as mere subjects, or worse.

But I don't think it will happen this way.



#4 Danail Bulgaria

  • Guest
  • 2,212 posts
  • 421
  • Location:Bulgaria

Posted 03 November 2016 - 06:06 AM

Let's hope I am wrong. How do you imagine it?



#5 Wuvit

  • Guest
  • 7 posts
  • 2
  • Location:United States

Posted 03 November 2016 - 02:08 PM

I would argue that both are possible, and the real question might be what actions we should take to make sure AI goes in the good direction.

 

AI will be out there, given that it's practically impossible to control every single curious developer in every part of the globe. One of the great benefits of having AI early would be its usefulness in working on the aging process, which is Ray Kurzweil's driving point. It's safe to say that I am split in my thoughts on it. I would like as many people as possible to benefit from anti-aging medicine, but I also don't have a plausible answer for how to prevent AI from falling into the wrong hands: a self-conscious machine thousands of times smarter than a human could become powerful over a very short period of time via the stock market or whatever method it finds more efficient. I guess my main concern is what goals people might program into the AI as its "final goal". If the final goal of the AI is good, then great, but I'm sure that if there were global access to it, someone out of the billions on the planet would modify the AI to have less than the best of intentions. And only one bad apple is really needed.

 

So I'm split here. I see the vast benefits, but then again, I believe we need a solution for controlling AI, which I haven't seen anyone put forward yet.



#6 jroseland

  • Topic Starter
  • Guest
  • 1,117 posts
  • 162
  • Location:Europe

Posted 24 December 2016 - 12:52 PM

Government not needing us is contrary to the definition of government. Government exists only because it taxes the citizenry.

How about if the AI becomes a better and cheaper worker than people, invades all professions, and people suddenly become useless to their governments?

 

What do you think happens to people when they are no longer needed? What will happen to you when your own government decides that it no longer needs you?

 


  • like x 1

#7 jroseland

  • Topic Starter
  • Guest
  • 1,117 posts
  • 162
  • Location:Europe

Posted 24 December 2016 - 12:57 PM

If you are even a little rational, you must see that governments, as they actually operate, are a much greater real threat to humanity than general AI, which is a totally hypothetical threat. Governments have killed millions (maybe billions) in recent history; has the rudimentary AI that's integrated into almost all areas of our lives and society ever killed anyone?

 

I say bring on the general AI.

I would argue that both are possible, and the real question might be what actions we should take to make sure AI goes in the good direction.

 

AI will be out there, given that it's practically impossible to control every single curious developer in every part of the globe. One of the great benefits of having AI early would be its usefulness in working on the aging process, which is Ray Kurzweil's driving point. It's safe to say that I am split in my thoughts on it. I would like as many people as possible to benefit from anti-aging medicine, but I also don't have a plausible answer for how to prevent AI from falling into the wrong hands: a self-conscious machine thousands of times smarter than a human could become powerful over a very short period of time via the stock market or whatever method it finds more efficient. I guess my main concern is what goals people might program into the AI as its "final goal". If the final goal of the AI is good, then great, but I'm sure that if there were global access to it, someone out of the billions on the planet would modify the AI to have less than the best of intentions. And only one bad apple is really needed.

 

So I'm split here. I see the vast benefits, but then again, I believe we need a solution for controlling AI, which I haven't seen anyone put forward yet.

 


  • like x 1

#8 Danail Bulgaria

  • Guest
  • 2,212 posts
  • 421
  • Location:Bulgaria

Posted 24 December 2016 - 01:17 PM

How about if the governments simply decide that they don't need to be governments anymore, at least not the way you define them? They no longer need your money; they simply want to have everything: the entire planet.


  • like x 1

#9 Duchykins

  • Guest
  • 1,415 posts
  • 72
  • Location:California

Posted 06 February 2017 - 08:05 PM

Humanity doesn't even need agriculture to "survive."  I don't like it when people play hyperbolic games with biological terms.


  • like x 1

#10 sensei

  • Guest
  • 929 posts
  • 115

Posted 19 December 2017 - 07:26 PM

If you are even a little rational, you must see that governments, as they actually operate, are a much greater real threat to humanity than general AI, which is a totally hypothetical threat. Governments have killed millions (maybe billions) in recent history; has the rudimentary AI that's integrated into almost all areas of our lives and society ever killed anyone?


AIDS, malaria, malnutrition, cholera and tuberculosis have killed more people in recent history than the combined governments of the world.


#11 sensei

  • Guest
  • 929 posts
  • 115

Posted 19 December 2017 - 07:31 PM

Government not needing us is contrary to the definition of government. Government exists only because it taxes the citizenry.



So wrong on so many levels.

1. Taxation is not part of the definition of government: "the governing body of a nation, state, or community."

2. There are seven US states that do not tax income.

3. There are governments that own all means of production and all property; there is nothing to tax.

I suggest you read some of the Culture novels by Iain Banks: AI and post-scarcity.

Edited by sensei, 19 December 2017 - 07:32 PM.





