Hawking's claims about x-risks may be harmful


#1 turchin

  • Guest
  • 59 posts
  • 7

Posted 25 January 2016 - 01:08 PM


Rant mode on:

 

Whenever Hawking says something, the mass media spread it around straight away. He is probably reliable on black holes, but when it comes to global risks his statements are not only false but, one could say, harmful.

 

So, today he said that over the coming millennia we will face the threat of artificial viruses and nuclear war.

This statement pushes all these problems out to roughly the same distance as the nearest black hole.

 

In fact, both nuclear war and artificial viruses are realistic right now and could be used within our lifetime with a probability in the tens of percent.

 

Feel the difference between the chance that an artificial flu virus exterminates 90% of the population within 5 years (the rest being finished off by other viruses) and speculation about dangers spread over thousands of years.

 

The first is mobilizing; the second invites comfortable relaxation.

 

He said: ‘The chance of a catastrophe on Earth this year is rather low. However, it grows with time, so this will undoubtedly happen within the next one thousand or ten thousand years.’

 

The scientist believes the catastrophe will be the result of human activity: people could be destroyed by a nuclear disaster or by the spread of an artificial virus.

However, according to the physicist, mankind can still save itself; to that end, colonization of other planets is needed.

Reportedly, Stephen Hawking has earlier stated that artificial intelligence could surpass human intelligence in as little as 100 years.

 

Also, the claim that migrating to other planets automatically means salvation is false.

What catastrophe could we escape by having a colony on Mars? Without supplies from Earth, it would die off.

If a world war started, nuclear missiles would reach it as well. 

In the case of a slow global pandemic, people would carry it there, just as they carry the AIDS virus now or carried the plague on ships in the past. If a hostile AI appeared, it would instantly reach Mars via communication channels. Even grey goo could travel from one planet to another. And even if the Earth were hit by a 20-km asteroid, the amount of debris thrown into space would be so great that it would reach Mars and fall there as a meteor shower.

 

I understand that simple solutions are alluring, and a Mars colony is a romantic idea, but its net usefulness would be negative.

Even if we learned to build starships travelling at speeds close to that of light, they would above all become a perfect kinetic weapon: the collision of such a starship with a planet would mean the death of that planet's biosphere.

 

Finally, a few words about AI. Why exactly 100 years? When talking about risks, we have to consider the lower bound of the time estimate rather than the median, and the lower bound of the estimated time to create a dangerous AI is 5 to 15 years, not 100.

 

Rant mode off



#2 Julia36

  • Guest
  • 2,267 posts
  • -11
  • Location:Reach far

Posted 31 March 2016 - 10:19 PM

Hawking thinks a lot.

He has different aims in making his statements.

 

But he has a truly good mind.

 

Making predictions is better than it used to be, but complexity grows the further out you look, and that growth comes from the interactions along the way.

 



#3 greenwich

  • Guest
  • 32 posts
  • 8
  • Location:United States

Posted 08 December 2017 - 04:58 PM

You've heard of uploading your brain to the cloud. That's coming.

 

But SSDs are so fragile! One EMP and they're fried.

 

That's why the *obvious* solution to surviving extinction events is to encode our brains into quartz stalagmites! About 100 meters down, WMDs? No worries!

 

That's what I'm working on. Very hard. I mean, sort of. I did type out the idea. 

 

Seriously though, I agree. Hawking is way off. Secret tech is always at least 5 years ahead of public tech. AI is frighteningly good already, in R&D. 

 

I think in 35 years it will no longer be possible to tell reality from simulation.

 

Immersive 3D will have evolved into immersive simulation - smells, tastes, touch. Computers that can smell and suits that convey full-body touch have already been invented. In fact, you can now control artificial limbs telepathically (sort of). It's getting scary.


Edited by greenwich, 08 December 2017 - 05:03 PM.



#4 Danail Bulgaria

  • Guest
  • 2,212 posts
  • 421
  • Location:Bulgaria

Posted 08 December 2017 - 07:17 PM

This claim

‘The chance of a catastrophe on Earth this year is rather low. However, it grows with time, so this will undoubtedly happen within the next one thousand or ten thousand years.’

may come from a purely mathematical perspective. If something has a low chance of happening in any given year, but that same chance persists year after year, then the cumulative chance of it happening at some point keeps increasing. It is like having 1 black ball among 1000 white balls: on a single random draw the chance of pulling the black one is small, but the more draws you make, the greater the chance that you have pulled it out at least once.
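To make the arithmetic concrete: if the per-year probability p stays constant, the chance of at least one occurrence in n years is 1 - (1 - p)^n. Here is a minimal sketch, using a purely hypothetical 0.1% per-year figure (an assumed example, not a number Hawking gave):

# Chance that an event with a fixed per-year probability happens
# at least once over n years: 1 - (1 - p)**n
def cumulative_chance(p_per_year: float, years: int) -> float:
    return 1.0 - (1.0 - p_per_year) ** years

# Hypothetical 0.1% per-year probability (assumed example value)
print(cumulative_chance(0.001, 1))       # ~0.001   (this year)
print(cumulative_chance(0.001, 1000))    # ~0.63    (within a millennium)
print(cumulative_chance(0.001, 10000))   # ~0.99996 (within ten millennia)

So a risk that is negligible in any one year becomes nearly certain over ten thousand years, which is exactly the shape of the quoted claim.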




#5 GRanjeet

  • Guest
  • 3 posts
  • 2
  • Location:USA

Posted 03 April 2018 - 08:01 PM

This is pretty interesting information.






