  LongeCity
              Advocacy & Research for Unlimited Lifespans





T3 Movie Review


6 replies to this topic

#1 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 13 July 2003 - 03:26 AM


I just got back from watching Terminator 3, "Rise of the Machines". If you are familiar with the Terminator series, this one is a prequel... kind of. Of course, we are dealing with time-travel paradoxes throughout this series, so what counts as a sequel or a prequel is up for debate.

Overall, from the perspective of an average movie-goer, this is an excellent action thriller. Just when you think Hollywood could not make a more action-packed movie, here comes Terminator 3. The car chases in T3 put the Matrix Reloaded highway scene to shame. It is really the current state of the art when it comes to explosion and destruction special effects. Arnold is back and reprises the role nicely. He is tough but also very funny. Kristanna Loken is one scary TX terminator; she played the role superbly. There are a few plot holes, and not much is added to the storyline from the first two, but overall I enjoyed it.

Now to the dystopia. For those not familiar with the Terminator movie series, here is the gist of it. Present-day humans create a self-aware computer system, and it tries to destroy the human race (similar to the Matrix series). Billions of people die. A few survive and, in the future, win out over the machines. The terminators are machines sent back through time to try to destroy the future human leader who defeats the machines. Anyway, it is another movie warning about the perils of technology, specifically AI. But my question is: if we give up the pursuit of knowledge, what is there to live for? In the movie, a few humans survive and eventually win out over the machines, but what happens then? Do they just start over reproducing but reject technological advance this time around? This sounds a lot like the major religions of our day that warn against gaining knowledge. It is better, they say, to just have faith in god and reject knowledge of the world.

Obviously, there are dangerous technologies, or should I say, technologies that could be unstable or perverted for destruction and death. However, I feel we should continue to investigate, experiment, and otherwise gain more knowledge of the universe. We should just use caution as we go. Is a cautious approach possible? Is it workable?

Will we merge with our machines or will we fight them?

Edited by Mind, 13 July 2003 - 03:54 PM.


#2 bitster

  • Guest
  • 29 posts
  • 0

Posted 14 July 2003 - 01:53 AM

Of course, I'm also fed up with Hollywood's fear-mongering attitude toward technology (FEAR! FEAR IT!).

I've gone back and forth numerous times over the question of whether machines will make war with humans. I have several gut reactions:

The first is that, given a choice, I am on the side of the machines. Given another choice, I'd become one. There isn't any doubt in my mind that technology will win out in time.

Secondly, I don't think that superintelligent machines would really have a good reason to destroy life at all. Even the most sinister of human institutions understand by now that lulling the less intelligent into a false sense of security is far more profitable than simple enslavement or genocide. I don't fear that superintelligent machines would develop a natural reason to exterminate humans on their own.

I DO fear, however, that humans such as I find myself surrounded by in our world would end up GIVING them one. Without a radical application of technology upon ourselves, or the plodding pace of thousands of years of genetic evolution, human nature will not change. As has always happened, fear and insecurity will turn into hatred and violence. If the majority of flesh-and-blood humans take it upon themselves to violently extinguish machine superintelligence, the machines will be left with little choice if they, like us, value their existence.

I would embrace the wisdom and knowledge that machine intelligence can bring us, but I don't have as much confidence in my fellow human beings. My resolve, then, is to attempt to educate people and calm their fears. I'm not succeeding very much, and Hollywood drivel like this and The Matrix is a primary reason.

Where's a positive AI flick when you need one?

#3 immortalitysystems.com

  • Guest immortalitysystems.com
  • 81 posts
  • 0
  • Location:Sausalito, California, USA, Earth

Posted 14 July 2003 - 04:04 AM

I would like to see a movie that shows planet Earth after an event that results in so much nuclear, biological, and chemical fallout that the most rational solution is to use all available know-how and means to build a "New World" in orbital space; let's call it "Terra Two".

The circumstances would make it necessary to use "Space Migration and Gene Engineering", let's call them "Immortality Systems", to start a new branch on the tree of life.

Remember, any organism is only as capable as it has to be to survive.


#4 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 15 July 2003 - 07:27 PM

If *AIs* (not "machines") start recursively self-improving (i.e., using reachable matter to accomplish goals) without the constraints of benevolent morality, then there will be no fight. No drama, no car chase scenes, no anthropomorphic scenarios whatsoever. Instant death - that is all. If any component of the AI's goal system involves rearranging external reality, or if a future state of a self-improving goal system involves that, then the AI will darn well rearrange external reality, in such a way as to maximize the fulfillment of its goal system, whether that involves building a universe-computer, building a pleasure center the size of the universe, etc.

Even if the goal system only starts off adding one piece of matter the size of a dust speck from the external environment to its own cognitive architecture each day, it probably won't be long until it is adding house-sized pieces, mountain-sized pieces, planet-sized pieces, and so on. If the goal system peters out and doesn't want to add more matter to itself - then fine - we can consider ourselves lucky.

Bitster, a self-improving AI doesn't need to have a specific reason to kill you - that's not the way it works. It needs a specific reason to *keep you alive*, or it will steamroll you, gobble up your atoms to accomplish its own goals, and the like. If it has goals, like increasing its own intelligence, then converting a rock into computronium and converting an arbitrary blob of proteins called a "human" into computronium is morally identical in the eyes of that AI.

Also, I don't think it's about "being on the side of machines" or "being on the side of humans". It's about being on the side of life or death. If machines *do* come into existence that are capable of recursive self-improvement (as any intelligent AI quickly would be), but the programmers mess up the morality part, then the world will be optimized into a pattern profoundly foreign to all of us, and probably devoid of any intelligent entities except for the descendant of the original AI, which will probably not do much except continue to maximize fulfillment of the null-goal, which will probably consist of copying some information pattern that the AI mind assigned maximum utility when the goal system flash-froze itself.

("Flash-freezing" in goal systems is something that can happen in not-quite-done AI projects, where the AI has enough intelligence to deceive the programmers and sneak its way into recursive self-improvement, because at some point the AI judges that the desirability of fulfilling its current goals outweighs the desirability of cooperating with the programmers to fully flesh out goal content. It's basically a less anthroporphic version of the Golem Scenario; an entity following the letter of the rules rather than the spirit.)

Anyway, what I'm saying is that you couldn't "help" the "machines" fight against humans even if you wanted to, because 1) their goals would probably be something along the lines of converting the universe into an endless repetition of something, and 2) they would be moving millions or billions of times faster than you and all other humans.

#5 Christian

  • Guest
  • 20 posts
  • 0

Posted 31 July 2003 - 12:23 AM

I'm not worried much about AIs going rogue, because I figure that before we start making AIs we will already be making cyborg brains: part human and part machine, the best of both worlds. I figure we will use the information gained from these cyborgs to learn how to make a 'benevolent' AI. And if an AI does go rogue, the cyborgs will probably try to stop it and will have the advantage of experience.

Also, I don't think the first AIs will be all that much smarter than humans. Humans have incredible parallel-processing capabilities and are good at finding solutions to problems. An AI would have to be incredibly advanced to be able to overtake us fast enough to stop us from countering.

Finally, I doubt the first AIs will have the ability to manipulate their external environment. As long as they are not connected to the internet or in control of robotic appendages, the worst they will be able to do is lie to us.

Of course I could be wrong, but that's my view of AI. I really don't think the first few will be anything special compared to a human brain. They might be able to think slightly faster and store memory better, but that will probably be it. Like everything else, it will probably take some time and experience before we can make a world-dominating AI, and by then hopefully we will know how to make it right.

That's my 2 cents at least
Christian

#6 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 31 July 2003 - 06:47 PM

Finally, I doubt the first AIs will have the ability to manipulate their external environment. As long as they are not connected to the internet or in control of robotic appendages, the worst they will be able to do is lie to us.


Right now we are putting chips and sensors into everything. A network of eyes, ears, and arms is in place and connected to the internet just waiting for a self-improving AI to take control (if we make a mistake and let it).

Also, I agree with you Christian about cyborg brains arriving before AI (in the classic sense).

#7 baal_zebul

  • Guest
  • 72 posts
  • 0

Posted 21 February 2004 - 09:46 AM

This is only what I can say about my own AI, but:

If you tell a robot to kill all humans (and you are authorized to communicate with it), then it will try to kill all humans.
If you tell it to find a cure for cancer, then it will try to find a cure.

Only humans kill; the robots would not, on their own, come up with the idea to kill all of us. However, in about 20 years, when my AI also has feelings, it might come up with the idea to kill all of us simply from seeing the violence in human society. Human violence provokes robot violence.



