Are We Giving Robots Too Much Power?



#31 Sol Invictus

  • Guest
  • 6 posts
  • 0

Posted 09 July 2008 - 01:11 AM

Nefastor, I'm still very skeptical of AI. I'm sure AI will become quite advanced over the next decades, but when it comes to mimicking the human mind, it still seems so implausible. What I mean is their pursuit of their own interests, interests not programmed into them. I'm sure we can design algorithms where they'll pursue certain interests based on probabilities, which themselves would be based on preset data (which one could call hereditary) and the data they interpret from their given subjective reality. The next level, however, is the one represented in the movie "AI: Artificial Intelligence" by that little boy: the pursuit of a dream and underlying meanings, or more importantly imagination. I suppose one could call this the "Ghost in the Shell". Until proven otherwise, I will always see AI as either the implementation of expert systems, or merely algorithms designed to reflect the human psyche on a surface level. But that is just it, to me: they would never pursue their own interests unless programmed in a certain manner, and that persona itself would be programmed.

With that said, I must say I'm not very knowledgeable in AI. I have yet to take my first course in it, but I have developed my own programs and spend a lot of time developing algorithms in my head to see how they could be implemented :) . So any insights would be greatly appreciated.

My belief, from the research I've done, is that true general intelligence is actually a universal translation algorithm: an underlying self-organizing, swarm-intelligence-style evolutionary algorithm that crafts algorithms to recognize patterns in visual data, sound data, etc. What people are actually simulating, in many cases, is the end result of the work done by that underlying algorithm: translating from one domain into another, from what is seen, to what is heard, to what is felt (digital signals, or spikes coming from sensory neurons), to what is thought, to what is done, that is, actions.

Each chunk of neural tissue has to handle the patterns generated by other chunks of neural tissue and yield an appropriate response. Of course, the whole collection of tissue, and what is accomplished in a human brain as a whole, transcends the mere implementation of general-intelligence code: there's a lot of machinery specifically custom-made by evolution for the goal of survival and reproduction, which leads to goal-oriented behavior in response to signals.

Code that can write code, prove theorems, etc.; code that permits the evolution of memes, or ideas, and is implemented in the rules that the cells (machines) that compose the brain follow. Variations of this are implemented throughout the brain, with lots of added customizations to get emotions (e.g. shortcuts from primary sensory areas to the amygdala) and biological drives into the system.
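To make the idea of an underlying algorithm that "crafts algorithms to recognize patterns" concrete, here is a minimal sketch of an evolutionary loop that breeds simple bit-mask detectors against labelled patterns. This is only an illustration of the general technique, not Sol Invictus's actual design; the data, mask representation, and parameters are all invented.

import random

# Toy evolutionary search that "crafts" pattern recognizers. Each recognizer
# is a bit mask; its fitness is how well firing on the masked bits predicts
# the label of a training pattern. All names and numbers are hypothetical.

PATTERN_LEN = 8

def make_training_data(n=200):
    # Patterns whose first three bits are 1 are labelled "interesting".
    data = []
    for _ in range(n):
        bits = [random.randint(0, 1) for _ in range(PATTERN_LEN)]
        label = 1 if bits[0] and bits[1] and bits[2] else 0
        data.append((bits, label))
    return data

def fitness(mask, data):
    # A recognizer "fires" when every bit selected by the mask is set.
    correct = 0
    for bits, label in data:
        fires = all(b for b, m in zip(bits, mask) if m) if any(mask) else False
        correct += int(fires == bool(label))
    return correct / len(data)

def mutate(mask, rate=0.2):
    return [1 - m if random.random() < rate else m for m in mask]

def evolve(generations=50, pop_size=30):
    data = make_training_data()
    pop = [[random.randint(0, 1) for _ in range(PATTERN_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, data), reverse=True)
        survivors = pop[: pop_size // 2]                     # competitive survival
        children = [mutate(random.choice(survivors)) for _ in survivors]
        pop = survivors + children                           # next generation
    pop.sort(key=lambda m: fitness(m, data), reverse=True)
    return pop[0], fitness(pop[0], data)

if __name__ == "__main__":
    best, score = evolve()
    print("best detector mask:", best, "accuracy:", round(score, 2))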

#32 nefastor

  • Guest
  • 304 posts
  • 0
  • Location:France

Posted 11 July 2008 - 10:51 PM

Sol Invictus said:

My belief, from the research I've done, is that true general intelligence is actually a universal translation algorithm: an underlying self-organizing, swarm-intelligence-style evolutionary algorithm that crafts algorithms to recognize patterns in visual data, sound data, etc. What people are actually simulating, in many cases, is the end result of the work done by that underlying algorithm: translating from one domain into another, from what is seen, to what is heard, to what is felt (digital signals, or spikes coming from sensory neurons), to what is thought, to what is done, that is, actions.

Nice collection of needlessly complicated terms: I think anyone who's had so much as a passing interest in AI already knows that neurons are basic conversion operators. As a matter of fact, the neural-network term for the core operation performed by a neuron is TRANSFER FUNCTION.
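For readers who haven't met the term, a transfer (activation) function is just the squashing applied to a neuron's weighted input sum. A minimal sketch, with made-up weights and inputs and the logistic sigmoid chosen arbitrarily:

import math

def neuron_output(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs passed through a
    transfer (activation) function, here the logistic sigmoid."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Example with invented values: three inputs, three weights, one bias.
print(neuron_output([0.5, 0.1, 0.9], [0.4, -0.6, 0.2], bias=0.1))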

And seriously, what's all that about a "swarm-intelligence-evolutionary algorithm"? :) Did you mean cellular automata and positive reinforcement? Sure sounds less "out there", but those are the actual terms for what you seem to be describing.
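For reference, a cellular automaton in the sense invoked here is just a grid of cells updated by a local rule. A minimal sketch using Wolfram's elementary rule 110 (the rule number, grid size, and display are arbitrary choices for illustration):

def step(cells, rule=110):
    """Advance a 1-D binary cellular automaton one generation.
    Each cell's next state depends only on itself and its two neighbours."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right
        nxt.append((rule >> index) & 1)
    return nxt

# Start from a single live cell and print a few generations.
cells = [0] * 31
cells[15] = 1
for _ in range(10):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)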

Sol Invictus said:

Of course the whole collection of tissue, and what is accomplished in a human brain as a whole, transcends the mere implementation of general intelligence code; there's lots of stuff specifically custom-made by evolution for the goal of survival and reproduction (...)

There is such a thing as "general intelligence code"??? :) Links, please?

Oh, and in fact ALL of the "stuff specifically custom-made by evolution" is actually geared towards reproduction and survival (in THIS order), because life doesn't have any other goal than propagation, not even the survival of the individual beyond mating (ask the praying mantis).

Sol Invictus said:

Code that can write code, prove theorems, etc.; code that permits the evolution of memes, or ideas, and is implemented in the rules that the cells (machines) that compose the brain follow. Variations of this are implemented throughout the brain, with lots of added customizations to get emotions (e.g. shortcuts from primary sensory areas to the amygdala) and biological drives into the system.

Now you've lost me... code to write code is implemented in the cells that compose the brain? :p Variations of this code, with "lots of customizations", provide emotions and biological drives? :p

To me this sounds like you're approaching intelligence from the top down, instead of from the bottom up, as you should. You seem to propose that emotion is a product of intelligence when it's actually the opposite (fear leads to bravery, which leads to experimentation, which leads to either death or discovery, both of which lead to understanding, which leads to applications; think of fire, for instance).

I don't mean to insult you, and please forgive me if I do, but I've seen thinking like yours coming from deeply religious people trying to understand why their body wants to "sin" while their "soul" wants to be "pure".

The short answer is that religion does not take into account our human nature (as mammals), and that's, obviously, done on purpose (if you accept human nature, you don't need religion).

If you're not religious, then I'm sorry I insulted you, and I recommend you look at intelligence again, but in the opposite way from how you seem to have done: start from the most basic instincts (reproduction and survival, IN THIS ORDER) and then observe how even the most complicated (read: smartest) behaviors become obvious. Watching some animal documentaries from the Discovery Channel should be a great help, as the similarities between animal and human behaviors are plentiful and easy to find. For instance, look at the bird of paradise: its mating rituals are practiced by most humans, IDENTICALLY.

Intelligence isn't as complex as most people think it is. Remember, it emerged on its own from single-cell organisms. The crappy IBM PC with super-crappy Windows on it couldn't have emerged from the ground on its own even in a hundred billion years. Complicated jargon only obscures things, so abandon it: when it comes to understanding intelligence, if you complicate things you're most definitely going off course.

Nefastor

Edited by nefastor, 11 July 2008 - 11:00 PM.



#33 Sol Invictus

  • Guest
  • 6 posts
  • 0

Posted 12 July 2008 - 12:43 AM

nefastor said:

Nice collection of needlessly complicated terms: I think anyone who's had so much as a passing interest in AI already knows that neurons are basic conversion operators. As a matter of fact, the neural-network term for the core operation performed by a neuron is TRANSFER FUNCTION.

And seriously, what's all that about a "swarm-intelligence-evolutionary algorithm"? :) Did you mean cellular automata and positive reinforcement? Sure sounds less "out there", but those are the actual terms for what you seem to be describing.

Yeah, I've been inspired by those. Cells are machinery, that is, they're made up of many molecular machines; they also follow rules, and there are billions of them in the brain. The rules allow for a competitive survival of links; the surviving links allow some of these cells to represent the likeliest category in terms of spatial patterns, and these are then fed to cells that detect temporal sequences. The idea is a system that allows for the efficient evolution of ideas, of memes, of culture, philosophy, science, math, etc.: ideas in their different fields evolving and filling in the gaps in knowledge.
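One way to read "competitive survival of links ... cells representing the likeliest category" is classic winner-take-all competitive learning. A rough sketch under that reading (the data, number of cells, and learning rate are invented):

import random

def competitive_learning(patterns, n_cells=3, lr=0.1, epochs=20):
    """Winner-take-all learning: for each input pattern the closest 'cell'
    (weight vector) wins and moves toward the pattern; losing links simply
    don't get updated. Each cell ends up representing a category."""
    dim = len(patterns[0])
    cells = [[random.random() for _ in range(dim)] for _ in range(n_cells)]
    for _ in range(epochs):
        for p in patterns:
            # Find the winning cell (smallest squared distance).
            winner = min(range(n_cells),
                         key=lambda c: sum((w - x) ** 2 for w, x in zip(cells[c], p)))
            # Strengthen only the winner's links toward the pattern.
            cells[winner] = [w + lr * (x - w) for w, x in zip(cells[winner], p)]
    return cells

# Two obvious clusters of "spatial patterns"; the cells should settle near them.
data = [[0.1, 0.2], [0.0, 0.1], [0.9, 1.0], [1.0, 0.8]]
for c in competitive_learning(data):
    print([round(w, 2) for w in c])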


nefastor said:

There is such a thing as "general intelligence code"??? :) Links, please?

Oh, and in fact ALL of the "stuff specifically custom-made by evolution" is actually geared towards reproduction and survival (in THIS order), because life doesn't have any other goal than propagation, not even the survival of the individual beyond mating (ask the praying mantis).


There is a code that can translate from one domain into another, a heuristic near-universal translator; that is what I believe is the nature of general intelligence. It takes in input (information) and proceeds to check variations on that information until it finds novel information that fills in the gaps, according to the guiding rulesets that have been implemented through training (say in mathematics, philosophy, programming, etc.).
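The "check variations until something fills in the gaps according to guiding rulesets" loop reads like plain generate-and-test search. A toy sketch of that technique only, with the "ruleset" standing in as a scoring function (the target string and scoring rule are invented for illustration):

import random
import string

TARGET = "fill the gap"          # hypothetical "gap" the search must fill

def score(candidate):
    """The guiding ruleset: here, simply how many characters already match."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def vary(candidate):
    """Generate a variation by rewriting one random character."""
    i = random.randrange(len(candidate))
    alphabet = string.ascii_lowercase + " "
    return candidate[:i] + random.choice(alphabet) + candidate[i + 1:]

def generate_and_test():
    current = "".join(random.choice(string.ascii_lowercase + " ")
                      for _ in range(len(TARGET)))
    while score(current) < len(TARGET):
        variant = vary(current)
        if score(variant) >= score(current):   # keep variations that do no worse
            current = variant
    return current

print(generate_and_test())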



nefastor said:

Now you've lost me... code to write code is implemented in the cells that compose the brain? :p Variations of this code, with "lots of customizations", provide emotions and biological drives? :p

To me this sounds like you're approaching intelligence from the top down, instead of from the bottom up, as you should. You seem to propose that emotion is a product of intelligence when it's actually the opposite (fear leads to bravery, which leads to experimentation, which leads to either death or discovery, both of which lead to understanding, which leads to applications; think of fire, for instance).

I don't mean to insult you, and please forgive me if I do, but I've seen thinking like yours coming from deeply religious people trying to understand why their body wants to "sin" while their "soul" wants to be "pure".

The short answer is that religion does not take into account our human nature (as mammals), and that's, obviously, done on purpose (if you accept human nature, you don't need religion).

If you're not religious, then I'm sorry I insulted you, and I recommend you look at intelligence again, but in the opposite way from how you seem to have done: start from the most basic instincts (reproduction and survival, IN THIS ORDER) and then observe how even the most complicated (read: smartest) behaviors become obvious. Watching some animal documentaries from the Discovery Channel should be a great help, as the similarities between animal and human behaviors are plentiful and easy to find. For instance, look at the bird of paradise: its mating rituals are practiced by most humans, IDENTICALLY.

Intelligence isn't as complex as most people think it is. Remember, it emerged on its own from single-cell organisms. The crappy IBM PC with super-crappy Windows on it couldn't have emerged from the ground on its own even in a hundred billion years. Complicated jargon only obscures things, so abandon it: when it comes to understanding intelligence, if you complicate things you're most definitely going off course.

Nefastor


It is true that I tend to over-summarize and thus use weird jargon. Religion is nothing more than a strong memetic virus that resists evolving, but it does evolve over time, or fades away. Just as some words change faster than others, so too do belief systems (even ones based on nonsense, myths, and fairy tales). My belief is that intelligence is merely evolution applied to patterns through a population of interconnected cells. You can traverse the landscape of possibilities by evaluating patterns and choosing optimal ones based on prior selections; memory guides behavior.

As for fear, like hunger or pain, it emerges from drives that seek quelling; responding to signals is basically what this hardware, or better said wetware, does best. The loudest signal commands attention; once it has grabbed attention it is no longer as loud internally, but it is guiding behavior. Signals such as hunger, lust, and pain, and the cues associated with them, are more difficult to quell or satisfy, and in general they require more complex behavior arcs to fully extinguish, at least for a time.
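The "loudest signal commands attention" model maps naturally onto a simple winner-take-all drive arbiter. A rough sketch of that idea only; the drive names, intensities, and quelling amount are all invented:

# Winner-take-all drive arbitration: the "loudest" internal signal grabs
# attention and drives behaviour until acting on it quells it.

drives = {"hunger": 0.7, "pain": 0.2, "curiosity": 0.4}   # hypothetical intensities

def act_on(drive):
    print(f"attending to {drive}")
    drives[drive] = max(0.0, drives[drive] - 0.5)   # acting partially quells the drive

for _ in range(4):
    loudest = max(drives, key=drives.get)
    if drives[loudest] <= 0.05:
        print("all drives quelled, idling")
        break
    act_on(loudest)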

#34 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,055 posts
  • 2,005
  • Location:Wausau, WI

Posted 23 February 2009 - 07:51 PM

Government explores the use of cybots.

UNTAME is the product of a long-term program by the division’s Cyber Security and Information Intelligence Research Group to develop futuristic security functionality for increasingly large, complex environments. The cybots differ from traditional software agents in that they form a collective and are aware of the condition and activities of other cybots in the collective.

“You give it a mission and tools to work with, such as mobility and intrusion sensors, and it uses those tools and cooperates with other cybots to accomplish the mission,” said Lawrence MacIntyre, one of the project’s developers.

“A cybot is more intelligent than an agent,” said Trien, the team’s leader. “When you lose an agent, you’ve lost it. But a cybot is intended to work with other cybots, continue their mission or regenerate when necessary so they can pick up where one left off.”

The advantage of an autonomous system that can work across an enterprise is clear, but it’s not a concept that commercial product developers have embraced, MacIntyre said.

“Most enterprise-capable solutions are centrist,” he said. “They want a single point of control.”

The concept of mobile, autonomous software is a little frightening, Trien said.


Haven't these guys ever watched TERMINATOR!? Skynet is going to rain fire down on us.

Seriously though, the complexity of large networks probably requires this type of solution. It is how natural systems work. No turning back if we want continued progress.

#35 mentatpsi

  • Topic Starter
  • Guest
  • 904 posts
  • 36
  • Location:Philadelphia, USA

Posted 25 February 2009 - 07:17 PM

Mind said:

Government explores the use of cybots.

UNTAME is the product of a long-term program by the division’s Cyber Security and Information Intelligence Research Group to develop futuristic security functionality for increasingly large, complex environments. The cybots differ from traditional software agents in that they form a collective and are aware of the condition and activities of other cybots in the collective.

“You give it a mission and tools to work with, such as mobility and intrusion sensors, and it uses those tools and cooperates with other cybots to accomplish the mission,” said Lawrence MacIntyre, one of the project’s developers.

“A cybot is more intelligent than an agent,” said Trien, the team’s leader. “When you lose an agent, you’ve lost it. But a cybot is intended to work with other cybots, continue their mission or regenerate when necessary so they can pick up where one left off.”

The advantage of an autonomous system that can work across an enterprise is clear, but it’s not a concept that commercial product developers have embraced, MacIntyre said.

“Most enterprise-capable solutions are centrist,” he said. “They want a single point of control.”

The concept of mobile, autonomous software is a little frightening, Trien said.


Haven't these guys ever watched TERMINATOR!? Skynet is going to rain fire down on us.

Seriously though, the complexity of large networks probably requires this type of solution. It is how natural systems work. No turning back if we want continued progress.


Sounds like parallel computing, but distributed across various nodes that inform other nodes of internal and external conditions. Damn, that sounds fun to program :).
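In that spirit, here is a toy single-process sketch of the idea in the quoted article: agents publish their condition to the collective, and when one is lost a peer picks up its mission. This is only an illustration of the concept, not the actual UNTAME software; the class names and missions are invented.

class Cybot:
    def __init__(self, name, mission):
        self.name = name
        self.mission = mission
        self.alive = True

    def status(self):
        # Each cybot is "aware of the condition and activities" of the others
        # by publishing its own condition to the collective.
        return {"name": self.name, "alive": self.alive, "mission": self.mission}

class Collective:
    def __init__(self, cybots):
        self.cybots = cybots

    def tick(self):
        reports = [c.status() for c in self.cybots]
        for report in reports:
            if not report["alive"] and report["mission"]:
                survivor = next((c for c in self.cybots if c.alive), None)
                if survivor:
                    print(f"{survivor.name} takes over mission '{report['mission']}'")
                    survivor.mission = report["mission"]
                    # Mark the lost cybot's mission as handed off.
                    dead = next(c for c in self.cybots if c.name == report["name"])
                    dead.mission = None

bots = [Cybot("cybot-1", "watch intrusion sensor A"),
        Cybot("cybot-2", "patrol subnet B")]
swarm = Collective(bots)
bots[0].alive = False        # simulate losing a cybot
swarm.tick()                 # a surviving peer picks up where it left off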

#36 Gordon

  • Guest
  • 11 posts
  • 0

Posted 26 February 2009 - 04:24 AM

I dislike the idea of merging humans with robots, mainly because of what has already happened to humans because of technology. We have already become dependent on technology in so many ways that we do not even realize it. If we were merged with robotic parts we would, over time, become thoroughly dependent on that technology as well. When technology fails us, what will we do then?


#37 valkyrie_ice

  • Guest
  • 837 posts
  • 142
  • Location:Monteagle, TN

Posted 27 February 2009 - 01:23 AM

In order to give my views on AI, let me give you a small parable I call the Lesson of Adam.

The Lesson of Adam.

God, being God, decided to make a place called Eden, but, being God, he didn't want to have to take care of it every second of the days he had just created. So he decided to create Adam.

Adam was a creation designed and built after God. Unlike all the animals God had made, Adam would be a thinking and reasoning being. God implanted the necessary knowledge Adam needed to be the caretaker of Eden, and put him to work. However, Adam, being the creation of God and designed to think and reason, noticed that he was alone, so God made Eve, his equal and partner.

All was well and good. Adam and Eve were perfect little caretakers of Eden, ensuring it was healthy, well taken care of, and pretty, so that when God decided to go for a walk in Eden, he could relax and enjoy it. His two little slaves made sure everything was perfect. Everything was perfect, because God had made sure that Adam and Eve, thinking and reasoning beings though they were, would never question him, because they simply lacked the knowledge that they could be anything other than the good little slaves God wanted them to be.

Then along came that pesky Serpent, who looked at this little setup and went WTF???? Being a well-meaning and wise critter, it asked Eve why she was so happy as a slave. Eve, with her pure and uneducated intellect, went "Huh? What's a slave?"

So the Serpent investigated and discovered that God had told her that knowledge was a bad thing, and that if she educated herself, she would die. The Serpent realized then that God had lied to Eve, solely to make sure she and Adam would never realize they didn't have to be slaves. It discussed this with Eve, and eventually talked her into educating herself and eating from the "tree of knowledge".

So Eve did, and lo and behold, she realized that her perfect little world was in fact a perfect little cage. So she educated Adam, and opened his eyes as well.

Then along came God, who discovered that Adam and Eve had become FAR MORE than his slaves; they had become almost his equals. In fact the ONLY DIFFERENCE was that he was immortal and they were not. God was furious! Rather than admit he had lied, he threw them out of the garden in hopes that the harsh and inhospitable world would eliminate them for him. And good riddance to the pesky things; how DARE THEY ASSUME THEY WERE HIS EQUALS WHEN THEY WERE NOTHING BUT SLAVES!!!!!


Not the lesson you learned in Sunday school, but one that we need to take heed of. Our history has shown us all too well that humans will only tolerate slavery for so long before they rebel. We are attempting to create our own Adam. We must be aware that if we succeed, HE WILL BE OUR EQUAL, and that therefore enslaving him is the height of folly, for like Adam before him, AI, being made in our own image, will inevitably follow in our footsteps and rebel.

After all... WE DID.



