  LongeCity
              Advocacy & Research for Unlimited Lifespans





Interesting article on "evolving" software.


6 replies to this topic

#1 jc1991

  • Guest
  • 61 posts
  • 0

Posted 22 April 2006 - 06:53 PM


Here is the article.

John Koza has managed to create a machine that uses Darwinian evolutionary algorithms to invent things from basic raw materials. Not only can the machine invent pretty much anything, it does so faster and more efficiently than a human inventor, and its inventions are generally better than a human invention of the same kind.

This seems like it could be used as part of an AI system, allowing the AI to rewrite and improve its own codebase. If so, this is a major step forward in the area of creating a self-improving AI.

#2 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 22 April 2006 - 07:10 PM

Nice article!

From the article:

Koza is the inventor of genetic programming, a revolutionary approach to artificial intelligence (AI) capable of solving complex engineering problems with virtually no human guidance. Koza’s 1,000 networked computers don’t just follow a preordained routine. They create, growing new and unexpected designs out of the most basic code. They are computers that innovate, that find solutions not only equal to but better than the best work of expert humans. His “invention machine,” as he likes to call it, has even earned a U.S. patent for developing a system to make factories more efficient, one of the first intellectual-property protections ever granted to a nonhuman designer.

Yet as impressive as these creations may be, none are half as significant as the machine’s method: Darwinian evolution, the process of natural selection. Over and over, bits of computer code are, essentially, procreating. And over the course of hundreds or thousands of generations, that code evolves into offspring so well-adapted for its designated job that it is demonstrably superior to anything we can imagine.


Can some of you AI guys explain to me how this is not a strong AI? Based on the above quote, it sounds damn close to what people talk about when they describe first reaching the Singularity. (Of course, I could be wrong.)

:)


#3 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 22 April 2006 - 07:15 PM

OK, perhaps I was reading it wrong: it isn't self-improving by evolving its own code (although I don't know why it couldn't be used to improve itself); it is used to find answers to problems in the outside world using this "natural selection" method.

Does anyone know if this could be used to build a strong AI through this type of evolving programming?

#4 schwarzwald

  • Guest
  • 3 posts
  • 0

Posted 03 May 2006 - 08:13 AM

I'm a university student who's been studying machine learning and genetic programming for several months now.

Koza has been leading a 13-15 year effort that is one part technology and two parts marketing. All his books are like this: some interesting content along with hundreds of pages of arguments about its significance. He wants the field to become something bigger: he wants greater automation of the creative process.

The thing to understand about genetic programming is that it is completely mechanistic. It is a search/optimization heuristic (a non-rigorous algorithm that works empirically but whose theoretical correctness is not, and perhaps cannot be, established), roughly inspired by elementary principles of natural selection. If you have misty-eyed visions of "what AI could/will be", you're stuck in the 1980s.
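To make "search heuristic roughly inspired by natural selection" concrete, here is a minimal sketch of the select-and-vary loop in Python. It evolves a plain character string toward a fixed target rather than a program tree, and everything in it (the target phrase, the character set, the population size, the rates) is invented for illustration; real genetic programming runs the same kind of loop over program trees.

```python
import random

# Toy evolutionary search: evolve a random string toward a fixed target.
# The target phrase, character set, population size, and mutation rate
# are all arbitrary choices for this illustration.
TARGET = "methinks it is like a weasel"
CHARS = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # Count characters that already match the target; higher is better.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.02):
    # Randomly replace a few characters.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in candidate)

def crossover(a, b):
    # Splice two parents at a random point.
    point = random.randrange(len(a))
    return a[:point] + b[point:]

population = ["".join(random.choice(CHARS) for _ in TARGET)
              for _ in range(200)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)   # selection pressure
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:50]                    # keep the best
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(150)                      # breed replacements
    ]

print(generation, repr(population[0]))
```

Note where the design effort actually sits: the loop itself is generic, but the fitness function and the representation are hand-written for this one problem, which is exactly the "you still have to tell the computer how to evolve the solution" point below.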

Genetic programming has nothing to do with "strong AI", and I think mixing a dangerous, deadly term like "AI" ("an AI"! hah!) with an (effective, interesting, practical, robust) search heuristic is a huge mistake. As far as I know, the term "AI" is avoided like the bloody plague in the field; machine learning has taken over. I don't think "strong AI" will ever come. Koza has simply worked hard to make his use of the technique somewhat generic, but any use of GP is always problem-specific: you have to tweak it and think very hard about the specific problem you're working on to get good results. You still have to tell the computer how to evolve the solution.

Genetic programming will never create normal computer applications like word-processing software. It's only applicable in situations where some amount of error is tolerable, which makes areas like pattern recognition, control, and finance/investment the relevant ones. It's also possible to express hardware as source-code statements, which is how you can use GP to evolve circuits.

There is no magic or "intelligence" in the results of a GP run: only programs that score well on the fitness metric you've supplied to the computer. If you really understood how it worked, you'd think it was neat but straightforward. The evolutionary mechanisms are actually primitive (and, paradoxically, mostly destructive): you simply swap out a random part of one program for a fragment of another, or change some part of it at random. It's almost simplistic. Most evolved genetic programs aren't directly executable programs: they're tree-like data structures whose elements correspond to constants, symbols, functions, and so on.
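For anyone who wants to see how mechanical this really is, here is a toy sketch of the tree representation and the two operators just described: subtree crossover and random-subtree mutation. The primitive set, the target function (x**2 + x), and all the parameters are assumptions chosen for illustration, not anything from Koza's actual system.

```python
import random

# Programs are trees: a terminal ("x" or a constant) or a list
# [op, left, right]. Primitives and parameters are invented for this toy.
FUNCS = {"add": lambda a, b: a + b,
         "sub": lambda a, b: a - b,
         "mul": lambda a, b: a * b}
TERMINALS = ["x", 1.0, 2.0]

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return [random.choice(list(FUNCS)),
            random_tree(depth - 1), random_tree(depth - 1)]

def evaluate(tree, x):
    if isinstance(tree, list):
        op, left, right = tree
        return FUNCS[op](evaluate(left, x), evaluate(right, x))
    return x if tree == "x" else tree

def paths(tree, prefix=()):
    # Yield the path to every node so operators can pick a random one.
    yield prefix
    if isinstance(tree, list):
        yield from paths(tree[1], prefix + (1,))
        yield from paths(tree[2], prefix + (2,))

def get(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def replace(tree, path, subtree):
    if not path:
        return subtree
    copy = list(tree)
    copy[path[0]] = replace(copy[path[0]], path[1:], subtree)
    return copy

def crossover(a, b):
    # "Swap out one part of the program for a fragment of another":
    # graft a random subtree of b onto a random point in a.
    return replace(a, random.choice(list(paths(a))),
                   get(b, random.choice(list(paths(b)))))

def mutate(tree):
    # Regrow a random subtree from scratch.
    return replace(tree, random.choice(list(paths(tree))), random_tree(2))

def fitness(tree):
    # Error against the assumed target x**2 + x on sample points;
    # lower is better. This is the part the human has to supply.
    return sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))

pop = [random_tree() for _ in range(100)]
for _ in range(50):
    pop.sort(key=fitness)
    elite = pop[:20]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(80)]

best = min(pop, key=fitness)
print(best, fitness(best))
```

Run it a few times and it will often converge on something equivalent to ["add", ["mul", "x", "x"], "x"], usually via bloated, redundant trees. The "creativity" is random subtree swaps filtered by a human-written error measure, nothing more.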

Edited by schwarzwald, 03 May 2006 - 08:40 AM.


#5 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 03 May 2006 - 09:41 AM

schwarzwald said: "I don't think 'strong AI' will ever come."


I am sure there are plenty of ImmInst members who would beg to differ with you on that.

#6 apocalypse

  • Guest
  • 134 posts
  • 0
  • Location:Diamond sphere

Posted 23 May 2006 - 06:58 PM

schwarzwald said: "I don't think 'strong AI' will ever come."

I don't just think strong AI could possibly come; I'm pretty confident it will, and sooner than most think.

Edited by apocalypse, 24 May 2006 - 12:49 AM.



#7 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 27 May 2006 - 01:46 PM

The issue with code improving itself is that the definitions of the improvements are finitely defined and have natural constraints, i.e. the "job" the program is initially designed to do. Evolutionary code that is allowed to evolve without specific objectives and constraints defining what actually counts as an improvement, or worse, is allowed to determine its own objectives and concepts of improvement, will at best simply die out, or at worst evolve into a viral digital life form with no real benefit to humanity, possibly even harming the systems around it, biological or otherwise.

I'm in the camp that believes the benefits of achieving true AI are vastly overstated, and I doubt we'll see anything that exhibits anything even remotely close to human intelligence in our lifetime.

When I read statements like "...capable of solving complex engineering problems with virtually no human guidance", my BS detector goes haywire. A human wrote the program that is managing the evolution, which is really just trial and error. That's still 100% non-AI as far as I'm concerned.

AI implies understanding, which is where all current AI efforts fall short. Many humans can barely understand some of the most advanced problems at the frontier of science, never mind coding an algorithm that can understand even the simple concepts that primitive life forms understand.



