The basic philosophical hurdle here is:
1. AI is not a singular entity. Any AI could have any goal, all depending on what is programmed into it.
2. The vast majority of goals and goal systems in a true AI would almost certainly lead to the extinction of humanity.
That's our lesson for today.
I think the basic point about de Garis being loony is that, first of all, he agrees with 2, at least in some sense. He "expects humanity to be made extinct by 'artilects'."
However, he blazes ahead with creating these brain structures of totally unsafe, and even totally arbitrary, goal content and structure: "His architecture (at least as of 'CAM-brain') is just about as horribly emergent and uncontrollable/unpredictable as it is possible to get. If you accept hard takeoff, and you're using an architecture like that, then it doesn't make a jot of difference what petty political goals your funders might have; they're as irrelevant as everyone else's goals once the hard takeoff kicks in."
This flies totally in the face of 1, and even of his own stated agreement with 2: "Last time I checked, Hugo de Garis was all for hard takeoff of arbitrary AGIs as soon as possible, and damn the consequences."
Thus, "I'd have to characterise this goal system as quite literally insane."
Edited by Savage, 27 October 2008 - 03:50 AM.