...
Edited by Jace Tropic, 07 November 2003 - 09:33 AM.
Posted 15 September 2003 - 08:23 AM
Posted 15 September 2003 - 09:57 AM
I know I’m not adding anything profound. But I don’t care. Anyone halfway paying attention has probably already figured out I use illicit drugs.
I think AI or GAI or whatever should be created with one thing in mind: choices—ours. We should create inanimate servants that figure out how to allow us to live as long or as little as we want, to learn as much or as little as we want, and to do as much or as little as we want.
What, so it’s better to be robocentric?
What if I and other people said that all we wanted was to live on Earth until the Sun begins heating up, and that we wanted some robots to pick us up and drop us off in the next star system conducive for life, and so on?
My vision for AI is to give me freedom—the way I define freedom—not the way anyone wants to define it for me—and designer governances for everyone else to decide for themselves what freedom means to them.
*very deep, sad sigh* I jes wanna die.
Posted 15 September 2003 - 01:28 PM
To say that the Singularity will pretty much take care of itself after some initial directive; that we cannot whip superintelligent, albeit fundamentally dead, objects into acting on our behalf; that we will be creating superintelligences yet cannot exhale and be complacent, because mere, useless humans will somehow still need to stay on high-alert status; and that we are being anthropocentric otherwise, is downright ludicrous.
There is every reason to be cautious and diligent at every step of the way until AIs become the ultimate problem solvers; they will not get there any other way.
Posted 15 September 2003 - 01:56 PM
I think AI or GAI or whatever should be created with one thing in mind: choices—ours. We should create inanimate servants that figure out how to allow us to live as long or as little as we want, to learn as much or as little as we want, and to do as much or as little as we want.
that we will be creating superintelligences, yet cannot exhale and be complacent because mere, useless humans will somehow still need to be on high-alert status
and that we are being anthropocentric otherwise, is downright ludicrous. What, so it’s better to be robocentric? Fuck that.
So perhaps it is inevitable that we will be uploaded into AI systems; if so, the best choices we can make today are those that recognize all inevitabilities and make the best of them.
Well, I think making the best of prospective smarter-than-human intelligence is to simply aim at making smarter-than-human problem solvers—nothing more. Humans already inspire enough problems to solve. We don’t need any better-than-human thoughts to think of bigger problems.
But all anyone—everyone—really wants is choices, and to have today’s problems, such as death and violence and suffering, eradicated now.
Would we be wrong to design smarter-than-human problem solvers, not illusionary feelers demanding liberty, to give us freedom not only to indulge in infinite knowledge and awesome technology, but also to be able to say, “Well, jeez, I would really like my life right now if only people would just get along, if people weren’t always so miserable, if the standards of living for everyone ranged from at least very comfortable on up, and if I could do trivial things without the underpinning requirement that I must be one of the economy’s whores in order to survive”?
My vision for AI is to give me freedom—the way I define freedom—not the way anyone wants to define it for me—and designer governances for everyone else to decide for themselves what freedom means to them.
Posted 15 September 2003 - 02:25 PM
Posted 15 September 2003 - 03:45 PM
Posted 16 September 2003 - 01:14 AM
Posted 16 September 2003 - 02:08 PM
Posted 24 September 2003 - 01:31 AM