  LongeCity
              Advocacy & Research for Unlimited Lifespans





Computers are dead


22 replies to this topic

#1 Casanova

  • Guest
  • 93 posts
  • 0

Posted 12 January 2004 - 12:58 AM


Here are some quotes from "that" person...
I agree with much of what "that" person has to say...

Roger Penrose quotes, from
http://psyche.cs.mon...23-penrose.html

For those who are wedded to computationalism, explanations of this nature may indeed seem plausible. But why should we be wedded to computationalism? I do not know why so many people seem to be. Yet, some apparently hold to such a view with almost religious fervour. (Indeed, they may often resort to unreasonable rudeness when they feel this position to be threatened!) Perhaps computationalism can indeed explain the facts of human mentality - but perhaps it cannot. It is a matter for dispassionate discussion, and certainly not for abuse!

I find it curious, also, that even those who argue dispassionately may take for granted that computationalism in some form - at least for the workings of the objective physical universe - has to be correct. Accordingly, any argument which seems to show otherwise must have a "flaw" in it. Even Chalmers, in his carefully reasoned commentary, seeks out "the deepest flaw in the Gödelian arguments". There seems to be the presumption that whatever form of the argument is presented, it just has to be flawed. Very few people seem to take seriously the slightest possibility that the argument might perhaps be correct! This I certainly find puzzling.

We must ask whether it is conceivable that this mathematical community, or its individual members, could be entirely computational entities even though the ideal for which they strive is beyond computation.

What is important is the fact that there is an impersonal (ideal) standard against which the errors can be measured. Human mathematicians have capabilities for perceiving this standard and they can normally tell, given enough time and perseverance, whether their arguments are indeed correct. How is it, if they themselves are mere computational entities, that they seem to have access to these non-computational ideal concepts? Indeed, the ultimate criterion as to mathematical correctness is measured in relation to this ideal. And it is an ideal that seems to require use of their conscious minds in order for them to relate to it.



.... might seem to some to be inappropriately "Platonistic", as they refer to idealized mathematical arguments as though they have some kind of existence independently of the thoughts of any particular mathematician. However, it is difficult to see how to discuss abstract concepts in any other way. Mathematical proofs are concerned with abstract ideas - ideas which can be conveyed from one person to another, and which are not specific to any one individual. All that I require is that it should make sense to speak of such "ideas" as real things (though not in themselves material things), independent of any particular concrete realization that some individual might happen to find convenient for them. This need not presuppose any very strong commitment to a "Platonistic" type of philosophy.

My contention is that without any genuine understanding on the part of the computer, it will (at least in most cases) eventually be found out, when subjected to sensitive enough questioning. Trying to simulate intelligent responses by having mountains and mountains of stored-up information, using the programmer's best attempts to assimilate all possible alternatives, would be hopelessly inefficient.

Likewise Moravec and McCarthy appear to belong to the "no big deal" school. McCarthy puts forward various suggestions for the circumstances under which he would consider that "consciousness" occurs. These are all within the computational model, so it is clear from this that I am not in agreement with him that his computer systems, acting according to his criteria, are actually conscious (in the sense that one could actually be such a system). Again, I fear that McCarthy does not appreciate the force of the logical arguments that I have given, which inform us that the quality of "understanding" cannot be accommodated within the computational model.

It is easy to suggest definitions within the computational model (as McCarthy does) of such things as "consciousness", "awareness", "self-awareness", "intentions", "beliefs", "understanding", and "free will". But such definitions need not convey to us (and do not convey to me) any conviction that the corresponding mental qualities that humans actually possess are in any real sense captured by computational definitions of this nature. As I have argued extensively above, the actual quality of human understanding cannot be captured within any purely computational scheme. So it is clear that I cannot be in agreement with all of McCarthy's definitions.
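A compact statement of the Gödelian argument these excerpts lean on may help; the following is the standard textbook formulation, not Penrose's own wording:

```latex
% The Gödelian step, in standard textbook form (not a quotation from Penrose):
% for any consistent formal system F strong enough for arithmetic, Gödel's
% construction yields a sentence G_F asserting its own unprovability in F.
\[
  \mathrm{Con}(F) \;\Longrightarrow\; F \nvdash G_F,
  \qquad\text{yet } G_F \text{ is true, since it says exactly that } F \nvdash G_F.
\]
% Penrose's contention: a mathematician who accepts F as sound can thereby
% "see" that G_F is true, so mathematical insight outruns any fixed F.
```

Replies such as Chalmers' typically attack the premise rather than the theorem: they question whether a mathematician can ever know, of the relevant F, that it is consistent and sound.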



#2 Jay the Avenger

  • Guest
  • 286 posts
  • 3
  • Location:Holland

Posted 12 January 2004 - 01:19 PM

What are you doing here?


#3 tbeal

  • Guest
  • 105 posts
  • 0
  • Location:Brixham, Devon, United Kingdom of Great Britain

Posted 12 January 2004 - 07:19 PM

Is it just me, or can computers already be programmed to do much of what simple animals do in terms of behaviour? And animals are conscious (obviously most are much less so than us), but still, we are simply highly complicated animals; therefore eventually computers will reach this point as well.
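tbeal's point here is the one Braitenberg's "vehicles" made famous: a few lines of program really do reproduce simple animal-like behaviour, such as steering toward a light source. A minimal sketch (all names and constants below are illustrative, not from any post in the thread):

```python
import math

# Toy "Braitenberg vehicle": two light sensors, two wheels, crossed wiring.
# The brighter side speeds the opposite wheel, turning the vehicle toward
# the light, which looks like simple phototactic animal behaviour.

def intensity(px, py, light):
    """Light intensity falling off with squared distance from the source."""
    d2 = (px - light[0]) ** 2 + (py - light[1]) ** 2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, light=(5.0, 5.0), dt=0.1):
    """One update of the vehicle's position and heading."""
    # Sensors sit slightly to the left and right of the direction of travel.
    left = intensity(x + math.cos(heading + 0.5), y + math.sin(heading + 0.5), light)
    right = intensity(x + math.cos(heading - 0.5), y + math.sin(heading - 0.5), light)
    heading += 2.0 * (left - right) * dt      # brighter left -> turn left
    speed = 5.0 * (left + right)              # more light -> move faster
    return (x + speed * math.cos(heading) * dt,
            y + speed * math.sin(heading) * dt,
            heading)

# A short run drifts the vehicle toward the light at (5, 5).
state = (0.0, 0.0, 0.0)
for _ in range(200):
    state = step(*state)
print(state)
```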

#4 Jay the Avenger

  • Guest
  • 286 posts
  • 3
  • Location:Holland

Posted 12 January 2004 - 08:13 PM

True. AI is currently evolving millions of times faster than biological beings have in the past.

#5 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 12 January 2004 - 11:02 PM

Is it just me, or can computers already be programmed to do much of what simple animals do in terms of behaviour? And animals are conscious (obviously most are much less so than us), but still, we are simply highly complicated animals; therefore eventually computers will reach this point as well.


Yes, but can a computer "experience" on even the level of a salt water slug? Can a computer experience the color green or taste the salt in a margarita? How do you make the jump from computation to experience? Once again, I am not discounting that consciousness can have various substrates, but how exactly do you program consciousness into a computer? Frankly, we still haven't a clue, which is why I am agnostic on the short to medium term potential of AI.

And Jay, even if AI were evolving a million times faster than its predecessor, biological life, it would still take a thousand years to reach our level of consciousness.

#6 Jay the Avenger

  • Guest
  • 286 posts
  • 3
  • Location:Holland

Posted 13 January 2004 - 09:58 AM

Yes, but can a computer "experience" on even the level of a salt water slug?  Can a computer experience the color green or taste the salt in a margarita?  How do you make the jump from computation to experience?  Once again, I am not discounting that consciousness can have various substrates, but how exactly do you program consciousness into a computer?  Frankly, we still haven't a clue, which is why I am agnostic on the short to medium term potential of AI.


I'd say you need a collective. One processor (= neuron) is not going to cut it. You need a lot of them, wired up in some orderly fashion.

Which orderly fashion?

How about a neural network that's modelled after our own brain? That's a good strategy.
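Concretely, a minimal version of "lots of processors wired up in some orderly fashion" is a small feedforward network: each unit sums its weighted inputs and squashes the result, loosely analogous to a neuron's firing rate. A sketch with made-up layer sizes and random weights:

```python
import math
import random

def sigmoid(z):
    """Squashing function standing in for a neuron's firing-rate response."""
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One fully connected layer: every unit sees every input."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

random.seed(0)
n_in, n_hidden, n_out = 3, 4, 2  # illustrative sizes only
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
w2 = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
b2 = [0.0] * n_out

hidden = layer([0.5, -0.2, 0.9], w1, b1)   # toy input signal
output = layer(hidden, w2, b2)
print(output)  # two activations in (0, 1)
```

A brain-scale version differs in the wiring diagram and in learning rules, not in this basic unit.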

And Jay, even if AI were evolving a million times faster than its predecessor, biological life, it would still take a thousand years to reach our level of consciousness.


No, because it is growing exponentially. Any process that is crammed into an exponential curve will take a relatively short time to be completed.

Yesterday, I saw the documentary in which Hans Moravec stated that robots were evolving about 10 million times faster than biological creatures ever have (at that time). That documentary was from 1999.

Nowadays, I'm sure, they're evolving much faster. Just take a look at the annual robot expo. Robots keep getting better and better at what they do every year. The competition is only rising.

There's no way on earth that AI won't be built. Intelligent machines already exist. We are them. To state that conscious AI is impossible is to state that we ourselves could not exist.

Obviously, this is wrong.

#7 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 14 January 2004 - 05:15 AM

And Jay, even if AI were evolving a million times faster than its predecessor, biological life, it would still take a thousand years to reach our level of consciousness.


No, because it is growing exponentially. Any process that is crammed into an exponential curve will take a relatively short time to be completed.


1 billion years (rough estimate of the time it took life to evolve into human form) / 1 million = 1 thousand. I was just being glib Jay. :) I agree with you that evolution is a blind process with no specific goal. As such, its rate of progress could be viewed as (roughly) algebraic, with large peaks and valleys along the way; versus humanity and AI, which have intentionality and are thus progressing at a greatly accelerated rate.
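The two estimates can be reconciled on the back of an envelope (my arithmetic, using the round numbers in the posts): a constant million-fold speed-up of a linear process does give the thousand-year figure, but if capability doubles on a fixed schedule, a million-fold gap closes in about twenty doublings:

```latex
% Back-of-the-envelope, with the round numbers from the posts:
\[
  \text{constant speed-up:}\quad \frac{10^{9}\ \text{years}}{10^{6}} = 10^{3}\ \text{years}
\]
\[
  \text{doubling growth:}\quad 2^{n} \ge 10^{6}
  \;\Longrightarrow\; n \ge \log_2 10^{6} \approx 20\ \text{doublings}
\]
% At a doubling time of one to two years (the Moore's-law range commonly
% cited in 2004), that is roughly twenty to forty years, not a thousand.
```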

There's no way on earth that AI won't be built. Intelligent machines already exist. We are them. To state that conscious AI is impossible is to state that we ourselves could not exist.

Obviously, this is wrong.


Jay, to get an idea of my mindset just look at my signature. :) I have never denied the possibility of AI, I just have my doubts regarding the time frames predicted by many Singularitarians. It's a matter of perspective. It's also a matter of priorities. I am much more interested in biotech than infotech. My logic being: stopping the aging process is much more urgent than achieving immortality in the true sense of the word.

Baby steps, it's all about the baby steps.

DonS

#8 Jay the Avenger

  • Guest
  • 286 posts
  • 3
  • Location:Holland

Posted 14 January 2004 - 12:59 PM

I once used your signature in some other thread, ya know. :)

#9 tbeal

  • Guest
  • 105 posts
  • 0
  • Location:Brixham, Devon, United Kingdom of Great Britain

Posted 15 January 2004 - 07:11 PM

Computers may already be conscious. The only way you 'know' that a slug is conscious is because it's living like you.

#10 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 15 January 2004 - 08:10 PM

I don't "know" that a salt water slug is conscious, I assume it. Just as I assume that every other human being in the world around me is consciousness. As JD mentioned, I can justify these assumptions on Ocsrazor, among other principles.

Computers may be this, they may be that... who cares. Has a computer ever passed a Turing test? No.

And you completely ignored my reservations about AI regarding the concept of "experience". Do you have no thoughts on the matter or are you trying to skirt the subject?

DonS

#11 outlawpoet

  • Guest
  • 140 posts
  • 0

Posted 15 January 2004 - 09:58 PM

Um, I'm coming here a little late in the discussion, but why would you assume a salt water slug is conscious? I mean, it's not like it has much of a brain to begin with. Whatever you mean by consciousness, it seems unlikely that such a species would have it.

#12 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 16 January 2004 - 02:51 AM

All right, ding ding ding. Stop. Stop. Stop.

My original statement was -- Can a computer "experience" on even the level of a salt water slug?

That's the quote, now let's stop playing semantics.

I am not assuming that a salt water slug is conscious like a human being is conscious. I am assuming that a salt water slug "experiences" as all living things experience. Maybe this assumption is wrong, but this whole line of debate is really off topic anyway.

Delete the example of salt water slug from your memory. DELETE IT.

Change the example to zebra. Is that better? Are we through @#*#ing nitpicking?!?

Can a computer "experience" as a zebra can experience? Am I assuming that a zebra "experiences"? Your damn straight I am. Just like I assume that you experience, or my girlfriend experiences. But I could be wrong. It could all be an ellaborate hoax. Everyone around me could be zombies with the incredible ability to perfectly mimick consciousness. Maybe I'm the only truly conscious entity in the whole universe.

Or maybe, just maybe, we could apply Occam's razor to this conundrum created by subjective experience and make the leap of faith that other human beings are conscious and living entities do "experience" living.

Now how do you bestow the ability to experience upon an entity, biological or synthetic? Do any of you Sings out there care to answer this question?

#13 Jace Tropic

  • Guest
  • 285 posts
  • 0

Posted 16 January 2004 - 03:42 AM

Don, I understand what you're saying. I believe it's an issue between sentience and non-sentience.

I don't think there is any good evidence out there that would suggest that an AI could ever be sentient. We don't know if true sentience is forever confined to the unique valence of the carbon atom. I think the general attitude is that it doesn't matter whether AI would ever be self-aware in the same way we are. A behaviorist perspective has been adopted, i.e., as long as the AI acts conscious, it must be.

But I truly think that most humans are innately compassionate. Once artificial intelligence reaches human intelligence, people generally won't think twice that they are communicating with a non-living, albeit volitional, being. It is rather disheartening, however, that Singularitarians don’t speak openly about behaviorism. At this stage of the game, that’s what they are essentially: behaviorists (almost like women who are in denial when in a relationship—all trust is consigned to their companion's superficial show).

#14 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 16 January 2004 - 04:14 AM

Here's a thingy I wrote on Singularity forecasting:

http://www.accelerat...encehowsoon.htm

#15 tbeal

  • Guest
  • 105 posts
  • 0
  • Location:Brixham, Devon, United Kingdom of Great Britain

Posted 18 January 2004 - 06:05 PM

Obviously a computer does not experience on the same level that a 'normal' human being does: it does not receive enough information, nor can it organise it at the same speed, nor can it learn what shapes are, etc. So a computer will always see only what it was originally programmed to 'see', and since human consciousness is partially about learning and progressing, SO NO, computers are not self-conscious.

But in terms of consciousness, I think the only evidence you can have for it is behavioural, plus the speed and complexity of the processor (or brain, in living things). (I have a logical reason, Don, to reject Occam's razor to at least a degree in this case, since the idea of simplicity is subjective; surely only one consciousness is a lot simpler than the idea of billions of them.) If the processor is as complex and it behaves the same, then it is probably conscious, since that's all the logical evidence (appearance doesn't matter) you have that anyone else is conscious.

#16 ocsrazor

  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 25 January 2004 - 04:09 AM

Just scanned this thread quickly, but it seems to me the point you are missing here is complexity. Yes, there are computational simulations which have the same level of behavioral complexity as a sea slug. The 'experience' of these simulations is just as robust as that of an animal of equivalent complexity, in an equivalently complex sensory environment.
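The canonical sea-slug case is habituation of Aplysia's gill-withdrawal reflex: repeated harmless touches produce a progressively weaker withdrawal, and the response recovers when the stimulus stops. Its behavioural core fits in a few lines; a toy model with made-up constants, not any specific published simulation:

```python
# Toy habituation model in the spirit of Aplysia's gill-withdrawal reflex.
# Synaptic strength depresses with each use and slowly recovers at rest.

def run(stimuli, strength=1.0, depression=0.7, recovery=0.05):
    """Return the withdrawal response to each stimulus in the sequence."""
    responses = []
    for touched in stimuli:
        if touched:
            responses.append(strength)
            strength *= depression                    # weaken with use
        else:
            responses.append(0.0)
            strength = min(1.0, strength + recovery)  # recover at rest
    return responses

# Ten touches, a rest period, then one more touch: the response decays,
# then partially recovers.
out = run([1] * 10 + [0] * 20 + [1])
print([round(r, 2) for r in out[:10]], round(out[-1], 2))
```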

Penrose's specialness-of-humanity position is an old philosophical argument that breaks down to the fact that there is only one example of human-level consciousness that we know of in the universe: us. The only way to definitively falsify the argument is physical proof, which will not be long in coming, but there are already very strong indications in many branches of science that the phenomenon of 'life' in general, and human-level consciousness, can be duplicated in other substrates as long as these entities have the requisite complexity in both their structure and the structure of their environments.

When the AIs start behaving like humans, we will be hard-pressed to say they do not possess humanlike consciousness.

There is an excellent book on these subjects entitled "The Fourth Discontinuity" by Bruce Mazlish. The three previous discontinuities in human thinking have been:
The Copernican - the belief that humans are at the center of the universe
The Darwinian - the belief that humans are created differently than animals
The Freudian - the belief that the ego is in complete control of the human mind
and the 4th - the belief that intelligence and/or consciousness is 'special' to the biological substrate

All four of these represent clear discontinuities in the logical flow of the structure of nature. The human mind is a direct result of the level of complexity it contains. It is the most complex physical system we have yet encountered in the universe, so it sets a high bar, but if this level of informational complexity can be achieved in another substrate, there is no reason to doubt that that entity will possess the same features as the human mind.

#17 ocsrazor

  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 25 January 2004 - 04:17 AM

quick addendum:

Yes, Jace, the Turing Test and the like are pure behaviorism, and this goes all the way back to Descartes' argument: the only thing we can ever really know as human beings is that consciousness exists, but we can never positively identify it, because there may be some superintelligent trickster that creates a puppet consciousness. Something had to be there to give the puppet consciousness, though; hence his argument. The behaviorist test is the best we've got without opening up the hood on consciousness.
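Written down as a protocol, the behaviorist test looks like this: the judge sees only text, so the verdict can rest on nothing but behaviour. A schematic harness in which the judge and respondents are trivial stand-ins, purely to show the shape of the test:

```python
import random

# Schematic of the imitation game. A real judge applies insight and probing
# questions; this stand-in guesses at random, so over many trials it should
# be right about half the time against indistinguishable respondents.

def human(question):
    return "I'd have to think about that."

def machine(question):
    return "I'd have to think about that."

def judge(transcript):
    """Stand-in judge: ignores the transcript and guesses."""
    return random.choice(["human", "machine"])

def imitation_game(questions):
    candidates = {"human": human, "machine": machine}
    label = random.choice(list(candidates))            # hidden assignment
    transcript = [(q, candidates[label](q)) for q in questions]
    return judge(transcript) == label                  # did the judge catch it?

correct = sum(imitation_game(["What is green like?"]) for _ in range(1000))
print(f"judge correct in {correct} of 1000 trials")   # ~500 = indistinguishable
```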

As someone who is attempting to open up the hood, I would be satisfied with a definition of consciousness in which the system of an artificial intelligence both behaved and operated with the same level of complexity as a human.

Edited by ocsrazor, 25 January 2004 - 09:46 PM.


#18 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 25 January 2004 - 05:45 AM

Good to have you back, Peter [!]

#19 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 25 January 2004 - 10:10 AM

Peter, you come back to the forums and already you are helping me gain a better understanding of the issues at hand. :) Your style and logic really jive (jive, jeez I sound like my mother, lol) with the way I think. I am not saying this to be a kiss ass. [tung]

There are some simple conclusions I can make that are leading me down a path I have not gone down before. For instance, can the processes of a biological neuron be replicated using a different substrate? I would answer yes. There is nothing to suggest this violates the laws of physics. Further, from what I've read, this has in fact already been done on a limited basis. So then yes, it certainly does boil down to an issue of complexity. I agree.
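The usual minimal illustration of "replicating a neuron on another substrate" is the leaky integrate-and-fire model, which captures input integration and spiking in a few lines. A sketch with illustrative constants, not a claim about any specific experiment:

```python
# Leaky integrate-and-fire neuron: the textbook minimal model of a
# biological neuron's input integration and spiking, "replicated" in
# software. All constants are illustrative.

def simulate(current, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0,
             tau=10.0, resistance=10.0, dt=0.1):
    """Membrane potential integrates input current; spike at threshold."""
    v, spikes = v_rest, []
    for i, i_in in enumerate(current):
        # dV/dt = (-(V - V_rest) + R*I) / tau   (leak plus driven charge)
        v += dt * (-(v - v_rest) + resistance * i_in) / tau
        if v >= v_thresh:          # threshold crossed: emit a spike
            spikes.append(round(i * dt, 1))
            v = v_reset            # reset, like the post-spike drop
    return spikes

# 200 ms of constant input current (arbitrary units): regular spiking.
print(simulate([2.0] * 2000))  # spike times in ms
```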

I am trying to express a certain idea I have regarding "consciousness", but I seem to be failing and getting frustrated at my inability to communicate my idea.

Can Fritz, the highest-ranked computer chess program, give Kasparov a run for his money? You bet it can.

Can it speak, laugh, cry, think about the baseball game or the weather forecast, enjoy a vanilla milk shake, take a run in the park, argue the meaning of life with friends?? No, no, and more no.
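The Fritz half of that contrast is the half we know how to build: game-tree search. The core is minimax with alpha-beta pruning, shown here on a tiny hard-coded tree rather than chess; this is the generic algorithm, not Fritz's actual code:

```python
# Minimax with alpha-beta pruning, the core machinery behind programs
# like Fritz, run on a toy game tree instead of chess positions.

def alphabeta(node, alpha, beta, maximizing):
    """node is either a numeric leaf score or a list of child nodes."""
    if isinstance(node, (int, float)):     # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:              # opponent avoids this line: prune
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Depth-2 toy tree: the maximizer picks a branch, the minimizer then a leaf.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 3
```

None of this machinery touches the milkshake-and-meaning-of-life column of the ledger, which is Don's point.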

The way I see it, the human mind is formed by both nature and nurture. The "nature" is the cognitive systems that we were supplied with by evolution and that we all possess through inheritance. These cognitive systems are the building blocks for the mature human mind. We are not, as Locke would suggest, blank slates; and the cognitive sciences are uncovering more of the mysteries behind the human mind every day.

However, an adult human mind (or even a 10 year old’s mind) is not created instantly. The process of cultural transmission and assimilation is a continuous one that molds the mind over the entirety of a human life span. I guess I would just be stating what for many of you would be presupposed knowledge when I say that it takes the better part of two (three?) decades for the human mind to be considered mature. This process encompasses everything from learning spatial relations to understanding the complex social nuances of everyday life. How can you program that "real life" experience into AI? That's what I'm asking, that's it. How can you program "real life" experience? And if we can’t program this “real life” experience into a computer before intelligence arises from an artificial substrate, then wouldn’t we be forced to deal with an intelligence so foreign, so alien to our own mode of thought, that cooperation and mutual respect would be beyond our capacities?

I think, Michael, this is some of what you were trying to convey to me when you posted your link. However, remember something. Look at my post. Look at the level I’m at. I’m on level 1 while you're on level 5. I’m just starting to get this.

Sincerely,

DonS

#20 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 25 January 2004 - 01:39 PM

Jace: A behaviorist perspective has been adopted, i.e., as long as the AI acts conscious, it must be.


As ocsrazor and Jace point out, the behaviorist perspective is all we have to go by right now. Why do I think DonSpanton is conscious? Because he acts as conscious as I am. If a computer walked and talked like me, I would have to assume it is conscious too. It doesn't matter if it computes with silicon, carbon, or strawberry jelly.

#21 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 25 January 2004 - 04:30 PM

Mind, the same is true when looking at lichen to distinguish it from the rock. It's just that some people are more perceptive about behavioral activity than others, and can apply criteria to distinguish sedimentation from growth.

The very words we still use, like "animate" and "animal", are predicated on movement. We have some very basic behaviors for defining life: consumption, movement (modified for plants), growth, and reproduction.

Sentience as a behavior is one we are still arguing about: does it belong on the list? :))

By default, if there is argument there is limited sentience, and thus a form of movement, a back-and-forth of ideas, which I suggest defaults it to the living category. But then sentience becomes a question of degree, not all or nothing.

Even the lowly lichen possesses a "degree of sentience," a "tropism" of cognition leading it toward better conditions, sustenance, survival, and self-resurrection (reproduction). This remains true at the most primitive scale of what is alive.

So would an Artificial Intelligence that is "alive" not also share many of the other characteristics?

#22 bacopa

  • Validating/Suspended
  • 2,223 posts
  • 159
  • Location:Boston

Posted 25 January 2004 - 06:51 PM

Don, it seems to me (and I'm at a sub-1 level) that once AI can recursively self-improve, then perhaps those so-called 'human' emotive states, like being able to express empathy, compassion, and appreciation, might come with the whole package.

In other words, once a silicon substrate (or what have you) can figure out the algorithms of human sentience, logical thought, and basic conceptual understanding, then emotion may automatically follow suit.

Furthermore, if nano is all that it's cracked up to be, then the sheer nuance of emotion may blow our minds away, along with its amazing ability to be very, very intelligent: millions, dare I say... billions (Austin Powers?) smarter than us. Does that intimidate you or make you feel disempowered?

Well, it does me. But who knows: not only could these insanely bright AI help us out with our meager lives, they may also help us become smarter ourselves, making our lives less meager, whichever may come first: our own ability to upgrade ourselves through human intelligence, or AI upgrading us. Point is, we will be far smarter than we have been in the past.

As to your sea slug analogy: I would imagine, as Michael keeps saying, benevolent AI will be benevolent because it will possess emotions perhaps far beyond our own limited, yet significant, set of emotions that we claim ownership of now.


#23 ocsrazor

  • Guest OcsRazor
  • 461 posts
  • 2
  • Location:Atlanta, GA

Posted 25 January 2004 - 09:52 PM

Thanks for the welcome back gang!

dfowler, you just reminded me of something else I wanted to mention about emotions. I think that something like emotional states will definitely appear in general artificial intelligence, although it may be of a different (more nuanced?) quality. The usefulness of emotional states in animals is that they allow us to react quickly, with a generalized response, to lots of sensory input.

For any system to react intelligently, and in a timely way, to large data streams, it will probably be necessary to have something similar. The capabilities of general artificial intelligence may be such that the response does not have to be as general, but something like this averaging of response to input may be necessary to develop good solutions for operating in the messy and complex environment of the real world.
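One way to make that "averaging of response to input" concrete: an exponential moving average over a noisy input stream can serve as a single cheap global state that gates behaviour, which is a common toy caricature of an emotional state. A sketch of my own construction, not a model from the post:

```python
# Toy "emotion as a generalized, fast response to lots of input": an
# exponential moving average of recent threat readings acts as one scalar
# state ("fear") that biases every decision, far cheaper than re-analysing
# the full sensory stream at each step. Illustrative constants throughout.

def run(threat_signals, alpha=0.3, flee_threshold=0.6):
    fear = 0.0
    actions = []
    for threat in threat_signals:
        # Fast, general update: fold the newest reading into one scalar.
        fear = (1 - alpha) * fear + alpha * threat
        # The global state gates behaviour without detailed analysis.
        actions.append("flee" if fear > flee_threshold else "explore")
    return actions

# A burst of threat readings trips the flee response, which then decays.
print(run([0.0, 0.1, 0.9, 1.0, 1.0, 0.2, 0.0, 0.0]))
```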



