  LongeCity
              Advocacy & Research for Unlimited Lifespans





Free Will...


200 replies to this topic

#1 A941

  • Guest
  • 1,024 posts
  • 50
  • Location:Austria

Posted 04 September 2002 - 03:36 PM


Archive of this topic may be found at bjklein.com ->


I think the common idea of free will as a causeless "thing" from nowhere is nonsense. Our consciousness needs information and experience to act, and this limits our potential. But does our consciousness really act freely, or is it a marionette? Do older parts of our brain choose before we choose? What is there to say about the Libet experiments? Weeks ago I heard that he told the test subjects to wait for "the urge to move" and not simply to move their fingers at will.

#2 Davidov

  • Guest
  • 20 posts
  • 0
  • Location:Georgia (USA)

Posted 05 September 2002 - 01:39 PM

We're most likely determined in our conscious activities. Were it otherwise, it would suggest that at the core of our consciousness is a chaos... bringing order to our actions.
I've read about neural networks that fire at random but eventually lead to order (i.e. you and me). More or less, if everything were understood, chaos wouldn't exist. Even in systems of complete randomness, there are underlying laws to the randomness (no apparent order). But if there are any absolute laws out there, eventually we have to submit to some type of order that brings about our mind's order.
Unless you want to be a relativist, which is just a scapegoat idea that has no real utility in trying to objectively observe reality (seemingly the most difficult task of all - or trying to completely understand yourself, another biggie).

#3 Guest_chip@ergodicity.org_*

  • Lurker
  • 0

Posted 10 September 2002 - 04:53 PM

I believe in 100% total free will. Let's see if I can offer you two major observations that make up an inductive proof.

I find that so-called Western science denies subjective reality, though that is all anyone can know. Subjectively, each of us has never known death. Though some may claim to have died, experienced something, and then been lucky enough to come back to tell us about it, I find it plausible to explain this as humans having an active imagination and brain activity never quite stilled in such instances. If we have never known death, how can we say it ever happens?

Consider how many things could happen to stop your next breath. Didn't happen, right? Now consider the overall finality scenario for the universe. Some claim that it will end sometime, though this is a great leap of human conjecture, with uncertainty increasing as the time parameters involved render the human consciousness virtually infinitesimal. One theory of the universe considers the possibility of a steady-state system, one that never ends. Confidence in these theories is not total. That evidence points to one or the other is always only that, an inference, never the total truth, as humans cannot perceive this from first-hand knowledge. If the universe could cease and you could experience this, then the universe would not have ceased, because your perceiving would still be here and you are a part of the universe. Of course this is just addressing a semantic argument, but then many right now argue virtually anything willy-nilly.

We have to consider the possibility that the universe is steady-state, as the time factor must be continuously expanded to consider such a scenario, and as time increases, the law of large numbers makes the idea never totally refutable, that is, if one were to retain logic and rationality. In a steady-state universe, given enough time, all things, no matter how improbable, will come to pass, and not only once but again and again.

Death does appear to be a true state of being from a strictly objective perspective. We can see that people die, other life forms cease, machinery can fail. But what about the individual? Can we perceive our own death? If we do leave behind a portion of the universe in which others saw us die, what's to say the same scenario may not evolve again, except lacking the cause of death that others experienced? How much time passes subjectively when we are without thought or consciousness?
It appears possible that every moment we leave behind a portion of the universe where others see our demise, but we continue on where the universe picks up the same experiment again, except without the death part: basically, a real-time reincarnation.

Okay, so we are totally immortal, but free will addresses more than just escaping death. So what if you wanted to make pink fire-breathing dragons flying around some incredible gravity-defying, architecturally breathtaking fairy-tale castle? Well, if a realistic animation would suffice, then you could feasibly make such a thing, as is evident in today's movies. Or you might have to wait until we have many space colonies of much variety, including gravitational anomalies, with bio- and mechanical-engineering skills far beyond today's abilities. But what if you want to do it now? I would state that anything that appears impossible now is so because we have chosen this. We truly do not want the scenario of an insecure magical existence, where we could do anything or anything could happen to us.

Consider: if we are truly immortal, there will come a time when there is no surprise, when we have learned to increase our knowledge and our ability to use it. We might choose at this time to re-evolve ourselves with an apparent blank slate, to start the process all over again in order to bring surprise and unpredictability to our lives, just for the sake of the richest treasures in the universe, the emotions. We might just be gods and goddesses, each on a path of achieving omniscient prescience, and we may have actually reached such a state before.

Does this mean that life is worthless? Hardly; life, especially conscious life, is the most inestimable, due to its inherent longevity. It is the least transient thing. Still, confusion is rampant and many lose their valuation of life. People lose consciousness of their own life, the lives around them, our biosphere. The more we act in accordance with the idea that we are mortal, the more we suffer. If existence is endless, why spend this time suffering? In order to realize an objective reference that immortality exists, we must strive to help other life survive without calamity. If we can only be and feel, then why not seek the good feelings?


#4 Davidov

  • Guest
  • 20 posts
  • 0
  • Location:Georgia (USA)

Posted 11 September 2002 - 02:38 AM

I do not see what this has to do with free will. Besides that, I'd rather be a cog in a machine than a random floating of "mind stuff".

#5 Chip

  • Guest
  • 387 posts
  • 0

Posted 11 September 2002 - 06:51 PM

Gee, looks like you need to open your eyes further and perhaps take in a bit more information. Still kind of wondering what your second sentence implies. Maybe I just don't understand, but it seems this short blurb of yours is meant as criticism of my post. Are you upset about something?
Please excuse these remarks if they were meant as a reply to some other post.

What are you calling "a random floating of 'mind stuff'" specifically?

I've experienced a number of chat rooms on the web and have learned that they are not conducive to much immediate real work because of the anarchy implicit in their design. I mean, they are interesting and worthy of experimentation, but, heck, the software is young. I suspect that eventually chatting on the web will become more of a peer-to-peer process where discussion is amongst equals, without antagonism. Anyone else wish to comment? Have I just witnessed again the out-of-the-blue, random, irrational "mind stuff" that a chat room often shows? lol

#6 Davidov

  • Guest
  • 20 posts
  • 0
  • Location:Georgia (USA)

Posted 11 September 2002 - 07:44 PM

Gee, looks like you need to open your eyes further and perhaps take in a bit more information. Still kind of wondering what your second sentence implies. Maybe I just don't understand, but it seems this short blurb of yours is meant as criticism of my post. Are you upset about something?


Oh, haha. I'm not upset about anything. I think you're upset about me seeming to be upset ;). Simply put, I do not see *really* what your post had to do directly with free will at all. It just seemed to be, no offense intended, a blob of thoughts about death, immortality, etc.

What are you calling "a random floating of 'mind stuff'" specifically?


When someone speaks of their mind having "free will", I interpret this as if their mind is free from following any laws the universe has for an organism's substrate (for example, a brain), thus having "free will". I said "random floating of 'mind stuff'" because this essence of free will, from what I've interpreted, is a mind based on indeterminate actions. If such a "system" of sentience exists, all uploading plans aren't going to be as easy as previously, hopefully, assumed.

Have I just witnessed again the out-of-the-blue random irrational "mind stuff"  that a chat room often shows?  lol


I think you're being too quick to presume [roll]

#7 Chip

  • Guest
  • 387 posts
  • 0

Posted 12 September 2002 - 01:30 AM

Thank you for the clarification. I found a succinct definition of inductive reasoning as opposed to deductive reasoning and believe I followed the former properly (http://philosophy.wi...20/notes_1.html).

Yes, my post did not deal with free will directly. Free will cannot be deduced directly, because that would require covering all possible scenarios for all time, and we do not have this knowledge at our disposal. However, by looking at the possible characteristics of the universe, we can conclude that free will is probable. This is what I attempted to do.

In light of this, I then addressed why we may actually have free will when it appears not to be there, by bringing in the time factor and the possibility that we are not entirely cognizant of what we truly want. "Any laws the universe has" I consider as probably chosen by our free will. It is precisely because our "substrate" is subject to these laws that I can use inductive logic. Whatever our substrate, which is more than just physical brains, I believe we are part of the universe and thus exhibit characteristics that can only be guessed at by looking at the characteristics of the universe itself. If we take the existential perspective that our selves contain some component that is not of this universe, then you end up with that difficulty for uploading you mention, as uploading requires adherence to what is potentially knowable, something that is of the universe. Humans have the indeterminism of the universe as a part of themselves. Strange but plausible: our deterministic nature implies that we are a part of the universe, and that implies we are indeterministic in the final analysis. See, it really can only be addressed inductively, and I like how prettily the logic plays out. I find it a happy thought. lol

Yes, perhaps I presume too much with my general denouncement of your post as an example of a chat room's failings. No offense meant. Thanks for the opportunity to explore my thoughts more completely.

#8 Davidov

  • Guest
  • 20 posts
  • 0
  • Location:Georgia (USA)

Posted 12 September 2002 - 03:59 AM

Basically you're saying that since we can't entirely, with total assurance, understand ourselves or the universe (yes, indeterminism, I know a lot about that), that somehow makes free will probable.

I don't understand how. What that means to me is that we can't determine at all whether free will or determinism is more correct than the other. Otherwise I could simply say determinism is more probable. I plan to determine that uploading is possible, and this virtually REQUIRES determinism to be true.

It is precisely because our "substrate" is subject to these laws that I can use inductive logic. Whatever our substrate, which is more than just physical brains, I believe we are part of the universe and thus exhibit characteristics that can only be guessed at by looking at the characteristics of the universe itself.


First of all, saying our sentience is more than our physical brain is not only going against tons of research, it also has no real use. There's no point in saying that consciousness isn't in the brain at this point. We're trying to upload people's awareness onto a different substrate. Remarks like that are not only semantic but, if taken seriously to the point of no research in the area, are simply going against the goals of immortality.

If we take the existential perspective that our selves contain some component that is not of this universe then you end up with that difficulty for uploading you mention as that requires adherence to what is potentially knowable, something that is of the universe.


IF you take that view.

Humans have the indeterminism of the universe as a part of themselves.  Strange but plausible, our deterministic nature implies that we are a part of universe and that implies we are indeterministic in the final analysis.


I think you have the definition of determinism confused; the definition of determinism in relation to free will is this: determinism is the belief that the substrate of an individual's awareness, if viewed from a perfectly objective viewpoint on reality, can yield complete understanding of that individual's actions, emotions, beliefs, theories, etc.

I've always believed, to a reasonable extent, that everything is indeterminable. But we should, as objectively as possible, try to achieve what is beyond us.

#9 Chip

  • Guest
  • 387 posts
  • 0

Posted 12 September 2002 - 09:40 AM

The indeterminism of the universe is only part of the two basic components of my reasoning. As for determinism being a requirement for the possibility of uploading, I do believe you were quite correct in using the term "virtually", which means a task has to be done to satisfy observational analysis as to its success. We can expect to make copies, or one-way uploads, that can appear as the original to all intents and purposes, but I'm afraid we must always consider that a copy can never be its original. That means the upload can be given some enhancements. One doesn't need actual complete determinism to perform uploads.

I looked for a mathematical definition of inductive reasoning as I learned it in college, but I haven't found it on the web. From memory, I believe it goes like this: prove your hypothesis for 1, then show that if it holds for n it also holds for (n+1), and you will have an acceptable inductive reason to believe your claim holds true in all instances pertaining to the elements of the domain under observation. Let me see if I can formulate a more concise and complete presentation of the logic of my claim that we have free will. This is not a mathematical presentation, but still I believe that I can break this down to follow the dictates of Boolean algebra; if anyone wants to label the various clauses of my statements as variables with varying conjunctions of juxtaposition and the logical operators AND, OR, "IF ... THEN ..." and NOT, feel free, though it might require a team of workers and a lot of time. Somebody refresh my memory if I'm wrong, but I believe all other logical operators can be derived from these few. I ramble.
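The claim that the remaining logical operators can be built from AND, OR, and NOT alone is correct, and it can be checked mechanically. A minimal sketch (the function names are my own, for illustration only):

```python
# Deriving other logical operators from AND, OR, and NOT alone,
# as the post suggests. Function names here are illustrative.

def xor(a, b):      # exclusive or: true when exactly one input is true
    return (a or b) and not (a and b)

def implies(a, b):  # "IF a THEN b" is equivalent to (NOT a) OR b
    return (not a) or b

def iff(a, b):      # "a if and only if b": implication both ways
    return implies(a, b) and implies(b, a)

# Exhaustively verify each derived operator against its truth table.
for a in (False, True):
    for b in (False, True):
        assert xor(a, b) == (a != b)
        assert implies(a, b) == (not (a and not b))
        assert iff(a, b) == (a == b)
print("all derived operators match their truth tables")
```

Because there are only four input combinations, the loop is a complete proof for each operator, not just a spot check.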

Here's the logic. Free will, or self-determination, can only be if death is not. So I analyze the theory that death does not exist. To my understanding, I have never known death; that's the "1" of the first part of this induction. Because a steady-state universe cannot be logically discounted, and because the universe in general appears indeterministic, death may not exist for other people either. As long as I cannot claim with full conviction that the universe had a beginning and an end, then no matter how small the probability, time will make us all reincarnate in conditions similar enough that we can't tell the difference, and this must be happening all the time, every moment. That's the (n+1) part of the reasoning. So the apparent existence of death is just that, apparent, but not actually the case. Death is no obstacle to the premise of free will.
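The "1" then "(n+1)" structure being invoked here is the shape of mathematical induction. For comparison, a standard textbook instance of that base-case / inductive-step pattern (my own example, not from the thread), checked numerically:

```python
# Mathematical induction pattern, illustrated with the classic formula
# 1 + 2 + ... + n = n(n+1)/2 (a standard example, not from the thread).

def closed_form(n):
    return n * (n + 1) // 2

# Base case ("1"): the formula holds for n = 1.
assert closed_form(1) == 1

# Inductive step ("n+1"), spot-checked numerically: if the formula gives
# the sum up to n, then adding n + 1 gives the formula's value at n + 1.
for n in range(1, 1000):
    assert closed_form(n) + (n + 1) == closed_form(n + 1)

print("base case and inductive step verified for n up to 1000")
```

Note the contrast with the post's usage: in mathematics the step "if it holds for n, it holds for n+1" must be proved for every n, whereas the argument above infers it from plausibility, which is why Davidov later objects.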

The former inductive reasoning then becomes the "1" of another induction, where the (n+1) becomes anything else we might want to do besides not having to die. Here is where I brought in the idea that we often do not know our own wants, and that the existence of natural laws may be what we really want for our existence. For the other part of the "n" I suggested that virtual realization can be sufficient (such as an animated movie, or maybe more appropriately a virtual reality) and that perhaps time and technical know-how could also bring any true wants to fruition.

That covers the whole ball of wax! As far as I can tell, I've given you a very logical and relatively complete argument for the existence of free will.

Do you see my reasoning now?


#10 Lazarus Long

  • Life Member, Guardian
  • 8,090 posts
  • 237
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 12 September 2002 - 02:14 PM

Chip and Davidov, have you noticed the serendipity that not only do your philosophies complement one another, in an application of dialectical reasoning that demands synthesis, but even your avatars do?

I am enjoying your debate thoroughly. Please continue.

You two are forming a *Circle Squared*.

#11 Davidov

  • Guest
  • 20 posts
  • 0
  • Location:Georgia (USA)

Posted 12 September 2002 - 02:32 PM

The indeterminism of the universe is only part of the two basic components of my reasoning. As for determinism being a requirement for the possibility of uploading, I do believe you were quite correct in using the term "virtually", which means a task has to be done to satisfy observational analysis as to its success. We can expect to make copies, or one-way uploads, that can appear as the original to all intents and purposes, but I'm afraid we must always consider that a copy can never be its original. That means the upload can be given some enhancements. One doesn't need actual complete determinism to perform uploads.


It definitely may not be possible to achieve true uploading, but that is not going to stop me from trying. There are other methods than the shoot-up-to-your-computer method. Using nanotechnology you could slowly alter your brain from a biological one to a non-biological one (I believe Moravec conceived of this).

Here's the logic. Free will, or self-determination, can only be if death is not. So I analyze the theory that death does not exist. To my understanding, I have never known death; that's the "1" of the first part of this induction. Because a steady-state universe cannot be logically discounted, and because the universe in general appears indeterministic, death may not exist for other people either. As long as I cannot claim with full conviction that the universe had a beginning and an end, then no matter how small the probability, time will make us all reincarnate in conditions similar enough that we can't tell the difference, and this must be happening all the time, every moment. That's the (n+1) part of the reasoning. So the apparent existence of death is just that, apparent, but not actually the case. Death is no obstacle to the premise of free will.


Even though I don't see too much how free will is relevant to death, reincarnation, and other seemingly irrelevant issues, I believe you're just stating theories and somehow deriving facts from them. There is no real evidence we're reincarnated. Imagination does not equal reality. Sounds more like inductive imagination.

Here's something that you could consider inductive or deductive:

One Third!

That covers the whole ball of wax!  As far as I can tell, I've given you a very logical and relatively complete argument for the existence of free will.  

Do you see my reasoning now?


You're basically saying inductive reasoning 'proves' free will. I don't agree. Inductive reasoning is still deductive reasoning connected to a person's imaginative abilities. There is no magical thought process (I hope).

Besides that, I'm asking you a more central question to the argument: Are you saying our awareness goes beyond our physical bodies, as in beyond our brains? If you are, I don't think anyone would completely disagree with you (because of that pesky indeterminism of possibilities), but we wouldn't find any real utility in proposing such a possibility. That's religion's job.

I sure freaking hope that uploading is possible, but I'm willing right now to focus more on advocating and spreading memes like FAI (Friendly Artificial Intelligence).

#12 Davidov

  • Guest
  • 20 posts
  • 0
  • Location:Georgia (USA)

Posted 12 September 2002 - 02:37 PM

Chip and Davidov, have you noticed the serendipity that not only do your philosophies complement one another, in an application of dialectical reasoning that demands synthesis, but even your avatars do?


You know, I did notice his avatar was similar to mine. ;)

#13 Chip

  • Guest
  • 387 posts
  • 0

Posted 12 September 2002 - 04:02 PM

Yes, I look at the two graphics (I've been wondering if the use of "avatar" might be twisting the meaning of the word, but what the heck) and am immediately reminded of the golden-mean rectangle. I think that's what it is called, where subsequent rectangles, each rotated 90 degrees and then scaled to fit within the previous one, delineate a curve. I believe it's related to that irrational number, Phi. Perhaps they even begin to exemplify the general positions we hold, Davidov's being relatively discrete and mine continuous (both valid points of view, probably). I wouldn't be surprised if there were a symmetrical equation that could describe the transition from one to the other and back again. Oh well.
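The property being described, each nested rectangle keeping the same proportion, is exactly what defines Phi, and the connection to that spiral of rotated rectangles can be sketched numerically (my own illustration, not from the thread):

```python
# The golden ratio Phi and the nested-rectangle property described above.
phi = (1 + 5 ** 0.5) / 2  # Phi = 1.6180339887...

# A Phi-by-1 rectangle minus a 1-by-1 square leaves a 1-by-(Phi - 1)
# rectangle; its long/short ratio is 1 / (Phi - 1), which is Phi again.
# That self-similarity is why the rotated, scaled rectangles trace a spiral.
assert abs(1 / (phi - 1) - phi) < 1e-12

# Ratios of consecutive Fibonacci numbers converge to the same Phi,
# which is why grids of squares with Fibonacci sides approximate it.
a, b = 1, 1
for _ in range(30):
    a, b = b, a + b
print(b / a)  # approaches 1.6180339887...
```

The first assertion is the defining equation of Phi (Phi² = Phi + 1) in disguise; the Fibonacci loop just shows the same number emerging from whole-number approximations.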

I'm completely in agreement with you about uploading. Could be a useful trick to have in one's bag of toys.

I was kind of hoping I was working from facts to theory and not vice versa. "Inductive imagination," now that's an interesting concept. [roll] It's possible that I delude myself, ;) .

Okay, I figured we'd need to address this mind/body dichotomy stuff. I don't believe in a "physical" world or level or whatever. As Bucky Fuller stated it, everything is metaphysical. This is because all information is brought to us as symbolic representation of what appears to be. You know, if you consider the space between molecules, atoms, and their constituent particles, you come to the realization that things are actually more space than substance. It is our perceiving relationship, the nature of our own beings, that gives that space substance.

#14 Davidov

  • Guest
  • 20 posts
  • 0
  • Location:Georgia (USA)

Posted 12 September 2002 - 04:50 PM

I'm completely in agreement with you about uploading.  Could be a useful trick to have in one's bag of toys.


Not only useful; I hope, and will try very much, to bring this technology to fruition. Sometime after the Singularity, I guess.

I was kind of hoping I was working from facts to theory and not vice versa.  "Inductive imagination," now that's an interesting concept. [roll]


;) . I hypothesize that inductive reasoning is an illusion created by complex deductive reasoning.

A good example is when I was reading in Eliezer's CFAI (What is Friendly AI?) about the reasoning behind inventing the wheel. The "inductive" part was where he mentioned an individual thinking of logs rolling down a hill. The logs gave the person the concept of the wheel. In effect, the individual didn't really inductively reason such; it was just a freedom of thought he/she hadn't understood before while trying to invent the wheel.

It's possible that I delude myself,  :) .


We all have that problem lol

Okay, I figured we'd need to address this mind/body dichotomy stuff.  I don't believe in a "physical" world or level or whatever.  As Bucky Fuller stated it, everything is metaphysical.  This is because all information is brought as symbolic representation of what appears to be.  You know, if you consider the space between molecules, atoms and their constituent particles, you come to the realization that things are actually more space than substance.  It is our perceiving relationship, the nature of our own beings that gives that space substance.


[mellow] ...

You don't have to "believe" in this physical world, but it might help to try to work with the 'apparent' laws inherent within it, particularly to create sentience within it (for example, FAI).

About a month ago I came to realize everything can be metaphysical. In other words, all possibilities of existence exist, even ones we can't conceive of. So in "another" universe, I'm gay. In another I'm bisexual. In another I'm female! In yet another I'm typing just an inch differently, but on the "same" computer.

Right now, though, having such thoughts on reality is totally futile. Working with the hazy framework we understand from transhumanist ideals may be the only way to achieve the Singularity and beyond.

Do you not assume awareness can be an emergent property of highly complex physical systems, such as brains?

#15 Chip

  • Guest
  • 387 posts
  • 0

Posted 12 September 2002 - 06:47 PM

I think we are in basic agreement on most concepts. Sounds like you are kind of in denial of the possibility of inductive reasoning. It does have a distinct and explicit definition, as does deductive reasoning. I agree that the invention of the wheel most likely occurred using deduction. I'm enjoying reading the text of http://philosophy.wi...20/notes_1.html; for example, "Tip: Defined terms must be used as defined. You can’t use the term differently just because you don’t agree with the definition." I find that kind of funny: after the text went to great lengths to explain good and bad conclusions, the author threw in that sentence, which I sometimes wish were true but often is not. lol Otherwise I'm finding the page to be quite informative. Apparently inductive reasoning has a bad reputation, perhaps because it is more complex than deduction and has been used more often to hide invalid arguments and misconstrued conclusions. The author suggests using a different term, "ampliative inference." He further states, "To understand empirical science we need to understand ampliative inference."
I did not prove the existence of free will with complete certainty. I just tried to arrange some valid premises that imply there is a better than 50-50 chance that free will exists. I find there is reason to believe that this is the most we can hope for.

Apparently we seem to have a slightly different definition of metaphysical. I'm glad to see that we seem to agree that an "anything goes" universe renders things futile, meaningless. See why possibly our free will chose to have natural laws? It gives meaning and direction to life.

I think maybe I see another issue here: FAI. I'm not opposed to the development of artificial intelligence. I do believe there is other, more pressing science that needs greater attention first. Consider that most human labor and resources go into the production of weaponry, relatively non-friendly technology. We need to figure out how to turn that around before we go messing with some of the most powerful possibilities we can create, or we could very well end up at the whim of non-friendly AI. If you ever get a chance to read Buckminster Fuller's Utopia or Oblivion, I highly recommend it as a primer on sociology. In the same way, I wonder about the drive towards the Singularity. If we don't destroy ourselves first, it will probably happen. I suggest we try to solve the problem of our inability to collectively govern ourselves for our own good first, or the Singularity might end up thinking it would be a nice thing to rid the universe of this plague called humanity, which may have caused the greatest extinction of species that ever hit the planet.

To try to bring this back to the topic at hand: if we're to have the powers of supreme beings, with the Singularity a servant to us rather than strictly vice versa, then that is something we will have to actively pursue in accordance with these natural laws that abound. In order to have free will, we have to make some exhaustively informed, well-thought-out, and highly reasonable, i.e. eclectic, choices.
In order to be free, we need to limit some freedoms.

Sure, "awareness can be an emergent property of highly complex physical systems, such as brains", but I can say the same thing about a relatively non-complex element. Without iron in your blood, oxygen would not get to the brain. Does this mean that iron is necessary for awareness? For us humans, yes, and oxygen too. Where should one stop? What about the elements within amino acids? Can we say there is awareness without basic proteins? Can you begin to see that subscribing to a metaphysical view of all the universe and its parts does not necessitate that all things are possible and happen? Free will means that not only can we get what we truly want, but we also don't get what we truly don't want.

Phew, got to take a break for a while and do some pressing work.

#16 Davidov

  • Guest
  • 20 posts
  • 0
  • Location:Georgia (USA)

Posted 12 September 2002 - 07:15 PM


I think we are in basic agreement of most concepts.  Sounds like you are kind of in denial of the possibility of inductive reasoning.  It does have a distinct and explicit definition as does deductive reasoning.  I agree that invention of the wheel most likely occurred using deduction.


Before I mentioned that, would you have claimed that inventing the wheel was inductive?

Otherwise I'm finding the page to be quite informative.  Apparently inductive reasoning has a bad reputation.  Perhaps because it is more complex than deduction and has been used more often to hide invalid arguments and misconstrued conclusions.



Or PERHAPS it is a form of free deduction: deduction that logically works outside the supposed system of rules to come to different conclusions (or confusions [ggg]).

Is that what you think inductive reasoning is?

I did not prove the existence of free will to complete certainty.  I just tried to arrange some valid premises that infer that there is a better than 50-50 chance that free will exists.  I find there is reason to believe that this is the most we can hope for.


Coolness.

Apparently we seem to have a slightly different definition of metaphysical.  I'm glad to see that we seem to agree that an "anything goes" universe renders things futile, meaningless.  See why possibly our free will chose to have natural laws?  It gives meaning and direction to life.



That is very true. I think we were arguing a lot without realizing that our definitions were quite similar... ;))

I think maybe I see another issue here, FAI.  I'm not opposed to the development of artificial intelligence.  I do believe there is other more pressing science that needs greater attention first.  Consider that most human labor and resources go into the production of weaponry, relatively non-friendly technology.  We need to figure out how to turn that around before we go messing with some of the most powerful possibilities we can create or we could very well end up at the whim of non friendly AI.  If you ever get a chance to read Buckminster Fuller's Utopia or Oblivion, I highly recommend it as a primer on sociology.  In the same way I wonder of the drive towards the singularity.  If we don't destroy ourselves first it will probably happen.  I suggest we try to solve the problem of our inability to collectively govern ourselves for our own good first, or the singularity might end up thinking it would be a nice thing to do to rid the universe of this plague called humanity that may have caused the greatest extinction of species that ever hit the planet.


There are existential risks to humanity that are reasons for creating FAI as quickly as possible. Don't ramble on about how an AI could get pissy with us until you've got a clear understanding of Eliezer's FAI. Click here for the full explanation (lengthy)

To try to bring this back to the topic at hand, if we're to have the powers of supreme beings, with singularity a servant to us rather than strictly vice versa then that will be something we will have to actively pursue in accordance with these natural laws that abound.  In order to have free will, we have to make some exhaustively informed, well thought out and highly reasonable i.e. eclectic choices.
In order to be free, we need to limit some freedoms.


I read the end of your response first, and I didn't know what you meant by free will! I agree with YOUR definition of it, now that it has been clarified.

Sure, "awareness can be an emergent property of highly complex physical systems, such as brains" but I can say the same thing about a relatively non complex element.  Without iron in your blood, oxygen would not get to the brain.  Does this mean that iron is necessary for awareness?  For us humans, yes, oxygen too.  Where should one stop?  What about the elements within amino acids?  Can we say there is awareness without basic proteins?  Can you begin to see that ascribing to a metaphysical view of all universe and its parts does not necessitate that all things are possible and happen.


Of course, it doesn't necessitate it at all. Particularly in view that there's not too much evidence to back up multiverse theory.

Free will means that not only can we get what we truly want but we also don't get what we truly don't want.

Phew, got to take a break for a while and do some pressing work.


I wish you would've told me that was your conception of free will so succinctly. We've probably wasted our time a little here.

Though I still think inductive reasoning is just advanced deductive reasoning lol

#17 Chip

  • Guest
  • 387 posts
  • 0

Posted 14 September 2002 - 10:36 AM

“Before I mentioned that, would you have provided that inventing the wheel was inductive?”

No.

“Or PERHAPS it is a form of free deduction; deduction that logically works outside the supposed system of rules to come to different conclusions (or confusions). Is that what you think inductive reasoning is?”

No. Now I begin to wonder if you are just a bit wary of any logic at all. Deductive and inductive logic are two different phenomena. For example, for the exploration of my social theory at http://www.ergodicity.org I attempt to identify the characteristics of society that are analogous to generalized understandings in the fields of information theory, general systems theory and the characteristics of dynamic systems modeling as my deductive approach. In short, presently known science leads to a deduction that there is something to my idea. I have also collected the statistics implied by natural history, the demographics deduced from archaeology, and the population records that humanity has preserved, to begin to create a changing, directed-vector, stratified-hierarchy description of social evolution: how our social experiments have changed over time. By looking at the trends this discloses, I seek to see if my social theory might fit the expectations that result from ampliative inference on this data. My understanding is that any valid theory must have a decent deductive and inductive approach.

When some early member of the human family rolled a log down a hill to his camp for firewood, did he realize that he had invented the wheel? Deduction appears to be looking at things as they are and using that knowledge to pursue certain goals. Induction appears to be a process of looking at how things have happened and guessing what will occur. I postulated that nature has given me free will. I expect that I will have it tomorrow.

In that last post I state, “There is a better than 50-50 chance that free will exists.” In my first post here, I state, “I believe in 100% total free will.” Belief is a personal thing that includes opinion and often goes at least one step beyond logic. I have to believe it! The only way I can justify taking another breath is if I believe it is meaningful. No free will is a belief that says nothing matters and “anything goes.” It is the justification for mayhem. Got a quarrel with someone? Don’t seek understanding so you can avoid it in the future, instead, kill the bastard! This is justified if there is no free will. If you are going to have to die someday anyways then why not make enemies?

Okay, I was in a lengthy and detailed discussion with a self proclaimed extropian who is a member of an institute that is pushing for the development of nanotechnology. He looked at my web site and said we don’t need or want any governing system. I gave up the conversation after many emails back and forth because he was obviously sure that nanotechnology was the technology that would save humanity from itself; nothing else really mattered to him. I should have asked him if he was willing to give up obeying traffic lights or if he wanted to remove the float valve from his toilet or thermostat from his wall, being examples of true governing systems we use. I did not look yet at your link to data of FAI, but I think I can begin to guess what is there, allusion to the idea that intelligence would of necessity be benevolent, that us humans couldn’t begin to handle the information explosion without the singularity happening and brought to fruition. From the strong response concerning my mention that FAI should not be the major science we seek to develop to save us, but rather sociology, I gather that this is a sacred cow for you, just as nanotechnology was a sacred cow to this other guy. I’m sorry but I can’t see ignoring the science of how we make and apply technology (which is sociology) as being subordinate to the actual creation of any fantastic technological magic bullet. The benevolent application of the singularity, of nanotechnology, of any technology, will be dependent on how and why we make our tools. That “how and why” is sociology. That is the science anyone truly wishing to get beyond the rhetoric and become an honest-to-goodness transhumanist will have to face.

Inductive reasoning is advanced deductive reasoning in a way but they are mutually exclusive concepts. We use them both in our day-to-day activities. Without either we would fail to function.

You state that we may have wasted some time here. That isn't really uncommon for chat rooms. [woot]
[woot] [woot] [woot]

#18 Lazarus Long

  • Life Member, Guardian
  • 8,090 posts
  • 237
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 14 September 2002 - 01:31 PM

The application of the dialectic shows that the synthetic integration of deductive and inductive reasoning produces the alternative method of seductive reasoning. It also hopefully produces other options.

Seductive reasoning is an element of human psychological function described by memetics. It is the process by which we convince; "proof" in a truly objective manner is not specifically required.

I suggest that we are also looking for a synthetic alternative for integrating deductive and inductive reasoning that is qualitatively more objective but the issue of integrating the pragmatic reality of subjective awareness into the reasoning process can't be logically overcome, it must be subsumed. Explained in the fashion of mathematical paradox.

If Bill Hay is still at the UW Philosophy department tell him that a former student sends well wishes from the ether. You might want to introduce him to this site. I wish Marcus Singer were here at times as well.

I would also suggest that perhaps some of you that are studying philosophy seriously (or perhaps humorously) could suggest that a review of arguments made in this forum constitute a modern equivalent of Platonic Dialogues that could be utilized to encourage participation among students and as a means of focusing attention on salient philosophical issues while facilitating the sifting and winnowing of "chaff".

#19 Davidov

  • Guest
  • 20 posts
  • 0
  • Location:Georgia (USA)

Posted 14 September 2002 - 06:38 PM

I think you've made much more sense to me this time - were you not in a hurry this time? B)

When some early member of the human family rolled a log down a hill to his camp for firewood, did he realize that he had invented the wheel?  Deduction appears to be looking at things as they are and using that knowledge to pursue certain goals.  Induction appears to be a process of looking at how things have happened and guessing what will occur.  


Okay, I'll give the two methods different names, but I think they both have the same underlying reasoning methods, despite some external differences.

In that last post I state, “There is a better than 50-50 chance that free will exists.”  In my first post here, I state, “I believe in 100% total free will.”  Belief is a personal thing that includes opinion and often goes at least one step beyond logic.  I have to believe it!  The only way I can justify taking another breath is if I believe it is meaningful.  No free will is a belief that says nothing matters and “anything goes.”  It is the justification for mayhem.  Got a quarrel with someone?  Don’t seek understanding so you can avoid it in the future, instead, kill the bastard!  This is justified if there is no free will.  If you are going to have to die someday anyways then why not make enemies?


I don't plan on dying. ;) If I had understood your conception of free will instead of just blindly assuming you had the same definition I had, I wouldn't have argued with you about it.

I did not look yet at your link to data of FAI, but I think I can begin to guess what is there, allusion to the idea that intelligence would of necessity be benevolent, that us humans couldn’t begin to handle the information explosion without the singularity happening and brought to fruition.  From the strong response concerning my mention that FAI should not be the major science we seek to develop to save us, but rather sociology, I gather that this is a sacred cow for you, just as nanotechnology was a sacred cow to this other guy.


Nanotechnology in the hands of slow-thinking beings like humans is much more dangerous than in the hands of a fast-thinkin' superintelligent being.

There's a certain "race" to make sure that FAI becomes available before nanotechnology does. It's an existential risk. Imagine nanotechnology wielded by us instead of an SI (superintelligent) being.

I sure damn hope the "sacred cow" worshippers of nanotechnology don't advance that far. [hmm]

I’m sorry but I can’t see ignoring the science of how we make and apply technology (which is sociology) as being subordinate to the actual creation of any fantastic technological magic bullet.  The benevolent application of the singularity, of nanotechnology, of any technology, will be dependent on how and why we make our tools.  That “how and why” is sociology.  That is the science anyone truly wishing to get beyond the rhetoric and become an honest-to-goodness transhumanist will have to face.  


There are people quite dedicated to the prospect of FAI and similar AI projects. Even if I totally agreed with you it would be futile, because they're not going to stop for me or you. Science and technological progress just doesn't stop on the dime of a person's opinion. So instead of idly standing by or protesting about how unready we are, I'm going to help my beloved "sacred cow" the best I can.

I don't believe that ultratechnology known as FAI is the key to life, the universe, and the greatest mystery of all: Me ( [roll] ). Simply put, though, it's the best shot I think we've got. The other two viable ultratechnologies, uploading and nanotechnology, are much farther off, and much more untrustworthy at that. I'm going to do what I can to make sure FAI reaches maturation before either of those two do.

If you can revolutionize society for the better, without *really* hindering the acceleration of the Singularity, go for it!

Inductive reasoning is advanced deductive reasoning in a way but they are mutually exclusive concepts.  We use them both in our day-to-day activities.  Without either we would fail to function.  


Alrighty.

You state that we may have wasted some time here.  That isn't really uncommon for chat rooms.   [woot]
[woot]  [woot]  [woot]


No! I cannot accept this! Oh you're right. :))

#20 Chip

  • Guest
  • 387 posts
  • 0

Posted 16 September 2002 - 08:09 PM

“were you not in a hurry this time?”

You have definitely got me thinking and being more careful.

“Nanotechnology in the hands of slow-thinking beings like humans is much more dangerous than in the hands of a fast-thinkin' superintelligent being.

There's a certain "race" to make sure that FAI becomes available before nanotechnology does. It's an existential risk. Imagine nanotechnology wielded by us instead of an SI (superintelligent) being.”


What do you mean by “existential risk”? I hope that the SI will be us, a tool that we wield for the good of humanity and of life in general. If it’s something toastally lol external to ourselves, we might just have to say “bye bye.”

“If you can revolutionize society for the better, without *really* hindering the acceleration of the Singularity, go for it!”

Well, sometimes it’s kind of lonely wanting the super technologies of our future to be designed and kept to benevolent ends, but, I can’t think of anything better to do. I don’t seem to have the option to use the justification that you present for pursuing the singularity, i.e. someone is going to do it so it might as well be me. So far, I’ve been pretty much alone in my desires to facilitate a rational society. I sincerely hope, and I know it couldn't be otherwise if it’s going to happen, that someday others will want the same.

#21 Davidov

  • Guest
  • 20 posts
  • 0
  • Location:Georgia (USA)

Posted 16 September 2002 - 10:42 PM

“were you not in a hurry this time?”

You have definitely got me thinking and being more careful.


Thanks :D

What do you mean by “existential risk”?


An existential risk/threat is a risk/threat that can wipe out life on Earth - in particular all human life.

 I hope that the SI will be us, a tool that we wield for the good of humanity and of life in general.  If it’s something toastally lol external to ourselves, we might just have to say “bye bye.”


There's still a crack in this foundation of thought: You're hoping people are going to stop researching for you. I know a few AI researchers (not too personally), and they're planning on making this external entity, despite an irrational society, etc.

Well, sometimes it’s kind of lonely wanting the super technologies of our future to be designed and kept to benevolent ends, but, I can’t think of anything better to do.


FAI is meant to be as benevolent as possible for an ultratechnology.

I don’t seem to have the option to use the justification that you present for pursuing the singularity, i.e. someone is going to do it so it might as well be me.  So far, I’ve been kept pretty much alone on my desires to facilitate a rational society.  I sincerely hope, and I know it couldn't be otherwise if it’s going to happen, that someday others will want the same.


Good luck. ;)

#22 Chip

  • Guest
  • 387 posts
  • 0

Posted 17 September 2002 - 04:12 AM

“An existential risk/threat is a risk/threat that can wipe out life on Earth - in particular all human life.”

Your use of the words “existential risk/threat” is interesting. Is this terminology that others use in the manner you describe? I find the philosophical concept of existentialism to be an abhorrent malady of the continued alienation our failing social experiments facilitate. I like to have my recognition of evil in the universe to be clear and unfettered so that others can help me fine-tune my perspective and so that I may better address how to combat such evil. Might “existential risk/threat” be better phrased as “existence risk/threat?”

“There's still a crack in this foundation of thought: You're hoping people are going to stop researching for you. I know a few AI researchers (not too personally), and they're planning on making this external entity, despite a irrational society, etc.”

Do you honestly believe that I’m just trying to get my way? If so, I would greatly appreciate your sharing how you come to this. My hope is that scientists will come to have some forethought of their own, perhaps with the help of a social system that works to recognize and avoid dangers to existence. Sure, some will pursue any science for the sake of science though it may have the potential to put an end to science by destroying ourselves, truly a maniacal obsession. If I convince anyone that we need to be quite eclectic in the tasks we choose to work on, great, but I hope my convincing is done from a place of sharing knowledge rather than just ego. Interesting that we both use imminent danger as justification for two stances. Perhaps this is because our stances are not directly opposing. For those who may want no singularity to ever happen, I would say your argument is valid but both of those polarities seem to come from a non-contextual understanding. I find, from a point of view that we are all in this together and that our activities cannot help but have repercussions that affect others, that the development of the singularity must happen with great care and effort to ensure that this potential is benevolent. I don’t think you disagree with this. I’m just asking you to make the leap to the understanding that our society right now allows the creation of technology for ends that are both nonsensical and destructive; that how and why we make any technology should be the prime consideration. Saying that sociology should be of higher priority than the singularity does not mean that the singularity shouldn’t be of high priority also. I just hope we can avoid the “damn the torpedoes, full speed ahead” perspective. If we can dodge and avoid dangers that we face, I say, let it be. No blinders for this horse.

Yes, might I have good luck and may it rub off on you too, thank you. I’m not a superstitious person, me thinks, so I can share my birthday wishes without compromising their possible manifestation. Since about the age of thirteen, my wish before blowing out the candles has been “I wish the world had all happy people.” Though I really haven’t had a birthday cake for myself for more than thirty years, I still have this wish. In the final analysis, I find altruism to be self-serving. lol


#23 Davidov

  • Guest
  • 20 posts
  • 0
  • Location:Georgia (USA)

Posted 17 September 2002 - 06:26 AM

Your use of the words “existential risk/threat” is interesting.  Is this terminology that others use in the manner you describe?


I don't think MANY people use it, just people in philosophical Singularitarian discussions and essays.

I find the philosophical concept of existentialism to be an abhorrent malady of the continued alienation our failing social experiments facilitate.  I like to have my recognition of evil in the universe to be clear and unfettered so that others can help me fine-tune my perspective and so that I may better address how to combat such evil.


Yeah, existential risks and their causes are usually considered "bad" (can't think of a good one! ;)) ).

Might “existential risk/threat” be better phrased as “existence risk/threat?”


You could, but I'm thinking it'd just cause more confusion than anything. I know why you think it should be called that, though (and it does make sense to do so, but society, as I'm sure you're aware, doesn't make too much sense a lot of the time, either).

Do you honestly believe that I’m just trying to get my way?  If so, I would greatly appreciate your sharing how you come to this.


I couldn't COMPLETELY infer such, but when you speak of rationalizing society and complain about our full-speed-torpedoes-be-damned ideology, such does become slightly inferable and credible (whether it is true or not).

My hope is that scientists will come to have some forethought of their own, perhaps with the help of a social system that works to recognize and avoid dangers to existence.  Sure, some will pursue any science for the sake of science though it may have the potential to put an end to science by destroying ourselves, truly a maniacal obsession.


I have but one question: Which do you think is more dangerous... nanotechnology, uploading technology, or FAI?

Interesting that we both use imminent danger as justification for two stances.  Perhaps this is because our stances are not directly opposing.  For those who may want no singularity to ever happen, I would say your argument is valid but both of those polarities seem to come from a non-contextual understanding.  I find, from a point of view that we are all in this together and that our activities cannot help but have repercussions that affect others, that the development of the singularity must happen with great care and effort to ensure that this potential is benevolent.  I don’t think you disagree with this.  I’m just asking you to make the leap to the understanding that our society right now allows the creation of technology for ends that are both nonsensical and destructive; that how and why we make any technology should be the prime consideration.


FAI, in particular, is meant to be as benevolent as possible for humanity. To say that FAI is just a very ambitious general intelligence project is simply to underestimate SIAI's (Singularity Institute for Artificial Intelligence) goals.

Saying that sociology should be of higher priority than the singularity does not mean that the singularity shouldn’t be of high priority also.  I just hope we can avoid the “damn the torpedoes, full speed ahead” perspective.  If we can dodge and avoid dangers that we face, I say, let it be.  No blinders for this horse.


Sorry, damn the torpedoes! As long as there are greater risks than a gone-crazy "Friendly" AI, such as nuclear war, biological war, nanotechnological disasters and wars, uploaded individuals running amok through the Internet, or something unforeseen, I plan on helping in any way I can to create a world-revolutionizing FAI (and I, too, will be rational about the implications of such before advocating it).

To "let it be" is to let the dangers stare you in the face. Call me crazy, call me irrational, but I feel that trying to take the time to think about a Terminator AI scenario, or something similar, is just people's general bias against Strong AI (AI more intelligent than humans). Aggressive behavior is inherent within human beings as a complex evolved trait. An AI would have no instinctual reason to retaliate, or initiate, violence against humans. That's not to say one wouldn't, but humans today are probably more prone to violence than proposed AIs are.

If your hopes are fulfilled anywhere in the scientific community, it is in the proponents of FAI. We're here to think of what such a possibly vast intelligence would do. If you would, just take the time to read an introductory article on FAI: http://www.singinst....dly/whatis.html. It's not TOO long, but you'll see that the implications of FAI are considered extensively.

You might also like Staring Into the Singularity, by Eliezer Yudkowsky: http://sysopmind.com/singularity.html

Yes, might I have good luck and may it rub off on you too, thank you.  I’m not a superstitious person, me thinks, so I can share my birthday wishes without compromising their possible manifestation.  Since about the age of thirteen, my wish before blowing out the candles has been “I wish the world had all happy people.”  Though I really haven’t had a birthday cake for myself for more than thirty years, I still have this wish.  In the final analysis, I find altruism to be self-serving. lol


Funny, Eliezer Yudkowsky, author of "Creating Friendly AI", would probably agree with you. :)

#24 Chip

  • Guest
  • 387 posts
  • 0

Posted 17 September 2002 - 12:01 PM

I went and looked up the definition of existential and found that it has both meanings, the one I thought it alluded to and the one you are explaining, so I can’t fault those who use the word as you define it, but I can certainly state that it is ambiguous and does not help to clarify situations. But then, if we are somewhere where people can gain recognition for championing a cause, clarity of thought might not be all that important. In some cases, using a term of double meaning can be a method of obfuscation. If one sees that anyone challenging some postulated idea must be an enemy, then it can be good to have terms that add confusion to the expression of an idea. It makes the idea easier to defend, as any challenger must figure out what they are challenging first before they can get anywhere, sort of like peeling back the layers of an onion to find “?”.

AH … “Singularity Institute for Artificial Intelligence” Oh boy. Sounds like this group has already made up their minds. Doesn’t sound like a good place to go to find an unbiased view. I bet there are people who are actually getting some of their livelihood behind the cause. Instances where people will defend the idea more for the sake of their personal wealth than rational merit probably occur there. Of course we do know that the major funding for AI occurs in military circles, but then, maybe the SIAI can come up with something that the military can use. Wouldn’t be the first time a well-meaning group came up with knowledge that was used to kill.

What is going on here? First of all I think the term "artificial" is basically problematic. Even artificial lighting is real lighting. Seems that the meaning is that if humanity makes something it is not natural. Can this possibly be a fallacious, biased perspective that humans share? We've named our species Homo sapiens, the smart ones. We have proclaimed that we are the wise guys. Might it be a case of bravado? The use of that “A” word brings up visions of patents, marketing and moneymaking, which, next to making weaponry and war, is the major force to which we entrust teleological growth, and my beef is that this is not sufficient for the development of FI. In other words FAI is a self-contradictory impossibility. Intelligence cannot be both artificial and friendly. I consider intelligence to be an information consideration, consolidation and synthesis process. If I use beads on strings, or knots on ropes, or even words for that matter, are they artificial? EVERYTHING IS REAL! Computers are natural phenomena. An existential component is here, and I mean as in existentialism and not existence. It is from seeing oneself as not a part of universe that one can find use of the word “artificial” as acceptable. So the alienated, the inherently out-of-touch are proclaiming the worthiness of a cause. Oh boy. No, I don’t think I’m underestimating SIAI. I think I’m nailing them on their little pointed heads lol. It doesn’t really matter to me if I’m getting through to you. Getting through to me is my intent and I welcome the opportunity. Okay, so I clarify my position, I am for the development of FI, not FAI.

“I have but one question: Which do you think is more dangerous... nanotechnology, uploading technology, or FAI?”

I think they are all as dangerous as they are promising, just like any tool, though I wonder about using that “A” symbol. Might mean that developers of FAI may be more prone to self-delusion.

Okay, I’ll study your links now. Thanks again for the chance to clarify my thoughts.

#25 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 17 September 2002 - 12:29 PM

Oh boy. Sounds like this group has already made up their minds. Doesn’t sound like a good place to go to find an unbiased view.


Well, you're probably right about most having made up their minds.. however, from personal experience, the group is open to reasonable debate on the topic. A research fellow with SIAI, Eliezer Yudkowsky, is active in our online chat room on occasion.

BTW, Chip, you're more than welcome to join us in the chat room.

#26 Davidov

  • Guest
  • 20 posts
  • 0
  • Location:Georgia (USA)

Posted 17 September 2002 - 01:25 PM

Of course we do know that the major funding for AI occurs in military circles but then, maybe the SIAI can come up with something that the military can use.  Wouldn’t be the first time a well-meaning group came up with knowledge that was used to kill.


This is the OPPOSITE of SIAI's intentions.

It is from seeing oneself as not a part of universe that one can find use of the word “artificial” as acceptable.  So the alienated, the inherently out-of-touch are proclaiming the worthiness of a cause.  Oh boy.  No, I don’t think I’m underestimating SIAI.  I think I’m nailing them on their little pointed heads lol.  It doesn’t really matter to me if I’m getting through to you.  Getting through to me is my intent and I welcome the opportunity.  Okay, so I clarify my position, I am for the development of FI, not FAI.


I've used this reasoning, and basically understood that the words "natural" and "unnatural" are inherently defunct as humans become less anthropocentric. Though, for the mainstream, it only helps to clarify that "artificial" is a physically or conceptually tangible thing created by the thought (as hazy as thought can be defined in a materialist world...) of human beings.

I think they are all as dangerous as they are promising, just like any tool, though I wonder about using that “A” symbol.  Might mean that developers of FAI may be more prone to self-delusion.


Or more prone to providing for the masses. They need funding, you know :D

Well, I can't convince you myself any further that FAI is inherently less dangerous than the others. Those links are much better written than what I could post here.

Okay, I’ll study your links now.  Thanks again for the chance to clarify my thoughts.


Now? I wish you would've studied them BEFORE you replied [hmm]

#27 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 17 September 2002 - 03:31 PM

Intelligence cannot be both artificial and friendly


Hm, why not? If you're going to delve into the semantics of the word "artificial", then you can say that when Martin Luther King's parents copulated, they were "creating" an "artificial" child that was intelligent and friendly. In any case, the distinction between artificial and natural is meaningless in most contexts, and should probably be avoided.

#28 Psychodelirium

  • Guest Philosopher
  • 26 posts
  • 0

Posted 17 September 2002 - 04:59 PM

In the bizarro world of Aristotle and the medievals, man-made and natural things were diametric opposites. For us, however, this need not be so. Our technology is 'natural', because we are, and 'artificial', because we constructed it. Or would one argue that Nature abhors beaver dams, as well?

"Artificial Intelligence" is intelligence designed by another intelligence. It's a rather simple concept, really. Not an iota of any existential alienation that I can see.

#29 Chip

  • Guest
  • 387 posts
  • 0

Posted 17 September 2002 - 06:14 PM

Ooh wee! Granted that the pursuit of this topic has gone astray, and I expected Bruce to, at the least, gently steer things towards their respective categories, I do get the feeling that I tread on sacred ground here and may be somewhat of an infidel, and as we all know from the fascism that seems to be engulfing the planet, dissent is an act of terrorism.

Davidov said, "This is the OPPOSITE intentions of SIAI."

Exactly! With all the glorious pontification about the good intentions of SIAI, the result could be 180 degrees opposed to those stated intentions. Must we allow militarism, nationalism, and the drive to amass money to be the deciding forces? I say we address that question first. This is what I suggest. Is that difficult to understand? I dare say that I believe this is unarguable using logic and rational premise BUT

In the post before, you took my close of a sentence with "let it be" to be indicative of a call for a laissez-faire attitude, when I was referring to our abilities to foresee and avoid calamity. I found that to be nitpicking at a possible double definition for the use of the phrase.

Look, I think I’ve just about spent my cojones out at this site. lol I think the jabberwocky rises to the top in chat rooms, and I don’t desire to argue with those who think the point is argument.

Davidov said: “I feel that trying to take the time to think about a Terminator AI scenario, or something similar, is just people's general bias to Strong AI (AI with the capacity more intelligent than humans).”

Interesting that you should use the word “bias” as meaning having an opinion against FAI. Ah, there’s that old argument against democracy, “just people’s general bias.” Guess we can trust things to experts, right?

Hey, seems to me that a lot of your understanding is not taking into account that there is AI on the planet right now and most likely, its major work is in delivering firepower.

Davidov said, “Though, for the mainstream, it only helps to clarify that "artificial" is a physically or conceptually tangible thing created by the thought (as hazy as thought can be defined in a materialist world...) of human beings."

Okay, define what is meant by “artificial” as is used in “Friendly Artificial Intelligence” instead of beating around the bush with a comprehensive haziness that could be used to justify or denounce anything. Or better yet, describe FAI in different words. If it is a valid concept, then there’s got to be another way to describe it, what with all the words we have and all the metaphors etc. Heck, you ought to know by now that reference to there being a “materialist world” only conveys that there are many who are duped into a grab-as-much-as-you-can behavior that just does not reflect the reality that nothing is material, in my eyes. You can take the phrase “nothing is material” and use its double meaning too if you want to deliver another blow to my reasoning, though it doesn’t work very often with me. I know that there are people out there who ride the roller coaster of spin, but it’s hard enough for me just to consider the spin of our planet’s revolving.

Davidov said “Or more prone to providing to the masses. They need funding you know”

Tell me, why isn’t SIAI SIFAI? Why do they keep the friendly out of there? They get no donations from me. If anybody wants to address the possible technology that will substantially support the development of FI, by all means, send some tokens my way.

MichaelAnissimov, thank you for pointing out how outrageous the use of the word “artificial” can be. As far as the words being meaningless in most contexts, I suggest you consider your circles of discourse to be a bit lacking. I know what I mean when I use the words, but if your comrades don’t, does that make the words meaningless? Come now. How about making an attempt at some communication rather than just a baseless snide remark? You’re not into picking at the semantic disparities of a platform? Why not, especially if that platform could lead to disaster (nay, I should say that it is very likely that AI has already led to disastrous consequences for some)?

Psychodelirium said, "'Artificial Intelligence' is intelligence designed by another intelligence. It's a rather simple concept, really. Not an iota of any existential alienation that I can see.”

Artificial intelligence won’t be an adjunct or supplement to the designing intelligence? It implies two separate intelligences? You don’t see how this might be construed as a striving for an alienated and existential (as in existentialism) condition where a creator and its creation are divested of each other’s responsibilities? Is your name supposed to be taken seriously?

Well off track now, Bruce. I apologize. I do believe that the continuity of thought in the conversation with Davidov has helped me to see things more clearly, and I wanted to take advantage of the opportunity. Anybody agree? I will try to stay off the singularity subject in this chat area, but I suspect that more will be said in response to this post, and if I leave it alone, then the discussion becomes one-sided. As far as discussing things in the chat room regarding the singularity, I balk. I bet folks there have fine-tuned their debating procedures, and I will see lots more double entendre and obfuscation. Give me a couple of days, huh, please? I will look at the data and maybe participate in the other chat room, but this stuff takes a lot of my time and I’m not independently wealthy. If anybody wants to make some jabs, now might be good, as I am more and more hard pressed for time. There’s more than one way to win an argument, if that is the intention.


#30 Psychodelirium

  • Guest Philosopher
  • 26 posts
  • 0

Posted 17 September 2002 - 08:22 PM

Okay, define what is meant by “artificial” as is used in “Friendly Artificial Intelligence” instead of beating around the bush with a comprehensive haziness that could be used to justify or denounce anything.  Or better yet, describe FAI in different words.


Before this goes further, let me ask you if you are familiar with the actual branch of computer science that goes by the name of "Artificial Intelligence", since you seem to have the notion that we've made up the term on the spot. And if you're as dissatisfied with it as you're letting on, take it up with McCarthy (I think he was the one who coined "AI"), and not with us. We merely adopt the convention (not that I think there's anything wrong with it, in the proper context).

As for describing FAI in layman's terms, that's not especially challenging. It's a mind that humans design in such a manner as to benefit humans. "Artificial" = constructed by intelligence. What's the bloody confusion about? [hmm] Is it you, perhaps, who has fallen into the trap I mentioned above of aligning artificiality with "unnaturalness"?

Artificial intelligence won’t be an adjunct or supplement to the designing intelligence?  It implies two separate intelligences?


False dichotomy alert. [!] AI can be adjunct to the designing intelligence, but that doesn't mean it won't be intelligent. It all depends on how complex the AI is, and in the years to come there will be many different AIs, possessed of many different levels of complexity.

You don’t see how this might be construed as a striving for an alienated and existential (as in existentialism) condition where a creator and its creation are divested of each other’s responsibilities?


No? I do think you analyze overmuch, but if I were to engage in the analysis myself, I would likely reach the opposite conclusion. Creating a new and original kind of intellect would be more like an escape from the alleged existential condition of humanity.

Is your name supposed to be taken seriously?


Very much so.



