  LongeCity
              Advocacy & Research for Unlimited Lifespans





Singularity Poll: Goodness


10 replies to this topic

Poll: Singularity Poll: Goodness (21 member(s) have cast votes)

Singularity Poll: Goodness

  1. No - it spells our certain doom (0 votes [0.00%])

    Percentage of vote: 0.00%

  2. Depends on how it is created (14 votes [70.00%])

    Percentage of vote: 70.00%

  3. Absolutely! It's fabulous! (3 votes [15.00%])

    Percentage of vote: 15.00%

  4. It may result in the loss of the essence of humanity (3 votes [15.00%])

    Percentage of vote: 15.00%


#1 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 23 August 2002 - 07:48 AM


-from bjklein


Is the creation of smarter-than-human intelligence an overall "good thing"?

#2 Skycap

  • Guest
  • 2 posts
  • 0

Posted 25 August 2002 - 05:38 PM

MichaelAnissimov:
Is the creation of smarter-than-human intelligence an
overall "good thing"?

Skycap:
Not usually. Not in most universes. But I admit that
that information comes to me supernaturally. I have
a lot of supernatural help (I claim) and I can't even
get people to turn down their stereos.

So, it appears to me that it must require ENORMOUS
amounts of supernatural help to get through a
singularity safely.

And it is plausible that the singularity could be a disaster
because many SF stories have AI's that get out of control.

Examples of rules (sort of like a computer or algorithm)
out of control:
1 -- The government requires my girlfriend to live in a
residential care facility, which I consider ridiculous.
(Bureaucrats and judges mindlessly following rules.)
2 -- The government is supporting millions of drug addicts'
habits with its SSI programs.
3 -- Psych meds -- many people are taking them simply
because of "experts" and rules.

I have to leave.
I wonder if I should post this?

I should think about it some more.

Oh what the hell.


#3 Chip

  • Guest
  • 387 posts
  • 0

Posted 19 September 2002 - 11:52 AM

Hope the transition from the “Free Will” forum (http://www.imminst.o...t=20) does not cause too much difficulty. This is specifically in response to the post by MichaelAnissimov, Sep 18 2002, 09:32 PM at that other forum on “Free Will.”

I believe there is a danger in finding human nature to be faulty. It seems to me that human nature is incredibly awesome and good; we just lack the proper “mapping” situation that allows its best qualities to emerge while dampening the non-altruistic components. If we seek to create an artificial intelligence with a nature that is better than humans’, then we may be setting ourselves up to be out-evolved. This mapping or realization of human potentials seems largely determined by our social experiments, which have been woefully lacking but which have also made great strides in helping us, as statistics show that so far we have beaten Thomas Malthus’ predictions that we are destined to soil our own nest until we destroy ourselves. In this light we can see that developing machine intelligence is a subset of sociological science. Creation of super intelligence has been, is, and will be a factor of the social conditions under which we live. It is one component of sociology, not the other way around. Using the term “super intelligence” rather than “artificial intelligence” seems to bring things into more logical harmony, with easier and more pragmatic consideration of the entire scheme of things. Using the term “artificial” seems all too conducive to not understanding exactly what it is we have and are trying to create.

I wonder at the validity of the concept of “singularity” as understood in this discussion. I understand that the term “singularity” has a scientific definition used perhaps most predominantly in the study of quantum physics. I usually quickly suspect terms that are used without consistency, but it took me until this post to wonder at the use of this word. I take it that “singularity” as used here pertains to the creation of a separate intelligence, external and superior to human intelligence. We have the option of demanding that the same technology that is necessary for the creation of a non-human superior intelligence be used to augment human intelligence. I do not find it intelligent to forsake the creation of super human intelligence for some disconnected non-human entity or entities. This is much the same argument I’ve given my children and other kids who have come within my guidance, namely, of all of the things we can do, we might as well choose the ones that promise more fun and good times. In the long run, I do believe that our good nature will come to realize that altruism serves the self. It is for selfish reasons that I want humanity to be a success. As far as evolution being too slow, we have taken conscious effort to evolve ourselves, and the capacity to do so has only increased. Though this has mainly been in ways that are outside of our DNA-limited designs, we are now approaching the capacity to directly address our evolutionary programming, and not just for future generations but also for existing individuals. Best that we embrace this technology for the benefits it can bring rather than imply that human evolution is at a standstill or, essentially, a dead end. Evolution is not too slow. We are a component of evolution, and our self-improvement efforts have brought the ability to accelerate it. I do not find slow evolution to be a valid argument for the creation of intelligence that is disconnected from human intelligence.

Your last paragraph communicates confusion to me. First of all, if being a transhumanist means knowing the lingo and desiring certain things or having a special knowledge and certain faiths, then perhaps there are transhumanists existing now, but outside of a religious-type sectarian definition, I believe there are no transhumanists yet. I believe that the danger of developing AI is exactly the opposite of what you state, that they may not be “the game-theoretic equivalents of their creators.” In my eyes, humans were evolved to be “comprehensive anticipatory design scientists,” to use the words of Buckminster Fuller. We see that collectively we have enhanced and can accelerate the welfare of our own lives. Continuance and enhancement of the “game-theoretic” inclinations that we seem to have naturally could logically lead to our acting and being as life preservers and stewards in a grand way, solely for selfish reasons. Wouldn’t we want this for a super intelligence, be it us or some new entity? All of the great attributes you list for AI can be favorable abilities of ourselves if we choose to make them so; in fact, if we decide not to incorporate such abilities within ourselves, or at least have them immediately available to us and under our implicit control, then aren’t we taking the risk that AI could become just so many Frankenstein monsters? You can feel empathy for Frankenstein’s monster as a misunderstood and misunderstanding fictional entity, but do we want to take the chance that we will create some real monster of a more-powerful-than-human misunderstood and misunderstanding entity? Can we have intelligence without compassion? Certainly; lots of examples abound.

“existential escape isn't an impetus, at least as far as I can tell.”

Well, I would agree that it shouldn’t but creating intelligence that is other than ourselves would make us just another subcategory of universal intelligence, further removed from seeing ourselves as an integral and necessary part of all. It might not be an impetus but it could be a result that increases our floundering dysfunctional lack of purpose or worth. We avoid this if we seek to make ourselves the super intelligence we choose to create.

#4 MichaelAnissimov

  • Topic Starter
  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 24 September 2002 - 05:42 AM

I am finding that people here can't seem to bring themselves to address the more salient features of my posts pointing out the ridiculousness of the concept of Singularity.


Before I respond to your above post, can you point out the features that you consider salient? The concept of the Singularity is simply the concept that the world passes beyond human understanding when a greater-than-human intelligence is created. Google for "Vinge, Singularity", read his seminal essay, and tell me what you disagree with. Is that ok? If you point out what is ridiculous, very specifically, step by step, then I can promise a similarly clear and complete response.

I can't imagine but that you don't like this as your text contains some rather extreme superlatives about the singular importance of the concept of Singularity.


Can you give me a specific example? In my future posts I can eliminate all comments of this nature when discussing the Singularity with you, if you'd like. I'll wait to respond to your larger post.

I wonder if you consider yourself an authority on the subject of Singularity?


No. Most people I know don't consider me an authority on the subject of the Singularity, as far as I know, so I guess I'm not one. Maybe one day.

I know you have de facto authority in this forum as its initiator, so ultimately you have authority this way, but I think that was a rather arbitrary affair. I see no testing going on to determine who gets to head a forum.


As moderator of this specific forum (I don't really like the word "authority"; I'm here to moderate a topical forum and help steer the quality of its material to the best it can possibly be), I have a responsibility to remove material that's off topic. If you really, really would like your comic to be here, and think that it makes an important point not made in your text, then feel free to post it again. Incidentally, I never argued that a specific AI project would succeed, so how is your comic pertinent or even realistic in the context of our conversation? It's like you're arguing against a comment like "I think AI is really great and every AI project will succeed perfectly, no matter what", a comment that was never made.

Is it possible that the ultimate authority might depict Singularity as a conundrum?


Ultimate authority? What is this ultimate authority you speak of? :D

#5 Chip

  • Guest
  • 387 posts
  • 0

Posted 24 September 2002 - 10:44 AM

I am greatly appreciative of your courteous reply. I will do the search and reading you suggest as well as look for the sci-fi tale Vinge authored, perhaps more than just one?

From what I have learned from this web site and a little study of SIAI, I find that the concept of Singularity is in opposition to continuing and increasing respect for existing singularities, namely instances of human consciousness. I originally voted on this poll with the majority, that it depends on how Singularity is created. If I could, I would change my vote. As defined, I cannot see any way for Singularity to be manifest except as a disaster for humanity and life in general. The superior intelligence we create should be our own. “The concept of the Singularity is simply the concept that the world passes beyond human understanding when a greater-than-human intelligence is created.” I never want to see such a state of being. Anyone who seeks immortality probably does not want to spend that time in servitude to some other entity or, at the least, we do not want to lose our self-determination. We will embrace, desire and put great effort into enhancing our abilities to understand this world. We will not want to create any “wild card” that renders us unable to see our place in universe. We want to know more about where we came from, where we are and where we are going. Without our seeking and finding answers to the question “Why are we?” we are open to being sacrificed for reasons that do not make sense, that do not respect life and our prospects for immortality. We should not seek to render ourselves unconscious, unaware or not understanding if we are to seek the greatest and the best for our own singularities. The tools we have to increase human intelligence are in known and yet-to-be-discovered science. The very term “Singularity” trounces on science as it claims a special incomprehensible definition of the word. We should be working with words and concepts that respect utility and understanding. Working for ends that are defined as ultimately incomprehensible to ourselves is irrational.
We need to work to increase our ability to see and understand, our inherent capacity to be rational. That is what the information explosion demands. Luckily, it appears that this is also the path of creating the greatest possible sustainable freedom for humans, for you, for myself. We don’t want any force to coordinate and direct our lives for reasons we cannot fathom.

“The Singularity isn't religion - the Singularity is the *real* future, and it's going to profoundly effect us all whether we like it or not.”

“It's not necessarily supposed to give you a warm fuzzy feeling or perfect clarity at first, all Singularitarianism is is a group of people trying to accelerate and ensure the integrity of the biggest event on Earth since the rise of life.”


I could dig for more, but these jump out as conjecture that is offered as fact, extreme allegiance to a concept beyond reason, or, as I stated before, “extreme superlatives about the singular importance of the concept of Singularity.” You don’t have to go through the difficulty of trying to avoid such in the future for my sake. I might suggest you do so for your own credibility, but it is your choice. Maybe the only people you are trying or want to communicate with are those who hold these beliefs or perhaps could be swayed to share such conviction.

I hoped that comic spoke to something I haven’t communicated in the text, and in a way that was easier to see and understand rather than having to go through the time and effort of reading a great deal of text. It is possible that someone will claim someday that the Singularity has arrived. What if the next day evidence becomes known, perhaps in a sorry way, that Singularity was not met? The intelligence we created was not benevolent, was not intelligent, because it was too cold and heartless about human needs and wants. Should we just say that our technology was insufficient and with further work we could come up with another experiment to try? Could the damage of this pursuit be so much that we would want to stop that experimentation altogether? Will we have given the super non-human intelligence so much power that we will not be able to abort the experiment as its violent repercussions grow? Maybe it would have been better to replace AI with Singularity, but the comic is meant to be communicative, and the term Singularity seems defined as obscure, an exercise in obfuscation, something that cannot be pinned down so we can deny its possible failure and paint it as glorious and unavoidable; in short, evidence that changes to match the hypothesis. It requires a faith that is not of knowledge, just as with religions. As it is, I believe the layperson would interpret AI as concerning machine intelligence that is hoped to be more useful or better than human intelligence. Computers have beaten the best players of chess. AI has already shown it is the singular best chess player on the planet. Heck, it is the hope of every AI project that it be found as a singularity, an event that showed a better way to do things than to depend on human intelligence. This concept of Singularity seems to entail a great deal of denial of existing singularities. It seems nonsensical to me, yes, a conundrum, a nonexistent thing that only complicates the world rather than leading to greater understanding and freedom.

I’m sorry. I know this thing means a lot to you. It is all too common for humans to consider someone who does not share their convictions to be against them, to be an enemy. I wish you no ill. I’ll do more research as you suggest. I honestly believe I have your best interests at heart, mine too, as well as all human singularities, perhaps other conscious singularities too if we find that we are not alone as thinking life.

“Ultimate authority? What is this ultimate authority you speak of?”

Human understanding, science, the body of knowledge of known shared truths: this is the authority I speak of. If Singularity contradicts known science, as I believe it does, I would have to be made aware of how and why its exceptions and contradictions get around what we have already learned about how universe works. I guess if I was in some accident and lost a lot of my learning or ability to make cognizant choices, I might embrace the thing, but at present, it does not qualify as a distinct concept. It never ceases to amaze me how many embrace something they don’t understand, especially something whose defined nature is to be incomprehensible.

Let me address one more thing here that alludes to another post of yours elsewhere, namely something to the effect that academics and scholars use the term “Singularity.” At one time most, including scientists and academics and scholars, thought the world was flat. Is it the number and social status of people who hold an opinion that makes it truth? You can actually pay money to an institute in Washington D.C. to come up with experts that will stand behind virtually any claims. We humans are quite confused, and there are enough people that you can find lots of conflicting opinion. Science is a process of seeking functional hypotheses in accordance with known conditions, not of seeking evidence that fits our hypothesis. The onerous component of sound science is to find that something is not false, not to prove that it is true. Look for the contradictions first and foremost, not the agreements. Academics and scholars as well as Columbus could see lots of evidence that the world was flat, but Columbus also saw evidence to the contrary. It only takes one inconsistency to elicit questioning of majority opinion. Copernicus came up with a novel idea that appeared to explain things better than established doctrine, and later Galileo saw that moons orbit Jupiter. This was an exception to established “truth.” Should they have denied their own reasoning and observational abilities because the majority saw otherwise?

#6 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,364 posts
  • 2,000
  • Location:Wausau, WI

Posted 24 September 2002 - 04:45 PM

Copernicus came up with a novel idea that appeared to explain things better than established doctrine, and later Galileo saw that moons orbit Jupiter. This was an exception to established “truth.” Should they have denied their own reasoning and observational abilities because the majority saw otherwise?


This is precisely the point Transhumanists/Singularitarians/Extropians are trying to make. There are a growing number of technologists "using their own reasoning and observational abilities" to predict a post-biological evolution of life on earth. These people have studied trends in technology; they may have been major players in the development of faster processors, or better algorithms, etc. In short, they are very "in-tune" with the technological developments of the present day. They perceive a singularity in our future, when a greater-than-human intelligence evolves. To quote Simon Smith:

If artificial intelligence is possible -- if there is nothing supernatural about our brains -- then it seems that, sooner or later, it will be a reality.



Whether the development of greater than human intelligence is 5 years away or 100 years away, we should discuss and prepare for this scenario. Whether it comes through human augmentation or through artificial/non-biological means, we should discuss and prepare for this scenario.

Development of greater than human intelligence does not have to be a dystopia. We already augment our bodies with machines. Look at all the people who have cochlear implants, artificial legs with embedded computer chips, or new implants for restoring sight. There are millions of cyborgs walking among us today...and it doesn't seem like a dystopia. The more informed I become about the changes that lie ahead, the less fearful I am...and the more I want to work towards ensuring a peaceful transition into our post-human future.

#7 Chip

  • Guest
  • 387 posts
  • 0

Posted 25 September 2002 - 02:11 AM

Seems to me, Mind, that you give examples of human augmentation to support the idea that “Development of greater than human intelligence does not have to be a dystopia.” I’m all for augmentation and supplementation of human intelligence. As far as making an intelligence greater than human, if it is in the service of humanity, it will be human intelligence, greater than present human intelligence but not greater than the humans at the time of its happening. The only way it could not be human intelligence is if it is not a tool for satisfying and pursuing human needs and wants. Should we make something that is more intelligent than us that is not designed to serve us? I think whether or not we ever see a dystopia will be dependent on what we choose to do. Maybe I shouldn’t say we. I don’t consider myself a Transhumanist/Singularitarian/Extropian. Maybe humanist/singularitarian/syntropian. I had to make up that last word; it pertains to being an anti-entropy force.

“The more informed I become about the changes that lie ahead the less fearful I am...and the more I want to work towards ensuring a peaceful transition into our post-human future.”

I have heard it said that if one is not paranoid they must be crazy. Yes, I want a peaceful transition too but not to a “post-human future.” I want to be there in the future and though I may be quite different than I am today what with all the supplementation and augmentation, I will still be human. Enhanced, more intelligent, more rational, more capable, more free but still happily human with all of the potentials and possibilities that entails. I don’t think you can have “a peaceful transition into our post-human future.” A lot of people are going to fight being relegated to being has-beens. If no one has started, if the mass of opinion is to forsake our own singularity, then let me be the first to say “NO.”

#8 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,364 posts
  • 2,000
  • Location:Wausau, WI

Posted 25 September 2002 - 01:02 PM

You make a good distinction in your post Chip (about what post-human means). When I say post-human, I mean "significantly more intelligent than humans of the present day", most likely occurring through augmentation.

However, I can still envision a run-away intelligence scenario at some point in the future where some intelligent being has access to its own source code and bootstraps itself into super-intelligence, much higher than its peers. By acknowledging this possibility, I don't feel I am being anti human. I would like to see all humans make it through the major changes that lie ahead with as little trauma as possible.

#9 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 25 September 2002 - 03:53 PM

Those that have sought transcendence through thought throughout the ages have always been transhumanist. Just like our common lucky uncle monkeys and aunts that grabbed burning forest branches, or prairie sedge, and made dinner and an industrial epoch.

The very quintessentially human behavior of seeking god through analysis of spirit is at the heart of all human "progress". Those that take this path have sometimes found wisdom and often much more. They haven't always found God, but they were never harmed by the asking, regardless of most answers arrived at for themselves.

Good and Evil are Human Ideas anthropomorphized onto our perspective of Nature. It is a false dichotomy when thinking about "Life".

Physics for the most part defines our understanding of Nature, for the most part...

Quantum Mechanics suck at politics.

The problem about Law, and the distinction between Man Made and Natural, is as much a question of Human versus Natural selection as it is about Laws governing behavior, physical and social.

Natural Laws can't in fact be broken; those of us that twist technology into giving that false impression are in fact like lawyers finding loopholes in the limits of application. But Social Constructs are not so absolute, and the "memetic sense" of it is that it is "supposed to be that way".

It is not an accident that both the syntax and lexicon of law for physics and jurisprudence overlap in this discussion. It's not coincidence either.

It is an example of social engineering of remarkably elegant design. It is also human, not extraterrestrial or divine. It was our own ancestors that set us up. This has been debated since Pythagoras and Hammurabi.

The first government was Theocratic. History must always be kept in perspective. The memetics of social structures are constructed of paradigms described by models understood as examples of "Natural Events". The concept of a Social Contract can be extended to all living participants in the structure of a state.

Now I will piss a lot of people off: this can be said to include even domestic animals, as they are "property" at the very least and as such constitute a requirement for a constituent set of rules governing treatment, possession, responsibility, benefits, etc. When you start making rules it opens a major can of worms.

So who are Transhumans? They are the ones throughout all history that have made their neighbors look askance, often by saying they were discontent with the status quo, unwilling to just preserve tradition, and that self-improvement wasn't just a diversion but a goal.

I think we are just creating a scientifically PC way of using the word Transcendent. But that word carries too much baggage for most.

So what?

If I can devise a viable and safe way of uploading data directly into my cerebral cortex with functional techlepathy as an advantage for a new form of powerful global social communication then bring it ON!

We are talking the ultimate in distance learning and Virtual Experience. Stimulate the pons of the brain, as described in my post on the brain, and a factual experience for data input can be made into a perceptual out-of-body reality for conceptual transport. Tactile for social as opposed to strictly physical interaction. The web as some are starting to use it now.

Cyborgs are us.

Prosthetics is toolmaking, fellow apes. We are in a new tool age, with information and communication as principal conceptual tools. But can we make this tool carry love that doesn't just come off as prurient smut? Can we communicate more than just words, good wishes and ideas? Can we manifest deeds?

#10 Psychodelirium

  • Guest Philosopher
  • 26 posts
  • 0

Posted 25 September 2002 - 05:00 PM

Seems to me, Mind, that you give examples of human augmentation to support the idea that “Development of greater than human intelligence does not have to be a dystopia.” I’m all for augmentation and supplementation of human intelligence.  As far as making an intelligence greater than human, if it is in the service of humanity, it will be human intelligence, greater than present human intelligence but not humans at the time of its happening.  The only way it could not be human intelligence is if it is not a tool for satisfying and pursuing human needs and wants.


I'm going to expand a bit on what Mind and Lazarus said. First of all, when I first came upon the term "transhumanism", my default definition presented itself as "some philosophy that derives itself from humanism but goes beyond it", and "posthumanism" would be pretty much the same. Compare "post-structuralism", which refers to a set of ideas (however tangled and confused they may be) that derives itself by and large from structuralist ideas but reacts against some key aspects of structuralism and expands on others. The same is true for "transhumanism" as related to "humanism". Transhumanism is built upon the foundation of humanism, but it rejects some tenets of humanist philosophy - like any existential justification of death, or the idea that transcending the present human condition is too risky, or outright morally wrong - while expanding and/or adding others - like futurism and the concern with emerging ultratechnology.

Now that would be all well and fine, but transhumanists create a certain confusion by using words like "transhuman" and "posthuman". These words carry the connotation that something valuable is lost in the future, the human drama, if you will. Humanists faced with these terms become instantly suspicious of transhumanist philosophy, and many transhumanists themselves carry "posthumanity" to the extreme of "anti-humanity", perhaps unintentionally. Ubiquitous mention of "anthropocentrism", and the idea that humans are somehow flawed, incomplete, stupid, or morally incapable, are examples of this anti-humanist trend in transhumanism. In fact, the term "transhuman" merely refers to a human in a state of transition to posthumanity, and the term "posthuman", to a human that has transcended to a state so far removed from his present one that the term "human" will no longer suffice. Nietzsche recognized this phenomenon of self-transcendence in his "overman". In this sense, We Have Always Been Transhuman - the title of an essay about this very topic which I began writing a while ago, and which this recent exchange has inspired me to complete - and uploaded or augmented humans represent the very essence of "posthumanity". Transhumanism is ancient. It is simply a more acute and properly articulated expression of a drive that has been with us since the dawn of humanity, one that is really a part of what it means to be human, in fact.


#11 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 25 September 2002 - 06:46 PM

You certainly undermined my confidence in using such words as Meme succinctly.

[!] NOT [!] ;)

But in any case I will continue to use them. And as Bucky did, with reason.

Regardless, as the operations of Social science become functionally more refined and their predictive accuracy improves, this field will increasingly resolve into a Science of manipulating the paradigm constructs of culture and society, and these "Socially Engineered" elements will become recognizable and manipulable by more than politicians, priests, and cons. They are as important as law, and in fact it is impossible to live under a Rule of Law if the society that is capable of manipulating memetics does not understand the interaction of the two. Law is an element of memetics, not the other way around. Choke on that, Hans Kelsen.
Hans Kelsen: International Peace through International Law

Genes are to biology what Memes are to psychology combined with sociology. But memetics, unlike genetics, is still very nascent. It took genetics almost two centuries to go from where memetics can be said to be right now to where genetics is today. But this time it will take less a Watson-Crick revolution and more computer models that learn from successive generations and updated data. Here we have a science that is self-developing and too potentially powerful to be left in the hands of the simply well-intentioned. This area is more like Nuclear Physics for politics.

The argument about lexicon is an old one. Many say the same thing about saying Nanotech, that it is really indistinguishable from a variety of other terms. Pragmatically they are wrong. Nano is different and so are many new terms. Transhuman is another.

We invent words as a function of sociolinguistics to meet the memetic challenges of generational shifts in language use and perception of meaning. I can argue that Transcendent is essentially the same as Transhuman, but the transcendentalists are not transhumanists. Well, at least that is not how many who claim either label describe themselves.

And Chip, you call me long winded? How about shorter paragraphs that possess a more concrete structure? :)

You are right, it takes one to know one. :)

Regardless, my point was made and I think many here are beginning to tackle it. Bucky's new lexicon was appropriate to the usages he was trying to add to the composite word structures.

I think it is fair to invent and use words at their limits but I also think we need to be consistent and precise when we do. Most here are when we stop to realize that we are often arguing at cross purposes.



