If you could augment your intelligence, how far would you go?
#61
Posted 06 December 2009 - 03:56 PM
So even if you were superhumanly intelligent, you would hardly have more knowledge than the rest of the scientific community. And whether you have the motivation to investigate scientific issues or just sit at home solving sudokus is a somewhat different issue.
#62
Posted 06 December 2009 - 04:30 PM
Actually, being very intelligent doesn't necessarily imply knowledge or even wisdom. You can have an IQ of 1000 while still being narrow-minded ("math is so much fun, the rest isn't interesting"). If you had unlimited intelligence right now, you would still be limited by the empirical knowledge we currently have. You couldn't tell whether string theory or loop quantum gravity is right, since neither makes testable predictions at the energies of current particle accelerators - you would just be able to understand them after spending a year or two reading through all the background literature written in the last 30 years. You couldn't conjure up new technologies from nothing, any more than a super-intelligent person from the 1700s could have invented the landline telephone.
So even if you were superhumanly intelligent, you would hardly have more knowledge than the rest of the scientific community. And whether you have the motivation to investigate scientific issues or just sit at home solving sudokus is a somewhat different issue.
I'm not so sure about this; we have no idea what someone with an IQ of 1,000 is capable of.
#63
Posted 06 December 2009 - 06:14 PM
Of course, with a high IQ you would be able to construct very complicated theories such as string theory; you might even do, alone, the work of the hundreds of physicists who originally developed it. But this doesn't tell you whether the theories are valid or just empirical nonsense.
You can be hyperintelligent, but if you are blind and deaf it doesn't count for much.
#64
Posted 08 December 2009 - 04:08 AM
#65
Posted 12 December 2009 - 04:19 AM
I would augment my intelligence, but how far I went would depend heavily on the new information gained as my intelligence augments. Assuming that the augmentation is not instantaneous, who would be foolish enough to decide beforehand? Let's say that I hit an IQ of 300 and find that as I become more intelligent I become less and less happy. Why would I continue to augment? This is just one example of a reason to stop augmentation that you couldn't possibly foresee.
So saying you'd augment to any degree beyond where you can reasonably assume you'd still want to augment is sheer folly.
(This all assumes that the opportunity to augment isn't fleeting. But even if it were fleeting and instantaneous, a degree of caution would be advised. I'd choose to make myself smart enough to be able to make my own augmentations, obviating the need for a snap decision.)
#66
Posted 25 March 2010 - 07:11 AM
Edited by full_circle, 25 March 2010 - 07:12 AM.
#67
Posted 25 March 2010 - 07:32 AM
The poll's choices were:
Actually, I want to reduce my intelligence. Knowledge is a burden [ 1 ] [0.84%]
No thanks, I'm smart enough as it is [ 5 ] [4.20%]
I'd make myself as smart as Einstein [ 1 ] [0.84%]
I'd give myself super human intelligence but I'd keep some limitations so I could still be challenged [ 13 ] [10.92%]
I'd become smart enough to understand every concept that can be understood [ 88 ] [73.95%]
other (explain) [ 11 ] [9.24%]
*******************************
I tried to vote for "other", but couldn't, because the (no doubt) infallible software here informed me that I had "already voted in this poll". So much for my present level of intelligence! This is in line with my discouragement in the mid-1980s that a $35 Radio Shack chess game could unfailingly defeat me so long as I set the level above some particular point (I won't reveal how low that was; it's too embarrassing).
Nonetheless, I'm hopeful that with something better than a biological brain I'd be able to be a "better me", and given that no one edited my life-experience mindfiles on the way to that, I'd be happy for any improvements that could be made, however vast.
One thing that tickled me about the questionnaire was the idea of "understanding every concept that could be understood". This implies that concepts, phrased as they usually are, are even worth understanding, or (to put this in the most disparaging way possible) that "understanding them" might simply mean developing an inescapable feeling of contempt for the mind that originated them. One example seems to me a classic illustration of this, and it has to do with Ayn Rand's position on free will.
I have a great deal of respect for this great lady, and am sad that she was not frozen when she died. For example, one of the most powerful (and as yet not widely recognized) ideas I'm indebted to her for is the use of the word "existence" vs. "universe". The latter term has spawned all kinds of chaotic (and for the most part incomprehensible or worse) concepts about "multiple universes", "many worlds", and even Carl Sagan fell prey to this in his speculations about "universes within atoms", etc. The term "existence", however, steps neatly around this. "Existence exists," Ayn Rand said, "And only Existence exists!" Yes, I know, we can hear these words being spoken by Nathaniel Branden, but Ayn Rand has them in a novel somewhere, or something close enough to it for any practical purpose. This is very clean thinking, given the limitations we have in using any of the many human languages with which we are now stuck.
On the other hand, when it came to free will, Ayn Rand got trapped. She couldn't give it up, and she couldn't reconcile it with either strict or sloppy causality. To her, free will represented a possibility of choosing moral action over immoral action, and she couldn't see how deterministic behavior could be moral. I won't get into explaining how that could be, because it would take up too much time here, but Ayn Rand (again, we can hear Nathaniel Branden speaking, but it's her thinking behind him) said that free will depended on "causal primaries". They didn't leave interpretation of this up to your imagination. A sentence or two later, it was explained that "a causal primary is an event in man's brain having to do with man's consciousness that has no causal antecedents." Really! We're to rest the whole of morality on the idea that something can happen in our brain which has no causal basis? This has to be one of the most embarrassing corners anyone has ever painted themselves into, and yet it is such a small flaw in the context of all the other really great things she created that one has to simply acknowledge that it is possible to stumble once in a while and move on.
So, "understanding every concept" may not really be a very good way of describing merit in terms of intellectual function, unless it is itself framed in a better grid of interrelationships. And the choices of lesser mentality are there for those whose thought processes are so painful and troublesome to them as to be wished away, if that were possible, so that by being dumber one might be happier? OK, there are times when that point of view has been expressed, too. Once, while going through engineering school, I found myself on a date with a nursing student who was very happy with this way of thinking about things. She told me proudly that one of her high school teachers had told them (and she took this seriously) that they were lucky, because they would never have any psychological problems. When the teacher was asked, "Why?" she was supposed to have replied, probably without screwing her face up into a sarcastic expression, "Because you're just far too simple, that's why!" And this was taken literally, and in a positive way, by this nursing student. As you can presume, this date did not lead to any long-term romantic relationship.
So, where am I going with this choice of more intellect or less, as a goal? I guess at the moment I'd have to say that "more is better", and that it had better be "orchestrated with the goal of being networked with others who have resolved to treat each other fairly in a highly defined way", or the outcomes could be disastrous, as warned by Ray Kurzweil in his "The Singularity is Near" and by Howard Bloom in his books "The Lucifer Principle" and "Global Brain". The idea of a "hard AI takeoff" where the self-conscious cyberbeings are reflections of "how we are now" is so bleak as to make movies like "Terminator" and "Matrix" (by comparison) seem like stories of Goldilocks and the Three Bears. If we don't seriously reconstruct our personalities before we equip them with super-high performance mental capacities, we'll have the kind of society where water guns have been replaced by paint ball guns launching compact tactical nuclear weapons. It will be a bad day for us all!
By now, hopefully most of you have given up on me, but I've been holding in the Windows clipboard the contents of a webpage at http://www.lifepact.com/Imachild.htm, where the text portion is this (now I'll paste it in):
I Am A Child
by Fred Chamberlain
(April 1970 – Revised January 2002)
[Note: The term “man” below is intended to include both sexes, particularly inasmuch as it may well turn out that women have higher leadership potential than men over the next several centuries, as a result of their greater natural capacities to nurture and network.]
I am a child - among infants who call themselves adults and imagine that their years of growth have passed. I will remain a child because to mature is to prepare for death, and my goal is life.
The purpose of life is survival. The weed and the Sequoia both survive, but somehow there is a difference. Man, being self conscious, can work to alter his nature.
The man with the stature of a weed can seek to become like a Sequoia. The man with the stature of a Sequoia can seek to become anything he can comprehend. But the infant who calls himself an adult seeks nothing.
What interaction do I seek with regard to others?
Besides exchanging my work for theirs, I seek to help them grow. I will use my strength to maximum advantage, to increase the growth of others to the greatest extent for each minute of my time invested.
I do not seek to help those who cannot benefit from my help - perhaps I can benefit from theirs. I do not seek to help those who do not want my help - there are too many others who do want it. I do not seek to help those who exploit or damage others. Survival does not lie that way. I do not seek to teach first grade if I am geared for teaching at a high school level.
I seek most to help those who wish to deal with me freely, without coercion. Their growth can only increase the fruition of our relationship, and they do not threaten my freedom to pursue my own values.
Today, I swap apples for oranges. Tomorrow, perhaps I will trade materials extracted from a planet's core for products manufactured in the corona of the Sun.
Today I help children to learn about the pitfalls of dependency, the limitations of independence and the potential of interdependent synergy. Perhaps tomorrow I can help other children to develop further potentialities of networking along the lines suggested in Paolo Soleri's "City in the Image of Man".
Beyond all, my goal is life. For the time being, I will fight biological aging, but - I will not become an adult; I will never mature; I am a child.
Additional note, 12/22/2009. This set of thoughts from long ago now mirrors the image of a future that I see evolving along the lines suggested in "The Terasem Truths".
Edited by boundlesslife, 25 March 2010 - 07:54 AM.
#68
Posted 26 March 2010 - 03:45 PM
#69
Posted 12 April 2010 - 10:17 PM
Suppose artificial neurons were invented that could connect with your brain to make you as intelligent as you wanted. Assuming it was 100% safe and guaranteed to preserve your consciousness and personal identity, would you do it? And, if so, how much?
I would only choose to augment my intelligence if I had the resources to fork my sentience.
The unmodified copy would have authority over the altered copy and could decide that the augmentation had resulted in corruption of some basic human traits (and undo the augmentation, or the whole fork).
The way the main character in Simon Funk's 'Afterlife' handles it seems sensible enough.
#70
Posted 15 July 2010 - 09:17 PM
Not only would I want to leave myself room for a challenge that could potentially beat me someday (I kind of feel that option 5 would be like playing in cheat mode - no real rush), but I would also speculate that at a certain level of cognitive ability an individual may gravitate towards some form of insanity, or at least social disorder (especially if he is surrounded by unaugmented minds).
Something like in Borges's short story about a gaucho named Funes, who remembered not only each particular exemplar of an object, but every state it had ever been in, each as a distinct memory. He didn't really like it.
#71
Posted 21 September 2010 - 07:30 AM
#72
Posted 22 September 2010 - 02:43 AM
Hey, can you tell me what the difference is between a BCA and a BSc (Computer Science)?
http://acronyms.thef...tionary.com/BCA
#73
Posted 23 September 2010 - 01:02 AM
Only it would be terribly lonely, being able to see through the charade of people's exteriors and so read their every intention. Mystery is beauty, but so is order. To have to choose between the two is a heartbreaking notion.
#74
Posted 26 September 2010 - 07:46 AM
#75
Posted 26 September 2010 - 11:34 AM
Intelligence could also be based on collective consciousness and on the collective memories of the human race as a whole. So having perfect neurons may not be enough to attain perfect intelligence.
I've always had trouble believing in the notion of some "over-level" of human consciousness. I try to be open, but really - what's the rationale here? How can this be tested empirically? Jung had some ideas that may be appealing, but he wasn't the most intellectually disciplined scholar - sort of a half-mystic, actually.
#76
Posted 03 October 2010 - 01:23 PM
Intelligence could also be based on collective consciousness and on the collective memories of the human race as a whole. So having perfect neurons may not be enough to attain perfect intelligence.
I've always had trouble believing in the notion of some "over-level" of human consciousness. I try to be open, but really - what's the rationale here? How can this be tested empirically? Jung had some ideas that may be appealing, but he wasn't the most intellectually disciplined scholar - sort of a half-mystic, actually.
One example here is the internet. You put knowledge into it and you draw knowledge from it. It is a collective thing. Is it intelligent? Maybe it is. It can predict my actions, knows my preferences, can learn from my past experiences and usage, etc. It is a small and crude example, but I am sure it will become much more powerful in the near future. I can see a time coming when it would be difficult to know where human (personal) intelligence stops and artificial intelligence starts.
#77
Posted 04 October 2010 - 01:39 PM
#78
Posted 04 October 2010 - 05:14 PM
I would want to be as smart as possible in the broadest sense, rather than the narrow band that IQ measures. I would also like to increase my speed of thought, which would clearly require new brain infrastructure, though this may make it much more difficult to deal with people working at normal speed - as if everyone in the world had a terrible stutter.
Are you a millionaire?
If not, it shouldn't be a problem. In this situation most people would have these faster brains; in fact, people who didn't would be discriminated against on the job market, etc., so it wouldn't be much of a choice.
#79
Posted 06 October 2010 - 03:36 AM
I would want to be as smart as possible in the broadest sense, rather than the narrow band that IQ measures. I would also like to increase my speed of thought, which would clearly require new brain infrastructure, though this may make it much more difficult to deal with people working at normal speed - as if everyone in the world had a terrible stutter.
Are you a millionaire?
If not, it shouldn't be a problem. In this situation most people would have these faster brains; in fact, people who didn't would be discriminated against on the job market, etc., so it wouldn't be much of a choice.
Sure there would be a choice. There wouldn't be much of a job market if everyone augmented their brains on top of a new, faster brain structure. If everyone on earth turned themselves into superintelligences, there wouldn't be very many problems left to solve. Hence there would be an abundance of resources, and an explosion of habitations to use them in. Most if not all intellectual problems would be solved within the first few decades.
People are inherently lazy. Who would need to work in a "job market" if you're smart enough to find a way to get all the work you need done without actually working? Have you ever read Tom Sawyer?
Edited by Reno, 06 October 2010 - 03:37 AM.
#80
Posted 08 October 2010 - 11:30 PM
#81
Posted 25 November 2010 - 12:58 PM
Edited by Ark, 25 November 2010 - 12:59 PM.
#82
Posted 26 November 2010 - 03:15 AM
Stephen Wiltshire draws Rome from memory
http://www.youtube.com/watch?v=jVqRT_kCOLI
Edited by e Volution, 26 November 2010 - 03:16 AM.
#83
Posted 04 December 2010 - 05:39 AM
#84
Posted 27 December 2010 - 12:22 PM
#85
Posted 09 September 2011 - 10:38 PM
We, almost all here, seem to want to have the answers so as to ask even bigger questions and seek more answers :D
#86
Posted 10 October 2011 - 06:14 PM
#87
Posted 30 October 2011 - 03:59 PM
A second thought: I wonder what would happen if you suddenly became extremely intelligent and saw how the world truly works. Would you fall into a deep depression seeing the futility of it all?
Maybe. But even if you knew exactly how an RC car works, that wouldn't make it less fun to play with. Even if it is futile.
#88
Posted 30 October 2011 - 05:51 PM
Further along these lines, communism might want all people equally "smart".
#89
Posted 31 October 2011 - 05:43 AM
Also, it could be a "temporary" answer until cloned stem-cell neurons become a reality.
So why not forget about superintelligence for now and think about those two aspects?
By the way, how far has progress on artificial neurons come so far? Is there a prototype of an implantable artificial neuron?
#90
Posted 05 November 2011 - 09:23 PM
I would sacrifice myself for my race, no matter how unstable it made me or how boring my life became. With this knowledge I would make myself an immortal guide for humanity.
Can't guarantee my view on the topic wouldn't change once I obtained omniscience, though.