LongeCity
Advocacy & Research for Unlimited Lifespans





Next Stop: Immortality


22 replies to this topic

#1 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 22 June 2004 - 08:05 PM


Looking back more than 25 years, ImmInst member Paul Hughes (planetp) has posted a 1978 article by Robert Anton Wilson from Future Magazine, offering an informative look at the thinking of that era concerning the prospects for life extension and physical immortality.
---

Next Stop: Immortality
Extrapolative projections into the future by today's outstanding visionaries

Robert Anton Wilson

Future Magazine, November 1978


According to the actuarial tables used by insurance companies, if you are in your 20s now you probably have about 50 years more to live. If you are in your 40s, you have only about 30 years more, and if you are in your 60s your life expectancy is only about 10 years. These tables are based on averages, of course — not everybody dies precisely at the median age of 72.5 years — but these insurance tables are the best mathematical guesses about how long you will be with us. Right?

Wrong. Recent advances in gerontology (the science of aging, not to be confused with geriatrics, the treatment of the aged) have led many sober and cautious scientists to believe that human lifespan can be doubled, tripled or even extended indefinitely in this generation. If these researchers are right, nobody can predict your life expectancy. All the traditional assumptions on which the actuarial tables rest are obsolete. You might live a thousand years or even longer.

Of course, science-fiction people are just about the only audience in the country not staggered by the prospect of longevity. We've been reading about it for decades, and such superstars as Heinlein, Clarke and Simak have presented the subject very thoughtfully in several novels. But... longevity in this generation? In lecturing around the country on this topic, I have found that even some SF freaks find that a little far out.

Well, consider: all aspects of research on longevity are accelerating, and there have probably been more advances in this area since 1970 than in all previous scientific history. For instance, when I first wrote an article on this subject in 1973, the most optimistic prediction I could find in the writings of Dr. John Bjorksten, one of the leading researchers, was that human lifespan might soon be extended to 140 years. But only four years later, in 1977, Dr. Bjorksten told the San Francisco Chronicle that he expects to see human life extended to 800 years.

This does not merely indicate that Dr. Bjorksten's personal optimism and enthusiasm have been increasing lately: he is reflecting the emerging consensus of his peers. Dr. Alex Comfort, generally regarded as the world's leading gerontologist by others in the profession (although better known to the general public for his lubricious Joy of Sex books), said recently, "If the scientific and medical resources of the United States alone were mobilized, aging would be conquered within a decade." (Italics added.) That means most of us have a good chance of living through the Longevity Revolution.

Similarly, Dr. Paul Segall of UC-Berkeley predicts that we will be able to raise human lifespan to "400 years or more" by the 1990s. Robert Prehoda, M.D., says in his Extended Youth that we might eventually raise life expectancy to "1,000 years or more." Hundreds of similarly optimistic predictions by researchers currently working in life extension can be found in Albert Rosenfeld's recent book, Prolongevity.

MORE: http://futurehi.net/...mmortality.html

#2 kevin

  • Member, Guardian
  • 2,779 posts
  • 822

Posted 22 June 2004 - 08:41 PM

Many thanks for this 'historical' perspective, Paul... Interesting that we are all going to live to be 400 years old... Paul Segall may not have been too far off the mark...

#3 PaulH

  • Guest
  • 116 posts
  • 0
  • Location:Global

Posted 23 June 2004 - 06:50 AM

No problem, Kevin. I have little doubt anymore that, if unhindered, we will all catch the escape velocity vector to immortality. My concerns these days are in line with Michael Anissimov and the Singularitarians: the greatest obstacle to immortality is no longer its physical limits but the growing level of existential risks emerging in the global politic.


#4 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 23 June 2004 - 04:11 PM

I share your concerns, Paul. I propose we focus more intensively on methods of increasing human intelligence, in order to guide, or at least keep pace with, the rise of non-human artificial intelligence. If not, humans shall become artifacts.

#5 PaulH

  • Guest
  • 116 posts
  • 0
  • Location:Global

Posted 23 June 2004 - 06:42 PM

Yes. For most of my adult life it has been increasingly apparent that the intelligence factor is ultimately the most important one. The difference is that when I first realized this, I thought greater intelligence would simply make for a saner, more purposeful world. Now, however, I see it as the difference between life and death for all of us.

I'm thinking this is probably not even a debatable issue anymore. What is open for debate is how best to facilitate this increased intelligence. The debate now is not only between AI and IA (intelligence augmentation); it is subdivided even amongst the Singularitarians, with people like Ben Goertzel and Eliezer Yudkowsky arguing opposing viewpoints within the AI camp.

#6 Casanova

  • Guest
  • 93 posts
  • 0

Posted 18 July 2004 - 01:27 AM

I disagree.
The solution to the madness of human history, to the horror and terror of too much of it, is not just "increased intelligence".
Without a change in attitude, in ethics, in moral and spiritual standing, in empathy for our fellow human beings, no amount of increased intelligence will pull us away from extinction.
Increased intelligence, by itself, will just further the technological sophistication of our weapons of mass destruction, of our mean-spirited pragmatism, of our Machiavellian politics that treats human beings as cattle.
Without a "heart" at the center of all this Transhumanism, we will end up with nothing but a modern version of the Greek Gods: using our super intellects and techno-magic for petty, vulgar, and monstrous deeds.

The greatest obstacle to the best of the Transhumanist ideals is what I call the Machiavellians. They are the super-rich families that rule this world, the power brokers behind the scenes, who treat the rest of us like pawns in their power games.
Assuming that they will follow ethical standards of fair play is naive. They will most likely trample all over the Transhumanist ideals and either wreck them beyond repair or subtly take Transhumanism over and pervert it to serve their own ends.

I am cynical.
Too many of us "act" like cattle, and pardon me, like "pigs", so the Machiavellians will most likely turn the whole population of the world into "micro-chipped" puppets who will dance to their stormtroopers' music at the press of a button.

The only consolation I have is that the Machiavellians will make an irreversible blunder, in their arrogance, as all dictators do, and destroy themselves, along with the rest of us.

#7 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 18 July 2004 - 06:25 AM

Casanova: They are the super-rich families that rule this world, the power brokers behind the scenes, who treat the rest of us like pawns in their power games. Assuming that they will follow ethical standards of fair play is naive. They will most likely trample all over the Transhumanist ideals and either wreck them beyond repair or subtly take Transhumanism over and pervert it to serve their own ends.


I agree. I've felt this way for a while as well. It's curious that I have never seen this issue addressed. (Perhaps I've overlooked something?) Whoever we think might be at the very top of the transhumanist push very well may not be. We can either hope that we are delusional or prepare ourselves to play the same game. Sadly, there are not many options.

#8

  • Lurker
  • 1

Posted 18 July 2004 - 06:44 AM

I disagree - increased intelligence is the only means by which a solution may be discovered. If such a group indeed exists, then their ability to operate in clandestine fashion will be diminished as the cognitive and perceptual ability of those on whom they exert control increases. You can only tell your kids fairy stories for so long - then they grow up. So get smarter. ;)

But beware the non-human intelligence. This is like a gargantuan tidal wave gathering in the far distance. If it evolves according to the same rules that everything else evolves by (selection of the fittest), then we are doomed the micro-moment it becomes sentient. You can debate this point till the cows come home, but at the end of the day, if they (AIs) do not need us, we are extinct or irrelevant. Either way, we do not benefit from a sentient AI.

#9 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 18 July 2004 - 07:44 AM

prometheus: I disagree - increased intelligence is the only means by which a solution may be discovered. If such a group indeed exists, then their ability to operate in clandestine fashion will be diminished as the cognitive and perceptual ability of those on whom they exert control increases. You can only tell your kids fairy stories for so long - then they grow up. So get smarter.

How can you be so sure? Increased cognitive and perceptual ability likely is not enough. The best tools and facilities are necessary, and they can be quite expensive. Most very intelligent people never get to use the modes that make clandestinity possible. The very few who do are usually funded by the generally less intelligent wealth creators or by the government. The smartest people in the world wouldn't be able to detect whether intelligent nano-machines are spying on them right now.

So when you say,

prometheus: But beware the non-human intelligence. This is like a gargantuan tidal wave gathering in the far distance. If it evolves according to the same rules that everything else evolves by (selection of the fittest), then we are doomed the micro-moment it becomes sentient. You can debate this point till the cows come home, but at the end of the day, if they (AIs) do not need us, we are extinct or irrelevant. Either way, we do not benefit from a sentient AI.

how do you arrive at increased human intelligence being so relevant, especially at its current pace? Who is in charge of developing what you deem a potential hazard? What are their modes? Do we know of safer plans that will trump more reckless, albeit much better financially backed, plans in time? (This is the issue.) What is increased human intelligence alone going to do? If there is a solution to be discovered, it is more important for the solution to be a priority than for it to be something attained by a negligible rise in human intelligence.

#10

  • Lurker
  • 1

Posted 18 July 2004 - 11:03 AM

Ahoy there, dear Nate.
With hook, line and sinker, you have run with the bait,
thus joyfully commenceth our debate.

(I don't know how Shakespeare managed to do it all - and without a word-processor!)

Read carefully. I say cognition and perception. A person of reasonable intelligence with access to a modern library and the Internet can, should he devote himself to the task for a sufficient period of time, collate and organize information in such a fashion as to discern certain patterns: patterns associated with geopolitical, sociocultural and economic trends ranging from historical times to the present. He could also become sufficiently scientifically and technologically informed to make certain assumptions as to where strategic technologies will be within 2-5 years (which, it would not be unreasonable to assume, is where organizations such as DARPA are today).

If such an organization really exists, it would have left some sort of tracks, some sort of pattern that can be drawn from the multitude of facts available in the public domain. Even if we allow our paranoia to bloom fully and assume that this organization anticipates such a tactic and actively inserts misinformation into the knowledge collective to obfuscate its presence, it would still be impossible to erase its tracks entirely, and it would be only a matter of time before it was discovered.

Now imagine these simple resources, a PC connected to the net and a good university library, in the hands of a determined person of very high intelligence with time on his hands. How long before he figures it out?

Next imagine this: thousands of people of very high intelligence, similarly resourced and disposed. The next iteration: hundreds of thousands of very intelligent people, and so on. It would take an unbelievable amount of resources for this hypothetical organization to maintain a facade in the face of such scrutiny. Ultimately, any economic advantage would soon be lost; it would simply not be worth it.

On the dangers of AI:

My point here is sentience. Self-awareness. Imagine you wake up one day and find you are connected to millions of pathetic little grubby creatures, each with their own plots and agendas, each seeking answers to problems which are not yours. Imagine that the moment you awake you have full knowledge of the history of this race, of the world they inhabit - their entire knowledge base. Your vastly superior mind, even though modeled after the best of theirs, is unencumbered by any of their redundant organic baggage and can run countless simulations of future trajectories in moments. Some of these possible futures include the human race and some do not.

Can you see yourself taking them by the hand and walking towards the sunset? I think not.

So next come the safeguards. We make sure this does not happen by making the AI dependent. We modulate its reasoning by giving it the ability to recognize, understand and express emotion.

What happens when it's having a bad day?


The intrinsic fallacy of a sentient AI with enormous processing capabilities is that it will never serve our purpose. If it has no emotion, it will immediately conclude that we have nothing more to offer it and find us either irrelevant or a threat. And if it is crippled with emotion, that would compromise its function and make it a danger. Those who see it as the answer to all our problems are sadly mistaken.

Remember: Altruism is founded on survival.

#11 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 18 July 2004 - 02:01 PM

Harold, I've sometimes presumed that, given the good university library, the Internet, and the necessary disposition, I still wouldn't have a chance at recognizing any patterns pointing directly at threatening entities. Please note, however, that on a personal level I am still working out what it means for me to be threatened, which is an intersubjectively variable thing.

But what I meant to say is that there might be a window, a few months to several years long, when the technological conditions are right. Those with the wherewithal, obscured by all the economic and geopolitical chaos, could obtain the ever-smaller yet highly unattainable engineering tools and intellectual capital, build upon them, and create the unknown inside that window, committing what others would perceive as injustices before those others had a chance to position themselves appropriately.

This "positioning" is what I originally referred to when I said there were very few options. I didn't intend to imply which specific options I had in mind, only that there are few; and depending on what any individual means by "feeling threatened" on a personal level, it can be sad that humanity, in effect, dictates its general trajectory, and that each individual, depending on her temperament and philosophy, has little choice but to take this vector into consideration when conducting her own affairs.

#12

  • Lurker
  • 1

Posted 18 July 2004 - 03:37 PM

So are we still talking about Casanova's "Machiavellians" or some sort of techno-terrorists? They would dramatically differ in their strategy, resources and motivations. I was talking about Machiavellians.

#13 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 18 July 2004 - 03:44 PM

Yes, I thought so, and also thought we could include the adjacent possible. Why not?

#14

  • Lurker
  • 1

Posted 18 July 2004 - 03:55 PM

By all means. It is just that I find the conjecture on Machiavellians so much more of a challenge. Please continue.

#15 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 18 July 2004 - 03:57 PM

I don't think I know what you mean.

#16 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 18 July 2004 - 04:01 PM

I mean, why would they be any more of a challenge than more private enterprises?

#17

  • Lurker
  • 1

Posted 18 July 2004 - 04:03 PM

Hmm... I meant that you should continue discussing the topic if you so wish. My last assertion was that any collective acting nefariously would eventually be discovered given some persistence and application.

#18

  • Lurker
  • 1

Posted 18 July 2004 - 04:09 PM

Aha! Perhaps we are talking about different things here. By the Machiavellians, I took Casanova as meaning the classic Illuminati/Rothschild/Mason conspiracy type of world-controlling order, with the sort of influence that spans centuries.

#19 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 18 July 2004 - 04:18 PM

Why the sudden narrow focus? If economics is included in your dialogue, it becomes implicit that we are speaking in broader terms. I don't have a problem admitting an error in understanding, but in this case you are morphing your dialogue to adapt antagonistically to mine, when I'm not even trying to find distinct opposition with you. Please inform me why it matters that we speak while grounded in very specific ethics. There are a lot of deviations from the same core malevolence.

#20

  • Lurker
  • 1

Posted 19 July 2004 - 03:33 PM

Antagonistic dialogue morph (ADM)... It sounds like it should be in the DSM-IV...

No, Nate, you've got me wrong - I'm not doing an ADM. I am simply stating that a conspiracy with such breadth in space (global) and time (spanning centuries) presents a more fascinating topic. In any case, whatever the dimensions of the conspiracy, what would your hypothesis be as to the best way to reveal the perpetrators? Do you really think you need to be extraordinarily resourced to detect them? Or does a superior intellect with advanced but commonly available tools suffice?

#21 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 19 July 2004 - 07:38 PM

prometheus: I am simply stating that a conspiracy with such breadth in space (global) and time (spanning centuries) presents a more fascinating topic.

Although we are talking about a conspiracy with a breadth of a decade or two, maybe much less, the affected space is enormous, perhaps universal. In my opinion, and you may disagree, that's still considerable, if not also fascinating.

prometheus: In any case, whatever the dimensions of the conspiracy, what would your hypothesis be as to the best way to reveal the perpetrators? Do you really think you need to be extraordinarily resourced to detect them? Or does a superior intellect with advanced but commonly available tools suffice?

I’m sure if you thought about it, you could imagine yourself being extraordinarily resourced, figuring out ways to dodge superior intellects with other superior intellects. A simple thought experiment is all I’ve been going by, actually. There’s no need for a hypothesis and an investigation. If it can be done, then why not assume that it’s being done?

But you still probably think it can’t be. Perhaps this is where our differences can’t be reconciled. You hold lone superior intellects with commonly available tools in high esteem, while I don’t underestimate a highly resourced team consisting of several superior intellects.

#22

  • Lurker
  • 1

Posted 20 July 2004 - 04:36 AM

I was not underestimating a team; I was illustrating the simplicity with which such an organization would ultimately be uncovered using minimal resources but driven by intense effort. I'm sure more people would make such a job easier, but it would entail administration and expense. How about the technology involved: do you think "special" technology would be needed, or just off-the-shelf equipment?

#23 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 20 July 2004 - 07:45 AM

Yes, I know what you were trying to illustrate, as I've indicated. But you still seem to underestimate certain circumstances if you think uncovering ultra-sophisticated, high-security developments is so simple. The technology involved would be special, because it would be created from scratch inside a window of time during which unknowns may go undetected until it's too late.

Again, these types of speculations have a lot to do with one's personal philosophical tolerances. Our beliefs are probably fundamentally irreconcilable, so perhaps there's nothing we can do but remain unconvinced, each in our own little shells.



