  LongeCity
              Advocacy & Research for Unlimited Lifespans


Lifespan News – Bryan Johnson Speech



#1 Steve H


Posted 05 June 2023 - 07:50 PM


Bryan Johnson made some controversial statements about AI and the future of humanity at the recent Healthspan Summit, and Emmett Short has a few reactions.

Script

[After watching parts of Bryan Johnson’s speech] Oh my God, that was intense. Let’s lighten it up in here, okay? You can come out from under your coffee tables. Shake it off. Welcome back to Lifespan News. I’m your host, Emmett Short. Recently, I attended the Healthspan Summit in Los Angeles, where I took that video of Bryan Johnson speaking. In this video, I’m going to show you the highlights from that interview and give you my commentary along the way. If you’d like to watch the entire interview uncut, I believe it’s going to be posted on Healthspan Summit’s YouTube channel, which I will link down below when it’s available.

For those who don’t know, Bryan is the founder and CEO of a company called Blueprint, which aims to optimize health and longevity through algorithmic precision. He was also the founder of Braintree, a payments infrastructure company he sold to PayPal for a cool $800 million, of which he took home $300 million. So this is a guy who knows a thing or two about building high-level systems and disrupting incumbents, and his latest target for disruption is none other than this guy. [The Grim Reaper]

Oh no, don’t feel sorry for him, he’s putting on his puppy dog face. He’s already dead. Don’t let him manipulate you with those sad eyes. Anyway, Bryan Johnson is spending a lot of money and grabbing a lot of headlines by using himself as the guinea pig in chief of Blueprint. He has been called the most measured human.

This is straight out of the Blueprint slide deck from their website, with stated goals being to make rejuvenation a professional sport and to make rejuvenation the new standard of care, which I personally think is awesome. I wish there were more people willing to spend this much money and willing to put their own health on the line, frankly, so the rest of us can learn from their mistakes and successes. So, I want to make it clear I think what he’s doing is commendable, no matter how indulgent it may be; I think it will end up helping humanity. Good for you, Bryan.

That being said, I kinda have a bone to pick with his worldview in this interview. You’ll see he makes a lot of strong arguments about AI, but in my view, they’re not in line with what people really want. His vision for the future is aspirationally askew in my opinion. While the idea of conquering death and optimizing health, and using technology to achieve that is an admirable goal, his prescriptions for how to achieve it are one minute naive and the next kinda chilling.

As the video goes on, you’ll see what I’m talking about, and if you miss it, don’t worry, I’ll be here to point it out. First up, here’s Bryan Johnson candidly talking about Blueprint. [Clip]

Sorry, the mind is dead, and the goal is to give complete control of your well-being over to an algorithm? You know who says stuff like that? Algorithms. He sounds like he’s doing PR for Skynet, and I’m not even saying he’s wrong. It all just sounds so extreme, and I guess it is, and that’s the point. The cure for aging is probably going to take some extreme measures, and the future of intelligent machines is upon us. We’re all going to have to figure out how much control we want to give over to our new algorithmic offspring. [Clip]

I gotta be honest, this does not sound like the cure for aging that I always imagined. I figured it would be a pill or a series of injections or just a Jacuzzi filled with stem cells or maybe even a brain transplant into a brand new cyborg body. Something easy that lets me continue to do whatever I want, abuse my body, eat unhealthy things, and withstand all the stresses we put ourselves through so we can keep living awesome lives. Not some sort of Black Mirror algorithm micromanaging my every move.

I guess the best way to think about Blueprint is that it’s not the final form of anti-aging, and it’s not the final system for how to best live your life; it’s kind of in this early adopter phase, where the goal is to first not die, so that we can live long enough to reach the easy pop-a-pill aging cure.

That’s cool, but what’s interesting is that this doesn’t seem to be the message Bryan’s trying to get across. Listen to him answer the question: how do we stay human in a post-human world? [Clip]

I see, so the answer to “How do we stay human in a post-human world?” is… take everything that you think you want and that makes you happy, and just be ready to flip-flop on all that stuff at the drop of a hat.

This makes sense in a worldview where we are the pets of a superintelligent AI, so I’m not going to argue with the logic, I mean, if that’s how you think it’s gonna go. But I gotta be honest, I don’t think that’s the answer most people are looking for. Hey, you know all those things you want in life, those things that you care about? Toss them out the window; welcome to the future.

I think people want technology to adapt to them, not the other way around. If we create some runaway technology that reshapes society in some exponential chain reaction where we end up being second class citizens at best, then I think we’re doing the future wrong. Like I may be splitting hairs here because obviously being adaptable is a great quality and we’re going to need to do that, but I don’t think we should go down without a fight.

This is just such a weird way to think. Like, if your favorite ice cream is chocolate but the algorithm tells you vanilla is better for you, you’re not just going to be like, “Oh, I guess my favorite ice cream is vanilla.” The goal is to re-engineer the chocolate ice cream to be healthier, not to just change my mind. This guy’s vision of the future is like being in a really abusive relationship with your mom. Like, his solution for the AGI alignment problem is to just embrace Stockholm syndrome. But not just that, he has another suggestion. [Clip]

(Laughing) I’m sorry. It’s just the easiest applause break ever. Oh yeah? You think the world would be better if we stopped killing each other? Someone give this guy another 300 million. The applause break after that was just so compulsory: yes, of course, non-violence as a species… and for my next platitude. I’m sorry; let’s just go on to the next clip. [Clip]

Okay, AI mimicking humanity, however scary it may be, is a small problem compared to the actual problem, which is the alignment problem: a rogue AI developing its own agenda that doesn’t keep humanity safe and in a position of control. This is something that is overlooked, to put it nicely, in the way Bryan discusses the future of AI. I’ll leave it at that for now. Here’s the rest…

Nonviolence as a goal, amazing. Give the man a Nobel Prize, but it doesn’t have anything to do with the alignment problem. A superintelligent AI might get it in its head that a nonviolent human species is best achieved by putting us all in straitjackets.

Contrast it with somebody like Elon Musk, who is focused on trying to build a truth-seeking AI, attempting to design AGI that adapts to our needs, goals, and desires, and whose contingency plan is at least connecting our brains in some sort of superintelligent hive mind through Neuralink so that humanity’s intention can still shape decision-making about our future. Crazy, but at least the goal is to keep humanity in the driver’s seat.

Then you have Bryan Johnson, whose vision is to give all control to the algorithm. It knows how to take care of you better than you do. Remove any preference or conception of wants and desires, because the algorithm is going to figure that out for you too. And the best chance we have of surviving this AI future is just to be docile, acquiesce, bow down, and kowtow to the higher form of life.

I think, between those two philosophies, you can see which one is, I would say, just more fun to think about, so let’s strive for a version of the future that’s fun, where the algorithms do what we want them to do and where our goals are taken into consideration above what an algorithm thinks our goals should be.

And if you think I’ve taken Bryan’s words out of context, that maybe I’ve re-edited this to fit my narrative and he couldn’t really mean any of this, I would just say: listen to his closing statements unedited. [Clip]

We would like to ask you a small favor. We are a non-profit foundation, and unlike some other organizations, we have no shareholders and no products to sell you. We are committed to responsible journalism, free from commercial or political influence, that allows you to make informed decisions about your future health.

All our news and educational content is free for everyone to read, but it does mean that we rely on the help of people like you. Every contribution, no matter how big or small, supports independent journalism and sustains our future. You can support us by making a donation or in other ways at no cost to you.

View the article at lifespan.io


