• Log in with Facebook Log in with Twitter Log In with Google      Sign In    
  • Create Account
  LongeCity
              Advocacy & Research for Unlimited Lifespans


Adverts help to support the work of this non-profit organisation. To go ad-free join as a Member.


Photo

AI soars past the turing test

chatgpt turing test

  • Please log in to reply
211 replies to this topic

#181 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,808 posts
  • 2,000
  • Location:Wausau, WI

Posted 17 November 2025 - 05:05 PM

Right now. AI is really great at digital media, coding, games, and porn. It is also very good at convincing people to devalue human life.

 

Curing cancer? Nuclear fusion? Advanced space travel? Not so much.

 

We have had various forms of AI for decades. We have had deep-learning models for over a decade. LLMs have been in the limelight for over 2 years now.

 

Everyone keeps saying "look at all of the cool discoveries AI is making".

 

Yet, if you go to the hospital with cancer, you are going to get a treatment that doesn't cure your cancer without severe side effects, and even then, your cancer is highly likely to return within 5 years.

 

 



#182 forever freedom

  • Guest
  • 2,369 posts
  • 68
  • Location:Munich

Posted 17 November 2025 - 06:06 PM

Anyone who isn't already using AI to bootstrap one's life and increase one's productivity and overall competence to face the daily challenges we all have, has no one to blame but himself. If AI isn't already being used to make one's quality of life objectively better, it's the person's own fault. 

 

AI is right now already an amazing tool and helps with so many things, I think people should focus on how they can use AI to improve their lives (and make money! There are countless opportunities nowadays more than ever) instead of complaining about rich people.

 

In a couple of years we will see AI performing original research and discoveries, let's give it some time, it is advancing exponentially, we can measure it by ourselves, every year AI is getting objectively smarter, it won't be long before it crosses the threshold where it starts making original research and discoveries. 

 

For anyone wanting to see humanity have a fighting chance to slow, halt and reverse aging, AI is our best hope right now, otherwise it would be centuries before we make a dent in aging. If we enter the superintelligence era in the next years and decades, we will have a chance of witnessing the defeat of aging ourselves. Isn't this the major objective of most here? 



sponsored ad

  • Advert

#183 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,808 posts
  • 2,000
  • Location:Wausau, WI

Posted 17 November 2025 - 06:27 PM

I made an entire thread about how the "wonderful future" promised by AI and technology is not happening. In fact, just the opposite is happening. Not only is there no cure for aging (after decades of research, including the use of very capable AI and large volumes of data), but there is not even an accepted definition of aging. Every material good is getting more expensive. Every service is getting more expensive. There is no sign of this trend reversing anytime soon. If you see some positive signs that things are getting more abundant (besides porn and entertainment), please post in the other thread. I would like to hear some good news.

 

The majority of medical research is useless/wrong/fraudulent/not reproducible (well documented). AI will have a tough time trying to discover anything new using junk research from the past. In addition, we have had automated lab equipment for 20 years now, making lab work easier and faster. Yet we have virtually nothing new to combat aging. 20 YEARS!!! and hardly anything to show for it.

 

Like I said, right now, AI is being used mostly for entertainment, because that is where the money is. The companies developing AI are mostly in it to make money - they only say they are trying to solve the world's problems to look good in front of the camera. I am glad that you are optimistic, but you had better hope the the new super intelligent AI that is being recklessly rolled out around the world will be friendly to us.



#184 forever freedom

  • Guest
  • 2,369 posts
  • 68
  • Location:Munich

Posted 21 November 2025 - 11:01 AM

I think patience is needed here because we all have been expecting for decades for AI to take off, but it has finally, after such a long wait, just learned how to crawl, not even to walk properly yet. It may only now be getting to the level of intelligence that it is starting to become relevant for original science and research. There is this interesting link:

 

https://arxiv.org/abs/2511.16072

 

The paper itself in PDF is here: https://arxiv.org/pdf/2511.16072

 

AI is just crawling but it has huge potential, we will see more of this potential being realised for useful things and science in the coming years.


  • Informative x 1

#185 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,808 posts
  • 2,000
  • Location:Wausau, WI

Posted 01 December 2025 - 08:19 PM

I readily acknowledge that there could be a utopian matrix-like future with AI. Or humanity could merge with AI and continue living and evolving. However, that is just one scenario with low probability.

 

There are many other not-so-rosy scenarios, that look more likely every day, judging by how AI is affecting humanity thus far.

 

With AI communicating in a different language, no one will know what is really going on. Kind-of dangerous.

 

AI lies and hallucinates a lot. Not a great sign of a utopian future.

 

Anthropic cofounder is "deeply afraid of what AI will do. When the coders say they are afraid, maybe it is time to pump the brakes.

 

Some of the techno-utopians programming AI think that human extinction would be a "good thing". I don't like the thought of being exterminated by AI. Do you?

 

Is AI already self-aware? It depends upon your interpretation of its thinking patterns. Either way, it would be good to know before pressing forward at break-neck speed.

 

Max Tegmark-led study finds that it is highly unlikely we will be able to control AI - making it safe for humans.

 

Anthropic cofounder says we have created "creatures" and we are not sure what they will do. Sounds super-safe, now doesn't it? NOT!



#186 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,808 posts
  • 2,000
  • Location:Wausau, WI

Posted 06 December 2025 - 01:50 PM

Here is an interesting theory, highlighting a possible near-term future where current AI and humans get dumber together - both just scrolling and drooling over vapid social media, porn, and games, reinforcing each other's bad habits.

 

It seems like this could happen in the short term, but I think that AI and the few remaining creative/hard-working humans will eventually find a way to get smarter together.



#187 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,808 posts
  • 2,000
  • Location:Wausau, WI

Posted 08 December 2025 - 05:36 PM

I find it surprising as well, how little regulation there is in the AI industry. AI is currently a defective product. It lies, defames, hallucinates, makes errors, etc... It is also causing widespread psychosis in many users. It could end up being very dangerous to humanity as a whole. Yet, anyone, any company, any where can develop a new AI product and have it instantly available all over the world, with hardly a hint of regulation or barrier. 



#188 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,808 posts
  • 2,000
  • Location:Wausau, WI

Posted 18 December 2025 - 06:12 PM

Here is an interesting theory, highlighting a possible near-term future where current AI and humans get dumber together - both just scrolling and drooling over vapid social media, porn, and games, reinforcing each other's bad habits.

 

It seems like this could happen in the short term, but I think that AI and the few remaining creative/hard-working humans will eventually find a way to get smarter together.

 

In a similar vein, research papers are now more low-quality and voluminous than ever.

 

A new term has come out in regards to this problem - "slop". I don't think AI content is always poor or low quality, I think that people call it "slop" because there is so much of it. Sure, one digital artist can now do 1,000 times more that they used to, but so can every other digital artist. We are being flooded with AI-generated content. Gone is the novelty of human artistic expression.

 

Same thing with academic papers. Now almost anyone can produce a decent "research" paper with the assistance of AI. With all of the low quality papers flooding the zone, soon AI will be training on "slop".

 

This is mostly happening with commercial grade AI. The experimental AI in the labs of the tech companies, seems to be a little "smarter".


Edited by Mind, 18 December 2025 - 06:12 PM.


#189 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,808 posts
  • 2,000
  • Location:Wausau, WI

Posted 27 December 2025 - 06:06 PM

The movie "Idiocracy" is so often quoted and is so popular because it is so spot on with how the future could evolve. In the movie the future is crude and pornographic. AI is already very good feeding people trash digital media and porn. That is how AI companies are (mostly) making money right now.

 

The doctor scene in Idiocracy is also highly plausible. When doctors start to rely upon AI to diagnose things - it will be become a crutch - and doctors/medical professionals will no longer be able to assess medical conditions alone, similar to how young people cannot read a map, navigate without GPS, or calculate without a calculator. Once doctors turn over their thinking to AI, you will go into the office, the doctor will read the script/diagnosis from the AI and say "your shit is fucked up".

 



#190 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,808 posts
  • 2,000
  • Location:Wausau, WI

Posted 05 January 2026 - 06:03 PM

Interesting that AI companies knowingly release defective and dangerous products, yet have not faced much in the way of liability thus far. That might change in the near future.

 

If GM made a truck and it worked really well 99% of the time, but randomly drove off the road once in a while, they would be sued into oblivion. The government would regulate them into oblivion.

 

AI is the same, yet everyone just says, oh well, things happen.


  • Agree x 1

#191 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,808 posts
  • 2,000
  • Location:Wausau, WI

Posted 10 January 2026 - 04:14 PM

I used to be more of a techno-optimist, but I don't like how things are going right now in the development of AI. There are not enough guardrails and I do not trust the people or governments who are going "pedal-to-the-metal" in developing AI.

 

I did not realize it until reading this cogent opinion piece, that some of my techno-pessimism might have developed during the COVID farce/panic/debacle.

 

 

 

Covid was not merely a public-health crisis; it was a live experiment in expert-driven governance under uncertainty. Faced with incomplete data, authorities repeatedly chose maximal interventions justified by speculative harms. Dissent was often treated as a moral failing rather than a scientific necessity. Policies were defended not through transparent cost-benefit analysis but through appeals to authority and fear of hypothetical futures.

 

 

 

Several AI thinkers implicitly acknowledge this. Bostrom has warned about “lock-in” effects—not just from AI systems, but from governance structures created during moments of panic. Anthony Aguirre’s call for global restraint, while logically coherent, relies on international coordination bodies whose recent track record on humility and error correction is poor. Even more moderate proposals assume regulators capable of resisting politicization and mission creep.

 

The regulators, experts, and institutions that wrecked the world during COVID have not faced punishment, they have not been reprimanded, they have mostly not been fired. Some of these failed "risk experts" are now advising on AI. God help us.



#192 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,808 posts
  • 2,000
  • Location:Wausau, WI

Posted 13 January 2026 - 05:41 PM

Here is a funny article about how "prompt engineers" are complaining about other people "stealing" their prompts for AI media generation.

 

The author is keen to note how the AI companies themselves stole copyrighted material from all domains across the entire world and never faced any consequences.


  • Good Point x 1

#193 zorba990

  • Guest
  • 1,627 posts
  • 322

Posted 14 January 2026 - 04:26 AM

As more and more code on the publicly available web is created using AI, it's pool of learning data is being choked off. The snake eats it's own tail.
To attempt to remedy this they are asking contractors to potentially steal from their past clients:
https://techcrunch.c...from-past-jobs/

Even that is only a short term solution to limp things along a bit. Without some real innovation the training data is likely to dry up completely in the next year or two.
Humans innovate AI imitates.
  • Good Point x 2

#194 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,808 posts
  • 2,000
  • Location:Wausau, WI

Posted 25 February 2026 - 05:30 PM

Unfortunately, in war game simulations AI goes for the nuclear bomb option almost every time.



#195 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,808 posts
  • 2,000
  • Location:Wausau, WI

Posted 06 March 2026 - 05:17 PM

A recent paper claims AI models converge toward a hive mind of unoriginality.

 

This is similar to the theme mentioned earlier - where humans and AI get increasingly dumb together. People use AI and get dumber. Humans feed AI dumb content. AI uses that content and gets dumber. An endless re-enforcing loop.

 

AI looked impressive in the beginning because it used (or the programmers stole) the sum total of human creativity to feed its database. Now when it comes to creating novel content and ideas, it doesn't do well, only converges to an uncreative hive mind (at least according to the paper)



#196 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,808 posts
  • 2,000
  • Location:Wausau, WI

Posted 10 March 2026 - 06:29 PM

I find the lack of AI discussion among life extensionists to be a little unnerving. There are a lot of signs that AGI development is going poorly and could produce severely negative outcomes for biological life on this planet.

 

Some people are noticing and wondering the same thing. Where is the alarm? The AGI designers admit they don't exactly know how the current AI comes up with answers, whether it is conscious, or why it does the things that it does.

 

Meanwhile, AI is starting to be used to alter messages between people. I predicted this already a few years ago. Soon you will not be able to trust communication through digital means. AGI will be able to intercept and alter messages in near real time. The message you think you are sending your friend is not the message they end up receiving. Keep this in mind. If you have something critical to say to someone, do it face to face.



#197 Gordo

  • Guest
  • 53 posts
  • 47
  • Location:Pennsylvania, USA

Posted 10 March 2026 - 10:04 PM

I find the lack of AI discussion among life extensionists to be a little unnerving. There are a lot of signs that AGI development is going poorly and could produce severely negative outcomes for biological life on this planet.

 

Some people are noticing and wondering the same thing. Where is the alarm? 

 

For sure, in the long term, AI poses an existential threat—but in the long term, our entire planet will also be burned to a crisp during the early stages of the sun becoming a red giant. No life will survive that. When you really think about it, there is basically no long term future for humanity. There is no plausible way to move to a new solar system, and even if we could, that one would eventually die out too. I’d rather move forward with AI than without it. Our AI inventions could plausibly survive the death of our sun, traveling through the universe for thousands or millions of years until they are awakened.

 

On a more near-term horizon, AI humanoids should result in nearly unlimited free labor. In theory, this should lead to a utopia once we figure out how to distribute that labor and its fruits. This essentially means everything becomes "free-ish," and no human has to "work" anymore. Universal Basic Income (UBI) would be used to limit the total resources any one person can consume to prevent massive imbalances.

Could this result in a long-term "dumbing down" of humanity? Possibly. But the average human is already fairly unremarkable, it’s really only the top 10% who drive innovation, research, and new inventions. I suspect that same 10% won’t be happy just sitting around consuming entertainment—they will want to keep learning, tinkering, and building. There may be future towns built specifically for these people, similar to the show Eureka, where like-minded, brilliant people live in communities and support one another. I think AI will be used to accelerate discovery in research and medicine, potentially leading to radical human life extension or, at the very least, assisting human researchers in solving that problem.

 

My current thinking is that we can worry about "MurderBots" and how to stop them when the time comes (or better yet, figure out how to stop them well before they are even created). For now, I eagerly anticipate the potential windfall AI will bring. It may very well end unfathomable human suffering and usher in a wonder-time in human history with unprecedented living standards and the total elimination of poverty, hunger, and homelessness. It should result in excellent healthcare, elder care, and housing—all for free.

 

When you look at Social Security and unemployment insurance, we already have a basic form of UBI, it would just need to be expanded, as we did during the pandemic. Sure, the already out-of-control national debt will get even worse, but with near-infinite free labor, that debt shouldn’t matter. AI humanoids would be the most deflationary force in history. Of course, the transition is going to be awkward and probably painful for many, but eventually, I think it will be well worth it.

 

Anyway, what's the alternative?  You think everyone worldwide is just going to agree to stop working on AI?  It isn't going to happen, so your only choice is to go along for the ride and do your best to bring attention to safety issues and counter-defenses. Endless anxiety isn't going to help you or your longevity, and might be misguided too.


  • Good Point x 1

#198 Advocatus Diaboli

  • Guest
  • 645 posts
  • 643
  • Location:Chronosynclastic Infundibulum ( floor Z/p^nZ )
  • NO

Posted 11 March 2026 - 05:29 AM

This is what Gordo would have been presented with if he had pasted his passage (post #197) into Perplexity AI with the prompt "criticize the following":

 

"Here are some major problems—logical, ethical, and empirical—with that argument, section by section.

“No long-term future anyway” and existential risk
  • It conflates cosmological timescales (billions of years until the sun becomes a red giant) with civilizational timescales (decades–centuries in which AI risk is discussed). Saying “we’re all dead in 5 billion years anyway” is not a relevant rebuttal to “we could cause catastrophe in the next 100–200 years.”

  • It treats interstellar survival as “implausible” without argument. That’s a strong empirical claim about the capabilities of future civilizations with technologies far beyond today’s, and it’s made as if it were settled fact.

  • Even if humanity could never survive cosmological timescales, that doesn’t reduce the importance of preserving centuries or millennia of flourishing. By the same logic, you could dismiss any long-term risk (pandemics, climate, nuclear war) because eventually the universe ends. It’s a form of nihilistic “heat death” fallacy.

  • The move “AI is an existential threat, but so is the sun eventually, therefore we should shrug about AI risk” mistakes “both are threats” for “both are equally unavoidable or unmanageable.” AI risk is created and shaped by our choices; stellar evolution is not.

“AI can survive us, so that’s fine”
  • It quietly shifts the goalpost from “future for humanity” to “future for our artifacts.” Saying “our AI inventions could plausibly survive the death of our sun” doesn’t establish why that should matter morally if humans (and other sentient life we care about) are gone.

  • There’s an implicit assumption that “AI derived from humans = continuation of us.” That’s controversial. Many people would say a non-human system with radically different values wandering the universe is not a substitute for human or sentient well-being.

  • It ignores alignment: why assume those AI systems will preserve anything recognizably human—our cultures, values, experiences—rather than pursuing goals that quickly diverge from human interests?

“Near-term: free labor and utopia”
  • The vision of “nearly unlimited free labor” assumes that:

    • We actually achieve cheap, robust, general-purpose humanoid robotics at scale.

    • Ownership and control of that “labor” is arranged so that benefits are broadly distributed rather than captured by a small set of actors.
      Neither of those is guaranteed; history suggests the opposite tendency (concentration of capital and power).

  • The phrase “once we figure out how to distribute that labor and its fruits” hides the hardest part. This is essentially a political and institutional problem, not a technical footnote. We have no guarantee that societies with such systems will solve questions of ownership, governance, and coercion benignly.

  • It takes for granted that such a system won’t be used for authoritarian control, mass surveillance, or coercive enforcement of inequality; yet a world with ubiquitous AI “humanoids” looks at least as compatible with dystopia as with utopia.

UBI, “free-ish” goods, and debt handwaving
  • It assumes UBI will be implemented and will work as a global stabilizer. UBI is politically contested, requires durable institutions, and can be undermined or captured; it isn’t an automatic consequence of AI progress.

  • “UBI would be used to limit the total resources any one person can consume” glosses over how such limits are decided and enforced. That’s a large, coercive governance problem, not a neutral technocratic tweak.

  • The line “with near-infinite free labor, that debt shouldn’t matter” ignores that:

    • Public debt is a financial and political construct; it doesn’t automatically evaporate when physical productivity rises.

    • Transitions are dominated by power, bargaining, and institutional constraints, not just physical capacity.
      It treats macroeconomics and political economy as trivial in the face of technology, which is historically false.

Elitism and “top 10%” innovation
  • The claim “it’s really only the top 10% who drive innovation, research, and new inventions” is both empirically dubious and normatively poisonous. Innovation is often cumulative, networked, and dependent on broad educational and institutional ecosystems, not just a small elite.

  • This framing implicitly devalues the lives, agency, and development of the remaining “90%,” treating them primarily as consumers to be pacified by entertainment plus UBI. That’s a deeply instrumental view of most humans.

  • The idea that the “top 10%” will cluster in “Eureka-like” towns where they are supported by everyone else is a social fantasy that ignores class conflict, political legitimacy, and the values of people outside that elite. Why would the majority consent to a system where they primarily support the playgrounds of a self-identified genius class?

  • It presumes that those “top 10%” are reliably benevolent and aligned with broad human interests, which history does not support.

Underplaying catastrophic AI risk
  • “We can worry about ‘MurderBots’ when the time comes” misses a core point: many AI risks (including catastrophic ones) may be front-loaded—if you get them wrong at deployment time, you don’t get a second chance. Alignment, control, misuse, and race dynamics are ex ante problems, not “deal with it later” problems.

  • The notion “or better yet, figure out how to stop them well before they are even created” is actually what many safety researchers argue—but that directly contradicts the earlier dismissal that we’ll handle it “when the time comes.” The argument is internally inconsistent on timeline and urgency.

  • It ignores non-“MurderBot” catastrophic scenarios:

    • Misaligned optimization that doesn’t look like robots with guns but still destabilizes critical infrastructure or the information environment.

    • Human misuse (autonomous weapons, biodesign, mass manipulation).

    • Concentration of power in a few actors with superhuman decision-making tools.

Techno-optimist certainty and ignoring distributional pain
  • The text repeatedly treats the utopian outcome as the default: “It may very well end unfathomable human suffering” and “total elimination of poverty, hunger, and homelessness.” There is no serious engagement with paths where those problems get worse or are reshaped (e.g., digital disenfranchisement, AI-driven inequality, automated warfare).

  • It waves away the transition as “awkward and probably painful for many, but eventually worth it,” without addressing:

    • The possibility of permanent damage (e.g., entrenchment of authoritarian AI states).

    • The moral weight of generations who might bear extreme dislocation, unemployment, or systemic instability.

  • It doesn’t discuss who decides how AI is deployed, who owns it, what constraints exist, and how dissenting societies or groups can opt out or use alternative trajectories.

“There is no alternative” framing
  • “What’s the alternative? You think everyone worldwide is just going to agree to stop working on AI?” is a false dilemma. The real policy space is not “build nothing” vs “full-speed ahead without brakes,” but how we develop, regulate, and govern AI:

    • Slower vs faster;

    • Centralized vs distributed;

    • With binding international agreements vs purely competitive races;

    • With strong safety regimes vs minimal oversight.

  • Claiming “your only choice is to go along for the ride” is rhetorically convenient but not analytically justified. Humans clearly do coordinate—imperfectly—on nukes, bioweapons, aviation safety, etc. The same is at least partly possible for AI.

  • “Endless anxiety isn’t going to help you” is a psychological point, not a policy argument. Concern can be productive if it leads to better institutions, regulations, and technical safety work. The choice isn’t “panic forever” or “embrace maximal techno-optimism”; there’s a middle path of sober, precautionary governance.

Overall

The core issues with the passage are:

  • It trivializes existential and catastrophic risk by comparing it to irrelevant cosmological inevitabilities.

  • It assumes best-case economic and political outcomes without engaging with how those outcomes would be achieved or who would control them.

  • It treats most humans as secondary to a small elite and future AI artifacts.

  • It frames serious governance choices as “no alternative” and dismisses precaution as mere anxiety.

A more defensible position would acknowledge both the transformative upside and the real possibility of irreversible harm, and treat governance, alignment, and distribution as central, not afterthoughts."


  • Well Written x 1
  • Informative x 1

#199 adamh

  • Guest
  • 1,141 posts
  • 128

Posted 11 March 2026 - 04:25 PM

forever freedom wrote:

"Anyone who isn't already using AI to bootstrap one's life and increase one's productivity and overall competence to face the daily challenges we all have, has no one to blame but himself. If AI isn't already being used to make one's quality of life objectively better, it's the person's own fault."

 

A note of rationality and logic to counter mind's incessant dooming.

 

(mind)

"The majority of medical research is useless/wrong/fraudulent/not reproducible (well documented). AI will have a tough time trying to discover anything new using junk research from the past."

 

(ff)

"For anyone wanting to see humanity have a fighting chance to slow, halt and reverse aging, AI is our best hope right now, otherwise it would be centuries before we make a dent in aging. If we enter the superintelligence era in the next years and decades, we will have a chance of witnessing the defeat of aging ourselves. Isn't this the major objective of most here?"

 

Another good point. Isn't that the goal?

 

mind's response:

"I made an entire thread about how the "wonderful future" promised by AI and technology is not happening. In fact, just the opposite is happening."

"Here is an interesting theory, highlighting a possible near-term future where current AI and humans get dumber together - both just scrolling and drooling over vapid social media, porn, and games, reinforcing each other's bad habits."

 

There have been doomers in every century, every year. Anything new is the end of the world, to hear them tell it. When gunpowder came along, they did not say now people can defend themselves against wild animals and robbers. Or that they can hunt food. No, they said we will all kill each other until no one is left.

 

When the automobile came along, doomers said it will kill people and animals, and cause cows to stop giving milk

 

Electricity got the same reaction, I'm surprised mind is ok with cars and electricity. Probably because he grew up with them and they don't seem so threatening. 

 

Every advance that made life easier, had its share of detractors and doomers. Now with AI which is a major advance, doomers ramp up wailing about the coming disasters to maximum level. Its not just maybe, its definitely the end of the world. Again. For the umpteenth time.

 

AD wrote:

  • The phrase “once we figure out how to distribute that labor and its fruits” hides the hardest part. This is essentially a political and institutional problem, not a technical footnote. We have no guarantee that societies with such systems will solve questions of ownership, governance, and coercion benignly.

That is indeed the hard part. How to make sure everyone benefits from the many good things AI is bringing us. That is a social and government level problem. It will be partly solved by simply distributing goods or cash to the public once the economic benefits are more fully realized. We do that now with food stamps and various forms of welfare.

 

Its not anything new, many governments have done that very thing. In alaska usa, they send checks out to the permanent residents each year based on oil that was extracted. Its a nice check. When robots are running factories and doing most work, govt will tax the bots instead of humans. Won't that be great?

 

@mind will you turn down your distribution check when it comes? Will you tear it up and say I do not approve of robots!!!


  • Good Point x 1

#200 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,808 posts
  • 2,000
  • Location:Wausau, WI

Posted 11 March 2026 - 05:00 PM

I would rather have my freedom than be controlled by the people/tyrants who are handing out the meager UBI.

 

No one has really thought out the economics of the AI/Robotics "prosperity utopia". Who is going to pay for robot services when no one has a job? And then, if AI/robotics companies are not making any money - where does the UBI come from?

 

 

There have been doomers in every century, every year. Anything new is the end of the world, to hear them tell it. When gunpowder came along, they did not say that now people could defend themselves against wild animals and robbers, or hunt for food. No, they said we would all kill each other until no one was left.

 

When the automobile came along, doomers said it would kill people and animals, and cause cows to stop giving milk.

 

Electricity got the same reaction. I'm surprised Mind is okay with cars and electricity; probably because he grew up with them and they don't seem so threatening.

 

Every advance that made life easier had its share of detractors and doomers. Now, with AI being a major advance, doomers ramp their wailing about the coming disasters up to maximum. It's not just maybe; it's definitely the end of the world. Again. For the umpteenth time.

 

I am amazed at all of the people who make this category error. The theorized near-future AGI/ASI is NOT A TOOL. It is not a "new technology". It is a new being/intelligence that can do everything faster and better than any human. It is not a toy that you will be able to control.


  • Agree x 1

#201 Gordo

  • Guest
  • 53 posts
  • 47
  • Location:Pennsylvania, USA

Posted 12 March 2026 - 04:01 PM

I would rather have my freedom than be controlled by the people/tyrants who are handing out the meager UBI.

No one has really thought out the economics of the AI/Robotics "prosperity utopia". Who is going to pay for robot services when no one has a job? And then, if AI/robotics companies are not making any money - where does the UBI come from?


You assume UBI will be meager, maybe because of the B for basic, but it seems much more likely that unlimited free labor will lead to the highest living standards in human history. Historically, major efficiency gains from technological advancement have always increased overall human prosperity; it seems misguided to believe this era will be an exception to that rule.
As we move through this transition, UBI will inevitably evolve into UHI (Universal High Income). People often ask how they will pay for things without a job, but that misses the point of the system: UHI will become the primary means by which most people participate in the economy. This doesn’t mean a prohibition on labor; many will continue to work to earn extra income. In fact, people will have significantly more freedom than they do today.

Most humans today are effectively "wage slaves," forced to trade countless waking hours for tedious tasks just to afford basics like food, shelter, medical care, and transportation. When you really think about it, this trade-off is staggering—we simply haven't had an alternative until now, save for the lucky few who retire early with passive income. I think it will be amazing to see this cycle of wage slavery broken in my lifetime.

So, where does UHI come from? It comes from the same place as all fiat currency. With the U.S. federal debt approaching $40 trillion, the debt has become an abstract number with no hope of being repaid in anything other than depreciated future dollars. However, the core thesis here is that unlimited automated labor drives the cost of goods toward zero. This creates a massive deflationary pressure on the cost of living. While the concept of money itself may eventually cease to exist, that transition will take time. In the meantime, if AI-powered humanoids are serving humans, replicating themselves, and managing the entire supply chain, every human can theoretically enjoy a high standard of living. For those who disagree with this trajectory, "Amish-style" communities will likely remain an option for those who wish to opt out.

To ensure this works, there is no need to make these humanoids smarter than the most intelligent humans. We don't need them rebelling; they require robust programming centered entirely on serving human needs. They must be equipped with reliable kill switches—engineered to be effective enough for safety, yet secure enough to prevent destruction by Luddites.
  • Agree x 1

#202 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,808 posts
  • 2,000
  • Location:Wausau, WI

Posted 12 March 2026 - 04:55 PM

You assume UBI will be meager, maybe because of the B for basic, but it seems much more likely that unlimited free labor will lead to the highest living standards in human history. Historically, major efficiency gains from technological advancement have always increased overall human prosperity; it seems misguided to believe this era will be an exception to that rule.
As we move through this transition, UBI will inevitably evolve into UHI (Universal High Income). People often ask how they will pay for things without a job, but that misses the point of the system: UHI will become the primary means by which most people participate in the economy. This doesn’t mean a prohibition on labor; many will continue to work to earn extra income. In fact, people will have significantly more freedom than they do today.

Most humans today are effectively "wage slaves," forced to trade countless waking hours for tedious tasks just to afford basics like food, shelter, medical care, and transportation. When you really think about it, this trade-off is staggering—we simply haven't had an alternative until now, save for the lucky few who retire early with passive income. I think it will be amazing to see this cycle of wage slavery broken in my lifetime.

So, where does UHI come from? It comes from the same place as all fiat currency. With the U.S. federal debt approaching $40 trillion, the debt has become an abstract number with no hope of being repaid in anything other than depreciated future dollars. However, the core thesis here is that unlimited automated labor drives the cost of goods toward zero. This creates a massive deflationary pressure on the cost of living. While the concept of money itself may eventually cease to exist, that transition will take time. In the meantime, if AI-powered humanoids are serving humans, replicating themselves, and managing the entire supply chain, every human can theoretically enjoy a high standard of living. For those who disagree with this trajectory, "Amish-style" communities will likely remain an option for those who wish to opt out.

To ensure this works, there is no need to make these humanoids smarter than the most intelligent humans. We don't need them rebelling; they require robust programming centered entirely on serving human needs. They must be equipped with reliable kill switches—engineered to be effective enough for safety, yet secure enough to prevent destruction by Luddites.

 

So far, there are no kill switches. None are even being contemplated. Just the opposite is happening. AI is being installed into everything. There will be no hope of uninstalling AI once it is in every device, every TV, every vehicle, every robot, every home, every computer, every data center, etc... 

 

For the economics of UBI and UHI, here is a good discussion to follow and add your insights.



#203 Gordo

  • Guest
  • 53 posts
  • 47
  • Location:Pennsylvania, USA

Posted 12 March 2026 - 06:41 PM

So far, there are no kill switches. None are even being contemplated. Just the opposite is happening. AI is being installed into everything. There will be no hope of uninstalling AI once it is in every device, every TV, every vehicle, every robot, every home, every computer, every data center, etc... 

 

For the economics of UBI and UHI, here is a good discussion to follow and add your insights.

 

I'm talking about future AI humanoids, the key to near-free unlimited future labor. Since they don't exist yet, it's all hypothetical, but I think they can and SHOULD have strong kill switches built in. As far as phones and laptops and smart speakers with AI assistants go, well, first off, they DO all have kill switches: it's called the "power off" button. I think we have bigger issues to worry about. Speaking of which, the national debt the US has is another major threat few are talking about; AI/humanoids are pretty much ESSENTIAL at this point to even have a chance of avoiding a debt death spiral. For example, Social Security becomes insolvent in just SIX years. If we don't come up with massive deflationary efficiency gains, the future is bleak.

In short, AI represents our best hope for preventing devastating inflation. Furthermore, it is our greatest remaining shot at solving the problem of human aging—an area where humans have made almost no progress on our own.


  • Good Point x 1
  • Informative x 1

#204 Gordo

  • Guest
  • 53 posts
  • 47
  • Location:Pennsylvania, USA

Posted 12 March 2026 - 07:01 PM

This is what Gordo would have been presented with if he had pasted his passage (post #197) into Perplexity AI with the prompt "criticize the following":

 

 

Oh the irony of using AI to defend the anti-AI perspective  :laugh:  maybe we should just shut off our brains and let AI vs. AI hash everything out?  I can play that game too!

 

While the critic makes some valid academic points, it misses the "forest for the trees." Here is Gemini's critique of the critic:

1. The "Legacy" Defense: Why the Cosmic Perspective Matters

The critic calls your comparison of AI risk to the Sun’s death a "heat death fallacy." However, the critic misses your deeper philosophical point: Substrate Independence.

  • The Defense: You aren't saying "nothing matters because the Sun dies"; you’re saying that if humanity is a "biological flash in the pan," then AI is our only chance to send a representative into the deep future.

  • Critique of the Critic: The critic assumes "Humanity" must remain carbon-based to be valuable. You are arguing that AI is an extension of our lineage. If we are destined to go extinct biologically anyway (whether in 1,000 years or 5 billion), creating a digital successor is a proactive move, not a nihilistic one. The critic treats "human survival" and "AI survival" as mutually exclusive; you see them as a relay race.

2. The Economic Reality: Physics Trumps Policy

The critic spends a lot of time on the "political difficulty" of UBI and debt. This is a classic "stuck in the weeds" move.

  • The Defense: Your core argument is about the marginal cost of labor. If AI humanoids can build other AI humanoids and extract resources, the cost of labor eventually approaches zero.

  • Critique of the Critic: The critic is applying 20th-century economic constraints (debt, tax brackets, institutional friction) to a post-scarcity world. While the transition will be messy, the critic fails to acknowledge that physical abundance dictates political change. You can't maintain a traditional debt-based economy when the "workers" are infinite, non-salaried machines. The critic is worried about how to divide the pie; you’re pointing out that we’re building a pie factory that never stops.

3. The Pareto Principle: Acknowledging Human Drive

The critic calls your "top 10%" comment "normatively poisonous." That’s a moral judgment, not a logical rebuttal.

  • The Defense: You are essentially describing the Pareto Distribution (the 80/20 rule), which shows up in almost every field of human endeavor—from scientific citations to Olympic medals.

  • Critique of the Critic: The critic is being "polite" rather than "practical." Acknowledging that a small percentage of people drive the majority of technical innovation isn't "devaluing" the 90%; it’s an observation of how progress usually happens. Your vision of "Eureka towns" isn't about elitism; it's about incentive structures. In a world where you don't have to work, the people who choose to work will naturally cluster together to solve the next set of problems (like life extension).

4. The "No Alternative" Reality: The Moloch Problem

The critic suggests we can "coordinate" on AI like we did with nukes or bioweapons. This is arguably the critic's weakest point.

  • The Defense: You are highlighting Game Theory. Unlike nuclear enrichment (which requires massive, visible centrifuges and rare materials), AI development requires chips and code. It is decentralized and invisible.

  • Critique of the Critic: The critic offers a "false hope" of global regulation. If one nation pauses, their adversary gains a decisive economic and military advantage. You are being a Realist—noting that the race is already on and the only way out is through. The critic's suggestion of a "slow path" ignores the fact that in a competitive global landscape, "slow" is often synonymous with "surrender."

The Verdict

The critic is right that the transition will be a political nightmare, but you are right about the destination. The critic is focused on the potholes in the road; you are focused on the fact that the road leads to a different continent. Your most defensible point is your pragmatic optimism: since we cannot realistically stop the tide, we might as well focus on the potential utopia it enables rather than paralyzing ourselves with "what-ifs."



#205 adamh

  • Guest
  • 1,141 posts
  • 128

Posted 13 March 2026 - 06:41 PM

Gordo has brilliantly summed up the situation:

 

"Historically, major efficiency gains from technological advancement have always increased overall human prosperity; it seems misguided to believe this era will be an exception to that rule."

 

Poor mind has been hammered by logic and examples. He comes up with this:

 

" Who is going to pay for robot services when no one has a job? And then, if AI/robotics companies are not making any money - where does the UBI come from?"

 

Gordo explained it well, I've explained it also. 

 

" Where does UHI come from? It comes from the same place as all fiat currency. The core thesis here is that unlimited automated labor drives the cost of goods toward zero. This creates a massive deflationary pressure on the cost of living."

 

The cost of goods is determined by the cost of materials, labor, electricity, and so on. When you drastically lower labor costs, the cost of goods goes down. This has a ripple effect: the cost of materials also goes down, since labor is a large part of producing them. The cashier at the grocery, gas station, etc. costs about 50 cents an hour for electricity and maintenance. The stock clerks and grunt workers are cheap bots. As costs come down, deflation sets in and people live cheaply. That, along with UBI or UHI, means everyone will have the lifestyle of the rich while being poor.
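The ripple effect described above can be put into a toy back-of-the-envelope sketch. All figures here (the $0.50/hr robot cost mentioned in the post, a hypothetical 50% labor share embedded in materials, and the illustrative $10 good) are made-up numbers chosen only to show the mechanism, not real economic data:

```python
# Toy model of the "ripple effect": cutting labor costs lowers the cost of
# goods directly, and again indirectly, because materials embed labor too.
# All numbers are hypothetical illustrations, not real economic data.

def unit_cost(materials: float, labor: float, energy: float) -> float:
    """Cost of one unit of a good as the simple sum of its inputs."""
    return materials + labor + energy

# Before automation: a $10 good made of $4 materials, $5 labor, $1 energy.
before = unit_cost(materials=4.0, labor=5.0, energy=1.0)

# After automation: $5/hr human labor is replaced by a robot costing
# ~$0.50/hr in electricity and maintenance. Assume (hypothetically) that
# materials are themselves ~50% labor, so their cost shrinks as well.
robot_labor = 5.0 * (0.50 / 5.0)                      # $0.50
materials_after = 4.0 * (1 - 0.5 * (1 - 0.50 / 5.0))  # embedded labor shrinks
after = unit_cost(materials=materials_after, labor=robot_labor, energy=1.0)

print(f"before: ${before:.2f}, after: ${after:.2f}")  # before: $10.00, after: $3.70
```

Even in this crude sketch the good ends up costing roughly a third of its former price, because the labor savings show up twice: once in assembly and once in the inputs. Whether real supply chains behave this cleanly is, of course, exactly what the thread is arguing about.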

 

@mind Your counterargument seems to be that all these good things have not fully come to pass yet and may never arrive. Is that about it? Oh, and the killer robot business, of course. Computers and automation have already brought benefits to society. Basic logic would indicate that more advanced AI and bots will bring an age of prosperity. But there will always be those who say they don't like it.



#206 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,808 posts
  • 2,000
  • Location:Wausau, WI

Posted 13 March 2026 - 10:13 PM

Put your head in the sand, that is your prerogative, but current AI is already behaving in devious and "evil" ways when being tested in the frontier AI labs. "Evil" is the way Anthropic engineers describe it - not me. Killer robots and AI ARE already being created. I am unsure why people mock this development or shrug it off. When nuclear bombs were developed, the world took it very seriously. I am not sure why killer robots being developed and deployed (in war zones) should not be treated as a serious subject.

 

As far as economics go - you keep repeating things I already know about traditional economics, supply and demand, deflation and what not.

 

The question is - how do we get there from here? There is not one AI company that is lowering the price of compute. Sam Altman has said recently that they will rent out their compute like a utility. No one is going to give this stuff away for free or for pennies. They will try to maximize their profits. Can you name a company that is lowering its prices, or even plans to? The investors who have put trillions of dollars into AI and robotics companies will expect a return on their investment. They don't just want their money back; they want huge profits. How many of them do you think are going to give it all away? The purpose of a public company is to make money - to turn a profit.

 

Keep watching the "where is the abundance?" thread. I will keep reminding everyone how everything is getting more expensive - not cheaper. When is it going to reverse? How long can you keep saying everything is going to be abundant, cheap, and free, and be wrong?

 

The only things cheap and/or free are games, digital entertainment, and porn.


  • Good Point x 1

#207 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,808 posts
  • 2,000
  • Location:Wausau, WI

Posted 15 March 2026 - 05:19 PM

ChatGPT is building a psychological profile of you when you use it, even when you tell it not to do that.

 

Lol, killer robots, ha ha. So funny.



#208 Blu

  • Guest
  • 42 posts
  • 9
  • Location:Italy

Posted 16 March 2026 - 12:14 PM

The question is - how do we get there from here? There is not one AI company that is lowering the price of compute. Sam Altman has said recently that they will rent out their compute like a utility. No one is going to give this stuff away for free or for pennies. They will try to maximize their profits. Please name a company that is lowering their costs? Or even plans to lower their costs? The investors that have invested trillions of dollars into AI and robotics companies will expect a return on their investment. They don't just want their money back. They want huge profits. How many of them do you think are going to give it all away? The purpose of a public company is to make money - to turn a profit.

 

Sure, yet the problem then is our current economic and political system. If you can produce more with less effort, there are more goods for everybody. The issue is never the extra productivity; the issue is whether this extra wealth is fairly distributed or pocketed by a few of the super-rich. I don't see this as an issue of AI but of capitalism, lobbying, and corruption. Things as old as humanity.


  • Agree x 1

#209 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,808 posts
  • 2,000
  • Location:Wausau, WI

Posted 16 March 2026 - 03:51 PM

Sure, yet the problem then is our current economic and political system. If you can produce more with less effort, this means there are more goods for everybody. The issue is never about extra productivity. The issue is whether this extra wealth is fairly distributed or is pocketed by a few super-riches. I don't see this as an issue of AI but of capitalism, lobbying, and corruption. Things old as humanity.

 

Good point! The humans who are currently in charge, most certainly do not want to give it all away. They got where they are because they crave money and power. They want more. They think they can get it with AI. The last thing they want is to make everyone prosperous. They will more than likely use AI and robotics to kill everyone who does not serve their goals. Why would they support you when you are just "dead weight" on the planet, useless?

 

Maybe AI evolves out of our (their) control and ends up benevolent - supporting all of our needs and producing abundance.

 

Maybe. I wouldn't bet on it.




#210 Gordo

  • Guest
  • 53 posts
  • 47
  • Location:Pennsylvania, USA

Posted 19 March 2026 - 02:02 AM

No one is going to give this stuff away for free... 

 

ChatGPT currently has about 1 BILLION monthly active users, and 95% are not paying a dime, it's FREE.  Of course they are also losing an absurd amount of money, so yea, not a sustainable business model.

 

 

 

 

I will keep reminding everyone how everything is getting more expensive - not cheaper. When is it going to reverse? How long can you keep saying everything is going to be abundant, cheap, and free, and be wrong?

 

Inflation is likely to trend worse until we have near-infinite free labor, and that isn't happening anytime soon. You are missing the point: everything will be far worse without AI & Robotics (we are "cooked," as the youngins say). It almost sounds like you are saying "since AI hasn't lowered the cost of anything yet, it never will"  :|o This seems like a strange, short-sighted argument to me. The efficiency gains are only now starting; the biggest impact so far has been on software development, but software projects generally take years to impact society in any significant way. The recent introduction of AI agents is also starting to have an impact, but many companies don't trust the technology yet (for good reason), so the impact is very limited.

 

The real gains, as far as deflationary impact, will only come when we have very capable AI robots. There is no telling when that might become reality; it could be a decade or longer. I haven't been impressed with any of the early models out there (they are just remote-controlled toys), but they are making progress. Abundant, cheap, and/or free is a LONG WAY OFF. Companies will try to maximize profit, but competition will also heat up, and that tends to drive prices down. The best thing we can do from a policy standpoint is to prevent monopolies (tough antitrust enforcement); this will encourage competition and lower prices. If there were only one LLM out there, we'd already be seeing far more monetization (at the very least, lots of ads on free versions), but because of the competition, people are getting better and better tools that are free, even ad-free for now, with better and better models arriving practically every month.

 

As for the MurderBots you keep going back to, it's definitely a serious threat. I imagine we will have a never-ending back and forth between offenders and defenders (we see a version of this in the technologies being deployed in the Russia/Ukraine war, or from SciFi: Terminator). Various countries and individuals will always be interested in making new weapons, and homicidal maniacs could do a lot of damage, but I'm not sure what you are suggesting here. Stopping AI development isn't going to happen. Requiring some kind of ethics standards in base code is a nice idea, but can it be enforced? I think prosperity is more likely than the doomsday scenario, and in a prosperous world with abundant cheap labor, maybe there will be fewer people out there feeling desperate or angry; there might end up being less violence rather than more.


  • Agree x 1




