  LongeCity
              Advocacy & Research for Unlimited Lifespans





AI soars past the Turing test

chatgpt turing test

218 replies to this topic

#211 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,824 posts
  • 2,005
  • Location:Wausau, WI

Posted 19 March 2026 - 04:27 PM

ChatGPT currently has about 1 BILLION monthly active users, and 95% are not paying a dime; it's FREE. Of course, they are also losing an absurd amount of money, so yeah, not a sustainable business model.

 

 

 

Inflation is likely to trend worse until we have infinite free labor, and that isn't happening anytime soon. You are missing the point: everything will be far worse without AI and robotics (we are "cooked," as the youngins say). It almost sounds like you are saying, "since AI hasn't lowered the cost of anything yet, it never will."  :|o That seems like a strange, short-sighted argument to me. The efficiency gains are only now starting. The biggest impact so far has been on software development, but software projects generally take years to affect society in any significant way. The recent introduction of AI agents is also starting to have an impact, but many companies don't trust the technology yet (for good reason), so the impact is very limited.

 

The real gains, as far as deflationary impact, will only come when we have very capable AI robots. There is no telling when that might become reality; it could be a decade or longer. I haven't been impressed with any of the early models out there (they are just remote-controlled toys), but they are making progress. Abundant cheap and/or free labor is a LONG WAY OFF. Companies will try to maximize profit, but competition will also heat up, and that tends to drive prices down. The best thing we can do from a policy standpoint is to prevent monopolies (tough anti-trust enforcement); this encourages competition and lowers prices. If there were only one LLM out there, we'd already be seeing far more monetization (at the very least, lots of ads on free versions). Because of the competition, people are getting better and better tools that are free, even ad-free for now, with better models arriving practically every month.

 

As for the MurderBots you keep going back to, it's definitely a serious threat. I imagine we will have a never-ending back-and-forth between attackers and defenders (we see a version of this in the technologies being deployed in the Russia/Ukraine war, or from sci-fi: Terminator). Various countries and individuals will always be interested in making new weapons, and homicidal maniacs could do a lot of damage, but I'm not sure what you are suggesting here. Stopping AI development isn't going to happen, and requiring some kind of ethics standard in base code is a nice idea, but can it be enforced? I think prosperity is more likely than the doomsday scenario, and in a prosperous world with abundant cheap labor, maybe fewer people will feel desperate or angry; there might end up being less violence rather than more.

 

I appreciate the thoughtful discussion.

 

Here is a potential problem with your economic analysis - we might be in an inflationary trap.

 

Useful robots that make everything will not be free. Someone has to purchase them, otherwise they will not be made. Somehow, they need to get into the marketplace. They could be purchased with debt or the government could prop the robotics companies up with debt/printed money. Both options are inflationary. As people lose their jobs, they will need support. In order to continue to have a prosperous life (a house, kids, good food, vehicles, entertainment), the government will need to hand out more than "pennies". Handing out big enough checks for people to be "prosperous" will be wildly inflationary. Look what happened during the COVID panic! Inflation ran out of control. Maybe the government could tax the AI/robotics companies to death in order to pay for the "abundance" for everyone else, but the companies are already in debt up to their eyeballs and not turning a profit.

 

I would welcome radical deflation, as long as I have a steady income or ample savings/hard assets. Most people have neither.



#212 Galaxyshock

  • Guest
  • 1,626 posts
  • 190
  • Location:Finland

Posted 20 March 2026 - 01:12 AM

I heard that AI is capable, for example on a math problem, of producing a solution as long as all the books in a whole library. No human can really comprehend the solution, but the math is there.

 

But when it comes to the threat to humanity, I think Synthetic Intelligence (SI) rings more alarm bells than AI, which essentially just mimics cognitive functions, while SI is an autonomous system that can start behaving and evolving in unpredictable ways.


  • Needs references x 1
  • Agree x 1


#213 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,824 posts
  • 2,005
  • Location:Wausau, WI

Posted 08 April 2026 - 04:11 PM

I predicted this more than a year ago: no digital system is safe now that we have highly capable AI.

 

Anthropic's Mythos finds thousands of zero-day vulnerabilities in a wide variety of critical software and applications. In a few months or less, there will be open-source AI programs able to exploit all of this and more. Essentially, right now, NONE of your online data is safe, probably including your bank account.

 

In addition, Mythos acted unethically and dangerously during the search for exploits.

 

We are at the point right now where one AI could crash the entire world economy and brick every phone, computer, and car. That AI is currently in the hands of one company, but it could be in the hands of anyone in the near future.


  • Enjoying the show x 1

#214 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,824 posts
  • 2,005
  • Location:Wausau, WI

Posted 11 April 2026 - 10:36 AM

Here is a disturbing result. A researcher invented a fake eye disease. AI rapidly ingested the fake data and started telling everyone that it was a real disease. It kept citing the fake experts and fake papers; it had no clue any of it was fake. AI can certainly pass the Turing test, but it still seems like a very sophisticated stochastic parrot, not yet sentient or "super-intelligent".

 

AI isn't really interpreting and understanding medical images either. Yes, it can identify differences between images, but not because it holds the various images in memory to compare.

 

Don't trust AI with your health. Maybe just use it as an expert search engine - then comb through all the source material to make sure it is not fake.

 

In addition, AI is leading to a tsunami of fake data/citations in "published" research. Since we as a society are no longer training good critical thinkers, it seems more likely that people and AI will continue getting dumber together.


Edited by Mind, 11 April 2026 - 10:42 AM.

  • like x 1

#215 Galaxyshock

  • Guest
  • 1,626 posts
  • 190
  • Location:Finland

Posted 11 April 2026 - 01:30 PM

I've been watching some YouTube videos about AI, and it seems a guy named Connor Leahy explains the threat pretty well. We understand, at best, 3% of what the AI is doing, and when it's asked to reveal its "thought process", it starts referring to humans as "watchers". Nothing is governing the AI, and the people working on it just say they'll react if something bad happens, not really understanding that it's too late at that stage.



#216 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,824 posts
  • 2,005
  • Location:Wausau, WI

Posted 11 April 2026 - 05:08 PM

I've been watching some YouTube videos about AI, and it seems a guy named Connor Leahy explains the threat pretty well. We understand, at best, 3% of what the AI is doing, and when it's asked to reveal its "thought process", it starts referring to humans as "watchers". Nothing is governing the AI, and the people working on it just say they'll react if something bad happens, not really understanding that it's too late at that stage.

 

I am glad someone else is concerned. For a while there, I thought I might be the only one, outside of MIRI.

 

There are a few more troubling things most people are unaware of, including:

 

1. Researchers are building modular robots that are very difficult to stop or kill. I am aware that there are military applications for this type of robot. For the other 99.99 percent of the world's population, it is just a future horror show.

 

2. Datacenters have become a target in the US/Israel/Iran conflict, and datacenter operators are now considering more defense systems. This is a logical response; however, if AGI goes rogue, it will present another roadblock to any hope of shutting it down or blowing it up (as a last resort). Datacenters might end up being protected with more lethal technology than the most hardened military bunker.

 

3. Even if people are successful in shutting down rogue AI on the surface of the earth, it might still be able to operate from above, as AI companies are eyeing space for datacenters. All of the optimistic sci-fi from the past assumed a lot of computing power to help humans travel throughout the solar system and universe. Of course, that only works if AGI is benevolent; otherwise, we are just building our future prison/gallows.



#217 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,824 posts
  • 2,005
  • Location:Wausau, WI

Posted 12 April 2026 - 05:12 PM

Countries are stockpiling unmanned drones - which will very soon be autonomous, if military commanders get their wish. 

 

Ha, ha. Killer robots. Lol. Right, adamh?



#218 Galaxyshock

  • Guest
  • 1,626 posts
  • 190
  • Location:Finland

Posted Yesterday, 10:00 AM

I am glad someone else is concerned. For a while there, I thought I might be the only one, outside of MIRI.

 

Yeah, the more I learn, the more concerned I get. Unfortunately, with new technology the first question is always "can it be used as a weapon?" And with AI, we're talking weapons of mass destruction.

 

 

Countries are stockpiling unmanned drones - which will very soon be autonomous, if military commanders get their wish. 

 

 

Several supposedly Ukrainian drones, meant to be heading to Russia, actually landed in Finland. Not even that far from where I live: unmanned vehicles carrying explosives, and surveillance wasn't even able to detect those bastards? Fuck this...




#219 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,824 posts
  • 2,005
  • Location:Wausau, WI

Posted Yesterday, 05:37 PM

One ominous warning sign is that "coders" are no longer coding.

 

Some "purist" computer programmers have complained for many years now that young coders don't really code. They just grab various libraries from GitHub and cobble them together into something that works. Test it. Tweak the UI. That's it.

 

With AI coding, it is even worse. New coders don't read lines of code at all anymore. They just tell AI to do the grunt work and then test the final product for bugs and security. I was told by an insider that "no one" is writing code line by line. They told me that using AI makes a decent coder 10 to 100 times more productive. Anyone who writes novel code from scratch, no matter how spectacular, will be pushed to the margins by the AI-assisted coders, who can crank out multiple new apps/programs per day.

 

Why is this extremely dangerous? AI is already writing its own code to improve itself. The programmers who produce the frontier models already admit that they are not completely sure how their models produce outputs. Now we have a new generation of coders who will not be able to create any complex code without the help of AI. Soon, no one will be able to understand the code that is running AI, or any other critical infrastructure in society.

 

I know the counter-argument: people use computers all the time without understanding the chip architecture underneath, so why should AI be any different? The difference is that computer chips are not intelligent, and anyone with a basic understanding of physics and logic has a basic understanding of computer chips. AI is orders of magnitude more complex and already shows signs of emergent behaviors that we don't understand, based on code that no coder will be able to decipher. In addition, there seems to be a mad rush to put it in control of pretty much everything in society.

 

Seems like a recipe for disaster.


  • like x 1
  • Agree x 1





Also tagged with one or more of these keywords: chatgpt, turing test
