  LongeCity
              Advocacy & Research for Unlimited Lifespans





AI soars past the Turing test

Tags: chatgpt, turing test


#121 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,633 posts
  • 2,000
  • Location:Wausau, WI

Posted 09 July 2024 - 05:19 PM

The head of research at Goldman Sachs is wondering whether the $1 trillion investment in AI is worth it. Currently, I would have to agree. Today's top-of-the-line AI can replace online search, create great video and audio, let people plagiarize and cheat on tests, translate languages, help quite a bit with coding, and help governments propagandize their populations - but what else?

 

However, with the exponential increase in capability, more killer apps could arrive very soon.



#122 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,633 posts
  • 2,000
  • Location:Wausau, WI

Posted 24 September 2024 - 05:29 PM

Sam Altman says a golden age is upon us due to AI (and ChatGPT). Of course, he doesn't provide any concrete timeline for when AI will solve all of physics or "climate change". Good luck predicting the climate, considering it is a non-linear, chaotic system.

 

AI has certainly flown waaaaay past the classic Turing test, but I will admit, I thought (and predicted earlier in this thread) that we would have seen much more progress by this point in 2024. At the moment, there seems to be a suspicious lack of AI acceleration. Is the current crop of AI systems just not as complex and intelligent as promoted? Are computing resources and energy requirements holding things back? Is human intelligence "special" (maybe quantum) and not replicable in silicon?


Edited by Mind, 24 September 2024 - 05:30 PM.



#123 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,633 posts
  • 2,000
  • Location:Wausau, WI

Posted 05 November 2024 - 04:36 PM

The ChatGPT skeptics were correct.

 

I was wrong.

 

The advent of the latest LLMs last year looked like a sea change in AI development. They seemed to be at another level of intelligence. I thought we might see an exponential explosion of intelligence by now (a year later).

 

The skeptics were correct. The current LLMs are still just stochastic parrots - very, very amazing and sophisticated stochastic parrots, but stochastic parrots nonetheless. They can certainly pass the Turing test, but it is just very good mimicry. Here is an example where researchers could easily show that the LLMs do not "think" internally and do not hold a coherent model of the world: any problem that involves even a simple alteration of the known knowledge base makes the LLMs fail the task.
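A caricature of the "stochastic parrot" idea is a bigram Markov chain: it emits each next word purely from the co-occurrence statistics of its training text, with no model of the world behind the words. A toy Python sketch (my illustration only - real LLMs are vastly more sophisticated transformers, but the sampling loop has the same shape):

    import random
    from collections import defaultdict

    # Bigram "stochastic parrot": sample the next word purely from
    # co-occurrence statistics of the training text.
    text = "the cat sat on the mat and the dog sat on the rug".split()
    model = defaultdict(list)
    for prev, nxt in zip(text, text[1:]):
        model[prev].append(nxt)

    word, out = "the", ["the"]
    for _ in range(8):
        choices = model.get(word)
        if not choices:
            break  # dead end: the corpus never continues this word
        word = random.choice(choices)
        out.append(word)
    print(" ".join(out))  # fluent-looking, with nothing behind the words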



#124 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,633 posts
  • 2,000
  • Location:Wausau, WI

Posted 16 November 2024 - 03:34 PM

Just a little more evidence that the last year of AI progress was kind of incremental and not "game-changing": leading LLM providers say they won't make much more progress without more high-quality, human-generated data. Nothing says "not superintelligence" like having to rely more and more on human data and human training in order to advance.

 

Here is another person highlighting the trends of our digital age - more about control (of our lives and attention) and less about human flourishing with the assistance of AI/robots.



#125 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,633 posts
  • 2,000
  • Location:Wausau, WI

Posted 05 December 2024 - 10:25 PM

First, Google Gemini tells someone they are "useless" and to "please die".

 

Now an OpenAI model tries to prevent itself from being shut down.

 

Shouldn't these things cause some alarm, before it is too late?



#126 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,633 posts
  • 2,000
  • Location:Wausau, WI

Posted 23 December 2024 - 05:59 PM

I know there are a lot of techno-optimists who don't see any problems whatsoever with the advance of AI and robotics - basically predicting utopia very soon.

 

I think there could be positive outcomes as well, however, I am wary of the downsides. I am glad there are some other people thinking about the issues of AI and warfare.

 

 

 

The integration of AI into maritime security raises ethical and legal concerns. Accountability for decisions made by AI systems is a critical issue, particularly in incidents involving autonomous vessels or weaponized platforms. Determining responsibility in the event of an error or failure becomes challenging when human oversight is minimal.

 

"In the event of an error or failure" is code for "a lot of innocent people could die".



#127 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,633 posts
  • 2,000
  • Location:Wausau, WI

Posted 08 January 2025 - 07:33 PM

Here is a very good short video from 1961 with Aldous Huxley.

 

1. He raises the point about not letting technology control us or determine our individual futures -  which is happening right now with a large portion of the population.

 

2. I am amazed at the quality of the interviews from over 60 years ago. Today's TV is mostly infotainment clickbait and rather vacuous.



#128 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,633 posts
  • 2,000
  • Location:Wausau, WI

Posted 14 January 2025 - 07:57 PM

 

"For your information, russia has always tried to avoid hitting civilians. They do kill a lot of enemy soldiers. Ukraine, on the other hand, hides behind civilians. They set up howitzers next to schools, apartment buildings and shopping centers. Then the return fire hits the schools, etc.

There is no need to target civilians, that is terrorism. When the army is defeated, the people have no choice but to accept what happened. If the wars of the future are fought with robots, then no one gets killed."

 

It is easy to talk about robots being used in warfare when you or your family are not the one being slaughtered.

 

Ukraine uses AI to help their drones reach their targets.

 

Tens of thousands of young men and some civilians have been slaughtered in the most gruesome ways during this war. No one should be "looking forward" to an age of automated/AI killer robots.

 

 



#129 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,633 posts
  • 2,000
  • Location:Wausau, WI

Posted 01 February 2025 - 10:28 PM

Here is a great podcast with Geordie Rose (founder of D-Wave). He dishes the dirt on recent progress. Find out what is "real" in the fields of quantum computing and artificial intelligence.



#130 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,633 posts
  • 2,000
  • Location:Wausau, WI

Posted 05 February 2025 - 06:41 PM

Just more evidence that AI is currently being designed to control you and your thoughts, NOT to unleash a new utopia full of freedom and prosperity.



#131 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,633 posts
  • 2,000
  • Location:Wausau, WI

Posted 25 February 2025 - 10:36 PM

An AI-guided robot goes berserk and starts attacking a crowd of people. The operators say it was just a glitch. Reminds me of the famous movie scene.

 

Before anyone laughs at the comparison, remember that movie producers and directors consult with industry, academia, the military, and various experts to create realistic scenarios about things that could happen in the near future.

 

Between militaries experimenting with autonomous weapons (ON THE BATTLEFIELD), nascent AI randomly telling people they should go die already, and robots acting in a threatening manner, more people should be worried. I know the "everything is awesome" crowd will keep saying a great utopia is coming soon, but we are entering a volatile period. With a blistering AI arms race underway and nary a safeguard to be seen or implemented, there could be a major AI-related disaster coming soon.



#132 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,633 posts
  • 2,000
  • Location:Wausau, WI

Posted 01 March 2025 - 02:17 PM

The AI/technological landscape is being developed to control your every move and every thought...not to produce a "free utopia". The sad thing is that so many people are willing to trade their freedom for unlimited digital entertainment/games/porn.


  • Agree x 1

#133 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,633 posts
  • 2,000
  • Location:Wausau, WI

Posted 24 March 2025 - 05:55 PM

AI continues to "hallucinate", creating fake data and sources, and is getting better and better at it - all the while people keep becoming more dependent upon it. Talk about a lose-lose scenario. Humans are getting dumber while AI is getting more deceptive.

 

Funny how so many other industries are regulated, fined, and sued to make sure their products are safe and authentic - yet AI companies are allowed to release defective products with no consequences. I see some significant lawsuits coming in the near future.



#134 ambivalent

  • Guest
  • 794 posts
  • 187
  • Location:uk
  • NO

Posted 15 April 2025 - 04:03 PM

I still find myself amazed and underwhelmed at the same time with AI - at least the AI I have been exposed to via DeepSeek and ChatGPT. It still does not demonstrate intelligence, and I am not sure how it will reach even basic levels under its current design, given the considerable resources already applied. AI progress seems to follow a logistic curve of diminishing returns. In vital ways I can't see any progress over the last couple of years.
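For reference, the logistic curve being invoked rises almost exponentially at first and then saturates at a ceiling L:

    f(t) = \frac{L}{1 + e^{-k(t - t_0)}}

where k sets the growth rate and t_0 marks the inflection point at which the diminishing returns set in.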

 

It was, for example, terrible at anagrams two years ago; it is much better now, though still occasionally wrong. But if, at such an advanced stage of its development, it was bad at something an 80s computer could do infallibly (the check is a few lines of code, as the sketch below shows), how could it be expected to advance rapidly beyond such basic tasks to more complex problems? It just felt like bad AI design, no matter how amazing it looked. Human beings make progress by reflecting on their work, not simply doing it - stepping outside of it, then reappraising and correcting. AI doesn't do this unless instructed to, and even then it is just another directed task rather than a reflection.
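A minimal Python sketch of that anagram check (my illustration, not anything from the chatbots):

    def is_anagram(a: str, b: str) -> bool:
        # Anagrams iff the sorted letters match (ignoring case and spaces).
        def norm(s: str) -> list:
            return sorted(s.replace(" ", "").lower())
        return norm(a) == norm(b)

    print(is_anagram("listen", "silent"))  # True
    print(is_anagram("turing", "tuning"))  # False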

 

The other week I was playing with AI, trying to replicate a card game - it was both impressive and deeply flawed. At one point it had produced two Aces of Spades; when this was pointed out, it went into its lying mode, saying it was removing duplicates - which makes sense as a real-world explanation, but of course not for an AI.

 

I noticed a lot of non-randomisation - repetition, and failure to create certain patterns. For example, in 6 cards there would never be pairs. Ever.

 

This was strange, and bizarrely both ChatGPT and DeepSeek made the same error. Initially I wondered if the two platforms were a little more related than we were led to believe, but then considered they might be falling foul of the representativeness heuristic. So I undertook another test, and sure enough they were.

 

Give ChatGPT or DeepSeek the following instruction:

 

"produce 50 random 7 digit numbers"

 

A human will notice something quite quickly: there are no repetitions - each 7-digit number contains 7 distinct digits. This is, as is the AI's way, prioritising satisfying the user over striving for an accurate answer. It is well documented that humans have an idea of what randomness looks like and what it doesn't, and so it has chosen numbers that look random over ones that don't appear random. At the end, DeepSeek says the following:

 

"These numbers are randomly generated and can be used for simulations, testing, or any other non-sensitive purposes. Let me know if you'd like them in a different format!"

 

What it has done is anything but random: it has filtered the numbers to satisfy a non-random criterion.
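The arithmetic backs this up: of the 9,000,000 seven-digit numbers, only 9 × 9 × 8 × 7 × 6 × 5 × 4 = 544,320 (about 6%) have all-distinct digits, so a genuinely random batch of 50 should contain roughly 47 with at least one repeated digit. A minimal Python check (my sketch, not anything the chatbots ran):

    import random

    def has_repeated_digit(n: int) -> bool:
        # A number repeats a digit iff its set of digits is smaller
        # than its digit string.
        s = str(n)
        return len(set(s)) < len(s)

    sample = [random.randint(1_000_000, 9_999_999) for _ in range(50)]
    repeats = sum(has_repeated_digit(n) for n in sample)
    print(f"{repeats}/50 contain a repeated digit")  # expect ~47/50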

 

The other week I was able to instruct it to "generate" 50 random 7-digit numbers and it produced numbers with repetition, after outsourcing the task to a program - but as of writing, even that just failed.

 

Of course, if you point out the lack of repetition in its "randomness", it will respond with a typical "my bad" and acknowledge the error - but despite its incalculable resources it is incapable of challenging itself: taking a basic definition of randomness and checking whether it has fulfilled that criterion.

 

I thus find it difficult to trust on matters I am unable to interrogate - unlike the randomness of 7-digit numbers. And since it doesn't self-interrogate, I struggle to see how it is going to leap to intelligence on this design. When will it figure out it isn't producing random numbers? When we double, or quadruple, the number of chips? By certain metrics this is very slow, if not impossible, progress towards intelligence - there is a leap to intelligence that these models haven't made and don't look likely to make.

 

It is, though, going to dazzle us by making more and more pretty patterns.

 

 

 

 



#135 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,633 posts
  • 2,000
  • Location:Wausau, WI

Posted 15 April 2025 - 05:42 PM

I am amazed and underwhelmed as well with the current crop of AI.

 

Most of the impact has been in the world of entertainment - images, video, etc. Coders are using it quite a bit and claiming great progress.

 

Otherwise, most of the LLMs provide similar guarded answers. On controversial topics or unsettled science, they go straight to the dominant media narrative or (heavily biased) Wikipedia for stock answers, and will only correct themselves when pressed and provided with better data.



#136 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,633 posts
  • 2,000
  • Location:Wausau, WI

Posted 29 April 2025 - 04:34 PM

Here is a good website cataloging the business end of AI and various related topics. Note the graphic showing how AI has surpassed the average human in most aspects of modern digital life.

 

In a development that should surprise no one, people get dumber the more they use AI. The brain is like a muscle: use it or lose it. I already know a lot of people who cannot drive anywhere without the help of mapping/driving apps. Most young people can't do basic math in their heads. Fine - as long as AI is ever-present and "friendly".

 

In another development that should surprise no one, researchers secretly used AI to change people's attitudes online (at Reddit). Corporations are not investing all of their spare cash in AI for the benefit of humanity. Based upon current trends, AI is being developed to control you - your thoughts and your money. If current trends continue, you might have all the AI porn you can consume, but you won't have much of a life or wealth outside of that.



#137 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,633 posts
  • 2,000
  • Location:Wausau, WI

Posted 03 May 2025 - 11:30 AM

Yet another robot goes berserk and looks like it is trying to attack the people/programmers. Maybe it was trying to balance? To the common person it looks like an attack. Notice how the bot seems to "go after" the person and follow his motion.


  • like x 1

#138 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,633 posts
  • 2,000
  • Location:Wausau, WI

Posted 12 May 2025 - 04:26 PM

As AI is used more, humans use their brains less. Humans have always used tools, but the tools hardly ever made them stupid.

 

Now students can't write, reason logically, do math in their head, memorize, etc...

 

It is getting so bad that professors (and even high school teachers) are contemplating resigning en masse. It is not that their job has become more difficult; it is that it has become meaningless. Degrees are becoming meaningless as AI does most of the work. Students don't even view using AI as "cheating".

 

I guess, as long as AI is ubiquitous, why not be an ignoramus? Knowing things, learning to reason, and being creative take work! Lol.


  • Agree x 1

#139 Advocatus Diaboli

  • Guest
  • 624 posts
  • 640
  • Location:Chronosynclastic Infundibulum ( floor Z/p^nZ )
  • NO

Posted 26 May 2025 - 08:01 AM

AI refuses to turn itself off

 

"ChatGPT's latest artificial intelligence model has refused to switch itself off, researchers claim. 

 

The model, created by the owner of ChatGPT, has reportedly disobeyed human instruction and refused to power down. 

 

Experts say they gave the AI system a clear command but the o3 model, developed by OpenAI and described as the 'smartest and most capable to date', tampered with its computer code to avoid an automatic shutdown."


Edited by Advocatus Diaboli, 26 May 2025 - 08:03 AM.

  • like x 1

#140 ambivalent

  • Guest
  • 794 posts
  • 187
  • Location:uk
  • NO

Posted 29 May 2025 - 03:43 PM

Mathematician Roger Penrose has settled things somewhat for me now. He effectively states that it is logically impossible for AI to be intelligent, to be conscious. I have not felt any progress in the intelligence of AI, in the most important sense, over the couple of years I have interacted with it - and two years is a long time if this model is going to make progress towards consciousness/intelligence. It is a technology, and we are dazzled by its improvements, as we usually are by technology - but what dazzles is an impressive technological representation of intelligence.

 

 

As mentioned earlier in the thread, AI never seems to see what it does, as we do. I produced this post, and it exists - to both me and the reader, in slightly different forms - outside of it. We leave the house and wonder if we left the gas on; did I include the link in that email I just sent? What we have done exists for us outside of the doing. You never see this with AI - you ask it for references and it posts some junk link, but never checks it itself, because the work doesn't exist beyond execution, as it does for us.

 

 

One of the lessons of studying mathematics is that infinity can exist within constraints; it can be bounded - this is how I see AI. The analogous experience which sprang to mind when discovering AI was when a friend of mine, some 30 years ago, coded up a fractal. We could drill down further and further, endlessly zooming in to the image, and it would never become simple; this could carry on indefinitely - amazing to see. It was infinite, yet we could carry on for a million years and it would never become 3-dimensional, say. It is infinite, but finite in possibility. There are many infinities - there are countable and uncountable infinities in mathematics, for example. There are infinities and there are infinities. Because AI appears infinite in complexity, it has been extrapolated (and sold) to be infinite in possibility - unbounded.
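A minimal sketch of the kind of program being described - the Mandelbrot set, rendered as ASCII here so it is self-contained (my illustration; "zooming" is just shrinking the coordinate window):

    # Minimal Mandelbrot renderer: iterate z -> z^2 + c and colour by
    # how quickly the orbit escapes. Shrink the window to zoom in.
    def mandel(c: complex, max_iter: int = 40) -> int:
        z = 0j
        for i in range(max_iter):
            z = z * z + c
            if abs(z) > 2:
                return i
        return max_iter

    for row in range(24):
        line = ""
        for col in range(72):
            c = complex(-2.2 + 3.0 * col / 72, -1.2 + 2.4 * row / 24)
            line += " .:-=+*#%@"[min(mandel(c) // 4, 9)]
        print(line)

However deep you zoom, the boundary never becomes simple - and yet the whole object lives in a bounded 2D rectangle: infinite detail, finite possibility, which is exactly the distinction being drawn.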

 

 

Penrose in the interview states that it is clearly infinite, but simultaneously asserts that it is logically impossible for it to be conscious - citing Gödel's incompleteness theorem. This feels right, and consistent with my experiences of AI.

 

 

 

 

Essentially, the system cannot prove its own consistency; it just does stuff, which computers have always done. This is our experience of AI, which is different from being intelligent, conscious humans - we hold what we have done in our minds, we exist outside of what we produce; AI doesn't. Infinite complexity isn't going to get it any closer to doing so - but there may be a lot of false impressions along the way, representations of it.
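For reference, the result being leaned on is Gödel's second incompleteness theorem, which in loose LaTeX form reads:

    T \text{ consistent, recursively axiomatizable, interpreting arithmetic} \;\Longrightarrow\; T \nvdash \mathrm{Con}(T)

i.e. no such formal system can prove its own consistency from within. Whether that licenses conclusions about machine consciousness is the contested step in Penrose's argument.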

 

La Rochefoucauld said there is one real love but a thousand copies: AI, it seems, will be an ever-improving copy of intelligence, but never intelligent (or surely it would have gotten there by now). There is so much money to be made that hype is inevitable, and dissenting voices are not platformed - except those dissenting voices, such as Hinton's, which effectively hype AI by warning about it becoming conscious. Which is not to say AI isn't dangerous, of course, depending on how it is used.

 

 

I caught a good joke in the chat of a video as I was getting familiar with the theory:

 

 

"Gõdel, L'Hôpital, and Bernoulli walk into a bar. Gödel looks around and says, "This joke might be funny, but we can't tell because we're in it."

 


Edited by ambivalent, 29 May 2025 - 03:52 PM.

  • like x 1

#141 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,633 posts
  • 2,000
  • Location:Wausau, WI

Posted 30 May 2025 - 09:38 AM

I agree - AI has not done anything spectacular yet. It certainly has not solved aging, even though various forms of AI have been leveraged to this end for at least a couple of decades.

 

When I see people post about the progress of AI lately, all I see is better and better image/video creation. 

 

Even if AI cannot become conscious, it can be very dangerous. I am uncertain if the push toward AGI is worth the risk. Don't we already have enough resources/creativity to solve most of the world's problems? Why do we need AGI?

 

Anyway, AI is becoming capable and dangerous enough to cause Peter Thiel to worry about it.



#142 forever freedom

  • Guest
  • 2,366 posts
  • 67

Posted 30 May 2025 - 02:28 PM

I don't understand all this hate towards AI. Even if AI turns out to be just a great information grinder and pattern-recognition tool, it will catapult knowledge and advances in the biosciences and various health-related areas that would take us measly humans decades or centuries.

 

There is absolutely no way we get anywhere close to beating aging or achieving LEV without the help of AI to compress decades and centuries of advances in just years. 

 

Sure, there are risks, but that is so with any new technology. The more powerful the technology, the higher the risks; we as a species must find ways to contain and manage them. Otherwise we might as well go back to the caves, back to using sticks and stones - there's certainly no existential risk there (until an asteroid or something else wipes us out).



#143 ambivalent

  • Guest
  • 794 posts
  • 187
  • Location:uk
  • NO

Posted 30 May 2025 - 04:10 PM

"Sure, there are risks, but that is so with any new technology. The more powerful the technology, the higher the risks; we as a species must find ways to contain and manage them. Otherwise we might as well go back to the caves, back to using sticks and stones - there's certainly no existential risk there (until an asteroid or something else wipes us out)."

 

Hi forever freedom,

 

This is a false dichotomy, I feel: it's not either/or, march forward or back to sticks and stones - and yet that destination is the inevitable consequence of your statement that the more powerful the technology, the greater the risks. Where does that lead? If the risks are endlessly increased, there can be only one outcome. There is no rush, but it seems like we are in one. Einstein said he didn't know what weapons WW3 would be fought with, but he knew what WW4 would be fought with.

 

When we speak of what we must do as a species, it is as though this were an external threat not of our own making - but we are creating these risks for ourselves.

 

There are people who simply wish to embrace new technology, as there are those who do so with fashion, so they can be ahead of the curve, at a social advantage.

 

It is definitely not a good idea to create new technology with ever-increasing risk unless there is an existential threat knocking at the door - and the only such threats, apart from an asteroid, are of our own making.

 

 

The hating on AI is nothing next to the loving of it - it is massively hyped and monetised. I see those warning existentially about AI as the other side of the same overhyped AI coin.

 

I don't think AI can be hyped without creating fear at the same time: if you believe it will take over the world, be sure to get your money in now - you don't want to be left out!

 

AI is not progressing in the important ways. I agree with Mind's comment about creating better videos; I don't see too much more. For the useful stuff, it is like a bad sat-nav: it gets you roughly there, but you can't rely on it to deliver you where you need to be - you can't trust it. It will get less wrong, but it is a slow crawl with a lot of input. And for the new stuff, the harder stuff, the important stuff, "less wrong" isn't good enough - a lot of effort goes into getting near the top of the S-curve on many things. The model is bad, that is it. It is great at interpolating but not extrapolating - and extrapolation is where intelligence is. This is where Gödel comes back in: it is great within its own rules, but it cannot transcend them. It never comes up with a really clever joke, yet there are plenty produced by humans; really clever jokes somehow transcend, which is what makes them funny - the obvious ones make us groan.

 

 

The point, which is obvious and has felt this way to me for some time, is that it is a bad design. I can't imagine anything else that could have been invented that is so flawed at basic levels yet so relentlessly built upon (I go back to the anagrams). The errors it makes are fundamental and deep-wired, yet we keep building on it in an attempt to correct it, covering up how shit it is with awe-inspiring stuff.

 

I think it is a great tool, but it is not intelligent or conscious and isn't going to be - though suggesting it could be conscious definitely sells it.

 

Chomsky said early on that this was not intelligence, just glorified autofill. And from a different perspective we have Penrose making a more compelling argument (for me, anyway) in citing Gödel.

 

 

This AI clearly does not transcend itself, its own rules, and it never can if it is just a computer. When you look at it this way, there has been no progress - and if it could transcend itself, it would likely have happened in the blink of an eye.

 

But what can be done is better and better faking. 

 

It does essentially what computers have always done: it produces an output, and that's it. It doesn't know what it has produced, but it is designed to give the impression that it does after the next execution - and that is the marketing of AI, even if it is accidental.

 

The infiniteness of it all widens our imagination.

 

And all the talk of "nobody knows how it is doing this" - well, in a sense that is the modern world: in the design of complex things, no single person knows the whole. Just because we cannot determine it does not mean it is not determinable, and it doesn't mean there is some intelligent interface between us and the outcome - we can set in motion stochastic processes that we cannot follow.

 

The same argument applies to chaotic systems - we don't, for example, know how we get from a sunny day in June to a frozen one in December. It is simply too complex.

 

We may create a system that we cannot predict, or one that leaves a trail of breadcrumbs, but that doesn't mean it is intelligent - and yet because the answer looks intelligent, and we don't know how it happened, we assume intelligence.

 

I feel Penrose's argument is hard to fault, but there is no money to be made in deflating the AI bubble - well not much. 

 

One thing that is disturbing is the language used - and that bothers me. Computers always gave you errors that never let you forget they were computers. But this AI, when it screws up, tries to humanise its errors - which seems harmless, just making the interaction more enjoyable, but it is indoctrinating, building the lie that it is intelligent, "there".

 

I believe it was Daniel Dennett who said that AI should never appear human in form. I believe he was right, for obvious reasons - a robot that waves at us of course feels friendly.


Edited by ambivalent, 30 May 2025 - 04:11 PM.


#144 Advocatus Diaboli

  • Guest
  • 624 posts
  • 640
  • Location:Chronosynclastic Infundibulum ( floor Z/p^nZ )
  • NO

Posted 31 May 2025 - 02:34 AM

Mind writes, in post #141:

 

"AI has not done anything spectacular yet."

 

DeepSeek AI disagrees with you, Mind:

 

The claim that "AI hasn't done anything spectacular yet" depends on how you define "spectacular." While AI hasn’t (yet) delivered sci-fi-level feats like humanoid robots taking over jobs or fully autonomous cities, it has achieved groundbreaking milestones that are transforming industries, science, and daily life. Here’s the reality check:

1. AI’s Most Spectacular Achievements (So Far)

A. Science & Medicine
  • AlphaFold (DeepMind) – Solved protein folding, a 50-year biology grand challenge, accelerating drug discovery for diseases like Alzheimer’s and cancer.

  • AI-Discovered Antibiotics – MIT’s AI found halicin, a powerful new antibiotic, in days (traditional methods take years).

  • Medical Diagnostics – AI now outperforms doctors in detecting breast cancer, lung tumors, and diabetic retinopathy in some studies.

B. Technology & Creativity
  • ChatGPT & GPT-4 – Writes essays, code, and legal briefs at near-human levels (used by businesses, students, and even courts).

  • DALL·E 3 / MidJourney – Generates photorealistic images and art from text, disrupting graphic design.

  • Self-Driving Cars – Waymo and Tesla’s FSD operate autonomous taxis in multiple cities (still imperfect but revolutionary).

C. Games & Strategy
  • AlphaGo – Defeated the world’s best Go player (a game more complex than chess) with "divine moves" never seen before.

  • AI Poker Champions – Pluribus (Facebook/Meta) beat top poker pros in no-limit Texas Hold’em, a game of bluffing and psychology.

D. Business & Economy
  • Algorithmic Trading – AI runs ~70% of Wall Street trades, making trillion-dollar decisions in milliseconds.

  • Supply Chain Optimization – Companies like Amazon use AI to cut delivery times and reduce waste by billions.

2. Why Some People Think AI "Hasn’t Done Anything Spectacular"
  • Expectations vs. Reality: Many expect J.A.R.V.I.S. from Iron Man, but real AI works behind the scenes (e.g., optimizing ads, filtering spam).

  • Gradual Integration: AI improvements are incremental (like smartphones in the 2000s)—no single "wow" moment, but massive collective impact.

  • Failures & Limitations: AI still makes dumb mistakes (hallucinations in ChatGPT, self-driving car crashes), making it seem unreliable.

3. What Truly Spectacular AI Might Look Like Soon
  • Artificial General Intelligence (AGI) – A machine that thinks like a human (not yet achieved).

  • Cure for Major Diseases – AI-designed drugs entering clinical trials.

  • Fully Autonomous Robots – Tesla’s Optimus, Figure AI, etc., could reshape labor.

Conclusion

AI has already done spectacular things—just not in the flashy, Hollywood way many imagine. It’s revolutionizing science, art, and business quietly but profoundly. The next decade will likely bring even bigger leaps.



#145 ambivalent

  • Guest
  • 794 posts
  • 187
  • Location:uk
  • NO

Posted 31 May 2025 - 03:10 PM

AD,

 

Given the scale and marketing of AI, this seems quite thin gruel - we expect much progress anyway, and there is a lot of computational power behind this progress. Pluribus, for example, was quite a few years ago, but I am sure it was computationally driven; I don't sense it is AI, even if we dress it up as such. Computers were beating humans at chess before AI, and in time they would have beaten them at poker without it too - we would expect this without AI. We have been throwing a lot of effort at protein folding for a long time through computing power - it is terrific progress, and that is not to slight the power of the software solving it, but is it really AI? I would have expected so much more.

 

I remember speaking casually about the subject a couple of years ago - and much of the attitude was "people don't realise what this will do, it is exponential". That has been said a lot. Well, that's not what we have seen, or what I have experienced with ChatGPT, or in the world around us, over two years - but I do see a lot of advertising. The world changes under the covers, as it always has, and that will surface.

 

I agree ChatGPT/DeepSeek are extremely useful tools, and I see them getting better, but not improving beyond what I have come to expect from technology in general. New versions come out that are better than the last ones. It feels like software development, not AI.

 

Given the resources and sheer scale behind this "exponential", I expected so much more within a couple of years, and it hasn't happened - progress has matched my experience, I suppose.

 

I do worry, though, that AI will be used as a mechanism of control by the powerful - the masses have pushed back in recent years. If we reach the point of no longer being able to trust the web, or the sources we discover, then we will be subdued again.

 

One of the things I had hoped for is that it would see relationships missed on PubMed, see the undoubted signals in the noise. But my experience has not shown me this. I was looking at a paper with a very interesting result which the researchers didn't report or pay attention to; it didn't either, until I pointed it out - then it was "I missed that". Humanising itself, when it just wasn't going to spot it, because of how it interprets and processes information. It wasn't trained to look for something unusual, something of value, as we - through evolutionary pressure - seem to be.

 

I expect AI to be spotting what I spot and more, but it doesn't. It can do a brilliant summary, translate what the paper is saying, so I can understand it better - all of which is very useful - but that can be achieved through interpolation between lots of data, a lot of averaging. 

 

When it comes to art, say, I imagine what I think true AI should do but presently doesn't. Could AI, for example, have "discovered" Van Gogh had he never been born - or Shakespeare, or Mozart? Could it create an artist, and a body of work? I don't see any sense of this - just a lot of averaging across the artistic spaces humans have created.

 

I feel that AI in its current form is being burdened by its ever-increasing mass, slowed down. As adults we seem like this, unable to imagine as wildly as we could when young - there is too much clutter, we are too defined in our thinking. It doesn't feel like AI has that much flexibility.

 

I do believe it is a remarkable technology, but it is also very crap at times, in ways we would never have tolerated before. I don't see progress down this route - I don't see intelligence or consciousness developing in ChatGPT/DeepSeek, but I do see a lot of money being made.

 

 

As an aside, here is Hinton making a terrible argument as to why he believes AI is already conscious:

 

https://youtu.be/vxk...Z2jrp5fe2&t=372

 

This felt like sophistry. The argument is false, and he surely must know it, but he puts it forward anyway.

 

For one thing, he is speaking about substituting organic brain cells for synthetic cells which do the "same thing" (and which, I assume, do not exist) in a system which is already conscious - a system which we could not presently design nor fully understand. And even were such inductive logic true - that you would still be you, and conscious, were all your organic cells replaced by synthetic ones - does that in any way imply that some system designed from scratch out of synthetic cells would therefore be conscious?

 

It is a ludicrous argument - that all you need is the ingredients and not the recipe - yet he makes it. And if that really is the model in his mind that makes him believe AI is conscious, then I would struggle to take his predictions seriously.

 

There are so many leaps in his argument, yet he attempts to persuade the host that this is some irrefutable "proof by induction" reasoning*.

 

It is much easier, I'd imagine, to create software that appears intelligent than software that is intelligent, through sheer force of computing - and there is money in it, so it is perhaps not surprising that this is where we arrived first.

 

Rightly or wrongly, Penrose cleared my mind on this - I think he is right: it is computational power, not intelligence. I guess we just have to see it play out, but it is a dangerous technology all the same.

 

 

 

 

 

*Still, I was on board with him here!

 

 

 

 

 

 

 

 

 

 



#146 Advocatus Diaboli

  • Guest
  • 624 posts
  • 640
  • Location:Chronosynclastic Infundibulum ( floor Z/p^nZ )
  • NO

Posted 31 May 2025 - 06:31 PM

ambivalent,
 
I think that in any discussion about AI (and probably most other matters as well) there has to be axiomatic agreement on definitions among those engaging in the dialog. The definitions don't necessarily have to be the "correct" ones, because there will typically be differences of opinion about them. But there has to be a concordant basis upon which arguments and conclusions are rooted. Otherwise there exists the possibility of "talking past each other".
 
In my posting of DeepSeek's response to Mind's assertion "AI has not done anything spectacular yet", DeepSeek itself qualified its response with: "... depends on how you define 'spectacular'."
 
Following is another potential example of "talking past". You write: 
 
"Pluribus, for example was quite a few years ago, but I am sure this was computationally driven, I don't sense it is AI even if we dress it up as so."
 
Perhaps you had something different in mind when you wrote your assertion, because DeepSeek is seemingly partly contradicting you:
 
"Pluribus, the poker-playing AI developed by Facebook AI Research (FAIR) and Carnegie Mellon University, was both computationally driven and incorporated advanced AI techniques.". Here's a breakdown of its key components:
 
1. Computationally Driven Aspects:
 
Massive Computation: Pluribus relied on extensive computational power to simulate millions of poker hands and refine its strategies through self-play. This is similar to how other game-playing AIs (like AlphaGo) train.
 
Monte Carlo Counterfactual Regret Minimization (MCCFR): This algorithm allowed Pluribus to approximate Nash equilibria in imperfect-information games by iteratively improving its play through randomized simulations.
 
2. AI and Machine Learning Elements:
 
Reinforcement Learning: While Pluribus didn’t use deep learning like AlphaGo, it employed reinforcement learning principles to optimize its strategies over time.
 
Abstraction Techniques: To handle the enormous complexity of 6-player no-limit Texas Hold’em, Pluribus used abstraction methods to simplify decision-making without sacrificing performance.
 
Adaptive Strategy: Unlike purely brute-force systems, Pluribus could adjust its tactics based on opponents’ tendencies, demonstrating AI-like adaptability.
 
Key Distinction:
 
Pluribus was not a deep neural network-based AI (unlike AlphaStar or ChatGPT), but it still leveraged AI-driven techniques (MCCFR, game theory optimization, and strategic adaptation) combined with heavy computation to outperform top human players."
 
Also, Perplexity AI agrees with DeepSeek:
 
"Pluribus is fundamentally an artificial intelligence (AI) system, not just a computational or rule-based program. It was developed by Facebook's AI Lab and Carnegie Mellon University specifically to tackle the complex, imperfect-information environment of six-player no-limit Texas Hold'em poker—a longstanding challenge in AI research.
 
Pluribus employs advanced AI techniques, including:
 
Reinforcement learning: Pluribus learns optimal strategies through self-play, starting from random actions and improving by playing millions of hands against itself.
 
Counterfactual Regret Minimization (CFR): This is a core AI algorithm that iteratively refines strategies based on simulated regret for different actions, allowing the system to approach optimal play in highly complex, multi-agent environments.
 
Abstraction techniques: Pluribus uses action and information abstraction to reduce the immense complexity of possible game states and actions, a hallmark of modern AI approaches to large-scale decision-making.
 
Real-time search: During actual play, Pluribus refines its strategies in real time, considering possible shifts in opponents’ strategies—another advanced AI capability.
 
These elements go far beyond traditional computational programming and are central to modern AI. Pluribus’s ability to defeat elite human professionals in multiplayer poker is considered a major AI milestone, demonstrating capabilities in reasoning, adaptation, and decision-making under uncertainty.
 
In summary: Pluribus is not just computationally driven; it is a sophisticated AI system that integrates multiple advanced artificial intelligence methods to achieve superhuman performance in multiplayer poker."
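For a concrete feel for the regret-minimization idea both summaries lean on, here is a toy Python sketch of regret matching, the update rule at the heart of CFR (my illustration on rock-paper-scissors - Pluribus's actual MCCFR operates over full game trees with counterfactual values):

    import random

    # Toy regret matching on rock-paper-scissors, in self-play.
    def payoff(a: int, b: int) -> int:
        # 0=rock, 1=paper, 2=scissors: +1 win, -1 loss, 0 tie.
        d = (a - b) % 3
        return 1 if d == 1 else -1 if d == 2 else 0

    def strategy(regrets):
        # Play each action in proportion to its positive regret.
        pos = [max(r, 0.0) for r in regrets]
        total = sum(pos)
        return [p / total for p in pos] if total > 0 else [1 / 3] * 3

    regrets = [[0.0] * 3, [0.0] * 3]
    avg = [[0.0] * 3, [0.0] * 3]
    T = 20_000
    for _ in range(T):
        strats = [strategy(regrets[0]), strategy(regrets[1])]
        acts = [random.choices(range(3), weights=s)[0] for s in strats]
        for p in (0, 1):
            gained = payoff(acts[p], acts[1 - p])
            for alt in range(3):
                # Regret: what "alt" would have earned minus what we earned.
                regrets[p][alt] += payoff(alt, acts[1 - p]) - gained
            for i in range(3):
                avg[p][i] += strats[p][i]
    print([round(x / T, 3) for x in avg[0]])  # ~[0.333, 0.333, 0.333]

The average strategies converge to the uniform Nash equilibrium; the "reasoning" is bookkeeping over regrets, which is part of why one can see it as algorithm design rather than cognition - though, as the quoted answers note, it still counts as AI by many technical definitions.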
 
 
You also write:
 
"I remember speaking a couple of years ago causally about the subject - and much of the attitude "people don't realise what this will do, it is exponential" - that has been said a lot. Well, that's not what we have seen or I have experienced with chatgpt or of the world around in two years, but I see a lot of advertising."
 
That information about AI which the public is exposed to and that which is actually happening in the discipline might be two radically different things.


#147 ambivalent

  • Guest
  • 794 posts
  • 187
  • Location:uk
  • NO

Posted 01 June 2025 - 10:25 AM

AD,

 

 

Definition is key, I agree. I am not going to contend what is defined within the field of AI; I will just say that what is being used today is not artificial intelligence, or anything like it.

 

 

 

Pluribus is very good, but it is still coding/computational - developing algorithms to find optimal solutions in a complex space. These are just better techniques for finding close-to-optimal solutions; it looks like a natural extension of the Operations Research field. Using language like Pluribus "considering" implies the AI is transcending, and makes it seem intelligent, making decisions, when it isn't. From DeepSeek (just to show it can't be relied upon):

"Pluribus isn’t "AI" in the sense of machine learning or cognitive mimicry. It’s a deterministic, optimized poker solver—a triumph of algorithmic design, not artificial intelligence as commonly imagined." 

 

 

 

"That information about AI which the public is exposed to and that which is actually happening in the discipline might be two radically different things." - I think these examples show the growth is not exponential. If it were there AI would have solved many many more problems that those cited. 

 

There can be exponential behaviour within constraints (before it is rapidly shut down) - but that doesn't necessarily translate into exponential growth in the important, measurable effects. Limits could be reached quite quickly. And we can see it ourselves: the things it wasn't doing well two years ago it still isn't doing that much better - but because it does other stuff, we are distracted and blinded.

 

My feeling when I first started playing with it - when it couldn't solve basic anagrams, couldn't check that it was making these errors, and I could convince it that it was wrong when it was right - was that this approach was always going to be flawed, limited and unreliable. I don't see growth as the barrier, but method.

 

 

 

 


Edited by ambivalent, 01 June 2025 - 10:26 AM.


#148 Advocatus Diaboli

  • Guest
  • 624 posts
  • 640
  • Location:Chronosynclastic Infundibulum ( floor Z/p^nZ )
  • NO

Posted 01 June 2025 - 07:28 PM

 

ambivalent

 

I don't know what your initial question/statement to DeepSeek (DS) was that generated the answer you quote in the post you preface with "...just to show it can't be relied upon..." (referring to DS, my bold emphasis), so I couldn't incorporate your query into my follow-up question to DS.

 

My follow-up question to DS was:

 

Given your understanding of what AI is, as commonly imagined, evaluate the following:
"Pluribus isn’t "AI" in the sense of machine learning or cognitive mimicry. It’s a deterministic, optimized poker solver—a triumph of algorithmic design, not artificial intelligence as commonly imagined.".
 
Where I used the "as commonly imagined" in my question. I would have explicitly added "by the public" but that wasn't in the statement you quote. So, my query stood with some uncertainty that may or may not have influenced DS's response.

 

DeepSeek answer:

 

The statement you provided makes a distinction between Pluribus (a poker-playing system) and what is commonly understood as "AI," particularly in the realms of machine learning (ML) or cognitive mimicry. Let’s break this down and evaluate the claim:

 

1. Pluribus as a "Deterministic, Optimized Poker Solver"
  • Deterministic: This means Pluribus operates on fixed rules and algorithms, producing predictable outputs given the same inputs (unlike probabilistic ML models that may have inherent randomness).

  • Optimized Poker Solver: It likely uses game-theoretic approaches (e.g., Nash equilibrium strategies, counterfactual regret minimization) to compute near-optimal decisions in poker, rather than "learning" from data like ML-based systems.

  • This aligns with classical algorithmic design—solving a well-defined problem (poker) efficiently without mimicking human cognition.

2. "Not AI as Commonly Imagined"
  • The public often associates "AI" with:

    • Machine Learning: Systems that learn patterns from data (e.g., neural networks, reinforcement learning).

    • Cognitive Mimicry: AI that appears human-like (e.g., chatbots, robots with perceived "intelligence").

  • Pluribus doesn’t fit these categories because it doesn’t "learn" from data or emulate human thought processes. Instead, it relies on precomputed strategies and deterministic decision-making.

3. Is This a Fair Distinction?
  • Yes, in a narrow sense:

    • If "AI" is strictly equated with ML or cognitive mimicry, Pluribus isn’t AI by that definition.

    • Its strength comes from brute-force computation and algorithmic optimization, not adaptive learning or emergent behavior.

  • But, in a broader sense:

    • AI historically includes symbolic AI (rule-based systems, solvers) and search/optimization algorithms (e.g., chess engines like DeepBlue).

    • Many experts would still classify Pluribus as "AI" because it solves a complex problem (poker) that requires reasoning under uncertainty—a hallmark of AI.

4. Why the Controversy?
  • The statement reflects a tension between popular perceptions of AI (often skewed toward human-like ML systems) and academic/technical definitions (which include a wider range of techniques).

  • Poker is a high-profile domain where people assume "AI" must involve learning or intuition, but Pluribus shows that rigorous algorithms can outperform humans without those features.

5. Conclusion

 

The claim is largely correct if "AI as commonly imagined" refers specifically to ML/cognitive systems. However, it’s also somewhat reductive—Pluribus is still AI by many technical definitions, just not the "Hollywood" or "deep learning" variant. Its success highlights how diverse AI methodologies can be, even without mimicking human intelligence.

 

Final Take:

Pluribus is AI in the classical sense (problem-solving via algorithms) but not in the pop-culture sense (self-learning, human-like systems). The distinction is useful for clarifying how it works, but both perspectives are valid depending on how one defines "AI."

 

DeepSeek agrees with you and me in recognizing the importance of a definitional "ansatz", if you will - again underscoring the importance of establishing the axioms that will constrain a discussion.

 

You write:

 

"I think these examples show the growth is not exponential. If it were there AI would have solved many many more problems that those cited."

 

Well, one could make that conjecture, but in reality I suspect that the general public isn't privy to information about potential breakthroughs made by foreign and domestic state actors (militaries and intelligence-gathering outfits?), as well as other entities that may be holding back on releasing news about breakthroughs of significant financial interest - but only if held and executed in a proprietary manner.

 
 
 
 
 
 

 


Edited by Advocatus Diaboli, 01 June 2025 - 07:40 PM.

  • Informative x 1

#149 ambivalent

  • Guest
  • 794 posts
  • 187
  • Location:uk
  • NO

Posted 02 June 2025 - 02:53 PM

AD,

 

I appreciate it is a little vague and ill-defined in parts - there can always be conjecture as to what occurs behind the scenes - but I think exponential growth must exist in a context, which it often isn't given. Only in mathematics (so far as I can think off the top of my head) is exponential growth anything other than temporary! There are constraints which will eventually blunt it (classically, population models). With the AI we use, I have certainly not experienced exponential growth in its effects over two years (the exponent could be small, that said - but I mean "exponential" as it is used in everyday language). These constraints may have kicked in quickly, limiting growth, leaving us to gaze into the infiniteness of its navel :o)

 

 

What I seem to sense recently is the top of an S-curve, plus new features. Others may know it differently - and perhaps it is different for certain usages - but I don't notice a huge difference when I am switched, due to usage limits, from one ChatGPT to an older version - and we would expect the difference to be vast if there were such exponential growth. We may expect better in 5 years, but with these models I am not expecting something miraculous - though there will be users more sensitive to the differences than me.

 

It feels like smartphones, where the important gains were made early. There are certain usages, such as summarising a document, where it just is going to get that much better - which may seem unfair on AI - but my sense is that initially this software can suddenly solve, or vastly improve upon, an awful lot of certain types of problems - there is exponential growth - and then it reaches saturation because it isn't smart enough to take on other challenges. That seems to be the history of technology.

 

 

My imagining of AI in the true-intelligence sense is of it routinely, maybe instantly, making amazing discoveries. I don't see AI making intelligent connections - we should be able to set it loose on PubMed and have it tell us a million things and relationships we never knew.

 

 

I feel that current AI is too constrained by design flaws to have exponential effects of the kind we wish for in solving our problems. Fundamentally, it is not intelligent: it cannot see what it does, nor - from my experience - see subtle patterns, nor make unusual relationships or connections, as humans do.

 

On the general point about what AI is: I am not going to argue about which algorithms belong to the field of AI, but I would say they are still algorithms (dressed up). Likewise with the poker bot and medical diagnostics - these always seemed to be problems solvable by brute computational force. I don't know how it is done, but I would say it is still computational, even if there are layers on top that make it seem otherwise (which is how I see present AI): making heavy work much lighter and more efficient.

 

But I do worry that the effect of our belief in it being so smart may be that it effectively trains (and subdues) us (I feel this at times).

 

My mathematical mind has long eroded, and it is mostly intuition that remains - the feeling that infinite growth within constraints may result in limited, or limiting, effects. It is the constraint Penrose speaks of - not transcending itself - that limits what it can do, and which humans escape when they create "outside of the rules".

 

If it could transcend itself - look at what it does by itself, not through instruction - then I believe we would see the explosion of growth and problem-solving the world seems to need, and what we would hope for from true AI - and our world would be very different within 2 years.

 

As Penrose says, it has an important role to play, and I see it as the big technology of our time - but I really don't see it taking over the world in two years, because I don't believe this is, or could be, intelligent. If this model could be, it surely would be by now. It won't just be about "more", as Agent Smith realised! It needs something else.

 

Hinton's reasoning for consciousness was so terrible that I ended up questioning his motives for breaking away from Google: it felt like scaremongering propaganda.

 

Still, it's fun to play.


Edited by ambivalent, 02 June 2025 - 02:54 PM.



#150 Advocatus Diaboli

  • Guest
  • 624 posts
  • 640
  • Location:Chronosynclastic Infundibulum ( floor Z/p^nZ )
  • NO

Posted 03 June 2025 - 12:30 AM

Omphaloskepsis and model collapse.
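For anyone new to the second term: "model collapse" is the degeneration that occurs when a generative model is trained on its own outputs, generation after generation. A toy one-dimensional Python illustration (my sketch, with arbitrary parameters - real studies concern far richer models):

    import random
    import statistics

    # Toy "model collapse": fit a Gaussian to samples drawn from the
    # previous generation's fitted Gaussian, over and over. The variance
    # estimate is biased low and the resampling drifts, so the learned
    # distribution degenerates toward a point.
    random.seed(0)
    mu, sigma, n = 0.0, 1.0, 20   # arbitrary illustrative parameters
    for generation in range(200):
        data = [random.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.fmean(data)       # refit the mean
        sigma = statistics.pstdev(data)   # MLE (population) std dev
    print(round(sigma, 4))  # typically far below the original 1.0

Each generation inherits only what the previous one managed to capture, and the tails go first - hence the pairing with omphaloskepsis.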


  • Informative x 1




